Experience


Research Scientist

Oracle Labs

Nov 2024 – Present | Burlington, MA
Research on privacy, security, and access control of LLMs.

Postdoctoral Researcher

Meta FAIR

Jan 2023 – Nov 2024 | Menlo Park, CA
Research on memorization evaluation of LLMs and VLMs.

Research Intern

Microsoft Redmond Lab

May 2021 – Aug 2021 | Seattle, WA
Research on extracting data from generative language models.

Graduate Research Assistant

University of Virginia

Aug 2016 – Dec 2022 | Charlottesville, VA
Research on privacy-preserving machine learning.

Research and Development Senior Analyst

Accenture Technology Labs

Jan 2015 – Jul 2016 | Bengaluru, KA
Application of machine learning to software engineering problems.

Recent Posts

Models leak distribution information that can be used for attribute inference.

Results for the ongoing work on attribute inference defense.

Membership inference attacks are effective even for skewed priors.

A hypothesis testing approach to evaluate the Xbox One X HDD.

What seems safe, might not be safe in practice.

Projects


Inference Privacy Risks of Machine Learning Models

Comparing differential privacy implementations by quantifying their privacy leakage.

Decentralized Certificate Authorities

Allowing certificate authorities to sign digital certificates in a secure and distributed way.

Privacy Preserving Machine Learning

Combining differential privacy and multi-party computation techniques for private machine learning.

Recent Publications


We propose a white-box attribute inference attack that extracts distribution information from the model.

We propose black-box and gray-box active pattern extraction attacks that extract sensitive data patterns from the Smart Reply model.

We propose a novel membership inference attack and a threshold selection procedure that improves on existing attacks.

We propose a differentially private algorithm for non-convex empirical risk minimization with reduced gradient complexity.