Skills

Python

Statistics

Java

GitHub

Bitbucket

AWS

Experience


Postdoctoral Researcher

Meta FAIR

Jan 2023 – Present, Menlo Park
Research on privacy-preserving machine learning.

Research Intern

Microsoft Redmond Lab

May 2021 – Aug 2021, Seattle
Research on extracting data from generative language models.

Graduate Research Assistant

University of Virginia

Aug 2016 – Dec 2022, Virginia
Research on privacy-preserving machine learning.

Research and Development Senior Analyst

Accenture Technology Labs

Jan 2015 – Jul 2016, Bangalore
Application of machine learning for solving software engineering problems.

Awards and Achievements

Awarded Student Travel Grant for presenting at USENIX Security 2019

Awarded Student Travel Grant for presenting at NeurIPS 2018

Filed 3 patents at Accenture Technology Labs

Recent Posts

Models leak distribution information that can be used for attribute inference.

Results from ongoing work on attribute inference defense.

Membership inference attacks are effective even for skewed priors.

A hypothesis testing approach to evaluate the Xbox One X HDD.

What seems safe might not be safe in practice.

Projects


Inference Privacy Risks of Machine Learning Models

Comparing differential privacy implementations by quantifying their privacy leakage.

Decentralized Certificate Authorities

Allowing certificate authorities to sign digital certificates in a secure and distributed way.

Privacy Preserving Machine Learning

Combining differential privacy and multi-party computation techniques for private machine learning.

Recent Publications


We propose a white-box attribute inference attack that can extract distribution information from the model.

We propose black-box and gray-box active pattern extraction attacks that extract sensitive data patterns from the Smart Reply model.

We propose a novel membership inference attack and a threshold selection procedure that improves existing attacks.

We propose a differentially private algorithm for non-convex empirical risk minimization with reduced gradient complexity.

We compare the privacy leakage of ML models trained with different differential privacy relaxations and different privacy budgets.
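The membership inference and threshold selection work above can be illustrated with a minimal sketch. This is not the procedure from the publication: the loss-thresholding decision rule and the quantile-based threshold selection below are simplifying assumptions chosen for illustration, and all function names are hypothetical.

```python
import numpy as np

def membership_inference(losses, threshold):
    """Guess that examples with loss below the threshold are
    training-set members (members tend to have lower loss)."""
    return losses < threshold

def select_threshold(nonmember_losses, target_fpr=0.05):
    """Illustrative threshold selection: pick the threshold so that
    roughly `target_fpr` of known non-members are misclassified
    as members."""
    return np.quantile(nonmember_losses, target_fpr)

# Toy data standing in for per-example model losses.
rng = np.random.default_rng(0)
member_losses = rng.normal(0.2, 0.1, 1000)      # lower loss on average
nonmember_losses = rng.normal(0.8, 0.3, 1000)   # higher loss on average

t = select_threshold(nonmember_losses)
tpr = membership_inference(member_losses, t).mean()     # attack success rate
fpr = membership_inference(nonmember_losses, t).mean()  # false positive rate
```

The point of calibrating the threshold on a false-positive budget, rather than maximizing accuracy, is that it makes attack success rates comparable across models and privacy budgets.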

Pen Sketches