Machine Learning

Attribute Inference Attacks Pose Distribution Inference Risk to Models

Models leak distribution information that can be used for attribute inference.

Are Attribute Inference Attacks Just Imputation?

We propose a white-box attribute inference attack that extracts distribution information from the model.
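
The core idea can be sketched in a few lines, assuming a scikit-learn-style classifier with predict_proba; the function and its parameters are illustrative stand-ins, not the paper's actual attack.

```python
import numpy as np

def infer_attribute(model, x, y, attr_index, candidate_values):
    """Guess a record's sensitive attribute by trying each candidate
    value and keeping the one the model is most confident about for
    the record's true label y (given as a class index)."""
    best_value, best_conf = None, -np.inf
    for v in candidate_values:
        x_try = np.array(x, dtype=float)
        x_try[attr_index] = v  # substitute the candidate attribute value
        conf = model.predict_proba(x_try.reshape(1, -1))[0, y]
        if conf > best_conf:
            best_value, best_conf = v, conf
    return best_value
```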

Combing for Credentials: Active Pattern Extraction from Smart Reply

We propose black-box and gray-box active pattern extraction attacks that extract sensitive data patterns from the Smart Reply model.

Defense Against Attribute Inference

Preliminary results from our ongoing work on defending against attribute inference.

Merlin, Morgan, and the Importance of Thresholds and Priors

Membership inference attacks remain effective even under skewed priors.
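
A small self-contained sketch of why priors matter: an attack's positive predictive value collapses under a skewed prior unless its false positive rate is very low. The TPR/FPR numbers below are made up for illustration.

```python
def membership_ppv(tpr, fpr, prior):
    """PPV = P(member | attack predicts member) for a given member prior."""
    return (tpr * prior) / (tpr * prior + fpr * (1 - prior))

# With a balanced prior the attack looks strong; with a skewed prior
# most positive predictions are false alarms.
print(membership_ppv(tpr=0.6, fpr=0.1, prior=0.5))   # ~0.86
print(membership_ppv(tpr=0.6, fpr=0.1, prior=0.01))  # ~0.06
```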

Revisiting Membership Inference Under Realistic Assumptions

We propose a novel membership inference attack and a threshold selection procedure that improves existing attacks.
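
A minimal sketch of loss-threshold selection, assuming per-example losses for known non-members are available; this illustrates the general procedure, not the paper's exact method.

```python
import numpy as np

def select_threshold(nonmember_losses, target_fpr=0.01):
    """Pick a loss threshold so that only a target_fpr fraction of
    non-members falls below it (and would be misclassified as members)."""
    return float(np.quantile(nonmember_losses, target_fpr))

def is_member(loss, threshold):
    """Members tend to have unusually low loss on a trained model."""
    return loss < threshold
```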

Efficient Privacy-Preserving Nonconvex Optimization

We propose a differentially private algorithm for non-convex empirical risk minimization with reduced gradient complexity.
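
For flavor, here is a generic gradient-perturbation step (clip per-example gradients, add Gaussian noise); the paper's algorithm and its gradient-complexity analysis differ from this textbook sketch.

```python
import numpy as np

def dp_gradient_step(w, per_example_grads, lr=0.1, clip=1.0, noise_mult=1.0):
    """One noisy SGD step: clip each example's gradient to bound its
    influence, then add calibrated Gaussian noise to the sum."""
    clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    noise = np.random.normal(0.0, noise_mult * clip, size=w.shape)
    g_hat = (np.sum(clipped, axis=0) + noise) / len(per_example_grads)
    return w - lr * g_hat
```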

Evaluating Differentially Private Machine Learning in Practice

We compare the privacy leakage of ML models trained with different relaxations of differential privacy across a range of privacy budgets.
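
The shape of the evaluation is simple; train_private_model and run_membership_attack below are hypothetical stand-ins for the actual training and attack code.

```python
# Measure empirical leakage (attack advantage = TPR - FPR) as the
# privacy budget grows; both helpers are hypothetical placeholders.
for epsilon in [0.1, 1.0, 10.0, 100.0]:
    model = train_private_model(train_data, epsilon=epsilon)
    tpr, fpr = run_membership_attack(model, members, nonmembers)
    print(f"epsilon={epsilon}: attack advantage = {tpr - fpr:.3f}")
```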

Distributed Learning without Distress: Privacy-Preserving Empirical Risk Minimization

We combine differential privacy and MPC for privacy-preserving distributed learning of strongly-convex ERM algorithms.
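
A toy simulation of the high-level combination, assuming honest-but-curious parties: pairwise-cancelling masks hide individual updates (the MPC part) while one calibrated noise draw protects the aggregate (the DP part). Names and noise calibration here are illustrative.

```python
import numpy as np

def simulate_round(local_updates, noise_scale=0.1, seed=0):
    """Aggregate parties' updates so only the noisy sum is revealed."""
    rng = np.random.default_rng(seed)
    n, dim = len(local_updates), local_updates[0].shape[0]
    # Masks sum to zero: each hides one party's update, none affects the sum.
    masks = [rng.normal(size=dim) for _ in range(n - 1)]
    masks.append(-np.sum(masks, axis=0))
    masked = [u + m for u, m in zip(local_updates, masks)]
    # A single Gaussian draw makes the revealed aggregate differentially private.
    return np.sum(masked, axis=0) + rng.normal(0.0, noise_scale, size=dim)
```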

Evaluating Differentially Private Machine Learning in Practice

What seems safe in theory might not be safe in practice.