Models leak distribution information that can be used for attribute inference.
We propose a white-box attribute inference attack that extracts this distribution information from the model.
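As a rough illustration of how distribution information can be read out of a model, the sketch below uses a generic shadow-model baseline: train shadow models on data drawn with different attribute ratios and fit a meta-classifier on their flattened parameters. All names (`make_data`, `shadow_params`) and the synthetic data are hypothetical; this is not the proposed attack, only a minimal white-box property-inference sketch.

```python
# Hypothetical sketch of shadow-model-based distribution (property) inference:
# train shadow models on datasets with different attribute ratios, then learn
# a meta-classifier mapping white-box parameters to the underlying ratio.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(attr_ratio, n=1000, d=10):
    """Synthetic data whose features correlate with a sensitive attribute."""
    attr = rng.random(n) < attr_ratio
    X = rng.normal(size=(n, d)) + attr[:, None] * 0.5
    y = (X.sum(axis=1) + attr > 0.5).astype(int)
    return X, y

def shadow_params(attr_ratio):
    """Train a shadow model and return its flattened parameters."""
    X, y = make_data(attr_ratio)
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    return np.concatenate([clf.coef_.ravel(), clf.intercept_])

# Meta-training set: shadow models trained under two candidate distributions.
feats = np.stack([shadow_params(r) for r in [0.2] * 50 + [0.8] * 50])
labels = np.array([0] * 50 + [1] * 50)
meta = LogisticRegression(max_iter=1000).fit(feats, labels)

# Given a target model's parameters, guess which distribution it was trained on.
target = shadow_params(0.8)      # stand-in for the victim model
print(meta.predict([target]))    # -> [1] if the inference succeeds
```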
We propose black-box and gray-box active pattern extraction attacks that extract sensitive data patterns from the Smart Reply model.
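To give a sense of what a black-box extraction loop can look like, here is a schematic probe-and-record sketch. The `query_model` function, the canary prefixes, and the regex are hypothetical placeholders; the actual attacks on the Smart Reply model are not reproduced here.

```python
# Hypothetical black-box extraction loop: probe a text-suggestion model with
# crafted prefixes and record completions that match sensitive patterns.
import re

def query_model(prefix: str) -> list[str]:
    """Placeholder for the target model's black-box suggestion API."""
    raise NotImplementedError("replace with the actual query interface")

CANARY_PREFIXES = [
    "my social security number is",
    "my phone number is",
    "the password is",
]
SENSITIVE = re.compile(r"\d{3}[- ]?\d{2}[- ]?\d{4}|\d{10}")

def extract_patterns():
    leaked = []
    for prefix in CANARY_PREFIXES:
        for completion in query_model(prefix):
            if SENSITIVE.search(completion):
                leaked.append((prefix, completion))
    return leaked
```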
We present results from ongoing work on an attribute inference defense.
Membership inference attacks are effective even for skewed priors.
We propose a novel membership inference attack and a threshold selection procedure that improves existing attacks.
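To make the threshold selection idea concrete, here is a generic sketch (not the proposed procedure itself): compute per-example losses on a calibration split of known members and non-members, then pick the threshold that maximizes the attack advantage (TPR minus FPR), a criterion that remains meaningful even when the member/non-member prior is skewed.

```python
# Generic loss-threshold membership inference with data-driven threshold
# selection; the proposed attack and selection procedure may differ.
import numpy as np

def select_threshold(member_losses, nonmember_losses):
    """Pick the loss threshold maximizing (TPR - FPR) on a calibration split."""
    candidates = np.sort(np.concatenate([member_losses, nonmember_losses]))
    best_t, best_adv = None, -1.0
    for t in candidates:
        tpr = np.mean(member_losses <= t)      # members tend to have low loss
        fpr = np.mean(nonmember_losses <= t)
        if tpr - fpr > best_adv:
            best_t, best_adv = t, tpr - fpr
    return best_t

def is_member(loss, threshold):
    return loss <= threshold

# Toy usage with synthetic losses (members have lower loss on average).
rng = np.random.default_rng(0)
members = rng.exponential(0.2, 1000)
nonmembers = rng.exponential(1.0, 1000)
t = select_threshold(members, nonmembers)
print(f"threshold={t:.3f}, member TPR={np.mean(members <= t):.2f}, "
      f"non-member FPR={np.mean(nonmembers <= t):.2f}")
```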
We propose a differentially private algorithm for non-convex empirical risk minimization with reduced gradient complexity.
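For reference, the standard DP-SGD-style update that such algorithms typically build on (per-example gradient clipping followed by Gaussian noise) looks roughly as follows; this is the textbook baseline, not the proposed reduced-gradient-complexity method.

```python
# Generic DP-SGD-style step for (possibly non-convex) ERM: clip per-example
# gradients to bound sensitivity, add Gaussian noise, average, and update.
import numpy as np

def dp_sgd_step(w, per_example_grads, clip_norm, noise_multiplier, lr, rng):
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    clipped = np.stack(clipped)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    noisy_mean = (clipped.sum(axis=0) + noise) / len(per_example_grads)
    return w - lr * noisy_mean
```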
We compare the privacy leakage of ML models trained with different differential privacy relaxations and different privacy budgets.
We combine differential privacy and MPC for privacy-preserving distributed learning of strongly convex ERM problems.
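As a toy illustration of the combination (not the actual protocol), each party can perturb its local gradient with Gaussian noise and secret-share the result so that only the aggregate is ever revealed. The sketch below simulates additive secret sharing over the reals; a real MPC protocol would operate over a finite field with properly calibrated noise.

```python
# Toy simulation of combining DP with MPC-style secure aggregation: each
# party adds Gaussian noise to its local gradient (DP) and splits it into
# additive shares (MPC), so only the sum of all contributions is revealed.
import numpy as np

rng = np.random.default_rng(0)

def share(vector, n_parties):
    """Split a vector into n additive shares that sum back to the vector."""
    shares = [rng.normal(size=vector.shape) for _ in range(n_parties - 1)]
    shares.append(vector - sum(shares))
    return shares

def secure_aggregate(local_grads, sigma):
    n = len(local_grads)
    noisy = [g + rng.normal(0.0, sigma, size=g.shape) for g in local_grads]
    # Each party shares its noisy gradient; aggregators only see sums of shares.
    all_shares = [share(g, n) for g in noisy]
    column_sums = [sum(party_shares[i] for party_shares in all_shares)
                   for i in range(n)]
    return sum(column_sums)  # equals the sum of the noisy gradients

grads = [rng.normal(size=5) for _ in range(3)]
print(secure_aggregate(grads, sigma=0.1))
```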