Membership inference attacks are effective even for skewed priors.
We propose a novel membership inference attack and a threshold selection procedure that improves existing attacks.
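As a minimal illustration (not the proposed attack itself), a confidence-threshold membership inference attack predicts "member" when the model's confidence exceeds a threshold, and the threshold can be selected to maximize membership advantage (TPR minus FPR). The score distributions below are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic confidence scores (assumption): members tend to score higher
# than non-members because the model has overfit to them.
member_conf = rng.beta(8, 2, size=1000)
nonmember_conf = rng.beta(5, 5, size=1000)

def select_threshold(member_scores, nonmember_scores):
    """Pick the threshold maximizing TPR - FPR (membership advantage)."""
    candidates = np.unique(np.concatenate([member_scores, nonmember_scores]))
    best_t, best_adv = 0.0, -1.0
    for t in candidates:
        tpr = np.mean(member_scores >= t)   # members correctly flagged
        fpr = np.mean(nonmember_scores >= t)  # non-members wrongly flagged
        if tpr - fpr > best_adv:
            best_t, best_adv = t, tpr - fpr
    return best_t, best_adv

t, adv = select_threshold(member_conf, nonmember_conf)
print(f"threshold={t:.3f}, membership advantage={adv:.3f}")
```

In practice the threshold would be tuned on shadow-model or holdout scores rather than on the victim's own data.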
We propose a differentially private algorithm for non-convex empirical risk minimization with reduced gradient complexity.
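For context, the standard building block for differentially private non-convex ERM is a DP-SGD-style step: clip each per-example gradient, average, and add Gaussian noise. The sketch below uses a toy linear least-squares loss; the clip norm `C`, noise multiplier `sigma`, and learning rate are assumptions, not parameters from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def clip(grad, C=1.0):
    """Scale a per-example gradient so its L2 norm is at most C."""
    norm = np.linalg.norm(grad)
    return grad * min(1.0, C / (norm + 1e-12))

def dp_sgd_step(w, X, y, lr=0.1, C=1.0, sigma=1.0):
    # Per-example gradients of squared loss for a linear model (toy loss).
    grads = [2 * (x @ w - t) * x for x, t in zip(X, y)]
    clipped = np.mean([clip(g, C) for g in grads], axis=0)
    # Gaussian noise calibrated to the clipped sensitivity C / n.
    noise = rng.normal(0.0, sigma * C / len(X), size=w.shape)
    return w - lr * (clipped + noise)

X = rng.normal(size=(64, 5))
w_true = np.ones(5)
y = X @ w_true
w = np.zeros(5)
for _ in range(200):
    w = dp_sgd_step(w, X, y)
print("final MSE:", np.mean((X @ w - y) ** 2))
```

Reduced gradient complexity in this setting typically means fewer per-example gradient evaluations are needed to reach a given utility under the same privacy budget.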
We compare the privacy leakage of ML models trained with different differential privacy relaxations and different privacy budgets.
We combine differential privacy and multi-party computation (MPC) for privacy-preserving distributed learning of strongly convex ERM problems.
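As a rough sketch of how the two techniques compose (all parameters here are assumptions, not the paper's protocol): each client perturbs its update with Gaussian DP noise locally, then additively secret-shares it, so the aggregators learn only the noisy sum, never an individual update.

```python
import random

random.seed(0)
P = 2**61 - 1    # modulus for additive secret sharing (assumption)
SCALE = 10**6    # fixed-point scaling for real-valued updates (assumption)

def share(x, n):
    """Split an integer x into n additive shares mod P."""
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % P)
    return parts

def encode(v):
    return round(v * SCALE) % P

def decode(x):
    if x > P // 2:      # map large residues back to negative values
        x -= P
    return x / SCALE

# Each client adds Gaussian DP noise locally, then secret-shares its update.
clients = [1.5, -0.7, 2.2]
sigma = 0.1
noisy = [v + random.gauss(0.0, sigma) for v in clients]
all_shares = [share(encode(v), 3) for v in noisy]

# Each aggregator sums one share per client; combining the partial sums
# reveals only the noisy aggregate, not any individual client's update.
partial = [sum(s[i] for s in all_shares) % P for i in range(3)]
total = decode(sum(partial) % P)
print("noisy aggregate:", total)
```

The DP noise bounds what the final aggregate reveals about any one client, while the secret sharing hides individual contributions during aggregation.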
What seems safe might not be safe in practice.
Comparing differential privacy implementations by quantifying their privacy leakage.
Combining differential privacy and multi-party computation techniques for private machine learning.