Privacy Preserving Machine Learning

Multi-party computation (MPC) protocols provide computational security for both the data and the model by performing computations on encrypted values. Differential privacy, in contrast, provides information-theoretic privacy by perturbing the computations with noise. The two techniques address different threat models and have orthogonal goals. In this project, we combine both techniques to achieve strong security and privacy for machine learning on sensitive data.
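As a minimal illustration of the combination (not one of our protocols), the sketch below has each party add a small share of the total differential-privacy noise to its local gradient and then additively secret-share the result, so only the noisy aggregate is ever reconstructed. The modulus Q, the scaling factor SCALE, and the scalar gradients are all illustrative assumptions.

```python
import random

Q = 2**61 - 1   # illustrative modulus for additive secret sharing
SCALE = 10**6   # illustrative fixed-point scaling factor

def encode(x):
    """Encode a float as a fixed-point element mod Q."""
    return int(round(x * SCALE)) % Q

def decode(v):
    """Decode an element mod Q back to a (centered) float."""
    if v > Q // 2:
        v -= Q
    return v / SCALE

def share(value, n):
    """Split a value into n additive shares that sum to it mod Q."""
    parts = [random.randrange(Q) for _ in range(n - 1)]
    parts.append((value - sum(parts)) % Q)
    return parts

# Each of n parties adds noise with std sigma/sqrt(n) to its local
# gradient, so the total noise in the sum has std sigma (distributed
# noise generation for differential privacy).
n, sigma = 3, 0.5
local_grads = [0.20, -0.10, 0.40]
perturbed = [g + random.gauss(0.0, sigma / n**0.5) for g in local_grads]

# Each party secret-shares its noisy gradient; only sums of shares are
# combined, so no individual gradient is ever revealed in the clear.
share_matrix = [share(encode(g), n) for g in perturbed]
agg_shares = [sum(col) % Q for col in zip(*share_matrix)]
print("DP-noised secure aggregate:", decode(sum(agg_shares) % Q))
```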
Publications
Efficient Privacy-Preserving Nonconvex Optimization
We propose a differentially private algorithm for non-convex empirical risk minimization with reduced gradient complexity.
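The contribution of the paper is the reduced gradient complexity; as background only, a standard differentially private gradient step (per-example clipping plus Gaussian noise, not the paper's algorithm) looks roughly like this:

```python
import numpy as np

def dp_sgd_step(w, per_example_grads, lr=0.1, clip=1.0, sigma=1.0, rng=None):
    """One DP gradient step: clip each per-example gradient to L2 norm
    `clip`, average, and add Gaussian noise scaled to the clip bound."""
    rng = rng if rng is not None else np.random.default_rng()
    clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    noise = rng.normal(0.0, sigma * clip / len(per_example_grads),
                       size=w.shape)
    return w - lr * (np.mean(clipped, axis=0) + noise)

# toy usage on a 2-D parameter vector
w = np.zeros(2)
grads = [np.array([3.0, 4.0]), np.array([0.1, -0.2])]  # first one is clipped
w = dp_sgd_step(w, grads)
```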
Distributed Learning without Distress: Privacy-Preserving Empirical Risk Minimization
We combine differential privacy and MPC for privacy-preserving distributed learning of strongly convex ERM problems.
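For intuition, a classical single-machine baseline in this setting is output perturbation: for a 1-Lipschitz loss with a lam-strongly-convex L2 regularizer, the ERM minimizer has L2 sensitivity at most 2/(n*lam), so Gaussian noise calibrated to that bound yields (eps, delta)-differential privacy. The sketch shows only that baseline; in a combined DP+MPC protocol the noise would instead be generated inside the secure computation so that no single party sees the unperturbed model.

```python
import numpy as np

def output_perturbation(w_star, n, lam, eps, delta, rng=None):
    """Release an ERM minimizer with (eps, delta)-DP via the Gaussian
    mechanism. Assumes a 1-Lipschitz loss and lam-strongly-convex
    L2 regularization, giving L2 sensitivity <= 2 / (n * lam)."""
    rng = rng if rng is not None else np.random.default_rng()
    sensitivity = 2.0 / (n * lam)
    # classic Gaussian-mechanism calibration (valid for eps < 1)
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return w_star + rng.normal(0.0, sigma, size=w_star.shape)
```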
Aggregating Private Sparse Learning Models Using Multi-Party Computation
We use MPC to aggregate locally trained sparse models in a private and distributed way.
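A minimal sketch of the aggregation primitive, reusing the illustrative modulus Q from above and representing sparse models as dicts from coordinate index to fixed-point weight: each party splits its model into additive shares, and only coordinate-wise share sums are combined. Note that this toy version shares only nonzero coordinates, which leaks each model's support; a real protocol would also hide the support, e.g. by sharing over the union of indices.

```python
import random

Q = 2**61 - 1  # same illustrative modulus as above

def share_sparse(model, n):
    """Secret-share a sparse model (dict: index -> element mod Q) into
    n sparse dicts whose coordinate-wise sum equals the model mod Q."""
    shares = [dict() for _ in range(n)]
    for idx, val in model.items():
        parts = [random.randrange(Q) for _ in range(n - 1)]
        parts.append((val - sum(parts)) % Q)
        for s, p in zip(shares, parts):
            s[idx] = p  # caveat: exposes which coordinates are nonzero
    return shares

def add_sparse(dicts):
    """Coordinate-wise sum of sparse share dicts mod Q."""
    out = {}
    for d in dicts:
        for idx, val in d.items():
            out[idx] = (out.get(idx, 0) + val) % Q
    return out
```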