What seems safe might not be safe in practice.
We compare differential privacy implementations by quantifying their privacy leakage.
We identify vague terms across English, Portuguese, and Spanish software requirement documents.
We use secure multi-party computation (MPC) to aggregate models privately and in a distributed manner.
We combine differential privacy and multi-party computation techniques for private machine learning.
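To illustrate the MPC-based aggregation mentioned above, the following is a minimal sketch of additive secret sharing, one common MPC building block for private aggregation. The field modulus, the fixed-point client values, and all names are illustrative assumptions, not the paper's actual protocol or implementation.

```python
import random

PRIME = 2**61 - 1  # illustrative field modulus for additive secret sharing

def share(value, n_parties):
    """Split an integer into n_parties additive shares mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares mod PRIME."""
    return sum(shares) % PRIME

# Each client holds a (hypothetical, fixed-point encoded) model weight.
clients = [120, 95, 140]
all_shares = [share(w, len(clients)) for w in clients]

# Party i sums the i-th share from every client; no single party
# ever sees an individual client's weight in the clear.
partial_sums = [sum(col) % PRIME for col in zip(*all_shares)]

# Combining the partial sums yields only the aggregate.
aggregate = reconstruct(partial_sums)
assert aggregate == sum(clients) % PRIME
```

In a combined DP+MPC pipeline, calibrated noise would additionally be added (by the clients or inside the protocol) before the aggregate is revealed, so the output itself satisfies differential privacy.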