Overview

Differential privacy (DP) provides a strong, mathematically provable guarantee of privacy. It bounds how much an algorithm's output distribution can change when any single individual's data is added or removed, with the bound controlled by a privacy parameter (epsilon).

How it Works

It typically involves adding carefully calibrated noise, drawn from a distribution such as Laplace or Gaussian, to the data, to query results, or to the model's gradients. The amount of noise is scaled to the query's sensitivity (how much one person's data can change the result) and the privacy budget epsilon. This masks individual contributions while still allowing the model to learn the overall trends.
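The classic Laplace mechanism illustrates this calibration. The sketch below (function and variable names are illustrative, not from any particular library) answers a counting query privately by adding noise with scale sensitivity/epsilon:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Return a differentially private answer by adding Laplace noise.

    The noise scale sensitivity/epsilon calibrates the guarantee:
    a smaller epsilon means more noise and stronger privacy.
    """
    rng = rng or np.random.default_rng()
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# A counting query has sensitivity 1: adding or removing one person
# changes the true count by at most 1.
ages = [34, 29, 41, 57, 23]
private_count = laplace_mechanism(len(ages), sensitivity=1.0, epsilon=0.5)
```

Repeated queries consume more of the privacy budget, which is why real deployments track cumulative epsilon across all releases.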

Use in AI

Differential privacy is increasingly used when training models on sensitive data, most commonly via DP-SGD, to ensure that the model doesn't 'memorize' and potentially leak private information during inference.
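In DP-SGD this idea is applied per training step: each example's gradient is clipped to bound its influence, then Gaussian noise scaled to that bound is added before the update. A minimal NumPy sketch, assuming per-example gradients are already computed (all names here are illustrative; production code would use a library such as Opacus or TensorFlow Privacy):

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1,
                clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One simplified DP-SGD update (sketch, not a full implementation).

    Clipping bounds any single example's contribution to the update;
    Gaussian noise scaled to clip_norm masks what remains.
    """
    rng = rng or np.random.default_rng()
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    summed = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    noisy_mean = (summed + noise) / len(per_example_grads)
    return params - lr * noisy_mean
```

A privacy accountant would then translate the noise multiplier, batch sampling rate, and number of steps into an overall (epsilon, delta) guarantee; that bookkeeping is omitted here.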

Related Terms