Approaches and techniques that enable AI systems to learn from or operate on sensitive data while protecting individual privacy. These methods allow organisations to derive value from data while minimising privacy risks to individuals.
Privacy-preserving AI techniques address the tension between data utility and privacy protection, enabling organisations to develop effective AI systems without compromising individual privacy. Approaches include technical methods like differential privacy (adding precisely calibrated statistical noise to query results or model updates), federated learning (training models across distributed datasets without centralising the data), and homomorphic encryption (performing computations directly on encrypted data). These techniques are increasingly important as privacy regulations become more stringent and consumers grow more concerned about how their data is used.
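To make the differential privacy idea concrete, the following is a minimal sketch of the classic Laplace mechanism applied to a counting query. The function names (`laplace_noise`, `dp_count`) and the example query are illustrative, not from any particular library; a counting query is used because it has sensitivity 1, which makes the noise calibration simple.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Sample from Laplace(0, scale) via inverse-CDF sampling.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    # A counting query has sensitivity 1: adding or removing one individual
    # changes the count by at most 1, so adding Laplace(1/epsilon) noise
    # yields epsilon-differential privacy for this query.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Illustrative query: how many of 100 records satisfy the predicate?
# A smaller epsilon means more noise and stronger privacy.
noisy = dp_count(range(100), lambda x: x < 50, epsilon=0.5)
```

The key design point is that the noise scale is calibrated to the query's sensitivity and the chosen privacy budget epsilon, not to the data itself, so the privacy guarantee holds regardless of what the records contain.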
A healthcare research organisation using federated learning to develop a diagnostic AI model that learns from patient data across multiple hospitals without requiring sensitive medical records to leave each hospital’s secure environment.