Security & Privacy in AI
Balancing recognition capabilities with privacy protection
About This Project
This project analyzes and enhances a facial processing pipeline for robotic vision systems, with a strong emphasis on privacy protection. It applies differential privacy techniques to protect individuals' identities while still supporting the person recognition needed for security applications. The solution balances the competing demands of functionality and privacy, processing facial data in a way that preserves anonymity while retaining the vision system's utility for its intended purpose.
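To make that balance concrete, here is a minimal sketch of the kind of mechanism such a pipeline might use: adding calibrated Gaussian noise to a face embedding before it is stored or transmitted. It assumes embeddings are roughly unit-norm vectors and uses the standard (epsilon, delta) Gaussian mechanism; the function name `privatize_embedding` and all parameter values are illustrative assumptions, not the project's actual API.

```python
# Illustrative sketch only: differentially private release of a face embedding
# via the Gaussian mechanism (assumes epsilon in (0, 1)).
import numpy as np

def privatize_embedding(embedding: np.ndarray,
                        epsilon: float = 0.5,
                        delta: float = 1e-5,
                        clip_norm: float = 1.0) -> np.ndarray:
    """Release a noisy, differentially private version of a face embedding."""
    # Clip so the L2 norm is bounded; any two clipped embeddings then differ
    # by at most 2 * clip_norm in L2 norm, which bounds the sensitivity.
    norm = np.linalg.norm(embedding)
    clipped = embedding * min(1.0, clip_norm / max(norm, 1e-12))

    # Gaussian mechanism: this sigma yields (epsilon, delta)-DP for
    # L2 sensitivity 2 * clip_norm (valid for epsilon < 1).
    sensitivity = 2.0 * clip_norm
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / epsilon
    return clipped + np.random.normal(0.0, sigma, size=clipped.shape)

# Usage: a 128-dimensional unit-norm embedding, as many face encoders produce.
raw = np.random.randn(128)
raw /= np.linalg.norm(raw)
private = privatize_embedding(raw, epsilon=0.5)
```

The noisy embedding can still be matched against enrolled identities (with reduced accuracy), while the added noise limits how much any single release reveals about the underlying face.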
Core Concepts
- Differential privacy fundamentals (formal definition sketched after this list)
- Facial recognition technology
- Privacy-preserving machine learning
- Data anonymization techniques
- Security threat modeling for AI systems
- Privacy by design principles
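For reference, the textbook definition behind the differential privacy item above, stated in its general (epsilon, delta) form; the notation is the standard convention, not project-specific:

```latex
% A randomized mechanism M is (\varepsilon, \delta)-differentially private if,
% for all neighboring datasets D, D' (differing in one individual's data)
% and all measurable output sets S:
\Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S] + \delta
```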
Key Knowledge/Skills
- Federated learning and differential privacy frameworks
- Computer vision with privacy constraints
- Security testing methodologies for AI
- Privacy legislation compliance (GDPR, CCPA)
- Adversarial attacks and defenses
- Privacy-utility tradeoff analysis (see the sweep sketch after this list)
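As a companion to the last item above, the snippet below sketches how a privacy-utility sweep might be set up. It assumes the `privatize_embedding` helper from the earlier sketch and uses cosine similarity between the original and privatized embedding as a stand-in utility metric; the epsilon grid and the metric choice are illustrative, not the project's evaluation protocol.

```python
# Illustrative privacy-utility sweep; assumes privatize_embedding from the
# earlier sketch is in scope. Smaller epsilon => stronger privacy => noisier
# embeddings => lower utility proxy.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Utility proxy: how close the privatized embedding stays to the original."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

embedding = np.random.randn(128)
embedding /= np.linalg.norm(embedding)

for epsilon in (0.1, 0.5, 1.0):  # privacy budget grid (illustrative)
    sims = [
        cosine_similarity(embedding,
                          privatize_embedding(embedding, epsilon=epsilon))
        for _ in range(100)  # average over repeated noise draws
    ]
    print(f"epsilon={epsilon:.1f}: mean cosine similarity {np.mean(sims):.3f}")
```

A real analysis would replace the cosine-similarity proxy with task-level metrics such as recognition accuracy or false-match rate at each privacy budget.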
Coursework Covered
Privacy-Preserving ML & Security