In this Privacyespresso episode, we are joined by Privacyrules expert Sara Cannataci from the law firm Fenech & Fenech to dive into the intersection of artificial intelligence, data protection, and privacy.
Starting from a GDPR perspective, Sara sheds light on the key data protection principles that come into play when AI systems, like ChatGPT, process personal data:
- Lawfulness, Fairness, and Transparency: AI’s collection and use of personal data raise questions about legality and fairness. Are current AI systems truly compliant with these principles?
- Accuracy and Integrity: Accurate data processing is crucial. Instances of inaccuracy, like false information generated by AI models, can lead to reputational harm and breaches of integrity.
- Accountability and Legal Personality: Who takes the responsibility when AI systems generate problematic outputs? The absence of legal personality for AI complicates accountability.
- Automated Decision-Making and Bias: Inherent biases in AI systems can lead to unfair outcomes, testing the GDPR’s safeguards against decisions based solely on automated processing.
- Data Subject Rights: Fulfilling data subject rights, especially the right to erasure, becomes complex when AI systems are trained on vast datasets.
Sara also discusses the significance of “Privacy by Design,” emphasizing the importance of considering data protection from the outset when developing AI systems. She delves into the intriguing question of who should be held accountable: developers, users, or the AI itself?
Whether you’re a developer or a user of AI systems, staying informed is crucial! The GDPR and the emerging AI Act are just the tip of the iceberg.
Watch the full episode to gain insights into the evolving landscape of AI, privacy, and data protection.