EU framework: The Artificial Intelligence Act
July 5, 2023


The European Parliament has adopted a draft regulation on artificial intelligence, paving the way for the upcoming Artificial Intelligence Act.

This act, proposed by the European Commission, aims to establish a legal framework for the development, deployment, and use of AI systems in the EU. Its goal is to ensure that AI systems used in the EU are transparent, reliable, and safe, and that they respect fundamental rights. Once the AI Act is finalized and implemented, it will have a significant impact on a wide range of stakeholders: AI developers, providers, users, and operators will need to comply with its requirements, regardless of whether they are located inside or outside the EU.

The AI Act defines three categories of AI systems, each subject to specific obligations and security requirements based on the level of risk they pose: prohibited systems, high-risk systems, and systems with specific risks of manipulation. Systems posing an unacceptable level of risk, such as those using manipulative techniques or emotion recognition in certain areas, will be strictly prohibited. High-risk AI systems, including those used for biometric identification, critical infrastructure management, and law enforcement, will face mandatory requirements and must be declared to the EU. AI systems with specific risks of manipulation, such as chatbots or systems generating deepfakes, will be subject to transparency requirements. Other AI systems that do not fall into these categories can be developed and used within the EU without additional legal obligations beyond existing legislation.

While we await the implementation of the AI Act, it’s important to note that the General Data Protection Regulation (GDPR) remains the primary regulatory safeguard for AI systems that collect users’ personal data. In the meantime, European data protection agencies have released guidelines on the appropriate and lawful use of AI, offering organizations self-assessment tools to evaluate the GDPR compliance of their AI systems.

To delve deeper into this important topic, watch our privacyespresso with Privacyrules expert Jean Christophe Chevallier from the French law firm YDES.