Privacy espresso series | June 8, 2023

EU framework series, episode 1: The Artificial Intelligence Act

1) What is the Artificial Intelligence Act?

The Artificial Intelligence Act is a regulation proposed by the European Commission that aims to establish a legal framework for the development, deployment, and use of AI systems in the European Union. Its goal is to ensure that AI systems used in the EU are transparent, reliable, and safe, and that they respect fundamental rights.

2) When will the Artificial Intelligence Act be applicable?

The AI Act was proposed by the European Commission in April 2021, and now needs to be reviewed and adopted jointly by the European Parliament and the Council of the EU in order to be applicable.  

On 6 December 2022, the Council adopted its “general approach”, a common position intended to speed up the legislative procedure and facilitate an agreement with the Parliament.

During the last votes in the Parliament, heated discussions about ChatGPT and other new, disruptive AI applications caused delays in the process.

However, the Parliament recently agreed on a draft of its position, and the leading committees voted on it on 11 May 2023. The final vote in plenary session will take place in June 2023.

The final negotiations between the Commission, the Council and the Parliament (the so-called ‘trilogue’) will then begin and, once an agreement is reached, the AI Act will enter into force.

Most of the provisions will become applicable 24 months after entry into force, during which time companies and organizations will have to ensure that their AI systems comply with the requirements and obligations set out in the regulation.

3) Who does this regulation concern, and what is considered an AI system?

The AI Act will affect a wide range of stakeholders involved in the development, deployment, and use of AI systems in the EU.  

First, it will apply to AI developers and providers. Companies and organizations placing AI systems on the European market will need to comply with the requirements and obligations set out in the regulation, whether such providers are established in a Member State or not. 

Users and operators of AI systems will also have to comply with those requirements. This includes not only companies that use or operate AI systems in the European Union, but also companies located outside the EU, if the results produced by the AI system are used in a Member State.

Lastly, AI systems will impact European citizens as they interact with them, for instance when encountering AI-generated images on the internet or using ChatGPT. The AI Act aims to protect their rights and interests during those interactions.

While lawmakers agree that consumers need to be protected, they appear unsure of what they need to be protected from. The definition of an AI system proposed by the Commission was initially amended by the Council, which put forward a narrower definition listing specific AI techniques, such as machine learning, logic- and knowledge-based approaches, and statistical approaches, because Member States were concerned that traditional software would otherwise be covered. However, the Parliament radically changed it in its latest draft of May 2023 and deleted this list in order to keep the regulation adaptable to unknown and potentially disruptive AI technologies.

4) What will it change?

The AI Act identifies three categories of AI systems, each subject to specific obligations and security requirements depending on the level of risk it poses.

  • First, AI systems with an unacceptable level of risk to people’s safety would be strictly prohibited. This category includes systems that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities, or are used for social scoring. The AI Act also forbids emotion recognition systems in law enforcement, border management, the workplace, and educational institutions, as well as the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement.
  • Second, the AI Act regulates high-risk AI systems, i.e. systems that may have an adverse impact on people’s safety or their fundamental rights.

This includes AI systems used for:  

  • Biometric identification and categorization of humans 
  • Management of critical infrastructure 
  • Education 
  • Employment 
  • Access to essential private services and public services and benefits 
  • Law enforcement 
  • Migration, asylum, and border control management 
  • Administration of justice and democratic processes 

The AI Act imposes a range of mandatory requirements on the providers of such systems, relating to documentation, risk management, governance, transparency, and safety, depending on their status. These systems must also be registered in an EU database and bear the CE marking.

  • Lastly, some AI systems which pose specific risks of manipulation, such as systems that interact with humans (chatbots) or systems that generate deep fakes, are subject to transparency requirements. In practice, this means users must be clearly informed that they are interacting with an AI system.

All other AI systems can be developed and used in the EU without any legal obligations beyond those in existing legislation.

5) Who will be responsible for enforcing the regulation, and which sanctions will be applied?

National authorities in the EU will be responsible for enforcing the regulation and ensuring that AI systems used in their respective countries comply with the requirements and obligations set out in the regulation.  

In France, the Council of State has recommended that the French Data Protection Authority (CNIL) become the national supervisory authority for AI systems under the AI Act, in line with the joint opinion that the CNIL and its European counterparts issued in June 2021.

The European Artificial Intelligence Board will provide guidance and support to national authorities in this regard. 

The sanctions have evolved with each iteration of the AI Act and might change again as a result of the ‘trilogue’. 

According to the version proposed by the European Commission, authorities will be allowed to fine companies up to 30 million euros or up to 6% of worldwide annual turnover, whichever is higher, in two cases:

  • If a prohibited AI system is placed on the European market. 
  • If an AI system does not comply with the data quality requirements.

Supplying incorrect, incomplete or misleading information to competent authorities in reply to a request will be fined up to 10 million euros or 2% of worldwide annual turnover.

Non-compliance of an AI system with any other requirement or obligation under the regulation will be penalised by a fine of up to 20 million euros or 4% of worldwide annual turnover.
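Since each tier combines a fixed ceiling with a turnover-based percentage, the applicable maximum depends on the company’s size. Below is a minimal sketch of how these caps interact, assuming the “whichever is higher” rule of the Commission’s proposal; the tier labels are illustrative names, not terms from the regulation, and this is a simplified illustration, not legal advice.

```python
# Illustrative sketch of the fine caps in the Commission's 2021 AI Act proposal.
# Assumes the "whichever is higher" rule; amounts may still change in the trilogue.

# (fixed cap in euros, share of worldwide annual turnover) per infringement tier
TIER_CAPS = {
    "prohibited_system_or_data_quality": (30_000_000, 0.06),
    "other_non_compliance": (20_000_000, 0.04),
    "incorrect_information": (10_000_000, 0.02),
}

def max_fine_eur(tier: str, worldwide_annual_turnover_eur: float) -> float:
    """Return the maximum fine cap in euros for a given infringement tier."""
    fixed_cap, turnover_share = TIER_CAPS[tier]
    return max(fixed_cap, turnover_share * worldwide_annual_turnover_eur)

# Example: a company with 2 billion euros of worldwide annual turnover placing
# a prohibited AI system on the market. 6% of turnover (120 million euros)
# exceeds the 30 million euro floor, so the turnover-based cap applies.
print(max_fine_eur("prohibited_system_or_data_quality", 2_000_000_000))  # 120000000.0
```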

6) Until this act comes into force, how are artificial intelligence systems regulated?

In the meantime, the GDPR is the main regulatory safeguard against unlawful artificial intelligence systems, as they collect users’ personal data.

The EDPB members recently discussed the enforcement action undertaken by the Italian data protection authority against OpenAI over ChatGPT. They decided to launch a dedicated task force to foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities.

Furthermore, European data protection authorities have released guidelines on the appropriate and lawful use of AI. The French CNIL, for instance, has provided organisations with a self-assessment guide for artificial intelligence systems: an analysis grid they can use to assess for themselves the maturity of their AI systems with regard to the GDPR.