
On 12 July 2024, the EU published Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (the ‘AI Act’) in the Official Journal of the European Union. The AI Act is the world’s first comprehensive legal framework of its kind. It aims to ensure the safe marketing and use of AI systems and their outputs, consistent with EU fundamental rights, as well as to stimulate investment and innovation in AI in Europe.
The AI Act will enter into force on 1 August 2024 and will apply 24 months later (i.e. on 2 August 2026).
However, some provisions will apply earlier:
- Provisions on prohibited AI practices will start to apply on 2 February 2025;
- Provisions on general-purpose AI models will start to apply on 2 August 2025.
Other provisions, i.e., obligations for high-risk AI systems that are safety components of regulated products, will start to apply later, on 2 August 2027.
This gradual application follows the AI Act’s risk-based approach, which differentiates between requirements and prohibitions according to the potential risks that AI systems may pose. The higher the risk of an AI system harming society, the stricter the regulatory requirements.
Scope and application
The AI Act aims to cover all AI systems and to be as technology-neutral and future-proof as possible.
The scope of the AI Act is broad. The AI Act is not sector-specific and applies to:
- AI system providers, which place AI systems on the market or put them into service, irrespective of whether those providers are established within the EU or in a third country;
- AI system providers or users that are located in a third country, if the output produced by the AI system is used in the EU;
- Importers and distributors of AI systems;
- Users of AI systems located within the EU.
The AI Act also covers product manufacturers that place a product on the market under their own name or trademark where that product incorporates, or is accompanied by, an AI system.
Risk-based approach to AI regulation
The AI Act follows a risk-based approach, distinguishing between uses of AI systems that present (i) an unacceptable and therefore prohibited risk, (ii) a high risk, (iii) a limited risk and (iv) a low or minimal risk.
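To illustrate the tiered structure, the sketch below (in Python, a simplification of our own rather than anything found in the Act itself) maps each of the four risk categories to the broad regulatory consequence the Act attaches to it:

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk categories (simplified for illustration)."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # permitted subject to strict requirements
    LIMITED = "limited"            # permitted subject to transparency duties
    MINIMAL = "minimal"            # permitted without additional AI Act duties

# One-line summary per tier; the actual obligations are far more
# detailed (see the sections below).
CONSEQUENCE = {
    RiskTier.UNACCEPTABLE: "Prohibited: may not be placed on the EU market.",
    RiskTier.HIGH: "Allowed after an ex ante conformity assessment.",
    RiskTier.LIMITED: "Allowed, with transparency obligations (e.g. labelling).",
    RiskTier.MINIMAL: "Allowed; voluntary codes of conduct are encouraged.",
}

for tier in RiskTier:
    print(f"{tier.value:>12}: {CONSEQUENCE[tier]}")
```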
(i) Prohibited AI practices
The AI Act’s list of prohibited AI practices covers systems whose use is deemed unacceptable because it is contrary to EU fundamental rights, such as:
- AI systems that exploit the vulnerabilities of vulnerable groups, or that manipulate persons through techniques beyond their consciousness, in order to distort their behaviour in a manner that may cause harm;
- AI-based social scoring for general purposes by public or private actors;
- The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for law enforcement purposes, subject to narrowly defined exceptions.
Examples of prohibited AI systems are:
- An AI system used in commercial trucks that emits inaudible sounds designed to subconsciously encourage drivers to drive for longer hours than is safe;
- A children's toy with an integrated AI voice that encourages kids to perform dangerous stunts or challenges under the guise of playing a game.
AI systems that present an unacceptable risk are not subject to compliance requirements; they are outright prohibited. In any event, companies need to:
- Be transparent about their use of AI systems so that individuals are not manipulated beyond their consciousness;
- Conduct assessments of their AI systems to identify potential harm that these systems could cause to individuals;
- Conduct assessments to identify whether vulnerable groups use their AI systems and implement appropriate measures to prevent their vulnerabilities from being exploited.
(ii) High-risk AI systems
The AI Act contains specific rules for AI systems that create a high risk to the health, safety or fundamental rights of natural persons.
The classification of an AI system as high-risk is based on the intended purpose of the AI system. The AI Act sets out two main categories of high-risk AI systems:
- AI systems intended to be used as safety components of regulated products that are subject to third-party conformity assessment under the relevant sectoral legislation (e.g., medical devices, children’s toys); and
- AI systems listed in Annex III of the AI Act, such as AI systems used for biometric identification and categorisation of natural persons, management and operation of critical infrastructure, education and vocational training, and employment and workers management.
High-risk AI systems will be permitted on the European market subject to compliance with several requirements and an ex ante conformity assessment. The AI Act establishes various obligations for providers and other participants along the AI value chain (users, importers, distributors, authorised representatives).
Companies that market or put into service high-risk AI systems will be required to:
- Establish an appropriate risk management system (to identify risks, communicate risks to the user, and eliminate or reduce risks as far as possible);
- Use appropriate and high-quality training, validation and testing data sets;
- Draw up technical documentation for the AI system that provides national competent authorities with the information necessary to assess the AI system’s compliance with the requirements of the AI Act;
- Enable automatic recording of events (‘logs’) while the high-risk AI system is operating;
- Ensure transparency by enabling users to understand the AI system’s output and use it appropriately;
- Ensure the AI system can be and is overseen by natural persons;
- Register the AI system in the EU database;
- Conduct post-market reporting and monitoring;
- Draw up an EU declaration of conformity and affix the CE marking of conformity.
(iii) Transparency obligations for AI systems presenting limited risk
AI systems that interact with individuals, that are used to detect emotions or for biometric categorisation, or that generate or manipulate content (‘deepfakes’) will be subject to strict transparency obligations.
For instance, AI systems that detect a user’s emotions will be required to inform users accordingly. Deepfakes must be labelled appropriately, subject to some limitations for legitimate purposes (such as freedom of expression and freedom of the arts and sciences).
(iv) AI systems presenting low or minimal risk
AI systems presenting low or minimal risk will not be subject to additional legal obligations. Nevertheless, the AI Act encourages the industry to draw up codes of conduct to incentivise the voluntary application of the requirements for high-risk AI systems to low-risk AI systems.
Moreover, participants along the AI value chain (users, distributors, providers and others) may still be subject to obligations arising from other legislation, such as consumer protection, intellectual property, data protection and digital services rules.
Enforcement framework and penalties
Market surveillance authorities will be able to prohibit an AI system presenting a risk to the health, safety or fundamental rights of individuals, or withdraw it from the market, if the AI system operator fails to take adequate corrective measures to ensure that the system no longer presents that risk and complies with the AI Act.
Infringements will be subject to administrative fines of up to EUR 7.5 million, EUR 15 million or EUR 35 million, depending on the infringement, or, where the offender is a company, up to 1%, 3% or 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
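The ‘whichever is higher’ mechanic for corporate offenders can be made concrete with a short sketch. The pairing of fixed caps with turnover percentages (EUR 7.5 million/1%, 15 million/3%, 35 million/7%) follows the Act; the category labels, function and variable names are our own illustrative assumptions:

```python
# Maximum administrative fine for a company under the AI Act's three
# penalty tiers (amounts in EUR). Category labels are illustrative.
PENALTY_TIERS = {
    # category: (fixed cap in EUR, share of worldwide annual turnover)
    "incorrect_information": (7_500_000, 0.01),
    "other_obligations": (15_000_000, 0.03),
    "prohibited_practices": (35_000_000, 0.07),
}

def max_fine(category: str, worldwide_turnover_eur: float) -> float:
    """Return the higher of the fixed cap and the turnover-based cap."""
    fixed_cap, turnover_share = PENALTY_TIERS[category]
    return max(fixed_cap, turnover_share * worldwide_turnover_eur)

# Example: a company with EUR 2 billion in turnover that engages in a
# prohibited practice faces up to max(35M, 7% of 2B) = EUR 140 million.
print(f"EUR {max_fine('prohibited_practices', 2_000_000_000):,.0f}")
```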
And now what?
The presence of AI is growing fast, and people and companies are not always aware that they are using or encountering AI. The entry into force of the AI Act requires companies to increase their awareness of their use of AI and to adapt their risk and quality management processes to ensure compliance.
Manufacturers, distributors and IT providers may need to (i) amend existing contracts to mitigate compliance and liability risks, (ii) document compliance, and (iii) adopt new AI policies that strike the right balance between protecting innovation and ensuring transparency. Such policies must also ensure compliance with other legal obligations, such as the protection of copyright and other intellectual property rights.
In the coming months, while closely monitoring the EU’s forthcoming guidelines on prohibited practices and high-risk AI systems, we will assist clients in translating legal requirements into workable policies and practices.