Introduction
On 21 April 2021, the European Commission published its proposal for a new Regulation on Artificial Intelligence (“AI”) (the “AI Regulation”). If adopted, the AI Regulation will be the first comprehensive regulatory regime for the use of AI. It adopts a risk-based approach: different requirements will apply according to the level of risk that a technology carries.
The AI Regulation is promoted as having EU values at its core, with a focus on protecting safety, quality and the rights of individuals. This can be contrasted with other major global AI markets, notably the US and China.
The EU has form for developing regulations of this nature: in the privacy world, the GDPR has been a great success in improving and protecting individuals’ rights with respect to their data privacy, although this has come at considerable cost to businesses. Some features of the AI Regulation will be familiar from the GDPR (e.g. extra-territorial reach and scarily high fines). Indeed, businesses which develop or employ AI will be able to draw on their experience of implementing a GDPR compliance programme when designing a similar programme for AI Regulation compliance. In this way, while the AI Regulation could be seen as a headache for AI developers and users, it can also be viewed as an opportunity to build trust with stakeholders and members of the public alike, in the context of technologies that are often viewed with suspicion.
Which technology does the AI Regulation cover?
The AI Regulation applies to the use of any AI system defined as:
Software that is developed with one or more of the following techniques and approaches:
- machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
- logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; and
- statistical approaches, Bayesian estimation, and search and optimisation methods;
and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.
The proposed definition of AI is wide and could catch software which would not usually be considered to be AI, particularly in the field of search and optimisation software. The sketch below illustrates the point.
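By way of illustration, here is a minimal sketch of a plain brute-force route optimiser. It involves no machine learning at all, yet it applies a classic search and optimisation technique and, for a human-defined objective (minimising distance travelled), generates a recommendation. On a literal reading, everyday software of this kind could fall within the definition. The scenario, names and figures below are our own hypothetical illustration, not an example given in the AI Regulation.

```python
# Hypothetical example: a delivery-route optimiser built with exhaustive
# search -- a "search and optimisation" technique -- that outputs a
# recommendation for a human-defined objective (the shortest route).
from itertools import permutations

# Coordinates of a depot and three delivery stops (made-up data).
STOPS = {"depot": (0, 0), "a": (2, 3), "b": (5, 1), "c": (1, 4)}

def route_length(route):
    """Total straight-line length of a route starting and ending at the depot."""
    path = ("depot", *route, "depot")
    return sum(
        ((STOPS[u][0] - STOPS[v][0]) ** 2 + (STOPS[u][1] - STOPS[v][1]) ** 2) ** 0.5
        for u, v in zip(path, path[1:])
    )

# Try every visiting order and recommend the shortest: no learning, no
# statistics, just brute-force search generating a recommendation.
best = min(permutations(s for s in STOPS if s != "depot"), key=route_length)
print("Recommended route: depot ->", " -> ".join(best), "-> depot")
```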
The AI Regulation will not apply to AI that is already on the market at the time the AI Regulation comes into effect (so-called ‘legacy AI’) until the AI is repurposed or substantially modified. There are other exemptions relating to public, government or military systems.
Who does the AI Regulation apply to?
Providers: you will be a ‘provider’ under the AI Regulation if you:
- develop an AI system, or have an AI system developed, with a view to placing it on the market or putting it into service; and
- do so under your own name or trade mark, whether for payment or free of charge.
Providers have the most obligations under the AI Regulation.
Importers: an importer will be an EU entity that places on the market an AI system bearing the name or trade mark of an entity established outside the EU.
Distributors: a distributor will be any other entity in the supply chain, other than the provider or the importer, that makes an AI system available on the EU market without changing it, e.g. a reseller.
Users: all other business (non-consumer) users of an AI system.
In which countries will the AI Regulation apply?
The AI Regulation applies to:
- providers placing AI systems on the market or putting them into service in the EU, irrespective of whether those providers are established in the EU or in a third country;
- users of AI systems located in the EU; and
- providers and users of AI systems located in a third country, where the output produced by the system is used in the EU.
Following Brexit, the AI Regulation will not automatically apply in the UK, but it is likely to influence any future UK regulation of AI. Moreover, given its extraterritorial reach, the AI Regulation will in practice apply to UK businesses that place AI systems on the EU market or whose AI systems produce output used in the EU.
What does the AI Regulation say?
In accordance with its risk-based approach, the AI Regulation divides AI systems into four categories: unacceptable risk, high risk, limited risk and minimal risk. We summarise some of the key provisions for each category below.
| Unacceptable risk AI | |
| --- | --- |
| Which AI systems are affected? | AI systems that deploy subliminal techniques beyond a person’s consciousness in order to materially distort that person’s behaviour, in a manner that causes or is likely to cause that person or another person physical or psychological harm |
| | AI systems that exploit the vulnerabilities of a specific group of persons due to their age or physical or mental disability, in order to materially distort the behaviour of a person within that group, in a manner that causes or is likely to cause that person or another person physical or psychological harm |
| | Social scoring by or on behalf of public authorities in certain circumstances |
| | ‘Real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement (subject to exceptions) |
| Restrictions | All unacceptable risk AI systems are prohibited. |
| Penalties | Fine of up to EUR 30m or 6% of worldwide annual turnover (whichever is higher). |
| High-risk AI | | |
| --- | --- | --- |
| Which AI systems are affected? | Biometric identification and categorisation of individuals | ‘Real-time’ and ‘post’ remote biometric identification of individuals |
| | Management and operation of critical infrastructure | Safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity |
| | Education and vocational training | Assigning people to schools and other educational or training settings |
| | | Student testing |
| | Employment, workers management and access to self-employment | Recruitment, screening or filtering applications, and evaluating candidates in interviews or tests |
| | | Making decisions on promotion and termination of employment, on task allocation, and on monitoring and evaluating performance and behaviour |
| | Access to and enjoyment of essential private services and public services and benefits | Use by public authorities to evaluate the eligibility of people for public benefits and services |
| | | Evaluating the creditworthiness of people or establishing their credit score (with the exception of AI systems put into service by small-scale providers for their own use) |
| | | Dispatching, or establishing priority in the dispatching of, emergency first response services, including by firefighters and ambulance services |
| | Law enforcement | Various types of AI systems fall within this category, including polygraphs and systems assessing the risk of offending or reoffending |
| | Migration, asylum and border control management | Various types of AI systems fall within this category, including polygraphs, systems assessing security risks, and systems assessing asylum and visa applications |
| | Administration of justice and democratic processes | Assisting a judicial authority in researching and interpreting facts and the law, and in applying the law to a concrete set of facts |
| | Safety components of products, or AI systems which are themselves products, covered by certain EU product safety rules, where those rules require the product to undergo a third-party conformity assessment | Machinery, toys, lifts, equipment and protective systems intended for use in potentially explosive atmospheres, radio equipment, pressure equipment, recreational craft equipment, cableway installations, appliances burning gaseous fuels, medical devices and in vitro diagnostic medical devices, aviation, agricultural and forestry vehicles, two- or three-wheel vehicles and quadricycles, marine equipment, rail systems, and motor vehicles, trailers and parts |
| What are the obligations/restrictions? | Risk management system to be implemented and maintained | Must be a continuous, iterative process run throughout the entire lifecycle of a high-risk AI system |
| | Data and data governance | Techniques involving the training of models with data must be developed on the basis of training, validation and testing data sets that meet certain quality criteria |
| | Technical documentation | Documentation must demonstrate the system’s compliance with the high-risk AI requirements of the AI Regulation. To be drawn up before the system is placed on the market or put into service, and kept up to date |
| | Record-keeping (logs) | The system must have the capability to keep logs while it is operating, to ensure traceability (see the illustrative sketch after this table) |
| | Transparency and provision of information to users | Operation must be sufficiently transparent to enable users to interpret the system’s output and use it appropriately. A list of mandatory information must be provided |
| | Human oversight | The system must be designed and developed in such a way that it can be effectively overseen by humans during the period in which it is in use, including with appropriate human-machine interface tools |
| | Accuracy, robustness and cybersecurity | Must be appropriate to the system’s intended purpose, and the system must perform consistently |
| | Registration | Standalone AI systems must be registered in an EU register |
| | Ongoing monitoring and reporting | Serious incidents must be reported |
| Who is responsible for compliance of high-risk AI systems? | Providers of the system | Providers have overall responsibility for compliance with the above requirements |
| | Product manufacturers, if a high-risk AI system is used or sold with the product | Applies to certain products listed in the Annex to the AI Regulation. The product manufacturer has the same obligations as a provider |
| | Importers of an AI system | Responsible for checking that the system conforms to the requirements of the AI Regulation. Notification obligations apply if the system presents certain risks. Must appoint an authorised representative in the EU to carry out certain compliance obligations |
| | Distributors | Responsible for checking that the provider or importer has complied with the AI Regulation. Notification obligations apply if the system presents certain risks. Obligation to take corrective action if the system does not conform |
| | Users | Must use the system in accordance with its instructions for use. If the user controls the input data, that data must be relevant to the intended purpose. Must monitor the system for risks, notify accordingly, and stop using the system if a risk occurs. Must keep logs if these are under the user’s control. Must carry out a data protection impact assessment |
| Penalties | Fine of up to EUR 30m or 6% of worldwide annual turnover (whichever is higher) | For breach of the data and data governance obligations |
| | Fine of up to EUR 20m or 4% of worldwide annual turnover (whichever is higher) | For breach of any other obligations under the AI Regulation |
| | Fine of up to EUR 10m or 2% of worldwide annual turnover (whichever is higher) | For supply of incorrect, incomplete or misleading information to authorities |
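As flagged in the record-keeping row above, the following is a purely illustrative sketch of the kind of automatic logging capability the draft AI Regulation contemplates for traceability. It is not a compliance recipe, and the model, function and field names are our own invention: each call to a stand-in scoring model is recorded with a timestamp, its input and its output.

```python
# Illustrative only: wrap a (stand-in) high-risk model so that every call
# is written to an audit log with a timestamp, the input and the output,
# giving a traceable record of the system's operation.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_system_audit.log", level=logging.INFO)

def score_applicant(applicant: dict) -> float:
    """Stand-in for a real model, e.g. creditworthiness scoring."""
    return 0.5  # placeholder output

def score_with_audit_trail(applicant: dict) -> float:
    """Run the model and keep a timestamped log entry for traceability."""
    output = score_applicant(applicant)
    logging.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input": applicant,
        "output": output,
    }))
    return output

score_with_audit_trail({"applicant_id": "12345", "declared_income": 42000})
```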
| Limited risk AI | |
| --- | --- |
| Which AI systems are affected? | AI systems intended to interact with natural persons, emotion recognition systems, biometric categorisation systems, and systems producing ‘deep fakes’ (with exceptions for systems used in policing/criminal justice) |
| What are the obligations? | Transparency obligations. |
| Penalties | Fine of up to EUR 20m or 4% of worldwide annual turnover (whichever is higher). |
| Minimal risk AI | |
| --- | --- |
| Which AI systems are affected? | All other AI systems |
| What are the obligations? | None. |
Implications for business
The AI Regulation is still in draft form and has a long way to go through the EU’s legislative process. Once adopted, there will be a grace period of two years before it applies, which means the AI Regulation is unlikely to bite until at least 2024.
That said, given the likely cost to business of compliance with the new regime, it would be prudent for businesses to take the AI Regulation into account as early as possible, while acknowledging that some provisions may change as the draft AI Regulation evolves.
Any business employing a high-risk AI system in its products or services should pay particular attention to the provisions on data and data governance, as breach of these requirements carries the highest possible penalty and is accordingly likely to be high on the regulator’s list of compliance checks.
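To make the ‘higher of’ mechanics of the fine tiers concrete, the sketch below computes the maximum possible fine for each tier quoted above. The tier labels and function name are our own shorthand; the caps are maximums (‘up to’), not automatic amounts, and any actual fine would depend on the circumstances.

```python
# The draft fine tiers quoted above: (fixed cap in EUR, share of worldwide
# annual turnover). The applicable maximum is whichever limb is higher.
FINE_TIERS = {
    "prohibited_ai_or_data_governance": (30_000_000, 0.06),       # EUR 30m / 6%
    "other_obligations": (20_000_000, 0.04),                      # EUR 20m / 4%
    "misleading_information_to_authorities": (10_000_000, 0.02),  # EUR 10m / 2%
}

def maximum_fine(tier: str, worldwide_annual_turnover_eur: float) -> float:
    """Higher of the fixed cap and the percentage of worldwide turnover."""
    fixed_cap, turnover_share = FINE_TIERS[tier]
    return max(fixed_cap, turnover_share * worldwide_annual_turnover_eur)

# For a business with EUR 2bn worldwide turnover, the turnover limb
# governs the top tier: max(EUR 30m, 6% of EUR 2bn) = EUR 120m.
print(maximum_fine("prohibited_ai_or_data_governance", 2_000_000_000))
```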
[This note is intended as a high level introduction to the AI Regulation. We will be producing a series of notes about the draft AI Regulation, focussing on specific areas or developments of the AI Regulation over the coming months.]