This is part two of our examination of the European Union’s new artificial intelligence law (the “EU AI Act”). In part one, we introduced the scope of the EU AI Act and discussed which types of AI systems are banned outright. In this article, we look at which types of AI systems the EU considers high-risk and what the resulting compliance requirements are.
What Systems Are High-Risk?
The EU AI Act identifies a number of categories of AI systems that are considered high-risk, including AI systems intended for use:
- as safety components of a regulated product;
- as remote biometric identification systems (excluding biometric authentication systems used to confirm that a specific person is who they claim to be);
- as components for critical infrastructure;
- in job recruitment, targeted job advertisements, and evaluating job candidates;
- in making work-related decisions, such as promotions and terminations, allocation of tasks, and evaluating performance;
- in evaluating creditworthiness (excluding fraud detection);
- for risk assessment and pricing of life and health insurance; and
- for profiling of natural persons.
Exceptions
Generally, an AI system that falls within one of the categories above is not considered high-risk if it does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons. AI systems that perform profiling of natural persons, however, are always considered high-risk and cannot rely on these exceptions.
To qualify for an exception, an AI system must meet at least one of the following conditions:
- it performs a narrow procedural task;
- it improves the result of a previously completed human activity;
- it (i) detects decision-making patterns or deviations from prior decision-making patterns, and (ii) is not meant to replace or influence the previously completed human assessment without proper human review; or
- it performs only a preparatory task for an assessment relevant to a high-risk use.
If a provider determines that an exception applies to an AI system that would otherwise be high-risk, (i) the provider must document its assessment before the AI system is placed on the market or put into service, and (ii) the provider must comply with the registration requirement (see below).
Requirements for High-Risk AI Systems
Providers of high-risk AI systems must comply with a detailed set of requirements, including:
- maintaining data quality and data governance standards;
- providing detailed technical documentation;
- meeting logging and record-keeping requirements;
- providing for human oversight;
- performing a fundamental rights impact assessment;
- conducting a conformity assessment to demonstrate compliance with the Act’s applicable requirements (see below);
- meeting an appropriate level of accuracy, robustness, and cybersecurity; and
- establishing a post-market monitoring system.
Conformity Assessments
A conformity assessment is used to demonstrate compliance with the applicable requirements of the EU AI Act. Depending on the functions performed by a high-risk system, its provider may have the option of performing either (i) a conformity assessment based on internal controls, which does not require the involvement of a regulatory authority, or (ii) a conformity assessment based on the provider’s quality management system and technical documentation, which does require such involvement.
Registration
High-risk AI systems must be registered with EU authorities before being placed on the market or put into service. Even if a provider has determined, under the exceptions above, that an AI system is not high-risk, the system must still be registered with EU authorities before being placed on the market or put into service.
In our next article, we will look at the requirements for general-purpose AI models.