This is part three of our examination of the European Union’s new artificial intelligence law (the “EU AI Act”). In part one, we introduced the scope of the EU AI Act and discussed what types of AI systems are outright banned. In part two, we discussed high-risk AI systems. In this article, we look at the requirements for general-purpose AI models.
General-Purpose AI Model
The EU AI Act defines a “general-purpose AI model” as an AI model, including one that is trained with a large amount of data using self-supervision at scale, that displays significant generality, is capable of competently performing a wide range of tasks, and can be integrated into a variety of downstream systems or applications. These are often referred to as “foundation models.” Examples include OpenAI’s GPT, Anthropic’s Claude, Meta’s Llama, and Stability AI’s Stable Diffusion.
The EU AI Act creates a sub-category for general-purpose AI models with systemic risk. “Systemic risk” is defined as having (i) a significant impact on the EU market due to a model’s reach or (ii) reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or society as a whole that can be propagated at scale. Examples of systemic risk include major accidents, disruptions of critical economic sectors, negative effects on democratic processes, and the dissemination of discriminatory information. “High-impact capabilities” are capabilities that match or exceed the capabilities of the most advanced general-purpose AI models.
A general-purpose AI model has systemic risk if it has been determined through the use of appropriate technical tools and methodologies to have high-impact capabilities or has been designated by an EU scientific panel as having high-impact capabilities. The provider of a general-purpose AI model must notify the EU, through a designated authority created by each member state, within two weeks of determining that its general-purpose AI model represents a systemic risk. The EU can also assign a scientific panel to evaluate general-purpose AI models that have not been designated as having systemic risk.
A general-purpose AI model is presumed to have high-impact capabilities and, therefore, to have systemic risk, when the cumulative amount of computation used for its training, measured in floating-point operations (“FLOPs”), is greater than 10^25. The cumulative amount of computation used for training includes computation across all activities and methods that enhance the capabilities of the model prior to deployment, such as pre-training, synthetic data generation, and fine-tuning. FLOPs count the individual floating-point calculations performed during training; the higher the count, the greater the compute behind the model. For a sense of scale, most of us would consider 10 trillion to be a very large number, yet 10 trillion is only 10^13 — one trillionth of the threshold.
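To make the threshold concrete, here is a minimal sketch of how a provider might estimate whether a planned training run would trigger the presumption. It uses the common rule-of-thumb approximation of roughly 6 × parameters × training tokens FLOPs for dense transformer training; that approximation, and the example model sizes, are assumptions for illustration and are not part of the Act.

```python
# The EU AI Act's presumptive systemic-risk threshold: 10^25 training FLOPs.
THRESHOLD_FLOPS = 1e25


def training_flops(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate using the common ~6 * N * D
    approximation for dense transformers (an assumption, not from the Act)."""
    return 6 * parameters * training_tokens


def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    """True if estimated training compute exceeds the Act's 10^25 FLOP threshold."""
    return training_flops(parameters, training_tokens) > THRESHOLD_FLOPS


# Hypothetical example: a 70-billion-parameter model trained on 2 trillion tokens.
flops = training_flops(70e9, 2e12)
print(f"{flops:.2e} FLOPs")  # 8.40e+23 FLOPs -- about an order of magnitude below 10^25
print(presumed_systemic_risk(70e9, 2e12))  # False
```

Note that the Act counts *cumulative* compute across pre-training, synthetic data generation, fine-tuning, and other capability-enhancing activities, so a real assessment would sum these stages rather than estimate a single run.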
The EU scientific panel is directed to take the following into account when determining whether a general-purpose AI model has systemic risk:
- the number of parameters of the model;
- the quality or size of the training data set;
- the amount of computation used for training the model;
- the input and output modalities of the model;
- the benchmarks and evaluations of capabilities of the model, including consideration of the number of tasks that can be performed by the model without additional training; adaptability to learn new, distinct tasks; its level of autonomy and scalability; and the tools it has access to;
- whether it has a high impact on the internal market due to its reach, which is presumed to be true when a general-purpose AI model has at least 10,000 registered business users in the EU; and
- the number of registered end-users, whether individuals or business users.
Obligations for Providers of General-Purpose AI Models
Providers of all general-purpose AI models, regardless of their systemic risk status, must:
- Maintain technical documentation that includes certain minimum components, such as the tasks the model is intended to perform, the model’s architecture and number of parameters, the types and formats of inputs and outputs, applicable acceptable use policies, and the license;
- Provide additional detailed documentation to providers of AI systems that will integrate the general-purpose AI model into their AI systems;
- Adopt a policy on compliance with EU copyright law; and
- Make publicly available a detailed summary of the content used for training the general-purpose AI model.
Obligations for Providers of General-Purpose AI Models with Systemic Risk
Providers of general-purpose AI models with systemic risk have additional obligations, including to:
- Use standardized protocols and tools to evaluate the model, including conducting and documenting adversarial testing intended to identify and mitigate systemic risks;
- Assess and mitigate possible EU-level systemic risks;
- Document and report to the EU Artificial Intelligence Office information about serious incidents and possible corrective measures; and
- Ensure adequate cybersecurity protection.
In the next article, we look at low-risk AI systems as well as enforcement issues.