EU AI Act Glossary
Key terms and definitions from the EU AI Act — explained in plain language for compliance teams, legal professionals, and business leaders.
AI System
A machine-based system designed to operate with varying levels of autonomy, which may adapt after deployment, and which infers from its inputs how to generate outputs (predictions, recommendations, decisions, content) that can influence physical or virtual environments.
AI Literacy
The skills, knowledge, and understanding that allow providers, deployers, and affected persons to make informed decisions about AI systems, including awareness of opportunities and risks.
AI Office
The EU body (within the European Commission) responsible for supervising and enforcing the AI Act's rules for general-purpose AI models; national market surveillance authorities enforce the Act for most other AI systems.
Algorithmic Impact Assessment
An evaluation of how an AI system affects individuals and communities, including rights, safety, and societal effects. Related to FRIA.
Biometric Categorisation
AI-based assignment of persons to categories based on their biometric data. Prohibited where it infers sensitive attributes (e.g., race, political opinions, sexual orientation); otherwise generally treated as high-risk.
CE Marking
A conformity marking indicating that a high-risk AI system meets EU AI Act requirements before it can be placed on the EU market.
Conformity Assessment
The process of verifying that a high-risk AI system meets all applicable requirements of the AI Act before being placed on the market or put into service.
Data Governance
The practices and policies ensuring that training, validation, and testing data sets for AI systems are relevant, sufficiently representative, and, to the best extent possible, free of errors and complete in view of the intended purpose.
Deep Fake
AI-generated or manipulated image, audio, or video content that resembles real persons, objects, places, or events and would falsely appear authentic. Deployers must disclose that such content has been artificially generated or manipulated.
Deployer
Any person or organization that uses an AI system under its authority, except where the system is used in the course of a personal, non-professional activity.
Emotion Recognition System
An AI system that identifies or infers emotions or intentions of natural persons on the basis of their biometric data. Prohibited in workplaces and educational institutions, except for medical or safety reasons.
EU AI Act (Regulation 2024/1689)
The European Union’s comprehensive legal framework for artificial intelligence, establishing harmonised rules for the development, placing on the market, and use of AI systems.
EU Database for High-Risk AI
A public EU database where high-risk AI systems must be registered before being placed on the market or put into service.
Foundation Model
See General-Purpose AI Model. The term "foundation model" was used in earlier drafts but replaced by GPAI in the final regulation.
FRIA (Fundamental Rights Impact Assessment)
A mandatory assessment for certain deployers of high-risk AI systems (public bodies, private entities providing public services, and deployers of credit-scoring or life and health insurance risk-assessment systems) evaluating risks to fundamental rights before deployment.
General-Purpose AI (GPAI) Model
An AI model trained on broad data at scale that can be adapted to a wide range of tasks. Subject to transparency obligations; systemic-risk models face additional requirements.
High-Risk AI System
An AI system that poses significant risks to health, safety, or fundamental rights. Listed in Annex III or used as a safety component in products covered by Annex I.
Human Oversight
Measures ensuring that humans can effectively oversee, intervene in, or override AI system decisions, especially for high-risk systems.
Notified Body
An independent organization designated by a Member State to carry out third-party conformity assessments for certain high-risk AI systems.
Placing on the Market
The first making available of an AI system on the EU market, whether for payment or free of charge.
Post-Market Monitoring
A systematic process for collecting and reviewing experience from AI systems after placement on the market, to ensure continued compliance and identify risks.
Provider
The entity that develops or commissions an AI system and places it on the market or puts it into service under its own name or trademark.
Putting into Service
The supply of an AI system for first use directly to the deployer or for own use, for its intended purpose.
Real-Time Remote Biometric Identification
AI-based identification of persons at a distance in real time using biometric data (e.g., facial recognition in public spaces). Prohibited for law-enforcement use in publicly accessible spaces, subject to narrow exceptions such as searching for victims of abduction or preventing imminent terrorist threats.
Regulatory Sandbox
A controlled environment set up by a national authority allowing businesses to develop, test, and validate innovative AI systems under regulatory supervision for a limited time.
Risk Classification
The AI Act’s four-tier system categorising AI by risk level: Unacceptable (banned), High (regulated), Limited (transparency), and Minimal (no specific obligations).
Serious Incident
An event involving an AI system that directly or indirectly leads to death or serious harm to a person's health, serious and irreversible disruption of critical infrastructure, serious harm to property or the environment, or a serious breach of fundamental rights obligations.
Systemic Risk
A risk specific to high-impact GPAI models, presumed when cumulative training compute exceeds 10²⁵ floating-point operations (FLOPs), that may have significant effects on the EU market or serious consequences for public health, safety, or fundamental rights.
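As an illustration only (not legal advice, and not a substitute for the Act's full criteria), the compute-based presumption can be expressed as a simple threshold check; the function name and interface here are assumptions for the sketch:

```python
# Presumption threshold for high-impact capabilities: 10^25 FLOPs
# of cumulative training compute (illustrative constant, per the
# glossary entry above; other criteria in the Act can also apply).
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if a GPAI model's cumulative training compute
    exceeds the threshold, triggering the systemic-risk presumption.
    Hypothetical helper for illustration only."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

print(presumed_systemic_risk(5e25))  # True: above threshold
print(presumed_systemic_risk(1e24))  # False: below threshold
```

Note that exceeding the threshold creates a presumption, not a final classification: the Commission can also designate a model as posing systemic risk on other grounds.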
Transparency Obligations
Requirements for AI systems that interact with persons, generate content, or perform emotion recognition/biometric categorisation to disclose their AI nature to users.