
The EU AI Act: What Your Company Needs to Know and Do

A practical guide for business operators (not lawyers) covering EU AI Act obligations, deadlines, and how to comply. Updated March 2026.


What is the EU AI Act?

The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive law regulating artificial intelligence. Adopted on May 21, 2024, and published in the Official Journal of the EU on July 12, 2024, it establishes harmonized rules for placing AI systems on the EU market and using them within the European Union.

The regulation takes a risk-based approach: the higher the risk an AI system poses, the stricter the requirements. It applies to anyone placing AI systems on the EU market or using AI whose output is intended for use in the EU — regardless of where the developer or deployer is based.

This means that a company outside the EU offering AI-powered tools to European customers is also subject to the regulation. The main exclusions are purely military/national security systems, scientific research and pre-market R&D prototypes, and purely personal, non-professional use.

Timeline: What's Already Enforceable and What's Coming

The AI Act doesn't apply all at once. It entered into force on August 1, 2024, but its obligations become applicable in phases, with different requirements enforceable from different dates.

• Aug 1, 2024: AI Act enters into force, twenty days after its publication in the Official Journal of the EU.

• Feb 2, 2025: Prohibited practices (Art. 5) and AI literacy (Art. 4). Already enforceable for over a year; these are your first two obligations.

• Aug 2, 2025: GPAI obligations apply and the AI Office becomes operational. Rules for general-purpose AI models; Member States designate their Market Surveillance Authorities.

• Aug 2, 2026: Full application of the high-risk regime. All deployer obligations (Art. 26), FRIA (Art. 27), transparency (Art. 50). This is the big deadline.

• Aug 2, 2027: High-risk AI in regulated products. AI embedded in medical devices, machinery, and automotive systems (Annex I products).

• Aug 2, 2030: Legacy public sector systems. AI systems already in use by public bodies must comply.

As of March 2026, Art. 4 and Art. 5 have been enforceable for over a year. In 5 months (August 2, 2026), ALL deployer obligations for high-risk systems become enforceable. Companies should treat August 2026 as the binding deadline.

Risk Classification: Four Levels

• Unacceptable: prohibited practices (Art. 5), banned entirely in the EU. Penalty: up to €35M or 7% of global turnover. Examples: social scoring, manipulative AI, emotion recognition at work.

• High: Annex III use cases and safety components; full compliance required. Penalty: up to €15M or 3% of turnover. Examples: HR screening, credit scoring, biometric identification, critical infrastructure.

• Limited: AI interacting with people; transparency obligations. Penalty: up to €15M or 3% of turnover (Art. 50 violations sit in the same penalty tier as Art. 26 violations). Examples: chatbots, content recommendation, ad targeting.

• Minimal: most AI systems; no mandatory obligations beyond AI literacy. Penalty: none (except Art. 4). Examples: spam filters, translation tools, content generation, analytics.

Risk classification attaches to the use case, not the tool itself. The same ChatGPT deployment can be minimal risk (writing product descriptions), limited risk (a customer chatbot), or high risk (screening CVs), depending on how your company uses it.
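To make the use-case point concrete, here is a minimal illustrative sketch of how an internal inventory might classify the same tool differently per use case. The category mapping and function names are our own simplification, not text from the regulation, and real classification requires checking Art. 5, Annex I, and Annex III against your concrete deployment.

```python
# Illustrative sketch only: a simplified mapping from use case to AI Act
# risk level. Not a legal tool; borderline cases need legal review.
RISK_BY_USE_CASE = {
    "writing product descriptions": "minimal",
    "customer service chatbot": "limited",             # Art. 50 transparency
    "screening CVs for hiring": "high",                # Annex III (employment)
    "emotion recognition of employees": "prohibited",  # Art. 5
}

def classify(tool: str, use_case: str) -> str:
    """Risk attaches to the use case, not the tool."""
    level = RISK_BY_USE_CASE.get(use_case, "unclassified: review manually")
    return f"{tool} used for '{use_case}' -> {level} risk"

# The same tool lands in three different tiers:
for uc in ("writing product descriptions",
           "customer service chatbot",
           "screening CVs for hiring"):
    print(classify("ChatGPT", uc))
```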

Prohibited Practices (Art. 5) — Already Enforceable

Article 5 lists eight categories of AI systems that are entirely banned in the EU. These have been enforceable since February 2, 2025, and carry the highest tier of penalties (up to €35M or 7% of global turnover).

  1. Manipulative and deceptive AI: systems using subliminal, manipulative, or deceptive techniques to distort behavior, causing or likely to cause significant harm.

  2. Exploiting vulnerabilities: systems deliberately exploiting weaknesses due to age, disability, or socio-economic situation to distort behavior.

  3. Social scoring: systems evaluating or classifying people based on social behavior or personal characteristics, leading to detrimental treatment in unrelated contexts.

  4. Crime prediction from profiling: systems assessing the risk that a person will commit a crime based solely on profiling or personality traits.

  5. Untargeted facial scraping: creating facial recognition databases by scraping images from the internet or CCTV in an untargeted manner.

  6. Emotion recognition at work or school: systems inferring emotions in workplace and educational settings (except for medical or safety reasons).

  7. Biometric categorization by sensitive traits: systems categorizing people based on biometric data to deduce race, political opinions, union membership, religion, or sexual orientation.

  8. Real-time remote biometric identification: in publicly accessible spaces by law enforcement (with narrow exceptions for missing persons, imminent threats, and serious crime suspects).

For e-commerce companies: personalized ads based on user preferences are NOT inherently manipulative. However, AI scoring customers for return risk that leads to detrimental treatment in unrelated contexts could potentially fall under social scoring.

“Am I a Deployer?” — When the AI Act Applies to Your Company

Under the AI Act, a deployer is any natural or legal person that uses an AI system under its authority — except where the AI is used in the course of a personal, non-professional activity. If your company uses AI tools in its business operations, you are almost certainly a deployer.

This includes using third-party AI services: if your HR team uses an AI screening tool, if your finance team uses AI-powered credit assessment, or if your customer service runs an AI chatbot — the company is a deployer for each of those systems.

The distinction matters because deployers have their own set of obligations under Art. 26, separate from provider (developer) obligations. You cannot simply rely on your vendor being compliant — you have independent legal responsibilities.

You become a “provider” (and inherit provider duties) if you put your own brand on a high-risk AI system, or if you substantially modify a system in a way that affects its compliance or changes its intended purpose.

Your Obligations as a Deployer — Complete List

Already enforceable (since February 2, 2025)

Art. 4: AI Literacy (applies to all AI systems)

Ensure that staff involved in the operation and use of AI systems have a sufficient level of AI literacy. This applies to every company using any AI system, regardless of risk level. In practice, this means providing AI literacy training appropriate to the context, technical knowledge, and role of each person.

Art. 5: Prohibited Practices Screening (applies to all AI systems)

Screen all AI systems in use to confirm none fall under the eight prohibited categories. Document the screening process and results. This applies to every company, regardless of whether you use high-risk systems.
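Because the screening must be documented, a lightweight record per system is often enough. The following dataclass sketch shows one possible shape; the field names and structure are our own suggestion, not prescribed by the Act.

```python
# Minimal sketch of an Art. 5 screening record. The Act requires you to be
# able to show that screening happened; it does not prescribe this format.
from dataclasses import dataclass, field
from datetime import date

ART5_CATEGORIES = [
    "manipulative or deceptive techniques",
    "exploiting vulnerabilities",
    "social scoring",
    "crime prediction from profiling",
    "untargeted facial scraping",
    "emotion recognition at work/school",
    "biometric categorization by sensitive traits",
    "real-time remote biometric identification",
]

@dataclass
class Art5Screening:
    system_name: str
    reviewed_by: str
    reviewed_on: date
    # One reasoned conclusion per prohibited category, e.g. "not applicable:
    # the chatbot does not infer emotions", rather than a bare yes/no.
    findings: dict = field(default_factory=dict)

    def is_complete(self) -> bool:
        return all(cat in self.findings for cat in ART5_CATEGORIES)
```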

Enforceable from August 2, 2026 (for high-risk systems)

Art. 26(1): Use in accordance with instructions (high-risk)

Use high-risk AI systems in accordance with the provider's instructions of use. Ensure the system is used only for its intended purpose and within its technical boundaries.

Art. 26(2): Human oversight (high-risk)

Assign human oversight to natural persons who have the necessary competence, training, authority, and resources. The overseer must be able to understand the system's capabilities and limitations.

Art. 26(11) and Art. 50: Transparency obligations (high-risk)

Inform individuals that they are subject to a decision made by a high-risk AI system. Provide clear disclosures when AI-generated or manipulated content is presented.

Art. 26(6): Record-keeping (logging) (high-risk)

Keep logs automatically generated by the high-risk AI system, to the extent they are under your control. Retain for a minimum period appropriate to the intended purpose — at least 6 months unless specified otherwise by law.
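As a sketch of how the six-month floor might translate into a retention rule, consider the check below. The function name and cutoff logic are illustrative assumptions, and sector-specific law may require longer retention, which is why the minimum is a parameter rather than a constant.

```python
# Illustrative retention check for logs generated by a high-risk AI system.
# Art. 26(6) sets a floor of at least six months; other Union or national
# law may require more.
from datetime import datetime, timedelta, timezone

SIX_MONTHS = timedelta(days=183)  # conservative approximation of 6 months

def may_delete(log_created_at: datetime,
               retention_minimum: timedelta = SIX_MONTHS) -> bool:
    """True only once the required retention window has passed."""
    return datetime.now(timezone.utc) - log_created_at >= retention_minimum

# A log entry from a year ago is past the floor; one from last week is not.
print(may_delete(datetime.now(timezone.utc) - timedelta(days=365)))  # True
print(may_delete(datetime.now(timezone.utc) - timedelta(days=7)))    # False
```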

Art. 26(12): Cooperation with authorities (high-risk)

Cooperate with relevant national competent authorities and provide them with information and access necessary for their supervisory activities.

Art. 27: Fundamental Rights Impact Assessment (FRIA) (high-risk)

Perform a FRIA before putting a high-risk AI system into use — if you are a body governed by public law, a private entity providing public services, or using AI for credit scoring or life/health insurance risk assessment.

Art. 26(4): Input data relevance (high-risk)

Ensure input data is relevant and sufficiently representative for the intended purpose of the high-risk AI system, to the extent you exercise control over the input data.

Art. 26(5): Monitoring & incident reporting (high-risk)

Monitor the operation of high-risk AI systems and report serious incidents or malfunctions to the provider and relevant Market Surveillance Authority.

Art. 73: Serious incident reporting (high-risk)

Report serious incidents to the Market Surveillance Authority of the Member State where the incident occurred. A serious incident is one that results in death, serious harm to health, serious disruption of critical infrastructure, or serious breach of fundamental rights.

FRIA: Who Needs One and Who Doesn't

A Fundamental Rights Impact Assessment (FRIA) is required under Art. 27 before deploying a high-risk AI system — but only for specific categories of deployers. The FRIA must assess the impact on fundamental rights, describe mitigation measures, and be notified to the relevant Market Surveillance Authority.

• Public bodies: all bodies governed by public law deploying high-risk AI systems.

• Private entities providing public services: private companies delivering services in the public interest (e.g., education, healthcare, utilities).

• Credit scoring companies: any deployer using high-risk AI to evaluate the creditworthiness of natural persons.

• Life and health insurance risk: deployers using high-risk AI for risk assessment and pricing in life and health insurance.

If your company is a private business that does not provide public services, and does not use AI for credit scoring or life/health insurance risk assessment, you are not required to perform a FRIA under the current text of the regulation. However, your other deployer obligations under Art. 26 still apply in full.

The European Commission has the power to expand the categories of deployers required to perform a FRIA through delegated acts. Monitor regulatory developments — this list may grow over time.

Transparency Notices: When and How

The AI Act requires transparency at multiple levels. Deployers must ensure that individuals interacting with or affected by AI systems are appropriately informed. Art. 50, together with Art. 26(11) for decision notices, sets out three main transparency obligations:

AI interaction notices

When a person interacts directly with an AI system (e.g., a chatbot), they must be informed that they are interacting with AI — unless this is obvious from the circumstances. The notice must be clear, timely, and provided before or at the start of the interaction.

AI decision notices

Persons subject to decisions made or materially assisted by high-risk AI systems must be informed of that fact. This includes HR decisions (hiring, promotion), credit decisions, insurance assessments, and any other high-risk use case under Annex III.

Synthetic content notices

AI-generated or manipulated images, audio, video, or text that could be mistaken for authentic content must be labelled as artificially generated or manipulated. This applies to deepfakes, AI-generated articles, and synthetic media distributed to the public.
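As a sketch of what the interaction and synthetic-content notices might look like in practice, consider the hypothetical helpers below. The function names and wording are ours; the Act requires the disclosure itself, not any particular phrasing or mechanism.

```python
# Illustrative transparency helpers. The AI Act mandates the disclosure,
# not this wording or implementation.
def chatbot_reply(answer: str, first_message: bool) -> str:
    """Disclose the AI interaction before or at the start (Art. 50)."""
    notice = "You are chatting with an AI assistant.\n\n"
    return (notice + answer) if first_message else answer

def label_synthetic(caption: str) -> str:
    """Mark AI-generated or manipulated content as such (Art. 50)."""
    return f"{caption} [AI-generated content]"

print(chatbot_reply("Your order ships tomorrow.", first_message=True))
print(label_synthetic("Product photo"))
```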

Human Oversight: What It Actually Means

Art. 26(2) requires deployers to assign human oversight of high-risk AI systems to natural persons who have the necessary competence, training, and authority. This is not a checkbox exercise — it requires genuine organizational commitment.

  • Understand the capabilities and limitations of the AI system and be able to properly monitor its operation
  • Remain aware of the possible tendency of automatically relying on the output (automation bias)
  • Be able to correctly interpret the AI system's output, taking into account the specific characteristics of the system and the available interpretation tools
  • Be able to decide not to use the AI system or to disregard, override, or reverse the output in any particular situation
  • Be able to intervene in the operation of the system or interrupt it through a 'stop' button or similar procedure

This means a real person, with real authority, who understands the tool. Not just a name in a spreadsheet.
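As an engineering sketch of the last two bullets, a thin wrapper can give the designated overseer a real override and stop control. Every name here is hypothetical, and a production system would also need audit logging of each intervention.

```python
# Minimal sketch of an oversight wrapper: the named overseer can override
# an individual output or stop the system entirely.
class OverseenSystem:
    def __init__(self, model, overseer: str):
        self.model = model        # any object exposing a predict() method
        self.overseer = overseer  # a named natural person, per Art. 26(2)
        self.stopped = False

    def predict(self, inputs):
        if self.stopped:
            raise RuntimeError(f"System halted by overseer {self.overseer}")
        return self.model.predict(inputs)

    def override(self, inputs, corrected_output):
        # The overseer disregards the model's output for this case.
        return corrected_output

    def stop(self):
        # The 'stop button': interrupt operation entirely.
        self.stopped = True
```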

Market Surveillance Authorities: Who Will Enforce This

Each EU Member State must designate one or more national competent authorities to act as Market Surveillance Authorities (MSAs) for the AI Act. These bodies will be responsible for monitoring compliance, conducting investigations, and imposing penalties.

As of March 2026, most Member States have designated or are in the process of designating their MSAs. The landscape is varied — some countries have created dedicated AI agencies, while others have assigned the role to existing regulators.

• 🇩🇪 Germany: Bundesnetzagentur + BaFin (designated)
• 🇫🇷 France: DGCCRF + CNIL (proposed)
• 🇮🇹 Italy: ACN (designated)
• 🇪🇸 Spain: AESIA (dedicated agency)
• 🇳🇱 Netherlands: Autoriteit Persoonsgegevens (designated)
• 🇫🇮 Finland: Traficom (first enforcement in December 2025)
• 🇵🇱 Poland: not yet designated (in progress)
• 🇮🇪 Ireland: distributed model across 15 competent bodies

Penalties and Fines — Realistic Context

• Tier 1 (Art. 5 violations): up to €35M or 7% of global turnover. Prohibited practices, the highest penalty tier for the most serious violations.

• Tier 2 (Art. 26 violations): up to €15M or 3% of turnover. Deployer obligations, i.e. non-compliance with high-risk system requirements.

• Tier 3 (incorrect information): up to €7.5M or 1% of turnover. Supplying incorrect, incomplete, or misleading information to authorities.

For small and mid-size companies, penalties are capped at the lower of the two thresholds (percentage or absolute amount). Deployer penalties specifically fall under Tier 2 — up to €15M or 3% of turnover for violations of Art. 26. Authorities can also order withdrawal of AI systems from the market.

Headline figures like €35M are designed for the largest violations by the largest companies. For a 200-person company, the practical risk is proportionally smaller — but the legal obligation is the same regardless of company size.

Relationship with GDPR

The AI Act and GDPR are complementary regulations, not competing ones. The AI Act explicitly states that it does not affect the application of GDPR. In practice, many AI systems that process personal data will need to comply with both regulations simultaneously. Understanding the overlap is essential for efficient compliance.

• Focus: the DPIA covers personal data protection and privacy; the FRIA covers fundamental rights more broadly, including non-discrimination and accessibility.

• Oversight body: the national Data Protection Authority for the DPIA; the Market Surveillance Authority for the FRIA.

• Penalty: up to €20M or 4% of turnover under GDPR; up to €15M or 3% of turnover under the AI Act.

• Who must comply: a DPIA is required of any controller whose processing is likely to create a high risk to individuals; a FRIA only of specific deployer categories (public bodies, public services, credit/insurance).

• Can they be combined? Yes: Art. 27(4) allows a single integrated assessment.

Art. 86 of the AI Act introduces a right to explanation for individuals affected by decisions made with high-risk AI systems. This complements (and in some cases extends) the rights already available under GDPR Art. 22 regarding automated decision-making. Companies should design their compliance programs to address both regulations in an integrated manner.

How to Start: Practical First Steps

1. Inventory all AI systems

Map every AI tool, service, and system in use across your organization. Include third-party SaaS tools with AI features, internally built models, and embedded AI components. You cannot comply with rules you don't know apply to you. (A minimal inventory-record sketch follows these steps.)

2. Classify each system by risk level

For each AI system in your inventory, determine whether it falls under prohibited, high-risk, limited-risk, or minimal-risk. Focus on your specific use case, not the tool's general capabilities.

3. Screen for prohibited practices immediately

This is already enforceable. Review your entire inventory against the eight prohibited categories in Art. 5. Document your screening and conclusions. If anything is borderline, seek legal advice.

4. Implement AI Literacy training

Also already enforceable. Provide an AI Literacy training course to all staff who operate, supervise, or make decisions based on AI systems. The training must be appropriate to their role and the AI systems they interact with.

5. Prepare for high-risk obligations

If you deploy any high-risk AI systems (Annex III use cases), begin preparing for the full set of deployer obligations under Art. 26 before August 2, 2026. This includes human oversight assignments, logging infrastructure, and monitoring processes.

6. Engage your AI providers

Request compliance documentation from your AI vendors: conformity declarations, instructions of use, technical specifications, and information about the data used for training. You will need this to fulfill your own obligations.

7. Assign internal ownership

Designate a person or team responsible for AI Act compliance. This could be your existing DPO, a dedicated AI governance officer, or a compliance team. Clear ownership prevents obligations from falling through the cracks.
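As a starting point for steps 1 and 2, the sketch below shows one possible shape for an inventory entry. Every field name is our own suggestion, not a requirement from the Act.

```python
# One possible shape for an AI inventory entry (step 1) with a risk
# classification field (step 2).
from dataclasses import dataclass

@dataclass
class AISystemEntry:
    name: str              # e.g. "Resume screener"
    vendor: str            # provider of the system
    business_owner: str    # internal owner accountable for compliance
    use_case: str          # the concrete use, which drives classification
    risk_level: str        # "prohibited" | "high" | "limited" | "minimal"
    annex_iii_match: str   # which Annex III category applies, if any

inventory = [
    AISystemEntry("Resume screener", "Acme HR AI", "Head of HR",
                  "screening CVs for hiring", "high",
                  "Annex III, point 4 (employment)"),
    AISystemEntry("Support chatbot", "ChatBot Inc.", "Head of Support",
                  "customer service chatbot", "limited", "none"),
]
```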

Disclaimer: This guide is provided for informational purposes only and does not constitute legal advice. While we strive to keep the information accurate and up to date, the EU AI Act is a complex regulation subject to ongoing interpretation by national authorities and courts. Companies should consult qualified legal counsel for advice specific to their situation. AI and Shine provides compliance tools and guidance, but is not a law firm and does not provide legal services.

Ready to start your compliance journey?

Book a demo and see how AI and Shine handles every step — from inventory to compliance proof.