The world's first AI regulation is already in force. This is what you need to know and do before the fines arrive
EU AI Act fines can reach 7% of global annual revenue, 1.75 times the 4% maximum under the GDPR.
The EU AI Act is already in force and August 2026 is the real deadline for most companies. Any AI system that affects employment, credit, insurance, or education is probably high-risk. The fines exceed those of the GDPR: up to 7% of global revenue. You have four months to take inventory, classify risks, and document everything. This article explains exactly what to do.
- Feb 2025: Prohibited practices in force (Art. 5)
- Aug 2025: GPAI obligations (GPT-4, Claude, Gemini…)
- Aug 2026: High risk, the real deadline for companies
- Aug 2027: Products regulated by other directives
- 35%: European companies using AI in hiring (high risk)
- 7%: Maximum fine on global revenue
- €35M: Maximum absolute fine for prohibited practices
- Aug 2026: Deadline for high-risk systems
The GDPR arrived in 2018 and companies took years to take it seriously. The first fines are remembered as a belated wake-up call. The EU AI Act won’t be any different — except for one detail: the maximum fines nearly double those of the GDPR. Where the GDPR reached 4% of global annual revenue, the AI Act reaches 7%. And the clock is already ticking.
August 2026 is the date when high-risk AI systems become fully enforceable across all 27 European Union countries. Four months remain. This article explains what the regulation is, what affects you, and what you need to do before that date arrives.
Regulation (EU) 2024/1689, known as the EU AI Act, is the world’s first artificial intelligence regulation with the force of law. It is not a directive: it is a regulation, which in European law means it applies directly in all 27 member states without the need for national transposition. From the moment it enters into force, it is law in Spain, France, Germany, and the rest of Europe, with no intermediate steps.
It was published in the Official Journal of the European Union on June 12, 2024 and entered into force on August 1, 2024. Its architecture is phased: different blocks of the regulation activate on different dates between 2025 and 2027.
The organizing principle is the risk-based approach: the more an AI system can harm people, the stricter the obligations. Not all AI is regulated equally. A spam filter and a personnel selection system don’t receive the same treatment.
The AI Act divides AI systems into four levels. Knowing which category each tool your company uses falls into is the mandatory first step.
| Category | What it includes | Regime |
|---|---|---|
| Unacceptable risk | State social scoring, subliminal manipulation, exploitation of vulnerabilities, real-time biometrics in public spaces | Prohibited — In force since Feb. 2025 |
| High risk | Critical infrastructure, employment and HR, credit, insurance, education, justice, migration | Regulated — Enforceable from Aug. 2026 |
| Limited risk | Chatbots that appear human, deepfakes | Mandatory transparency |
| Minimal risk | Spam, recommenders, AI in video games | Free — No specific obligations |
These practices (state social scoring, subliminal manipulation, exploitation of vulnerabilities, and real-time biometric identification in public spaces) have been completely banned in the EU since February 2, 2025. If your company or any provider you use employs them, you are violating the regulation right now.
Annex III of the regulation lists the sectors where AI systems are automatically classified as high risk: critical infrastructure, employment and HR, credit, insurance, education, justice, and migration.
Chatbots that interact with people must declare they are AI when there is a reasonable possibility the user believes they are speaking with a human. The same applies to deepfakes: content generated or manipulated by AI that represents real people must be labeled as such. This obligation is already in force.
Spam filters, recommendation engines with no impact on rights, AI in video games, writing assistants with no consequences on decisions: no specific obligations under the AI Act. They can continue operating as before.
If one of your systems falls into the high-risk category, the regulation imposes nine specific obligations (Arts. 9-15), covering risk management, data governance, technical documentation, record-keeping and logs, transparency, human oversight, and accuracy, robustness, and cybersecurity. They apply before deploying the system and throughout its entire operational life.
Meeting these nine obligations is not a one-week project. For most companies, it is a three-to-six-month process if starting from scratch.
The most common mistake isn’t using prohibited AI. It’s not knowing that the system you’re using is high risk. These are the most underestimated cases:
CV screening and AI-powered candidate selection — If you use a tool that automatically filters, scores, or ranks candidates before a human sees them, it is high risk under Annex III. LinkedIn Recruiter with AI filters, Workday with candidate scoring, any ATS with automated screening. 35% of European companies already use AI at some point in their hiring process.
Credit scoring — Any model that assigns a solvency score or probability of default falls under Annex III’s essential services category. It is high risk regardless of whether you developed it yourself or buy it from a provider as SaaS.
Insurance chatbots that decide coverage — If the chatbot doesn’t just inform but determines whether coverage applies or what premium corresponds, it is high risk. The key is whether the system influences a decision that materially affects a person.
AI for evaluating students — Plagiarism detection systems that automate sanctions, e-learning platforms that adjust trajectories based on automatic evaluation, tools that generate grades without human review: high risk under the education block.
Predictive maintenance systems in critical infrastructure — If the AI predicts failures in water, gas, electricity, or transport networks and that prediction influences operational decisions, high risk under the infrastructure block.
The right question isn’t “Do we use high-risk AI?” but “What decisions that affect people does AI make or influence in our company?”
General-purpose AI models — what the industry calls foundation models — have their own regime under the AI Act. GPT-4, Claude, Gemini, Llama, Mistral: all are subject to specific obligations that have been active since August 2025.
For all GPAI models, regardless of size, providers must maintain technical documentation, provide information to downstream developers who build on the model, put in place a policy to comply with EU copyright law, and publish a summary of the content used for training.
For models with systemic impact, defined as those trained with more than 10²⁵ FLOPs (currently GPT-4 and equivalents), additional requirements include model evaluations with adversarial testing, assessment and mitigation of systemic risks, serious-incident reporting to the AI Office, and adequate cybersecurity protections.
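The 10²⁵ FLOPs threshold can be checked against a rough compute estimate. A common rule of thumb puts training compute at roughly 6 × parameters × training tokens; the helper names below and the example model sizes are illustrative assumptions, not figures from the regulation:

```python
# Sketch: estimate whether a model crosses the AI Act's systemic-risk
# presumption threshold. The 6*N*D formula is a widely used rule of
# thumb for dense transformer training compute, not an official metric.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # threshold named in the AI Act

def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate via the 6 * N * D heuristic."""
    return 6.0 * params * tokens

def systemic_risk_presumed(params: float, tokens: float) -> bool:
    """True if the estimate meets or exceeds the 10^25 FLOPs threshold."""
    return estimated_training_flops(params, tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical examples (parameter and token counts are made up):
print(systemic_risk_presumed(1e12, 1e13))  # very large frontier model
print(systemic_risk_presumed(7e9, 2e12))   # mid-sized open model
```

Actual classification depends on the AI Office's assessment, not just raw compute, but the heuristic is useful for a first triage.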
If your company builds products on top of APIs from these models, GPAI compliance is the model provider’s responsibility — but you remain liable for how you use that model in systems that may be high risk.
The AI Act dates are not symbolic. Each one activates a different regime with real legal consequences.
February 2, 2025 activated the absolute prohibitions. If your company uses social scoring or real-time biometric recognition without legal justification, it has been violating the regulation for over a year.
August 2, 2025 activated the obligations for GPAI models. Foundation model providers are already under AI Office supervision.
August 2, 2026 is the date most companies ignore — and the most important one. From that day forward, all high-risk AI systems listed in Annex III must fully comply with Articles 6 through 49 of the regulation. National authorities will be able to initiate investigations and impose sanctions.
From today until August 2026, four months remain. For a company starting the process from scratch, it’s just enough time if action is taken now.
August 2, 2027 incorporates high-risk products regulated by other European directives — medical, aeronautical, and automotive sectors. That block has an extra year of margin.
The GDPR has generated more than 5 billion euros in fines since 2018. The AI Act doubles the maximum scale:
| Infringement | Maximum fine |
|---|---|
| Prohibited practices (Art. 5) | €35M or 7% of global revenue |
| Non-compliance with requirements (Arts. 6-49) | €15M or 3% of global revenue |
| Incorrect information to supervisors | €7.5M or 1% of global revenue |
Direct comparison: the GDPR tops out at 4% of global revenue. The AI Act reaches 7%, which is 1.75 times the GDPR ceiling in the worst case.
For SMEs, the regulation states that the lesser amount between the revenue percentage and the absolute figures will apply. But even the absolute figures — 35 million for a prohibited practice — are devastating for any mid-sized company.
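The two fine rules described above (the percentage-or-cap pair from the table, and the lesser-amount rule for SMEs) reduce to a one-line calculation. This sketch assumes the Art. 5 tier; the function name is ours:

```python
# Sketch of the maximum fine for prohibited practices (Art. 5), per the
# figures in this article: EUR 35M or 7% of global revenue. For SMEs the
# lesser of the two amounts applies; otherwise the higher one does.

def max_art5_fine(global_revenue_eur: float, is_sme: bool) -> float:
    pct_amount = 0.07 * global_revenue_eur
    cap = 35_000_000.0
    return min(pct_amount, cap) if is_sme else max(pct_amount, cap)

# A company with EUR 1B revenue faces the 7% amount (about EUR 70M);
# an SME with EUR 100M revenue faces the lesser amount (about EUR 7M).
print(max_art5_fine(1_000_000_000, is_sme=False))
print(max_art5_fine(100_000_000, is_sme=True))
```

Even the SME figure makes clear why treating classification conservatively is cheaper than getting it wrong.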
There is another important nuance: AI Act fines can accumulate with GDPR fines when an infringement also involves personal data. A personnel selection system that discriminates and violates privacy can receive sanctions under both regulations.
Four months are enough if the process starts this week. Here is the sequence:
Step 1 — Inventory of AI systems in use
Map all the tools that use AI in your company: from the HR ATS to the CRM with predictive scoring, analytics dashboards, customer service chatbots, and any SaaS that processes people’s data with AI. Not just the systems you developed internally — also the ones you buy as a service. The AI Act applies to those who deploy AI systems, not just those who develop them.
Step 2 — Risk classification per system
For each identified system, determine its category according to Annex III. The key question: does this system influence decisions that materially affect people — their employment, credit, access to services, training, rights? If the answer is yes, it is likely high risk. If there are doubts, the conservative interpretation is the correct one: treat the system as high risk.
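The Step 2 triage question can be encoded as a simple decision helper. The domain list mirrors Annex III as summarized in this article; it is an illustrative sketch, not a legal determination:

```python
# Sketch of the Step 2 classification question. Matching a domain string
# against a keyword set is a triage aid only; borderline cases need
# legal review, and the conservative default is "high risk".

ANNEX_III_DOMAINS = {
    "critical infrastructure", "employment", "hr", "credit",
    "insurance", "education", "justice", "migration",
}

def likely_high_risk(domain: str, affects_people_materially: bool) -> bool:
    """Conservative triage: Annex III domain OR material effect on people."""
    return domain.lower() in ANNEX_III_DOMAINS or affects_people_materially

print(likely_high_risk("employment", False))      # Annex III sector
print(likely_high_risk("spam filtering", False))  # minimal risk
print(likely_high_risk("spam filtering", True))   # conservative default
```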
Step 3 — Gap analysis vs. Art. 9-15 requirements
For each high-risk system, assess the current state against the nine obligations: does technical documentation exist? Are there activity logs? Is there operational human oversight or just formal oversight? Are training data documented and reviewable? The gap analysis determines the actual work that remains to be done.
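A gap analysis like the one in Step 3 is, in practice, a per-system checklist. This sketch uses the four questions from the paragraph above; the obligation labels are paraphrases, not the regulation's wording:

```python
# Sketch of a per-system gap analysis for Step 3. Statuses below are
# example data for a hypothetical system, not real assessment results.

gaps = {
    "technical documentation exists": True,
    "activity logs are kept": False,
    "human oversight is operational, not just formal": False,
    "training data is documented and reviewable": True,
}

pending = [item for item, done in gaps.items() if not done]
print(f"{len(pending)} of {len(gaps)} checked obligations still open:")
for item in pending:
    print(" -", item)
```

Running this per system turns "we should be compliant" into a concrete work queue, which is what Step 4 consumes.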
Step 4 — Implementation: documentation, logs, human oversight
This is the bulk of the work. It’s not just about filling out forms: real processes must be implemented. Human-in-the-loop oversight cannot be a checkbox in a workflow; it must be a review with real authority to override the AI’s decision. Logs must be configured. Technical documentation must be kept up to date. In many organizations, this requires changes to operational processes, not just systems.
Step 5 — Registration with the competent authority
In Spain, the designated national authority to supervise the AI Act is AESIA (Agencia Española de Supervisión de la Inteligencia Artificial). High-risk systems in regulated sectors must be registered in the European database before operating. The registration process requires complete technical documentation — which is why the previous steps must come first.
The difference between companies that will be compliant in August and those that won’t comes down to one thing: the former started their inventory in the first quarter of 2026. The latter are still postponing it.
The EU AI Act created a new European body: the AI Office, attached to the European Commission. It is not an independent agency like ENISA, but a unit within the Commission with direct supervisory powers over GPAI models.
Its main functions: supervising and enforcing the rules for GPAI model providers, drafting codes of practice, issuing interpretive guidelines, and coordinating with the national supervisory authorities of the member states.
The AI Office published its first interpretive guidelines in February 2025. For companies that use third-party models, these guidelines clarify where the provider’s responsibility ends and where the operator’s begins.
If you are a CISO or DPO: The AI Act overlaps with the GDPR for any AI system that processes personal data. Add the AI Act as a layer to the DPIA you already perform. If your organization has GDPR compliance maturity, the documentation and logging infrastructure already exists — it just needs to be adapted.
If you run a company: The legal risk from August 2026 falls on operators — the companies that deploy AI systems, not just those who develop them. Buying a SaaS with AI does not exempt you. Demand contractual clauses from your providers declaring whether their systems are high risk.
If you work in HR: Review your stack: ATS, video screening platforms, competency assessments, any tool that generates a candidate score. If an automated ranking exists before a human sees the profile, you have a high-risk system under Annex III that must meet the nine high-risk obligations (Arts. 9-15).
If you are a developer building AI systems: If you build for Annex III sectors, compliance must be designed from the start — technical documentation, logs, and human oversight are not last-minute add-ons. If you use third-party GPAI models, review their terms of service: all of them added AI Act clauses in 2025.
The EU AI Act is not a bureaucratic obstacle to innovation. It is the first serious attempt to establish global rules of the game for a technology that already affects millions of everyday decisions. Companies that treat it as a differentiation opportunity — “we comply with the AI Act” as a trust signal — will have an advantage over those that treat it as a compliance cost.
August 2026 is not the end of something. It is the beginning of the scenario where companies that didn’t do their homework start receiving the first investigations. Four months remain. The inventory can start today.
Related
660 billion in infrastructure, only 5% of companies with real ROI, and a history of expectations that always run faster than reality
The gap between AI investment and the real value it generates, and what companies can do to be in the 5% where it does work