EU AI Act prohibited practices (Article 5)

Eight categories of AI practice are outright banned across the EU under Article 5 of the AI Act. The prohibitions have applied since 2 February 2025, and breaches sit in the highest fine tier (EUR 35 million or 7% of worldwide annual turnover, whichever is higher).

Last updated April 27, 2026 · Every fact traceable to a public source

EU AI Act Article 5 prohibits eight specific AI practices across the EU, regardless of risk classification or sector. The prohibitions began applying on 2 February 2025, making them among the first EU AI Act obligations to take effect (alongside the Article 4 AI-literacy duty), and breaches sit in the highest fine tier: EUR 35 million or 7% of worldwide annual turnover, whichever is higher. The European Commission published guidelines on Article 5 on 4 February 2025 (C(2025) 884 final) to interpret its scope and exceptions.

What are the eight prohibited categories?

Article 5(1)(a)–(h) bans:

(a) subliminal, manipulative, or deceptive techniques that distort behaviour and cause, or are reasonably likely to cause, significant harm;
(b) exploitation of vulnerabilities due to age, disability, or socio-economic situation;
(c) social scoring, by public or private actors, leading to detrimental or unfavourable treatment;
(d) predicting the risk that a person will commit a criminal offence based solely on profiling or assessment of personality traits;
(e) untargeted scraping of facial images from the internet or CCTV footage to build or expand facial-recognition databases;
(f) emotion inference in workplaces and educational institutions (with narrow medical and safety exceptions);
(g) biometric categorisation to infer race, political opinions, trade-union membership, religious or philosophical beliefs, sex life, or sexual orientation;
(h) real-time remote biometric identification in publicly accessible spaces for law-enforcement purposes (with narrow exceptions).

How is "prohibited" different from "high-risk"?

High-risk systems (Article 6 + Annex III) are allowed but subject to extensive obligations — risk-management, data governance, human oversight, conformity assessment. Prohibited practices cannot be placed on the market, put into service, or used at all, regardless of safeguards. They are categorically banned.

When did the prohibitions start applying?

On 2 February 2025, six months after the AI Act’s entry into force on 1 August 2024. Article 5 was thus among the first obligations under the Act to apply, alongside the Article 4 AI-literacy duty. Article 99 sets the maximum fine for breaches of Article 5 at EUR 35 million or 7% of worldwide annual turnover, whichever is higher.
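The "whichever is higher" rule in Article 99 can be expressed as a one-line calculation. The sketch below is purely illustrative: the turnover figures are invented, and actual fines are set case by case by national authorities up to this ceiling, not computed by formula.

```python
def article5_max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Ceiling for an Article 5 breach under Article 99 of the AI Act:
    EUR 35 million or 7% of total worldwide annual turnover for the
    preceding financial year, whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# Hypothetical large undertaking: 7% of EUR 1bn = EUR 70M, above the floor
print(article5_max_fine_eur(1_000_000_000))  # 70000000.0
# Hypothetical smaller undertaking: 7% of EUR 100M = EUR 7M, so the
# EUR 35M floor applies instead
print(article5_max_fine_eur(100_000_000))    # 35000000.0
```

For undertakings with worldwide turnover below EUR 500 million, the fixed EUR 35 million amount is the binding ceiling; above that threshold, the 7% figure dominates.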

Are there meaningful exceptions?

Yes, but they are narrow. Real-time remote biometric identification by law enforcement is permitted only for specific objectives (targeted searches for victims, prevention of imminent threats to life or of a terrorist attack, identification of suspects of serious crimes listed in Annex II) and requires prior authorisation by a judicial or independent administrative authority. Emotion inference is exempted when used strictly for medical or safety reasons. The biometric-categorisation ban does not cover the labelling or filtering of lawfully acquired biometric datasets, including in the area of law enforcement.

What did the Commission guidelines clarify?

The Commission’s 4 February 2025 guidelines on Article 5 (C(2025) 884 final) explained the boundary between prohibited "manipulative" practices and lawful persuasion (the harm threshold), the meaning of "subliminal techniques", and the scope of the workplace and education emotion-inference ban. The guidelines are non-binding but are the primary interpretive resource for compliance teams pending case law.

Editorial independence

This FAQ is editorial. No vendor can pay to be highlighted or ranked in answers, and the written commentary on this page is payment-free. Featured slots in directory listings are always labeled where they appear. Read our methodology for details.