EU AI Act FRIA Deep Dive: Article 27 Compliance for Deployers (2026)
A complete deployer-side methodology for the EU AI Act Article 27 Fundamental Rights Impact Assessment: who must conduct one, the six mandatory elements, how it differs from a DPIA, stakeholder engagement, market-surveillance notification, and worked examples for credit scoring, life and health insurance pricing, and public-sector benefits decisions.
By AI Compliance Vendors Editorial · Published April 26, 2026 · Last verified April 26, 2026
TL;DR
Article 27 of Regulation (EU) 2024/1689 — the EU AI Act — requires specific deployers of high-risk AI systems to conduct a Fundamental Rights Impact Assessment (FRIA) before first use. The obligation applies from 2 August 2026 under Article 113.
Three groups are in scope: (1) public bodies deploying any Annex III high-risk AI system except those in point 2 (critical infrastructure); (2) private entities providing public services deploying those same systems; and (3) any deployer — public or private — using AI systems for creditworthiness evaluation (Annex III, point 5(b)) or life and health insurance risk assessment and pricing (Annex III, point 5(c)).
A FRIA is not a DPIA, not a conformity assessment, and not a NIST AI RMF impact assessment. It covers the full spectrum of fundamental rights in the EU Charter of Fundamental Rights, not only data protection. Non-compliance with deployer obligations under Article 26 — which encompasses the Article 27 framework — carries fines of up to €15 million or 3 percent of global annual turnover under Article 99(4).
For context, see also the EU AI Act compliance complete guide and the AI Impact Assessment template on this site.
What Article 27 Actually Requires
Article 27 of Regulation (EU) 2024/1689, as confirmed by the EU AI Act Explorer, is titled "Fundamental rights impact assessment for high-risk AI systems." Its five paragraphs impose a structured pre-deployment obligation, a trigger for updates, a notification duty, a coordination rule with the GDPR's DPIA, and a mandate for the EU AI Office to develop a template questionnaire.
The core obligation reads, in full:
"Prior to deploying a high-risk AI system referred to in Article 6(2), with the exception of high-risk AI systems intended to be used in the area listed in point 2 of Annex III, deployers that are bodies governed by public law, or are private entities providing public services, and deployers of high-risk AI systems referred to in points 5(b) and (c) of Annex III, shall perform an assessment of the impact on fundamental rights that the use of such system may produce."
Several points in this single sentence deserve close attention:
- Timing is absolute. The assessment must occur prior to deploying the system. A post-deployment FRIA does not satisfy the legal obligation under Article 27(1).
- The Article 6(2) reference anchors the FRIA to the Annex III high-risk list. AI systems that are high-risk only under Article 6(1) — products subject to existing EU harmonisation legislation — do not trigger the FRIA obligation; those systems fall under Article 6(1) obligations which apply from 2 August 2027 under Article 113(c).
- Point 2 of Annex III is excluded. Critical infrastructure AI — systems used as safety components in the management of critical digital infrastructure, road traffic, or the supply of water, gas, heating, or electricity — is explicitly carved out from the FRIA obligation, as confirmed in Annex III of the EU AI Act.
- The financial sector carve-in is explicit. Deployers of creditworthiness and insurance pricing AI must conduct a FRIA regardless of whether they are public bodies or private entities providing public services. The Annex III 5(b) and 5(c) deployers face the FRIA obligation by virtue of the type of AI system, not the type of organisation.
Paragraph 2 establishes that the obligation applies to the first use of the system. A deployer deploying the same or a materially similar system in a second context may rely on a previously conducted FRIA or on an impact assessment conducted by the provider — but only if the elements assessed remain current. If the deployer determines that any element in paragraph 1 has changed or is no longer accurate, an update is mandatory.
Paragraph 3 requires that, once the FRIA is complete, the deployer notify the relevant market surveillance authority of the results by submitting the completed template questionnaire referenced in paragraph 5. Deployers operating under the exceptional derogation in Article 46(1) — which covers emergency situations such as public security or protection of life and health — may be temporarily exempt from this notification.
Paragraph 4 is the DPIA coordination rule: where obligations under Article 27(1) are already met through a DPIA conducted pursuant to Article 35 of Regulation (EU) 2016/679 (GDPR) or Article 27 of Directive (EU) 2016/680, the FRIA shall complement that DPIA — not replace it. The word "complement" is deliberate: a DPIA alone never satisfies the FRIA obligation.
Paragraph 5 charges the EU AI Office with developing a template questionnaire, including through an automated tool, to facilitate deployer compliance. As of the publication of this guide, the EU AI Office had not yet published the official template. Deployers should monitor the European Commission's AI Act page for its release.
Recital 96 provides the legislative intent: the FRIA is designed so that deployers identify specific risks to the rights of individuals or groups likely to be affected, and identify measures to be taken in the case of materialisation of those risks. The recital also explicitly contemplates stakeholder involvement, including representatives of affected groups, independent experts, and civil society organisations, particularly in the public sector.
Who Must Conduct a FRIA
Not every deployer of a high-risk AI system is subject to Article 27. The obligation applies to three distinct categories, each grounded in the regulatory text.
Category 1: Bodies Governed by Public Law
Any deployer that is a body governed by public law must conduct a FRIA before deploying any Annex III high-risk AI system, with the single exception of critical infrastructure systems (Annex III, point 2). The EU AI Act does not define "bodies governed by public law" internally, but the term has a well-established meaning under EU procurement law — Article 2(1), point (4), of Directive 2014/24/EU defines these as bodies established to meet needs in the general interest, having legal personality, and financed mainly by the state, regional or local authorities, or other such bodies, or subject to management supervision by those bodies.
In practice, this means central government ministries, regional and local authorities, public hospitals, public universities, national social security bodies, national employment agencies, and any body meeting the public law definition under national implementing legislation. This category accounts for the largest number of FRIA-obligated deployers across the EU.
Category 2: Private Entities Providing Public Services
Private entities that are not technically governed by public law but which provide public services are also within scope when they deploy Annex III high-risk AI systems (again, excluding point 2). Recital 96 gives examples: education, healthcare, social services, housing, and administration of justice. The recital acknowledges that "services important for individuals that are of public nature may also be provided by private entities" — a private hospital, a housing association, or a contracted welfare-to-work provider operating under a public mandate would fall here, as explained in the analysis published by Securiti.
The exact scope of "public services" for private entities will depend in part on member state law, since the Act does not further define the term.
Category 3: All Deployers of Specific Financial AI Systems
The third category applies regardless of the deployer's legal character or public service mandate. Any deployer — private bank, insurer, fintech platform, or otherwise — that uses either of the following high-risk AI systems must conduct a FRIA before first deployment, as confirmed by Annex III, points 5(b) and 5(c):
- Annex III, point 5(b): AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud.
- Annex III, point 5(c): AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance.
This category is narrower than it might appear. The fraud detection carve-out in 5(b) is explicit, but legal analysis by Scanlex notes that "the boundary is not self-evident and requires legal analysis to confirm." Compliance teams should not assume that a system used for fraud detection is automatically excluded if it simultaneously generates creditworthiness signals.
Who Is Not Subject to Article 27
A standard private company — even one deploying high-risk AI — is not required to conduct a FRIA unless it falls into Category 2 or 3. As the aiacto.eu analysis explains: "If you are a standard private company — not a critical infrastructure operator — simply using an AI-powered HR tool or a commercial scoring solution, you are not in principle subject to the FRIA." That deployer's obligations are governed by Article 26 — human oversight, logging, monitoring, informing affected individuals — but not the formal FRIA of Article 27.
FRIA vs DPIA vs AI Impact Assessment: What Overlaps, What Does Not
The three instruments address overlapping but legally distinct concerns. Conflating them is one of the most common compliance errors.
| Dimension | DPIA (GDPR Art. 35) | FRIA (AI Act Art. 27) | NIST AI RMF Impact Assessment |
|---|---|---|---|
| Legal basis | Article 35, Regulation (EU) 2016/679 | Article 27, Regulation (EU) 2024/1689 | NIST AI RMF (voluntary US framework) |
| Scope | Risks to personal data processing and data subject rights | Full spectrum of fundamental rights under the EU Charter | Trustworthiness risks: bias, explainability, robustness, security |
| Trigger | High-risk personal data processing | Deployment of Annex III high-risk AI system by in-scope deployer | Voluntary; no legal trigger |
| Mandatory in EU | Yes — where processing is high-risk | Yes — for categories in Art. 27(1) | No |
| Mandatory consultation with DPO | Yes, under GDPR Art. 35(2) | Not specified | N/A |
| Notification to authority | Only if residual risk is high and no mitigation found (prior consultation, GDPR Art. 36) | Mandatory — submit completed template to market surveillance authority (Art. 27(3)) | No regulatory obligation |
| Rights covered | Personal data, privacy | All Charter rights: dignity, non-discrimination, expression, remedy, workers' rights, etc. | N/A (risk-management construct) |
| Can they be combined? | Yes — Art. 27(4) permits complementary conduct | Yes — FRIA complements DPIA | Yes — where internal frameworks align |
Key points for compliance officers:
The FRIA covers rights the DPIA does not touch. A DPIA asks whether personal data processing creates high risk to data subjects. A FRIA asks whether the AI system creates risks to any fundamental right of any affected person or group — including rights that have nothing to do with privacy. The right to non-discrimination (Article 21 of the EU Charter), the right to an effective remedy (Article 47), workers' rights (Articles 27-31), and the rights of children (Article 24) are all in scope. A credit scoring model could violate the non-discrimination right without processing a single item of sensitive personal data.
A DPIA and a FRIA can be conducted jointly. Article 27(4) explicitly permits this where the same processing operation triggers both instruments. The practical result, as explained by aiactblog.nl, is that organisations can "add the fundamental rights analysis to your existing DPIA" rather than creating two separate documents — provided the FRIA covers the full Charter scope beyond the DPIA's data-protection focus.
The GDPR DPIA must occur earlier in the lifecycle. As noted by the Croatian data protection regulator AZOP and reported by TechGDPR, a DPIA must be carried out at the beginning, before development commences, whereas a FRIA is a pre-deployment obligation. Combining the two is possible at the design phase, but a retrospective FRIA does not satisfy Article 27 any more than a retrospective DPIA satisfies GDPR Article 35.
The NIST AI RMF is a US voluntary framework structured around four functions (Map, Measure, Manage, Govern). It is not legally binding in the EU, its risk taxonomy is primarily technical, and it does not require any notification to any authority. Where an organisation uses the NIST AI RMF as part of its AI governance programme, the FRIA elements — particularly the fundamental rights mapping and stakeholder engagement — will need to be added as a supplement, not assumed as equivalent.
The 6 Elements Article 27 Mandates in a FRIA
Article 27(1) specifies six mandatory elements, enumerated as (a) through (f). These are minimum requirements; the EU AI Office template, when released, may require additional fields.
Element (a): Description of the deployer's processes. The assessment must document the specific processes within the deployer's operations in which the AI system will be used, aligned with the provider's stated intended purpose under Article 13. This is not a generic system description — it is a process-level mapping that connects the AI system's function to the deployer's actual operational workflows. For a bank deploying a creditworthiness model, this means documenting whether the output is used for loan origination decisions, limit reviews, pricing, or all three.
Element (b): Period of time and frequency of use. The assessment must state how long the system will be used and with what frequency — daily processing batches, real-time decision streams, periodic reviews. This matters because the scale and pace of use directly affect the volume and urgency of any rights impact.
Element (c): Categories of natural persons and groups likely to be affected. This element requires the deployer to identify who will be subject to the system's outputs — not only the direct subjects (loan applicants, insurance policyholders) but also groups that may be indirectly affected. The euairisk.com methodology recommends developing a "rights heat map" that distinguishes high-, medium-, and low-impact categories. Particular attention should be given to vulnerable populations: children, elderly persons, people with disabilities, ethnic minorities, persons with low digital literacy, and persons in economically dependent situations.
Element (d): Specific risks of harm. For each category of affected persons or groups identified under (c), the FRIA must assess the specific risks of harm to their fundamental rights, taking into account the information given by the provider pursuant to Article 13. The Article 13 reference is significant: providers of high-risk AI systems are legally required to supply instructions for use that include information on the system's capabilities and limitations, known risks, and data governance. Deployers should request and review this documentation before conducting the FRIA.
Element (e): Description of human oversight implementation. The assessment must describe how human oversight measures will be implemented in accordance with the provider's instructions for use, as required under Article 14. This means specifying who will review AI outputs, under what circumstances, with what authority to override, and with what training. Human oversight is not satisfied by a theoretical ability to override; it requires documented operational governance.
Element (f): Measures to be taken if risks materialise. The assessment must describe concrete measures — including arrangements for internal governance and complaint mechanisms — that the deployer will implement if the identified risks become actual harms. This is forward-looking operational planning, not an abstract commitment. An affected individual denied credit or charged a higher insurance premium must have a practical avenue for redress. The complaint mechanism must be operational before the system goes live.
Step-by-Step FRIA Methodology
Step 1: Confirm Applicability
Before any assessment work begins, confirm in writing that the deployer falls into one of the three Article 27 categories and that the AI system is a high-risk system under Article 6(2) and Annex III (excluding point 2). Document the legal basis for your determination. Check whether the financial fraud carve-out in Annex III, point 5(b) applies to any AI system that touches creditworthiness signals.
For internal tracking, see also the EU AI Act vendor selection guide on this site for tools that support system classification.
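To keep the Step 1 determination auditable, some teams encode the scoping logic as a small, version-controlled check that sits alongside the written legal analysis. The sketch below is illustrative only: the field names and the single-point classification string are assumptions, and edge cases such as the fraud-detection carve-out in point 5(b) still require legal review.

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    deployer_is_public_body: bool     # body governed by public law
    provides_public_services: bool    # private entity providing public services
    annex_iii_point: str              # e.g. "5(b)", "5(c)", "4(a)", "2"
    high_risk_under_art_6_2: bool     # Annex III classification, not an Art. 6(1) product

def fria_required(d: Deployment) -> tuple[bool, str]:
    """Rough Article 27(1) applicability screen; the legal determination still needs counsel review."""
    if not d.high_risk_under_art_6_2:
        return False, "Not an Annex III / Article 6(2) high-risk system"
    if d.annex_iii_point in {"5(b)", "5(c)"}:
        # Creditworthiness and life/health insurance pricing: all deployers are in scope
        return True, f"Annex III point {d.annex_iii_point} system (Article 27(1) carve-in)"
    if d.annex_iii_point == "2":
        # Critical infrastructure is expressly excluded from the FRIA obligation
        return False, "Annex III point 2 (critical infrastructure) is excluded"
    if d.deployer_is_public_body or d.provides_public_services:
        return True, "Public body or private entity providing public services"
    return False, "Standard private deployer: Article 26 applies, Article 27 does not"

print(fria_required(Deployment(False, False, "5(b)", True)))
```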
Step 2: Obtain Provider Documentation
Request and review all documentation the provider is required to supply under Article 13, including:
- Technical documentation per Annex IV
- Instructions for use (system capabilities, limitations, and contraindications)
- Known biases, performance limitations, and failure modes
- Data governance details: training data composition, data sources, and known representativeness gaps
- Human oversight guidance specific to the system
If the provider has already conducted an impact assessment, obtain it. Under Article 27(2), you may rely on this assessment for similar deployments, subject to confirming currency.
Step 3: Assemble a Multidisciplinary Assessment Team
A FRIA requires expertise that no single function possesses. At minimum, assemble:
- Legal/compliance: AI Act obligations, Charter of Fundamental Rights interpretation
- Data protection officer (DPO): DPIA coordination and GDPR interface
- Technical: system architecture, model behaviour, output interpretation
- Domain expert: credit risk, insurance actuarial, benefits administration (as applicable)
- HR or affected-community liaison: if the system affects employees or members of the public
- Procurement/vendor management: provider documentation interface
Step 4: Map Affected Persons and Groups
Produce a structured list of all categories of individuals who are directly subject to the AI system's outputs, as well as groups that may be indirectly affected. For each category, record:
- Relationship to the system (applicant, policyholder, benefit claimant, etc.)
- Volume (approximate number of persons per period)
- Vulnerability factors (age, disability, financial precarity, language, digital literacy)
- Existing inequalities relevant to the deployment context
The euairisk.com framework notes that cumulative effects — where the AI system interacts with other systems targeting the same population — can create aggregate harms that no individual system assessment would capture. One documented case: banks using similar credit scoring AI in the same geographies effectively created geographic exclusion zones where no residents could access credit.
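One record per affected category keeps element (c) consistent across systems and makes the "rights heat map" mentioned above easy to filter. A minimal sketch, with illustrative field names and example groups, follows.

```python
from dataclasses import dataclass, field

@dataclass
class AffectedGroup:
    name: str                           # e.g. "Applicants with irregular income"
    relationship: str                   # applicant / policyholder / claimant / indirectly affected
    approx_volume_per_month: int
    vulnerability_factors: list[str] = field(default_factory=list)
    contextual_inequalities: str = ""
    heat_map_rating: str = "medium"     # high / medium / low

groups = [
    AffectedGroup("Gig-economy mortgage applicants", "applicant", 900,
                  ["irregular income", "thin credit file"], heat_map_rating="high"),
    AffectedGroup("Applicants with low digital literacy", "applicant", 400,
                  ["digital exclusion"], "limited access to online appeal channel", "high"),
]

high_impact = [g.name for g in groups if g.heat_map_rating == "high"]
print(high_impact)
```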
Step 5: Conduct Fundamental Rights Mapping Against the EU Charter
For each fundamental right listed in the Charter of Fundamental Rights of the European Union, assess:
- Whether the right can be affected by the AI system in your specific deployment context
- The nature of impact (positive, negative, or mixed; direct or indirect)
- Likelihood of impact and severity (scale, scope, irremediability, probability)
Rights to consider, at minimum:
| Charter Article | Right | Relevance to AI Deployers |
|---|---|---|
| Art. 1 | Human dignity | Automated decisions that dehumanise or stigmatise |
| Art. 8 | Protection of personal data | Overlaps with DPIA; data minimisation in AI inputs |
| Art. 20-21 | Equality; Non-discrimination | Proxy discrimination via biased training data |
| Art. 24 | Rights of the child | Systems processing data of or affecting minors |
| Art. 25 | Rights of elderly persons | Differential impact of automated decisions on older cohorts |
| Art. 26 | Integration of persons with disabilities | Accessibility of AI-driven services |
| Art. 41 | Right to good administration | Public-sector AI used to make or assist administrative decisions |
| Art. 47 | Right to effective remedy | Absence of meaningful appeal or explanation mechanism |
| Arts. 27-31 | Workers' rights | AI used in employment, scheduling, performance monitoring |
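The right-by-right assessment in Step 5 can be captured as one record per Charter article, carrying the impact dimensions listed above. The rollup rule in the sketch below (worst dimension wins) is an assumption for illustration, not a scale prescribed by Article 27.

```python
from dataclasses import dataclass

@dataclass
class CharterRightAssessment:
    charter_article: str      # e.g. "Art. 21"
    right: str                # e.g. "Non-discrimination"
    affected: bool
    nature_of_impact: str     # positive / negative / mixed; direct / indirect
    scale: str                # high / medium / low
    scope: str
    irremediability: str
    probability: str

    def severity(self) -> str:
        """Worst-dimension rollup: any 'high' dimension makes the right high-severity."""
        dims = (self.scale, self.scope, self.irremediability, self.probability)
        if "high" in dims:
            return "high"
        return "medium" if "medium" in dims else "low"

art_21 = CharterRightAssessment("Art. 21", "Non-discrimination", True,
                                "negative, indirect (postcode proxy)",
                                scale="high", scope="medium",
                                irremediability="medium", probability="medium")
print(art_21.severity())  # high
```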
Step 6: Stakeholder Engagement
Recital 96 states that deployers "could involve relevant stakeholders, including the representatives of groups of persons likely to be affected by the AI system, independent experts, and civil society organisations in conducting such impact assessments." For public-sector deployers, this participation expectation approaches a normative requirement, and as the Lund University academic analysis notes, member states may impose more demanding consultation requirements.
Stakeholder engagement for a benefits eligibility system deployed by a public authority might include: community meetings with benefit recipients, written input from disability rights organisations, consultation with legal aid providers, and a public comment period on the draft FRIA.
Document all stakeholder feedback and how it was addressed. Unanswered objections should be specifically noted with the deployer's rationale.
Step 7: Assess and Document Specific Risks of Harm
For each fundamental right identified as potentially impacted in Step 5, produce a structured risk assessment:
- Describe precisely how the AI system could affect the right
- Assess severity using a consistent scale (high/medium/low across scale, scope, irremediability, and probability)
- Note where information from the provider's Article 13 documentation informs the risk level
- Identify whether the risk materialises as a statistical pattern (systematic discrimination in outputs) or as an individual failure mode (erroneous denial of benefit)
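To keep element (d) traceable, each documented risk can link back to a Step 4 group, a Charter right, and the part of the provider's Article 13 documentation that informed the rating. All identifiers and field names below are illustrative.

```python
from dataclasses import dataclass

@dataclass
class RiskOfHarm:
    risk_id: str
    affected_group: str            # reference to a Step 4 category
    charter_right: str             # e.g. "Art. 21 Non-discrimination"
    description: str
    severity: str                  # high / medium / low
    probability: str               # high / medium / low
    provider_doc_reference: str    # where the Art. 13 instructions for use informed this rating
    failure_mode: str              # "statistical pattern" or "individual failure"

register = [
    RiskOfHarm("R-003", "Gig-economy mortgage applicants", "Art. 21 Non-discrimination",
               "Income-stability feature penalises irregular but adequate income",
               severity="high", probability="medium",
               provider_doc_reference="Instructions for use, section on known limitations",
               failure_mode="statistical pattern"),
]
```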
Step 8: Design and Document Mitigation Measures
For each identified risk, specify:
- Technical measures (model monitoring for disparate impact, regular bias audits, explainability tooling)
- Procedural safeguards (mandatory human review thresholds, escalation protocols, exception-handling workflows)
- Governance arrangements (accountability assignment, escalation paths, periodic review schedule)
- Complaint and redress mechanisms (accessible channels, response timelines, remedy options including reversal)
Be specific. "Implement human oversight" is not a mitigation measure. "Any credit decision resulting in a denial where the AI confidence score is below X percent will be reviewed within five business days by a senior credit officer with authority to override the AI output, and the applicant will be notified of the review process in writing" is.
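The worked mitigation above translates directly into an operational routing rule that can be tested before go-live. The confidence threshold and the five-day SLA below are placeholders standing in for the "X percent" and review window a deployer would set, not values the Act prescribes.

```python
from dataclasses import dataclass

REVIEW_CONFIDENCE_THRESHOLD = 0.80   # placeholder for the "X percent" in the mitigation measure
REVIEW_SLA_BUSINESS_DAYS = 5         # placeholder review window

@dataclass
class CreditDecision:
    application_id: str
    outcome: str          # "approve" / "deny"
    ai_confidence: float  # model-reported confidence in [0, 1]

def requires_human_review(decision: CreditDecision) -> bool:
    """Route denials with low model confidence to a senior credit officer (element (f) measure)."""
    return decision.outcome == "deny" and decision.ai_confidence < REVIEW_CONFIDENCE_THRESHOLD

d = CreditDecision("APP-2041", "deny", 0.62)
if requires_human_review(d):
    print(f"{d.application_id}: senior credit officer review within "
          f"{REVIEW_SLA_BUSINESS_DAYS} business days; notify applicant in writing")
```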
Step 9: Document Human Oversight Implementation
Produce a dedicated section describing how the deployer will implement human oversight as specified in the provider's instructions for use under Article 14. Include:
- Named role (by title, not person) responsible for oversight
- Training requirements and evidence of completion before go-live
- Technical controls that surface AI outputs requiring human review
- Escalation paths when human oversight identifies a system issue
- Log retention requirements (deployers of high-risk AI are required to maintain logs for at least six months under Article 26)
Step 10: Finalise, Notify, and Plan for Review
Compile the FRIA document, verify that all six Article 27(1) elements are addressed, and submit the completed assessment (using the official AI Office template once published, or your organisation's template aligned to the six elements until then) to the relevant national market surveillance authority pursuant to Article 27(3).
Establish a formal review schedule. The assessment must be updated if the deployer determines that any relevant element has changed. Triggers for review include:
- Material changes to the AI system (model updates, retraining, new data sources)
- Changes to the deployment context (new populations, new geographies, new use cases)
- Adverse incidents flagged through complaint mechanisms or post-market monitoring
- Material new guidance from the EU AI Office or national authorities
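One lightweight way to operationalise the review schedule is to snapshot the elements the FRIA assessed and flag any divergence from the live deployment. The field names in the sketch are assumptions mirroring the trigger list above.

```python
from dataclasses import dataclass, asdict

@dataclass
class FriaSnapshot:
    model_version: str
    data_sources: tuple[str, ...]
    deployment_countries: tuple[str, ...]
    use_cases: tuple[str, ...]
    complaint_channel: str

def update_required(assessed: FriaSnapshot, current: FriaSnapshot) -> list[str]:
    """Return the elements whose recorded state no longer matches the live deployment (Art. 27(2) trigger)."""
    before, after = asdict(assessed), asdict(current)
    return [k for k in before if before[k] != after[k]]

assessed = FriaSnapshot("v1.2", ("bureau", "open-banking"), ("NL",), ("origination",), "web+phone")
current = FriaSnapshot("v1.3", ("bureau", "open-banking"), ("NL", "BE"), ("origination",), "web+phone")
changed = update_required(assessed, current)
if changed:
    print("FRIA update required; changed elements:", changed)  # ['model_version', 'deployment_countries']
```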
Stakeholder Engagement Requirements
Article 27 does not impose a rigid statutory stakeholder consultation protocol on all deployers. However, Recital 96 explicitly anticipates it: "Where appropriate, to collect relevant information necessary to perform the impact assessment, deployers of high-risk AI systems, in particular when AI systems are used in the public sector, could involve relevant stakeholders."
For public-sector deployers, the combination of the Act's text, the recital, and the academic commentary in the Lund University / European Journal of Law and Technology analysis creates a strong expectation of genuine stakeholder participation, especially where the AI system will affect citizens in administrative decisions. The article argues that the FRIA mechanism "offers Member States a critical lever to secure fundamental rights and foster human-centric and trustworthy AI" — and that member states can require deeper assessments than the AI Act baseline.
For financial sector deployers (Annex III, 5(b) and 5(c)), the recital's "in particular when AI systems are used in the public sector" language might suggest consultation is less pressing. However, the rights at stake — non-discrimination in access to financial services, right to effective remedy — are serious enough that compliance teams should document why they determined broader consultation was not warranted, rather than simply omitting it.
Minimum consultation documentation should include:
- Which groups were consulted or considered for consultation, and why
- The method of engagement (written submissions, focus groups, public notice, expert workshops)
- A summary of feedback received and how it influenced the assessment
- For public-sector deployers: whether any ongoing monitoring committee with community representation was established
Notification to the Market Surveillance Authority
Once the FRIA is complete, Article 27(3) requires the deployer to notify the relevant market surveillance authority of the results. Notification is made by submitting the completed template questionnaire referred to in Article 27(5).
Key practical points:
Which authority? Each member state must designate at least one market surveillance authority and at least one notifying authority under Article 70. Member states were required to designate these authorities by 2 August 2025. As of early 2026, only eight out of 27 member states had formally designated their single point of contact, representing a significant implementation lag. Compliance teams should monitor the AI Act national implementation plans page for updates in their relevant jurisdictions.
No official template yet. As of publication, the EU AI Office had not released the Article 27(5) template questionnaire. The kla.digital analysis and aiactblog.nl template guide both recommend building internal templates aligned to the six Article 27(1) elements in the interim, while monitoring the AI Office for the official release. When the official template is published, deployers may need to adapt their documentation.
Emergency exemption. Under Article 46(1), market surveillance authorities may authorise AI system deployment without completing the full conformity assessment procedure for exceptional reasons of public security, protection of life and health, environmental protection, or protection of key industrial and infrastructural assets. In those cases, the notification obligation under Article 27(3) may also be suspended temporarily — but the Article 27(3) text specifies this exemption applies only "in the case referred to in Article 46(1)," meaning it is a narrow carve-out.
No equivalent to GDPR "prior consultation." Unlike GDPR Article 36, which requires supervisory authority consultation before proceeding where a DPIA identifies high residual risk, there is no prior approval mechanism in Article 27. Notification is post-FRIA, not pre-deployment approval.
Template Fields and Examples
Pending the official AI Office questionnaire, the following template structure, derived from the six mandatory Article 27(1) elements and cross-referenced with published practitioner templates from kla.digital and aiactblog.nl, provides a working structure:
| Section | Required Content |
|---|---|
| 1. Deployer identification | Legal name, registered address, DPO contact, responsible officer for this FRIA |
| 2. AI system identification | System name and version, provider name and contact, date of provider documentation received |
| 3. Intended purpose and process description (Art. 27(1)(a)) | The specific business processes in which the system will be used; alignment with provider's intended purpose |
| 4. Period and frequency of use (Art. 27(1)(b)) | Start date of deployment, expected duration, frequency (real-time / batch / periodic), estimated volume of decisions per period |
| 5. Affected persons and groups (Art. 27(1)(c)) | Categories of natural persons directly subject to AI outputs; indirectly affected groups; vulnerability factors per group |
| 6. Fundamental rights mapping | Charter right-by-right assessment: applicability, nature of potential impact, severity rating |
| 7. Specific risks of harm (Art. 27(1)(d)) | Risk-by-risk description linked to Article 13 provider information; likelihood and severity assessment |
| 8. Human oversight implementation (Art. 27(1)(e)) | Roles, training requirements, technical controls, escalation protocols, log retention |
| 9. Risk materialisation measures (Art. 27(1)(f)) | Internal governance arrangements; complaint mechanism description and access; remediation procedures |
| 10. Stakeholder engagement | Who was consulted; method; summary of feedback; response; ongoing engagement plan |
| 11. DPIA coordination | Whether a DPIA was conducted; FRIA-DPIA relationship; elements satisfied by the DPIA |
| 12. Date and version | Date of FRIA completion; version number; planned review date; conditions triggering early review |
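Pending the official AI Office questionnaire, the working structure above can also be maintained as a machine-readable checklist so each section is tracked to completion and versioned alongside the system. The sketch below mirrors the twelve sections; field names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class FriaSection:
    number: int
    title: str
    art_27_reference: str = ""     # e.g. "Art. 27(1)(a)" where one applies
    content: str = ""              # narrative completed by the assessment team

    @property
    def complete(self) -> bool:
        return bool(self.content.strip())

FRIA_TEMPLATE = [
    FriaSection(1, "Deployer identification"),
    FriaSection(2, "AI system identification"),
    FriaSection(3, "Intended purpose and process description", "Art. 27(1)(a)"),
    FriaSection(4, "Period and frequency of use", "Art. 27(1)(b)"),
    FriaSection(5, "Affected persons and groups", "Art. 27(1)(c)"),
    FriaSection(6, "Fundamental rights mapping"),
    FriaSection(7, "Specific risks of harm", "Art. 27(1)(d)"),
    FriaSection(8, "Human oversight implementation", "Art. 27(1)(e)"),
    FriaSection(9, "Risk materialisation measures", "Art. 27(1)(f)"),
    FriaSection(10, "Stakeholder engagement"),
    FriaSection(11, "DPIA coordination"),
    FriaSection(12, "Date and version"),
]

outstanding = [s.title for s in FRIA_TEMPLATE if not s.complete]
print(outstanding)   # everything is outstanding until the narrative fields are filled in
```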
Common Mistakes
Treating the FRIA as an Extended DPIA
The most widespread error is completing a data protection impact assessment and relabelling it a FRIA. A DPIA under GDPR Article 35 focuses on risks to personal data processing. A FRIA under AI Act Article 27 covers all fundamental rights in the EU Charter — including rights that have no connection to personal data. A credit scoring model that discriminates on the basis of postcode (a proxy for ethnicity) implicates the non-discrimination right whether or not the model processes any personal data directly. Any FRIA that omits a structured Charter rights analysis beyond the data-protection chapter is legally deficient.
Conducting the FRIA After Deployment
Article 27(1) is unambiguous: the assessment must be completed prior to deploying the system. A retrospective FRIA does not satisfy the legal obligation. As the aiacto.eu analysis states: "Conducting it seriously, before deployment, is both a legal requirement and an act of accountability towards the people your AI systems affect." Organisations that have already deployed in-scope AI systems should treat the obligation as applicable from 2 August 2026 and complete the FRIA before that date rather than after; for a system already in production, a FRIA completed before the obligation attaches is the correct approach.
No Stakeholder Consultation, No Documentation of Why Not
Omitting stakeholder engagement without documenting the rationale creates an audit vulnerability. Even where consultation is not strictly mandatory, the absence of any consideration of affected groups' perspectives weakens the quality of the rights analysis and may draw scrutiny from market surveillance authorities.
Failing to Update the FRIA When Material Changes Occur
Article 27(2) requires updating the FRIA whenever any element listed in paragraph 1 has changed or is no longer up to date. Common triggers that organisations fail to treat as update triggers: model retraining on new data, deployment to a new member state or customer segment, changes to the complaint mechanism, and organisational restructuring that changes oversight roles.
Treating Article 27 as a Provider Obligation
Article 27 is a deployer obligation. The AI system provider has separate obligations under Articles 9-15 and 16, including maintaining technical documentation and instructions for use. But the FRIA must be conducted by the entity that deploys and uses the system. A contractual clause requiring the provider to conduct the FRIA on behalf of the deployer does not satisfy the deployer's own legal obligation.
Not Connecting the FRIA to Operational Procedures
A FRIA that sits in a compliance archive but is not connected to operational governance, complaint handling, and human oversight procedures is a document, not a compliance programme. The complaint mechanism described in element (f) must be operational on the day the system goes live. Human oversight described in element (e) must be staffed, trained, and technically supported from day one.
How AI Governance Platforms Support FRIAs
The following assessments are based on review of each vendor's publicly accessible website as of April 2026. Where a vendor's public materials do not explicitly reference FRIA or Article 27, this is noted.
[Credo AI](/vendors/credo-ai) (credo.ai): Credo AI's EU AI Act page lists "Carry out a Fundamental Rights Impact Assessment" as one documented obligation under its FAQ section. The page does not describe specific FRIA tooling or automation. FRIA-specific tooling not publicly documented as of April 2026; verify directly with vendor.
[Holistic AI](/vendors/holistic-ai) (holisticai.com): Holistic AI's platform includes EU AI Act compliance frameworks with built-in control mapping and gap analysis, and the platform describes capabilities including risk profiling, bias testing, automated compliance workflows, and audit-ready evidence collection. The public homepage and EU AI Act compliance pages reviewed did not explicitly reference FRIA or Article 27. FRIA-specific tooling not publicly documented as of April 2026; verify directly with vendor.
[Saidot](/vendors/saidot) (saidot.ai): Saidot is an EU-native SaaS platform for AI governance that advertises step-by-step compliance templates and out-of-the-box templates for the AI Act, with evidence reuse across systems. The public homepage reviewed did not explicitly reference FRIA or Article 27. FRIA-specific tooling not publicly documented as of April 2026; verify directly with vendor.
[Trustible](/vendors/trustible) (trustible.ai): The Trustible public homepage reviewed did not explicitly reference FRIA, Fundamental Rights Impact Assessment, or Article 27 of the EU AI Act. FRIA-specific tooling not publicly documented as of April 2026; verify directly with vendor.
[Modulos](/vendors/modulos) (modulos.ai): Modulos positions itself as an AI governance, risk, and compliance platform covering 14+ frameworks including the EU AI Act and ISO 42001. The public homepage reviewed did not explicitly reference FRIA or Article 27. FRIA-specific tooling not publicly documented as of April 2026; verify directly with vendor.
[Enzai](/vendors/enzai): The domain enzai.ai did not resolve at the time of research. Cannot confirm vendor existence or product capabilities; verify directly.
For a comparative assessment of AI governance platforms that support EU AI Act compliance workflows, see the EU AI Act compliance tools guide on this site.
General capability gaps to evaluate when assessing any platform for FRIA support:
- Does the platform include a Charter of Fundamental Rights mapping template?
- Can the platform generate a structured six-element Article 27 report?
- Does the platform support stakeholder feedback documentation?
- Can the platform track FRIA version history and update triggers?
- Does the platform include a notification workflow for market surveillance authority submission?
Example FRIA Scenarios
Scenario 1: Credit Scoring at a Consumer Bank (Annex III, point 5(b))
A commercial bank deploys a third-party AI system to evaluate mortgage applicants' creditworthiness. The system uses open banking data, credit bureau data, and behavioural signals to produce a probability-of-default score that directly influences loan approval and interest rate pricing.
FRIA trigger: Annex III, point 5(b) — any deployer of creditworthiness AI, regardless of public/private character, per Article 27(1).
Affected groups: Mortgage applicants, including first-time buyers; renters in urban areas; applicants from minority ethnic backgrounds (where postcode or behavioural proxies may introduce indirect discrimination); applicants with irregular income (gig workers, self-employed, recently returned from maternity leave).
Key fundamental rights at risk: Non-discrimination (EU Charter, Art. 21) — postcode-based proxies could systematically disadvantage applicants from deprived areas correlating with ethnicity; right to effective remedy (Art. 47) — applicants must be able to contest adverse AI-driven decisions; data protection (Art. 8) — scope of behavioural data inputs.
FRIA elements: The bank documents that the system will be used in loan origination (Art. 27(1)(a)); processes approximately 8,000 applications per month during a three-year initial period (b); affects applicants meeting the above profiles (c); risks include proxy discrimination and erroneous denials creating financial exclusion (d); a senior credit analyst reviews all AI-rejected applications above a loan threshold, with authority to override (e); appeals are handled within 15 business days via a dedicated mortgage complaints team (f).
Notification: Results submitted to the national market surveillance authority (in this jurisdiction, the financial regulator designated under Article 70) using the Article 27(5) template once published.
Scenario 2: Life Insurance Risk Pricing (Annex III, point 5(c))
A life insurer introduces an AI model that analyses medical history, lifestyle data from wearables, and socioeconomic indicators to produce individual risk scores used to price life insurance premiums.
FRIA trigger: Annex III, point 5(c) — risk assessment and pricing in relation to natural persons in the case of life insurance, per Annex III.
Affected groups: All policyholders and new applicants for life insurance products; sub-groups at elevated risk of systematic disadvantage include applicants with pre-existing conditions, applicants who decline to share wearable data, and applicants from socioeconomic profiles where lifestyle proxies may correlate with protected characteristics.
Key fundamental rights at risk: Non-discrimination (Art. 21) — actuarial proxies for risk may encode discrimination based on disability, ethnicity, or socioeconomic status; human dignity (Art. 1) — decisions about insurability that label individuals as uninsurable may raise dignity concerns; right to effective remedy (Art. 47) — policyholders must be able to contest AI-driven pricing decisions.
FRIA process notes: The insurer obtains the provider's Article 13 technical documentation, which includes known performance disparities across age cohorts. The FRIA documents these disparities, maps them to the non-discrimination right, and specifies quarterly bias audits as a mitigation measure. The complaint mechanism is integrated into the insurer's existing ombudsman referral process.
Scenario 3: Social Benefits Eligibility at a Public Authority (Annex III, point 5(a) / public body)
A regional social services authority deploys an AI system to assess eligibility for means-tested housing benefit. The system processes income, employment, and household data to produce an eligibility probability score, which human case workers use to prioritise applications for manual review.
FRIA trigger: The deployer is a body governed by public law; the system falls under Annex III, point 5(a) — AI used by public authorities to evaluate eligibility for essential public assistance benefits, per Article 27(1).
Affected groups: Housing benefit applicants, including single-parent households, persons with disabilities, recent immigrants, and individuals with non-standard employment arrangements. The authority's FRIA notes that applicants with limited digital literacy may have lower-quality data profiles, reducing their probability scores unfairly.
Key fundamental rights at risk: Non-discrimination (Art. 21) — applicant profiles with proxy characteristics; right to good administration (Art. 41) — the system must not deprive applicants of timely, fair decisions; right to effective remedy (Art. 47) — appeal rights; human dignity (Art. 1) — automated scoring of individuals in precarious housing situations.
Stakeholder engagement: The authority conducted three community consultation sessions with benefit recipient advocacy groups, received written submissions from a disability rights organisation and a migrants' legal aid service, and incorporated feedback that the complaint mechanism be available in six languages. This process is documented in the FRIA's stakeholder section.
FRIA result: Submitted to the member state's designated market surveillance authority (single point of contact) per Article 27(3).
FAQ
1. Does the FRIA apply to every high-risk AI system a public body deploys?
Not quite. The FRIA applies to public bodies deploying any Annex III high-risk AI system except those falling under Annex III, point 2 (critical infrastructure — road traffic management, water/gas/electricity supply systems). All other Annex III categories — biometrics, education, employment, access to essential services, law enforcement, migration, administration of justice — require a FRIA from public-body deployers, per the unambiguous language of Article 27(1).
2. A private bank that is not a "public body" and does not provide "public services" — does it still need a FRIA for its credit scoring AI?
Yes. Article 27(1) creates a separate and independent obligation for deployers of high-risk AI systems referred to in Annex III, points 5(b) and 5(c), regardless of whether the deployer is a public body or a private entity providing public services. A commercial bank deploying creditworthiness AI is unambiguously in scope, as confirmed by Scanlex's financial sector compliance analysis.
3. If we have already completed a DPIA, do we also need a FRIA?
Yes, unless every Article 27(1) element is already fully addressed in the DPIA — which is unlikely. Article 27(4) states that the FRIA shall complement the DPIA. The two instruments can be conducted jointly and documented in a single integrated report, but the FRIA must add the elements that go beyond data protection: the full Charter rights analysis, the human oversight description, the complaint mechanism, and the market surveillance authority notification. As the TechGDPR analysis notes, a DPIA and FRIA can be combined into one document, but timing matters: the DPIA should occur before development, while the FRIA must occur before first deployment.
4. What is the penalty for failing to conduct a FRIA?
Deployer obligations, including Article 27, fall under Article 99(4) of the EU AI Act. Non-compliance carries administrative fines of up to €15 million or 3 percent of total worldwide annual turnover for the preceding financial year, whichever is higher. Member states may also impose non-monetary measures, including warnings and orders to suspend deployment. As the EU AI Act Checklist analysis notes, this is the penalty tier for August 2026 deadline non-compliance.
5. What is the EU AI Office template, and when will it be released?
Article 27(5) requires the EU AI Office to develop a template questionnaire, including through an automated tool, to facilitate deployers in complying with their Article 27 obligations. As of the publication date of this guide, the official template had not been published. Organisations should build internal templates aligned to the six Article 27(1) elements and adapt them when the official template is released. Monitor the European Commission's AI Act page for publication announcements.
6. Can we rely on the AI system provider's impact assessment instead of conducting our own FRIA?
Partially. Article 27(2) states that a deployer "may, in similar cases, rely on previously conducted fundamental rights impact assessments or existing impact assessments carried out by provider." However, this reliance is conditional: the deployer must verify that the elements assessed remain accurate and current for its specific deployment context. A provider's generic impact assessment will rarely address the deployer's specific processes, the deployer's specific affected populations, or the deployer's complaint mechanism. The notification obligation under Article 27(3) also rests on the deployer, not the provider.
7. Is there a statutory deadline for submitting the FRIA notification to the market surveillance authority?
Article 27(3) requires notification "once the assessment referred to in paragraph 1 of this Article has been performed" — which must occur before first deployment. In practice, this means the notification should be submitted at the time of or immediately following FRIA completion, before the system goes live. There is no separate statutory grace period for notification beyond the pre-deployment timing of the assessment itself.
8. How does the Article 27 FRIA relate to the EU AI database registration requirement under Article 49?
These are separate obligations. Article 49 requires deployers that are public authorities to register their use of high-risk AI systems in the EU database before deployment. The FRIA notification under Article 27(3) is addressed to the national market surveillance authority — not the EU database. The two obligations overlap in timing (both pre-deployment) but are procedurally distinct. Some practitioners recommend submitting both simultaneously to maintain a clean audit trail.
Sources / Further Reading
All sources cited in this guide are publicly accessible. Readers are encouraged to consult primary regulatory sources directly.
Primary Regulatory Sources
- Regulation (EU) 2024/1689 (EU AI Act) — EUR-Lex
- EU AI Act Explorer — artificialintelligenceact.eu — Article 27 text
- Annex III — EU AI Act Explorer
- Article 26 (Deployer obligations) — EU AI Act Explorer
- Article 99 (Penalties) — EU AI Act Explorer
- Article 70 (Market surveillance authorities) — EU AI Act Explorer
- Regulation (EU) 2016/679 (GDPR) — EUR-Lex — Article 35 (DPIA)
- European Commission AI Act regulatory page
- EU Charter of Fundamental Rights
Academic and Analytical Sources
- Eduardo Gill-Pedro, "Fundamental Rights Impact Assessments in the EU's AI Act," Lund University / EJLT
- CEDPO Micro-Insight Paper: Fundamental Rights Impact Assessments
Practitioner Guides
- AiActo: FRIA Guide — AI Act Fundamental Rights Impact Assessment
- euairisk.com: Fundamental Rights Impact Assessments — Article 27
- aiactblog.nl: Free FRIA Template for Article 27
- kla.digital: EU AI Act FRIA Template
- Scanlex: AI Compliance for Regulated Financial Institutions
- Securiti: Article 27 FRIA Overview
- aiactblog.nl: EU AI Act Risk Assessments Overview
- TechGDPR: Combining FRIA with DPIA
- EU AI Act Checklist: Fines and Penalties
- AI Act national implementation plans
On This Site
- EU AI Act compliance complete guide
- AI Impact Assessment template
- EU AI Act vendor selection guide
- Best EU AI Act compliance tools