AI Compliance Vendors

Free tool

Fundamental Rights Impact Assessment generator

Article 27 of the EU AI Act requires deployers of certain high-risk AI systems to conduct a Fundamental Rights Impact Assessment before first use. This tool walks you through Article 27(1)(a) to (f) one section at a time, produces a live markdown preview, and lets you download or print the result.

Organisation & system

Article 27 applicability triggers

(a) Deployer processes — Art. 27(1)(a)
(b) Period and frequency — Art. 27(1)(b)
(c) Affected categories — Art. 27(1)(c)
(d) Specific risks of harm — Art. 27(1)(d)

Add a row for each fundamental rights risk identified.

No risks added yet.

(e) Human oversight — Art. 27(1)(e)
(f) If risks materialise — Art. 27(1)(f)
Governance metadata
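
Together, the sections above collect everything Article 27(1) asks for. For readers who want to script against the output, here is a minimal TypeScript sketch of the input model a generator like this might use; every field name is hypothetical, not the tool's actual internals:

```ts
// Hypothetical input model for a FRIA generator. One block per
// Article 27(1) sub-paragraph; field names are illustrative only.
interface FriaInput {
  organisation: {
    deployer: string;
    deployerType: "public-body" | "private-public-service" | "private-sector";
    aiSystem: string;
    provider: string;
    owner: string;
  };
  processes: string;              // (a) deployer processes, per intended purpose
  periodAndFrequency: {           // (b) period of time and frequency
    startDate: string;
    duration: string;
    frequency: string;
  };
  affectedCategories: {           // (c) natural persons and groups affected
    primary: string;
    vulnerableGroups: string;
    estimatedReach: string;
  };
  risks: RiskRow[];               // (d) one row per identified risk
  humanOversight: {               // (e) oversight measures and training
    measures: string;
    training: string;
  };
  ifRisksMaterialise: {           // (f) governance, complaints, notification
    internalGovernance: string;
    complaintMechanism: string;
    notificationObligations: string;
  };
}

// A risk row for section (d): one specific harm per affected category.
interface RiskRow {
  affectedCategory: string;
  harm: string;
  severity: "low" | "medium" | "high";
}
```

The risk table in section (d) maps to the `risks` array: each row added in the form becomes one `RiskRow` in the generated document.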

Live preview

# Fundamental Rights Impact Assessment (FRIA)

**Article 27, Regulation (EU) 2024/1689**

| Field | Value |
|---|---|
| Organisation (deployer) | — |
| Deployer type | private-sector |
| AI system | — |
| Provider | — |
| Article 27 applicability | Verify whether Article 27 applies before relying on this assessment. |
| Owner responsible | — |
| Review cadence | annually, and after any material change |
| Generated | 2026-04-26 |

---

## (a) Deployer processes in which the high-risk AI system is used (Art. 27(1)(a))

**Intended purpose (per provider instructions for use):** —

**Processes:** —

---

## (b) Period of time and frequency (Art. 27(1)(b))

- Start date: —
- Duration: —
- Frequency of use: —

---

## (c) Categories of natural persons and groups likely to be affected (Art. 27(1)(c))

**Primary categories:** —

**Vulnerable groups:** —

**Estimated reach (per period):** —

---

## (d) Specific risks of harm to those categories (Art. 27(1)(d))

This section reflects the information supplied by the provider under Article 13 (transparency obligations). Where that information is missing or insufficient, the deployer must request it from the provider before deployment.

_No risks documented yet. The FRIA must list specific risks per affected category._

---

## (e) Human oversight measures (Art. 27(1)(e))

Per the provider's instructions for use under Article 13:

—

**Operator training and competence:**

—

---

## (f) Measures if risks materialise (Art. 27(1)(f))

**Internal governance:**

—

**Complaint mechanism (Art. 26(11)):**

—

**Notification obligations:**

—

---

## Notification to the market surveillance authority

Per Article 27(3), upon completion of this FRIA, the deployer must notify the market surveillance authority of the results using the template that the AI Office is to publish (Article 27(5)).

If the AI system has already been used and a previous FRIA covered the elements above, no new FRIA is required unless any of the elements change (Article 27(2)).

---

## Relationship to GDPR DPIA

Where the deployer is also a controller required to carry out a Data Protection Impact Assessment under Article 35 GDPR, the FRIA shall complement (not replace) that DPIA (Article 27(4)).

---

## Primary sources

- Regulation (EU) 2024/1689 (EU AI Act) — https://eur-lex.europa.eu/eli/reg/2024/1689/oj
- Article 27 (FRIA) — https://artificialintelligenceact.eu/article/27/
- Article 26 (deployer obligations) — https://artificialintelligenceact.eu/article/26/
- Article 13 (transparency to deployers) — https://artificialintelligenceact.eu/article/13/
- GDPR Article 35 (DPIA) — https://eur-lex.europa.eu/eli/reg/2016/679/oj

---

_This FRIA template is provided by aicompliancevendors.com. It is a starting point and not legal advice. Verify the analysis with counsel and the relevant market surveillance authority before relying on it._

Who has to do a FRIA?

Article 27(1) names two categories of deployer that must complete a FRIA:

  1. Bodies governed by public law, and private entities providing public services, that deploy any high-risk AI system listed in Annex III (other than the critical-infrastructure category in point 2).
  2. Deployers of high-risk AI systems used for credit-worthiness evaluation or credit scoring (Annex III point 5(b)) or for risk assessment and pricing in life and health insurance (Annex III point 5(c)).

The obligation applies to deployers — the legal or natural person using the system under their authority — not to providers. Providers carry the parallel Article 9 risk-management obligation, which has different mechanics.
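
Expressed as code, the applicability test is a two-limb check. A minimal TypeScript sketch follows, with the caveat that the deployer-kind labels and Annex III point strings are illustrative shorthand, not an official taxonomy:

```ts
type DeployerKind =
  | "public-law-body"        // body governed by public law
  | "private-public-service" // private entity providing public services
  | "private-sector";        // everyone else

// annexIIIPoint is the system's Annex III classification as a plain
// string, e.g. "1", "2", "5(b)", "5(c)".
function friaRequired(deployer: DeployerKind, annexIIIPoint: string): boolean {
  // Second limb: credit scoring (5(b)) and life/health insurance risk
  // assessment and pricing (5(c)) trigger the FRIA for any deployer.
  if (annexIIIPoint === "5(b)" || annexIIIPoint === "5(c)") {
    return true;
  }
  // First limb: public-law bodies and private entities providing public
  // services, for any Annex III system except the critical-infrastructure
  // category (point 2).
  const isPublicSide =
    deployer === "public-law-body" || deployer === "private-public-service";
  return isPublicSide && annexIIIPoint !== "2";
}
```

Treat a check like this as a first-pass filter; confirm borderline classifications with counsel.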

What Article 27(1) requires

The FRIA must contain:

  • (a) a description of the deployer's processes in which the high-risk AI system will be used in line with its intended purpose;
  • (b) a description of the period of time within which, and the frequency with which, each high-risk AI system is intended to be used;
  • (c) the categories of natural persons and groups likely to be affected by its use in the specific context;
  • (d) the specific risks of harm likely to have an impact on those categories, taking into account the information given by the provider pursuant to Article 13;
  • (e) a description of the implementation of human oversight measures, according to the instructions for use;
  • (f) the measures to be taken in case those risks materialise, including the arrangements for internal governance and complaint mechanisms.

The generator follows that structure exactly — each section maps to a sub-paragraph, and the output keeps the same lettering so reviewers can cross-reference it against the regulation directly.
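
For illustration, a rendering step in that spirit might look like the following TypeScript sketch; the heading strings mirror the live preview above, and the function is invented for this example:

```ts
// One markdown section per Article 27(1) sub-paragraph, keeping the
// regulation's own lettering.
const SECTION_HEADINGS: Record<string, string> = {
  a: "(a) Deployer processes in which the high-risk AI system is used (Art. 27(1)(a))",
  b: "(b) Period of time and frequency (Art. 27(1)(b))",
  c: "(c) Categories of natural persons and groups likely to be affected (Art. 27(1)(c))",
  d: "(d) Specific risks of harm to those categories (Art. 27(1)(d))",
  e: "(e) Human oversight measures (Art. 27(1)(e))",
  f: "(f) Measures if risks materialise (Art. 27(1)(f))",
};

function renderSection(letter: string, body: string): string {
  // Empty input falls back to the "—" placeholder used in the preview.
  return `## ${SECTION_HEADINGS[letter]}\n\n${body.trim() || "—"}\n`;
}
```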

Notification and updates

Article 27(3) requires the deployer to notify the market surveillance authority of the results of the assessment by submitting the filled-out template referred to in Article 27(5). The AI Office is mandated to develop that template; an early draft was circulated in late 2025. Where any of the elements above change during the use of the high-risk AI system, the deployer must take the necessary steps to update the information.

FRIA vs. DPIA

Article 27(4) addresses the overlap with GDPR Article 35 directly: where any of the obligations under Article 27 is already met through a Data Protection Impact Assessment conducted under Article 35 GDPR, the FRIA shall complement that DPIA. In practice this means organisations that already run a DPIA process can extend that workflow to cover the additional FRIA elements (categories of affected persons, fundamental-rights risks, and human-oversight implementation) rather than building a separate process. The generator output is structured so it can be appended to an existing DPIA without reformatting.
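
A trivial sketch of that append step, assuming both documents are held as markdown strings (the function name is ours, not the tool's API):

```ts
// Keep the DPIA as the primary record; the FRIA complements it
// (Article 27(4)), separated by a thematic break.
function attachFriaToDpia(dpia: string, fria: string): string {
  return [dpia.trimEnd(), "---", fria.trimStart()].join("\n\n");
}
```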

Limitations of this generator

The generator produces a draft based on your inputs. It does not verify that your system is actually high-risk under Annex III, it does not file your Article 27(3) notification, and it does not capture the provider's Article 13 information for you — you must request that information from the provider and incorporate it into section (d). For borderline cases or where the FRIA forms part of regulatory submissions, review with counsel and your data protection officer is non-negotiable.

What to do next

A FRIA is one of three documents most deployers of high-risk AI will need in 2026, alongside an Article 9 risk register and an Article 6 classification record. Pair this generator with our AI risk register (mapped to NIST AI RMF and ISO 42001) and our EU AI Act classifier.
