AI auditors & assurance firms

Independent third parties that audit AI systems for bias, accuracy, explainability, and regulatory compliance. Several jurisdictions now mandate such audits; notably, New York City's Local Law 144 requires an independent bias audit of automated employment decision tools before deployment.
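To make concrete what a NYC Local Law 144 bias audit quantifies, the sketch below computes per-group selection rates and impact ratios (each group's rate divided by the most-selected group's rate) for a hypothetical automated screening tool. The group names and decision data are invented for illustration, not drawn from any real audit.

```python
def impact_ratios(outcomes):
    """Compute per-group selection rates and impact ratios.

    `outcomes` maps each demographic group to a list of binary
    hiring-tool decisions (1 = selected, 0 = not selected). The
    impact ratio divides a group's selection rate by the rate of
    the most-selected group, the core metric reported in NYC
    Local Law 144 bias audits.
    """
    rates = {g: sum(d) / len(d) for g, d in outcomes.items() if d}
    top = max(rates.values())
    return {g: (rate, rate / top) for g, rate in rates.items()}

# Hypothetical decisions from an automated screening tool.
data = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 5/8 selected
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 selected
}
for group, (rate, ratio) in impact_ratios(data).items():
    print(f"{group}: selection rate {rate:.3f}, impact ratio {ratio:.2f}")
```

An impact ratio well below 1.0 for any group flags potential adverse impact; auditors commonly read these figures against the EEOC's four-fifths rule of thumb.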

BSI Group

London, UK · Founded 1901

BSI Group is a global standards body and certification firm, accredited to issue ISO/IEC 42001 AI Management System certifications — one of the first accredited bodies globally for this standard.

ISO/IEC 42001 certification · ISO/IEC 27001 · ISO 9001 · AI assurance training

Accredited: UKAS-accredited ISO/IEC 42001 certification body, ISO/IEC 27001, ISO 9001

Accenture Responsible AI

Dublin, Ireland

Enterprise-wide responsible AI governance consulting, testing, and monitoring at scale. Accenture's Responsible AI practice is anchored by seven guiding principles (fairness, accountability, transparency, safety, compliance, human-by-design, and sustainability) and a formal enterprise Responsible AI compliance program implemented across the firm's 742,000 employees in 2022. The practice offers AI risk assessments, governance framework design, systematic enablement of responsible AI testing, and ongoing monitoring and compliance management. In May 2024, Accenture appointed Arnab Chakraborty as its first Chief Responsible AI Officer. **Notable work:** Implemented enterprise-wide Responsible AI compliance program (2022) across 742,000 employees; appointed first Chief Responsible AI Officer (Arnab Chakraborty, May 2024); launched Accenture Responsible AI Platform on AWS (August 2024)

AI governance framework design and implementation · AI risk assessments and regulatory readiness · Responsible AI testing and monitoring · EU AI Act compliance and advisory

Accredited: eu-ai-act, nist-ai-rmf, gdpr

BABL AI

Iowa City, United States · Founded 2018

Independent algorithmic audits and certifications ensuring global AI regulatory compliance. Founded in 2018 by Dr. Shea Brown, BABL AI is a global algorithmic auditing firm offering independent third-party audits, ISO/IEC 42001 certification, EU AI Act conformity assessments, NYC Local Law 144 bias audits, NIST AI RMF readiness assessments, and AI auditor training. BABL's audit methodology aligns with the international assurance standards (ISAE 3000) used by Big Four firms, and the firm has been recognized in academic research from the Centre for the Governance of AI and the University of Cambridge as a credible frontier AI compliance reviewer. **Notable work:** Recognized by a Cambridge/Oxford Martin study as a qualified frontier AI compliance reviewer; founding member of the International Association of Algorithmic Auditors (IAAA); first to publish recommendations to the European Commission on DSA/AIA audit methodology

NYC Local Law 144 bias audits · EU AI Act conformity assessments · ISO/IEC 42001 certification · NIST AI RMF readiness

Accredited: eu-ai-act, nist-ai-rmf, iso-iec-42001, nyc-local-law-144

Boston Consulting Group Responsible AI

Boston, United States

ISO 42001-certified RAI consulting across strategy, governance, testing, and culture. BCG's Responsible AI practice offers a battle-tested five-pillar RAI framework covering strategy, governance, key processes, technology, and culture—delivered through RAI maturity assessments, bias-testing frameworks, GenAI evaluator tools (ARTKIT), and AI impact transparency tools (FACET). In January 2026, BCG became one of the first 100 organizations worldwide—and the only premium consulting firm—to achieve ISO/IEC 42001 certification for its AI Management System. Chief AI Ethics Officer Steven Mills leads the practice. **Notable work:** One of first 100 organizations worldwide—and only premium consulting firm—to achieve ISO/IEC 42001 certification (January 2026); developed open-source ARTKIT GenAI evaluation library; BCG and MIT Sloan Management Review joint study on GenAI and responsible AI maturity

Responsible AI strategy and governance frameworks · GenAI safety evaluation (ARTKIT) · AI bias testing and fairness frameworks · ISO/IEC 42001 AI management system implementation

Accredited: iso-iec-42001, eu-ai-act, nist-ai-rmf

Credo AI

San Francisco, United States · Founded 2020

Enterprise AI governance platform enabling continuous, contextual compliance and audit. Founded in 2020, Credo AI is an enterprise AI governance platform named a Leader in the Forrester Wave™ for AI Governance Solutions (Q3 2025) and recognized in Gartner's Market Guide for AI Governance Platforms (2025). The platform offers pre-built policy packs for the EU AI Act, NIST AI RMF, ISO/IEC 42001, and SOC 2, with automated evidence generation, shadow AI discovery, continuous risk monitoring, and audit-ready documentation. Clients include Mastercard and Principal Financial Group. **Notable work:** Named Leader in Forrester Wave™: AI Governance Solutions Q3 2025 with 12 perfect scores; recognized in Gartner Market Guide for AI Governance Platforms 2025; World Economic Forum Technology Pioneer; actively contributed to EU AI Act, NIST AI RMF, and ISO 42001 frameworks; Mastercard and Principal Financial Group deployments

EU AI Act and NIST AI RMF compliance automation · AI registry and shadow AI discovery · Continuous bias, security, and drift monitoring · AI governance for financial services and healthcare

Accredited: eu-ai-act, nist-ai-rmf, iso-iec-42001, gdpr

Crowe AI Risk and Governance

Chicago, United States

Internal audit-led AI risk management, governance frameworks, and enterprise controls. Crowe's AI risk and governance advisory practice helps organizations manage AI risks through internal audit support, governance framework design, risk appetite definition, and enterprise AI controls. The firm's consulting partners—including Crystal Jareske—advise clients on establishing clear AI ownership, building well-documented AI governance programs, and aligning AI controls with enterprise risk management frameworks. Crowe Global has also noted plans to expand AI governance and cybersecurity management services in 2026. **Notable work:** Published step-by-step AI risk management guidance (2025); Crystal Jareske (Crowe consulting partner) featured in Dallas Business Journal on internal audit and AI governance; Crowe Global noted AI Governance and Cybersecurity Management services planned for 2026

Internal audit support for AI governance · AI risk appetite and control framework design · Enterprise AI lifecycle risk management · AI governance and cybersecurity integration

Accredited: nist-ai-rmf

Deloitte Trustworthy AI

New York, United States

Embedding trust across the AI lifecycle with a multidimensional framework. Deloitte's Trustworthy AI™ practice spans seven trust dimensions—transparent, fair, robust, privacy-respecting, safe, secure, and accountable—embedded across strategy, governance, model risk management, and engineering. The practice offers AI Audit and Assurance services alongside regulatory advisory, AI model risk management, and agentic AI governance. Deloitte is ranked #1 globally in Security Consulting by Gartner and a Leader in Worldwide AI Services by IDC. **Notable work:** Developed Trustworthy AI™ framework spanning seven trust dimensions; aligned practice to the US AI Bill of Rights and California's SB 53 frontier AI law; ranked #1 in Security Consulting globally by Gartner

AI governance and regulatory support · AI model risk management · Algorithmic bias and fairness reviews · Agentic AI trust and safety

Accredited: nist-ai-rmf, eu-ai-act, iso-iec-42001, gdpr

EY AI Assurance

London, United Kingdom

Human-led and AI-powered assurance spanning governance, risk, controls, and client diagnostics. EY's AI assurance practice offers diagnostics, governance assessments, risk management, and controls services to help clients navigate AI-enabled transformations responsibly. The suite is backed by EY's own deployment of responsible AI across 160,000 global audit engagements on the EY Canvas platform. EY has joined the Stanford University Institute for Human-Centered Artificial Intelligence Industrial Affiliates Program and is a recognized 'Frontier Firm' in Microsoft's Frontier Firm AI Initiative. **Notable work:** Launched enterprise-scale agentic AI across 160,000 global audit engagements; named Frontier Firm by Microsoft/Harvard Digital Data Design Institute; joined Stanford HAI Industrial Affiliates Program

AI governance and risk management assessments · Independent AI assurance and controls reviews · Responsible AI diagnostics · Agentic AI oversight

Accredited: nist-ai-rmf, eu-ai-act, iso-iec-42001, gdpr

Eticas

Barcelona, Spain · Founded 2012

Socio-technical AI audits combining algorithm review with community impact analysis. Founded in 2012 by Dr. Gemma Galdon-Clavell, Eticas has pioneered independent AI assurance using a socio-technical auditing methodology that examines both model behavior and real-world social impact. The firm works across four continents with governments, regulators, corporations, and civil society, and delivers both consultancy and software. Notable adversarial audits include assessments of YouTube and TikTok migration content algorithms, ride-hailing apps in Spain (Uber, Cabify, Bolt), and the RisCanvi criminal justice AI used in Catalonia. **Notable work:** Adversarial audits of YouTube and TikTok migration content algorithms; audit of ride-hailing apps (Uber, Cabify, Bolt) in Spain; audit of RisCanvi criminal justice AI in Catalonia; founder Dr. Galdon-Clavell advises international institutions and has delivered TED and TechCrunch talks

Socio-technical algorithmic audits · Adversarial audits of platform algorithms · Community-led AI accountability audits · EU AI Act and Digital Services Act readiness

Accredited: eu-ai-act, gdpr

Eticas.ai

Barcelona, Spain · Founded 2015

Eticas.ai is a European algorithmic audit firm with a strong track record in public-sector and fundamental-rights-sensitive AI audits.

Algorithmic audits · Fundamental rights impact assessments · EU AI Act conformity support

Accredited: recognized by the Spanish Data Protection Agency (AEPD); EU research partnerships

ForHumanity

Armonk, United States · Founded 2016

Open-source certification schemes and auditor credentialing for independent AI audit. Founded in November 2016 by Ryan Carrier, ForHumanity is a 501(c)(3) non-profit building an 'infrastructure of trust' for AI through its Independent Audit of AI Systems (IAAIS) certification schemes. The organization produces binary (compliant/non-compliant) audit criteria mapped to GDPR, the EU AI Act, NYC Local Law 144, the Children's Code, and other laws, and trains and certifies ForHumanity Certified Auditors (FHCAs) who then conduct third-party audits for deploying organizations. Hundreds of global volunteers contribute to its crowdsourced criteria. **Notable work:** Developed first-of-kind IAAIS audit manual; submitted recommendations to the UK ICO and EU regulators on AI audit infrastructure; made independent AI audit a primary focus as early as 2017, before major regulatory frameworks existed; ForHumanity FHCA certification recognized by the UK Information Commissioner's Office

Independent Audit of AI Systems (IAAIS) certification schemes · ForHumanity Certified Auditor (FHCA) credentialing · EU AI Act audit criteria development · GDPR and children's data AI compliance audits

Accredited: eu-ai-act, gdpr, nyc-local-law-144

Grant Thornton AI Risk Advisory

Chicago, United States

Board-level AI governance oversight, internal audit advisory, and NIST AI RMF application. Grant Thornton's Risk Advisory practice advises boards, audit committees, and management on AI governance, risk management, and responsible AI programs. The firm publishes guidance on applying the NIST AI RMF to generative AI, assists organizations in building enterprise-wide AI risk strategies, and helps internal audit functions develop AI-specific audit approaches. Key practice leaders include Vikrant Rai (Cybersecurity and AI risk) and Will Whatton (Technology Modernization and AI data capabilities). **Notable work:** Published 'How to apply the NIST risk framework to GenAI' (2024); hosted 'Governing AI with Confidence' webinar for audit and risk leaders (March 2026); published 'Seven AI questions used by leading boards'

Board and audit committee AI governance advisory · NIST AI RMF application to generative AI · Internal audit AI risk program development · AI data strategy and governance

Accredited: nist-ai-rmf, eu-ai-act

Holistic AI

London, United Kingdom · Founded 2020

End-to-end AI governance platform with bias, privacy, and robustness audits. Founded in 2020 by Dr. Adriano Koshiyama and Dr. Emre Kazim at University College London, Holistic AI provides an enterprise AI governance platform and third-party AI audit services covering bias, efficacy, robustness, privacy, and explainability. The firm has completed 200+ AI audits (including a program for Unilever spanning 300+ AI initiatives with 50% risk mitigation outcomes) and offers regulation-specific assessments for the EU AI Act, NYC Local Law 144, and ISO/IEC 42001. Holistic AI's founders collaborate with the NIST AI Safety Institute, the UN AI Advisory Body, and the EU AI Act GPAI Code of Practice working groups. **Notable work:** Completed 200+ AI audits; Unilever engagement spanning 300+ AI initiatives with 50% risk mitigation rate; founders active in NIST AI Safety Institute, UN AI Advisory Body, OECD Network of Experts on AI, and EU AI Act GPAI Code of Practice

Algorithmic bias and fairness audits · EU AI Act conformity assessments · ISO/IEC 42001 compliance · AI robustness and privacy testing

Accredited: eu-ai-act, iso-iec-42001, nist-ai-rmf, nyc-local-law-144

KPMG AI Assurance

New York, United States

Trusted AI framework powering gap assessments, model validation, and attestation. KPMG's AI Assurance practice, launched in September 2025, provides AI model risk assessments, model validation, real-time systems assessments (RTSA), and formal AI assurance and attestation against standards including SOC, FedRAMP, SWIFT, and HITRUST. The practice builds on KPMG's broader AI Trust services—covering governance frameworks, security, regulatory compliance, and AI inventory—all mapped to KPMG's Ethics and Trusted AI Framework. KPMG has also helped Microsoft develop and enhance its responsible AI tools and Responsible AI program. **Notable work:** Expanded AI Trust services with new AI Assurance capabilities in September 2025; helped Microsoft develop and enhance its Responsible AI program for partners and customers

AI model risk assessment and validation · AI assurance and attestation (SOC, FedRAMP, HITRUST) · Real-time AI systems assessments · ISO/IEC 42001 AI management system governance

Accredited: nist-ai-rmf, eu-ai-act, iso-iec-42001, sr-11-7

Luminos.Law (now ZwillGen AI Division)

Washington, United States · Founded 2019

Legal and technical AI fairness audits, red-team testing, and governance counsel. Founded in 2019 as BNH.AI and rebranded as Luminos.Law, this Washington, DC boutique was the first law firm jointly run by lawyers and data scientists specializing in comprehensive AI audits. In January 2025, Luminos.Law was acquired by ZwillGen to launch ZwillGen's AI Division, combining AI fairness audits, red-team testing, AI governance policies, data de-identification certifications, and generative AI risk management with established legal and policy guidance. **Notable work:** First law firm to specialize in comprehensive AI audits for global enterprises; acquired by ZwillGen in January 2025 to launch dedicated AI Division combining legal counsel with technical audit capabilities

Privileged AI fairness audits and certifications · Generative AI red-team testing · AI governance policies and procedures · Data de-identification assessments

Accredited: eu-ai-act, nist-ai-rmf, gdpr, nyc-local-law-144

NCC Group AI Security

Manchester, United Kingdom

AI/ML security assessments combining penetration testing expertise with governance reviews. NCC Group's AI security practice offers AI readiness assessments, AI/ML threat modeling, bias and toxicity testing, secure development lifecycle testing, red teaming (including OWASP LLM Top 10 methodology), and cloud security reviews for AI/ML infrastructure. The practice maps its work to ISO 42001, the NIST AI Risk Management Framework, and the EU AI Act. NCC Group has conducted AI security research for Google (AI hardware security, 2024) and is recognized as a Strong Performer in the Forrester Wave™: Cybersecurity Consulting Services in Europe, Q1 2024. **Notable work:** Conducted AI hardware security analysis for Google (April–May 2024); Strong Performer in Forrester Wave™ Cybersecurity Consulting Services in Europe Q1 2024; published AI/ML threat model analysis whitepaper

AI/ML red teaming and adversarial testing · AI bias and toxicity assessments · AI/ML secure development lifecycle reviews · AI readiness and governance framework alignment

Accredited: eu-ai-act, nist-ai-rmf, iso-iec-42001

ORCAA

New York, United States · Founded 2016

Algorithmic audits rooted in fairness, accountability, bias measurement, and governance. Founded in 2016 by Cathy O'Neil (author of Weapons of Math Destruction), ORCAA is a leading algorithmic auditing consultancy offering comprehensive audit reports, quantitative bias analyses via its patent-pending Pilot platform, AI governance consulting, and NYC Local Law 144 hiring-algorithm bias audits. ORCAA is an inaugural member of the US AI Safety Institute Consortium and has audited systems for Uber, Olay, and healthcare entities under ONC's HTI-1 Final Rule in partnership with the Duke Institute for Health Innovation. **Notable work:** Published algorithmic audit of Uber's AI governance; audited Olay Skin Advisor; founding member of US AI Safety Institute Consortium; Cathy O'Neil presented at NIST AI RMF workshop and Senator Schumer's AI Insight Forum

Algorithmic bias audits · NYC Local Law 144 hiring-tool audits · AI governance and risk management consulting · Healthcare predictive-model compliance (HTI-1)

Accredited: nist-ai-rmf, nyc-local-law-144, sr-11-7

Protiviti AI Governance

Menlo Park, United States

Responsible AI governance frameworks embedded across the full enterprise AI lifecycle. Protiviti's AI governance practice helps organizations build, govern, and manage AI responsibly by embedding governance, security, and risk management across the AI lifecycle. Services span AI governance framework design, model testing and security reviews, responsible AI adoption frameworks aligned to the NIST AI RMF, and internal audit support for AI risk. Protiviti has partnered with AuditBoard to deliver AI-driven GRC solutions and built a responsible AI governance framework for a major hospitality company. **Notable work:** Partnered with AuditBoard to deliver AI-driven GRC solutions (October 2025); built responsible AI governance framework for a major hospitality company; published 'Enabling Enterprise AI Adoption Through Next-Generation Governance' whitepaper

AI governance framework design · AI model testing and security reviews · Internal audit support for AI risk · NIST AI RMF-aligned governance programs

Accredited: nist-ai-rmf, eu-ai-act

PwC Responsible AI

New York, United States

First to market with AICPA-standard independent assurance over AI systems and governance. PwC's Assurance for AI is performed under AICPA standards and provides independent assurance over AI governance, oversight, and operation—addressing bias, model drift, security, and third-party risk. The service can be aligned with the NIST AI RMF, ISO 42001, the EU AI Act, and other leading frameworks, and is produced at intervals suited to stakeholder needs. PwC also offers broader Responsible AI advisory spanning governance program assessments, SOX-relevant AI controls reviews, and regulatory readiness. **Notable work:** Launched 'Assurance for AI'—described as a first-to-market solution providing formal independent assurance over AI systems under AICPA standards

Independent AI assurance under AICPA standards · AI governance program evaluation · SOX-relevant AI controls reviews · EU AI Act regulatory readiness

Accredited: nist-ai-rmf, eu-ai-act, iso-iec-42001, gdpr

RSM AI Risk and Governance

Chicago, United States

Proprietary AI Governance Framework for responsible adoption in the middle market. RSM US offers comprehensive AI governance consulting services through its proprietary, continuously evolving AI Governance Framework, which incorporates elements from the NIST AI RMF, ISO/IEC 42001, COSO, and other best-practice frameworks. Services include AI governance and strategy risk assessments, control design, monitoring program development, and audit-readiness preparation. RSM has also published detailed analysis of COSO's generative AI guidance and its implications for internal control. RSM's 4,000+ assurance professionals use the firm's AI-powered RSM Luca audit ecosystem. **Notable work:** Published COSO GenAI governance analysis linking AI risk to internal control framework (2026); launched Ask Luca GenAI tool across 4,000+ assurance professionals (January 2026); committed $1 billion over three years to AI strategy and digital transformation

AI governance and strategy risk assessments · Proprietary AI Governance Framework (multi-standard) · GenAI internal control program design · AI audit-readiness and evidence preparation

Accredited: nist-ai-rmf, iso-iec-42001

Responsible AI Institute

United States · Founded 2016

Standards-aligned third-party AI verification, independent badging, and enterprise governance frameworks. Founded in 2016, the Responsible AI Institute (RAI Institute) is an independent non-profit providing third-party assurance through its TrustX and OMA verification programs, which are aligned to 17 global standards including ISO/IEC 42001, the NIST AI RMF, and the EU AI Act. Rather than issuing certifications, RAI Institute issues independently verified badges covering AI security, governance, regulatory compliance, workforce impact, and sustainability. Enterprise membership (from $50,000/year) includes access to AI governance frameworks, working groups, and co-created thought leadership. **Notable work:** TrustX program aligned to 17 global standards; RAISE Pathways program powered by 1,100+ AI controls; member of World Economic Forum Global AI Action Alliance (GAIA); only independent non-profit providing third-party AI assurance verification

Third-party AI verification and badging (TrustX, OMA) · AI governance frameworks and tools · NIST AI RMF and ISO 42001 alignment · EU AI Act readiness assessment

Accredited: eu-ai-act, nist-ai-rmf, iso-iec-42001

Trail of Bits AI/ML Assurance

New York, United States

Security-first AI assurance combining threat modeling, red teaming, and safety research. Trail of Bits launched its ML/AI assurance practice in 2023, bringing together safety and security methodologies to evaluate potential risks and determine necessary safety measures for AI-based systems. Services include MLOps pipeline assessments, AI risk assessments using operational design domains, model capability evaluations, AI red teaming, and security training. Trail of Bits has audited AI agents for clients including Perplexity and participated in DARPA's AI Cyber Challenge for automated vulnerability detection. **Notable work:** Launched ML/AI assurance practice in 2023; audited Perplexity Comet browser AI agent (discovered prompt injection techniques enabling Gmail data exfiltration); participated in DARPA AI Cyber Challenge; submitted response to OSTP National Priorities for AI RFI

AI/ML security and safety audits · MLOps pipeline vulnerability assessments · AI red teaming and adversarial capability evaluation · AI risk framework evaluation

Accredited: nist-ai-rmf