For HR leaders, employment counsel, and AI compliance officers at organizations that screen, rank, or promote candidates using algorithmic tools. NYC Local Law 144 (effective July 5, 2023) was the first law in the world to mandate annual independent bias audits of automated employment decision tools (AEDTs), with civil penalties of $500–$1,500 per violation per day. A December 2025 audit by the New York State Office of the State Comptroller found that 17 of 32 reviewed companies showed potential non-compliance — nearly 17× the rate DCWP had self-identified — signaling sharper enforcement ahead. Beyond New York City, the Colorado AI Act (effective February 1, 2026) classifies employment as a "consequential decision" category requiring risk management programs and annual impact assessments; California FEHA regulations (effective October 1, 2025) apply existing anti-discrimination law directly to algorithmic hiring tools; Illinois and Maryland have active AEDT-specific disclosure laws; and EEOC guidance makes clear that Title VII applies regardless of whether a human or an algorithm made the decision. This page evaluates the tools and independent auditors best positioned to help organizations navigate this expanding patchwork — from one-time bias audits required for immediate LL 144 compliance to continuous governance platforms built for the multi-jurisdiction future.
Last verified April 25, 2026
Editorial independence: aicompliancevendors.com does not accept vendor payment for inclusion or ranking. Every pick below is editor-selected against the criteria stated on this page, and every factual claim is traceable to a cited public source.
Organizations wanting a single vendor for LL 144 audit support, documentation automation, and continuous bias monitoring across the full HR tech stack
Organizations and HR tech vendors that need a credentialed, independent third-party LL 144 bias audit with a documented auditor framework comparable to financial audit standards
Employers and HR tech vendors seeking an independent bias audit from the algorithmic auditing firm that helped define the field, with inference-based demographic estimation when applicant demographic data is unavailable
HR teams and HR tech vendors wanting a purpose-built AI governance platform that automates LL 144 bias audits — including with synthetic data when real applicant data is unavailable — while tracking 25+ global regulations from a single dashboard
Regulated-industry teams and HR tech vendors that need end-to-end AI GRC with private-cloud or on-premises deployment options and explicit LL 144 bias audit services
Enterprises that want a Forrester Wave Leader governance platform with a pre-built NYC LL 144 Policy Pack that translates regulatory obligations into automated controls, evidence generation, and cross-jurisdiction reuse
How we decided which vendors qualify for inclusion.
Documented, specific NYC Local Law 144 service or workflow on the vendor's own product or service page — generic "AI fairness" claims are not accepted as evidence of scope.
Covers the three LL 144 compliance pillars: independent bias audit (selection rates and impact ratios by sex and race/ethnicity, including intersections), compliant public-summary publication, and candidate notice workflow.
Auditor independence: for pure audit firms, the firm must have no employment relationship or financial interest in the employer using the AEDT. For platform vendors offering audit support, the platform must explicitly acknowledge the independence requirement and note when a third-party signoff is still needed.
Multi-jurisdiction reach or evidence of expanding regulatory coverage (Colorado AI Act, EEOC, FEHA, Illinois AI Video Interview Act) — the US employment AI landscape will demand more than a single-law audit within 24 months.
Active service delivery: audit engagements or platform features shipped or updated within the 12 months preceding April 2026.
Independent validation: at least one external reference (academic citation, named customer, government case study, or analyst recognition) that is not produced by the vendor itself.
Each vendor's NYC LL 144 product or service page was reviewed directly; marketing copy alone was not accepted as evidence of capability. For audit firms, published audit reports were reviewed where publicly available. For platform vendors, product documentation and case studies were evaluated for specificity of LL 144 workflow coverage. Ranking reflects a combination of: depth of LL 144 workflow support, demonstrated audit track record (for audit firms) or platform maturity (for software vendors), independence posture, and multi-jurisdiction coverage breadth. The Holistic AI, BABL AI, and ORCAA positions reflect their distinct specialization in LL 144 audits as part of a broader algorithmic audit practice; the FairNow, Fairly AI, and Credo AI positions reflect purpose-built or dedicated HR AI governance platform capabilities. No paid placement has influenced ranking or inclusion.
Best for: Organizations wanting a single vendor for LL 144 audit support, documentation automation, and continuous bias monitoring across the full HR tech stack
Holistic AI operates a dedicated NYC Bias Audit Solution that automates the end-to-end LL 144 compliance workflow: data ingestion and cleaning, impact ratio calculation by sex, race/ethnicity, and their intersections, flagging of disparities against the four-fifths heuristic, and generation of the compliant public summary required for website posting. The platform produces all required deliverables — selection rates by protected class, the ready-to-publish audit summary, and the candidate notice template — in one documented workflow, reducing the annual "compliance scramble" the firm says dominates most in-house programs. Holistic AI explicitly notes that LL 144 requires auditor independence and advises clients that a third-party sign-off may still be needed; that transparency prevents buyers from mistaking the platform for the audit itself. Beyond LL 144, Holistic AI's broader HR governance offering covers regulatory mapping across jurisdictions (including EU AI Act, NIST AI RMF, and ISO 42001), continuous LLM drift detection, and an HR tech inventory and risk assessment workflow that satisfies the annual-review requirement in Colorado SB 205. The UCL research grounding for its bias methodology and its April 2026 Runtime Agentic Monitoring launch distinguish it from pure-GRC platforms that have added AI compliance modules. No public pricing; enterprise-only engagement.
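The core arithmetic any such workflow automates is compact enough to illustrate. Below is a minimal sketch for a pass/fail (categorical) AEDT with hypothetical data — this is not Holistic AI's implementation, just the selection-rate and impact-ratio math LL 144 prescribes, with a four-fifths flag:

```python
# Illustrative LL 144 categorical-AEDT math (hypothetical data,
# not any vendor's production code).
from collections import Counter

# (category, selected?) pairs from a pass/fail screening tool
records = [
    ("Male", True), ("Male", True), ("Male", False), ("Male", True),
    ("Female", True), ("Female", False), ("Female", False), ("Female", True),
]

totals = Counter(cat for cat, _ in records)
selected = Counter(cat for cat, sel in records if sel)

# Selection rate per category: selected / total assessed
rates = {cat: selected[cat] / totals[cat] for cat in totals}

# Impact ratio: each category's rate divided by the highest rate
best = max(rates.values())
impact = {cat: rates[cat] / best for cat in rates}

# Four-fifths heuristic: flag categories below 0.8 — a screening
# threshold, not a legal pass/fail line under LL 144
flagged = [cat for cat, r in impact.items() if r < 0.8]

print(rates)    # Male 0.75, Female 0.50
print(impact)   # Male 1.0, Female ~0.667
print(flagged)  # ['Female']
```

A production workflow layers data cleaning, intersectional categories (sex × race/ethnicity), and report generation on top of this same calculation.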
Strengths
Automated LL 144 workflow: data cleaning, impact ratios, public summary, and candidate notice in one platform.
Transparent about auditor independence limits — explicitly notes third-party sign-off may be required.
Multi-jurisdiction HR governance (LL 144, Colorado SB 205, EU AI Act) from a single platform, with continuous monitoring between annual audits.
Limitations
Not itself an independent auditor under LL 144 — clients who need a standalone independent audit must engage a separate firm.
Enterprise-only pricing with no public rates; evaluation requires a sales conversation.
Best for: Organizations and HR tech vendors that need a credentialed, independent third-party LL 144 bias audit with a documented auditor framework comparable to financial audit standards
BABL AI is one of the most active independent LL 144 auditors in the market, with named public audit reports for vendors including Eightfold AI (March 2026) and HackerRank (2024) — both audited under the firm's Criterion Audit Framework, which applies Public Company Accounting Oversight Board (PCAOB) auditing standard AS 1105 on Audit Evidence to algorithmic systems. Lead auditors hold ForHumanity certification specifically for NYC AEDT Bias Audits, satisfying the independence requirement DCWP enforces. The three-section audit scope (disparate impact quantification, governance, and risk assessment) goes beyond the LL 144 minimum of statistical disparate impact analysis, producing Pass/Minor Remediation/Fail opinions that are defensible to regulators. No software download or platform integration is required of the auditee — the evidence-portal workflow typically completes in two to three weeks after documentation submission, a material advantage for employers facing an urgent audit deadline. BABL AI research (co-authored by firm principals) was published at ACM FAccT 2024, providing independent academic validation of the audit methodology. BABL AI's lead auditor Shea Brown is also the firm's CEO and a University of Iowa faculty member; the firm's research has received funding from IBM's Tech Ethics Lab and Notre Dame Research Center.
Strengths
Fully independent third-party audit with PCAOB-standard evidence methodology and ForHumanity-certified lead auditors — satisfies LL 144's independence requirement without relying on employer self-attestation.
Named public audit reports (Eightfold AI, HackerRank) confirm active delivery; audit framework peer-reviewed at ACM FAccT 2024.
No software integration required; documentation-portal workflow completes in 2–3 weeks, making it viable for urgent timelines.
Limitations
Pure audit firm: does not provide a continuous monitoring platform or multi-jurisdiction governance software between annual audits.
No public pricing; audit cost must be scoped through a free consultation.
Best for: Employers and HR tech vendors seeking an independent bias audit from the algorithmic auditing firm that helped define the field, with inference-based demographic estimation when applicant demographic data is unavailable
ORCAA (O'Neil Risk Consulting & Algorithmic Auditing) was founded by Cathy O'Neil — mathematician, Harvard PhD, and author of the National Book Award semifinalist *Weapons of Math Destruction* — placing it among the most publicly recognized names in algorithmic auditing. ORCAA has offered a dedicated NYC Local Law 144 Bias Audit service since the law came into force in July 2023; when NBC News covered the law's effective date, it quoted Cathy O'Neil directly. The firm's Pilot platform conducts disparate-impact-style and related analyses on real-world data from live deployments or test data, covering both vendor clients building hiring tools and employer clients using third-party tools. A material differentiator: ORCAA's inference methodology can model race/ethnicity and gender even when applicant-level demographic data does not exist — a common gap that blocks many employers from completing LL 144 audits without external help. ORCAA is an inaugural member of the US AI Safety Institute Consortium and has published work in the MIT Sloan Management Review (Summer 2024). Uber has published an ORCAA algorithmic governance report, providing a named external validation. The firm's broader Ethical Matrix framework for Algorithmic Audit encompasses generative AI, automated decision systems, predictive models, and facial recognition, which positions it for the broader US employment AI patchwork beyond LL 144.
Strengths
Independent bias audit from the firm that helped define the field; Cathy O'Neil quoted by NBC News on NYC LL 144's effective date.
Proprietary double-firewall inference methodology enables audits when applicant demographic data is unavailable — a critical capability for data-sparse employers.
US AI Safety Institute inaugural member; published in MIT Sloan Management Review; Uber published ORCAA's algorithmic governance report.
Limitations
Consulting firm model: not a continuous monitoring software platform; governance between audits requires separate tooling.
No public pricing or audit turnaround SLA published; initial scoping required.
Best for: HR teams and HR tech vendors wanting a purpose-built AI governance platform that automates LL 144 bias audits — including with synthetic data when real applicant data is unavailable — while tracking 25+ global regulations from a single dashboard
FairNow built its product around the specific workflows that LL 144 and employment AI regulations create: AI inventory management, centralized bias audit scheduling and reporting, a synthetic data library for audits when historical applicant data is insufficient or the organization declines to share it with a third party, and continuous regulatory monitoring across 25+ jurisdictions including NYC LL 144, Colorado SB 205, EU AI Act, South Korea Basic AI Act, and ISO 42001. The synthetic-data approach is particularly valuable for HR tech vendors whose tools have not yet launched or whose clients cannot provide demographic data — the UK government's AI Assurance Catalogue featured FairNow's synthetic bias evaluation technique as a published case study. FairNow also offers direct third-party bias audit services, positioning it as both the software platform and, optionally, the independent auditor. In October 2025, FairNow was acquired by AuditBoard (now rebranded to Optro), the GRC platform used by over 50% of the Fortune 500 — a material change that significantly expands FairNow's enterprise distribution but also leaves open how the product roadmap will evolve under its new parent. Pre-acquisition, FairNow published case studies for HR tech vendor Humanly. Buyers should confirm current product boundaries during evaluation.
Strengths
Synthetic Data Library enables LL 144 bias audits without requiring real applicant demographic data — solves a blocking problem for pre-launch tools and data-sensitive employers.
25+ global regulation and standard tracking (LL 144, Colorado SB 205, EU AI Act, ISO 42001, and more) in a single platform, with automated compliance evidence.
UK Government AI Assurance Catalogue case study independently validates the synthetic audit methodology.
Limitations
Acquired by AuditBoard (now Optro) in October 2025 — product roadmap and integration timelines under new parent are still evolving; verify current feature boundaries during evaluation.
Best for: Regulated-industry teams and HR tech vendors that need end-to-end AI GRC with private-cloud or on-premises deployment options and explicit LL 144 bias audit services
Fairly AI (operating as Asenion following its June 2025 acquisition of Swedish firm anch.AI) maintains a dedicated HR tech bias audit offering under asenion.ai/hr-tech, covering resume screening, video interview analysis, performance appraisals, and retention prediction — all AEDT categories in scope for LL 144 and related US employment AI laws. The firm positions its "AI-Compliance-in-a-Box" as 4× faster than conventional audits with a 100% compliance success rate across engagements, though these figures are self-reported. Fairly AI's core platform differentiator is deployment flexibility: private-cloud and on-premises options address data residency requirements that prevent some regulated employers from sharing candidate data with SaaS platforms. The broader Asenion platform provides end-to-end AI GRC — fairness testing, privacy testing, security testing, and regulatory compliance — with a three-lines-of-defense workflow built from financial services model risk management conventions. IDC MarketScape recognition in 2023 and 2024, representation in four Gartner AI TRiSM categories, and a live GenAI testing case study on HR recruitment applications (presented with impress.ai at the AI Verify Foundation event in February 2026) provide third-party validation. The anch.AI acquisition and Asenion rebrand create naming discontinuity — confirm current product identity during procurement.
Strengths
Private-cloud and on-premises deployment accommodates data residency requirements that prevent sharing candidate demographic data with external SaaS platforms.
Explicit HR AEDT audit service page covering all major AEDT categories (resume screening, video interviews, performance appraisal, retention prediction).
IDC MarketScape 2023 and 2024 recognition; four Gartner AI TRiSM categories; GenAI HR recruitment testing case study at AI Verify Foundation event (February 2026).
Limitations
Asenion rebrand following anch.AI acquisition creates naming discontinuity in procurement processes — verify current product identity and contractual entity.
Compliance success and speed claims (4× faster, 100% compliance) are self-reported; no published independent audit reports confirmed at time of evaluation.
Best for: Enterprises that want a Forrester Wave Leader governance platform with a pre-built NYC LL 144 Policy Pack that translates regulatory obligations into automated controls, evidence generation, and cross-jurisdiction reuse
Credo AI built one of the earliest dedicated NYC LL 144 Policy Packs in the market, translating the law's requirements into actionable controls within its Responsible AI Governance Platform — including bias assessment, impact ratio and intersectionality analysis via Credo AI Lens (open-source), transparency reporting, candidate notice documentation, and continuous governance workflows. The AdeptID case study (published February 2023 and featured in the UK Government AI Assurance Catalogue) documents a talent-matching startup achieving full LL 144 compliance within two months using the Policy Pack, with Credo AI performing an independent third-party review of the assessment report. Credo AI's Policy Pack library has since expanded to cover Colorado AI Act (separate developer and deployer packs), Utah AI Policy Act, and a consolidated Control Library that maps shared requirements across jurisdictions — enabling evidence reuse across LL 144, Colorado, and EU AI Act without duplicate documentation. The 2025 Forrester Wave AI Governance recognition (Leader with 12 perfect scores) and the 2025 collaboration that made Credo AI Policy Packs the compliance accelerator foundation in IBM watsonx.governance provide strong analyst and ecosystem validation. Enterprise-only; no public pricing. The 2026 Agent Registry for multi-agent and agentic AI governance extends the platform's relevance as HR tech vendors adopt generative AI in hiring workflows.
Strengths
Pre-built NYC LL 144 Policy Pack with automated controls, impact ratio assessment, and evidence generation — AdeptID achieved full compliance in two months; UK Government AI Assurance Catalogue case study independently validates the approach.
Forrester Wave AI Governance Leader (2025, 12 perfect scores); Policy Packs power compliance accelerators in IBM watsonx.governance.
Cross-jurisdiction evidence reuse: LL 144 evidence maps to Colorado AI Act and EU AI Act controls, reducing multi-law compliance burden.
Limitations
Enterprise-only; no self-serve tier and no public pricing — requires a sales conversation to evaluate, which extends procurement cycles.
Platform governance focus: Credo AI is not itself an LL 144 independent auditor; buyers needing a standalone third-party audit report must engage an audit firm separately.
Criteria-based recommendations for the most common shortlist scenarios.
The single most urgent decision for any employer using algorithmic tools in hiring or promotion in New York City is whether an independent bias audit has been completed within the past 12 months — if not, exposure is active. For pure audit needs, BABL AI and ORCAA are the two auditors in this list with the clearest public track records of completed LL 144 reports; both operate without requiring platform integrations and can scope an engagement quickly. Holistic AI is the strongest choice for organizations that want to combine audit workflow automation with continuous governance, provided they pair the platform with an independent auditor for the formal sign-off LL 144 requires. FairNow (now part of Optro/AuditBoard) is the best fit for HR tech vendors whose tools lack sufficient real-world historical data for a conventional disparate impact analysis — its synthetic data capability solves a blocking problem for pre-launch products. Fairly AI (Asenion) is the differentiated choice for regulated employers with data residency requirements that preclude sending candidate data to a SaaS cloud. Credo AI is best for enterprises already running multi-jurisdiction AI governance programs that want to consolidate LL 144 into a unified policy framework and reuse evidence across Colorado, EU AI Act, and other overlapping obligations. One point applies across all picks: the December 2025 NY State Comptroller audit found significantly higher non-compliance rates than DCWP had self-detected, and DCWP has committed to strengthening enforcement. Organizations that have been relying on DCWP's historically passive, complaint-driven enforcement should treat that finding as a structural shift, not a data anomaly.
What we did not include
Transparency about exclusions.
OneTrust AI Governance and ServiceNow AI Governance cover employment AI within broader enterprise GRC programs but do not publish LL 144-specific workflow documentation at the level of the six picks above; both are covered in the AI Governance Platforms collection. IBM watsonx.governance has LL 144 exposure through its Credo AI Policy Pack integration but is not itself a LL 144 compliance product. Workday and other ATS vendors whose products are the subject of LL 144 audits (not the providers of audit services) are out of scope for this collection. Eticas Tech, LatticeFlow AI, and Trustible have algorithmic audit practices but have not published LL 144-specific services at the depth of the six picks above. Warden AI is an active LL 144 auditor not ranked here because its published footprint is smaller than the six selected firms as of April 2026 — a full vendor profile is available in the directory.
Frequently asked
Which employers and agencies must comply with NYC Local Law 144?
Any employer or employment agency that uses an automated employment decision tool to screen candidates for hiring or employees for promotion where the position is based in New York City — including remote positions with a NYC nexus. DCWP's published FAQs shifted the geographic focus from where the candidate resides to the location of the position, meaning a company headquartered outside NYC must still comply if it is using an AEDT to fill a NYC-based or NYC-connected role. Vendors of AEDTs may initiate a bias audit on behalf of their employer clients using data aggregated across multiple users of the tool, which in practice pulls HR tech vendors into the audit process regardless of their own location. The law applies to employers of any size; there is no small-employer exemption.
What exactly does an independent bias audit under LL 144 require?
A bias audit under LL 144 is an impartial evaluation by an independent auditor — someone with no employment relationship or financial interest in the employer or AEDT vendor. The audit must assess disparate impact by sex, race/ethnicity, and their intersections. For categorical (pass/fail) systems, auditors calculate the selection rate for each demographic group and the impact ratio (each group's selection rate divided by the highest rate). For continuous (scored) systems, auditors binarize scores at the median and calculate scoring rates and impact ratios equivalently. Groups representing fewer than 2% of the data may be excluded from the impact ratio calculation, but their group size and rate must still appear in the published summary. The four-fifths (80%) rule is the widely used screening heuristic, but LL 144 does not set a mandatory threshold — employers using a tool with a sub-80% impact ratio are not automatically in violation, but discrimination in hiring remains unlawful under Title VII, NYSHRL, and NYC HRL regardless. A compliant public summary must include: the date of the audit, source and explanation of data used, number of individuals assessed (including those in unknown categories), selection or scoring rates, and impact ratios for all included categories. The summary must remain publicly posted for at least six months after the most recent use of the tool.
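The scored-system methodology described above — binarize at the median, then compute scoring rates and impact ratios — can be sketched in a few lines. A minimal illustration with hypothetical categories and scores, not any auditor's production code:

```python
# Illustrative LL 144 scored-system math (hypothetical data).
import statistics

scores = {  # category -> AEDT scores for assessed individuals
    "Hispanic or Latino male": [55, 61, 72, 80, 90],
    "White female": [48, 63, 77, 85, 92, 95],
}

# Median over the full assessed population
all_scores = [s for lst in scores.values() for s in lst]
median = statistics.median(all_scores)

# Scoring rate: share of each category scoring above the median
rate = {
    cat: sum(s > median for s in lst) / len(lst)
    for cat, lst in scores.items()
}

# Impact ratio: each category's rate over the highest rate
best = max(rate.values())
impact_ratio = {cat: r / best for cat, r in rate.items()}

# Categories under 2% of the data may be excluded from the impact
# ratio calculation, but their size and rate must still be published.
total_n = len(all_scores)
excludable = [c for c, lst in scores.items() if len(lst) / total_n < 0.02]
```

With this toy data the median is 77, the scoring rates are 0.4 and 0.5, and the lower-rated group lands at an impact ratio of exactly 0.8 — on the four-fifths line, illustrating why the heuristic is a screening signal rather than a verdict.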
What are the penalties for non-compliance with NYC Local Law 144?
The NYC Department of Consumer and Worker Protection may impose civil penalties of up to $500 for a first violation and $500–$1,500 for each subsequent violation. Each day on which an AEDT is used without a compliant bias audit constitutes a separate violation. Failure to provide required candidate notice is also a separate violation. There is no private right of action under LL 144 itself — enforcement is DCWP-only — but discrimination claims arising from biased AEDTs can be pursued under Title VII (EEOC), the NYC Human Rights Law, and the NYSHRL, which do carry private rights of action and substantially greater damages exposure. The December 2025 NY State Comptroller audit found that DCWP's enforcement had been significantly under-resourced and complaint-driven, but DCWP committed to adopting the majority of the Comptroller's recommendations, including stronger bias audit reviews and broader investigative techniques. Organizations that have not yet complied should not assume low enforcement probability will persist.
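Because each day of use is a separate violation, exposure accrues quickly. Here is a back-of-the-envelope sketch under one common maximal reading ($500 for the first violation, up to $1,500 for each subsequent one, one violation per day); actual penalties are set by DCWP and may be lower:

```python
def max_penalty_exposure(days: int, first: int = 500, subsequent: int = 1500) -> int:
    """Upper-bound civil penalty for `days` of using an unaudited AEDT,
    treating each day as a separate violation (illustrative only)."""
    if days <= 0:
        return 0
    return first + (days - 1) * subsequent

# One quarter (90 days) of non-compliant use:
print(max_penalty_exposure(90))  # 134000
```

Even before any Title VII, NYSHRL, or NYC HRL exposure, a single quarter of non-compliant use can approach six figures under this reading.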
How must candidates be notified before an AEDT is used on their application?
Employers must give candidates and employees at least 10 business days' advance notice before using an AEDT to screen them. The notice must specify: that an AEDT will be used, which job qualifications and characteristics the tool will assess, the types of data collected and their sources, the employer's data retention policy for information collected through the tool, and instructions for requesting an alternative selection process or reasonable accommodation if one is available. Notice may be provided via the employer's website, in the job posting, or by U.S. mail or email. Failure to provide compliant notice is a separate violation from the audit requirement — meaning an employer could face parallel penalty exposure for both an outdated or missing bias audit and for inadequate candidate notice. Employers using multiple AEDTs for the same hiring decision should ensure each tool is covered by its own audit and notice.
How does the Colorado AI Act change employment AI compliance obligations, and does it overlap with NYC LL 144?
Colorado SB 24-205 (Colorado AI Act), effective February 1, 2026, classifies employment and employment opportunities as "consequential decision" categories. Deployers of "high-risk" AI systems — those that make or substantially factor into consequential decisions — must use reasonable care to prevent algorithmic discrimination, implement a documented risk management policy, complete impact assessments at least annually, review each high-risk system annually, post a public statement describing high-risk systems deployed and how risks are managed, notify individuals before a consequential AI-driven decision is made, and furnish an adverse-action notice with a description of the AI's role if an adverse decision is reached. Small employers (fewer than 50 employees who do not use their own data to train the AI) are exempt from the risk management policy, impact assessment, and website statement requirements. LL 144 and the Colorado AI Act overlap in that both require impact assessment and transparency documentation for employment AI, but they differ in enforcement mechanism (DCWP civil penalties vs. Colorado AG enforcement), scope (NYC's categorical pass/fail and scored system analysis vs. Colorado's broader "consequential decision" framing), and required documentation. A governance platform that generates overlapping evidence artifacts — as Credo AI and FairNow both do — can reduce the compliance burden materially for organizations operating in both jurisdictions.