The Texas AI Act (TRAIGA): Complete Compliance Guide for January 1, 2026

Texas HB 149 takes effect January 1, 2026. This guide walks through prohibited practices, penalties up to $200,000 per violation, the 60-day cure period, NIST AI RMF safe harbor, and the 36-month sandbox — with every provision cited to primary source.

By ACV Editorial · April 24, 2026 · 14 min read · Last reviewed April 24, 2026

On 22 June 2025, Governor Greg Abbott signed the Texas Responsible Artificial Intelligence Governance Act — HB 149, commonly known as TRAIGA — into law. The statute becomes effective 1 January 2026, making Texas the third state (after Colorado and Utah) to enact a comprehensive AI governance law, and the first to pair that law with a statewide regulatory sandbox and a preemption provision that nullifies all city and county AI ordinances.

This guide is written for the person operationalising TRAIGA compliance: the GRC lead, in-house counsel, Chief AI Officer, or privacy director working through a 2026 readiness plan. It covers every prohibited practice, every penalty tier, the exclusive enforcement track, the NIST AI RMF affirmative defense, and the practical sequencing decisions teams have to make before the effective date. Every statutory claim cites primary source text as enrolled.


Key Dates and Why 1 January 2026 Matters

  • 1 September 2025 — Companion healthcare bill SB 1188 took effect (physician-review requirements for AI-generated diagnostic records; US-based EHR storage; disclosure duties).
  • 1 January 2026 — TRAIGA's substantive prohibitions (Tex. Bus. & Com. Code §§ 551–554) become enforceable.
  • 60-day cure period — AG must provide written notice and 60 days to cure before filing a civil action (§ 552.107).
  • 36 months — Maximum duration of a sandbox participation agreement under Chapter 553.

The Texas Legislative Budget Board has projected a negative fiscal impact of more than $25 million over the 2026–27 biennium, with over $10 million in recurring annual costs driven by 20 new FTEs, enforcement technology, and expert consulting (Texas Policy Research). Expect active enforcement.


Who TRAIGA Applies To

TRAIGA covers a person that: (1) promotes, advertises, or conducts business in Texas; (2) produces a product or service used by Texas residents; or (3) develops or deploys an artificial intelligence system in Texas (Tex. Bus. & Com. Code § 551.002).

The statute reaches both developers and deployers — but, unlike the original HB 1709 draft, the enacted version removed most impact-assessment and risk-management duties for private-sector actors. Heightened obligations fall on governmental entities and regulated occupations. (Latham & Watkins; K&L Gates).

Important carve-outs:

  • Insurance — TRAIGA "may not be construed to … authorize any department or agency other than the Department of Insurance to regulate or oversee the business of insurance." (§ 552.002).
  • Local preemption — TRAIGA expressly nullifies all city and county AI ordinances, creating a single statewide standard (Wiley Rein).


The Prohibited Practices — Intent-Based, Not Impact-Based

TRAIGA's single most important structural feature is that liability under the private-sector prohibitions requires intent. The enacted text of § 552.056(c) states that disparate impact alone is insufficient to establish discriminatory intent. This distinguishes TRAIGA from both the Colorado AI Act (impact-based) and the EU AI Act (risk-tiered, regardless of intent).

The prohibited acts — grouped by code section — are:

§ 552.051 — Behavioural Manipulation

AI systems "developed or deployed with the sole intent of" inciting a person to self-harm, harm another, or engage in criminal activity are prohibited (Tex. Bus. & Com. Code § 552.051).

§ 552.052 — Governmental Social Scoring

Governmental entities are prohibited from developing or deploying AI systems to evaluate or classify a natural person based on social behaviour or known or predicted personal characteristics, where the result is detrimental treatment unrelated to the context in which the data was originally collected.

§ 552.053 — Governmental Biometric Identification from Public Media

Governmental entities may not use AI to capture biometric identifiers from publicly available media for the purpose of unique identification without consent. This provision pairs with the CUBI amendments (discussed below).

§ 552.054 — Infringement of Constitutional Rights

AI systems developed or deployed with the sole intent of infringing, restricting, or otherwise impairing a person's rights under the US Constitution are prohibited.

§ 552.055 — Discrimination Against a Protected Class

AI systems developed or deployed with the intent to unlawfully discriminate against a protected class — as defined in federal and Texas civil rights law — are prohibited.

§ 552.056 — Deepfake and CSAM Prohibitions

AI systems developed or deployed with the sole intent of producing, assisting in the production of, or distributing: (1) child sexual abuse material (CSAM), or (2) unlawful deepfake imagery, are prohibited. This section contains the critical intent-only rule: disparate impact alone is not sufficient to prove discriminatory intent.


Penalty Structure — What Violations Actually Cost

TRAIGA's penalty ladder is more granular than Colorado's. The ranges under §§ 552.106–552.108:

| Violation type | Minimum | Maximum |
| --- | --- | --- |
| Curable violation (corrected within 60-day cure window) | $10,000 | $12,000 |
| Uncurable violation | $80,000 | $200,000 |
| Continuing violation (per day) | $2,000 | $40,000 |
| Secondary sanctions — licensed individuals / state-licensed entities | up to $100,000 per violation + license suspension/revocation | |

For a single intentional uncurable violation affecting thousands of Texas consumers, accruing daily continuation penalties can escalate exposure into the tens of millions within weeks. The AG may additionally seek injunctive relief, disgorgement of profits traceable to the violation, and restitution for affected consumers (Mayer Brown).

Secondary agency sanctions — for licensed professionals such as physicians, attorneys, accountants, therapists, or financial advisors — can be imposed in addition to the § 552 civil penalties. A single AI-driven violation in a regulated profession can therefore yield: (a) an AG civil penalty up to $200,000; (b) a licensing-board penalty up to $100,000; and (c) continuing daily penalties up to $40,000 until the violation is cured.
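The stacking described above lends itself to a back-of-the-envelope exposure model. The constants below are the statutory maximums cited in this section; the function and scenario are illustrative only, and actual penalties are set by the court:

```python
# Back-of-the-envelope arithmetic only, not legal advice. Constants are the
# statutory maximums discussed above (§§ 552.106–552.108); the scenario of a
# licensed professional with one uncurable, continuing violation is hypothetical.
AG_UNCURABLE_MAX = 200_000     # AG civil penalty, uncurable violation
LICENSING_MAX = 100_000        # secondary licensing-board penalty
DAILY_CONTINUING_MAX = 40_000  # continuing-violation penalty, per day

def worst_case_exposure(days_uncured: int) -> int:
    """Upper-bound exposure for one uncurable violation by a licensed
    professional that continues for `days_uncured` days."""
    return AG_UNCURABLE_MAX + LICENSING_MAX + DAILY_CONTINUING_MAX * days_uncured

print(worst_case_exposure(30))  # 1500000 — $1.5M after a single month
```

Even at a fraction of the statutory maximums, the per-day term dominates quickly — which is why the cure playbook discussed below matters more than any single penalty figure.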


Enforcement — Exclusively the Texas Attorney General

Under § 552.106, enforcement authority rests exclusively with the Texas Attorney General. There is no private right of action — affected consumers cannot directly sue developers or deployers for TRAIGA violations (Skadden; Blank Rome).

The procedural sequence:

  1. AG notice — written notice to the alleged violator identifying the specific practice, section, and factual basis.
  2. 60-day cure period — the target has sixty days from receipt of notice to cure the violation and provide a written statement certifying that the violation has been cured and will not recur (§ 552.107).
  3. Civil action — only after the cure window expires without adequate cure may the AG file a civil action. A complete and timely cure with written certification bars civil action for that violation.
  4. Injunctive relief — the AG may seek temporary or permanent injunctions to halt ongoing violations.

What this means operationally: Design your intake and response process so that once an AG notice arrives, your cross-functional team (legal, GRC, product, engineering) can evaluate, remediate, and file the written cure certification within 60 days. Missing that window moves the violation from $10,000–$12,000 per incident to $80,000–$200,000 per incident.
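The deadline math itself is trivial, but pinning it to a date in the response playbook avoids ambiguity. A minimal sketch, assuming the 60 days are counted as calendar days from the date of receipt (confirm the counting convention against the statutory text with counsel):

```python
from datetime import date, timedelta

CURE_WINDOW_DAYS = 60  # § 552.107

def cure_certification_deadline(notice_received: date) -> date:
    """Latest date to cure and file the written certification, assuming
    calendar days counted from the date the AG notice is received."""
    return notice_received + timedelta(days=CURE_WINDOW_DAYS)

# Hypothetical: notice received 2 February 2026
print(cure_certification_deadline(date(2026, 2, 2)))  # 2026-04-03
```

In practice, teams set internal milestones well inside the window — for example, remediation complete by day 40 and certification drafted by day 50 — so legal review never collides with the statutory deadline.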


The Affirmative Defense — NIST AI RMF as Safe Harbor

TRAIGA includes an express affirmative defense for substantial compliance with a recognised AI risk framework. Under § 552.105(e)(2)(D), a person has an affirmative defense to a violation if they substantially comply with the most current version of the NIST AI Risk Management Framework's Generative AI Profile (NIST-AI-600-1) or "another nationally or internationally recognised risk management framework for artificial intelligence systems."

This is a meaningful safe harbor. Comparable frameworks that should qualify include:

  • NIST AI RMF 1.0 (core framework, January 2023) and its March 2025 update
  • ISO/IEC 42001:2023 — AI management systems (certifiable standard)
  • Sector-specific frameworks (for regulated industries) approved by the relevant Texas agency

Additional affirmative defenses in the same section cover third-party misuse (developer/deployer not responsible for a third party's unauthorized misuse of an AI system contrary to documented terms) and results-of-testing (documented red-team exercises and vulnerability testing performed in good faith).

Operational implication

If you were already on a NIST AI RMF or ISO 42001 roadmap, that work is now a regulated safe harbor in Texas. If you were not, the § 552.105 defense is one of the clearer practical rationales for adopting a recognised framework ahead of 1 January 2026. Our NIST AI RMF vs ISO/IEC 42001 comparison walks through sequencing and certification costs.


The Texas Regulatory Sandbox — Chapter 553

TRAIGA creates what has been described as the "first-in-the-nation" state AI regulatory sandbox (Baker Botts; JD Supra). The programme is administered by the Texas Department of Information Resources (DIR) under Chapter 553 of the Business & Commerce Code.

Key features:

  • Up to 36 months per sandbox participation agreement
  • Limited regulatory relief — participants may receive waivers of certain Texas laws and rules that would otherwise impede testing of an AI system
  • Core prohibitions not waivable — the § 552 prohibited practices (CSAM, deepfakes, constitutional violations, social scoring, intentional discrimination) are not subject to waiver
  • Consumer protections required — participants must implement consumer-disclosure and complaint-handling procedures
  • DIR reporting — DIR submits an annual report to the legislature including number of participants, overall impact, and legislative recommendations (§ 553.103)

For AI developers in regulated domains (healthcare, financial services, education) the sandbox is a potential path to controlled deployment of systems that would otherwise face regulatory uncertainty. For most broadly-deployed consumer AI products the sandbox is not necessary — standard compliance under § 552 suffices.


The Texas Artificial Intelligence Council — Chapter 554

TRAIGA creates a new Texas Artificial Intelligence Council administratively attached to DIR. Key structural features (§§ 554.001–554.103):

  • Seven members — three appointed by the Governor, two by the Lieutenant Governor, two by the Speaker of the House
  • Four-year staggered terms, with the Governor appointing the chair
  • Qualification areas — Texas residents with expertise in AI systems, data privacy and security, technology ethics or law, public policy, AI risk management, governmental operations, or anticompetitive practices
  • Not a rulemaking body — the Council "may not" adopt binding rules or guidance, interfere with or override a state agency, or perform duties not granted by TRAIGA (§ 554.103)
  • Training mandate — the Council "shall conduct training programs for state agencies and local governments on the use of artificial intelligence systems" (§ 554.102)

In effect, the Council is an advisory body. It does not have the rulemaking authority of the Colorado AG under the Colorado AI Act. Compliance is governed by the § 552–553 statutory text, not by Council interpretations — although Council studies and recommendations will shape future legislative amendments.


CUBI Amendments — Biometric Data After TRAIGA

Section 2 of HB 149 amends the Texas Capture or Use of Biometric Identifier Act (CUBI) — Tex. Bus. & Com. Code § 503.001 — in two important ways.

No Implied Consent from Publicly Available Images

The new § 503.001(b-1) states: "an individual has not been informed of and has not provided consent for the capture or storage of a biometric identifier of an individual for a commercial purpose based solely on the existence of an image or other media containing one or more biometric identifiers of the individual on the Internet or other publicly available source unless the image or other media was made publicly available by the individual to whom the biometric identifiers relate." (Skadden).

This closes the "Clearview-style" loophole: training facial recognition or identity systems on scraped public images does not constitute consent.

AI Training Exemption — Except for Unique Identification

CUBI does not apply to "the training, processing, or storage of biometric identifiers involved in developing, training, evaluating, disseminating, or otherwise offering artificial intelligence models or systems, unless a system is used or deployed for the purpose of uniquely identifying a specific individual." (§ 503.001(e)(2); Baker Botts).

Practical effect: foundational AI training on biometric data (for general-purpose vision models, for example) is exempt from CUBI, but deploying the resulting system to uniquely identify individuals remains regulated.


How TRAIGA Compares to Colorado and the EU AI Act

| Dimension | TRAIGA (Texas HB 149) | Colorado AI Act (SB 24-205) | EU AI Act (Reg. 2024/1689) |
| --- | --- | --- | --- |
| Effective date | 1 January 2026 | 30 June 2026 (delayed from 1 February 2026) | 1 August 2024 (in force); full application 2 August 2026 |
| Liability trigger | Intent-based | Impact-based (algorithmic discrimination in consequential decisions) | Risk-tier classification |
| Disparate impact | Insufficient to prove discrimination (§ 552.056(c)) | Central to enforcement | Risk-based |
| Risk tiers | None — specific intentional acts prohibited | Binary — high-risk vs. not covered | Four tiers (unacceptable, high, limited, minimal) |
| Max penalty | $200,000 per uncurable violation | $20,000 per violation under the Colorado Consumer Protection Act | €35M or 7% of global turnover |
| Cure period | 60 days | TBD | None (general) |
| Safe harbor | NIST AI RMF GenAI Profile + comparable frameworks | Rebuttable presumption for NIST AI RMF adherence | Harmonised standards + conformity assessments |
| Impact assessments (private sector) | Not required | Required for deployers | Required for high-risk systems |
| Regulatory sandbox | 36-month Texas sandbox | None | Member states required to establish by 2026 |
| Private right of action | No | No | No |

Sources: HB 149 enrolled text; Colorado SB 24-205; EU AI Act Article 99 penalties; Baker Botts comparison.

If you operate in multiple jurisdictions, our State AI Law Tracker shows every enacted state law side by side with effective dates and penalty ranges.


Federal Preemption Risk — The Moving Target

TRAIGA's enforceability is shadowed by two federal developments. Organisations building compliance programmes must track both:

1. "One Big Beautiful Bill" Moratorium (failed, July 2025). The House-passed version of the OBBB (H.R. 1) contained a 10-year moratorium on state AI regulation. The Senate voted 99–1 to strip the moratorium before passage (Goodwin Law). The enacted bill contains no AI preemption provisions.

2. December 11, 2025 Executive Order. President Trump signed "Ensuring a National Policy Framework for Artificial Intelligence" directing (a) a DOJ AI Litigation Task Force to challenge state AI laws on constitutional grounds; (b) a Commerce Department evaluation of "onerous" state AI laws; (c) conditioning of federal BEAD broadband funding on states not passing such laws (Alston & Bird; Sidley Austin). The order does not directly preempt state law — executive orders lack statutory preemption force — but creates ongoing litigation risk for TRAIGA enforcement.

Practical guidance: As of April 2026, TRAIGA remains fully in force. The statutory effective date of 1 January 2026 has passed and Texas AG enforcement authority is active. Organisations should build compliance programmes on the assumption TRAIGA will be enforced, while monitoring DOJ litigation activity and any Commerce Department "onerous state AI law" designation that could affect federal funding exposure.


A 90-Day TRAIGA Readiness Plan

For organisations beginning formal TRAIGA readiness work in 2026 — or validating an existing plan — the following sequence covers the critical ground.

Days 1–30: Scope and Inventory

  • AI inventory — catalogue every AI system that: (a) is developed by your organisation; (b) is deployed in products or services used by Texas residents; or (c) supports Texas employees, operations, or government contracts.
  • Intent documentation — for each system, document the intended purpose and use cases. Intent is the pivotal liability standard under TRAIGA; contemporaneous design documentation is the primary evidence of non-prohibited intent.
  • Protected-class risk map — identify systems that could plausibly affect protected classes (employment, credit, housing, insurance, healthcare, education). Even if impact alone does not establish liability, intent-based claims often start with statistical disparity findings.
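One way to make the inventory and intent-documentation steps concrete is a structured record per system. This is a sketch of our own devising — the field names are suggestions, not statutory terms, and the example system is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """Illustrative inventory entry; field names are suggestions,
    not statutory terms."""
    name: str
    role: str                       # "developer", "deployer", or both
    texas_nexus: list[str]          # why TRAIGA reaches this system
    intended_purpose: str           # contemporaneous intent documentation
    protected_class_exposure: bool  # flags the system for the risk map
    design_docs: list[str] = field(default_factory=list)

# Hypothetical entry
record = AISystemRecord(
    name="resume-screener-v2",
    role="deployer",
    texas_nexus=["used by Texas-based recruiting team"],
    intended_purpose="rank applicants on documented job-related criteria",
    protected_class_exposure=True,
    design_docs=["intent memo DD-114", "bias-test report 2025-11"],
)
```

The point of the structure is less the tooling than the discipline: each record forces the team to write down intended purpose and supporting design documents at inventory time, which is exactly the evidence an intent-based statute turns on.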

Days 31–60: Controls and Framework Adoption

  • Framework selection — formally adopt the NIST AI RMF 1.0 + GenAI Profile (NIST AI RMF framework page) or ISO/IEC 42001 (ISO 42001 framework page). Document the decision in a board- or executive-approved policy.
  • Gap analysis — map existing AI governance controls to § 552.105 affirmative-defense elements: substantial compliance with a recognised framework, third-party misuse controls, and documented testing (red-team, adversarial, bias).
  • Biometric controls — audit any AI system that processes biometric identifiers for compliance with the amended CUBI (§ 503.001(b-1) and (e)(2)). Confirm either explicit consent or that the use falls outside "uniquely identifying a specific individual."
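The gap analysis above can be run as a simple evidence check against the three affirmative-defense elements. In this sketch the element names paraphrase the statute and the evidence inventory is hypothetical:

```python
# Illustrative gap check; element names paraphrase the § 552.105 defenses
# discussed earlier, and the evidence inventory below is hypothetical.
DEFENSE_ELEMENTS = [
    "framework_compliance",  # substantial compliance with NIST AI RMF or comparable
    "third_party_misuse",    # documented terms prohibiting the misuse at issue
    "documented_testing",    # good-faith red-team / adversarial / bias testing
]

evidence = {
    "framework_compliance": ["board-approved AI policy", "RMF control mapping"],
    "third_party_misuse": [],  # gap: no documented acceptable-use terms yet
    "documented_testing": ["Q4 red-team report"],
}

gaps = [e for e in DEFENSE_ELEMENTS if not evidence.get(e)]
print(gaps)  # ['third_party_misuse']
```

Any element with an empty evidence list is a remediation item for the Days 61–90 phase; the output doubles as a standing agenda for the cross-functional governance review.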

Days 61–90: Cure Capability and Governance

  • AG notice response playbook — design a cross-functional response playbook that can evaluate an AG notice, determine cure feasibility, remediate, and file a written cure certification within 60 days. Assign roles to legal, GRC, product, engineering, and executive approval.
  • Disclosure and consumer rights — for healthcare uses, confirm AI disclosure controls comply with SB 1188 (effective 1 September 2025). For governmental-facing services, ensure explicit AI-interaction disclosure.
  • Training — deliver TRAIGA training to product, engineering, legal, and procurement teams. Document attendance and content as part of the § 552.105 affirmative-defense evidence package.

The most common readiness failure pattern is treating TRAIGA as a legal-only matter. The intent standard requires documented design decisions, which means product and engineering teams must be part of the compliance programme — not consulted after the fact.


Key Takeaways

  • TRAIGA (Texas HB 149) is effective 1 January 2026, enforceable exclusively by the Texas AG, with penalties up to $200,000 per uncurable violation and $40,000 per day for continuing violations.
  • Liability under the private-sector prohibitions is intent-based. Disparate impact alone is explicitly insufficient under § 552.056(c).
  • The 60-day cure period under § 552.107 is the single most important procedural feature. A complete and timely cure, certified in writing, bars civil action for the noticed violation.
  • Substantial compliance with the NIST AI RMF GenAI Profile or a comparable recognised framework is an affirmative defense under § 552.105(e)(2)(D).
  • The statute nullifies all local AI ordinances, creating a single statewide standard — an advantage for multi-city Texas operations.
  • TRAIGA does not impose impact assessments or risk-management policies on most private-sector actors, unlike Colorado's AI Act or the EU AI Act.
  • CUBI amendments close the "publicly-available images" consent loophole and exempt foundational AI training — but deployment for unique identification remains regulated.
  • The 36-month regulatory sandbox under Chapter 553 provides a controlled-deployment path, but the core § 552 prohibitions are not waivable.
  • Federal preemption via the December 2025 executive order creates litigation risk, but TRAIGA remains fully in force as of April 2026.

For a vendor shortlist that explicitly supports NIST AI RMF, ISO 42001, and US state AI laws, see /best/ai-governance-platforms.


Sources

  1. Texas HB 149 Enrolled Bill Text: https://capitol.texas.gov/tlodocs/89R/billtext/html/HB00149F.htm
  2. Texas HB 149 Introduced Version: https://capitol.texas.gov/tlodocs/89R/billtext/html/HB00149I.htm
  3. LegiScan — Texas HB 149 Legislative History: https://legiscan.com/TX/bill/HB149/2025
  4. Sidley Austin — Texas H.B. 149 Enrolled Text PDF: https://www.sidley.com/en/-/media/resource-pages/ai-monitor/laws-and-regulations/texas-hb-149-responsible-artificial-intelligence-act.pdf
  5. Skadden — Texas Charts New Path on AI With Landmark Regulation (June 23, 2025): https://www.skadden.com/insights/publications/2025/06/texas-charts-new-path-on-ai-with-landmark-regulation
  6. Baker Botts — Texas Enacts Responsible AI Governance Act (July 16, 2025): https://www.bakerbotts.com/thought-leadership/publications/2025/july/texas-enacts-responsible-ai-governance-act-what-companies-need-to-know
  7. Latham & Watkins — Texas Signs Responsible AI Governance Act Into Law (June 23, 2025): https://www.lw.com/en/insights/texas-signs-responsible-ai-governance-act-into-law
  8. Wiley Rein — Texas Responsible AI Governance Act Enacted (June 24, 2025): https://www.wiley.law/alert-Texas-Responsible-AI-Governance-Act-Enacted
  9. Greenberg Traurig — TRAIGA: Key Provisions (June 23, 2025): https://www.gtlaw.com/en/insights/2025/6/traiga-key-provisions-of-texas-new-artificial-intelligence-governance-act
  10. K&L Gates — Pared Back Version of TRAIGA Signed Into Law: https://www.klgates.com/Pared-Back-Version-of-the-Texas-Responsible-Artificial-Intelligence-Governance-Act-Signed-Into-Law-6-24-2025
  11. Blank Rome — New AI Regulations Come into Play with TRAIGA: https://www.blankrome.com/publications/new-ai-regulations-come-play-texas-responsible-artificial-intelligence-governance-act
  12. Akin Gump — Texas Enacts HB 149 and SB 1188: https://www.akingump.com/en/insights/ai-law-and-regulation-tracker/texas-enacts-a-pair-of-ai-governance-laws-hb-149-and-sb-1188
  13. Mayer Brown — Texas Passes Unique AI Law Focused on Prohibited Practices: https://www.mayerbrown.com/en/insights/publications/2025/06/texas-passes-unique-artificial-intelligence-law-focused-on-prohibited-practices
  14. NIST — AI Risk Management Framework: Generative AI Profile (NIST-AI-600-1): https://airc.nist.gov/airmf-resources/airmf/genai/
  15. Texas Policy Research — HB 149 Fiscal Impact Analysis: https://www.texaspolicyresearch.com/bills/89th-legislature-hb-149/
  16. Colorado SB 24-205 — leg.colorado.gov: https://leg.colorado.gov/bills/sb24-205
  17. EU AI Act Article 99 — artificialintelligenceact.eu: https://artificialintelligenceact.eu/article/99/
  18. Goodwin Law — Federal AI Moratorium Analysis: https://www.goodwinlaw.com/en/insights/publications/2025/05/alerts-practices-aiml-house-passes-10-year-federal-moratorium
  19. Alston & Bird — Trump Executive Order on State AI Regulation: https://www.alston.com/en/insights/publications/2025/12/trump-executive-order-state-ai-regulation
  20. Sidley Austin — Unpacking the December 11, 2025 Executive Order: https://www.sidley.com/en/insights/newsupdates/2025/12/unpacking-the-december-11-2025-executive-order
