Who must comply with TRAIGA and does it apply to out-of-state companies?
TRAIGA (HB 149) applies to any person or entity that conducts business in Texas, offers products or services to Texas residents, or develops or deploys an AI system in Texas — regardless of where the company is headquartered. The Texas Business & Commerce Code defines "AI system" broadly as "any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments." This means a software company in California that sells an AI-powered hiring tool used by Texas employers is within scope. Government agencies are also covered — with elevated disclosure obligations — except for hospital districts and higher education institutions, which are expressly excluded. Individuals "acting in a commercial or employment context" are excluded from the consumer disclosure provisions, but the prohibited-conduct provisions apply to developers and deployers broadly.
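The three-prong scope test above can be sketched as a simple decision helper. This is an illustrative model only: the class and function names are hypothetical and not drawn from the statute, and a real jurisdictional analysis requires counsel.

```python
from dataclasses import dataclass


@dataclass
class Company:
    """Facts relevant to TRAIGA's jurisdictional scope (illustrative model)."""
    conducts_business_in_texas: bool
    offers_products_to_texas_residents: bool
    develops_or_deploys_ai_in_texas: bool


def in_traiga_scope(c: Company) -> bool:
    # TRAIGA applies if ANY of the three prongs is met,
    # regardless of where the company is headquartered.
    return (
        c.conducts_business_in_texas
        or c.offers_products_to_texas_residents
        or c.develops_or_deploys_ai_in_texas
    )


# A California vendor selling an AI-powered hiring tool used by Texas employers:
vendor = Company(
    conducts_business_in_texas=False,
    offers_products_to_texas_residents=True,
    develops_or_deploys_ai_in_texas=False,
)
print(in_traiga_scope(vendor))  # prints True
```

Note the disjunction: a single prong suffices, which is why the out-of-state vendor in the example is in scope even with no Texas office.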
What does TRAIGA actually prohibit, and what is the intent requirement?
TRAIGA prohibits developing or deploying an AI system: (1) with the intent to manipulate human behavior to encourage self-harm, harm to others, or criminal activity; (2) with the sole intent of infringing, restricting, or impairing constitutional rights; (3) with the intent to unlawfully discriminate against a protected class under federal or Texas law; and (4) with the sole intent of producing or distributing child pornography or unlawful deepfake sexual content. The intent requirement is the law's defining feature and its primary departure from other frameworks. The Texas AG must prove purposeful intent; a disparate impact on a protected class alone is not sufficient to establish discrimination under TRAIGA. This means an AI hiring tool that produces discriminatory outcomes without any deliberate design intent is not automatically a TRAIGA violation, though it may still create liability under separate federal or state civil rights laws. For government agencies, TRAIGA adds non-intent-based obligations: mandatory plain-language AI disclosure, a prohibition on social-scoring algorithms, and a prohibition on biometric identification without individual consent.
What are TRAIGA's penalties and how does the 60-day cure period work?
TRAIGA provides a tiered civil penalty structure enforced exclusively by the Texas Attorney General. Curable violations — those a court determines can be remedied — are subject to fines of $10,000 to $12,000 per violation. Incurable violations range from $80,000 to $200,000 per violation. Continuing violations after a finding of liability carry $2,000 to $40,000 per day. A breach of a written cure statement to the AG — effectively a broken promise to fix a curable violation — is treated as a separate $10,000–$12,000 violation. Additionally, state agencies can impose license suspensions or revocations and monetary penalties up to $100,000 for licensees found liable for TRAIGA violations. Before any enforcement action, the AG must provide written notice and a 60-day cure window. To cure, the company must (a) cure the violation, (b) submit a written statement to the AG documenting how it was cured, and (c) demonstrate internal policy changes to prevent recurrence. There is no private right of action — individuals cannot sue directly under TRAIGA.
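The tiered structure above lends itself to a quick exposure estimate. The sketch below encodes the statutory ranges described in this answer; the function names and the worst-case assumption (top of each range) are illustrative, not part of the law, and actual penalties are set by a court.

```python
# TRAIGA civil penalty ranges in USD, per the tiers described above.
PENALTY_RANGES = {
    "curable": (10_000, 12_000),            # per violation
    "incurable": (80_000, 200_000),         # per violation
    "continuing": (2_000, 40_000),          # per day after a finding of liability
    "breach_of_cure_statement": (10_000, 12_000),  # treated as a separate violation
}


def max_exposure(tier: str, count: int = 1) -> int:
    """Worst-case exposure for `count` violations (or days) in one tier."""
    low, high = PENALTY_RANGES[tier]
    return high * count


# e.g. one incurable violation plus 30 days of continuing violation:
print(max_exposure("incurable") + max_exposure("continuing", 30))  # prints 1400000
```

A single incurable violation that continues for a month can thus dwarf the headline per-violation figure, which is why the 60-day cure window matters so much for curable conduct.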
How does TRAIGA differ from the Colorado AI Act and the EU AI Act?
Three frameworks, three fundamentally different design choices. TRAIGA is intent-based: Texas asks whether you deliberately deployed AI to discriminate, manipulate, or harm. Unintentional harm is not automatically a violation. The Colorado AI Act (SB 24-205, effective June 30, 2026) is impact-based: Colorado asks whether your high-risk AI system could cause algorithmic discrimination regardless of your intent. Good intentions are not a defense in Colorado — you must demonstrate reasonable care through documented risk assessments. The EU AI Act is risk-classification-based: Brussels categorizes AI systems by use-case risk level (prohibited, high-risk, limited-risk, minimal-risk) and applies escalating obligations accordingly, with GPAI model rules layered on top. TRAIGA also differs procedurally: it has no private right of action and no annual bias audit or formal impact assessment mandate. Colorado requires annual impact assessments for each high-risk system and offers consumers a right to appeal AI-assisted decisions and request human review — neither obligation exists under TRAIGA. TRAIGA's NIST AI RMF safe harbor is explicit by name; Colorado's equivalent requires demonstrating "reasonable care" without naming a specific framework. For multi-state companies, building a governance program anchored on NIST AI RMF and TRAIGA documentation requirements positions you to satisfy Colorado and EU AI Act requirements with incremental additional documentation.
What is TRAIGA's NIST AI RMF affirmative defense and how do compliance tools support it?
TRAIGA Section 551.106 creates a rebuttable presumption of reasonable care — effectively an affirmative defense — for developers, distributors, and deployers that are "in compliance with a nationally recognized artificial intelligence risk management framework, such as the framework developed by the National Institute of Standards and Technology." This is the most actionable compliance path available under TRAIGA for private sector organizations. The NIST AI Risk Management Framework 1.0 (published January 2023) organizes AI risk management around four core functions: Govern, Map, Measure, and Manage. Organizations must be able to demonstrate — with documentation — that their AI governance program reflects these functions for each relevant AI system. Compliance tools support this by mapping AI inventory records, risk assessments, testing protocols, and post-deployment monitoring logs to NIST AI RMF categories, generating audit-ready evidence packages that can be produced in response to an AG civil investigative demand within the 60-day cure window. The affirmative defense is rebuttable — it shifts the burden of proof but can be overcome if the AG shows the framework alignment was nominal rather than substantive. Document the alignment in writing, update it as AI systems change, and retain records.
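The evidence-mapping workflow described above can be sketched as a grouping step. The evidence-type-to-function mapping below is a hypothetical simplification: a real compliance tool would map artifacts to the NIST AI RMF's numbered categories and subcategories, not just the four top-level functions.

```python
from collections import defaultdict

# The four NIST AI RMF 1.0 core functions.
RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

# Hypothetical mapping of common evidence types to RMF functions.
EVIDENCE_MAP = {
    "governance_policy": "Govern",
    "ai_inventory_record": "Map",
    "risk_assessment": "Measure",
    "testing_protocol": "Measure",
    "post_deployment_monitoring_log": "Manage",
}


def build_evidence_package(artifacts: list[dict]) -> dict[str, list[str]]:
    """Group one AI system's artifacts by RMF function for an audit-ready package."""
    package: dict[str, list[str]] = defaultdict(list)
    for artifact in artifacts:
        function = EVIDENCE_MAP[artifact["type"]]
        package[function].append(artifact["name"])
    return dict(package)


pkg = build_evidence_package([
    {"type": "ai_inventory_record", "name": "hiring-model-v2 inventory entry"},
    {"type": "risk_assessment", "name": "2025-Q3 bias assessment"},
    {"type": "post_deployment_monitoring_log", "name": "drift monitor export"},
])
print(pkg)
```

Grouping evidence by function makes gaps visible at a glance: an empty "Govern" bucket for a deployed system is exactly the kind of nominal-rather-than-substantive alignment the AG could use to rebut the presumption.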