On May 31, 2025, the Texas Legislature passed House Bill 149, the Texas Responsible Artificial Intelligence Governance Act (TRAIGA). TRAIGA sets forth disclosure requirements for government entities that develop or deploy AI, outlines prohibited uses of AI, and establishes civil penalties for violations. The bill was sent to the governor of Texas on June 2, 2025, and signed into law on June 22, 2025. TRAIGA takes effect on Jan. 1, 2026, making Texas the latest in a string of states, including California, Colorado, and Utah, to pass AI legislation.
To Whom Does TRAIGA Apply? Key Definitions
TRAIGA applies to two “groups”: (1) covered persons and entities1, which include developers and deployers,2 and (2) government entities3.
Covered Persons and Entities
Covered persons and entities, each a “person,” are defined as any person who (1) promotes, advertises, or conducts business in Texas; (2) produces a product or service Texas residents use; or (3) develops or deploys an artificial intelligence system in Texas.4
Developers and Deployers
A “developer” is a person who develops an artificial intelligence system that is offered, sold, leased, given, or otherwise provided in Texas, and a “deployer” is a person who deploys an artificial intelligence system for use in Texas.5
Government Entities
A “governmental entity” is any department, commission, board, office, authority, or other administrative unit of Texas or of any political subdivision of Texas that exercises governmental functions under the authority of the laws of Texas.6 The definition specifically excludes hospital districts and institutions of higher education.7
Consumer
“Consumer” means an individual who is a resident of Texas “acting only in an individual or household context.”8 Accordingly, individuals acting in an employment or commercial context are not “consumers” under TRAIGA.
Artificial Intelligence System
TRAIGA broadly defines an “artificial intelligence system” as “any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.”9
How Would TRAIGA Be Enforced?
The Texas attorney general (AG) has exclusive authority to enforce the law (with rare exceptions where licensing state agencies have limited enforcement power, discussed further below).10 TRAIGA does not, however, provide for a private right of action.11
Notice and Opportunity to Cure
Before the AG can bring an action, the AG must send a written notice of violation to the alleged violator.12 The alleged violator then has 60 days to:
- cure the alleged violation;
- provide supporting documentation showing the cure; and
- update or revise internal policies to prevent further violations.13
Civil Penalties
TRAIGA also sets forth civil penalties, which include the following categories:
- Curable violations: $10,000–$12,000 per violation;14
- Uncurable violations: $80,000–$200,000 per violation;15
- Ongoing violations: $2,000–$40,000 per day.16
Additionally, the AG may seek injunctive relief, attorneys’ fees, and investigative costs.17
Safe Harbors
TRAIGA provides for safe harbors and affirmative defenses. A person is not liable under TRAIGA if:
- a third party misuses the AI in a manner TRAIGA prohibits;
- such person discovers a violation through testing or good faith audits; or
- such person substantially complies with NIST’s AI Risk Management Framework or similar recognized standards.18
State Agency Enforcement Actions
If the AG finds that a person licensed, registered, or certified by a state agency has violated TRAIGA and recommends additional enforcement by the applicable agency, such agency may impose other sanctions such as:
- suspension, probation, or revocation of a license, registration, certificate, or other authorization to engage in an activity; and/or
- fines up to $100,000.19
How Would TRAIGA Work?
The sections on disclosures to consumers and the prohibited uses of AI may have implications for businesses.
Disclosure to Consumers
Government agencies are required to disclose to each consumer, before or at the time of interaction, that the consumer is interacting with AI (even if such disclosure would be obvious to a reasonable consumer).20 The disclosure must be clear and conspicuous, written in plain language, and not use a dark pattern.21
Prohibited Uses
TRAIGA specifically prohibits a government entity from using AI to:
- assign a social score;22
- uniquely identify a specific individual using biometric data, without the individual’s consent;23
  - Under the law, “biometric data” is defined as “data generated by automatic measurements of an individual’s biological characteristics.”24
  - The term includes a fingerprint, voiceprint, eye retina or iris, or other unique biological pattern or characteristic that is used to identify a specific individual.25
  - The term does not include a physical or digital photograph or data generated from a physical or digital photograph; a video or audio recording or data generated from a video or audio recording; or information collected, used, or stored for health care treatment, payment, or operations under HIPAA.26
TRAIGA specifically prohibits a person from using AI to:
- incite or encourage self-harm, crime, or violence;27
- infringe, restrict, or otherwise impair an individual’s rights guaranteed under the U.S. Constitution;28
- unlawfully discriminate against a protected class in violation of state or federal law.29 Note that the law explicitly does not recognize “disparate impact” alone as sufficient to demonstrate an intent to discriminate.30
  - “Protected Class” is defined as “a group or class of persons with a characteristic, quality, belief, or status protected from discrimination by state or federal civil rights laws, and includes race, color, national origin, sex, age, religion, or disability.”31
- produce or distribute certain sexually explicit content or child pornography, including deep fakes.32
TRAIGA also establishes a sandbox program to allow companies to test AI in a controlled setting without full regulatory compliance33 and creates the Texas Artificial Intelligence Council to provide guidance and review ethical and legal issues related to AI.34
TRAIGA Compliance Considerations
- Applicability assessment. Companies should inventory all AI systems developed or deployed in Texas to determine whether they meet TRAIGA’s broad definition of an “artificial intelligence system.”35 Assessments should include third-party AI tools in use, such as chatbots.
- Use case analysis. Companies should consider whether their AI systems: (1) interact with consumers, (2) potentially infringe on rights guaranteed under the U.S. Constitution, (3) affect protected classes, or (4) may be perceived to manipulate behavior (e.g., by encouraging self-harm, crime, or violence).
- Notice requirement review. Government agencies should implement clear and conspicuous disclosures (which can be hyperlinked36) wherever AI interacts with Texas consumers and ensure such disclosures are written in plain language and contain no dark patterns.
- Risk framework alignment. Companies and government entities may wish to align current AI programs with nationally or internationally recognized AI risk frameworks such as NIST’s AI Risk Management Framework. TRAIGA specifically offers a safe harbor for “substantial compliance” with these frameworks.37
- Sandbox program participation. Companies developing a novel AI product should consider joining the sandbox program. Participants may obtain legal protection and limited access to the Texas market to test innovative AI systems in a compliance-friendly environment.38
- Federal AI moratorium. On May 22, 2025, the House of Representatives passed a proposal, included in the “One Big Beautiful Bill,” to impose a 10-year moratorium on state-level laws regulating AI. After Senate deliberation, the moratorium remains in the bill as of June 21, 2025. If enacted, the moratorium would preempt TRAIGA, along with other pending state AI bills and enacted state AI laws.
1 HB00149F, Sec. 551.002. Unlike “developer,” “deployer,” and “governmental entity,” the term “covered persons and entities” is not separately defined in the law; the persons covered are listed in the Applicability Section.
2 HB00149F, Sec. 552.001.
3 HB00149F, Sec. 552.001(3).
4 HB00149F, Sec. 551.002(1-3).
5 HB00149F, Sec. 552.001(1-2).
6 HB00149F, Sec. 552.001(3).
7 Id.
8 HB00149F, Sec. 552.001(2).
9 HB00149F, Sec. 551.001(1).
10 HB00149F, Sec. 552.101(a).
11 HB00149F, Sec. 552.101(b).
12 HB00149F, Sec. 552.104(a).
13 HB00149F, Sec. 552.101(b).
14 HB00149F, Sec. 552.104(a)(1).
15 HB00149F, Sec. 552.104(a)(2).
16 HB00149F, Sec. 552.104(a)(3).
17 HB00149F, Sec. 552.104(b)(2-3).
18 HB00149F, Sec. 552.105(e)(1-2).
19 HB00149F, Sec. 552.106(a-b).
20 HB00149F, Sec. 552.051(b-c).
21 HB00149F, Sec. 552.051(d)(1-3).
22 HB00149F, Sec. 552.053.
23 HB00149F, Sec. 552.054.
24 HB00149F, Sec. 552.054(a).
25 Id.
26 Id.
27 HB00149F, Sec. 552.052.
28 HB00149F, Sec. 552.055.
29 HB00149F, Sec. 552.056.
30 Id. at 3(c).
31 HB00149F, Sec. 552.056(a)(3).
32 HB00149F, Sec. 552.057.
33 HB00149F, Sec. 553.
34 HB00149F, Sec. 554.
35 HB00149F, Sec. 551.001(1).
36 HB00149F, Sec. 552.051(e).
37 HB00149F, Sec. 552.105(e)(2)(D).
38 HB00149F, Sec. 553.051(a).