A Breakdown of the Texas Responsible AI Governance Act (TRAIGA)
Published: Jun 17, 2025
The adoption of AI is in full force, reshaping industries, economies, societies, and business practices. From healthcare diagnostics and financial forecasting to enhanced education and public services, AI systems are being deployed at unprecedented speed and scale. With this rapid adoption come both immense benefits and serious concerns over transparency, accountability, fairness, and privacy.
As the implications of AI have become more complex, the call for robust AI governance has increased. The risks of unregulated AI development and deployment include algorithmic bias, opaque decision-making, and unchecked data use. In response, governing bodies around the world have started to establish legislative frameworks to ensure AI is developed and used responsibly, ethically, and in ways that align with societal values and benefits—and Texas is no exception.
The Texas Responsible AI Governance Act (TRAIGA) has emerged as a crucial approach to AI governance, designed to address the needs and challenges of AI system use in Texas. In this article, we’ll explain what TRAIGA is and where it currently stands, break down the key provisions and regulatory implications of TRAIGA HB 149, and offer considerations for remaining compliant with emerging AI regulations.
What is TRAIGA?
In a significant legislative effort to regulate the use of AI systems within the state of Texas, State Representative Giovanni Capriglione introduced the Texas Responsible AI Governance Act in December 2024 as HB 1709; a revised version was later filed as House Bill 149 (TRAIGA HB 149). Modeled after other emerging AI frameworks such as the EU AI Act and Colorado's AI legislation, TRAIGA aims to establish a risk-based approach to AI governance, focusing on consumer protection, public safety, and the ethical and beneficial deployment of AI systems.
The bill provides for the creation of an advisory body within the Department of Information Resources, responsible for monitoring AI use in state government, flagging harmful AI practices, and recommending appropriate and timely AI policy updates. Additionally, TRAIGA mandates risk assessments, transparency reports, and consumer protection measures for AI businesses, reinforcing its focus on accountability and the protection of individual freedoms.
What is the Status of TRAIGA HB 149?
After Rep. Giovanni Capriglione spent two years crafting and revising the bill, TRAIGA HB 149 was filed on March 14, 2025, and subsequently passed on April 23, 2025, in a 146-3 state House vote. On Friday, May 23, 2025, the bill was unanimously approved in a Senate floor vote. If the bill proceeds as expected and is officially signed into law by Gov. Greg Abbott, the Act will go into effect on January 1, 2026.
Key Provisions and Regulatory Impacts of TRAIGA HB 149
Given Texas's sizeable population and notable tech sector, TRAIGA HB 149 carries significant regulatory implications, some of which are outlined in the key provisions below:
Focus on Government-Deployed AI Systems
TRAIGA places particular emphasis on AI systems deployed by Texas government agencies or entities, especially those classified as "high-risk." High-risk AI systems are defined as those that make or contribute to consequential decisions affecting individuals in areas such as employment, education, financial services, healthcare, and housing. Government agencies utilizing such systems are required to implement oversight mechanisms to prevent algorithmic discrimination and ensure transparency in decision-making processes.
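As a rough illustration of how a compliance team might operationalize that definition in practice (and not the statute's legal test), the Python sketch below flags systems for closer high-risk review. The domain list mirrors the areas named above; the function name and its screening logic are our own illustrative assumptions.

```python
# Illustrative only: a simplified screen for flagging systems that may fall
# under TRAIGA's "high-risk" definition. The logic below is an assumption
# for demonstration, not statutory text; confirm any classification with
# legal counsel.

# Consequential-decision areas named in the Act
CONSEQUENTIAL_DOMAINS = {
    "employment",
    "education",
    "financial services",
    "healthcare",
    "housing",
}

def may_be_high_risk(domain: str, influences_consequential_decision: bool) -> bool:
    """Flag a system for closer review when it makes or contributes to a
    consequential decision about an individual in a named domain."""
    return influences_consequential_decision and domain.lower() in CONSEQUENTIAL_DOMAINS

# Example: a model that contributes to hiring decisions should be flagged
print(may_be_high_risk("Employment", influences_consequential_decision=True))  # True
```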
Additionally, TRAIGA mandates that government agencies disclose to consumers, before the interaction occurs, when they are interacting with any AI system. TRAIGA also bans the creation of “social scores,” which refers to using AI algorithms to assign scores to individuals based on their social behavior or characteristics. Government entities are likewise prohibited from using AI systems that rely on biometric data or identifiers to identify specific individuals. Notably, the bill also prohibits “dark pattern” interactions, meaning user interfaces designed to manipulate or impair user autonomy.
Establishment of the Texas Artificial Intelligence Council
The Act includes a proposal for the creation of the Texas Artificial Intelligence Council, a regulatory body responsible for:
- Monitoring compliance with TRAIGA provisions and flagging harmful AI practices
- Issuing guidelines and advisory opinions on AI deployment
- Recommending regulatory and legislative updates to address emerging AI challenges
- Reporting annually on AI governance efforts
It is proposed that this council be composed of ten members, appointed by government officials, with expertise in fields including risk management, AI ethics, and governance. The council is envisioned as a central authority overseeing the ethical and responsible use of AI across the state and would have rulemaking authority to ensure AI systems comply with state laws.
Prohibitions on Harmful AI Applications
TRAIGA bans AI systems designed to manipulate human behavior in ways that incite harm or criminal activity, as well as those intended to unlawfully discriminate against protected classes. More broadly, it explicitly prohibits certain AI applications deemed to pose unacceptable risks, including:
- Social scoring systems that evaluate individuals based on personal behavior
- AI systems that manipulate human behavior through subliminal techniques
- Inference of sensitive personal attributes (e.g., race, religion, disability) without consent
- Emotion recognition technologies deployed without user consent
- Generation of unlawful explicit content or deepfakes
These prohibitions, among others, aim to safeguard individual rights and prevent discriminatory or manipulative practices through the development or deployment of AI systems.
Regulatory Sandbox Program
To foster innovation and facilitate the use of AI systems in Texas while ensuring safety, TRAIGA introduces an AI Regulatory Sandbox Program. The program aims to balance innovation with the protection of human rights, allowing developers to test AI systems in a supervised environment with temporary exemptions from certain regulatory requirements. Participation also offers safeguards against costly and time-consuming lawsuits.
The sandbox is designed to facilitate the development and testing of innovative AI applications, provide a controlled setting to assess potential risks, and encourage responsible innovation by offering guidance and oversight. Participation in the sandbox requires detailed reporting, and protections can be revoked if a project poses public harm or fails to meet obligations.
Key Considerations for Remaining Compliant with Emerging AI Regulations
Organizations operating in Texas should proactively prepare for compliance with TRAIGA by taking the following measures:
- Conduct AI Impact and Risk Assessments: Evaluate existing and developing AI systems to determine whether they fall under the "high-risk" category, and regularly assess their societal and ethical impacts on the individuals affected (a minimal sketch of such an assessment record follows this list).
- Implement Transparency Measures: Ensure that AI decision-making processes are transparent and that affected individuals are informed about how decisions are made.
- Establish Oversight Mechanisms: Develop internal policies and procedures to monitor AI system performance and address any instances of algorithmic discrimination.
- Engage with the AI Council: Stay informed about guidelines and recommendations issued by the Texas Artificial Intelligence Council to align practices with state expectations.
- Participate in the Sandbox Program: For innovative AI applications, consider applying to the Regulatory Sandbox Program to test systems in a controlled environment.
- Become ISO 42001 Certified: Achieving ISO 42001 certification validates your ethical AI management practices, strengthens risk management, and demonstrates your commitment to the responsible use and deployment of AI systems.
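To make the first step more concrete, here is a minimal sketch of what an internal assessment record might look like, assuming an organization tracks its AI inventory in code. The schema, field names, and annual review cadence are illustrative assumptions on our part, not anything TRAIGA prescribes.

```python
# A minimal sketch of an internal AI-system assessment record. The schema is
# an illustrative assumption; TRAIGA does not mandate any particular format.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class AISystemAssessment:
    name: str                       # internal system name
    purpose: str                    # what the system does and for whom
    is_high_risk: bool              # outcome of a high-risk screen like the one above
    discloses_ai_interaction: bool  # are users told they are interacting with AI?
    oversight_owner: str            # person or group accountable for monitoring
    last_reviewed: date             # date of the most recent impact assessment
    open_findings: list[str] = field(default_factory=list)  # unresolved risks

    def review_overdue(self, max_age_days: int = 365) -> bool:
        """True if the impact assessment is older than the review cadence."""
        return date.today() - self.last_reviewed > timedelta(days=max_age_days)

# Example entry for a hypothetical resume-screening model
record = AISystemAssessment(
    name="resume-screener-v3",
    purpose="Ranks job applicants for recruiter review",
    is_high_risk=True,
    discloses_ai_interaction=True,
    oversight_owner="AI Governance Committee",
    last_reviewed=date(2025, 1, 15),
)
if record.is_high_risk and record.review_overdue():
    print(f"{record.name}: schedule a new impact assessment")
```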
Even if you don’t do business within the state of Texas, taking these steps can help your organization navigate the evolving regulatory landscape and contribute to the responsible development and deployment of AI technologies. If you have additional questions about emerging AI regulations, ISO 42001 certification specifically, or any other AI compliance service, Schellman can help. Contact us today and we’ll get back to you shortly.
About Jason Lam
Jason Lam is a Senior Manager with Schellman based in New York City, NY. Prior to joining Schellman in 2015, Jason worked as an Enterprise Risk Management Associate at Freed Maxick CPAs, specializing in Sarbanes-Oxley compliance audits and Service Organization Controls (SOC) examinations. His experience includes serving clients in industries ranging from financial services and healthcare to call centers and data centers. Jason is now primarily dedicated to performing Service Organization Controls (SOC) examinations.