Go-To Guide:
As 2025 draws to a close, AI regulation continues to accelerate across the globe. States in the U.S. and regions like the EU have been particularly active, creating a complex landscape for businesses and organizations leveraging AI. Here’s a summary of the five key trends to watch with an eye on 2026.
1. California’s AI Regulation: Safety, Transparency, HR Oversight, and Algorithmic Pricing
California has enacted numerous AI regulations this year. Over the next few years, businesses involved in AI will be subject to new laws on AI safety, transparency of AI training data, oversight of AI in employment, and the use of AI in pricing algorithms.
- AI Safety Act (Effective Date: Jan. 1, 2026):1 Establishes protections for employees from retaliation for certain reporting of AI-related risks or critical safety concerns to authorities, including whistleblower disclosures related to specific AI models. Establishes the CalCompute public AI cloud consortium under the Government Operations Agency to advance AI development and deployment.
- AI Training Data and Transparency Laws (Effective Date: Jan. 1, 2026):2 Requires covered providers to publish a high-level summary of training data used in generative AI systems, including sources, data types, IP/personal information, processing details, and relevant dates. Covered providers must offer watermarks and latent disclosures on AI-generated content, provide an AI detection tool, and ensure third-party licensees maintain these disclosure capabilities. Large platforms must label machine-readable provenance data, and modifications that disable disclosure functions are prohibited.
- HR and Automated Decision Systems (ADS) Regulations:3 Prohibit discriminatory impacts on protected groups, limit ADS use in background checks, require accommodation, hold employers responsible for vendors, and mandate four-year retention of ADS data.
- Prohibition on “Common Pricing Algorithms” (Effective Date: Jan. 1, 2026):4 Strengthens antitrust oversight by prohibiting the use or distribution of AI-driven “common pricing algorithms” to align or coerce pricing and lowers the pleading standard for civil claims under the Cartwright Act.
2. New York’s AI Regulation: Algorithmic Oversight and HR Innovations
New York has passed several AI bills that establish standards in multiple areas. New York City has already implemented Local Law 144, requiring employers with automated employment decision systems (ADS) to conduct bias audits and disclose automated decision-making in hiring. At the state level, several bills passed in the 2025 legislative session now await Gov. Hochul’s action:
- Responsible AI Safety and Education (“RAISE”) Act:5 Targets developers with high AI training costs, mandating safety policies and risk-mitigation frameworks and prohibiting deployment of certain models. Violations could reach $10 million for a first offense and $30 million for repeat offenses.
- Anti-Addiction Social Media Labels:6 Requires platforms using “infinite scroll” or similar designs deemed “addictive” to display warning labels, with fines for noncompliance.
- Synthetic Performers Disclosures:7 Mandates disclosure in commercial ads using AI-generated digital actors (synthetic performers), with penalties for violations.
- Expanded Right of Publicity:8 Strengthens consent requirements for using deceased persons’ voice or likeness, including AI-generated or synthetic media.
- LOADinG Act Expansion:9 Broadens oversight of automated decision-making in government, requiring state, local, and educational agencies to publicly inventory AI tools, make disclosures, and improve transparency.
3. Colorado AI Legislation Hits Pause
Colorado postponed the implementation date of its AI Act from February 1, 2026, to June 30, 2026.10 The Colorado AI Act11 establishes requirements for developers and deployers of certain “high-risk” artificial intelligence systems, including obligations related to risk management, disclosures, and mitigation of algorithmic discrimination.
Meanwhile, the recent federal executive order on the National AI Policy Framework signals an effort to establish a national AI framework and directs federal agencies to challenge, or state intentions to preempt, state AI laws deemed overly burdensome or inconsistent with that framework. This could affect how Colorado’s law is viewed going forward; see our GT Alert.
At this point, Colorado’s requirements still stand under state law. However, the EO signals a federal posture that may invite future federal preemption arguments, or DOJ litigation, contesting state AI regulations.
4. EU AI Act Update: Key Delay Under Discussion Amid Industry Pressure
Although the EU AI Act formally entered into force in August 2024, the European Commission is now reportedly preparing to delay implementation of its most onerous provisions by up to one year. The proposed postponement targets the high-risk AI system rules currently slated to take effect in August 2027; the move follows pressure from U.S. tech companies, Member States, and other stakeholders. The delay would be part of a broader “digital simplification” package that also includes easing other tech regulations. Some EU officials have floated a “stop-the-clock” mechanism, arguing that technical standards and guidance are not sufficiently mature to support compliance. Critics, however, warn that pushing back the rules may undermine the Act’s protections and credibility.
5. AI Companions and Therapists: Emerging Legal and Ethical Frontiers
States are moving quickly to regulate AI companions and therapeutic chatbots, focusing on safety, disclosures, and limits on AI-driven emotional or clinical support. The Illinois Wellness and Oversight for Psychological Resources Act12, effective since Aug. 4, 2025, bars unlicensed or “unregulated” AI systems from providing psychotherapy and restricts how licensed professionals may use AI: only for limited support functions, only with written disclosure and consent, and never to make therapeutic decisions, interact directly with clients, detect emotions, or generate treatment plans without human review. Exemptions apply for religious counseling, peer support, and public self-help materials.
New York’s AI Model Companion Law,13 if signed into law, would require AI companion models to include crisis-response protocols for self-harm, harm to others, or financial exploitation, along with clear notices that the system is non-human and information about crisis-service providers.
Utah’s Artificial Intelligence Policy Act14, as amended, which took effect on May 1, 2024, requires conspicuous disclosures for certain covered licensed professionals when users interact with GenAI, with stricter, mandatory disclosures for “high-risk” interactions involving sensitive data or significant personal decisions. Disclosures must be given verbally for oral exchanges and electronically for written ones, and the law expressly blocks companies from avoiding liability by blaming the AI itself.
States are also focusing on youth protection and high-risk interactions. California’s LEAD for Kids Act,15 if enacted, would prohibit using children’s data to train or fine-tune AI without appropriate consent and require developers and deployers to prevent unintended use by or on children. It also includes whistleblower protections.
Conclusion
2026 may continue these trends, with active states passing discrete bills regulating areas deemed to be higher risk and a potential slowdown on comprehensive AI laws.
1 CA SB 53 (AI Safety Act).
2 CA AB 2013 (Generative Artificial Intelligence Training Data Transparency Act); CA SB 942 (California AI Transparency Act); and CA SB 853 (supplements the AI Transparency Act), effective Jan. 1, 2027.
3 Amendments to Cal. Code Regs., tit 2, §§ 11008-11079.
4 CA AB 325.
5 NY S.6953-B / A.6453-B.
6 NY S.4505 / A.5346.
7 NY S.8420-A / A.8887-B.
8 NY S.8391 / A.8882.
9 NY S.7599-C / A.8295-D.
10 CO SB-004, which amends SB 24-205.
11 SB 24-205.
12 IL HB 1806.
13 NY A.B. 6767.
14 UT SB 149 as amended by SB 226.
15 CA AB-1064.