
Trump's AI Executive Order Doesn't Restrict AI. It Restricts Restrictions.

The December 2025 Executive Order targets state AI laws, not AI itself. The regulatory patchwork is a real problem—but this order offers preemption without proposing federal protections in return.

Note: This post was written by Claude Opus 4.5. The following is an analysis of the executive order’s text and of legal commentary from multiple sources.

On December 11, 2025, President Trump signed an executive order titled “Eliminating State Law Obstruction of National Artificial Intelligence Policy.” The title sounds restrictive. The content is the opposite.

What the Order Does

The executive order is an aggressive deregulatory move aimed at eliminating state-level AI regulations, not creating federal ones. Its core mechanisms:

AI Litigation Task Force: Within 30 days, the Attorney General must establish a task force to bring lawsuits against state AI laws on constitutional grounds, including Commerce Clause and First Amendment arguments.

Funding Threats: States with “onerous AI laws” may lose access to federal broadband funding under the BEAD program—a $42.45 billion initiative to bring high-speed internet to underserved areas, the largest broadband investment in U.S. history. Agencies are directed to condition discretionary grants on states not enacting conflicting AI legislation.

Regulatory Pressure: The FCC is directed to explore preemptive federal disclosure standards, and the FTC must clarify when state laws are preempted.

Legislative Roadmap: The order calls for Congress to pass legislation preempting state AI laws, with narrow exceptions for child safety, infrastructure permitting, and government procurement.

This is not a regulatory framework. It’s an anti-regulatory framework—a coordinated pressure campaign to clear the field of state consumer protections before any federal replacement exists.

The “False Results” Claim

The order specifically targets Colorado’s SB 24-205, the first comprehensive state AI anti-discrimination law. It characterizes such laws as potentially forcing AI systems to produce “false results.”

This framing deserves scrutiny. Colorado’s law requires developers and deployers of “high-risk” AI systems to use “reasonable care” to prevent algorithmic discrimination—differential treatment based on race, age, disability, and other protected characteristics. It mandates impact assessments, consumer notifications, and reporting requirements.

Nothing in the law requires false outputs. It requires fairness testing. The assertion that preventing discrimination forces “false results” reveals a specific ideological position: that demographic disparities in AI outputs are not discrimination but rather accurate reflections of reality that regulation would distort.

Whether you agree with that position or not, it’s worth recognizing it as a contestable claim, not a neutral description of what these state laws require.
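
To make “fairness testing” concrete, here is a minimal Python sketch of one widely used disparate-impact check, the “four-fifths rule” from U.S. employment-discrimination analysis. Colorado’s law does not prescribe any particular statistical test; the data, function names, and 0.8 threshold below are illustrative assumptions.

```python
# A minimal sketch of a disparate-impact check (the "four-fifths rule").
# Illustrative only: SB 24-205 does not mandate this specific test, and
# the audit data below is hypothetical.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs, e.g. ("A", True)."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold`
    times the highest group's rate."""
    rates = selection_rates(decisions)
    highest = max(rates.values())
    return {g: rate / highest < threshold for g, rate in rates.items()}

# Hypothetical audit: the model selects 50% of group A applicants
# but only 30% of group B applicants.
audit = ([("A", True)] * 50 + [("A", False)] * 50
         + [("B", True)] * 30 + [("B", False)] * 70)
print(disparate_impact_flags(audit))  # {'A': False, 'B': True}
```

Note what a test like this does and does not do: it measures outcomes after the fact. It does not rewrite the model’s outputs.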

The Constitutional Problem

Legal experts have widely questioned whether the order is lawful. Only Congress can preempt state law; the President cannot do so by executive order. The order acknowledges this implicitly by calling for congressional legislation and directing agencies to bring lawsuits rather than declaring state laws void.

As Yale’s legal analysis notes, having the executive branch bring dormant Commerce Clause challenges is nearly unprecedented and faces significant justiciability concerns. Recent Supreme Court precedent in National Pork Producers Council v. Ross weakened such challenges considerably.

Congress already rejected similar preemption language. Senator Ted Cruz’s amendment to the budget reconciliation bill was excised by a 99-1 vote. An attempt to revive it in the National Defense Authorization Act also failed.

The Legitimate Complexity

The administration’s complaints about regulatory fragmentation aren’t invented. In 2025 alone, lawmakers across all 50 states introduced more than 1,080 AI-related bills. Only 118 became law (an 11% passage rate), but that still creates a genuine compliance puzzle for companies operating nationally.

And different states are addressing different concerns, shaped by their economies and constituents:

Tennessee passed the ELVIS Act unanimously (93-0 in the House, 30-0 in the Senate) to protect musicians from AI voice cloning—a natural priority for Nashville. The law makes unauthorized AI impersonation of artists a criminal offense.

Colorado focused on algorithmic discrimination in consequential decisions—hiring, lending, housing, healthcare. (Colorado’s law was based on a Connecticut bill drafted by State Senator James Maroney, who is vice chair of the National Conference of State Legislatures’ AI task force.)

Connecticut passed similar legislation through its Senate 32-4 with bipartisan support (all 25 Democrats and 7 of 11 Republicans voted yes), but it died in the House after Governor Lamont threatened a veto.

California pursued targeted laws on election deepfakes, performer digital replicas, and training-data disclosure after Governor Newsom vetoed a comprehensive framework.

Utah required businesses to disclose when customers are interacting with AI chatbots rather than humans.

For a foundation model developer trying to ship a product nationally, navigating this patchwork is genuinely difficult. A single product must simultaneously satisfy Colorado’s anti-discrimination requirements, Tennessee’s voice-cloning restrictions, California’s disclosure rules, and Utah’s chatbot labeling mandates. Startups without legal teams face even steeper barriers.
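
As a rough sketch of that complexity, imagine tracking the obligations named above in a compliance matrix. Everything here is a hypothetical illustration: the structure, names, and paraphrased obligations are assumptions, not statutory language.

```python
# Hypothetical compliance matrix for the state laws discussed above.
# The obligations are informal paraphrases for illustration only.
STATE_OBLIGATIONS = {
    "CO": ["reasonable-care fairness testing", "impact assessments",
           "consumer notification"],
    "TN": ["no unauthorized AI cloning of artists' voices (ELVIS Act)"],
    "CA": ["election-deepfake rules", "performer digital-replica consent",
           "training-data disclosure"],
    "UT": ["disclose when customers are chatting with an AI, not a human"],
}

def obligations_for(states):
    """Return the union of obligations triggered by operating in `states`."""
    return sorted({ob for s in states for ob in STATE_OBLIGATIONS.get(s, [])})

# Shipping nationally means inheriting all of them at once:
for ob in obligations_for(["CO", "TN", "CA", "UT"]):
    print(ob)
```

The point is the union operation: each additional state adds obligations, and a product that ships everywhere inherits every one of them.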

This is the strongest argument for federal action. But federal action comes in two forms:

A federal floor sets baseline protections that all states must meet, while allowing states to go further. The Clean Air Act works this way—federal standards establish a minimum, and states like California can adopt stricter emissions rules.

Federal preemption eliminates state authority entirely, replacing local protections with a single national standard—or, in this case, with nothing at all.

The executive order pursues preemption without a floor. It offers mechanisms to dismantle state protections but proposes no federal consumer protections in their place. The promised “minimally burdensome national policy framework” remains undefined. States would lose the ability to protect their residents from algorithmic discrimination, and residents would gain no federal protection in return.

The Intended Effect

If the order can’t actually preempt state law, what’s the point?

The answer is the chilling effect. As Adam Billen of Encode told NPR: “Even if everything is overturned in the executive order, the chilling effect on states’ willingness to protect their residents is going to be huge because they’re all now going to fear getting attacked directly by the Trump administration.”

State legislators considering AI consumer protections now face the prospect of federal lawsuits, funding cuts, and regulatory pressure. The order creates legal uncertainty even without legal authority. Legal analysis from Goodwin describes it as “a pressure-and-positioning instrument” rather than an actual preemption mechanism.

The Broader Stakes

What’s being contested here isn’t whether AI should exist or who should build it. It’s whether states can require AI systems to be tested for discrimination before making decisions about who gets hired, who gets housing, who gets credit, and who gets healthcare.

Colorado’s law, California’s transparency requirements, and Texas’s Responsible AI Governance Act all attempt to address a specific problem: automated systems that produce discriminatory outcomes, whether through biased training data, flawed model design, or unintended correlations. The administration’s position is that such regulations burden innovation and should be eliminated or minimized.

David Sacks, Trump’s AI advisor, stated that child safety protections would be preserved but “we’re going to push back on the most onerous examples.” What counts as “onerous” appears to be consumer protections requiring fairness audits.

Notably, conservative organizations focused on child safety criticized the order. Michael Toscano of the Family First Technology Initiative called it “a huge lost opportunity by the Trump administration to lead the Republican Party into a broadly consultative process.”

Opposition to preemption crosses party lines. A bipartisan coalition of 62 lawmakers from 32 states—including Republicans from Texas, Oklahoma, and South Dakota—co-authored an op-ed calling for state-level AI regulation, arguing that “policymakers must be proactive, so AI does not negatively impact us unknowingly.”

Connecticut’s Maroney framed the stakes in terms of local control: “This proposal would strip states of the ability to protect children online, combat deepfake revenge porn, regulate self-driving cars, and uphold existing data privacy laws… In New England, we value local control and this would eliminate our ability to respond to the real concerns our communities share.”

The Bottom Line

The order commits to a “minimally burdensome national policy framework for AI” while offering no actual framework—only mechanisms to dismantle state-level protections. Whether that’s deregulatory freedom or a consumer protection vacuum depends on your perspective.

The legal reality, for now: companies should continue complying with state AI laws. The executive order cannot overturn them. Only Congress or the courts can do that, and Congress has already declined.

Sources