A symbolic look at the growing clash between federal and state power as the U.S. debates who should set the rules for AI. Image Source: ChatGPT-5

White House Weighs Blocking State AI Laws as Safety Debate Grows

Key Takeaways: Federal Push to Override State AI Laws

  • President Donald Trump is weighing an executive order that would block states from enforcing AI regulations and establish a federal standard.

  • Draft order would create an AI Litigation Task Force, restrict federal funding to states with “onerous” AI rules, and push the FTC to issue guidance preempting state laws.

  • The move arrives amid record state AI activity, with all 50 states legislating on AI in 2025 and more than 1,000 AI bills introduced nationwide.

  • The push has sparked bipartisan resistance, dividing Republicans and drawing criticism from Democrats, governors, and tech safety advocates.

  • Tech leaders and PACs are lobbying heavily for federal preemption, while polling shows strong public demand for slower, more regulated AI development.

Trump Weighs Executive Order to Block State AI Regulations

President Donald Trump is considering an executive order that would bar states from implementing their own artificial intelligence (AI) regulations, according to reporting from The Hill and CNN. The draft order, viewed by both outlets, reflects the administration’s latest attempt to establish a single federal AI framework that would override an expanding patchwork of state rules.

The proposal would direct Attorney General Pam Bondi to form an AI Litigation Task Force charged with challenging state-level AI measures and pursuing legal strategies to restrict state authority. The draft also suggests conditioning certain federal funds on states’ willingness to avoid what the administration deems “onerous” AI laws.

The order further calls on the Federal Trade Commission (FTC) to issue guidance clarifying how federal consumer protection law applies to AI systems and where it may preempt conflicting state regulations.

“We remain in the earliest days of this technological revolution and are still in a race with adversaries for supremacy within it. Our national security demands that we win this race,” the draft reads. It argues that “American AI companies must be free to innovate without cumbersome regulation” and warns that “state legislatures have introduced over 1,000 AI bills that threaten to undermine that innovative culture.”

A White House official told both outlets that, until an order is formally announced, any discussion remains “speculation.”

State AI Lawmaking Has Exploded Across the Country

According to the National Conference of State Legislatures (NCSL), state AI activity has surged at unprecedented levels:

  • 2024 session: At least 45 states, Puerto Rico, the Virgin Islands, and Washington, D.C. introduced AI bills; 31 states enacted laws or resolutions.

  • 2025 session: All 50 states, Puerto Rico, the Virgin Islands, and D.C. introduced AI bills; 38 states enacted or adopted around 100 measures.

In many cases, states stepped in because Congress failed for years to pass comprehensive AI legislation. Early federal proposals stalled amid partisan division, disagreements over how strict regulations should be, and intense lobbying by technology companies. Meanwhile, agencies struggled to keep pace as AI spread into elections, workplaces, and public services. With federal action delayed — and in some cases redirected toward lighter-touch requirements favored by industry — states began crafting their own safeguards to address immediate risks to residents.

From deepfake bans to algorithmic hiring rules, many of these measures target high-impact applications where AI can affect public safety, elections, and civil rights — protections that state lawmakers argue are necessary to safeguard their own residents in the absence of timely federal action.

Critics worry that federal preemption could nullify these protections just as states finally begin to regulate AI’s most harmful uses.

Congressional Republicans Divided Over Federal Preemption

Trump and his allies have pushed for a federal moratorium on state AI laws since the start of his second term, aligning with Silicon Valley conservatives who favor minimal regulation.

Beginning in early 2025, Trump and his allies pursued several high-profile attempts to halt state AI enforcement. They first tried to insert a nationwide ban on state AI laws into the president’s major tax-cut bill, arguing that a single federal standard was essential to protect innovation. When that effort collapsed, GOP lawmakers shifted to a more formal proposal: standalone amendments that would have imposed a 10-year moratorium on state AI regulations — a measure strongly backed by industry-aligned groups and major Silicon Valley donors.

These proposals weren’t identical: the tax-cut bill language sought a broad, immediate ban, while the later amendments explicitly codified a 10-year freeze on state enforcement. Both efforts ultimately failed under bipartisan resistance, but the push for federal preemption continued to resurface throughout Trump’s second term.

Shortly after the Senate removed the 10-year moratorium from Trump’s domestic policy bill, the administration released a Silicon Valley-friendly AI action plan — a package of initiatives and policy recommendations centered on scaling back AI regulation to bolster U.S. competitiveness. The plan promoted lighter oversight, voluntary industry standards, and close federal–industry coordination, underscoring the administration’s preference for minimal constraints on AI companies.

The preemption effort has also repeatedly fractured Republicans and drawn criticism from across the aisle, triggering pushback from conservative populists, Democrats, and state-level leaders:

  • Rep. Marjorie Taylor Greene (R-Ga.) threatened to sink Trump’s tax-cut bill over the preemption language.

  • Punchbowl News reported this week that GOP leaders were again considering attaching a provision to the National Defense Authorization Act (NDAA) that would block state AI measures.

  • Sen. Brian Schatz (D-Hawaii) called the idea a “poison pill.”

  • Sen. Josh Hawley (R-Mo.) argued it “shows what money can do.”

  • Gov. Sarah Huckabee Sanders (R-Ark.) wrote on X: “This summer I led 20 GOP governors to pressure Congress to vote down its 10-year prohibition on state-level AI regulations — protecting Arkansas’ AI child-exploitation ban and other commonsense safeguards. Now isn’t the time to backtrack. Drop the preemption plan now and protect our kids and communities.”

  • Florida Gov. Ron DeSantis (R) also condemned the proposal as “federal government overreach,” warning that stripping states of authority would effectively subsidize Big Tech. “Stripping states of jurisdiction to regulate AI is a subsidy to Big Tech and will prevent states from protecting against online censorship, predatory applications that target children, intellectual property violations, and data-center intrusions on power and water resources,” he wrote on X.

By Tuesday, Trump publicly endorsed attaching the measure to the defense bill, asserting that “overregulation by the states” threatens U.S. competitiveness.

“We MUST have one Federal Standard instead of a patchwork of 50 State Regulatory Regimes,” he wrote on Truth Social, warning that without it, “China will easily catch us in the AI race.”

CNN: Safety Advocates Warn Preemption Could Remove Critical Protections

Reporting from CNN highlights growing alarm among AI safety experts, consumer protection groups, and state lawmakers who argue that blocking state laws could weaken safeguards in high-risk areas.

States have acted in the absence of comprehensive federal AI legislation, targeting issues such as:

  • AI-generated deepfakes harming election integrity

  • Algorithmic discrimination in hiring or housing

  • AI harms to children, including addictive systems and explicit content

A growing number of reports have also highlighted instances of AI-induced delusions and cases where AI systems have contributed to self-harm among vulnerable users, raising alarms for mental-health professionals. At the same time, companies such as OpenAI and Meta have scrambled to block minors from accessing adult or explicit AI content — reinforcing concerns that, without enforceable guardrails, young people could encounter harmful or age-inappropriate material.

Hundreds of organizations — including tech-worker unions, civil-rights groups, consumer-protection advocates, and academic institutions — also sent signed letters to Congress this week warning that federal preemption could remove vital safeguards.

J.B. Branch, a Big Tech accountability advocate at Public Citizen, cautioned that “AI scams are exploding, children have died by suicide linked to harmful online systems, and psychologists are warning about AI-induced breakdowns.” Blocking state regulations, Branch argued, could “shield Silicon Valley from responsibility” during a period of rapidly escalating consumer harm.

Tech Industry Lobbying Intensifies for a Single Federal Standard

Major technology companies and investors have launched an aggressive campaign calling for a unified federal AI framework that supersedes state laws.

According to AiNews.com reporting from September 2025:

  • Meta launched the American Technology Excellence Project, a super PAC investing “tens of millions” to oppose state AI regulation.

  • Andreessen Horowitz and OpenAI President Greg Brockman funded the Leading the Future PAC with over $100 million to resist new rules.

  • Industry leaders backed a proposal for a 10-year federal moratorium on state AI regulation (ultimately stripped from the legislation).

  • More than 1,000 AI-related bills were introduced across the U.S. in 2025 alone — prompting tech companies to argue that a “patchwork of state regulations” threatens innovation and U.S. advantage over China.

Trump’s draft order echoes these concerns, stating that a “minimally burdensome, uniform national policy framework” is essential to maintaining U.S. AI leadership.

AI Leaders Themselves Are Split on Regulation

Sam Altman: From “Regulate Me” to “Don’t Slow Us Down”

According to the Associated Press (AP), Sam Altman told Congress in May 2023 that government intervention would be “critical to mitigating the risks of increasingly powerful AI systems.”

At the time, he emphasized public anxiety about AI harms, saying:
“We understand that people are anxious about how it could change the way we live. We are too.”

But by May 2025, as Wired reported, Altman reversed course. Instead of pushing for outside oversight bodies, he aligned with Sen. Ted Cruz, warning that overregulation — including the EU’s approach and a California bill later vetoed — would be “disastrous.”

“We need the space to innovate and to move quickly,” Altman argued, calling instead for “sensible regulation that does not slow us down.”

Anthropic’s View: Federal Regulation Ideal — but States Needed as Backstop

In October 2024, Anthropic wrote that governments should act “urgently” within 18 months to implement AI policy before the window for proactive risk prevention closes.

Anthropic argued:

  • Federal legislation is the ideal vehicle for managing catastrophic AI risks.

  • A uniform national framework could strengthen U.S. diplomacy and draw on federal expertise in areas such as bioterrorism and cybersecurity.

  • But state regulation must serve as a backstop, given the slow pace of Congress.

  • Federal regulation, if enacted, could preempt state laws — but only once a strong national framework exists.

According to Inc., Anthropic CEO Dario Amodei told Anderson Cooper that he is “deeply uncomfortable” with unelected individuals like himself and Sam Altman making decisions with “wide-reaching consequences” without oversight. He reaffirmed his long-held position supporting “responsible and thoughtful regulation” of the technology.

Public Opinion Strongly Favors Safety and Slower Deployment

Polling published by AiNews.com in October 2025, via the Future of Life Institute (FLI), shows overwhelming public demand for stronger controls:

  • 73% favor slow, heavily regulated development of advanced AI.

  • 64% support an immediate pause until AI systems are proven safe.

  • Only 5% support fast, unregulated advancement.

This stands in stark contrast to the industry’s push for rapid AI deployment with minimal constraints; companies argue that heavier regulation could slow innovation, increase compliance costs, and weaken U.S. competitiveness.

Q&A: Federal vs. State Authority in AI Governance

Q: What would Trump’s draft executive order actually do?
A: It would direct the Attorney General to create an AI Litigation Task Force to challenge state AI laws, push the FTC to issue federal guidance that could preempt state rules, and potentially restrict certain federal funds to states that pass what the administration considers “onerous” AI regulations. The goal is to establish a single national standard that overrides state-level laws.

Q: Why have states been passing so many AI laws on their own?
A: States acted because Congress spent years unable to agree on comprehensive AI legislation, and federal agencies struggled to keep pace with emerging risks. As AI entered elections, workplaces, healthcare, and online platforms, state lawmakers crafted their own rules to address deepfakes, discrimination, child safety, and other immediate AI harms affecting their residents.

Q: What are the main concerns about blocking state AI regulations?
A: Safety advocates warn that removing state authority could eliminate critical consumer protections. They point to rising reports of AI-induced delusions, self-harm risks, deepfake abuses, discriminatory hiring algorithms, and minors accessing explicit AI content. Hundreds of organizations have urged Congress not to “shield Silicon Valley from responsibility” by wiping out state safeguards.

Q: Why do some tech leaders and lawmakers support a single federal standard?
A: Supporters argue that a patchwork of 50 state laws creates high compliance costs, slows deployment, and could weaken U.S. competitiveness against China. Companies like OpenAI, Meta, and major AI investors have pushed for national preemption, saying a uniform framework would give businesses clarity and accelerate innovation.

Q: How do AI companies themselves differ on regulation?
A: Top leaders are divided. Sam Altman shifted from urging oversight in 2023 to warning in 2025 that strict rules could be “disastrous” and slow innovation. Anthropic, by contrast, says federal regulation is essential for managing catastrophic AI risks — but supports state laws as a necessary backstop until Congress acts. Dario Amodei has also voiced discomfort with private companies making consequential decisions without external oversight.

What This Means: A Defining Battle Over Who Governs AI

The clash over Trump’s new draft order raises a deeper question: who should set the rules for AI? States, Congress, or the White House?

If the draft order advances:

  • Federal authority would override state AI laws across the country.

  • States would lose the ability to regulate AI harms, including child safety, discrimination, healthcare, deepfakes, and election risks.

  • Tech companies would gain greater certainty and reduced compliance burdens.

  • The U.S. could unify under a single national standard to compete with China — but with fewer immediate safeguards.

If states retain authority:

  • Important guardrails addressing real AI harms could remain in place.

  • The regulatory landscape would stay fragmented, complicating compliance.

  • Some states may advance protections faster than federal agencies.

  • Innovation advocates warn the U.S. could fall behind global competitors.

Rather than a simple choice between speed and safety, many experts argue the U.S. can take a balanced path — one where strong guardrails and clear accountability coexist with innovation. Well-designed rules do not necessarily slow progress; they can clarify expectations, reduce uncertainty, and prevent the types of AI harms that erode public trust. The challenge for policymakers is crafting a regulatory framework that protects citizens while still allowing America’s AI ecosystem to advance responsibly and competitively.

This debate is far from settled, and the decisions made in the next several months may define U.S. AI governance for a generation.

Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.
