
SB 53 would make California the first state to impose AI safety requirements on frontier model developers. Image Source: ChatGPT-5
Anthropic Backs California’s AI Safety Bill SB 53, Breaking with Tech Opposition
Key Takeaways: Anthropic’s Support for California AI Safety Bill SB 53
Anthropic endorsed SB 53, a California bill requiring frontier AI developers to publish safety frameworks and reports.
The endorsement is a major win for SB 53, which faces opposition from industry groups such as the Consumer Technology Association and the Chamber of Progress.
SB 53 targets extreme risks, including AI systems assisting in biological weapons creation or cyberattacks.
The bill would apply only to large AI labs generating more than $500 million in revenue, such as OpenAI, Google, Anthropic, and xAI.
Governor Gavin Newsom has not indicated whether he will sign the bill, after vetoing Senator Wiener’s previous AI safety bill, SB 1047.
Anthropic Endorses SB 53 Amid Industry Pushback
On Monday, Anthropic formally endorsed SB 53, a bill authored by California state Senator Scott Wiener that would establish the first state-level transparency requirements for major AI model developers.
“While we believe that frontier AI safety is best addressed at the federal level instead of a patchwork of state regulations, powerful AI advancements won’t wait for consensus in Washington,” the company wrote in a blog post. “The question isn’t whether we need AI governance — it’s whether we’ll develop it thoughtfully today or reactively tomorrow. SB 53 offers a solid path toward the former.”
Anthropic’s support marks a rare break from the broader tech industry, where groups like the Consumer Technology Association (CTA) and the Chamber of Progress are lobbying against the bill.
What SB 53 Requires from AI Developers
If passed, SB 53 would require leading AI labs such as OpenAI, Anthropic, Google, and xAI to:
Create and implement safety frameworks for frontier AI models.
Release public safety and security reports before deploying new systems.
Provide whistleblower protections for employees raising safety concerns.
The bill specifically targets “catastrophic risks” — defined as incidents causing at least 50 deaths or over $1 billion in damages. Its provisions focus on preventing AI from providing expert-level assistance in areas like biological weapons development or large-scale cyberattacks.
Legislative Status and Political Context
The California Senate has already approved a prior version of SB 53, but the bill still requires a final vote before it can reach Governor Gavin Newsom’s desk. Newsom has not stated a position, though he previously vetoed Senator Wiener’s last AI safety bill, SB 1047.
Pushback against state-level AI regulation has been strong. Silicon Valley investment firms like Andreessen Horowitz and Y Combinator opposed SB 1047, while the Trump administration has repeatedly threatened to block states from passing their own AI laws, citing risks to American innovation in the competition with China.
Critics argue that AI regulation should remain federal. Last week, Matt Perault, Andreessen Horowitz’s Head of AI Policy, and Jai Ramaswamy, the firm’s Chief Legal Officer, warned that state AI bills could violate the Commerce Clause of the U.S. Constitution by restricting interstate commerce.
Divisions Within the Tech Industry
Despite the opposition, some within the AI community argue state action is needed now. Anthropic co-founder Jack Clark wrote on X: “We have long said we would prefer a federal standard. But in the absence of that this creates a solid blueprint for AI governance that cannot be ignored.”
In contrast, OpenAI’s chief global affairs officer Chris Lehane sent a letter to Governor Newsom in August, urging him to avoid regulations that might push AI startups out of California. However, the letter did not mention SB 53 by name.
Former OpenAI head of policy research Miles Brundage sharply criticized Lehane’s stance on X, calling the letter “filled with misleading garbage about SB 53 and AI policy generally.” SB 53 is designed to apply only to the world’s largest AI companies, specifically those generating more than $500 million in annual revenue.
Expert Opinions on SB 53
Policy experts say SB 53 is a more modest approach than earlier proposals. Dean Ball, senior fellow at the Foundation for American Innovation and former White House AI policy adviser, argued in August that SB 53 has a realistic chance of becoming law. Ball, who opposed SB 1047, praised SB 53’s “respect for technical reality” and “measure of legislative restraint.”
Senator Wiener said the bill was shaped with input from an expert panel convened by Governor Newsom and co-led by Stanford researcher and World Labs co-founder Fei-Fei Li.
How SB 53 Fits Into Broader AI Governance
Most major AI labs, including OpenAI, Google DeepMind, and Anthropic, already publish internal safety reports. However, these reports are voluntary commitments and are sometimes delayed. SB 53 would turn them into legally binding requirements, with financial penalties for noncompliance.
Earlier in September, lawmakers amended the bill to remove a provision requiring third-party audits of AI companies — a controversial measure that tech firms argued was overly burdensome.
Q&A: California AI Safety Bill SB 53
Q: What is SB 53?
A: SB 53 is a California bill requiring frontier AI developers to publish safety frameworks, reports, and protections against catastrophic risks.
Q: Which companies would SB 53 apply to?
A: It targets major AI labs with over $500 million in revenue, including OpenAI, Anthropic, Google, and xAI.
Q: Why is Anthropic supporting SB 53?
A: Anthropic says AI governance is needed now, even if federal standards are preferable. The company called the bill a “solid blueprint.”
Q: Who is opposing SB 53?
A: Industry groups like the Consumer Technology Association and the Chamber of Progress, along with investors such as Andreessen Horowitz, oppose the bill, citing risks to innovation.
Q: What risks does SB 53 address?
A: It focuses on catastrophic risks, including potential misuse of AI in bioweapons creation and cyberattacks, not issues like deepfakes.
What This Means: State-Led AI Governance Gains Momentum
Anthropic’s endorsement of SB 53 is a pivotal moment in the fight over AI regulation. While many in Silicon Valley and Washington argue that only federal standards are viable, the bill demonstrates that states may step in when federal action lags.
If enacted, SB 53 would create the first legally binding transparency rules for frontier AI labs in the U.S. It would also test whether California — home to many of the largest AI developers — can enforce standards without driving companies out of the state.
The endorsement also underscores divisions within the AI industry itself: while some at OpenAI warn of regulatory overreach, Anthropic and some policy experts see state legislation as a necessary interim step.
With lawsuits, safety debates, and geopolitical concerns mounting, SB 53 could set an influential precedent for how the U.S. balances innovation with oversight in the AI era.
Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.