New York Passes Landmark AI Safety Bill Targeting Frontier Models

Image Source: ChatGPT-4o
New York lawmakers have passed the RAISE Act, a bill aimed at preventing high-risk artificial intelligence systems from contributing to large-scale disasters. If signed into law, it would establish the country's first legally mandated transparency rules for developers of frontier AI models, the systems trained with vast computing resources and capable of powerful, hard-to-predict behavior.
The bill specifically targets advanced systems from companies like OpenAI, Google, and Anthropic. It requires these companies to disclose detailed safety practices and to report any incidents that pose public risks, such as harmful AI behavior or security breaches. The goal is to prevent disasters that kill or injure more than 100 people or cause more than $1 billion in economic damage.
The RAISE Act passed Thursday and now heads to Governor Kathy Hochul, who can sign it, request changes, or veto it.
A New Legal Standard for AI Transparency
The bill defines its scope narrowly, focusing on companies whose AI models were trained using more than $100 million in computing resources and are available to New York residents. These are the same systems that underpin major generative AI products like ChatGPT and Gemini.
If enacted, the law would require covered companies to:
- Publish comprehensive safety and security reports on their frontier models.
- Report AI safety incidents, including unintended model behaviors and cases of model theft.
Companies that fail to comply would face civil penalties of up to $30 million.
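To make the threshold concrete, here is a minimal sketch of the coverage test as the article describes it. The class, field names, and constants below are illustrative assumptions drawn from this summary, not language from the bill itself.

```python
# Hypothetical sketch of the RAISE Act coverage test as summarized above.
# All names and figures are illustrative, not taken from the statute's text.
from dataclasses import dataclass

COMPUTE_COST_THRESHOLD_USD = 100_000_000  # $100M training-compute trigger
MAX_CIVIL_PENALTY_USD = 30_000_000        # top-end penalty for non-compliance

@dataclass
class FrontierModel:
    name: str
    training_compute_cost_usd: float  # estimated spend on training compute
    available_in_new_york: bool       # offered to New York residents?

def is_covered(model: FrontierModel) -> bool:
    """True if the model would fall in scope as this article characterizes it:
    over $100M in training compute and available to New York residents."""
    return (model.training_compute_cost_usd > COMPUTE_COST_THRESHOLD_USD
            and model.available_in_new_york)

if __name__ == "__main__":
    model = FrontierModel("example-frontier-model", 150_000_000, True)
    if is_covered(model):
        print("Covered: publish safety reports and report safety incidents.")
```

Note how both conditions must hold: a model trained above the compute threshold but withheld from New York would, on this reading, fall outside the law's reach, which is exactly the withdrawal scenario critics raise later in this piece.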
Importantly, the bill does not require AI developers to implement so-called “kill switches,” nor does it hold post-training developers accountable for critical harms—two provisions that drew sharp criticism in earlier legislation like California’s vetoed SB 1047.
Designed to Avoid Chilling Innovation
The bill’s co-sponsor, New York State Senator Andrew Gounardes, said the legislation was carefully crafted to balance safety concerns with the need to preserve innovation, particularly for startups and academic researchers.
“The window to put in place guardrails is rapidly shrinking given how fast this technology is evolving,” Gounardes told TechCrunch. “The people that know [AI] the best say that these risks are incredibly likely […] That’s alarming.”
Nathan Calvin, vice president of State Affairs at Encode and a key architect of both the RAISE Act and SB 1047, said the bill was designed to sidestep past pitfalls. He emphasized that it targets only large-scale labs and avoids provisions that would impose technical mandates on small developers.
Still, tech industry backlash has been strong. Andreessen Horowitz general partner Anjney Midha called the RAISE Act “yet another stupid, stupid state level AI bill that will only hurt the US at a time when our adversaries are racing ahead,” echoing opposition from Y Combinator and other pro-innovation groups that opposed SB 1047.
Industry Response: Mixed and Cautious
Some AI companies have voiced concern about the bill’s breadth. Anthropic, which recently called for federal transparency rules, hasn’t taken an official position, but co-founder Jack Clark said the bill could unintentionally impact smaller firms.
Senator Gounardes pushed back, saying the bill’s thresholds were designed to ensure that small companies are exempt.
OpenAI, Google, and Meta declined to comment on the legislation.
A recurring criticism of the bill is that AI companies might choose to withdraw their most advanced models from New York rather than comply with state-specific rules—a concern that mirrors regulatory pushback seen in Europe. But Assemblymember Alex Bores, a co-sponsor of the bill, dismissed that as unlikely.
“I don’t want to underestimate the political pettiness that might happen,” Bores said, “but I am very confident that there is no economic reason for [AI companies] to not make their models available in New York.”
What This Means
The RAISE Act signals a shift in how U.S. policymakers are approaching AI regulation: not through sweeping, one-size-fits-all mandates, but with targeted rules aimed at the companies building the most powerful systems. If signed, it would be the first U.S. law to require transparency and safety reporting from the firms shaping the future of generative AI.
By tying legal obligations to the scale of model training and the reach of deployment, the bill introduces a measurable threshold for public accountability. It reflects growing concern from AI researchers, ethicists, and public officials that unchecked development of frontier models could carry systemic risks—including accidents, misuse, or cascading economic effects.
Just as important, the RAISE Act could pressure federal lawmakers to act. With states like New York stepping into regulatory gaps, the absence of national standards becomes more noticeable—and more politically urgent.
Whether other states follow New York’s lead or wait for federal guidance, the RAISE Act makes one thing clear: the era of voluntary self-regulation for the most advanced AI labs may be coming to an end.
As debates over AI safety intensify at the federal level, the RAISE Act could become a template for future regulation—or a flashpoint in the ongoing clash between AI oversight and open innovation.
Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.