Image Source: ChatGPT-4o

Meta Refuses to Sign EU’s Voluntary AI Safety Guidelines

Key Takeaways:

  • Meta has declined to sign the EU’s voluntary GPAI Code of Practice, arguing it exceeds the scope of the AI Act.

  • The GPAI Code sets voluntary commitments around transparency, copyright, and safety for general-purpose AI models.

  • Models using over 10²⁵ FLOPs, like Meta’s Llama 4, are considered to carry systemic risk under the Code.

  • The EU AI Act takes effect August 2, mandating compliance from all general-purpose AI providers regardless of whether they sign the voluntary Code.

  • Meta faces heightened scrutiny in the EU, with recent fines totaling more than €997 million for data privacy and antitrust violations.

Meta objects to voluntary code tied to high-risk AI models

Two weeks before the EU AI Act comes into force, the European Commission released a voluntary framework—the GPAI Code of Practice—urging companies to commit to responsible development and deployment of large-scale AI models. The Code targets general-purpose AI (GPAI) models trained using massive computing resources, and calls on developers to increase transparency, uphold copyright protections, and implement enhanced safety protocols for high-risk systems.

Meta, however, has rejected the invitation to sign. The company argues the guidelines create unnecessary legal uncertainty and extend beyond what is required by the binding EU AI Act.

“We have carefully reviewed the European Commission’s Code of Practice for general-purpose AI (GPAI) models and Meta won’t be signing it,” wrote Joel Kaplan, Meta’s chief global affairs officer, in a LinkedIn post. “This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.”

“With today’s guidelines, the Commission supports the smooth and effective application of the AI Act,” said Henna Virkkunen, EVP for tech sovereignty, security and democracy. “By providing legal certainty on the scope of the AI Act obligations for general-purpose AI providers, we are helping AI actors, from start-ups to major developers, to innovate with confidence, while ensuring their models are safe, transparent, and aligned with European values.”

GPAI Code targets large-scale models trained with over 10²⁵ FLOPs

The Commission’s GPAI Code focuses on models trained using more than 10²³ floating point operations (FLOPs)—a threshold that includes nearly all modern frontier models. Developers whose models exceed 10²⁵ FLOPs are asked to meet additional commitments, given the systemic risks associated with their scale.

Over 30 large AI models, including those from Meta, OpenAI, Google, and Anthropic, fall into this upper range. Meta’s Llama 4 model, trained with an estimated 5 × 10²⁵ FLOPs, is one such example.
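To make the two-tier threshold concrete, here is a minimal illustrative sketch (not an official compliance tool) of how a model's estimated training compute maps onto the categories described above. The Llama 4 figure of 5 × 10²⁵ FLOPs is the estimate cited in this article; the thresholds are those stated in the Code and the AI Act.

```python
# Illustrative only: mapping estimated training compute (FLOPs) to the
# categories described in the GPAI Code, as summarized in this article.

GPAI_THRESHOLD = 1e23           # above this, a model is treated as general-purpose AI
SYSTEMIC_RISK_THRESHOLD = 1e25  # above this, additional "systemic risk" commitments apply

def classify(training_flops: float) -> str:
    """Return the Code's category for a model trained with the given compute."""
    if training_flops > SYSTEMIC_RISK_THRESHOLD:
        return "GPAI with systemic risk"
    if training_flops > GPAI_THRESHOLD:
        return "GPAI"
    return "below GPAI threshold"

# Llama 4's estimated 5e25 FLOPs places it in the upper tier:
print(classify(5e25))  # "GPAI with systemic risk"
```

Note that under this scheme the systemic-risk tier is a superset concern: any model above 10²⁵ FLOPs is already a GPAI model, and the higher threshold simply adds commitments on top.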

Despite its decision not to join the Code, Meta will still be subject to the binding provisions of the AI Act when it takes effect on August 2, 2025.

“The Code of Practice is a voluntary tool, but a solid benchmark,” said European Commission spokesperson Thomas Regnier. “If a provider decides not to sign the Code of Practice, it will have to demonstrate other means of compliance. Companies who choose to comply via other means may be exposed to more regulatory scrutiny by the AI Office.”

Meta points to broader industry concerns over EU regulation

Kaplan also referenced recent criticism from European businesses—including Siemens, Airbus, and BNP Paribas—that argue the EU’s approach could hinder innovation. In an open letter earlier this month, these firms urged EU leadership to pause implementation of the AI Act, citing potential economic impacts.

“We share concerns raised by these businesses that this over-reach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them,” Kaplan wrote.

Meta’s broader regulatory tensions in the EU

Meta’s resistance to the GPAI Code comes amid escalating tension with EU regulators. The company was fined €200 million (~$232.6 million) in April for violating Europe’s Digital Markets Act (DMA) with its “Consent or Pay” advertising model. In a follow-up letter, the Commission warned Meta that its approach remains non-compliant.

Earlier, in November 2024, Meta was fined an additional €797.72 million (~$927.19 million) for violating EU antitrust rules by linking Facebook Marketplace to the broader Facebook platform.

Fast Facts for AI Readers

Q: What is the GPAI Code of Practice?

A: A voluntary EU framework for general-purpose AI developers that promotes transparency, copyright protection, and AI safety for large-scale models.

Q: Why did Meta refuse to sign?

A: Meta says the Code introduces legal uncertainties and requirements beyond the scope of the binding EU AI Act.

Q: Does refusal exempt Meta from EU rules?

A: No. The EU AI Act takes effect August 2, and all GPAI developers must comply—whether or not they sign the voluntary Code.

Q: What is the FLOPs threshold for systemic risk?

A: 10²⁵ FLOPs. Models trained with computing power above this are expected to meet enhanced safety standards under the Code.

What This Means

The European Commission’s GPAI Code is designed to encourage responsible behavior from developers of the world’s most powerful AI systems. While nonbinding, the Code sets expectations that could influence regulatory scrutiny and future enforcement priorities. Meta’s refusal to participate reinforces growing friction between U.S. tech companies and European regulators—particularly around AI governance and platform accountability.

As the AI Act’s binding requirements take effect in August, Meta and other providers will face heightened oversight regardless of voluntary commitments. The debate over voluntary vs. binding rules signals a deeper reckoning: how to build public trust in systems too powerful to go unchecked—before consequences outpace control.

Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.
