While designed to balance safety and innovation, Europe’s AI Code of Practice is facing criticism. Though it was intended to complement the AI Act as a voluntary guide, some now fear its non-binding nature could be exploited as a loophole.

For over a year, a fierce battle raged over the document. The European Commission, positioning itself as pro-innovation, had to navigate between aggressive industry lobbying and pressure from civil society groups and MEPs, who feared the code was becoming too lenient – undermining the very AI Act it was supposed to support.

A question loomed in the background: Is it even possible to create meaningful regulation that companies adopt voluntarily – without it becoming a toothless gesture?

Meta says no – and that’s not a bad thing

While firms like OpenAI and France’s Mistral have signed on, Meta – the tech giant behind Facebook and Instagram – refused. Paradoxically, this is not a sign of failure. It is evidence that the code wasn’t entirely bent to industry interests.

Meta’s rejection reveals that the document still contains commitments some firms are not willing to make. In today’s power dynamics, that can be seen as a regulatory success.

Meta has a long record of testing EU laws to their limits, often undermining them publicly. Its playbook is well established: introduce a controversial feature, provoke public backlash, frame EU scrutiny as anti-innovation, threaten to pull services, and finally tweak the system just enough to give the appearance of compliance, while sidestepping the spirit of the rules.

Zuckerberg plays hardball

Meta’s controlling shareholder Mark Zuckerberg – whose company is now valued at roughly €1.5 trillion – has made no secret of his goal. He wants to win the global AI race, even if it means clashing with European law.

Reports suggest that in a bid to build a “superintelligence” team, Meta has offered top AI talent pay packages of up to €260 million – sums that 99.996% of European companies could never match.

In the EU, Meta is fighting battles on several fronts. In June 2024, it paused its AI rollout after a wave of complaints over how it handles user data. When it resumed in April 2025 under pressure, the opt-out mechanisms for users were only minimally adjusted. The pattern held: push boundaries, avoid accountability.

Meta is also locked in a dispute with the European Commission over its controversial “pay or consent” ad model, which has already resulted in a €200 million fine and may trigger further sanctions. Ongoing investigations are probing potential violations of platform rules and alleged cooperation with sanctioned Russian publishers.

Is Europe ready for a power struggle?

Meta has not signed the code, but it must still comply with the AI Act by August 2. The problem is that the EU’s track record on enforcing tech rules against global giants remains shaky. Penalties, though increasingly hefty, have so far failed to deter repeat violations.

Whether the code becomes a meaningful regulatory tool or just political window dressing will depend on how it is implemented. Voluntary ethical codes, lacking enforcement mechanisms, have historically proven weak. The richest firms will sign on – if the language is vague enough to avoid legal risk.

This makes early criticism from NGOs all the more relevant. Even during the consultation phase, watchdogs warned that the code had been watered down under industry pressure. In their view, the Commission allowed the tech sector to shape the document too much, potentially hollowing out key provisions of the AI Act.

More than just technology at stake

The clash over the code is about more than just technical standards. It is a test of the EU’s entire model for regulating technology – one that tries to balance fundamental rights, the public interest, and innovation. However, it is becoming increasingly clear that these goals do not always align neatly.

Meta is not here to negotiate. It is playing for dominance. Zuckerberg’s strategy is built on political pressure, media spin, and legal loopholes. There’s even talk of appealing to Donald Trump to push the EU to soften its rules. In this narrative, EU regulations are framed as trade barriers, creating another front in a transatlantic economic war.

If Europe truly wants to lead in responsible AI, it must do more than legislate. It must enforce. Especially when the opponent is a trillion-euro giant.

Allowing global companies to openly defy codes of conduct or strong-arm regulators into compromises would undermine not just the Commission’s credibility, but the entire AI Act itself.