From Promises to PR: The Fact-Checking Illusion

The EU’s Code of Practice on Disinformation and the Digital Services Act (DSA) were designed to hold online platforms accountable. But EDMO’s latest report shows that many of these companies are sticking to the bare minimum, or experimenting with alternative, often ineffective, moderation tools.

Since 2018, big names like Meta, Google, Microsoft, and TikTok have signed up to the Code. And from July 2025, it becomes legally binding under the DSA. Sounds promising, right? Not quite.

According to EDMO, only Google earned a “high” rating for supporting fact-checking efforts. Meta, Microsoft, and TikTok were ranked “low” or “partial.” These platforms might label misleading content, but rarely explain how effective those labels actually are. Worse — most don’t use any clear method to measure results.

Meta’s Move: From Experts to the Crowd

In January 2025, Mark Zuckerberg announced that Meta would phase out its cooperation with independent fact-checkers. Instead, it plans to rely on a “community notes” system — a model copied from X (formerly Twitter), where users themselves assess content accuracy.

Here’s the twist: that’s not technically illegal under the DSA. The law requires platforms to reduce online risks and be transparent about how they do it — but it doesn’t dictate exactly how fact-checking should happen.

Still, shifting from professional fact-checkers to crowd-sourced moderation raises serious concerns. According to EDMO, this move weakens the EU’s efforts to fight disinformation and could make platforms more vulnerable to abuse. The Mozilla Foundation went further, calling it a “betrayal” of the DSA’s goals.

So even if it’s legal, it may undermine the whole purpose of the law.

The Enforcement Gap: Law vs. Reality

One of the biggest issues isn’t what the law says; it’s whether anyone is enforcing it. EDMO warns that many platform reports are vague, incomplete, or unverifiable. There’s still no clear plan for how the Code will be enforced once it is folded into the DSA, and the European Commission hasn’t yet assessed Meta’s risk reports or presented follow-up plans for other companies.

Meanwhile, X has officially exited the Code, and Meta won’t confirm if it plans to stay in.

Here’s a fun fact: Wikipedia (yes, the volunteer-run encyclopedia) is the only major platform whose community-based moderation actually works in line with EU law. Other platforms? Not so much.

Regulation vs. Reality: What’s at Stake?

While the Code encourages cooperation with trusted fact-checkers, many platforms are doing the opposite. Meta is even considering pulling out of fact-checking entirely in the EU — just as the Code becomes legally binding.

This legal grey zone creates a loophole: companies can say they’re compliant while cutting essential tools that actually reduce the spread of disinformation. In practice, user protection ends up depending more on corporate goodwill than on enforceable law.

And because the law never requires professional fact-checking in the first place, more platforms might adopt crowd-sourced moderation. In theory, that boosts participation. In reality? It blurs accountability and weakens quality control.

Can the EU Actually Enforce Its Own Rules?

So here’s the real test: will regulators actually make these companies follow the law?

Formal proceedings are already underway against Meta and X. But without real deadlines or serious penalties, it’s hard to imagine these platforms changing course.

Google’s efforts are currently rated the best. But for most of Big Tech, the problem isn’t a lack of resources; it’s a lack of pressure. And without that pressure, even the strongest law can’t protect Europe’s digital space from the flood of disinformation.