Videos featuring politicians and celebrities look convincing — until you learn they were never involved at all. Behind these ads are not public institutions but highly coordinated criminal networks. So is the EU’s Digital Services Act really protecting people from falling into financial traps?
Deepfake Politics and Fake Profits
It often starts with a familiar face. Germany’s Defence Minister Boris Pistorius appears on Facebook, assuring viewers that a new government programme can guarantee profits for every citizen. In Ireland, politician Heather Humphreys promotes Quantum AI, claiming it can bring families “financial freedom.”
There’s only one problem: none of these people ever said these words. The videos are fabricated using advanced AI tools — voice cloning, deepfake visuals, and synthetic scripts — all produced by cybercriminals.
Such videos are just the entry point to complex investment scams powered by artificial intelligence, fake news portals, and fabricated endorsements. One click on an ad triggers a cascade of calls from “financial advisors” pushing people to invest — sometimes a few hundred euros, sometimes their entire savings. As long as the online dashboard shows rising profits, victims believe everything is real. The truth only emerges when they try to withdraw their money.
A Growing Crisis: Billions Lost Across Europe
According to EU tech commissioner Henna Virkkunen, people in Europe lose over €4 billion each year to fraudulent financial ads. In Ireland alone, victims have handed over an estimated €100 million since 2021. Portuguese police investigated more than 3,000 cryptocurrency-related scams between 2022 and 2024.
This has moved far beyond simple cybercrime. Europol warns that financial scams have reached an “unprecedented scale,” and criminal groups that once focused on drug or arms trafficking are shifting to digital fraud. Norwegian prosecutor Andre Hvoslef-Eide even notes that, for some networks, scam revenues have now overtaken profits from narcotics.
In Germany, prosecutors say financial losses run into the billions — with so many cases that investigators must prioritise only the ones with the strongest evidence. Ireland reports a 21% rise in cases within just three months, with victims collectively losing tens of millions of euros.
“Trusted Flaggers”: A Tool With Limited Power
The EU introduced a system of trusted flaggers — organisations authorised under the Digital Services Act (DSA) to report illegal content directly to platforms with priority handling. In theory, it’s a powerful tool. In practice, its impact is limited.
Valentine Auer from the Vienna Telecommunications Institute explains that she identifies thousands of fraudulent ads on Facebook and Instagram every day. Some disappear quickly when flagged individually, but Meta often stops responding once the volume increases. Adding to the problem, each report can contain only 20 links at a time, making the process painfully slow.
Flaggers in Lithuania and Greece report similar experiences. Debunk EU alone has detected over one million suspicious ads, viewed 1.4 billion times, yet many remain online for weeks.
Platforms as Unwilling Gatekeepers
Meta — which owns Facebook and Instagram — says it fights fraud and bans “deceptive claims” in its advertising rules. But the reality looks different. Scam ads often reappear in slightly edited versions, and moderation is mostly automated.
With 250 million users in Europe, Meta is the region’s biggest advertising platform, and ads account for 98% of its global revenue. In 2024, the company claimed its ads brought “€213 billion of added value” to the EU economy. Meanwhile, thousands of Europeans were losing their life savings through sponsored posts featuring deepfaked politicians and billionaires.
Google has had slightly better results: in 2024 it introduced mandatory verification for advertisers posting financial content in Ireland. Scams on Google dropped — but surged on Facebook. Meta still does not require advertiser verification in the EU, even though it does in the UK, India, and Australia.
The DSA: A European Law Without Bite?
The Digital Services Act, introduced in 2022, was meant to be a breakthrough in regulating Big Tech, including fines of up to 6% of global revenue. Yet its most important clause — a ban on forcing platforms to monitor content proactively — has created a huge loophole.
This rule was designed to prevent censorship and protect free speech. But in practice, it means platforms have no duty to remove illegal content unless someone reports it. When they do react, it’s usually too late.
“It’s useless,” says Paul O’Brien from Bank of Ireland. “A victim rarely remembers the exact ad they saw, and months later it’s impossible to trace.”
Banks and consumer organisations are calling for changes in other EU laws, such as financial services regulations, to require mandatory verification of financial advertisers. For now, the Commission has not moved forward, and neither the Parliament nor EU governments are pushing for it.
Fighting a Losing Battle
From the perspective of those on the front lines, the trusted flagger system is a band-aid on a deep wound. The EU does not have a sufficient number of trusted flaggers, and only a handful specialise in financial scams. This is nowhere near enough to effectively counter industrial-scale fraud.
Criminal networks, meanwhile, use automation and AI to generate dozens of ad variations, each running for only a few hours before disappearing — or being replaced instantly once removed.
It’s a stark reminder that in today’s digital ecosystem, profit still outweighs protection — and European citizens are paying the price.