Here’s the uncomfortable truth: most of us believe we’re better at spotting fake news than we actually are.

Research shows that roughly 90% of people believe they’re above average at identifying misinformation, while three out of four actually overestimate their skills (Lyons et al., 2021). This overconfidence is particularly pronounced among young people.

The Flash Eurobarometer Social Media Survey 2025 found that 71% of 15–24-year-olds across the EU felt confident in recognising disinformation, the highest of any age group, yet 78% of the same cohort reported encountering fake news at least weekly. This dangerous gap between confidence and actual exposure means that people who think they’re immune to fake news are often the most likely to share it.

The good news? Professional fact-checkers use techniques that anyone can learn, and the most effective ones take seconds, not hours.

Why Your Brain Is Wired to Fall for It

Before we get to the how-to, it’s worth understanding why fake news works so well. Misinformation doesn’t succeed because people are stupid—it succeeds because it’s designed to exploit how we naturally process information.

Stories that make you angry, scared, or smugly validated spread further and faster than accurate reporting (Vosoughi et al., 2018). That’s not a bug in human psychology; it’s a feature. We’re wired to pay attention to threats and to information that confirms what we already believe. Algorithms recognise this, which is why your feed is full of content engineered to elicit emotional responses.

The platforms themselves amplify the problem. On TikTok, which has the highest prevalence of misinformation of any major platform at approximately 20% (EDMO, 2024), content goes viral based on engagement rather than accuracy. Facebook’s 2017 decision to give angry reactions five times more algorithmic weight than likes turbocharged the spread of toxic and false content. And on X (formerly Twitter), fake news spreads up to six times faster than real news (Vosoughi et al., 2018).

This isn’t about blaming yourself for getting fooled—it’s about recognising that you’re up against a system built to fool you.

The Red Flags That Take Seconds to Spot

Fake news stories share identifiable patterns, and with practice, spotting them quickly becomes second nature.

Language is the fastest tell. Sensationalist headlines designed to provoke outrage, ALL CAPS text, excessive punctuation (!!!), and urgency language like “Share before they delete this!” are classic indicators. If a headline makes you feel furious or terrified before you’ve even read the story, that’s your cue to slow down. Phrases like “They don’t want you to see this” or “You won’t BELIEVE what happened next” are engineered to bypass critical thinking by triggering emotional responses.
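These language tells are mechanical enough to sketch in code. The snippet below is a toy heuristic, not a real classifier: the phrase list and patterns are illustrative assumptions drawn from the examples above.

```python
import re

# Illustrative urgency phrases only; a real tool would maintain a much
# larger, regularly updated list.
URGENCY_PHRASES = [
    "share before they delete",
    "they don't want you to see",
    "you won't believe",
]

def headline_red_flags(headline: str) -> list[str]:
    """Return which of the language red flags described above are present."""
    flags = []
    lowered = headline.lower()
    # Words written entirely in capitals (3+ letters, e.g. "SHOCKING")
    if re.search(r"\b[A-Z]{3,}\b", headline):
        flags.append("all-caps words")
    # Excessive punctuation such as "!!!" or "?!?"
    if re.search(r"[!?]{2,}", headline):
        flags.append("excessive punctuation")
    if any(phrase in lowered for phrase in URGENCY_PHRASES):
        flags.append("urgency language")
    return flags

print(headline_red_flags("SHOCKING!!! Share before they delete this!"))
# → ['all-caps words', 'excessive punctuation', 'urgency language']
```

None of these flags proves a story is false; they’re prompts to slow down, exactly as the paragraph above suggests.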

Missing attribution is a massive red flag. Legitimate news outlets have clear bylines, dates, and named sources. If a story references “experts say” or “studies show” without linking to the actual study or naming the expert, be deeply sceptical. Anonymous or unnamed sources might be legitimate in investigative journalism, but they’re also a favourite tool of misinformation peddlers.

Check the URL carefully. One of the most effective misinformation tactics is creating fake websites that look like trusted outlets. Russia’s Doppelganger operation created clones of Der Spiegel, Le Parisien, and The Guardian (EU DisinfoLab, 2024). Look for misspellings in the domain (BBCnews.com.co instead of bbc.com), unusual extensions (.info, .xyz, .com.co), or domains registered very recently. A site claiming to be an established news outlet but registered last week is almost certainly fake.
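The URL checks above can also be sketched programmatically. This is a minimal illustration only: the trusted-domain set and suspicious-suffix list are assumed stand-ins, and a real checker would also consult WHOIS data for recently registered domains, which this sketch omits.

```python
from urllib.parse import urlparse

# Hypothetical watch-lists for illustration; a real checker would use a
# maintained trusted-outlet list and domain-registration records.
TRUSTED = {"bbc.com", "theguardian.com", "spiegel.de"}
SUSPICIOUS_SUFFIXES = (".com.co", ".info", ".xyz")

def domain_warnings(url: str) -> list[str]:
    """Flag the domain-level red flags described above for a given URL."""
    host = urlparse(url).netloc.lower().removeprefix("www.")
    warnings = []
    if any(host.endswith(s) for s in SUSPICIOUS_SUFFIXES):
        warnings.append("unusual domain extension")
    # A trusted brand name buried inside an untrusted host, e.g.
    # "bbcnews.com.co", is the classic lookalike pattern.
    if host not in TRUSTED and any(t.split(".")[0] in host for t in TRUSTED):
        warnings.append("possible lookalike of a trusted outlet")
    return warnings
```

Running it on `https://bbcnews.com.co/story` raises both warnings, while `https://www.bbc.com/news` raises none.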

AI-generated content has its own tells. For text, watch for repetitive phrasing, overly smooth and generic writing, and clusters of common AI phrases like “It’s important to note”, “delve into”, or “multifaceted”. For images, look for unnatural skin texture, inconsistencies in details such as ears and hair, and implausible architectural backgrounds. Video deepfakes often have blurred transitions and unnatural lighting. But here’s the catch: AI detection methods evolve fast, so what worked six months ago might not work now. The EU AI Act requires labelling of AI-generated content, but compliance is inconsistent.

The 30-Second Fact-Check Techniques That Actually Work

Professional fact-checkers don’t spend hours verifying every claim—they use a set of rapid techniques that you can copy.

The single most powerful tool is called lateral reading. Instead of reading deeply within a suspicious article, immediately open a new browser tab and Google the source name or the core claim. A Stanford study found that 100% of professional fact-checkers correctly identified which sources were credible using this method, often in seconds, while only 40% of university students managed it, and they took far longer (Wineburg & McGrew, 2019).

The key is this: don’t trust what a website says about itself. Check what independent, established sources say about it. Open Wikipedia. See if trusted news outlets have reported the same story. If only one obscure site or random social media account has this “exclusive,” it’s probably not real.

For images and videos, use reverse image search. Right-click any image and select “Search Google for image,” or use Google Lens on mobile. This instantly reveals whether the image has been used before in a different context. During the Valencia floods, countless images from previous disasters were shared as if they were current. A reverse image search would have caught them immediately.

The SIFT method gives you a framework: Stop, Investigate the source, Find better coverage, Trace claims to their original context (Caulfield, 2017). The critical first step—Stop—takes five seconds and prevents the reflexive sharing that drives misinformation’s spread. When you see information you automatically agree with, that’s exactly when you need to pause.

Finally, if something seems too perfect, too outrageous, or too convenient, it probably is.

Your 30-Second Verification Checklist

STOP (3 seconds): Before engaging, pause. Notice your emotional reaction. Anger, fear, or smug validation are red flags.

SCAN (5 seconds): Check for sensationalist language, missing author/date, suspicious URLs, urgency pressure, or claims that sound too perfect.

SEARCH SIDEWAYS (15 seconds): Open a new tab. Google the source name or core claim. Are established, independent outlets reporting this? If only one obscure source has it, be sceptical.

CHECK THE IMAGE (7 seconds): Right-click and “Search Google for image” or use Google Lens. Has this image been used before in a different context?

If in doubt, don’t share. Waiting costs nothing. Sharing false information has real consequences.

The 30-second rule isn’t intended to turn you into a professional fact-checker. Instead, it encourages developing a habit: pausing before sharing, opening a second tab to verify, and asking yourself, “who benefits if I believe this?” In those few seconds between viewing and sharing a post, this reflex becomes your most valuable tool.

References

Caulfield, M. (2017) Web literacy for student fact-checkers. Pullman, WA: Washington State University. Available at: https://webliteracy.pressbooks.com/ (Accessed: 11 February 2026).

EU DisinfoLab (2024) Doppelganger: media clones serving Russian propaganda. Brussels: EU DisinfoLab. Available at: https://www.disinfo.eu/doppelganger (Accessed: 11 February 2026).

European Commission (2025) Flash Eurobarometer 545: social media survey 2025. Brussels: European Commission.

European Digital Media Observatory (2024) What our first measurement says about disinformation on major platforms in Europe. Paris: Science Feedback. Available at: https://science.feedback.org/first-measurement-disinformation-major-platforms-europe/ (Accessed: 11 February 2026).

London School of Economics (2024) How X fuelled UK riot misinformation. LSE Research for the World. Available at: https://www.lse.ac.uk/research/research-for-the-world/society/x-undermined-democracy-uk-riots (Accessed: 11 February 2026).

Lyons, B.A., Montgomery, J.M., Guess, A.M., Nyhan, B. and Reifler, J. (2021) ‘Overconfidence in news judgments is associated with false news susceptibility’, Proceedings of the National Academy of Sciences, 118(23), e2019527118. doi: 10.1073/pnas.2019527118.

Vosoughi, S., Roy, D. and Aral, S. (2018) ‘The spread of true and false news online’, Science, 359(6380), pp. 1146–1151. doi: 10.1126/science.aap9559.

Wineburg, S. and McGrew, S. (2019) ‘Lateral reading and the nature of expertise: reading less and learning more when evaluating digital information’, Teachers College Record, 121(11), pp. 1–40. doi: 10.1177/016146811912101102.
