

Does Fact-Checking Work? Here’s What the Science Says

Communication and misinformation researchers reveal the value of fact-checking, where perceived biases come from and what Meta’s decision could mean


Meta plans to scrap its third-party fact-checking programme in favour of X-like ‘community notes’.

PA Images/Alamy Stock Photo

It is said that a lie can fly halfway around the world while the truth is getting its boots on. That trek to challenge online falsehoods and misinformation got a little harder this week, when Facebook’s parent company Meta announced plans to scrap the platform’s fact-checking programme, which was set up in 2016 and pays independent groups to verify selected articles and posts.

The company said that the move was to counter fact checkers’ political bias and censorship. “Experts, like everyone else, have their own biases and perspectives. This showed up in the choices some made about what to fact-check and how,” Meta’s chief global-affairs officer Joel Kaplan wrote on 7 January.

Nature spoke to communication and misinformation researchers about the value of fact-checking, where perceived biases come from and what Meta’s decision could mean.

Positive influence

In terms of helping to convince people that information is true and trustworthy, “fact-checking does work”, says Sander van der Linden, a social psychologist at the University of Cambridge, UK, who acted as an unpaid adviser on Facebook’s fact-checking programme in 2022. “Studies provide very consistent evidence that fact-checking does at least partially reduce misperceptions about false claims.”

For example, a 2019 meta-analysis of the effectiveness of fact-checking in more than 20,000 people found a “significantly positive overall influence on political beliefs”.

“Ideally, we’d want people to not form misperceptions in the first place,” adds van der Linden. “But if we have to work with the fact that people are already exposed, then reducing it is almost as good as it’s going to get.”

Fact-checking is less effective when an issue is polarized, says Jay Van Bavel, a psychologist at New York University in New York City. “If you’re fact-checking something around Brexit in the UK or the election in the United States, that’s where fact-checks don’t work very well,” he says. “In part that’s because people who are partisans don’t want to believe things that make their party look bad.”

But even when fact-checks don’t seem to change people’s minds on contentious issues, they can still be helpful, says Alexios Mantzarlis, a former fact checker who directs the Security, Trust, and Safety Initiative at Cornell Tech in New York City.

On Facebook, articles and posts deemed false by fact checkers are currently flagged with a warning. They are also shown to fewer users by the platform’s suggestion algorithms, Mantzarlis says, and people are more likely to ignore flagged content than to read and share it.

Flagging posts as problematic could also have knock-on effects on other users that are not captured by studies of the effectiveness of fact-checks, says Kate Starbird, a computer scientist at the University of Washington in Seattle. “Measuring the direct effect of labels on user beliefs and actions is different from measuring the broader effects of having those fact-checks in the information ecosystem,” she adds.

More misinformation, more red flags

Regarding Meta’s claims of bias among fact-checkers, Van Bavel agrees that misinformation from the political right does get fact-checked and flagged as problematic — on Facebook and other platforms — more often than does misinformation from the left. But he offers a simple explanation.

“It’s largely because the conservative misinformation is the stuff that is being spread more,” he says. “When one party, at least in the United States, is spreading most of the misinformation, it’s going to look like fact-checks are biased because they’re getting called out way more.”

There are data to support this. A study published in Nature last year showed that, although politically conservative people on X, formerly Twitter, were more likely to be suspended from the platform than were liberals, they were also more likely to share information from news sites that were judged as low quality by a representative group of laypeople.

“If you wanted to know whether a person is exposed to misinformation online, knowing if they’re politically conservative is your best predictor of that,” says Gordon Pennycook, a psychologist at Cornell University in Ithaca, New York, who worked on that analysis.

Implementation matters

Meta’s chief executive Mark Zuckerberg has said that in place of third-party fact-checking, Facebook could adopt a system similar to the ‘community notes’ used by X, in which corrections and context are crowdsourced from users and added to posts.

Research shows that those systems can also work to correct misinformation, up to a point. “The way it’s been implemented on X actually doesn’t work very well,” says van der Linden. He points to an analysis done last year which found that community notes on X were often added to problematic posts too late to reduce engagement, because they came after false claims had already spread widely. X vice-president of product Keith Coleman told Reuters last year that the community notes system “maintains a high bar to make notes effective and maintain trust”.

“Crowdsourcing is a useful solution, but in practice it very much depends on how it’s implemented,” van der Linden adds. “Replacing fact checking with community notes just seems like it would make things a lot worse.”

This article is reproduced with permission and was first published on January 10, 2025.
