Study: Truthful yet misleading Facebook posts drove COVID vaccine reluctance much more than outright lies did

A study published today in Science shows that unflagged, factual but misleading Facebook posts reduced intent to receive the COVID-19 vaccine 46 times more than false posts flagged by fact-checkers as misinformation, a finding the authors say points to the need to consider content's reach and impact rather than just its veracity.

The researchers, from the Massachusetts Institute of Technology (MIT) and the University of Pennsylvania (UPenn), surveyed thousands of participants about how headlines from 130 vaccine-related news stories influenced their intent to vaccinate. They also asked a separate group of people to rate the headlines on attributes such as plausibility and political bent.

Then the team extrapolated the survey results to predict the influence of 13,206 vaccine-related Facebook links in the first 3 months of the COVID-19 vaccine rollout (January to March 2021) on the vaccination intentions of the platform's roughly 233 million US users. 

"We posit that two conditions must be met for content to have widespread impact on people's behavior: People must see it, and, when seen, it must affect their behavior," the researchers wrote. "That is, we define 'impact' as the combination of exposure and persuasive influence."

46-fold difference between content types

Posts containing false claims about the COVID-19 vaccine (eg, that microchips are placed in vaccines) lowered vaccination intentions by 1.5 percentage points, and content suggesting that the vaccine was harmful to health also reduced vaccination intentions, regardless of whether the headline was true.

Flagged links to misinformation received 8.7 million views, accounting for 0.3% of the 2.7 billion vaccine-related link views. In contrast, stories that fact-checkers didn't flag but that still implied that vaccines were harmful—many of them from credible mainstream news outlets—were viewed hundreds of millions of times. One example of unflagged yet misleading content was a story about a rare case of a young, healthy person who died after receiving the COVID vaccine.

The links that fact-checkers flagged as misinformation were, when viewed, more likely to reduce vaccine intentions than unflagged links. But after each link's persuasive effect was weighted by its number of views, the effect of unflagged vaccine-skeptical posts eclipsed that of flagged falsehoods.

Unflagged vaccine-skeptical content reduced vaccination intentions by 2.28 percentage points per Facebook user, compared with 0.05 percentage points for flagged content, a 46-fold difference.
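The arithmetic behind that gap is straightforward view-weighting. The sketch below is a minimal illustration, not the authors' actual model: the per-link views and persuasive effects are hypothetical, and only the roughly 233 million US users, the 8.7 million flagged views, and the 2.28 and 0.05 percentage-point figures come from the article.

```python
# Minimal sketch of the study's impact logic: a link's total impact is its
# per-view persuasive effect weighted by how many users saw it.
# Per-link numbers below are hypothetical; only the user base, the 8.7 million
# flagged views, and the 2.28/0.05 figures come from the article.

US_FACEBOOK_USERS = 233_000_000  # approximate US user base cited above

# Each entry: (views, change in vaccination intent per view, in percentage points)
flagged_links = [
    (5_000_000, -2.0),    # flagged misinformation: more persuasive per view...
    (3_700_000, -1.5),    # ...but seen by comparatively few users (8.7M views total)
]
unflagged_links = [
    (50_000_000, -1.2),   # unflagged vaccine-skeptical story: less persuasive...
    (400_000_000, -1.0),  # ...but viewed hundreds of millions of times
]

def per_user_impact(links):
    """Sum view-weighted effects, then average over the whole user base."""
    return sum(views * effect for views, effect in links) / US_FACEBOOK_USERS

print(f"flagged:   {per_user_impact(flagged_links):+.2f} points per user")
print(f"unflagged: {per_user_impact(unflagged_links):+.2f} points per user")

# The article's reported per-user effects imply the 46-fold gap directly:
print(f"ratio: {2.28 / 0.05:.0f}-fold")  # 2.28 / 0.05 = 45.6, i.e., ~46-fold
```

Even when flagged links are assumed to be more persuasive per view, as here, their tiny share of total views leaves their aggregate per-user effect far smaller.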

Exposure had greater role than flagging status

Although flagged posts were more persuasive when viewed, differences in exposure almost entirely determined the ultimate impact.

For example, a single vaccine-skeptical Chicago Tribune article, "A healthy doctor died two weeks after getting a COVID vaccine; CDC is investigating why," was viewed by more than 50 million Facebook users (over 20% of Facebook's US users) and garnered more than six times as many views as all flagged content combined.

The authors concluded that flagged misinformation lowers vaccination intentions but, given its comparatively low rates of exposure, played a much smaller role in driving overall vaccine hesitancy than unflagged vaccine-skeptical posts.

"Content moderation focuses on identifying the most egregiously false information—but that may not be an effective way of identifying the most overall harmful content," lead author Jennifer Allen, PhD, said in an MIT news release. "Platforms should also prioritize reviewing content from the people or organizations with the largest numbers of followers while balancing freedom of expression."

Allen proposed crowdsourced content moderation as an approach to countering misinformation. "Crowdsourcing fact-checking and moderation works surprisingly well," she said in a UPenn news release. "That's a potential, more democratic solution."
