Facebook's policy on anti-COVID vaccine content didn't stop users from finding it, study suggests

Facebook's removal of some COVID-19 vaccine misinformation didn't drive down user engagement with that content, likely because the platform's architecture still let users find and interact with remaining anti-vaccine material and allowed groups to boost one another's content or repost deleted posts, suggests an analysis led by George Washington University researchers.

For the study, published today in Science Advances, researchers used the CrowdTangle content-monitoring tool to search for COVID-19 vaccine-related Facebook postings. They compared data from more than 200,000 posts to pages and groups created from November 15, 2019, to November 15, 2020, with posts to the same pages and groups from November 16, 2020, to February 28, 2022.

On November 18, 2020, Facebook announced its removal of "Stop Mandatory Vaccination," one of its largest anti-vaccine fan pages. On December 3, it announced that false claims about COVID-19 vaccines and accounts that repeatedly posted them would be deleted. Five days later, it broadened its policy to include vaccine misinformation in general. The researchers said the study is the first to scientifically evaluate the effects of Facebook's policy.

Policy shifted, not decreased, engagement

By February 28, 2022, Facebook had removed 49 pages (76% of them containing anti-vaccine content) and 31 groups (90% anti-vaccine). Anti-vaccine pages and groups were 2.1 times more likely than their pro-vaccine counterparts to have been removed. Five anti-vaccine groups (5%) changed their settings from public to private.

Post volumes on both anti-vaccine and pro-vaccine pages declined, but anti-vaccine volumes fell 1.5 times more than pro-vaccine volumes (relative risk [RR], 0.68). Posts in anti-vaccine groups also fell, with volumes declining 3.6 times more than those of their pro-vaccine counterparts.

But there were no significant changes in engagement with anti-vaccine page content (RR, 0.73), and the volume of false claims rose. Engagement with anti-vaccine groups was 33% higher than expected on the basis of prepolicy trends (RR, 1.33), but this was not significant relative to pro-vaccine group trends (RR, 1.22).

A robustness check on a second sample of anti-vaccine groups identified on July 28, 2021, showed that engagement counts were similar to those before the Facebook policy, which the study authors said raises the possibility that the platform's policy shifted, rather than drove down, engagement.

"This finding—that people were equally likely to engage with vaccine misinformation before and after Facebook’s extensive removal efforts—is incredibly concerning," senior author Lorien Abroms, PhD, said in a George Washington University news release. "It shows the difficulty that we face as a society in removing health misinformation from public spaces."

Largest rise in claims about vaccine side effects

The proportions of several topics that seemed to violate Facebook's community standards for pages rose after the policy was announced.

The largest increase occurred in allegations that COVID-19 vaccination results in severe adverse reactions (odds ratio [OR], 1.41). Increases were also noted in reports of vaccine-related hospitalization and death (OR, 1.23), promotion of alternative medicine (OR, 1.32), allegations of negative effects of vaccines on immunity due to toxic ingredients (OR, 1.24), and posts focusing on children (OR, 1.34).

Other proliferating topics included school vaccine mandates (OR, 2.02), other vaccine mandates (OR, 1.21), legislation opposing vaccination (OR, 1.06), and anti-vaccine medical advice (OR, 1.14). Anti-vaccine groups also discussed misleading information, particularly claims about vaccine safety, effectiveness, and mandates, more often than before the policy. "These increases were significantly smaller in magnitude than increases in provaccine groups, suggesting that they were not attributable to Facebook's policies," the researchers wrote.

The likelihood of a user engaging with posts containing low-credibility links was 2.7 times higher than expected based on prepolicy patterns, and the odds that groups had posts with such links were 1.2 times greater.

Compared with prepolicy trends, the probability that a post contained a link to a politically polarizing website climbed in both pages (OR, 1.35) and groups (OR, 1.51), and engagement with these links rose on anti-vaccine pages (OR, 2.37).

'Like,' 'angry' reactions boost exposure

"Prior work suggests that removal of antivaccine Facebook groups may have been associated with increased activity on other platforms, such as Twitter," they wrote. Also, "links to 'alternative' social media platforms—BitChute, Rumble, and Gab—as YouTube and Twitter [increased after they] began removing COVID-19 misinformation on 14 October 2020 and 1 March 2021, respectively."

Similarly, a simulation model built and calibrated to prepolicy data best reproduced postpolicy data when the researchers assumed that removals shifted demand for antivaccine content to remaining pages and groups rather than reducing it. "Together, these results suggest that content and account removals may not have dissuaded audiences from seeking out antivaccine misinformation," they wrote.

Rather, Facebook's system architecture, which determines how information flows through the platform, may have enabled anti-vaccine content producers and users to forge new paths to such content, such as through pages administered by anti-vaccine opinion leaders, discussion groups that coordinate to repost content, and interaction with posts to promote them in news feeds, the researchers said.

"Individuals that are highly motivated to find and share anti-vaccine content are just using the system the way it's designed to be used, which makes it hard to balance those behaviors against public health or other public safety concerns. You have to change the architecture if you want to create that balance," lead author David Broniatowski, PhD, said in the release.

The researchers added that Facebook's newsfeed algorithm, designed to promote content that has generated "meaningful social interaction" as measured by reactions such as "like" and "angry," could also have increased exposure to false claims.

"Just as the products of building architecture must conform to building codes to protect public health and safety, social media platform designers must consider the public health consequences of their architectural choices and perhaps even develop codes that are consistent with best scientific evidence for reducing online harms" the authors concluded.