
Countermeasures: Key Findings and Gaps From Empirical Research

by Jon Bateman, Elonnai Hickok, Jacob N. Shapiro, Laura Courchesne, and Julia Ilhardt, Sep 24, 2021

The article was originally published by the Carnegie Endowment for International Peace. The full article can be found here.

[Image: Fake News (Carlos Amarillo/Shutterstock.com)]

Over the past few years, institutions around the world have been scrambling to do more against malign influence operations. Major social media platforms have announced expansions of content moderation efforts, more takedowns of harmful influence campaigns, and many new product features designed to counter influence operations.1 Governments have proposed, amended, or implemented dozens of new laws addressing misinformation.2 And hundreds of civil society organizations are now dedicated to addressing influence operations.3 Yet the efficacy of these efforts remains unclear. We know relatively little about what kinds of countermeasures actually work to prevent influence operations, limit their spread, or curb their harmful impacts.

There is a dearth of rigorous, independent empirical research on influence operations countermeasures. Social media platforms rarely reveal the results of their internal studies, while academic research remains sparse, scattered across disciplines, and not synthesized for policymakers.4 Policymakers are therefore left to make most decisions based on anecdote or intuition, and such decisions are more likely to be ineffective, costly, or counterproductive than those grounded in evidence.

To assess what is known about countermeasure efficacy and to identify remaining gaps, the Partnership for Countering Influence Operations commissioned Princeton University’s Empirical Studies of Conflict Project to carry out a systematic review of studies. Laura Courchesne, Julia Ilhardt, and Jacob N. Shapiro sought out academic studies that (1) examined a specific group of people who viewed real or experimental influence operations, (2) compared measurable outcomes (behaviors or beliefs) of subjects exposed to a countermeasure versus those who were not, (3) met minimum standards of statistical credibility, and (4) had relevance for real-world policy. They identified 223 studies published since 1972 that met all four criteria. The research is presented in the article “Review of Social Science Research on the Impact of Countermeasures Against Influence Operations,” published in the September 2021 issue of Misinformation Review.5
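For readers who prefer to see the screening logic concretely, the four criteria amount to a conjunctive filter over candidate studies. The sketch below is purely illustrative: the Study fields and the example coding are hypothetical stand-ins, not the review's actual codebook or screening procedure.

```python
from dataclasses import dataclass

@dataclass
class Study:
    """One candidate study, coded against the review's four criteria.
    Field names here are hypothetical, for illustration only."""
    exposed_group_examined: bool   # (1) subjects viewed real or experimental influence operations
    outcomes_compared: bool        # (2) measurable outcomes, countermeasure group vs. control
    statistically_credible: bool   # (3) meets minimum standards of statistical credibility
    policy_relevant: bool          # (4) has relevance for real-world policy

def include(study: Study) -> bool:
    # A study enters the dataset only if it meets all four criteria.
    return all([
        study.exposed_group_examined,
        study.outcomes_compared,
        study.statistically_credible,
        study.policy_relevant,
    ])

# Example: two coded candidates, one of which fails criterion (3).
candidates = [
    Study(True, True, True, True),
    Study(True, True, False, True),
]
dataset = [s for s in candidates if include(s)]
print(f"included {len(dataset)} of {len(candidates)} candidates")  # included 1 of 2
```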

The review confirmed the value of fact-checking. But it also highlighted enormous gaps in empirical knowledge about the most widely used and frequently proposed kinds of countermeasures.

KEY INSIGHTS

FACT-CHECKING

The vast majority of studies in this dataset focused on various forms of fact-checking that occur in close proximity to the influence operation itself. Examples include a news article that reports but then refutes a politician’s false claim or a social media platform that adds source labels to help users better evaluate misleading content. Overall, the literature suggests that fact-checking can reduce the impact of false information on individuals’ beliefs as well as their propensity to share misinformation and disinformation with others.

There is also promising, though less conclusive, evidence on the efficacy of countermeasures that are similar to fact-checking. These include “prebunking” (in other words, preemptively refuting weakened versions of misinformation narratives), media literacy prompts (for example, encouraging people to think about accuracy), and crowdsourcing the identification of misinformation.

DESIGN FACTORS

That said, fact-checking and related countermeasures are not all equal. Specific design choices appear to play a significant role in their efficacy. For example, one study found that video fact-checks were more effective than long-form article fact-checks.6 Another found that providing information on the trustworthiness of sources made refutations of misinformation more effective.7

Fact-checks appear to work best when they are both prominent and precise. YouTube’s state media label became more effective when the color was changed to better stand out from the background.8 Meanwhile, several studies suggest a “tainted truth” effect: if warnings about misinformation are themselves overstated, then people may reject even accurate warnings in the future and become more distrusting in general.

While these studies seem to offer several clear policy lessons, it is difficult to generalize from a small body of research. To fully assess efficacy, specific fact-checking efforts and similar countermeasures should be tested in their unique contexts whenever possible.

KEY GAPS

Despite the encouraging data on fact-checking, this review indicated that we know little about countermeasures overall. Most key countermeasures have yet to be studied in a rigorous way. The high-quality studies that do exist have significant methodological limitations that reduce their relevance to current policy debates.

SUBSTANTIVE GAPS

There is virtually no highly credible research on many of the policies most frequently proposed by experts or implemented by platforms, governments, or civil society. Understudied policy areas include the following:

  • Deterring or disrupting bad actors—for example, deplatforming, takedowns, sanctions, indictments, or public attributions.
  • Enhancing content moderation—for example, widening the scope of community standards prohibitions or adding more human or artificial intelligence enforcement capability.
  • Adjusting recommendation algorithms—for example, suppressing sensational content or giving users more choice over their algorithm.
  • Limiting microtargeting—for example, improving users’ data privacy or restricting advertisers’ microtargeting options.
  • Building societal trust—for example, strengthening journalistic institutions, incorporating media literacy into educational curricula, or bolstering confidence in election processes.
  • Altering incentives—for example, taking antitrust enforcement actions against platforms or demonetizing bad actors on platforms.
  • Informing policymakers—for example, expanding data and information sharing, improving research, or creating more international coordination.

Additionally, no study in our dataset examined the efficacy of redirection, an important countermeasure similar to fact-checking.9 Redirection occurs when platforms invite users to access authoritative content in another location, either on or off the platform. (Unlike labeling, redirection requires the user to click through to the corrected content.) Redirection has become by far the most common product feature that platforms use to combat influence operations.10

METHODOLOGICAL GAPS

Several aspects of these studies raise questions about their applicability to real-world situations. Most studies we reviewed (194 out of 223, or 87 percent) involved survey experiments, lab experiments, or simulated social media environments; in our dataset, these are coded as “experimental” or “simulated social media.” Additionally, only a small fraction of studies examined how countermeasures mitigated the impact of misinformation on actual offline behavior (6 percent) and/or online behavior (2 percent). The vast majority of studies looked instead at how countermeasures affect people’s beliefs, knowledge, or stated behavioral intentions.
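As a quick sanity check, these shares are plain proportions of the 223-study dataset. A minimal sketch of the arithmetic follows; note that the two behavioral-outcome counts are back-calculated from the stated percentages and are therefore approximate.

```python
# Composition of the 223-study dataset, restating the figures above.
TOTAL = 223

composition = {
    "experimental or simulated settings": 194,                      # reported count
    "offline behavioral outcomes (approx.)": round(0.06 * TOTAL),   # ~6 percent of studies
    "online behavioral outcomes (approx.)": round(0.02 * TOTAL),    # ~2 percent of studies
}

for label, count in composition.items():
    print(f"{label}: {count}/{TOTAL} = {count / TOTAL:.0%}")
# experimental or simulated settings: 194/223 = 87%
```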

The studies in our dataset overwhelmingly involved U.S. subjects, who were usually recruited from universities or Amazon’s Mechanical Turk, a crowdsourcing marketplace. This makes it harder to generalize the research findings to other countries or even to the U.S. population as a whole, because different populations sometimes react differently to the same countermeasures. For example, one study found significantly different efficacy in American versus Australian voters.11

Finally, platform- or venue-specific studies tend to focus on Facebook, Twitter, and traditional journalistic outlets. While these are all important channels in the spread of influence operations and highly salient to policymakers, there has been very little focus on other major platforms such as YouTube or Instagram. Research has also neglected newer, smaller, and/or non-U.S.-based platforms, as well as multiplatform countermeasures (like those associated with the Global Internet Forum to Counter Terrorism).

LOOKING AHEAD

Empirical research on influence operations countermeasures is still nascent. Only 10 percent of the studies (22 out of 223 meeting our selection criteria) predated 2010. Thankfully, research activity is rapidly growing as democracies have become more concerned about influence operations and funders have dedicated more resources to studying the problem. Sixty-two percent of the studies in our dataset have been published since 2019.

Nevertheless, the research gaps identified in this review will not be remedied easily. They stem from multiple structural factors including lack of data access, inadequate funding, misaligned professional incentives, disciplinary silos, and nonstandard terms and methodologies.12 Addressing these gaps will require new models of collaboration that bring together academic, platform, and government capabilities.13 Only then will we be able to develop a firm foundation for evidence-based policy decisions and to systematically track the efficacy of countermeasures over time and in different contexts.

 