
SORRY TO BREAK IT TO YOU, TWITTER!  LABELING
POSTS AS FALSE, MISLEADING, OR DISPUTED
DOESN'T SEEM TO HELP, AND MAY EVEN HURT

In May 2020, Twitter finally got serious about placing fact-check labels on certain posts, as well as linking others to accurate information.  This included posts made by the sitting president of the United States.

One study, by Pennycook and colleagues (full citation under Evidence below), found that:

Warnings may be rendered ineffective by politically motivated reasoning, whereby people are biased against believing information that challenges their political ideology.  Indeed, such warnings might actually backfire and increase belief.

Beyond the potential for warnings to backfire, there is an additional (and potentially more serious) concern regarding misinformation warnings, which the researchers call the Implied Truth Effect.  When attempting to fight misinformation using warnings, some third party must examine every new piece of information and either verify or dispute it.  Given that it is much easier to produce misinformation than to assess its accuracy, it is almost certain that only a fraction of all misinformation will ever be tagged with warnings.  The absence of a warning is therefore ambiguous: does it simply mean that the headline in question has not yet been checked, or does it imply that the headline has been verified (which should increase its perceived accuracy)?  To the extent that people draw the latter inference, tagging some false news headlines will have the unintended side effect of causing untagged headlines to be viewed as more accurate.  Such an Implied Truth Effect, combined with the near impossibility of fact-checking all (or even most) headlines, could pose an important challenge for attempts to combat misinformation using warnings.
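
To see why partial tagging could backfire in the aggregate, consider a minimal back-of-the-envelope simulation in Python.  Every number in it (the tag rate, the baseline belief, the size of each effect) is a hypothetical illustration, not an estimate from either paper; it simply shows that when only a small fraction of false headlines gets tagged, even a modest "no tag means verified" inference about the untagged majority can outweigh the benefit of the tags:

    # Toy model of the Implied Truth Effect described above.
    # All parameters are hypothetical, not estimates from the studies.
    import random

    random.seed(0)

    N_FALSE = 10_000       # false headlines in circulation
    TAG_RATE = 0.2         # fraction fact-checkers manage to tag
    BASE_BELIEF = 0.40     # baseline chance a reader believes a false headline
    TAG_EFFECT = -0.15     # a warning tag lowers belief (the intended effect)
    IMPLIED_TRUTH = 0.10   # "no tag = verified" inference raises belief

    tagged = [random.random() < TAG_RATE for _ in range(N_FALSE)]

    def belief(is_tagged, readers_infer_verification):
        # Probability that a reader believes one false headline.
        if is_tagged:
            return BASE_BELIEF + TAG_EFFECT
        if readers_infer_verification:
            return BASE_BELIEF + IMPLIED_TRUTH
        return BASE_BELIEF

    for infer in (False, True):
        mean = sum(belief(t, infer) for t in tagged) / N_FALSE
        print(f"implied-truth inference={infer}: "
              f"mean belief in false headlines = {mean:.3f}")

With these made-up numbers, the tags alone cut mean belief in false headlines from 0.40 to about 0.37, but once readers treat the untagged 80 percent as verified, mean belief climbs to roughly 0.45, a net loss for the labeling program.  Whether real readers actually draw that inference, and how strongly, is exactly what the study set out to measure.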

Another study, by Clayton and colleagues (full citation under Evidence below), was a little more optimistic, but still found that:

Both “Disputed” and “Rated false” tags modestly reduce belief in false news.  Notably, the researchers found larger accuracy effects for the “Disputed” tags than Pennycook and Rand 2017 (the study above) did.  However, their results demonstrate that “Rated false” tags, which specifically tell users when claims made in headlines are untrue, are more effective at reducing belief in misinformation than the “Disputed” tags previously used by Facebook.  Encouragingly, they find no consistent evidence that the effects of these tags vary with the political congeniality of the headlines, or that exposure to the tags increases the perceived accuracy of unlabeled false headlines (though their study lacks the precision necessary to detect the small “implied truth” effect that Pennycook and Rand 2017 identify).

By contrast, though general warnings about false news also appear to decrease belief in false headlines, the effect of a general warning is small compared to either type of tag. Moreover, general warnings also reduce belief in real news and do not enhance the effects of the “Rated false” and “Disputed” tags, suggesting that they are a less effective approach.

The results provide support for prior studies finding a negative effect of general warnings on belief in misinformation, but the finding that these warnings also reduce the perceived accuracy of true headlines suggests that they pose a potential hazard.  False news may already increase distrust in legitimate information; unintended spillover effects from general warnings, or from related proposals to fight false information by increasing media literacy, could exacerbate this problem.  The researchers' “Disputed” and “Rated false” tags, which more effectively reduce the perceived accuracy of false headlines without causing these spillover effects, may be a safer way to reduce belief in misinformation.

Evidence:

Gordon Pennycook, Adam Bear, Evan T. Collins, and David G. Rand.  "The Implied Truth Effect: Attaching Warnings to a Subset of Fake News Headlines Increases Perceived Accuracy of Headlines Without Warnings."  Management Science, 2020.

Katherine Clayton, Spencer Blair, Jonathan A. Busam, Samuel Forstner, John Glance, Guy Green, Anna Kawata, Akhila Kovvuri, Jonathan Martin, Evan Morgan, Morgan Sandhu, Rachel Sang, Rachel Scholz-Bright, Austin T. Welch, Andrew G. Wolff, Amanda Zhou, and Brendan Nyhan.  "Real Solutions for Fake News?  Measuring the Effectiveness of General Warnings and Fact-Check Tags in Reducing Belief in False Stories on Social Media."  Political Behavior, 2020.
