
Science Bibliographies Online

How and why does false information spread online?

[Illustration: fake-news/bots image, credit Gary Waters / Getty Images / Ikon Images]

How does false information (“fake news”) spread through social media and websites? And why does it?

We are told, frequently, both by legitimate, trusted sources and by the very people and organizations who create disinformation, that a significant proportion of what we view online is fake: deliberately created to sow discord and distrust, to make us disbelieve, and to tear us apart.

So, do we trust no one? Is the biggest problem the sowing of doubt?

Quick bibliography: What does research tell us about how and why false information spreads online?

**updated November 2021**

*Broniatowski, D. A., Jamison, A. M., Qi, S., AlKulaib, L., Chen, T., Benton, A., . . . Dredze, M. (2018). Weaponized health communication: Twitter bots and Russian trolls amplify the vaccine debate. American Journal of Public Health, 108(10), 1378-1384. [PDF] [Cited by]

“Objectives: To understand how Twitter bots and trolls (“bots”) promote online health content.

Methods: We compared bots’ to average users’ rates of vaccine-relevant messages, which we collected online from July 2014 through September 2017. We estimated the likelihood that users were bots, comparing proportions of polarized and antivaccine tweets across user types. We conducted a content analysis of a Twitter hashtag associated with Russian troll activity.

Results: Compared with average users, Russian trolls (χ²(1) = 102.0; P < .001), sophisticated bots (χ²(1) = 28.6; P < .001), and “content polluters” (χ²(1) = 7.0; P < .001) tweeted about vaccination at higher rates. Whereas content polluters posted more antivaccine content (χ²(1) = 11.18; P < .001), Russian trolls amplified both sides. Unidentifiable accounts were more polarized (χ²(1) = 12.1; P < .001) and antivaccine (χ²(1) = 35.9; P < .001). Analysis of the Russian troll hashtag showed that its messages were more political and divisive.

Conclusions: Whereas bots that spread malware and unsolicited content disseminated antivaccine messages, Russian trolls promoted discord. Accounts masquerading as legitimate users create false equivalency, eroding public consensus on vaccination.

Public Health Implications: Directly confronting vaccine skeptics enables bots to legitimize the vaccine debate. More research is needed to determine how best to combat bot-driven content.”
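(An aside, not part of the quoted abstract.) The χ²(1) statistics reported above are ordinary chi-square tests on contingency tables of tweet counts by account type. As a minimal sketch of that kind of comparison in Python with scipy, the counts below are invented placeholders, not data from the study:

```python
# Illustrative only: a 2x2 chi-square test of the kind reported by
# Broniatowski et al. (2018). Counts are made up for the example.
from scipy.stats import chi2_contingency

# Rows: account type; columns: (vaccine-related tweets, other tweets)
table = [
    [120, 880],   # hypothetical troll/bot accounts
    [ 40, 960],   # hypothetical average users
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.1f}, p = {p:.4f}")
```

A larger chi-square with a small p-value simply indicates that the two account types tweet about vaccination at detectably different rates.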

*Buchanan, T. (2020). Why do people spread false information online? The effects of message and viewer characteristics on self-reported likelihood of sharing social media disinformation. PLoS One, 15(10), e0239666. [PDF] [Cited by]

“Individuals who encounter false information on social media may actively spread it further, by sharing or otherwise engaging with it. Much of the spread of disinformation can thus be attributed to human action. Four studies (total N = 2,634) explored the effect of message attributes (authoritativeness of source, consensus indicators), viewer characteristics (digital literacy, personality, and demographic variables) and their interaction (consistency between message and recipient beliefs) on self-reported likelihood of spreading examples of disinformation. Participants also reported whether they had shared real-world disinformation in the past. Reported likelihood of sharing was not influenced by authoritativeness of the source of the material, nor indicators of how many other people had previously engaged with it. Participants’ level of digital literacy had little effect on their responses. The people reporting the greatest likelihood of sharing disinformation were those who thought it likely to be true, or who had pre-existing attitudes consistent with it. They were likely to have previous familiarity with the materials. Across the four studies, personality (lower Agreeableness and Conscientiousness, higher Extraversion and Neuroticism) and demographic variables (male gender, lower age and lower education) were weakly and inconsistently associated with self-reported likelihood of sharing. These findings have implications for strategies more or less likely to work in countering disinformation in social media.”

*Effron, D. A., & Raj, M. (2020). Misinformation and morality: Encountering fake-news headlines makes them seem less unethical to publish and share. Psychological Science, 31(1), 75-87. [Cited by]

“People may repeatedly encounter the same misinformation when it “goes viral.” The results of four main experiments (two preregistered) and a pilot experiment (total N = 2,587) suggest that repeatedly encountering misinformation makes it seem less unethical to spread—regardless of whether one believes it. Seeing a fake-news headline one or four times reduced how unethical participants thought it was to publish and share that headline when they saw it again—even when it was clearly labeled as false and participants disbelieved it, and even after we statistically accounted for judgments of how likeable and popular it was. In turn, perceiving the headline as less unethical predicted stronger inclinations to express approval of it online. People were also more likely to actually share repeated headlines than to share new headlines in an experimental setting. We speculate that repeating blatant misinformation may reduce the moral condemnation it receives by making it feel intuitively true, and we discuss other potential mechanisms that might explain this effect.”

*Bennett, W. L., & Livingston, S. (2018). The disinformation order: Disruptive communication and the decline of democratic institutions. European Journal of Communication, 33(2), 122-139. [Cited by]

“Many democratic nations are experiencing increased levels of false information circulating through social media and political websites that mimic journalism formats. In many cases, this disinformation is associated with the efforts of movements and parties on the radical right to mobilize supporters against centre parties and the mainstream press that carries their messages. The spread of disinformation can be traced to growing legitimacy problems in many democracies. Declining citizen confidence in institutions undermines the credibility of official information in the news and opens publics to alternative information sources. Those sources are often associated with both nationalist (primarily radical right) and foreign (commonly Russian) strategies to undermine institutional legitimacy and destabilize centre parties, governments and elections. The Brexit campaign in the United Kingdom and the election of Donald Trump in the United States are among the most prominent examples of disinformation campaigns intended to disrupt normal democratic order, but many other nations display signs of disinformation and democratic disruption. The origins of these problems and their implications for political communication research are explored.”

*Pennycook, G., & Rand, D. G. (2019). Fighting misinformation on social media using crowdsourced judgments of news source quality. Proceedings of the National Academy of Sciences of the United States of America, 116(7), 2521-2526. [PDF] [Cited by]

“Reducing the spread of misinformation, especially on social media, is a major challenge. We investigate one potential approach: having social media platform algorithms preferentially display content from news sources that users rate as trustworthy. To do so, we ask whether crowdsourced trust ratings can effectively differentiate more versus less reliable sources. We ran two preregistered experiments (n = 1,010 from Mechanical Turk and n = 970 from Lucid) where individuals rated familiarity with, and trust in, 60 news sources from three categories: (i) mainstream media outlets, (ii) hyperpartisan websites, and (iii) websites that produce blatantly false content (“fake news”). Despite substantial partisan differences, we find that laypeople across the political spectrum rated mainstream sources as far more trustworthy than either hyperpartisan or fake news sources. Although this difference was larger for Democrats than Republicans—mostly due to distrust of mainstream sources by Republicans—every mainstream source (with one exception) was rated as more trustworthy than every hyperpartisan or fake news source across both studies when equally weighting ratings of Democrats and Republicans. Furthermore, politically balanced layperson ratings were strongly correlated (r = 0.90) with ratings provided by professional fact-checkers. We also found that, particularly among liberals, individuals higher in cognitive reflection were better able to discern between low- and high-quality sources. Finally, we found that excluding ratings from participants who were not familiar with a given news source dramatically reduced the effectiveness of the crowd. Our findings indicate that having algorithms up-rank content from trusted media outlets may be a promising approach for fighting the spread of misinformation on social media.”
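(An aside, not part of the quoted abstract.) The r = 0.90 figure above is a Pearson correlation between crowd-sourced and professional trust ratings of the same news sources. A minimal sketch of that computation in Python with scipy might look like the following; the per-source ratings here are invented placeholders, not data from the paper:

```python
# Illustrative only: correlating politically balanced layperson trust
# ratings with fact-checker ratings per news source, in the spirit of
# Pennycook & Rand (2019). Ratings below are made-up placeholders.
from scipy.stats import pearsonr

layperson_trust  = [4.2, 3.9, 3.8, 2.1, 1.9, 1.5, 1.2, 1.1]  # mean crowd rating per source
factchecker_trust = [4.5, 4.1, 3.7, 2.0, 1.8, 1.3, 1.4, 1.0]  # fact-checker rating per source

r, p = pearsonr(layperson_trust, factchecker_trust)
print(f"r = {r:.2f}, p = {p:.4f}")
```

A correlation near 1 indicates that sources the crowd trusts are largely the same sources professional fact-checkers trust, which is what the paper reports on its real data.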

*Shao, C., Ciampaglia, G. L., Varol, O., Yang, K., Flammini, A., & Menczer, F. (2018). The spread of low-credibility content by social bots. Nature Communications, 9(1), 4787. [PDF] [Cited by]

“The massive spread of digital misinformation has been identified as a major threat to democracies. Communication, cognitive, social, and computer scientists are studying the complex causes for the viral diffusion of misinformation, while online platforms are beginning to deploy countermeasures. Little systematic, data-based evidence has been published to guide these efforts. Here we analyze 14 million messages spreading 400 thousand articles on Twitter during ten months in 2016 and 2017. We find evidence that social bots played a disproportionate role in spreading articles from low-credibility sources. Bots amplify such content in the early spreading moments, before an article goes viral. They also target users with many followers through replies and mentions. Humans are vulnerable to this manipulation, resharing content posted by bots. Successful low-credibility sources are heavily supported by social bots. These results suggest that curbing social bots may be an effective strategy for mitigating the spread of online misinformation.”

Questions? Please let me know (engelk@grinnell.edu).



This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

Copyright 1999-2024 Kevin R. Engel · IA 50309 · United States engelk@grinnell.edu