
Science Bibliographies Online

Mental models: misinformation and truth

“A mental model is a form of mental representation for mechanical–causal domains that affords explanations for these domains. Mental models contain mental representations of objects in space and the causal relations among the objects. The structure of the mental representation corresponds to the structure of the world. This analogical relation allows the mental model to make successful predictions about events in the world.
The term ‘mental model’ is used to refer both to general model-based knowledge in long-term memory and to temporary specific mental representations constructed in the course of understanding particular events in the world. The term ‘mental model’ is also used to refer to mental representations of static spatial domains. In this usage, the term is typically restricted to the specific mental representations that are used to represent unfamiliar spatial arrays. These spatial mental models have an analogical relation with the spatial information in the external world and are typically experienced in terms of visual mental images. This usage of the term is also applied to the case where abstract logical problems are solved by converting them to spatial model-based formats” [Brewer, W. F. (2005). Mental models. In L. Nadel, Encyclopedia of cognitive science. Wiley.].
What is the connection between mental models and misinformation/disinformation? How do the models we create to help us navigate our worlds handle the rampant misinformation and disinformation so often found on major social media platforms?
What does the research say?
**created December 2022**
*Ecker, U. K. H., Butler, L. H., & Hamby, A. (2020). You don’t have to tell a story! A registered report testing the effectiveness of narrative versus non-narrative misinformation corrections. Cognitive Research: Principles and Implications, 5(1), 64. [PDF] [Cited by]
“Misinformation often has an ongoing effect on people’s memory and inferential reasoning even after clear corrections are provided; this is known as the continued influence effect. In pursuit of more effective corrections, one factor that has not yet been investigated systematically is the narrative versus non-narrative format of the correction. Some scholars have suggested that a narrative format facilitates comprehension and retention of complex information and may serve to overcome resistance to worldview-dissonant corrections. It is, therefore, a possibility that misinformation corrections are more effective if they are presented in a narrative format versus a non-narrative format. The present study tests this possibility. We designed corrections that are either narrative or non-narrative, while minimizing differences in informativeness. We compared narrative and non-narrative corrections in three preregistered experiments (total N = 2279). Experiment 1 targeted misinformation contained in fictional event reports; Experiment 2 used false claims commonly encountered in the real world; Experiment 3 used real-world false claims that are controversial, in order to test the notion that a narrative format may facilitate corrective updating primarily when it serves to reduce resistance to correction. In all experiments, we also manipulated test delay (immediate vs. 2 days), as any potential benefit of the narrative format may only arise in the short term (if the story format aids primarily with initial comprehension and updating of the relevant mental model) or after a delay (if the story format aids primarily with later correction retrieval). In all three experiments, it was found that narrative corrections are no more effective than non-narrative corrections. Therefore, while stories and anecdotes can be powerful, there is no fundamental benefit of using a narrative format when debunking misinformation.”

*Hamby, A., Ecker, U., & Brinberg, D. (2020). How stories in memory perpetuate the continued influence of false information. Journal of Consumer Psychology, 30(2), 240-259. [PDF] [Cited by]

“People often encounter information that they subsequently learn is false. Past research has shown that people sometimes continue to use this misinformation in their reasoning, even if they remember that the information is false, which researchers refer to as the continued influence effect. The current work shows that the continued influence effect depends on the stories people have in memory: corrected misinformation was found to have a stronger effect on people’s beliefs than information that was topically related to the story if it helped to provide a causal explanation of a story they had read previously. We argue this effect occurs because information that can fill a causal “gap” in a story enhances comprehension of the story event, which allows people to build a complete (if inaccurate) event model that they prefer over an accurate but incomplete event model. This effect is less likely to occur for stories in memory that end in a negative way, presumably because people are more motivated to accurately understand negative outcome events.”

“How to combat the spread of misinformation on social media is a long-standing issue in the academic and practical fields, but creating effective correction strategies remains a challenge. Moreover, why people use social media has not been considered in understanding the effects of correction on misperception. Building on existing research, the current study examines two agendas: (a) whether different conditions of correction – no correction, web add-on correction and narrative correction – affect misinformation believability and (b) how different motivations of using social media – receiving news and interaction with other users – moderate the effects of correction types on misperception. The online experiment (N = 171) notes several key findings. Web add-on correction was effective in decreasing belief in misinformation. For those who use social media for social interaction, narrative correction was effective in reducing misperception. These findings revisit the effects of different correction types on beliefs in misinformation by emphasising the features of social media users.”

*Susmann, M. W., & Wegener, D. T. (2022). The role of discomfort in the continued influence effect of misinformation. Memory & Cognition, 50(2), 435-448. [PDF] [Cited by]

“Research examining the continued influence effect (CIE) of misinformation has reliably found that belief in misinformation persists even after the misinformation has been retracted. However, much remains to be learned about the psychological mechanisms responsible for this phenomenon. Most theorizing in this domain has focused on cognitive mechanisms. Yet some proposed cognitive explanations provide reason to believe that motivational mechanisms might also play a role. The present research tested the prediction that retractions of misinformation produce feelings of psychological discomfort that motivate one to disregard the retraction to reduce this discomfort. Studies 1 and 2 found that retractions of misinformation elicit psychological discomfort, and this discomfort predicts continued belief in and use of misinformation. Study 3 showed that the relations between discomfort and continued belief in and use of misinformation are causal in nature by manipulating how participants appraised the meaning of discomfort. These findings suggest that discomfort could play a key mechanistic role in the CIE, and that changing how people interpret this discomfort can make retractions more effective at reducing continued belief in misinformation.”

“A meta-analysis was conducted to examine the extent of continued influence of misinformation in the face of correction and the theoretical explanations of this phenomenon. Aggregation of results from 32 studies (N = 6,527) revealed that, on average, correction does not entirely eliminate the effect of misinformation (r = –.05, p = .045). Corrective messages were found to be more successful when they are coherent, consistent with the audience’s worldview, and delivered by the source of the misinformation itself. Corrections are less effective if the misinformation was attributed to a credible source, the misinformation has been repeated multiple times prior to correction, or when there was a time lag between the delivery of the misinformation and the correction. These findings are consistent with predictions based on theories of mental models and offer concrete recommendations for practitioners.”

Questions? Please let me know.





Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

Copyright 1999-2024 Kevin R. Engel · IA 50309 · United States