The U.S. government is funding research on how to “correct false beliefs” held by Americans in partnership with “fact-checkers” due to the alleged spread of “misinformation” online, War Room can reveal.
Launched on July 7th, 2021, the grant from the National Science Foundation (NSF) is titled "How False Beliefs Form and How to Correct Them."
The project originally allocated $444,345 to Vanderbilt University's Associate Professor of Psychology and Human Development Lisa Fazio, but funding has since grown to $506,478 for the researcher.
“There is currently an urgent need to understand the real-world effects of misinformation on people's beliefs and how to best correct false beliefs,” explains a synopsis of the grant’s purpose on the NSF website.
“Through a series of laboratory and naturalistic experiments, the project team is examining the effects of repetition on belief in real-world settings and how to more effectively counter-act misinformation,” continues the summary of the project, which is set to conclude in 2024.
“By examining these basic psychological processes in the primary domain within which they affect daily life – misinformation on social media – this work will have implications for real-world practices aimed at reducing the impact of misinformation.”
The research will “inform real-world practices aimed at reducing the impact of misinformation,” and the NSF notes that “fact-checking practitioners are consulted to help guide the research, and results will be discussed with them.”
The NSF, however, does not identify any of its fact-checking partners, which are notoriously rife with left-wing bias.
Fazio, the Principal Investigator on the project, also notes in her professional bio that her “research informs basic theories about learning and memory, while also having clear applications for practitioners, such as journalists and teachers.”
The government-funded research will also “leverage core principles of cognitive psychology” in a “series of studies investigat[ing] how to best correct false beliefs.”
“Using predictions derived from existing theories within memory, language, linguistics and communications, the project is testing various design features hypothesized to improve the effectiveness of misinformation debunking strategies. Findings will reveal the cognitive mechanisms underlying successful misinformation debunking, and how fact-checkers should best present their findings,” explains the project summary.
“Overall, the results will inform and constrain current theories of how beliefs form and can be changed.”
Two papers have been published since the launch of the grant: “Does wording matter? Examining the effect of phrasing on memory for negated political fact checks” and “The effects of repetition on belief in naturalistic settings.”
The first paper appears to deploy a rigorous scientific approach to understanding how best to craft “fact checks” that will resonate most strongly with social media users.
As its abstract explains:
“After encountering negated messages, people may remember the core claim while forgetting the negative evaluation. These memory errors are of particular concern for fact checks on social media, which often use brief affirmations or negations to help the public learn the truth behind questionable claims. Across three experiments, we examined whether these memory errors could be minimized by placing evaluations before the entire claim is stated (e.g., "No, X did not do Y, as A claims"), rather than after (e.g., "A claims X did Y. No, this is false"). Participants remembered whether fact-checked political claims were affirmed or negated immediately (Experiment 1) and 1 week later (Experiment 2). While participants began to forget these fact checks after 3 weeks, this forgetting was similar for before- and after-claim evaluations, contrary to our predictions (Experiment 3). These results suggest that there are multiple, equally memorable formats for communicating affirmations and negations.”
The half-million-dollar grant follows controversy over the federal government’s involvement in the censorship of alleged “misinformation” and “disinformation” – terms which appear to be molded to fit certain political narratives that are unfriendly towards powerful actors such as the White House, Chinese Communist Party, and World Economic Forum.