Conversations with artificial intelligence can successfully reduce belief in conspiracy theories

A new MIT study published in Science shows that artificial intelligence can change the minds of conspiracy theorists by presenting personalized evidence, and the effect persists for months

An advanced GPT-4 robot talks to a human about technology and information, debunking conspiracy theories using evidence-based science. The image was produced using the DALL-E artificial intelligence software

Have you ever tried to convince someone who believes in conspiracy theories that the moon landing was not staged? You probably failed, but ChatGPT may succeed where you did not, according to research conducted by Professor David Rand of the MIT Sloan School of Management and Professor of Psychology Thomas Costello of American University. The research was carried out during Costello's postdoctoral work at MIT.

In a new article, "Durably reducing conspiracy beliefs through dialogues with AI", published in the journal Science, the researchers show that large language models can effectively reduce people's belief in conspiracy theories, and that these reductions hold for at least two months. The finding offers new insight into the psychological mechanisms behind the phenomenon, as well as potential tools for combating the spread of conspiracy theories.

Get to the bottom of the conspiracy

Conspiracy theories – beliefs that certain events are the result of secret plots by powerful actors – have long been a fascinating and disturbing subject. Their resistance to counter-evidence has led to the conclusion that they fulfill a deep psychological need, making them impervious to facts and logic. According to the conventional view, once someone "falls down the rabbit hole", it is almost impossible to get them out.

But for Rand, Costello and their research partner, Professor Gordon Pennycook of Cornell University, who have spent years studying the spread of misinformation, that conclusion didn't ring true. They suspected there was a simpler explanation.

"We wonder if it's possible that people simply haven't been exposed to compelling evidence that disproves their theories," Rand explained. "Conspiracy theories come in a variety of forms - the details of the theory and the arguments that support it vary from believer to believer. If you try to disprove a conspiracy without having heard these particular arguments, you will not be ready to disprove them.”

In other words, to effectively debunk conspiracy theories, you need two things: personalized arguments and access to vast amounts of information – both of which are now available through generative artificial intelligence.

Conspiracy talks with GPT-4

To test their theory, Costello, Pennycook, and Rand used OpenAI's GPT-4 Turbo model, a state-of-the-art large language model, to conduct evidence-based, personalized conversations with more than 2,000 people who believe in conspiracy theories.

The study used a unique methodology that enabled deep engagement with participants' personal beliefs. Participants were first asked to identify and describe, in their own words, a conspiracy theory they believed in, along with the evidence that supported that belief.

GPT-4 Turbo used this information to create a personalized summary of the participant's belief and open the conversation. The AI was instructed to persuade participants that their belief was incorrect, adapting its strategy to each participant's particular arguments and evidence.

These conversations, which lasted 8.4 minutes on average, allowed the AI to directly address the evidence supporting each participant's conspiratorial beliefs, an approach that would have been impossible to test at scale before this technology was developed.
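To make the setup more concrete, here is a minimal sketch of how such a personalized debunking dialogue could be wired up with OpenAI's Python client. It is only an illustration of the approach described above, not the researchers' actual code: the system prompt, the gpt-4-turbo model name, the run_debunking_dialogue helper, and the fixed three-turn structure are assumptions made for this example.

```python
# Hypothetical sketch of the dialogue loop described above; not the study's code.
# The system prompt, model name, and turn structure are assumptions.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

SYSTEM_PROMPT = (
    "You are talking with someone who believes the conspiracy theory summarized below. "
    "Using accurate, verifiable evidence, respectfully try to persuade them that the "
    "theory is unsupported, responding directly to the specific arguments they raise."
)

def run_debunking_dialogue(belief_summary: str, turns: int = 3) -> None:
    """Run a short text conversation targeting one participant's stated belief."""
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"My belief and my evidence: {belief_summary}"},
    ]
    for _ in range(turns):
        reply = client.chat.completions.create(
            model="gpt-4-turbo",  # assumed model identifier
            messages=messages,
        )
        ai_text = reply.choices[0].message.content
        print(f"AI: {ai_text}\n")
        messages.append({"role": "assistant", "content": ai_text})
        user_text = input("Participant: ")  # in the study, replies came through a web interface
        messages.append({"role": "user", "content": user_text})

if __name__ == "__main__":
    run_debunking_dialogue(
        "The moon landing was staged; the flag appears to wave even though there is no air."
    )
```

Keeping the participant's own statement of belief and evidence in the message history is what lets the model tailor its counter-evidence, which is the core idea the article describes.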

A significant and lasting effect

The results of the intervention were surprising. On average, the conversations with the artificial intelligence reduced participants' belief in their conspiracy theory by about 20%, and roughly one in four participants who had believed the theory beforehand no longer believed it after the conversation. The effect persisted even two months after the conversation.

The impact of the conversations was not limited to any particular kind of conspiracy theory. The intervention successfully challenged beliefs on a wide range of topics, including conspiracies tied to sensitive political and social issues, such as those concerning COVID-19 and alleged fraud in the 2020 United States presidential election.

The intervention was less successful among participants who reported that the theory was central to their worldview, but even among them it had some effect, and the effect varied little across demographic groups.

Notably, the impact of the conversations with the artificial intelligence was not limited to changes in belief. Participants also reported changes in their behavioral intentions regarding conspiracy theories: a greater willingness to "unfollow" people who spread conspiracy theories online, as well as a greater willingness to engage in conversations that challenge those beliefs.

The opportunities and dangers of artificial intelligence

The researchers stress the need for continued responsible use of artificial intelligence, since the technology could be used not only to persuade people to give up conspiracy beliefs, but also to persuade them to adopt such beliefs.

However, the potential for positive applications of artificial intelligence to reduce belief in conspiracies is significant. For example, artificial intelligence tools can be incorporated into search engines to offer accurate information to users searching for terms related to conspiracy theories.

"This study shows that evidence is much more important than we thought, as long as it is related to people's beliefs," Penny-Cook said. "This has implications far beyond conspiracy theories: any number of beliefs based on bad evidence could, theoretically, be undermined using this approach."

Beyond the study's specific findings, its methodology also highlights how large language models can transform social science research. Costello noted that the researchers used GPT-4 Turbo not only to conduct the conversations, but also to screen participants and analyze the data.

"Psychological research used to depend on graduate students interviewing or conducting interventions on other students, which limited capacity," Costello said. "Then we moved to online survey and interview platforms that enabled scale, but took away the nuances. The use of artificial intelligence allows us to enjoy both worlds."

These findings challenge the idea that conspiracy theorists are beyond the reach of reason. Instead, they show that many are open to changing their views when presented with compelling, personal evidence.

"Before we had access to artificial intelligence, research on conspiracies was mostly observational and correlational, leading to theories that conspiracy theories fulfill psychological needs," Costello said. "Our explanation is simpler - many times, people simply didn't have the right information."

In addition, members of the general public who are interested in this research are welcome to visit the site and try it for themselves.

For the scientific article

Comments

  1. To Tomer: Whoever wrote the article is not responsible for creating conspiracies. On the contrary: he presents ways to fight them. Don't blame him for something that is beyond his control.

    About the article: Artificial intelligence relies on what most people in the world believe and on the knowledge they have, and that is the truth from its point of view. Yes, it is right, but only if the world is right. If the world is wrong, no artificial intelligence will help, because then the truth is not known to anyone and the AI has no truth to rely on. So even if someone holds a conspiracy theory that happens to be true, it will be "disproved", because from the AI's point of view it is a lie.

  2. Looking for ways to combat the spread of conspiracy theories
    =========
    You really have no limit to your nerve? No limit?
    Do you want people to believe that there is no conspiracy?
    Then don't make conspiracies!!!
    When people see how much the media lies, how many studies and surveys are commissioned by interested parties,
    and that it is all engineering of public opinion for financial gain or other hidden motives,
    why are you surprised that in the end many believe in conspiracies???

    Once there are so many real conspiracies, people end up believing in many things that
    really are conspiracies, and also in things that are just false conspiracy theories.

    ** There is a difference between "delusional" and "a lie".
    By the way, even things that are considered "weird - it can't be true" are not necessarily untrue. Once you understand the real situation, you realize that many things people thought could not be are in fact real and true.
