Study: The immediate risks of artificial intelligence are scarier than doomsday prophecies

A University of Zurich study of more than 10,000 participants in the UK and US finds that people fear the tangible, present-day harms of AI more than theoretical scenarios of humanity's doom.

A division between the immediate risks of artificial intelligence (left) and the future existential danger (right). Image prepared using DALL-E

Most people are far more concerned about the immediate harms of artificial intelligence—such as biases in automated systems, the spread of misinformation, and job losses—than they are about the theoretical threats to humanity’s existence in the distant future, according to a study by scientists at the University of Zurich of more than 10,000 participants in the United Kingdom and the United States. Even when subjects were presented with apocalyptic headlines describing extreme AI dangers, they remained focused on the concrete concerns of the day.

While there is broad agreement that AI poses significant challenges, there is variation in how people interpret and prefer to address the various threats. Some believe that we need to prepare for long-term, “existential” risks—for example, the possibility that AI will endanger the very existence of humanity—while others emphasize the immediate problems: the increase in prejudice, the spread of disinformation, and the loss of livelihoods due to automation.

To test these differences, a political science team at the University of Zurich conducted three large-scale online experiments involving more than 10,000 participants. The participants were divided into groups: some read headlines describing a distant existential risk, others read about current threats such as discrimination and misinformation, and a third group additionally read a positive account of the benefits of artificial intelligence. The researchers tested whether warnings of future disaster reduced awareness of the real problems of the day.

“The findings show that respondents are much more concerned about the immediate risks of AI than about tomorrow’s apocalyptic scenarios,” says Prof. Fabrizio Gilardi of the Department of Political Science at the University of Zurich. Even after reading texts about long-term threats, participants’ concern about concrete harms—such as biases in decision-making systems and the loss of jobs—remained significantly higher.

At the same time, the research shows that people can distinguish between theoretical dangers and tangible problems, and that they take both seriously.

Broad discussion of all risks


The study fills a significant knowledge gap: in public debate, some have criticized the focus on doomsday scenarios as a distraction from the problems of the day. But this is the first study to provide systematic data showing that alertness to immediate risks does not diminish even when future threats are highlighted.

“The discussion of long-term risks must not come at the expense of awareness of today’s dangers,” notes Associate Fellow Emmanuel Huss. Prof. Gilardi adds: “A parallel discussion of both immediate and future challenges is needed.”

Link to the scientific article


6 Comments

  1. The fact that "people" are afraid does not justify publishing an article... People do not understand what it is, and the media inflates the hype around the issue. They were also very afraid of the Y2K bug and invested billions in "preparations," for no reason.

  2. In the next article:
    Research shows that people are more afraid of the tornado heading their way now than of the impact of global warming in 50 years.

  3. It's interesting that there is such a fear of AI spreading misinformation when we are constantly inundated with misinformation by the media (the classical press, television and radio, and social networks), where the information is spread by humans.
