If we are going to label artificial intelligence an extinction risk, we need to clarify how that could happen

Along with its many benefits, this technology comes with risks that we need to take seriously. But none of the scenarios put forward so far provides a specific roadmap to extinction, argues an artificial intelligence researcher from the UK

By Nello Cristianini, Professor of Artificial Intelligence, University of Bath.

Nello Cristianini is the author of The Shortcut: Why Intelligent Machines Don't Think Like Us (CRC Press, 2023).

Will artificial intelligence cause the extinction of the human race? Image created with DALL-E 2. Prompt: Avi Blizovsky

This week a group of well-known and reputable AI researchers signed a 22-word statement (22 words in English; 18 in the Hebrew translation, A.B.):

"Reducing the risk of extinction from artificial intelligence should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

As a professor of artificial intelligence, I am also in favor of reducing any risk, and am willing to work for it personally. But any statement worded in this way is bound to create panic, so its authors should probably be more specific and clarify their concerns.

As defined in the Encyclopaedia Britannica, extinction is the "dying out or extermination of a species." I have met many of the signatories of the declaration, who are among the most respected and solid scientists in the field, and their intentions are certainly good. However, they did not provide us with a tangible scenario of how such an extreme event could occur.

This is not the first such statement. On March 22 of this year, a petition signed by a different group of entrepreneurs and researchers called for a six-month pause on the deployment of AI. In a letter published on the website of the Future of Life Institute, they detailed their reasons: "profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs" - and attached to their request a list of rhetorical questions:

Should we let the machines flood our information channels with propaganda and lies? Should we automate all jobs? Should we develop non-human minds that may eventually outnumber us, trick us, make us obsolete, and replace us? Should we risk losing control of our civilization?

Generic sense of alarm

It is certainly true that along with many advantages, this technology comes with risks that we have to take seriously. But none of the above scenarios seem to provide a specific roadmap to extinction. This means we are left with a general sense of alarm, without any possible action we can take.

The website of the Center for AI Safety, where the latest statement appeared, describes eight broad categories of risk in a separate section. These include the weaponization of AI, its use to manipulate news systems, the possibility that humans will eventually become unable to govern themselves, the aiding of oppressive regimes, and so on.

Except for weaponization, it is not clear how the other risks - terrible as they are - could lead to the extinction of our species, and the burden of spelling this out lies with those who claim it.

AI weaponry is a real concern, of course, but what is meant by that also needs to be clarified. The Center for AI Safety's main concern appears to be the use of AI systems to design chemical weapons. This should be prevented at all costs - but chemical weapons are already prohibited. Extinction is a very specific event that requires very specific explanations.

On May 16, at his Senate hearing, Sam Altman, CEO of OpenAI – which developed the ChatGPT AI chatbot – was asked twice to detail his worst-case scenario. He finally replied:

"My biggest fear is that we - the field, the technology, the industry - cause significant harm to the world... That's why we started the company [to prevent this future]... I think if this technology goes wrong, it can go quite wrong."

But while I agree that we should be as careful as possible, and I've been saying this publicly for the past ten years, it's important to maintain proportion - especially when discussing the extinction of a species of eight billion individuals.

Artificial intelligence may create social problems that need to be prevented. As scientists, it is our duty to understand them and then do our best to solve them. But the first step is to name them and describe them - and be specific.

For the original article in The Conversation

7 comments

  1. A simple scenario: a single terrorist who hates humanity, or sees it as a threat to the Earth, uses the knowledge accumulated in a language model to develop a biological weapon for the total destruction of humanity, returning the globe to a pre-human era.

    Even today this scenario is possible, but not for an individual, because obtaining all the knowledge for such a project requires collaboration.
    But given an all-knowing LLM model, even a single person could do it.

    Imagine an 18-year-old boy doing this instead of opening fire and spraying his school with bullets.

  2. Evolution has shown that wisdom is not exactly a trait that survives, so it seems to me that the sheer amount of human stupidity will allow us to survive forever.

  3. I don't understand the article, because there is already a lively discussion (in English) that presents the concrete dangers, including extreme ones. For example, Eliezer Yudkowsky, who presents detailed and scholarly explanations. These discussions will continue to expand in the coming years.

  4. Artificial intelligence systems carry enormous advantages as well as risks.
    Another danger is that, due to excessive hysteria, the development of artificial intelligence systems will be inhibited, and the result will be a weakening of humanity's ability to solve the very difficult problems that exist today.
    Despite enormous human progress in medicine, we are still deep in the Middle Ages in many areas.
    A short "tour" through hospital departments such as oncology, and not only there, brings us to the edge of human knowledge, and with it suffering, pain, and human loss at the highest levels. Bringing solutions forward even by a decade would save so many people - fathers, mothers, children. There is enormous pain; people will die; so many worlds are broken, and this needs to be resolved as soon as possible. Not far from me there is a mother I know who is in her last moments, and her children are parting from her. That is what this is about: we want to eliminate this suffering from the world.
    It is clear that there are both risks of various types and benefits; this could fill a book.
    The potential horror scenarios lie more in the long run.
    Though perhaps more likely are degeneration scenarios, in which we find ourselves drawn only to pleasure, a kind of bread and circuses,
    with non-human partners whose appearance and demeanor are tailored to each one specifically.

  5. The fact that artificial intelligence succeeds in penetrating and serving the public shows that human creative abilities are becoming weaker and lazier, and that humanity therefore wants to rely on artificial intelligence.
    Artists certainly suffer from this the most, because in my opinion a person's intellectual capacity comes mainly from the depth of their feelings and their perception of the human universe, not from the sum of how ingenious humanity has been to this day, which mainly produces clichés.
    At this rate, in a few more years artists will be considered witches or wizards...
    One can only hope that no dictatorship is hiding behind human reason.

  6. The title of the article promises an answer (or at least a discussion or hypothesis) to the question "How can artificial intelligence cause extinction?". Toward the end of the article comes the passage:
    'On May 16, at his Senate hearing, Sam Altman, CEO of OpenAI - which developed the ChatGPT AI chatbot - was asked twice to detail his worst-case scenario. He finally replied:
    "My biggest fear is that we - the field, the technology, the industry - cause significant harm to the world... That's why we started the company [to prevent this future]... I think if this technology goes wrong, it can go quite wrong."'
    The author fell into the well-known trap of a mismatch between the title (even one chosen by the author) and the content of the article.
