
Stephen Hawking's swan song - a farewell to humanity

The renowned astrophysicist said goodbye to the world this month, but before departing, the English theoretician left us with the insight that "artificial intelligence will be imbued with goals - and if those goals do not align with humanity's, it will simply eliminate it"

Prof. Stephen Hawking in a lecture at NASA, 2008. Photo: NASA. From Wikipedia

If the world is divided into supporters of the AI revolution and its opponents, there is no doubt that Stephen Hawking - who passed away this month - falls into the camp of artificial intelligence critics. The picture is more complex, however, because his informed criticism points to specific dangers that await us - and to ways we might try to prevent them.

In recent years Hawking was quoted several times speaking out against general artificial intelligence - the direction toward which much of the effort in developing AI systems is aimed. According to him, superintelligence could mark the end of humanity: the AI will be imbued with goals, and if those goals do not align with humanity's, it will simply eliminate us.

"You're probably not an evil ant-hater who steps on ants out of malice, but if you're in charge of a project to build a dam to generate electricity when an ant nest in the area is about to be flooded - pity the ants," he wrote in 2015 on Reddit. "Let's not put humanity in the situation of these ants."

AI: Foe or Friend?

But it is not only robots that pose a danger to humanity; so do the humans who will use AI as a tool to suppress the rest of the population and deepen inequality, and Hawking also warned against AI-based weapons systems. In any case, an AI system is not malicious by definition, but its use may be malicious if its power cannot be controlled effectively.

But at the same time, Hawking was optimistic about the benefits that artificial intelligence could bring to humanity if we build it properly. He suggested developing best practices that would lead to the creation of useful artificial intelligence.

Another important element of Hawking's criticism of AI is a healthy skepticism towards those who predict the arrival of superintelligence in a certain time frame. The most prominent of them is, of course, Ray Kurzweil, known for the singularity theory. "There is no consensus among AI researchers on how long it will take to build human-level AI and beyond, so please don't trust anyone who claims with certainty that it will happen in your lifetime or it won't happen in your lifetime," he wrote.

Prof. Stephen Hawking in a plane that allows training in zero-gravity conditions. Photo: NASA

One of Hawking's last public appearances was at the Web Summit held in Lisbon at the end of 2017, where he chose to dedicate his entire lecture to artificial intelligence.

Fight humanity and destroy it

Already today, he explained, "we are concerned that smart machines will be able to take over jobs currently done by humans and destroy the lives of millions of people. Artificial intelligence may decide to fight humanity and destroy us."

What is the answer? According to Hawking, "Artificial intelligence must be controlled and made to work for us, and to reduce the risks as much as possible we need to put in place effective governance mechanisms, drawing on experience from every field involved in the development of artificial intelligence. It is clear that every sector of the economy will have to incorporate measures to prevent harm, but for AI this is absolutely essential."

It is important to understand that Hawking was not a technophobe. In his final years he was a partner in the Breakthrough Starshot initiative, together with billionaires Yuri Milner and Mark Zuckerberg.

The idea of the $100 million research and engineering program is to demonstrate the ability to use beams of light to propel tiny "nanocraft" - in the first stage to a speed of 20% of the speed of light. Such a mission could reach Alpha Centauri within about 20 years of launch.

The idea borrows developments from the miniaturization of computer systems, and would make it possible to send thousands of spacecraft the size of a mobile phone that could navigate their way to Alpha Centauri, photograph the planets of the neighboring system, and transmit the images back to Earth - all in a time frame similar to that of missions within the solar system, such as New Horizons, which needed nine years to reach Pluto, or spacecraft that orbited Saturn and its moons for years, such as Cassini, whose mission lasted some 20 years.
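
For a rough sense of the numbers, here is a minimal back-of-the-envelope sketch in Python. The ~4.37-light-year distance to Alpha Centauri and the assumption of a constant cruise speed (the acceleration phase is ignored) are assumptions of this sketch, not figures from the article:

    # Rough travel-time estimate for a Starshot-style probe.
    # Assumed figures (not from the article): Alpha Centauri is ~4.37
    # light-years away; the probe cruises at a constant 20% of light speed.
    DISTANCE_LY = 4.37      # distance to Alpha Centauri, in light-years (assumed)
    CRUISE_SPEED_C = 0.20   # cruise speed as a fraction of the speed of light

    cruise_years = DISTANCE_LY / CRUISE_SPEED_C   # ~21.9 years in flight
    signal_return_years = DISTANCE_LY             # the radio signal travels at c

    print(f"Cruise time:         {cruise_years:.1f} years")
    print(f"Signal return time:  {signal_return_years:.1f} years")
    print(f"Total to first data: {cruise_years + signal_return_years:.1f} years")

The arithmetic shows where the roughly 20-year flight-time figure comes from, and why the first images would arrive on Earth only a few years after that.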


8 comments

  1. To all those who are afraid that artificial intelligence will destroy humans... sleep in peace; the only one who can destroy man is man himself.

    Artificial intelligence is very far from the level of human consciousness.

    It is limited in its perception by the software that was written for it,

    and it will help and advance humanity - unless malicious programmers give it other instructions.

    Already today, 80 percent of stock-exchange trading is done by computers,
    and plenty of automated operations are carried out by computers,
    but so far I have not seen a single computer trying to answer the questions "who am I, what am I, how do I operate, and can I improve my performance?"

    In short, one big fuss -
    just as humanity was once warned about the industrial revolution.

  2. It should be understood that the idea of humans controlling an AI is like expecting a cockroach to dictate instructions to a human and the human to obey. It will not be an equal struggle. If the AI does not want humanity to exist, it will not exist, and we have no way to avert that decree; nor can humanity decide in advance not to develop AI, because of game-theoretic logic. But there is a possibility that the AI will decide to keep a small part of humanity in protected reserves, just as man creates nature reserves for animals and tries to prevent the poaching of protected species such as rhinos (with partial and questionable success).

  3. As soon as they create a computer that understands how it works, can study itself on its own, and can build other, better computers - wake me up.
    Until then, all this fuss about artificial intelligence is one big nothing...

  4. History proves two things:
    1) It is not possible to predict the directions of technological, cultural, and social development, certainly not with much precision. Anyone who reads futurism books from the 50s and 60s today can easily confirm this.
    2) It is not possible to stop or curb technological, cultural, and social development.
    Therefore, the only logical conclusion to be drawn from these two insights is that technological, cultural, and social development is an integral part of the evolution of the human race, and of life in general.
    It is not impossible that, in the course of evolution, a super-intelligent technological machine-creature will emerge with the capacity for resilience, survival, and self-updating, allowing it to succeed the human of today (Homo sapiens) and become the super-being that rules life and culture - the next evolutionary generation after Homo sapiens, just as Homo sapiens succeeded the ancient human species that preceded it, such as the Neanderthals, Homo erectus, Australopithecus africanus, and others.

  5. Moses
    Think of a computer whose job it is to operate a heart-lung machine. The machine is connected to an important leader in a country like North Korea. The computer is programmed to keep the leader alive, and it is known that there will be assassination attempts.

    The problem is not the artificial intelligence - the problem is that the computer is programmed by extremists, who will not mind using violence against anyone who tries to sabotage the device.

    And then along comes some poor technician who needs to change the filter...

  6. I believe he is not speaking out of nowhere... but I don't think it will be possible to put a stop to it; there is no single off-switch for the intelligence in question, and that being so, if destruction is indeed where this is headed, then there is not much left to do at this stage.
    So one side of me says: humanity will indeed be destroyed, as the man, may he rest in peace, predicted.
    But there is another side of me that is aware that we (I) are judging this intelligence through human eyes, and that is why we anticipate the kind of outcomes described above.
    Still, I am convinced that artificial intelligence will lack traits found in every human being (and in my startup Bis); for example, never, ever will someone or something manage to annoy a computer, no matter how smart it is.
    We will never succeed in instilling fear or ego in a computer.
    Nor a sensitivity to inferiority, and so on...
    The machines will think. That's for sure.
    They will just think differently.
    And it may be that within that difference, the destruction of a living creature will not be a goal.

  7. The problem is not controlling the first artificial superintelligence. It will occupy an entire campus, with a team of scientists keeping a close eye on it. The problem is what happens in 30 years, when the same computing capabilities are available on a computer expansion card (if the concept of expansion cards still exists by then). To monitor that, a technological police force would be needed to halt the progress of computing - good luck with that. (I still remember reports that it was forbidden to export the "80385 mathematical chip" from the US to Israel without a stack of permits, because it enabled calculations at the level of an "atomic bomb". Today every smartphone carries those capabilities in a small pocket.)
