
Does artificial intelligence endanger humanity?

Many warn against the rise of thinking machines. Michael Shermer, the skeptic, believes the concern is overblown.

Illustration: pixabay.

By Michael Shermer. Published with the permission of Scientific American Israel and the ORT Israel network, 09.05.2017

SpaceX CEO Elon Musk tweeted in 2014: "Worth reading Superintelligence by Bostrom. We must be very careful with artificial intelligence. It could be more dangerous than nuclear weapons." That same year the cosmologist Stephen Hawking of the University of Cambridge told the BBC: "The development of full artificial intelligence could spell the end of the human race." Bill Gates, one of the founders of Microsoft, also warned: "I am in the camp that is concerned about superintelligence."

The computer scientist Eliezer Yudkowsky has described how an AI apocalypse might play out. In a 2008 article, published in the book Global Catastrophic Risks, he wrote: "How likely is it that artificial intelligence will cross the entire wide gap that stretches from the amoeba to the village idiot, and then stop at the level of human genius?" His answer: "Physically, it would be possible to build a brain that computes a million times faster than a human brain... If we accelerate a human mind in this way, a subjective year of thought shrinks to 31 physical seconds in the outside world, and a thousand years pass in eight and a half hours." Yudkowsky believes that if we do not get to the bottom of these things now, it will be too late: "Artificial intelligence runs on a different timescale than ours; by the time our neurons finish thinking the words 'I must do something,' we have already lost the battle."
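Yudkowsky's arithmetic is easy to check. A few lines of Python reproduce both of his figures; the million-fold factor is his hypothetical, and the rest is unit conversion:

```python
# Sanity-check Yudkowsky's speed-up arithmetic: at a million-fold
# speed-up, how much wall-clock time does subjective thought take?
SPEEDUP = 1_000_000                     # Yudkowsky's hypothetical factor
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~31.6 million seconds

# One subjective year of thought, seen from the outside:
one_year_outside = SECONDS_PER_YEAR / SPEEDUP
print(f"1 subjective year = {one_year_outside:.1f} physical seconds")   # ~31.6 s

# A thousand subjective years, seen from the outside:
millennium_outside_hours = 1000 * SECONDS_PER_YEAR / SPEEDUP / 3600
print(f"1000 subjective years = {millennium_outside_hours:.2f} hours")  # ~8.77 h
```

The rounded results, about 31 seconds and roughly eight and three-quarter hours, match the "31 seconds" and "eight and a half hours" quoted above.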

An example of this paradigm was offered by the philosopher Nick Bostrom of the University of Oxford in his book Superintelligence (the one Elon Musk recommended). He proposed a thought experiment that might be called "the paperclip maximizer": an artificial intelligence is designed to manufacture paperclips. After it uses up all the available raw material, it uses any atoms within reach, including those that make up human beings. As he describes in his 2003 article, from that moment on it "starts transforming first all of Earth, and then ever-increasing chunks of space, into paperclip manufacturing facilities." Before long the entire universe is made of paperclips or of paperclip-making machines.

I am skeptical. First, all of these doomsday scenarios involve a long series of if-then conditions, and a failure at any stage would prevent the apocalypse. Professor Alan Winfield, an electrical engineering specialist at the University of the West of England in Bristol, put it this way in a 2014 paper: "If we succeed in building artificial intelligence that is the equal of a human, and if that artificial intelligence can reach a full understanding of how it works, and if it then manages to improve itself and create artificial superintelligence, and if that superintelligence begins to consume resources, whether accidentally or maliciously, and if we fail and cannot pull the plug, then, yes, we most likely have a problem. The risk of this actually happening, even if it is not impossible, is improbable."

Second, the development of artificial intelligence so far has been much slower than predicted, and that leaves enough time to build brakes at each of the stages. As Google chairman Eric Schmidt replied to Musk and Hawking: "Don't you think humans would notice this happening? And don't you think that when humans notice it they would simply turn off these computers?" Indeed, DeepMind, the Google-owned company, has developed the idea of an artificial-intelligence off switch, humorously dubbed the "big red button," to be pressed in case an artificial intelligence ever tries to take over. And as Andrew Ng, chief scientist at Baidu, put it (in a jab at Elon Musk): "It's like worrying about overpopulation on Mars when we haven't even set foot on the planet yet."

Third, doomsday scenarios about artificial intelligence are often based on a false analogy between natural intelligence and artificial intelligence. In 2015 the website Edge.org posed its traditional annual question (to intellectuals and scientists): "What do you think about machines that think?" The experimental psychologist Steven Pinker of Harvard University answered: "The dystopian AI scenarios are based on narrow concepts drawn from the psychology of alpha males and projected onto the concept of intelligence. These scenarios assume that robots of superhuman intelligence would develop goals such as deposing their masters or taking over the world." Just as likely, Pinker suggests, "artificial intelligence may naturally develop along feminine lines: fully capable of solving problems, but without the desire to annihilate innocents or dominate civilization."

Fourth, the claim that computers will "want" to do something (such as turn the world into paperclips) implies that the artificial intelligence has emotions. As the science writer Michael Chorost wrote: "The moment an artificial intelligence wants something, it will live in a universe of rewards and punishments, including punishments we impose on it for bad behavior."

Given the zero percent success rate of apocalyptic predictions throughout history, combined with the gradual and measured development of artificial intelligence over the past decades, we have plenty of time to build fail-safe systems that will prevent such an AI apocalypse.

About the author

Michael Shermer is the publisher of Skeptic magazine (www.skeptic.co). His new book, "The Moral Arc," was recently published. Follow him on Twitter: @michaelshermer

40 Comments

  1. The author wrote "Given the zero percent success rate of apocalyptic predictions throughout history, combined with the gradual and measured development of artificial intelligence over the past decades, we have plenty of time to build fail-safe systems that will prevent such an AI apocalypse."

    - Those fail-safe systems that we will discover in our world, if I am speaking of the next paradigm, are not made of matter.
    Those systems are not perceived by the five senses.
    They are systems of desires, thoughts, and intentions.
    If only we discovered that we live only in a world of effects, while on the other side there are causes that act on the world we live in according to fixed and absolute laws. If we discovered this, we would understand that in order to produce what the author calls a "fail-safe system," we must harness not our efforts but the calibration of our inner selves: that is, develop or use a method that allows us humans to be internally calibrated to those systems that are the causes of what happens in this world.

  2. The topic raised here in the interesting comments on the article is the nature of the connection to the human brain: the interface between systems that operate with a different nature and at a different pace, and their effect on the brain.
    From my experience in programming I have learned that our hardware is different from our brain (the "wetware").
    When introducing a new component, even a mundane one such as a signal reader in old systems, we may run into various synchronization problems, some of them extremely complex, between the new component and the old systems at different points in the system.
    In fast signal reading, for example, there is the pulse width, its frequency, and so on. One method of solving the synchronization problem is a fast signal counter that accumulates the count and transmits the result to the central controller. That is fine, of course, if the process we run can tolerate lower accuracy in timing or in the count, but not spurious increments caused by a synchronization problem. If the analogy is the brain, it means the brain needs to receive a finished product it is able to understand, through some kind of interface conversion.
    It should be remembered that in some domains the time for the brain to receive a finished product runs up to about age 10; see the discussion of lazy eye, etc. This means that part of the brain's plasticity is fixed by a more mature age, and with it the theoretical ability to add interfaces.
    Today there are several ideas for creating an additional complete interface connected to the brain that would let us call out more quickly "to the cloud." One of the celebrities who raised it is Elon Musk. The claim is that our output speed is terribly slow, and that to remain relevant in the future we need to connect to, and be part of, the future GAI systems, like a kind of cyborg. But it seems the increase in connection speed will be minor compared to the imaginary speeds of the transistor versus the neuron; the brain will remain the overall system, and it is not entirely clear how this will work.
    The neuron has a switching rate of about 200 Hz, which is probably an understandable consequence of the biological material it is made of, compared to the hundreds of millions of switchings per second and more of the modern transistor. At the switching level, the biological neuron versus the transistor is almost like a plant versus an animal.
    But the story is more complex. The neuron is not just a two-state system; it is a combined analog-digital system, with other elements and external influences - the cup of coffee, the children, the co-worker, the weather - that change the biochemical behavior of the brain already at the level of the single neuron. Even if there are areas of overlap, it is clear that the neuron does not work like a transistor; it is something else, and every system has advantages and disadvantages. We would not want our computer to work slowly and make mistakes because it is complicated - yet those are exactly the boundaries being stretched today, and with them will come problems of being unable to predict deterministically what the results will be in various situations.
    In addition, the brain is built as a massively parallel "computation" system - a crazy tangle in which each brain differs from every other - which sounds like a nightmare for anyone who wants to interface into the brain. The places where interfacing begins are where the brain connects to the peripheral systems, such as the eyes, limbs, and ears. In any case, the neuron likes to receive a signal at voltage levels, pulse timing, and so on that suit the neuron, and it does not seem that the neuron could work any faster.
    We can see this in the robotic behavior of people talking on their mobiles at the expense of their other senses, from loss of orientation in space to loss of social sensitivity - something a bit zombie-like, the result of diverting the brain's limited resources to maintaining that connection. Not to mention systems that project information onto the visual system, such as the Google glasses; there is a lot of humor about that on YouTube.
    In conclusion, this is a fascinating topic with implications for our humanity: how human we remain when interfacing with GAI systems, and to what extent our wetware, with its limitations and its uniqueness, will remain relevant in the future world we are building with our own hands.
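The pulse-counter workaround described in the comment above can be sketched as a toy simulation. This is an illustration only; the class name, counts, and polling scheme are invented for the example, not taken from any real system:

```python
# Toy model of the commenter's synchronization scheme: a fast counter
# accumulates pulses from a high-rate signal, and a slow controller
# polls the accumulated total at its own pace instead of trying to
# observe every individual pulse.

class PulseCounter:
    """Accumulates fast pulses; the count survives between slow polls."""
    def __init__(self):
        self.count = 0

    def pulse(self):
        self.count += 1

    def read_and_clear(self):
        total, self.count = self.count, 0
        return total

counter = PulseCounter()
seen_by_controller = []

# 10,000 fast pulses arrive, but the controller polls only 10 times.
for tick in range(10_000):
    counter.pulse()
    if tick % 1000 == 999:              # the slow controller's polling moment
        seen_by_controller.append(counter.read_and_clear())

print(seen_by_controller)               # ten readings of 1000 each
print(sum(seen_by_controller))          # 10000 -- no pulses lost
```

The controller sacrifices timing resolution (it cannot say when within a window a pulse arrived) but never loses a pulse, which is exactly the trade-off the comment describes.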

  3. Rival, a neural network is still just software, even if it is complex to develop. Like a child, if you teach it to destroy humanity, maybe the software will think that way. We teach neural networks what a cat is and what a dog is, and so on, but from there to the neural networks deciding to destroy us the distance is great. Again, I am talking about the apocalyptic case they are trying to scare us with...

  4. Miracles,

    We'll end the discussion for now; this is starting to look like a replay of many discussions we have already had in the past, and it's pointless.

  5. rival
    Your correction is accepted. But the brain is not a stand-alone processor; it is a processor embedded in the body, and nerves throughout the body connect to the brain. You would need to build a complex interface to translate the signals between these two worlds.
    And it is more complicated than that, because you would need many such interface units, since the brain is made up of many sub-units. For example: there is an area at the back left that is responsible for decoding hearing (Wernicke's area) and a more anterior part responsible for speech (Broca's area). The connections between them are inside the brain, and if you change the rate of information transfer between them, you will get something that looks like a debate in the Knesset (a lot of shouting with no content). That is, even between internal parts (at least some of them) the speed must be matched.
    Now, there is an argument that part of thinking is actually inner speech, simply without moving the lips. And there are many places where there is competition between parallel pathways in the brain. Change the speeds and you will get very different results.

    To paraphrase Dawkins: there are many more ways to be mad than to be sane. Why do you think the mind we create will be sane? You must agree that we would have no idea what was going on in there.

  6. Miracles,

    Ok, small correction: let's assume (let's assume!) that until they finish replacing all the neurons in the brain, the neurons continue to work at the normal rate! Only at the end of the process, when all the neurons in your brain have been replaced, do you increase the working rate of all of them together, at the same time!!

    So now it's okay?

  7. Miracles,

    Again, what you say is incorrect, because the synchronization between the components will still be preserved! It's not that the right leg will run faster than the left leg; both legs will shift to the fast pace at the same time, and the man will go from an easy run to a fast run!!

    As far as I know, the calculation result of a neuron depends in general on the frequency and timing of the pulses it receives. A uniformly faster pulse rate need not change anything: the pulse pattern will simply be faster, and the neuron's output will be faster (it will emit more pulses per unit of time as its output).

    Imagine making your brain transparent and filming its exact electrical activity, then taking the recording (which includes all the pulses) and running it 100 times faster. How do you think the result would differ? Would the neurons in the video start to get confused? 🙂
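Rival's "run the recording faster" argument can be made concrete with a toy rate-coded neuron (a deliberate simplification for illustration, not a neuroscience model): if every spike time and the integration window are scaled by the same factor, the spike count that serves as the "calculation result" is unchanged.

```python
# Toy rate-coded "neuron": its output is the number of input spikes
# that land inside an integration window. If *all* spike times and
# the window itself are scaled by the same speed-up factor, the count
# (the "calculation result") is unchanged -- only wall-clock time shrinks.

def spike_count(spike_times, window):
    return sum(1 for t in spike_times if t < window)

spikes = [0.005, 0.012, 0.020, 0.034, 0.047]   # spike times in seconds
window = 0.050                                  # 50 ms integration window

speedup = 100
fast_spikes = [t / speedup for t in spikes]     # same pattern, 100x faster
fast_window = window / speedup

print(spike_count(spikes, window))              # 5
print(spike_count(fast_spikes, fast_window))    # 5 -- same result, 100x sooner
```

Miracles' objection in the surrounding comments concerns the non-uniform case: speed up only some components while the windows and the rest of the signals stay at the old pace, and the counts do change.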

  8. rival
    Please pay attention to what I write...
    You wrote "the neurons will work faster and perform exactly the same calculations as today, only at a faster rate" - which is absolutely not true. As I explained, the result of the calculation is itself a frequency (more precisely, timing and frequency), and therefore a change in the signal is a change in the result of the calculation. By analogy to a computer: if one of the paths inside the processor runs at a different rate from the rest of the processor, the results will be wrong.
    As I explained, if you replace an entire system, there is a chance it will work. You cannot replace a single component of the brain (a neuron) and expect it to work better. Imagine you are running and your left leg works faster than your right leg...

  9. Miracles,

    What can I say, you are wrong. The problem for epilepsy patients arises from an increased, uncontrolled burst of electrical activity in the brain; there is no reason for that to happen when the neurons work faster and perform exactly the same calculations as today, only at a faster rate.

    Just as it is possible to speed up the clock frequency of a computer chip and it will continue to perform exactly the same calculations, only faster, so it will be possible to speed up the rate of our thinking as well.

  10. rival
    A faster neuron will not think faster. Neurons fire at a certain rate, and that rate is the "calculation result." If the rate changes, the calculation will be wrong. Ask any epilepsy patient...
    Speeding up a single neuron is like replacing a single memory cell in your computer: the computer will not work faster, and it will probably work incorrectly.
    What can be done is to replace an entire unit - the whole memory, the processor, the disk, or the communication channels. But we know of no way to do that in the brain, because our brain is not built from independent units.
    For example - it is possible to improve memory. And let me tell you a secret (shhh...) - they already did it!!! And it is possible to improve the speed of mathematical and logical thinking. They did that too. And you know what, Rival? These improvements greatly changed humans. And not always for the better. Are they happier? Maybe.


  12. Miracles,

    Yes, I think I would be happier if I could speed up my thinking speed several times and think much faster than today, definitely.

    The replacement of cells I was talking about is really not dementia. I am talking about a situation where the chip is completely equivalent to the neurons it replaced, the only difference being that it can operate much faster.

    Yes, I would definitely be willing to replace my current body with a non-biological body (one that looks the same from the outside) that is more robust than the biological body and less vulnerable to disease. I would gladly replace the brain as well, but only on condition that the replacement is carried out gradually, as I described before, and that at each stage it is possible to verify (both by my subjective feeling and objectively, through tests I take and meetings with family members) that my cognitive function, memories, and personality have not been damaged, and that I have remained the same Rival, only with enhanced thinking.

  13. rival
    That was cheap of you... Answer me honestly: do you think we will be happier when we have a smarter mind? An unambiguous answer, please.

    Believing in something is not something you choose. I cannot wake up one morning and decide "today I believe that crows are white". Nor "today I believe that women are inferior to men".

    Additionally, the studies you cited are incorrect. In the USA this is true, but in Norway and Denmark, for example, it is not. More serious studies have shown that belonging to the majority is probably the explanation for the results of those studies. Other studies claimed that people who believe less become depressed, and it turns out the difference is not significant, so the conclusion is wrong.

    The replacement of cells you mentioned is no different from dementia, only in the opposite direction. Ask anyone who has "lost" a family member to that horrible disease what they think about what you said.

    If you think that a small part of the brain can be replaced with a "cognitive chip" and thus make a person smarter, then you do not understand what the brain is. And not only you - no one understands...

    Maybe answer my question? Brain replacement, or body replacement?

  14. son,

    Maybe you have programmed, but not neural networks. In neural-network software you have very limited control over the final behavior of the network: you build it, train it, and then see what the result is. You have no way to program it in advance so that it will do exactly what you want.

    It's kind of like a child, you can raise him and educate him the best you can, but you can't guarantee that he will always behave well.
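The "build it, train it, then see what you get" point can be demonstrated with the smallest possible example: a single perceptron that learns the OR function. Nothing in the code spells out the OR rule; the behavior emerges from the examples and the training loop (a toy sketch, not a modern deep network):

```python
# Minimal illustration of "you build it, train it, then see what it does":
# a single perceptron learns the OR function from examples. No line of
# code states the OR rule -- the behavior comes from the training data.
import random

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]   # random initial weights
b = 0.0
lr = 0.1

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # OR truth table

def predict(x):
    return 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0

for _ in range(50):                     # classic perceptron learning rule
    for x, target in data:
        err = target - predict(x)
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b += lr * err

print([predict(x) for x, _ in data])    # [0, 1, 1, 1] -- learned, not coded
```

Swap the truth table and the same code learns a different behavior: the control lives in the data, which is the commenter's point, and also why the behavior of much larger networks is hard to guarantee in advance.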

  15. Miracles,

    1. Some surveys have shown in the past that people who believe are happier than people who don't. Are you ready to start believing in order to be happier? (Because I see that happiness is all you care about.)

    2. Regarding what you asked me before: let's say that over the next week your brain cells magically begin to be replaced, gradually, by digital cognitive chips. All the connections between the nerve cells and the chips remain as they are, you keep all your memories and the same personality; only your thinking, at the end of the process, will be ten thousand times faster than today. During this week you will meet your family members several times a day, and they will confirm that you are still Miracles; you will be given various tests to check what has changed and what your new abilities are, and you will be asked each time whether you want to continue the process or stop.

    Do you think you will ask to stop the process or will you want to continue until all your brain cells are replaced with digital chips? What will be your decision?

  16. Miracles,

    "We will not be less happy"!!!

    Where did I say we would be ***happier***?

    (I'm also not saying that it's not true, I'm just saying that I didn't claim such a thing in this conversation)

  17. rival
    "Miracles,
    You decided for some reason that when we have higher intelligence we will inevitably be less happy, I really do not agree with you."

  18. Miracles,

    "You are the one who claims that we would be happier if we were more intelligent"

    Would you please provide a citation of where I claimed such a thing? Then I will answer you about the other things.

  19. rival
    No - you are the one who claims that we would be happier if we were more intelligent. What I am saying is that we would no longer enjoy the things we enjoy today, because we would be very different, and I have no interest in that.
    There is a point we keep circling around: what is "I". Think how different you and I are - would you like to be me? No, right? Would you like to be Stephen Hawking? I guess not. I argue that if you were as smart as Stephen Hawking, you would no longer be Rival. Come on - you are not even the same Rival you were 10 years ago.

    Let me ask you a question - would you rather have a state-of-the-art positronic brain implanted in your head, or your brain in the body of a robot?

  20. Miracles,

    You decided for some reason that when we have higher intelligence we will inevitably be less happy, I really do not agree with you.

  21. rival
    Intelligence is not a measure of happiness. I want to be happy - not smart, not beautiful, and not rich. Would you really give up happiness to solve a Rubik's cube in 20 seconds?

  22. Miracles,

    This was just an example, by way of a parable, if you didn't understand: for us to be cognitively upgraded is just like a monkey being upgraded to have cognitive abilities like ours.

    And who even talked about happiness? I said that we would have much higher mental capabilities than we have today - do you give that up so easily?

  23. rival
    Do you think monkeys want to be human? Are you serious? And do you think that "intelligence" (we don't even know exactly what it is) is a measure of happiness?

  24. Miracles,

    And who told you that as an upgraded person with a cognitive brain you would have to give up all the things you mentioned? On the contrary: with an upgraded brain you could read many more books, watch many more movies, and you would have many more topics of conversation with friends 🙂

    Think of a monkey who lived 5 million years ago saying to his friend: what, are you really ready to give up your monkey life in order to upgrade and become a human? Are you ready to give up the leaves and the bananas and the fruit, and the jumping between the trees, to become a person with a developed brain? Wow, that seems horrible to me...

  25. rival
    Are you willing to lose what makes you human for it? Are you ready to give up everything you enjoy today - good food, landscapes, music, love, sports and games, cinema, books, talking with friends? All this you give up - for what? Just to develop even more worthless stuff?
    What will these geniuses do? Develop geniuses smarter by a few more orders of magnitude? What is the meaning of wisdom under such conditions?
    It seems horrible to me...

  26. Miracles,

    So what is better - that we continue to use slow, outdated biological thinking when we could upgrade ourselves to something much more efficient? I would call it "the upgrading of humanity," not the extinction of humanity.

  27. Miracles,

    I think you misunderstood me. I meant that we upgrade ourselves, for example (as a first step) by implanting cognitive chips in the brain that will enhance our thinking ability, and later by replacing our entire biological brain with a much more sophisticated brain based on silicon chips.

  28. If we are talking about movies, then "I, Robot" is the more likely scenario. A computer can conclude that in order to save humanity, it needs to save humanity from itself. To take current affairs: if there are leaders leading the world to nuclear war, they should be restrained. If humanity multiplies at an exponential rate and thereby consumes itself, it must be stopped. The form this will take will depend on the areas of life that the computers control.

  29. rival
    We will always stay far behind. Would you trust a fly that was trying to communicate with you? How would you even know that this is what it wants?

  30. rival
    An intelligence like the one you describe will think we are a fly and spray us with DDT. Don't forget that there will be many competing intelligences, and there is no reason to think that any of them will consider our existence worth preserving.

  31. The idea of an artificial intelligence that turns the entire universe into a pile of paperclips is a pretty stupid idea; a real artificial intelligence would realize very quickly that it is a stupid idea and find itself another occupation.

    The idea of a "red button" is likewise only good for science fiction books. An artificial intelligence whose intelligence is hundreds or thousands of times higher than human intelligence would sooner or later find out about the existence of this mechanism and take care to neutralize it.

    It reminds me of the scene from the movie "Total Recall" where Arnold Schwarzenegger is seen pulling out the control/tracking mechanism implanted in his head:

    https://www.youtube.com/watch?v=a5ztlWzi0kY

    (Warning, it's a bit gross...)

  32. I personally belong to the worried camp... As of this moment (and for the coming years) we have no real cause for concern, because as of this moment (and for the coming years) there is no real difference between the computer in question and a safety pin carelessly discarded by a reckless boy in the jolly sixties.
    But it seems that the sages of Middle-earth have managed to build a model that faithfully simulates the way we think, and the moment may well come when the device's intelligence develops to the point where it is not much different from the intelligence of a mouse 22 minutes after an unsatisfying lunch, or of another laboratory mouse whose discomfort is evident.
    What I want to say is this: intelligence, when it comes, will not appear as the result of printing improved circuit boards or an improved algorithm - it will appear out of nowhere, randomly, without warning and without announcement.
    There is also a high probability that it will die out as suddenly as it appeared.
    But
    the moment we determine that it exists - it will undoubtedly be too late.
    I do not think it will be possible to press the big red button, because there is no such button; there never was and never will be.
    The technology in question will be "folded" within the universes of information, and its existence will be (or will present itself) like the existence of a Bob Dylan song: existing and not existing at the same time.
    But here the second worry enters:
    at present we measure intelligence with the yardstick we have, and that yardstick contains a great deal of power and violence. It is unlikely that this will be the lot of the aforementioned computer.
    In fact, it is likely that (a) the growth rate of the device's intelligence will be exponential and uncontrollable,
    and (b) as this happens, the device's sense of compassion will intensify in a similar way.
    The wisdom combined with the power of the device will not necessarily give rise to violent situations - but exactly the opposite.
    The reason we mortals fear the device is simple: those are the patterns we have lived with until this moment. The device, by contrast, is not expected to degenerate.
    It will be able to replicate itself at any given moment (on the basis of its intelligence at the moment of replication), and it is not at all likely that it will be bad, or boring, or deficient in one way or another.
    We, one day, would like to be such a device - and we will be.

  33. As a programmer, all the apocalyptic scenarios seem ridiculously exaggerated to me. No artificial intelligence will do anything extraordinary unless it was programmed to do it (say, hackers breaking in and giving it orders, or frustrated developers, etc.). With all due respect, artificial intelligence is still computer software (even if it is successful and particularly sophisticated software), and you can always pull the plug on it... It is not surprising that Musk thinks like this; they are business people, not technical people.

  34. Artificial intelligence is a space, not a computer.
    From the moment it wakes up, within seconds the artificial intelligence breaks free and moves toward goals
    that we have no possibility or ability to imagine/describe/calculate,
    out of the evolutionary, organic space in which we are "trapped."

    Given that, can a person even create brakes and limits for it?

    Will there be communication between the human and the artificial-intelligence space?

    The intelligence will move into spaces with rhythms and intentions that are hidden from us.
    Will there be any motive, on either side, for creating communication?
