
Worried about artificial intelligence taking over the world? You are probably making unscientific assumptions

This is what Eleni Vasilaki, a professor of computer science and neuroscience at the University of Sheffield, argues in response to the concerns of figures such as Stephen Hawking.

Will robots take over the world? Illustration: shutterstock

Eleni Vasilaki, Professor of Computer Science and Neuroscience, University of Sheffield
Full disclosure: Prof. Eleni Vasilaki receives funding from EPSRC, the Wellcome Trust, Google DeepMind, and the Amazon Alexa Foundation.

Translation: Avi Blizovsky

Should we be afraid of artificial intelligence (AI)? For me, it's a simple question with a simpler, two-letter answer: no. But not everyone agrees - many people, including the late physicist Stephen Hawking, have expressed concern that the rise of powerful systems could spell the end of humanity.

Obviously, the view that artificial intelligence will take over the world depends on the question of whether computers can develop intelligent behavior that surpasses that of humans - what is known as "superintelligence". Let's examine how likely this scenario is and why there is concern about the future of artificial intelligence.

Humans tend to fear what they do not understand, a tendency often blamed for racism, homophobia and other forms of discrimination. It is therefore no wonder that it also applies to new technologies, which are often surrounded by a certain mystery. Many technological achievements seem almost unbelievable, clearly exceeding expectations and, in some cases, human performance.

There is no ghost in the machine

But let us demystify the popular AI techniques known collectively as "machine learning". These methods allow a machine to learn a task without being explicitly programmed with instructions for it. This may sound sinister, but on the whole it is something rather simple.
The "machine" is software, or more precisely an algorithm, with the ability to discover relationships within the data it receives. There are many different methods for achieving this. For example, we can show the machine images of handwritten letters, one by one, and ask it to identify each one. At first the machine outputs a letter at random; we correct it by supplying the right answer, and the machine is programmed to adjust itself so that it is more likely to answer correctly next time. As a result, its performance improves over time, and it "learns" to recognize the handwritten alphabet.
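The guess-correct-adjust loop described above can be sketched in a few lines of code. The toy example below is only an illustration, not the system the article refers to: the 3x3 "letters", the perceptron update rule and all values are assumptions made for the sketch.

```python
# Toy version of the loop described above: guess, get corrected, adjust.
# The 3x3 "letters" below are made up for illustration; real systems use
# far richer images, but the principle is the same.

# Hypothetical 3x3 bitmaps standing in for handwritten "a" and "b".
LETTER_A = [0, 1, 0,
            1, 0, 1,
            1, 1, 1]
LETTER_B = [1, 1, 0,
            1, 1, 0,
            1, 1, 1]

def predict(weights, bias, pixels):
    """Guess "a" or "b" from a weighted sum of the pixels."""
    score = bias + sum(w * p for w, p in zip(weights, pixels))
    return "a" if score >= 0 else "b"

def train(samples, epochs=20):
    """Perceptron rule: nudge the weights whenever the guess is wrong."""
    weights, bias = [0.0] * 9, 0.0
    for _ in range(epochs):
        for pixels, label in samples:
            if predict(weights, bias, pixels) != label:
                step = 1 if label == "a" else -1   # the "correction"
                weights = [w + step * p for w, p in zip(weights, pixels)]
                bias += step
    return weights, bias

weights, bias = train([(LETTER_A, "a"), (LETTER_B, "b")])
print(predict(weights, bias, LETTER_A))  # a
print(predict(weights, bias, LETTER_B))  # b
```

After a few passes over the samples the weights separate the two shapes, which is the "learning" the text describes, in miniature.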

In effect, we have a machine programmed to exploit common relationships in the data in order to accomplish a specific task. For example, all versions of "a" look structurally similar to one another but different from "b", and the algorithm can exploit this. After the training phase, the machine can apply the knowledge it has gained to new letter samples, for example ones written by a person whose handwriting it has never seen before. Humans, of course, are good at this too.

Perhaps a more interesting example is AlphaGo, Google DeepMind's Go-playing AI, which outperformed every human player, including world champions. It clearly learns in a way humans do not: it played against itself more games than any person could play in a lifetime. Along the way, the machine also learns the rules of the game. Through millions of repeated games it can work out the best move in any situation and suggest moves that no one had ever played in Go before.
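The self-play idea can be illustrated on a far smaller game than Go. The sketch below is a hypothetical toy, not DeepMind's method: a tabular learner plays Nim against itself (21 stones, take 1 to 3 per turn, whoever takes the last stone wins) and improves purely from the outcomes of repeated games. All parameter values are illustrative.

```python
# A tabular learner that improves purely by playing against itself,
# here on Nim (21 stones, take 1-3 per turn, last stone wins) rather
# than Go.
import random

random.seed(0)
Q = {}  # Q[(stones_left, move)] -> estimated value of playing that move

def best_move(stones):
    moves = [m for m in (1, 2, 3) if m <= stones]
    return max(moves, key=lambda m: Q.get((stones, m), 0.0))

def play_and_learn(episodes=20000, epsilon=0.2, alpha=0.1):
    for _ in range(episodes):
        stones, player, history = 21, 0, []
        while stones > 0:
            moves = [m for m in (1, 2, 3) if m <= stones]
            if random.random() < epsilon:          # explore sometimes
                move = random.choice(moves)
            else:                                  # otherwise play greedily
                move = best_move(stones)
            history.append((player, stones, move))
            stones -= move
            player = 1 - player
        winner = 1 - player                        # who took the last stone
        for p, s, m in history:                    # Monte Carlo value update
            reward = 1.0 if p == winner else -1.0
            old = Q.get((s, m), 0.0)
            Q[(s, m)] = old + alpha * (reward - old)

play_and_learn()
# With enough games the learner rediscovers the classic strategy for
# small piles: leave your opponent a multiple of four stones.
print(best_move(5), best_move(6), best_move(7))
```

The learner is never told the winning strategy; it emerges from the statistics of its own games, which is the point the paragraph above makes about AlphaGo's self-play.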

Toddlers versus robots

Does this make artificial intelligence smarter than humans? Of course not. This is an AI dedicated to one particular type of task, and it does not display the versatility of humans. Humans spend years developing an understanding of the world, and artificial intelligence is unlikely to catch up any time soon.
The machine's "intelligence" is limited to its ability to learn, and even that learning happens differently from the way humans learn. A toddler can learn something just by watching someone solve a problem once. Artificial intelligence, by contrast, needs huge amounts of data and many attempts before it masters a very specific problem, and it struggles to generalize its knowledge to tasks very different from those it was trained on. So while humans develop remarkable intelligence rapidly in their first few years of life, the key concepts of machine learning are not that different from what they were one or two decades ago.
The success of modern AI stems less from breakthroughs in new methods and more from the sheer amount of data and computing power available. It is important to note, though, that even an infinite amount of data will not produce artificial intelligence like that of the human brain. For that we would first need significant progress toward techniques for "general intelligence". Researchers are trying various approaches to building such a system; one involves constructing a computer model of the human brain, something we are not even close to achieving.

Ultimately, just because an AI can learn does not mean we can teach it all aspects of human intelligence so that it can outsmart us. There is no simple definition of what human intelligence even is, and we certainly have no idea how exactly intelligence arises in the brain. But even if we could overcome that hurdle and then create an artificial intelligence that learns to become ever smarter, it does not follow that it would be more successful.

Personally, I am more concerned about how humans use artificial intelligence. Machine-learning algorithms are often treated as black boxes, and little effort is made to pin down the specifics of the solution an algorithm has found. This is an important and often neglected aspect: we are frequently obsessed with performance and less with understanding. Understanding the solutions these systems discover matters, because only then can we assess whether they are correct or desirable.
If we train our system on flawed data, we will end up with a machine that has learned relationships that do not actually hold. Say, for example, that we want to design a machine to assess the potential of engineering candidates. Probably a terrible idea, but let us go with it for the sake of argument. Traditionally this is a male-dominated field, which means the historical data will likely consist mostly of male students. If we do not make sure, for example, that the data fed to the machine during the learning phase is balanced, it may conclude that engineering students are men, and apply that conclusion, incorrectly, to future decisions.
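This failure mode is easy to reproduce with made-up numbers. Everything below is hypothetical: a naive frequency-based "model" trained on imbalanced historical records ends up ranking a low-achieving male candidate above a high-achieving female one.

```python
# Entirely made-up "admissions" records: nearly all past students were men.
history = (
    [{"gender": "M", "high_grades": True,  "admitted": True}] * 45 +
    [{"gender": "M", "high_grades": False, "admitted": True}] * 40 +
    [{"gender": "F", "high_grades": True,  "admitted": True}] * 5 +
    [{"gender": "F", "high_grades": True,  "admitted": False}] * 10
)

def admit_rate(records, **features):
    """Naive 'model': admission frequency among matching past records."""
    matching = [r for r in records
                if all(r[k] == v for k, v in features.items())]
    return (sum(r["admitted"] for r in matching) / len(matching)
            if matching else 0.0)

# The historical imbalance, not merit, dominates what was "learned":
print(admit_rate(history, gender="M"))                              # 1.0
print(round(admit_rate(history, gender="F"), 2))                    # 0.33
# A low-achieving man outranks a high-achieving woman:
print(admit_rate(history, gender="M", high_grades=False))           # 1.0
print(round(admit_rate(history, gender="F", high_grades=True), 2))  # 0.33
```

Balancing the training data, or removing the gender feature entirely, would be the first steps toward preventing the spurious relationship from being learned.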

Machine learning and artificial intelligence are tools. Like any other tool, they can be used in the right way or in the wrong way. Human greed and human intelligence scare me far more than artificial intelligence.

The original article on THE CONVERSATION website


6 comments

  1. We can develop an intelligence that surpasses us. More than one key figure in the field says so. Why else do all the big companies want quantum computing?
    If the above prophecy comes true, what is the difference between the genius above and religious people (I am a traditionalist and a believer) who say that it is impossible to create intelligence beyond what the Creator created? And what is the difference from Maxwell, the father of electromagnetism, who unified all electric and magnetic knowledge and arrived at special relativity without knowing it (Lorentz derived the equations of special relativity as arising from invariance under the Lorentz transformation, and he also did not know); Einstein recalculated and understood. Maxwell claimed in the 19th century that all of physics had already been discovered.

    The problem with such statements, which deny the existence of the problem, is that the train left the station long ago, and the deniers are simply not prepared for what is coming. My instinct says Hawking had more sense about the future than the article here. True, we are not at human-level intelligence, but we may not be that far off. On the order of a hundred thousand (10,000-1,000,000) teams of scientists are researching it. I am among them, and I am only a PhD student (armed with the knowledge that paper, pencil and a head have the power of a supercomputer), and we have research resources available. IBM, for one, offers to host research in quantum computing and will allocate resources for it. Today a quantum computer is IBM's property; in a few years it will be available to many more.
    With computing power on the scale of 15,000,000,000 neurons and about a hundred thousand researchers engaged in it, someone will get there.

  2. There is a conflict of interest when someone whose livelihood comes from the computing field takes a position in support of computer science research...
    An opinion in favor of the research, coming from a researcher, is self-evident and predictable, and does not arouse much interest.

  3. By the laws of life, within about a hundred years humans will be able to develop computers and robots with greater intelligence than humans, and bugs in the software will cause them to develop emotions like humans, so it is indeed possible that computers and robots will control humans. The only thing computers and robots may not be able to do is repair themselves when their software breaks down; but in principle, computers and robots could indeed rule the world with a firm hand, humans included, and their word would be final. In the foreseeable future, humans will dominate and restrain artificial intelligence, and from that moment until eternity humans will be the ones who control the computers and robots.

  4. My expectation was that the article would support its central claim, that we will not be able to develop an artificial intelligence that surpasses us.
    But the article contains no description of any physical or technological wall showing that artificial intelligence cannot overcome human intelligence. It is more a description of the current situation, which we all know (no such system exists yet), together with a description of the complexity.
    Yet it is no small thing that a system is able to beat the best human players at Go.
    The people at the top of that game are geniuses, among the best of human minds, and still, in this narrow and complex domain, we were defeated.
    It is possible that development will run into barriers not yet known to us, but these seem more like barriers that delay things than barriers
    that would make an artificial intelligence that surpasses us impossible.
    Apparently the basic computational unit gives artificial-intelligence systems a huge advantage in operations per second:
    neurons fire at around 200 hertz versus billions of hertz for processors, plus phenomenal memory capacity versus the pathetic average human ability to remember even one phone number.

  5. It sounds like the author only disputes the idea that Stephen Hawking's fear will come true next week.
    But the concern in question may well materialize in a few decades, if not a few centuries (assuming Stevie was right).
