
Four types of artificial intelligence - from reactive machines to self-aware ones

In light of recent breakthroughs in artificial intelligence research, it seems that intelligent, sensing machines capable of understanding verbal commands, recognizing images, driving cars and playing games better than we do are on the horizon. How long will it take before they walk among us?


A dreaming robot. Illustration: shutterstock

Written by: Arend Hintze, Assistant Professor of Integrative Biology & Computer Science and Engineering, Michigan State University


A recent White House report on artificial intelligence expresses skepticism. According to its authors, it is unlikely that in the next twenty years we will see machines with broad intelligence comparable to or exceeding that of humans. However, the report says, in the coming years we will see more and more machines that surpass human performance in more and more tasks. But the report's authors missed some important points.

As an AI researcher, I admit it is nice to have my field high on the US government's agenda, but the report focused almost exclusively on what I call "the boring kind of AI." It did not address how evolution can help us develop improved artificial intelligence systems, or how computational models can help us understand how our own human intelligence evolved.

The report focuses on what can be called mainstream artificial intelligence tools: machine learning and deep learning. These are the technologies that managed to play "Jeopardy!" well and even beat humans at the most complicated game ever invented - Go. These current intelligent systems can handle vast amounts of data and perform complex calculations at high speed, but they are missing a key component if we are to build the sentient, intelligent machines we expect to see in the future.

We need to care about more than machines that know how to learn. We need to overcome the boundaries between the four different types of artificial intelligence, the barriers that separate the machines from us - and us from them.

Type 1 – reactive machines

The most basic AI systems are purely reactive: they have no memory and therefore cannot use past experiences to inform current decisions. "Deep Blue", the IBM chess computer that defeated international grandmaster Garry Kasparov in the late 1990s, is the perfect example of this type of machine.
"Deep Blue" can recognize the pieces on a chessboard and knows how each of them moves. It can make predictions about its own and its opponent's next actions, and it can choose the most promising move among the possibilities.
But it has no concept of the past and no memory of what came before. Apart from a rarely used chess rule against repeating the same move three times, Deep Blue ignored everything that happened before the present moment. All it does is evaluate the pieces on the chessboard as it stands right now and choose among the possible next moves.
This type of intelligence involves the computer perceiving the world directly and acting on what it sees. Artificial intelligence researcher Rodney Brooks argued that we should only build machines like this. His main reason was that people are not very good at programming accurate simulated worlds for computers to use, what AI scholarship calls a "representation" of the world.
The current intelligent machines we marvel at have no such general conception of the world; at best they have a limited, task-specific one. The innovation of "Deep Blue" was not in expanding the range of scenarios the computer considered, but the opposite: its developers found a way to narrow its view, abandoning many candidate moves based on how the system rated their outcomes. Without this ability, "Deep Blue" would have needed to be a much more powerful computer to beat Kasparov.
Similarly, Google's AlphaGo, which defeated human world champions at the game of Go, was not required to evaluate all future possibilities. Its analysis methods were more sophisticated than Deep Blue's, because it evaluates the development of the game using neural networks. These methods improve the ability of AI systems to play specific games better, but they cannot easily be changed or applied to other situations. Such systems have no conception of the wider world - they cannot function beyond the specific tasks assigned to them.
These programs cannot participate interactively in the world in the way we imagine AI systems one day will. Instead, these machines will behave in exactly the same way every time they encounter the same situation. That can be very good for ensuring that an AI system is reliable, but it is bad if such a system is to drive an autonomous car that must react to surprises in the real world. These simple AI systems will never get tired, bored or sad.

Type 2 – limited memory

In this type we find machines that can remember the past and use this information. Driverless cars already do this. For example, they keep a distance from other cars by controlling the speed and direction. This cannot be done in one moment but requires identifying specific objects and monitoring them over time.
These observations are added to the autonomous car's pre-programmed representations of the world, which also include lane markings, traffic lights and other important elements such as curves in the road. They are taken into account when the car decides when to change lanes, so as to avoid cutting off or colliding with a nearby car. But these pieces of information about the past are only transient. They are not saved as part of a library of experience the car can learn from, the way human drivers accumulate years of experience behind the wheel.
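The "limited memory" idea can be made concrete with a small sketch. This is a hypothetical example, not any real vehicle's software: the car keeps only a short rolling buffer of recent gap measurements to the car ahead, enough to estimate how fast the gap is closing, and older observations are discarded rather than stored as long-term experience.

```python
from collections import deque

class FollowingDistanceTracker:
    """Transient, limited memory: only the last few observations survive."""

    def __init__(self, window: int = 3):
        # A fixed-size buffer: appending beyond `window` silently drops the
        # oldest measurement, like the fleeting memories described in the text.
        self.recent_gaps = deque(maxlen=window)

    def observe(self, gap_m: float) -> None:
        """Record the current distance (in meters) to the car ahead."""
        self.recent_gaps.append(gap_m)

    def closing_speed(self, dt_s: float = 1.0) -> float:
        """Estimate how fast the gap shrinks (m/s), from oldest vs. newest sample."""
        if len(self.recent_gaps) < 2:
            return 0.0
        elapsed = (len(self.recent_gaps) - 1) * dt_s
        return (self.recent_gaps[0] - self.recent_gaps[-1]) / elapsed

tracker = FollowingDistanceTracker(window=3)
for gap in [30.0, 28.0, 26.0, 24.0]:   # gap shrinking by 2 m every second
    tracker.observe(gap)
print(tracker.closing_speed())          # prints 2.0 -- time to slow down
```

Monitoring an object over time, as the text says, cannot be done from a single snapshot: the closing speed only exists as a relation between at least two remembered observations.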
So how can we build AI systems that hold a full representation of the world, remember their experiences and learn how to handle new situations? Brooks was right that it is very difficult to do. My research into methods inspired by Darwinian evolution can begin to compensate for human shortcomings by letting machines build their own representations.

Type 3 – artificial intelligence and the theory of mind

This is the point that separates the machines we have today from the machines we will build in the future. However, we need to be more specific about the kinds of representations machines will be required to form. Machines of this more advanced type form representations not only of the world but also of other agents and entities in it. In psychology, this is known as "theory of mind" - the understanding that people, creatures and objects in the world can have thoughts and feelings that affect their behavior.
This was crucial to how human societies formed. Cooperation is difficult, even impossible, without knowing the other party's motives and intentions, and what they know about me or about the environment. If AI systems are ever to walk among us, they will need to understand that each of us has thoughts, feelings and expectations about how we will be treated, and they will have to adjust their behavior accordingly.

Type 4 – self-awareness

The final stage of artificial intelligence development is to build systems that can create a representation of themselves. Ultimately, we artificial intelligence researchers will be required not only to understand consciousness but also to build machines that will have consciousness.
This is, in a sense, an extension of the theory of mind described in the third type of artificial intelligence. Consciousness is also called "self-awareness", and not for nothing. "I want the object" is a completely different statement from "I know I want the object". Conscious beings are aware of themselves, know their internal states and can predict the feelings of others. We assume that someone honking behind us is angry or impatient, because that is how we feel when we honk at others. Without a theory of mind, we could not make those sorts of inferences.
While we are probably far from creating self-aware machines, we should focus our efforts on understanding memory, learning and the ability to base decisions on past experience. This is an important step toward understanding human intelligence itself, and it is essential if we want to design or evolve machines that are exceptional at classifying what they see in front of them.


For the original article on The Conversation website


