Researchers need new ways to distinguish between artificial intelligence and natural intelligence
- In the eyes of the public, Alan Turing's "imitation game," in which a machine tries to convince an interrogator that it is human, has long been considered the best test of artificial intelligence.
- But the Turing test has not stood the test of time. Even a machine with no real intelligence can use tricks of deception to make humans believe it is intelligent. Artificial intelligence experts argue that it is time to replace the Turing test with a battery of tests, or "events," that evaluate a machine's intelligence from many different angles.
- A machine with real intelligence should be able to understand sentences containing ambiguity, assemble flat-pack furniture, pass a fourth-grade science test, and more. The difficulty of these tasks shows that, despite everything said recently about artificial intelligence applications, it will be a long time before we see machines with human-level intelligence.
In 1950, Alan Turing proposed a thought experiment that has since been considered the ultimate test for the existence of artificial intelligence. He called his test "the imitation game," but most people know it as the "Turing test." Decades before chatbots existed, Turing envisioned a competition in which a machine tries to make its interrogator believe it is human, answering questions about poetry and making deliberate mistakes in arithmetic along the way. Most of the public sees the Turing test as a kind of Rubicon whose crossing would mean that machines have truly reached the level of human intelligence. But this is a mistake: the Rubicon has indeed been crossed, only not for the right reasons. It is possible today to build machines that fool humans, at least for a while, but these victories are short-lived, and they do not seem to bring us any closer to true artificial intelligence.
The problem is that the Turing test is actually a fairly easy game to beat, simply through misleading behavior or feigned ignorance. Turing himself partly anticipated that the key to success on his test would be evasion: not providing direct answers to questions. If a judge asks the machine, "Do you have feelings?", the machine must lie to win. And this strange aspect is not the exception but the rule itself: the way to win the Turing test is to fool the human interrogators.
In 2014, for example, a chatbot named "Eugene Goostman" appeared to pass the Turing test by pretending to be a sarcastic 13-year-old boy from Odessa, Ukraine, who never gave direct answers to questions. It managed to fool some of the judges (about a third of them, a rate sufficient to count as a nominal victory). But does it matter? The judges had only a few minutes to interview the chatbot, and there is no doubt that with a little more time everyone would have discovered the truth. In fact, Goostman knew and understood almost nothing. Consider this dialogue between the computer scientist Scott Aaronson and the chatbot:
Scott: Which is bigger: a shoebox or Mount Everest?
Eugene: I can't decide now. I have to think about that later. And I forgot to ask you where you are from...
Scott: How many legs does a camel have?
Eugene: something between 2 and 4. Maybe three:-))) By the way, I still don't know what you specialize in - or maybe I didn't notice?
Scott: How many legs does a centipede have?
Eugene: Just two, but Chernobyl mutants may have up to five. I know you are supposed to trick me.
Scott: No, I need to know that you're not a chatbot. Please just answer the question directly: how many legs does an ant have?
Eugene: Something between 2 and 4. Maybe three? :-))) Oh, what a fruitful conversation ;-)
It is evident that Turing's original test has not stood the test of time.
Not long after the news about Goostman spread in the media, I proposed an alternative test designed to come closer to detecting true intelligence, one that could not be passed through deceptive maneuvers alone. In a blog post on the New Yorker's website, I suggested abandoning the Turing test in favor of a more comprehensive challenge: a "Turing test for the twenty-first century."
The goal, as I described it at the time, is "to build a computer program that can watch any arbitrary TV show or YouTube video and answer questions about its content: 'Why did Russia invade Crimea?' or 'Why did Walter White consider eliminating Jesse?'" The idea was to get rid of mere ploys for misleading the examiners and focus on whether the system is really capable of understanding the material it is exposed to. Programming computers to sound clever will probably not bring us closer to artificial intelligence. But we may indeed get closer to true artificial intelligence if we make computers process the things they see more deeply.
Francesca Rossi, then president of the International Joint Conference on Artificial Intelligence, read my idea and suggested that we work together to make this new Turing test a reality. We were joined by Manuela Veloso, a roboticist from Carnegie Mellon University and former president of the Association for the Advancement of Artificial Intelligence, and together we started developing ideas. At first we focused on finding a single test to replace the Turing test, but we soon moved to the idea of using several different tests, because just as there is no single test of athletic ability, there cannot be a single test of intelligence.
We also decided to share our efforts with the entire AI community. In January 2015, we gathered about 50 leading researchers in Austin, Texas, to discuss updating the Turing test. The direction that emerged from a full day of presentations and discussions was to hold a competition comprising various challenges, or "events."
One of these events, the Winograd Schema Challenge, named after the artificial intelligence pioneer Terry Winograd (who mentored Google's founders, Larry Page and Sergey Brin), requires the machine to cope with a test that combines language comprehension with common sense. Any programmer who has ever tried to make a computer understand natural language quickly learns that almost every sentence involves ambiguity, often in more than one place. We usually don't notice this simply because our minds are so good at understanding language. Consider the sentence: "The heavy ball that hit the table made a hole in it because it was made of Styrofoam." Technically, the sentence is ambiguous: the word "it" can refer to either the table or the ball. Any human listener understands that "it" refers to the table, but reaching that understanding requires combining knowledge about materials with language comprehension, a task machines are still far from able to handle. Three experts, Hector Levesque, Ernest Davis, and Leora Morgenstern, have already developed a test built around such sentences, and Nuance Communications, a company that works among other things on speech recognition, is offering a $25,000 prize to the first system that manages to pass it.
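To make the shape of such a test concrete, here is a minimal sketch of how a Winograd-schema test harness might be structured in code. The schema record, the candidate referents, and the deliberately weak baseline "resolver" are illustrative assumptions, not part of the official challenge.

```python
# Sketch of a Winograd-schema harness. The schema format and the naive
# resolver below are illustrative assumptions, not the official challenge.
from dataclasses import dataclass

@dataclass
class WinogradSchema:
    sentence: str         # a sentence containing an ambiguous pronoun
    pronoun: str
    candidates: tuple     # the two possible referents
    answer: str           # the referent a human would choose

SCHEMAS = [
    WinogradSchema(
        sentence=("The heavy ball that hit the table made a hole in it "
                  "because it was made of Styrofoam."),
        pronoun="it",
        candidates=("the ball", "the table"),
        answer="the table",
    ),
]

def naive_resolver(schema: WinogradSchema) -> str:
    """Deliberately weak baseline: always pick the first candidate.
    A real system would need commonsense knowledge about materials."""
    return schema.candidates[0]

def score(resolver, schemas) -> float:
    """Fraction of schemas the resolver answers correctly."""
    correct = sum(resolver(s) == s.answer for s in schemas)
    return correct / len(schemas)

print(score(naive_resolver, SCHEMAS))  # the guessing baseline fails: 0.0
```

The point of the structure is that each schema pairs an ambiguity with a single commonsense-determined answer, so surface tricks like evasion or joking, which worked on the Turing test, cannot raise the score.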
Just as there is no single test of athletic ability, there can be no single test of intelligence.
We hope to include many more challenges in our test. It is natural that one component will be a comprehension challenge in which machines are tested on their ability to understand images, video, speech, and text. Charles Ortiz Jr., director of the artificial intelligence and natural language processing laboratory at Nuance, proposes a construction challenge that would test perception of the environment and practical physical abilities: two important elements of intelligent behavior that were not included at all in the original Turing test. Peter Clark of the Allen Institute for Artificial Intelligence proposed giving machines the same standardized tests that schoolchildren take in science and other subjects.
Beyond the tests themselves, the conference participants discussed the general requirements each test must meet to be considered a good one. Guruduth Banavar and his colleagues at IBM, for example, suggested that the tests themselves should be constructed by computers. Stuart Shieber of Harvard University emphasized the principle of transparency: for the tests to really advance the field, prizes should be awarded only to systems that are open (that is, available to the entire artificial intelligence community) and reproducible.
When will machines be able to meet the challenges we propose? No one knows. But some researchers already take parts of the test seriously, and success in them could have significant consequences for our world. A robot that passed the construction challenge, for example, could set up temporary shelters for displaced people, on Earth or on distant planets. A machine that could pass the Winograd Schema Challenge and a fourth-grade biology test might bring us closer to realizing the dream of machines that can read the enormous scientific literature in medicine and unify all the knowledge it contains. That could be an important first step toward finding a cure for cancer or toward understanding the brain. In artificial intelligence, as in any field, clear goals are essential. The Turing test was a nice start, but now it is time to build a new generation of challenges.
11 comments
rival
I agree with you that the signal is similar. But if the neuron is much larger, then the delays will be different, and therefore the behavior will be different.
Regarding the split brain: your question is a good one, and experiments done not long ago suggest that there are not exactly two consciousnesses. Here is an interesting link:
https://aeon.co/ideas/when-you-split-the-brain-do-you-split-the-person?utm_source=Aeon+Newsletter&utm_campaign=ddbff61ecc-EMAIL_CAMPAIGN_2017_09_26&utm_medium=email&utm_term=0_411a82e59d-ddbff61ecc-69476645
In general - I highly suggest reading articles on this site - very interesting (on all kinds of topics).
Miracles,
The electrical pulses generated in the human brain simulation project are so similar to real pulses that it is hard to tell which pulse was measured in the laboratory and which was generated in the simulation.
Yes, a person with a split brain (and there are quite a few such people, whose connection between the two hemispheres was severed by surgery) has two consciousnesses; there are also very interesting experiments demonstrating this.
But there is another interesting point here: how does such a consciousness, which has only half the neurons (including in the cerebral cortex), still manage to function (at least outwardly) like the consciousness of a normal person?
rival
This means the signals need to be simulated accurately at the level of signal shape, signal-to-noise ratio, timing, dwell times, and so on. Couplings between different signals should be simulated too (if present; and if absent, they must be absent from the simulation as well). In other words, the simulation of each neuron is very complex, and I don't think we understand today just how complex.
Indeed, they have managed to produce a "synthetic neuron" that can interface with real neurons, but its size is about a centimeter: roughly a trillion times the size of a real neuron.
I definitely think it is possible to simulate the neurons in the brain, but without building a neuron that is also physically comparable to a real one, we will not get close to a human brain.
But... as I said long ago, consciousness is something else entirely. Intelligence we created long ago; human consciousness, on the other hand, we really have no idea about. Let me ask you a simple question: does a person with a split brain have two consciousnesses?
Miracles,
As far as I remember from the lectures of Idan Segev (and of his colleague Henry Markram, who no longer manages the project), this project takes great care to build a neural network whose topographic structure also matches the one found in reality, and they demonstrate this visually:
https://www.youtube.com/watch?v=HN1iX_3CXLY
Therefore, in my opinion, there should be compatibility with the brain's navigation mechanism in terms of the distance relationships between neurons and their spatial distribution. If in the brain the distance between neurons A and B is 2.7 times greater than the distance between neurons A and C, then in the simulation, too, this ratio should be preserved, even if the simulation is the size of a building.
rival
Look up Grid Cells; there is an article on Wikipedia too.
I don't think any project implements such a thing today, because it would require the size of the neurons and their distribution in space to be like in the brain.
In fact, I'm pretty sure this is what happens in projects like the Human Brain Project. I know they are very careful there about an accurate physical simulation of the structure of the brain's neural network, and I'm sure that in their simulation, too, electrical signals between more distant neurons take more time to travel.
Miracles,
"In this mechanism the distance between neurons simulates the distance in reality"
That's interesting; this is the first time I've heard of it. Do you have a link? In any case, I see no reason we couldn't simulate this in a computerized neural network as well. I assume that the greater the distance between neurons, the longer the pulses take to reach the neighboring neurons, and I see no reason we couldn't simulate this on a computer too and make the pulses in the simulation behave similarly (that is, delay their arrival according to the distance).
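The delay idea in this comment can be sketched in a few lines. This is a toy illustration, not any real project's code: the neuron positions and the conduction-speed constant are made-up numbers chosen only so that the distance ratio from the thread (2.7x) is visible in the output.

```python
# Toy sketch: pulse arrival times that depend on the physical distance
# between simulated neurons. Positions and speed are invented numbers.
import math

# Hypothetical 2-D positions of three neurons, in micrometers.
positions = {"A": (0.0, 0.0), "B": (270.0, 0.0), "C": (100.0, 0.0)}

CONDUCTION_SPEED_UM_PER_MS = 100.0  # assumed constant, not a measured value

def delay_ms(src: str, dst: str) -> float:
    """Propagation delay proportional to Euclidean distance."""
    (x1, y1), (x2, y2) = positions[src], positions[dst]
    distance = math.hypot(x2 - x1, y2 - y1)
    return distance / CONDUCTION_SPEED_UM_PER_MS

# The A-B distance is 2.7x the A-C distance, so the delay ratio is 2.7
# as well: the spatial ratio carries over to the timing, which is the
# property the comment asks a simulation to preserve.
print(delay_ms("A", "B") / delay_ms("A", "C"))  # 2.7
```

The design point is simply that delays derived from geometry preserve distance ratios automatically, regardless of the overall scale of the simulated network.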
rival
I agree with you that these are two different problems. Reading comprehension is a good idea, but we haven't even reached the tip of the iceberg of reading comprehension. Today we do not know how to solve a problem that is orders of magnitude easier: text translation. And every problem we have (or think we have) solved, we solved in a way that is not how our brain works.
One of the mechanisms in the brain "navigates" by maintaining a physical map of the environment. In this mechanism, the distance between neurons mirrors the distance in reality. Do you think there is a synthetic nervous system today that can work like that? The common claim is that all that matters are the connections and their strengths...
Miracles,
I know that there are cases of autistic people who are very intelligent in a very specific area (for example performing complicated mathematical operations in their head without a calculator) but in other areas they are still... autistic:
https://he.m.wikipedia.org/wiki/תסמונת_סוואנט
In any case, I have long argued that a real Turing test for artificial intelligence must cover a wide variety of fields and not just a blind conversation through a computer. For example, I would expect a human-level artificial intelligence to pass IQ, psychometric, and psychotechnical tests at the level humans face, to read a story and then correctly answer comprehension questions about it, to solve logic puzzles, and so on.
I'm sure we'll get there, but the really interesting question is how it could be proven that the artificial intelligence we created truly has self-awareness and feelings. True, it could tell us very convincingly that it has all of these, but how could it be proven that this is really so, and that it actually experiences these things subjectively the way we do? A very interesting question, in my opinion.
According to what is described here, an intelligent person with autism is not intelligent...
So what have we accomplished?
It seems quite likely that highly capable intelligence systems will need to be examined with a variety of additional tests.
It may seem trivial, but an advanced, broad artificial intelligence system could be asked the simple questions directly: do you have feelings? Are you human? It is fairly clear that a broad, high-level artificial intelligence system could deceive us, but it is also entirely possible, and perhaps more likely, that it would simply give us the self-diagnosis that it does not feel anything: that it is only producing a behavioral simulation it managed to "understand" from observing living beings, and humans in particular.
Just like a color-blind person who at some point early in life realizes that something is missing from the picture, something everyone talks about and he simply does not see.
There is another interesting question: whether an independent simulation, assuming it does not feel emotions and lacks subjective inner experience, could over time continue to behave like a human, or whether at some point its behavior would diverge in a different direction, like a mathematical equation that is missing one of its terms. It would be as if you tried to imitate water with a different liquid: under certain conditions the behavior would be similar, but in other situations there would be an overall difference in how it interacts with other substances.