
What did you want it to understand?

According to Prof. Carmel Domshlak, a researcher of decision support systems at the Technion's Faculty of Industrial Engineering and Management, "the leap in the field's development is due to the transition from dedicated algorithms for each problem to more general algorithms."

Illustration: pixabay.

"Artificial intelligence was the dream of the world of computer science back in the days of Alan Turing, who formulated the famous Turing test after World War II (an artificial intelligence that tricks the computer user into thinking there is a person on the other side; a test no computer has yet passed - AB). Turing envisioned computational machines fulfilling functions that at the time were fulfilled by humans," said Prof. Carmel Domshlak, a researcher of decision support systems from the Faculty of Industrial Engineering and Management at the Technion.

According to him, "In 1956, a group of American researchers convened, among them Herbert Simon, later a Nobel laureate, and Claude Shannon, the founder of information theory. Their hypothesis was that any activity that requires human intelligence can be described to a machine - and if so, let's achieve it."

"The questions that came up were: 'What is an activity that requires intelligence? What is intelligence anyway? For which activities do we need intelligence, and for which do we not?' And they never arrived at a precise definition of the understanding we want from it."

If you think about a golem, Domshlak said, "What is that golem capable of doing? First, it has to represent knowledge somehow. As soon as someone tells me that my father is a reporter for People and Computers, I now have knowledge, and it is stored in my head. Second, that golem should be able to draw new conclusions based on the knowledge it has. For example, if I was told that my father is a reporter for People and Computers, and I was told that People and Computers is a news outlet, then I know that my father is a newspaper reporter. No one gave me that knowledge directly, nor did I read anywhere that my father is a newspaper reporter. I combined one piece of knowledge I had with another piece of knowledge I had and gained new knowledge."
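The inference step Domshlak describes - combining two stored facts to derive a third - can be sketched in a few lines of Python. This is a minimal illustration of my own, not anything from the article; the fact triples and the single hard-coded rule are invented for the example.

```python
# Two facts the golem was told directly, stored as (subject, relation, object) triples.
facts = {
    ("father", "reporter_for", "People and Computers"),
    ("People and Computers", "is_a", "news outlet"),
}

def infer(facts):
    """Apply one rule: if X is a reporter for Y and Y is a news outlet,
    then X is a newspaper reporter. Returns the enlarged set of facts."""
    derived = set(facts)
    for (x, rel, y) in facts:
        if rel == "reporter_for" and (y, "is_a", "news outlet") in facts:
            derived.add((x, "is_a", "newspaper reporter"))
    return derived

knowledge = infer(facts)
# The new fact was never stated directly - it was deduced.
print(("father", "is_a", "newspaper reporter") in knowledge)  # True
```

A real knowledge base generalizes this by holding many such rules and applying them repeatedly until no new facts emerge (forward chaining); the toy above performs a single pass with a single rule.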

According to him, "The basic knowledge will not come from nowhere; it has to be acquired somehow. Acquiring the basic knowledge is, in fact, learning. As humans we acquire knowledge through reading, observation and experience. Once a child bumps into the corner of the table once, twice, three times, he learns - without anyone telling him to be careful. There are many, many ways to learn, and many ways to perform sensing. There are also teachers who teach us, and we take the knowledge and store it."
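The table-corner example is, in essence, learning from reinforcement: each painful outcome lowers the estimated value of an action until the learner avoids it. Here is a hypothetical sketch of that idea; the action names, learning rate, and reward numbers are illustrative choices of mine, not anything from the article.

```python
# Estimated value of each action, initially neutral.
values = {"run_near_table": 0.0, "walk_around_table": 0.0}
alpha = 0.5  # learning rate: how strongly each new experience shifts the estimate

def update(action, reward):
    """Nudge the action's value toward the observed reward."""
    values[action] += alpha * (reward - values[action])

# The child bumps the corner once, twice, three times...
for _ in range(3):
    update("run_near_table", -1.0)    # painful bump: negative reward
    update("walk_around_table", 0.0)  # nothing bad happens

best = max(values, key=values.get)
print(best)  # walk_around_table
```

After three bumps the value of running near the table has dropped well below zero, so the learner prefers the safe action - no one had to say "be careful."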

In addition, he said, "That golem has to make decisions. It has to formulate for itself, or receive from the outside, goals to strive for; arrange them into short-term goals and long-term goals; decide on the relative importance of those goals; and ultimately decide on the actions it is going to perform in order to achieve the goals it chooses to pursue."

"And finally, it must know how to get along in the world. Since our golem does not live in a vacuum, it has to make its decisions not only according to its own goals, but also according to what it thinks about the goals and behavior of other golems and of humans. It has to conduct itself in society, which makes the problem much more complicated. There is a difference between driving on an empty road and driving on a road with other cars - especially cars that want to kill you."

To this end, Prof. Domshlak explained, "an artificial intelligence system requires means of sensing - audio, vision, text - and of communicating with humans. These are mechanisms, each of which constitutes an entire field in itself, such as natural language processing, computer vision, text recognition and speech recognition. That golem should recognize the sound of a horn, because otherwise it may get run over. On the other side there is the whole world of robotics - the physical world, involving mechanical engineering and electrical engineering. It should have an equivalent of the way humans move the organs of their bodies."

Until the late 1980s, said the professor, "researchers took a problem, such as playing checkers or identifying fruit in a picture, and solved it by developing an algorithm for that problem and writing software that implements the algorithm. They fed the knowledge about the problem - what an apple is and what it looks like, or how best to play checkers - into this software."
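The "dedicated algorithm" approach can be illustrated with a toy example of my own (not from the article): minimax search hard-wired to one specific game - a simple Nim variant in which players remove one or two sticks and whoever takes the last stick wins. The code plays this game perfectly, yet - exactly as Domshlak notes about the checkers programs of that era - it is useless for anything else.

```python
def best_move(sticks):
    """Return (move, wins) for the player about to move with `sticks` left.
    A move takes 1 or 2 sticks; taking the last stick wins the game."""
    for take in (1, 2):
        if take == sticks:
            return take, True            # taking the last stick wins immediately
        if take < sticks:
            _, opponent_wins = best_move(sticks - take)
            if not opponent_wins:
                return take, True        # leave the opponent in a losing position
    return 1, False                      # every move loses; play anything

# With 4 sticks the first player wins by taking 1, leaving 3 (a losing position).
move, wins = best_move(4)
print(move, wins)  # 1 True
```

The game-specific knowledge (legal moves, what winning means) is baked directly into the function; handing this program chess, Go, or poker would require rewriting it from scratch, which is the limitation that pushed the field toward more generic algorithms.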

"Many good things came out of these works, but quite indirectly, because the problem with this strategy was that such research can never fail. The result either works or it doesn't. If it works - wonderful. If it doesn't - the conclusion is that knowledge is lacking. And that is a bottomless pit. Once more knowledge began to be fed in, the systems that had been developed no longer fit. If you stop adding knowledge, the system does not solve the problem, and of course if the product is software that plays checkers - it is not good for anything except playing checkers."

"It's hard to believe that in our heads we keep separate software for everything we do in life," said Domshlak. "Transferring what you learned in problem A to problem B was problematic. In the late 80s this realization provoked a great deal of internal criticism, until finally researchers turned to developing algorithms and software not for specific problems, but for models - families of problems, such as strategy games."

"Think of developing a system that plays checkers, chess, Go and other similar games. But as soon as we talk about a game like poker - a much more complicated game, because you don't see the opponents' cards; there is partial sensing, and uncertainty that comes from the dealing of the cards - this requires much more generic technological development."

"For example, systems developed for computer vision can certainly be used for speech recognition, and in general for translation - because what is image recognition if not translation? The computer takes a picture and describes it in words, and that is just like taking a group of words in one language and translating it into a group of words in another."

In conclusion, Prof. Domshlak said, "In recent years I have been disturbed by the misconception of the world of artificial intelligence that is taking shape among the public. It ranges from 'it will kill us soon, it will take over soon' to 'everything is simple - there is a deep learning system and it will solve all the problems in life.' The truth is always in the middle."


11 comments

  1. rival
    We definitely agree. We are all just machines.
    My contention is that this consciousness will be almost nothing like the consciousness of man. Will the magnitude of the risk that such a robot will undertake to save another depend on the genetic proximity between them? Will he be angry because the prime minister's wife is corrupt?

  2. Yosef,

    "The machine within us - the one that is aware of itself - is entirely a machine"

    I didn't understand - are you claiming that we have already built a machine that is aware of itself? Where can I see it?

  3. There are studies in the field of understanding consciousness, and also practical mathematical and physical progress.
    Since I also want to do research there, I will not elaborate at this point.
    The machine within us - the one that is aware of itself - is entirely a machine. A wonderful one, though.

  4. rival
    We need to ask ourselves not whether we can produce consciousness, but what consciousness is. Perhaps we will discover that artificial consciousness already exists. In my opinion, consciousness is the ability of a machine to retrieve a memory from the internal knowledge stored in it. The fact that the computer does not know how to tell us that it has consciousness does not mean that there is no consciousness.

  5. Yosef,

    "We're not yet at the point where it's self-aware. I understand that we lack knowledge of how it works."

    In my opinion, there is a pretty good chance that this feature of self-awareness will arise spontaneously when the neural network is large enough and connected to the body with sensors and cameras.

    Nissim disagrees with me on this point, but I have no intention of getting into discussions on this issue with him again.

  6. Nissim,

    Is the world of civil aviation unsuitable for artificial intelligence? How did you reach that conclusion? In my opinion, the artificial intelligence that exists today could easily carry out air traffic control of civilian planes (preventing collisions, etc.) and fly a plane, including takeoff and landing, in 99.9% of cases - except for extreme cases and emergencies that would require human intervention.

  7. I study artificial intelligence: in the end it is statistics and mathematical operations on inputs and outputs, so I do not live in a paranoid world about intelligence.
    The artificial intelligence will understand itself in the end, and since we created it in our image, it will not love others - meaning us. Even today, what is visible is not yet self-aware. There is no assurance that what is developed in basements is being developed for positive purposes. It is possible that a company like Google is developing more advanced intelligence than what is publicly known; even in 2012, when the layered neural network revolution took place, they did not want to tell others.

  8. Moshe,
    In 2025 there will be a shortage of programmers. Many fields are not suitable for artificial intelligence - a clear example is the world of civil aviation. A second example is mathematical calculations of spacecraft trajectories.

    And regarding law - I want to see a robot convince a jury that the accused had a difficult childhood... 🙂

  9. The article above only scratches the surface of the issue.
    What will happen in practice is that, at least in the first stage, those who will have access to these computers are entities whose purpose is well defined (profit), and hence the nature of the device.
    It will affect all areas of life; in the micro it will be good, in the macro - much less so.
    There will be no middle ground, because the side effects will apply to the entire world (unemployment, etc.).
    I personally work in programming, and I know that around 2025 there will be no need for programmers (the same goes for many other professions that are considered irreplaceable today, for example CPAs and attorneys; I'm not even talking about drivers, street cleaners, etc.).
    Regarding the fear that the machines will be violent (in the classical sense of the word): my answer is that they will not be violent (even if it is possible to get on their nerves).
    The reason is simple: already today (in my opinion) we are at war - World War III. The point is that the weapons have changed. Today there is no longer a need for a mortar or a tank to conquer targets (and thus obtain capital); people give themselves up voluntarily.
    If in a classic war more kills meant victory, today it is exactly the opposite: everyone's desire is for people (who are the basis of the whole new economic model) to live as many years as possible - and that is exactly what is happening. In fact, it is likely that a toddler born just yesterday will retire at the age of 150 or so.
    The situation today, in which a few companies are worth more than a number of countries (Apple + Google + Facebook are worth more than 2 trillion dollars), is somewhat problematic... they have the rights and are debt-free (they don't pay pensions and they don't have to pave roads or maintain them; their one obligation is to make money).
    In any case, the problem with them is that they are outside the framework as we have come to know it over the years - or in plain language: it is impossible to impose effective regulation on them, and because of that the sum of their rights has grown even beyond what I just mentioned (in other words, they know how to pay less tax per dollar that reaches them).
    ====
    By and large, the artificial intelligence that will come to us soon is not bad news.
    This is a very advanced device - and that's fine.
    Those who progress less are the humans.
    The last paragraph does not tend to be kind to our species, because it is really about a minority of a minority: those who will have control over this device. And since this is evolution, we, the rest of the humans, simply have no say in the matter; from time to time we'll write an article of the above type, or a comment of one kind or another, just as I'm doing now.
