Interview: Artificial intelligence, autonomous cars and understanding the "music" of speech

The president of the Israel Academy of Sciences and Humanities, Prof. David Harel, talks to the Hayadan science website about his research interests at the Weizmann Institute, among them the modeling of biological systems and computer science

Prof. David Harel, President of the Israel Academy of Sciences and Humanities. PR photo

Professor David Harel, currently the president of the Israel Academy of Sciences and Humanities, received his doctorate from the Massachusetts Institute of Technology (MIT) in 1978, in a record time of one year and nine months. He has been a faculty member at the Weizmann Institute of Science since 1980, where he served as head of the Department of Computer Science and Applied Mathematics (1989-1995) and as Dean of the Faculty of Mathematics and Computer Science (1998-2004). He spent periods at IBM's research division in New York, held sabbaticals at Carnegie Mellon, Cornell and Edinburgh universities, and co-founded the company I-Logix, which was later absorbed into IBM. Harel has won many awards, including the Israel Prize in 2004 and the EMET Prize in 2010. In addition to the Israeli academy, he is a member of several academies around the world, including the US National Academy of Sciences, the US National Academy of Engineering, the Chinese Academy of Sciences and the British Royal Society.

Prof. Harel's main areas of specialization were once in theoretical computer science (logic, computability, automata and database theory); in recent decades he has focused mainly on software and systems engineering, on the modeling and reproduction of smell, and on the modeling and analysis of biological systems. Prof. Harel is the inventor of the Statecharts visual language (state diagrams) and co-inventor of live sequence charts (LSCs), as well as of the tools Statemate, Rhapsody, Play-Engine and PlayGo.

In an interview with the Hayadan site, he explains: "The modeling of biological systems that we carry out uses methods and tools taken from systems engineering and software engineering. In the last two years I have also been researching the 'music' of speech, or intonation (prosody), in collaboration with the linguist Dr. Teresa Biron. Today it is possible to record conversations between people, and there are programs that can produce an exact transcript of what was said, assuming the conversation is clear enough. These programs also know how to distinguish between my voice and yours. If you give the transcript to another person, he or she can get a pretty good sense of what was said. However, many things that pass between us in conversation go beyond the words themselves and are conveyed by the music of the speech. It is enough for me to stress one part of a sentence rather than another, and the meaning already changes. I can say to you, for example, 'Who is knocking at the door?' in a normal tone, in a tone of wonder, or in fear and panic, and you will immediately be able to distinguish between the three. But computing what we humans do so easily is very difficult.

Of course, it is possible to record all the details of the sound wave, but to extract from the recording the fact, which is clear to any human listener, that a sentence was uttered with skepticism, surprise, hesitation or joy - no one yet knows how to do that. The task is important, because as the number of tasks involving communication with computerized systems grows, this ability will become essential. When you call the HMO and a machine answers, and you say that your test results haven't arrived, that machine will have to distinguish between different tones of voice - for example, whether you merely mean to report that the results haven't arrived or whether you are really angry - in order to respond to you reasonably."
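As a rough illustration of what such a system would have to work with, here is a minimal sketch (assuming the librosa audio library; the file name is hypothetical) that extracts the pitch contour of a recording, the raw "melody" of the speech, which is only the very first step before any interpretation of surprise, fear or skepticism:

```python
# A minimal sketch of extracting one prosodic feature (the pitch contour)
# from a recording, using the librosa audio library. The file name is
# hypothetical; turning such a contour into "surprise" or "anger" is the
# hard, unsolved part Prof. Harel describes.
import librosa
import numpy as np

# Load the recording (mono, at its native sampling rate).
y, sr = librosa.load("who_is_knocking.wav", sr=None)

# Estimate the fundamental frequency (f0) over time with the pYIN algorithm.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=65.0, fmax=400.0, sr=sr  # rough range of a speaking voice
)

# Summarize the "melody": a rising pitch at the end often marks a question
# or surprise, but mapping such cues to the speaker's intent is exactly
# what no system yet does reliably.
voiced = f0[~np.isnan(f0)]
print(f"mean pitch: {voiced.mean():.1f} Hz, "
      f"range: {voiced.min():.1f}-{voiced.max():.1f} Hz")
```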

Speech recognition - the next step. Illustration: depositphotos.com

General artificial intelligence

From this task we move on to what is now called general artificial intelligence. Prof. Harel refers to an article he wrote together with Assaf Marron of his research group at the Weizmann Institute and Prof. Joseph Sifakis of Université Grenoble Alpes in France, titled "Autonomics: In Search of a Foundation for Next-Generation Autonomous Systems".

In the introduction to the article, the researchers write: "Autonomous systems are replacing humans in a wide variety of tasks, and in the future these systems will be central and decisive for human society. They will come in many forms, such as various types of vehicles, medical equipment, industrial robots, robots used in agriculture, the management of transportation systems, and many more.

 "Many organizations are already busy trying to develop the next generation of autonomous systems, so that they will be reliable and financially viable. But the enormous complexity of such systems and their critical importance create new fundamental challenges. There is a crucial need to define a fundamental scientific and engineering infrastructure that will form the basis for the development of these systems. We call such an infrastructure "Autonomics", and believe that its availability can dramatically accelerate the implementation And the acceptance by the public of quality autonomous systems, for the benefit of human society."

Black box

"In recent years there has been a dramatic improvement in the methods of machine learning, which are at the basis of the prevailing approaches of artificial intelligence. These methods use what are called deep neural networks, which consist of layers of layers of 'parts' that simulate types of neurons in the software. The inputs are received on one side of the network and the outputs (i.e. , the answers) come out on the other side. It is difficult to explain things on one foot, but it is important to note that the problem with this process is that there is not enough transparency Regarding what happens inside the network during the calculation. In contrast to a normal computer program, where you can see the lines of code for execution and follow what it is doing, here it is a system that learns through training, and the network itself changes while learning. For example, if ' the network to distinguish between a dog and a cat by giving it a large number of pictures and showing it who is who, we still cannot easily prove that in the future it will know how to make the distinction well, it will also be difficult for us to explain To ourselves why it decided that this is a cat and that is a dog. For the same reasons, it is difficult to go inside the network and change it so that you can, for example, distinguish between a dog and a snake. This is no longer something that can be done through logical changes in the code. You have to teach the network from the beginning."

 "Of course, it is not just about non-critical and lighter tasks, such as distinguishing between a cat and a dog, but about weapons systems, airplanes, autonomous cars and robot systems that help in hospitals and perform actions that can be critical.

"My research is related to these problems. How can you verify the software, make sure that the system really does what you expect it to do, and this from the aspect of a deep understanding of the system's operations while running, and not just relying on the final results."

Do you mean reconstructing the actions of the system, i.e. how it arrived at the result?

General artificial intelligence. Illustration: depositphotos.com

Professor Harel "Of course, it is possible to perform simulations (simulations) of runs in different situations, and thus 'reproduce' what the system would do in a real run in the field. But I did not mean only that, but the possibility of proving mathematically, formally, that the system will indeed always do what you want it to do And don't do what you shouldn't do. In principle, it is possible to reach such proofs for software or models that are written in a classical way, with the help of sophisticated and complex verification methods. But in the software produced by machine learning, there is currently no way to mathematically prove that the system will correctly identify 99% of the images of cats that are shown to it. It is possible to run again and again and extract statistics from these runs, but there is still no "closed" mathematical proof that this will always be the case Much worse in the really critical applications of machine learning. I'm not sure any of us would be willing to have a pacemaker implanted using the system If you study without being able to prove that you are making the right decisions, then there is a problem."

Prof. Harel emphasizes that "the problem is not in the artificial intelligence itself, but in the methods of programming critical systems that rely heavily on artificial intelligence, in such a way that the 'programming' itself is done by the learning machine."

Does the use of artificial intelligence require considerations that go beyond the classical sciences?

 Professor Harel: "Yes. First, as soon as you attach the title 'artificial intelligence' to these systems, which use learning techniques, there will be many who will think that these are systems that understand everything like humans, for example the considerations of drivers and pedestrians on the road.

"Secondly, matters from the fields of the social sciences and the humanities, such as psychology, social psychology, ethics, jurisprudence, etc., are definitely included here. The typical example of this issue is what is known as the 'car dilemma': "You are driving, and suddenly in a split second you have to decide if you are speeding A young family that suddenly crosses the road, and which you did not see ahead of time, or you turn right (because there is a wall on the left) and runs over two elderly people standing on the sidewalk. In court you can explain the decision you made. You could say that maybe you saw the family for a fraction of a second and instinctively turned to the right, or maybe the other way around, that you saw the elderly couple who were unable to escape easily, and continued straight so as not to hurt them. In any case, you won't be in prison for life because you chose one way or the other. On the other hand, if an autonomous car has to make such a decision, they will immediately shut down all the cars for a year, start a long and tedious investigation, test the software in depth, and try to find out why the programmer did this or that, and why he didn't think about such dilemmas in advance."

So ethical considerations also come into play here, right?

The trolley problem, BBC

"Yes. We cannot easily apply the morals, ethics and life experience of the human driver to the future autonomous car, in such a way that the responsibility of the vehicle manufacturer will also be explained in terms of artificial intelligence. It is not enough that the car knows how to recognize traffic lights and stop before crossing a crosswalk. For it to be called a system To be intelligent she has to acquire something approaching general human intelligence and this is an extremely difficult task.

"My car has a system that knows how to keep a distance from the vehicle in front of me and stick to the lane of travel, and I have been told many times, 'What a beautiful car! Autouto, it will be autonomous.' Regarding the relationship between it and relevant human beings, pedestrians and other drivers, is extremely complex and complicated.For example, let's say I slow down before crossing And there are two people standing next to them talking, and one of them tilts her body slightly to the side. As a human driver, I can usually tell intuitively if the tilt is done to take something out of the pocket and show the other person that she is about to cross the road. We have no idea how to give an autonomous system a similar intuition It's a task whose scientific and technological complexity is difficult to describe." Prof. Harel concludes.

The trolley problem - Wikipedia

