
When man creates God, it had better be friendly

In 1993, the mathematician and writer Vernor Vinge coined the concept of the Singularity: a machine that would surpass man in wisdom and ability. He even predicted that it would appear by 2030. The computer expert Eliezer Yudkowsky from Atlanta has already established an institute whose purpose is to write the programs that would ensure that this being will be friendly and moral

Yuval Dror, Haaretz, News and Walla!

He is convinced that in this way he will save humanity. But some researchers believe that we are still very far from being able to develop the ultimate machine

Illustration: Shira Mazor

The first film in "The Matrix" series led to the publication of dozens of articles dealing with the philosophical meanings hidden within it. One of the issues discussed at length was whether the "machines" could one day take over humans. Science fiction writers have been grappling with this question for decades. In 1993, the mathematician and science fiction writer Vernor Vinge raised the debate to a new level when he claimed that the entity that will surpass humans in wisdom and ability, which he called the "Singularity", is going to arrive in the near future: not before the year 2005 and not after the year 2030. In an exclusive interview with Haaretz, he says that even though more than a decade has passed since the prediction was made, he is not changing the forecast.

It would be possible to dismiss the idea as something that bothers only one type of person: someone who eagerly reads science fiction books and imagines for a moment that he is the hero of one of the plots. But in recent years, more and more supporters of the Singularity idea have emerged. One of them is Eliezer Yudkowsky, a 23-year-old American Jew from Atlanta who in 2000 founded the "Singularity Institute for Artificial Intelligence", whose role is to accelerate the arrival of the Singularity. His seemingly sophisticated arguments, however, do not impress artificial intelligence experts. According to them, we are so far from building a machine as smart as humans that there is no reason to discuss a world containing a machine smarter than humans.

The soft takeoff and the hard takeoff

Vinge has a place of honor in the world of philosophy of technology. In 1981, he published the book "True Names", which dealt with hackers operating in cybernetic space - three years before William Gibson coined the term "cyberspace". When he used the term "singularity", he knew it already carried meanings in other fields, such as in relation to black holes. In Vinge's view, humans tend to analyze the future on the premise that human intelligence will rule the earth. However, he asks, isn't it time to test the validity of that assumption?

Vinge sees improvements in hardware and software as a necessary condition for the emergence of the Singularity. According to him, the last decades have proven that progress in the field of hardware exceeds all imagination. The calculation capabilities of computers that were considered powerful 30 years ago are extremely low compared to what exists today. In an article he published in 1993, he wrote that he was aware of the controversy over whether scientists could build a machine as intelligent as humans. "If the answer is positive, then there is almost no doubt that it will be possible to build a machine that will be more intelligent than humans." Moore's Law (named after Gordon Moore of Intel), which states that the processing capacity of computer chips doubles every 18 months, supports his claim: a simple calculation shows that within a few years computers will reach processing capabilities similar to those of humans. From that moment, the transition to a being smarter than humans may happen within days, hours, maybe even minutes.
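The "simple calculation" Vinge alludes to can be sketched in a few lines of Python. The starting and target figures used here (10^9 operations per second today, 10^16 for a human brain) are illustrative assumptions, not numbers from the article:

```python
import math

# Sketch of the Moore's Law extrapolation described above: capacity
# doubles every 18 months, so count the doublings needed to reach a
# target and convert that into years. The specific ops/sec figures
# are assumptions for illustration only.

def years_until(target_ops, current_ops, doubling_months=18):
    """Years until capacity grows from current_ops to target_ops,
    doubling every doubling_months months."""
    doublings = math.log2(target_ops / current_ops)
    return doublings * doubling_months / 12

print(round(years_until(1e16, 1e9), 1))  # roughly 35 years under these assumptions
```

The point of the exercise is only that exponential doubling makes even a seven-orders-of-magnitude gap close within a few decades.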

Here, Vinge predicted, things would begin to move at breakneck speed. The entity's capacity for self-improvement, which will essentially "take command" of its own rate of development, will increase until, between one blink of an eye and the next, it doubles its "wisdom". At this point, Vinge claims, man will already be at the mercy of the being. The possible scenarios range from unprecedented progress in human abilities (we will upload our brains into computers) to a holocaust of the type described in "The Matrix", where the machines actually use humans as electric batteries. Vinge explains that the growing number of artificial intelligence researchers increases the chance of the Singularity's arrival. "If we combine Moore's Law with our abilities as humans, we understand that there is a possibility that the Singularity will reach the world in the near future," he explains.

According to him, there are two possibilities for the arrival of the Singularity, which he calls the soft takeoff and the hard takeoff. "In the soft takeoff", he says, "things will happen slowly, over decades and perhaps hundreds of years, with humans cooperating with the machines and slowly producing the Singularity. In this case humanity will be able to prepare itself. The hard takeoff is completely different: in it, the consciousness of the Singularity will be created in a short time, days and maybe even hours. In this case, it is more difficult to predict what will happen to us."

The Flaws of the Robot Laws

The mission of the institute founded by Yudkowsky (whose internet address is www.singinst.org) is to write the computer code that will eventually lead to the Singularity. He says he is aware of the great dangers that the era of the Singularity entails, and therefore his main goal is to build the entity so that it is "friendly" to humans. "We are focusing our efforts on building the Singularity because we believe this is the most critical point in human history," he says in an interview with Haaretz. "At this point humanity will either perish forever or survive, according to the degree of our wisdom, rationality and skill."

Yudkowsky explains that at this stage the institute is working on solving the technical problems related to artificial intelligence, so that the Singularity will not consist of a collection of code fragments that happen to work, but will rest on a "coherent conception and features that will allow it to feel and formulate goals, all in an ethical manner." According to him, "at this stage almost everyone who works in artificial intelligence deals with these problems, but also thinks they are so difficult that they cannot be solved. We hope they are wrong. At least we are trying to do something about it."

There is no reason to get too excited about the word "institute" or about Yudkowsky's tendency to speak in the first person plural. Right at the beginning of the interview, Yudkowsky clarifies that his institute employs one full-time researcher (himself) and a volunteer. "At this stage we are publishing theoretical position papers", he clarifies. He admits that the institute has not yet written a single line of code, but explains that a theory must be built before the software is written.

It is hard to ignore Yudkowsky's commitment to the cause. In an interview with "Wired" magazine two years ago, he said that he has been committed to the Singularity since he was 11 years old, when he read Vinge's book "True Names". "When I got to page 47, I realized what I was going to do for the rest of my life," he explained at the time. Yudkowsky has no formal education in the field of computers, but he claims that his informal education is much broader than most. The online encyclopedia "Wikipedia" defined him as "a genius in his own eyes." On an autobiographical page he uploaded to the institute's website two years ago, he wrote: "I think that the future of humanity may be decided thanks to my efforts. I think I can save the world not only because I make an effort, but because I am the only one who can make the effort." After this sentence was quoted in the press, the text was changed and it now contains an indictment of journalists who choose to defame him instead of engaging with his work.

On the face of it, the effort Yudkowsky invests in building a Singularity "friendly to humans" is strange, because theoretically, from the moment the entity becomes aware of itself, it will be able to change Yudkowsky's basic programming and turn into a murderous machine. Yudkowsky does not deny that this is a possible scenario, but maintains that it is possible to reduce the chances of its realization by programming the Singularity so that it bears a sympathetic character. "Take Gandhi for example. He was a man of incredible self-control. Why didn't he 'change his programming' and decide he wanted to rule the world instead of helping people? I am also an intelligent person and I still don't want to 'change my programming' and murder people. If it is a matter of programming, then I am well programmed." According to him, this is similar to raising a child. "If you raise a child from infancy, you can teach him, educate him, direct him so that he will love people. But if you get a grown boy who is egotistical, you will have a hard time making him altruistic. Sooner or later he will rebel and become an egoist again."

The robot series of the science fiction writer Isaac Asimov dealt extensively with the question of how to protect man from robots whose abilities exceed his own. In Asimov's books, the minds of the robots are governed by three fundamental laws with an internal hierarchy: a robot may not harm a human being or, through inaction, allow a human being to come to harm (the First Law); it must obey the orders of a human being (the Second Law); and it must protect its own existence (the Third Law). In later books, two robots formulate the Zeroth Law, which puts the good of humanity above the good of individuals.
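The hierarchy described above can be sketched as a priority-ordered rule check. This is purely illustrative (the laws are fiction, as the article goes on to note), and the paraphrased wording of each law is an assumption:

```python
# Asimov's laws as a priority-ordered list: lower index = higher priority.
# The Zeroth Law, added in the later books, outranks the original three.
LAWS = [
    ("zeroth", "do not harm humanity, or by inaction allow humanity to come to harm"),
    ("first",  "do not injure a human, or by inaction allow a human to come to harm"),
    ("second", "obey human orders, unless that conflicts with a higher law"),
    ("third",  "protect your own existence, unless that conflicts with a higher law"),
]

def first_violated(action_effects):
    """Return the highest-priority law an action violates, or None.
    action_effects maps law names to True if the action breaks that law."""
    for name, _text in LAWS:
        if action_effects.get(name):
            return name
    return None

# An order to harm a human: carrying it out breaks the First Law,
# which outranks the Second Law duty to obey.
print(first_violated({"first": True}))  # prints first
```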

Yudkowsky says that Asimov's robot laws are science fiction and cannot be applied in reality. Still, he notes, it may be that "our goal is to discover everything that is flawed in the laws of robots."

Vinge says he has heard of Yudkowsky's institute and is not opposed to its goals. "If there are indeed going to be superhuman beings in the world, it would be nice if they liked us", he says. "If the Singularity comes in the form of a soft takeoff, then this is an important goal. If it comes as a hard takeoff, then such planning is pointless."

When a computer pretends to be a woman

For Dr. Shaul Markovich, of the Faculty of Computer Science at the Technion, the discussion of the Singularity is quite strange. "We are so far from this vision. Our computers know how to do much less than people think." According to him, "In areas such as speech understanding, context analysis and image analysis, we have made very little progress in the last 50 years. A discussion about computers that are smarter than us is an idle discussion."

To illustrate the limitations of computers, it is customary to use the example of the Turing test (invented by the mathematician Alan Turing). Dr. Amit Pinchevski, an expert in communication and cultural studies at Ben-Gurion University, explains the idea. "Since it is difficult to test intelligence in its own terms, the Turing test was constructed as a way to determine whether a computer is as intelligent as humans. As part of the test, three people enter three different rooms: in one room a man, in the second a woman and in the third a researcher. The researcher neither sees nor hears the man and the woman in the other rooms. He gives them written questions and they answer in writing. His goal is to determine which of the two is the woman, with both the man and the woman trying to convince him that they are the woman. According to the test, if we replace the man with a computer and it pretends to be a woman so successfully that the researcher believes it, then the computer can be said to be intelligent."

According to Markovich, today's computers fail the Turing test embarrassingly. "Suppose you pose the following question: 'The boy threw the cup against the wall and broke it. How can it be fixed?' The computer will not be able to tell what broke - the cup or the wall. It may answer the researcher with the question 'What broke?' The real person, unlike the machine, will most likely immediately answer 'glue' - because he knows that what broke was the cup and not the wall. A person may not understand differential equations, but he knows the physics of the world. Since computers are not able to analyze natural language, they will not be able to answer convincingly."
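The imitation-game setup Pinchevski describes, combined with Markovich's broken-cup example, can be sketched as a toy program. The stand-in "players" and the naive judge below are assumptions for illustration only, not real AI:

```python
# Toy sketch of the imitation game described above. The interrogator
# exchanges written questions with two hidden players and must guess,
# from the transcript alone, which label hides which player.

def run_imitation_game(interrogator, player_a, player_b, questions):
    """Collect written answers from both hidden players, then let the
    interrogator guess a label ('A' or 'B') from the transcript."""
    transcript = {"A": [], "B": []}
    for q in questions:
        transcript["A"].append(player_a(q))
        transcript["B"].append(player_b(q))
    return interrogator(transcript)

# The machine answers evasively (it cannot resolve what "it" refers to),
# while the human answers concretely, as in Markovich's example.
machine = lambda q: "What broke?"
human = lambda q: "Glue the cup back together."
naive_judge = lambda t: "A" if "Glue" in t["A"][0] else "B"

question = "The boy threw the cup against the wall and broke it. How can it be fixed?"
print(run_imitation_game(naive_judge, human, machine, [question]))  # prints A
```

The judge here only checks for one concrete word; the point is merely the structure of the test, in which the interrogator sees written answers and nothing else.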

Markovich says that one must ask if it is possible to reduce the human brain to the level of physics and chemistry. "I and others believe that it is possible to point to a collection of biochemical processes that occur in the brain, which explain every emotion and thought. In our opinion, there is no such thing as a 'soul'. If this is so, then we have a chance to build a device that simulates a brain. The second question is whether the current model of a computer is sufficient for building a brain. The answer is, apparently, no. The brain works differently from a computer."

Pinchevsky, too, considers it an unreasonable pretension to think that the processes occurring in the brain can be encoded in a binary language. "Let us proceed from the assumption that an intelligent being is a being that thinks. How do we make a computer think? We write a collection of algorithms. But the moment you remove the human element from reason, you remove the uncertainty, the randomness. You are left with a collection of rules."

Extracting the brain from the body

Yudkowsky claims that the Singularity will succeed in changing people's consciousness by extracting their minds from their bodies. "Thus it will free us from pain and stress. It will be a sterile circle of pleasure without end." For Pinchevsky, this utopia has a second and extremely dangerous side. "Why don't people want to be judged by a machine? Because they want a person, whose reason goes beyond the set of legal precedents and dry laws; they want someone who can feel pain, who can feel empathy. From the moment you take the brain out of the body it resides in, you eliminate the machine's chance of actually feeling anything. From this point of view, such an intelligent being may be extremely violent."

Yudkowsky says that, contrary to popular belief, science understands the world of emotions better than the world of reason. "Scientists have made a big step forward precisely in understanding the biology and physiology of emotions. Even so, it is still not clear whether emotions can be programmed at all. This is exactly where things start to get complicated."

Markovich points to another problem. He claims that the construction of the Singularity rests on the assumption that humans are capable of creating a creature with consciousness. "We forget that the human brain developed through millions of years of evolution. The pretension that we can create something similar is strange to me."

Yudkowsky knows the criticism and is ready for it. "I think that whoever tries to create a machine that will be exactly as smart as humans is aiming at a very narrow target. And in general, those millions of years of evolution are much less impressive than they sound. If natural selection, which brought humans to where they are, were trying to create the Singularity, it would need thousands of generations of experiments. Humans are able to create that same Singularity much more quickly." Vinge agrees with him on this point. "I would be satisfied if we as humans managed to understand ourselves completely. Despite this, we produce human-like creatures on a regular basis - we give birth to children. There is no doubt that breakthroughs are still needed, but these are technological breakthroughs, not breakthroughs in the cognitive field."

Yudkowsky does not admit it, but in his own special way he is trying to create a type of God. He states that he is not religious, but it is important to him to emphasize that even a religious person should not have a problem with his approach. "If I were a religious person, I would believe that the Singularity can be reconciled with religion, since different truths can be reconciled with each other easily."

Prof. Yehuda Gelman, of the Department of Philosophy in the Faculty of Humanities at Ben-Gurion University, agrees. "Not every being that is wiser than us is necessarily God. Although there is a common belief that man is the crown of creation, it does not have to be that way." According to him, "in history there were people of faith who claimed that if God wanted us to fly, he would have created us with wings. Well, he did not create us with wings, but he created us with a mind, and with that mind we created the airplane. I see no theological principle requiring that man be supreme in the universe, only that there be a connection between humans and God, between the Jews and God. There is no reason why that relationship should not be preserved even in an era of the Singularity."

Although Vinge predicted the arrival of the Singularity, he tries not to over-predict the distant future. "No one knows what the future holds and there are always surprises," he explains. "But it seems to me that the right way to examine the future is to take into account that there will be 'unavoidable surprises'. That is why one should prepare for every possible scenario."

Even Yudkowsky, who is confident that the Singularity will come, finds it difficult to translate this into probability percentages. In an interview he gave to "Wired" he claimed that "there is a 5% chance that the singularity will appear soon and that humanity will be changed forever". In a later interview, given to the "San Francisco Gate" in early 2004, he was for some reason more reserved and insisted that "there is a 2% chance that the singularity will appear and change humanity." On that occasion, he suggested that it might save us from an ecological holocaust. "The end of the world is a very technical issue," Yudkowsky concluded at the time. "We are working so that we can save everyone, heal the planet and solve the world's problems."
