
Can robots be conscious?

This is really a three-part question: What is consciousness? Can a machine be made conscious? And if one could be, how would we ever know for sure?

Unlike other scientific subjects, consciousness, the subjective awareness of the world around us, is truly in the eye of the beholder. I know that I am conscious, but how do I know that you are? Could it be that my colleagues, my friends, my editors, my wife, my children and all the people I pass on the streets of New York are actually mindless robots, merely pretending to be conscious humans? It is this irreducibly first-person character of consciousness that makes the question so contentious.

The argument from analogy makes sense: I am conscious, and you are a person much like me, so you are probably conscious too. I conclude that I am most likely not the lone sentient being in a world of biological marionettes. But when the question is extended to other creatures, the uncertainty grows. Is a dog conscious? A turtle? A fly? An elm tree? A stone?

"We don't have a consciousness meter," said Dr. David J. Chalmers, a professor of philosophy and director of the Center for Consciousness Studies at the University of Arizona. "All we can directly observe is behavior."

Without even a basic idea of what consciousness is, the prospect of building it into a machine, or understanding how a machine might develop consciousness, seems almost hopeless.

The field of artificial intelligence began with dreams of creating a machine that could think, and perhaps even be conscious, but so far its achievements look modest. No one has yet written a computer program that can pass the Turing test.

In 1950, Alan Turing, a pioneer of computer science, imagined that a computer could be considered intelligent when its responses were indistinguishable from those of a human. Since then the field has shifted its focus from simulating human behavior to solving practical problems, such as complex design and planning tasks.

But with the steady growth of available computing power, many believe that the original goals of artificial intelligence could be achieved within a few decades.

Some, like Dr. Hans Moravec, a professor of robotics at Carnegie Mellon University in Pittsburgh, believe that a human being is nothing more than a sophisticated machine, and that advancing technology will make it possible to build a machine with the same features and capabilities. In their view, there is nothing special about the brain or about biological flesh.

"I am sure that we can build a robot whose behavior is as rich as human behavior," he said. "You could quiz such a machine to your heart's content about its inner mental life, and it would answer you like any other person."

For Dr. Moravec, if a machine behaves as though it is conscious, then it is. Questions beyond that are pointless.

Dr. Chalmers regards consciousness as a property that cannot be defined, and says it may be futile to try to pin it down. "We have to admit that there is something here that cannot be reduced," he said. Characteristics of consciousness, he added, may "reach all the way down" to the smallest and most primitive organisms, even bacteria.

Dr. Chalmers also sees nothing fundamentally different between a creature of flesh and blood and one made of metal, plastic and electronic circuits. "I'm quite open to the idea that machines might eventually become conscious," he said, adding that "it would be just as strange."

If a person could carry on conversations with a robot about everything from Kant to baseball, "we would be as certain as we can ever be that the machine is as conscious as other people," Dr. Chalmers said.

"Of course, the theoretical question still remains," he said.

Others, on the other hand, say that machines, no matter how complex, will never match humans. The arguments can become arcane. In his book "Shadows of the Mind," Dr. Roger Penrose, a mathematician at Oxford University in England, invokes Gödel's incompleteness theorem. He uses the theorem, which states that any consistent formal system of axioms contains true statements that cannot be proven within it, to argue that any machine that relies on computation, and therefore any robot, will always fall short of the abilities of human mathematicians.

Instead, he argues, consciousness arises from quantum-mechanical effects in tiny structures of the brain, effects that exceed the capabilities of any computer.
