Autonomous military robots will have to obey a warrior code, or they may become the masters of humanity, warns a report by researchers from California Polytechnic State University
A warning against a robot takeover along the lines of Isaac Asimov's robot books, this time in reality. A report on the ethics of robots was recently published. In it, researchers Patrick Lin, George Bekey and Keith Abney of the Ethics and Emerging Sciences Group at California Polytechnic State University write that we must prepare for a situation in which robots dominate the battlefield.
The robots, including armed robots reminiscent of the Terminator, will serve as replacements for military personnel, and this raises moral questions. A semi-autonomous robotic cannon installed in South Africa misfired, killing nine "friendly" soldiers and wounding 14 others. The report asks: "On whom should we place the blame, and the punishment, for unauthorized harm caused by autonomous robots, whether by mistake or on purpose? On the designer? On the manufacturer of the robot? On the officer in charge of the unit to which it was assigned? On its operator and his supervisor, on the target in the field, on the president of the United States, or perhaps on the robot itself?"
According to Lin, there are driving forces behind this trend. "Congress mandated that by 2010 a third of deep-strike aircraft be unmanned, and that by 2015 a third of ground combat vehicles be unmanned. These deadlines create pressure to develop and field robotic systems, including autonomous vehicles; but perhaps the rush to market increases the risk of inappropriate design or programming?"
Imagine what the battlefield would look like with autonomous robots. Instead of our soldiers returning in flag-draped caskets to grieving families, autonomous robots, mobile machines that can make decisions such as firing at a target without human involvement, could replace soldiers in a wide variety of dangerous missions: navigating dark tunnels in search of terrorists, securing city streets with sniper rifles, patrolling the skies and waterways against attack, clearing roads and seas of improvised explosive devices, surveying damage from biochemical weapons, guarding borders and buildings, controlling potentially hostile crowds, and even replacing frontline reconnaissance troops.
"The robots will be smart enough to make decisions that only humans can currently make," the three write, and when the pace of conflict increases and will require much faster data processing and, accordingly, a faster response, the robots will have a significant advantage over the limited cognitive abilities of humans. Not only will the robots expand the battlefield to difficult and vast areas, but they will also be able to represent a significant force - a versatile player - when each robot will be able to do the work of many soldiers, while being immune from the need to sleep and protected from other problems such as exhaustion, low morale , perception and communication challenges in the fog of battle, and other problems that impair human performance."
"The robots will also not be affected by the adrenaline, emotions and stress that cause soldiers to act excessively or break the rules, not to mention war crimes. The robots will be able to report any unusual behavior to headquarters."
"However, technology is a double-edged sword that has both benefits and risks, and military robots are no exception. The concerns include: when the responsibility will fall in cases of causing damage unintentionally or in violation of the law, something that can extend from the manufacturer, through the commander in the field to the machine itself; The possibility of a serious malfunction that would cause the robot to go crazy, capture and hack into the robot's brain and make it act against us; Reducing the threshold of conflicts and wars, because fewer American soldiers will be harmed. The effect of such robots on unit cohesion - that is, if robots record and report every action of the soldier; Refusal of a legitimate order, and other possible damages." The three summarize the introduction to their article."
The report was funded by the Office of Naval Research of the US Navy.
Comments
Or we could just stop fighting O_O
We can learn from Attila's wars:
Attila stood at the head of an army of brave warriors who were very lightly equipped.
They had no armor and no helmets.
Facing him stood the Roman army, the best-armed force of its time, with soldiers trained to function like robots on the battlefield.
The war was evenly matched.
Both armies lost a huge number of fighters.
Attila decided to retreat back to where he came from.
There was no further battle, because Attila died before he could organize for another one.
But Attila drew at least one clear conclusion:
He claimed that his men would have prevailed over the Romans if they had had helmets. That is, if they had been equipped a little better.
Not just in many cases, but almost always 🙁
The most complex systems humans build are software, and only very simple software does exactly what it was designed to do.
In many cases software does things the programmer never intended.
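A minimal illustration of that point, a classic Python pitfall (a hypothetical example of mine, not one from the report): the programmer "told" the function to start from an empty list on every call, but that is not what the code actually says.

```python
def append_item(item, items=[]):
    # The default list is created once, at definition time,
    # and is then shared by every call that omits `items`.
    items.append(item)
    return items

print(append_item(1))  # [1] - as the programmer expected
print(append_item(2))  # [1, 2] - not the fresh [2] they intended
```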
If you've seen the movies, you know what such robots can do.
The movie "I, Robot" illustrates the problematic nature of Asimov's laws: even though the robots in it are programmed with those laws, something still goes wrong there. A must-see.
Itzik (other):
And what if someone builds hundreds of thousands of robots, puts an atomic bomb in each one, scatters them in city centers, and then tells them all to explode?
Oh, you don't really need robots for that, do you? Nuclear bombs with (delayed) remote activation are enough.
You see?
You can do a lot of stupid things, and you don't need to develop a tool as sophisticated as a robot to destroy the world.
The question, of course, is why anyone would do such a stupid thing, and I repeat: anyone who wants to do it can already do it today.
What you say about computers and fuzzy logic only shows that you do not know the subject, and I say this as someone who has worked in the field for many years and has also built learning systems.
Itzik (other),
The accident in South Africa did not happen because of the robot's "intelligence", for a very simple reason: that robot has no AI. It has no ability to learn and, more than that, no ability to decide when to shoot and when not to. It fires according to a fixed set of parameters given to it in its programming. The blame lies entirely with whoever set a set of parameters inappropriate for the conditions in which the robot was placed. It is also possible that the robot was not programmed with a parameter set at all, but simply wired to motion detectors: shoot on sight.
We have not yet reached the moral problem of whether to give robots intelligence that could eventually be turned against us. We are still at the stage of "is the robot we programmed in such-and-such a way suitable for the required task".
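To make that point concrete, here is a minimal Python sketch of fixed-parameter firing logic (the parameter names and values are hypothetical, not those of the actual South African system): there is no learning and no judgment anywhere, only hard-coded thresholds.

```python
def should_fire(bearing_deg, range_m, motion_detected,
                arc=(0.0, 90.0), max_range_m=3000.0):
    # Fire if and only if a moving target falls inside hard-coded limits.
    in_arc = arc[0] <= bearing_deg <= arc[1]
    return motion_detected and in_arc and range_m <= max_range_m

# Nothing here distinguishes friend from foe; if the parameters are wrong
# for the conditions, the gun fires at exactly the wrong things.
print(should_fire(bearing_deg=45.0, range_m=1200.0, motion_detected=True))
```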
The robot does what it is told, but do we understand what we told it?
"— A semi-autonomous robotic cannon installed in South Africa misfired, killing nine "friendly" soldiers and wounding 14 others."
Yes, we are apparently very far from a situation where intelligent robots kill us all the time.
The debate about whether the robot intended to shoot at these people because it developed sufficiently high intelligence and decided to join the evil side is not relevant at all. The fact is that it was placed there with the order to "shoot to kill", and that is what it did, and effectively.
And if there are enough such robots, and if they are programmed with the command "protect the robots at all possible costs", then the day is not far off when robots without any high intelligence will take over our world.
And... yes, regardless of how the computing power of today compares with 1985: today we produce algorithms smart enough that their results are not known until their "training" is over. This is no longer a classic rule-based algorithm; it is a learning algorithm with fuzzy logic. Go argue with that... while it holds a submachine gun and is unstoppable.
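A minimal Python sketch of what that means (toy data, hypothetical feature names of mine): the decision rule of a learned model is not written anywhere in the source code; it only emerges once training is finished.

```python
from sklearn.tree import DecisionTreeClassifier

# Toy training set: [speed, heat_signature] -> 0 = hold fire, 1 = fire.
X = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
y = [0, 0, 1, 1]

model = DecisionTreeClassifier().fit(X, y)

# Only after training do we find out what the model does with a new input;
# the programmer never wrote this rule explicitly.
print(model.predict([[0.5, 0.95]]))
```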
I think the Terminator movie series gives examples of both sides...
Please correct the spelling mistakes; they are really not up to the level of the article
Thanks
What a beauty:
Even in asking my permission to do something and then doing it without my permission, you were wrong.
And yet, Michael, allow me one more time: you are wrong!
Regards.
What a beauty:
You have already corrected this mistake before.
Michael, all I have to say to you is - you are simply wrong.
Good Day!
What a beauty:
No need for too many words.
What you didn't understand is what I wrote earlier.
The computers we build will only be able to do what we want them to do.
There is no reason for us to give them a will and consciousness of their own (and these qualities are clearly distinct from the abilities we will give them to solve our problems effectively).
Apart from that, what you wrote about the state of computer science and brain research in 1985 is not true, but of course I did not refer to that in my previous response.
What I did say in my previous response is that despite the optimism of some people, I think we are still quite far from the ability to create truly intelligent computers, and also from the ability to create computers with their own consciousness and will.
Well, Michael, haven't you recovered yet?
I await your reply.
Best regards!
Good morning Michael!!
I would appreciate it if you could explain to me what I didn't understand.
Best regards,
What a beauty !!!!!
What a beauty:
You didn't understand me, but you managed to tire me out, so good night.
Correction - "hand over the glass of water"
Gillans, take a simple example: one robot is given the command to put a poison capsule into a glass of water, and then another robot, which does not know what the first robot did, is given the command to hand over that glass of water to its master, which of course will cause his death.
It is a rather silly example, but it shows that Asimov's laws can be bent quite easily.
Also read my previous message (above the one you wrote)
Best regards,
What a beauty !!!!!
There is no chance. To remind you, Asimov's laws of robotics forbid a robot to harm a person in any way, so the laws are a non-starter as far as military robots are concerned.
Michael, there are things that can easily be predicted from trends; for example, it is very easy to watch a ball rolling on a surface and predict where it will be in so many seconds or minutes.
With all due respect, in 1985 computer technology and brain research were fields that were just beginning to develop, and at that time it was very difficult to predict how and where things would go. The situation today is completely different, and you can already see very clear trends in where things are developing and at what rates. Again, as I suggested to Or (or are you actually him under a different name?), I strongly suggest that you get and read the book "The Singularity is Near" by Ray Kurzweil as soon as possible, and then continue the discussion with the new insights you will have on the subject, because the arguments he makes in the book are very convincing, and I am usually a very skeptical person.
Asimov's laws may be good for science fiction books, but it is truly absurd to think we could actually impose them on robots with a higher level of intelligence than ours. It is like a person imprisoned in a cell with a group of monkeys guarding him: sooner or later he will manage to get the key to the door and go free. To think that a computer or robot whose level of intelligence is billions of times higher than ours would continue to be subordinate to us and serve us is simply illogical and impossible in reality.
"Even if we know how to create computers with consciousness - it does not mean that we will necessarily create such computers"
Such computers will give a huge advantage to the country that will hold them and use them, for example intelligent computers that will control missiles, fighter planes, tanks and other weapons and control systems will give that country a military crushing power that no enemy country will be able to face, such computers will make traveling or flying in an airplane much more For sure, they will help develop new medicines and new materials, when such computers are available no sane country will voluntarily decide not to use their ability, because such a country will very quickly be left behind.
Best regards!
What a beauty !!!!!
What a beauty:
As they say, it is very difficult to predict, especially about the future.
I remember that in 1985 I made a bet with Yoni Nokad (I think he is today the CEO of the Internet Association) on the question of whether by the year 2000 we would have robots capable of replacing us in the kitchen.
At the time I was head of the software branch in one of the IDF's computer units, and he held a parallel position in another body.
I claimed there was no chance; he said: sure there will be.
Now it's 2009, and it turns out I was right.
In my opinion, we still have a very long way to go before we understand how the brain works and what consciousness, intelligence and will are.
I am not claiming that such an understanding cannot be reached, but in my opinion there are still challenges ahead of us that are not only challenges of quantity but challenges of quality: ones that knowledge alone is not enough to overcome, and for which some brilliant ideas will also be needed.
That is why I think we are not that close to conscious, motivated computers, and that we would not know how to create them even if we wanted to.
And here comes the next consideration, which was actually already raised by Or.
What our computers will do is what we allow them to do.
Even if we know how to create computers with consciousness, it does not mean that we will necessarily create such computers.
I also assume that if we do create something close to that, we will indeed take care, as suggested in the article and as Asimov also suggested, to instill in them, in a way that cannot be erased, laws that will force them to remain our servants forever.
Correction - "This book answers all your difficulties one by one..."
Or, brain research is progressing by leaps and bounds, and every year the amount of knowledge we have about the brain and how it works doubles. When, in a few years, we reach a complete understanding of how the brain works (and there is no doubt that this moment will come), there will be no reason why we could not copy the structure of the neurons in our brain into a computerized electronic system that does exactly what our brain can do, including emotions and self-awareness. What's more, we could build such a brain far more efficiently than the human brain, which developed blindly through natural evolution, because once you understand how a device works you can improve and optimize its operation.
See, as just one example, the Blue Brain project running on IBM supercomputers: an amazing project that is trying to reproduce the human brain in a computerized system. Although the power of today's supercomputers only allows simulating the brains of small mammals such as rats and mice (and computing power doubles from year to year...), the simulations in the project already show results remarkably similar to those obtained under similar conditions in a biological brain, which shows that the project is clearly moving in the right direction, and this is only the beginning.
Read the book "The Singularity is Near" by Ray Kurzweil -
http://www.tapuz.co.il/blog/ViewEntry.asp?EntryId=1065939
This book answers, one by one, all your difficulties, which stem mainly from a lack of understanding of this amazing subject. Go and learn.
Best regards,
What a beauty !!!!!
What a beauty,
A computer or robot only knows how to do what it was programmed to do. One could say that even at this very moment computers are smarter than us: they can calculate far more than you and me, they have graphics capabilities for creating 3D images far beyond those of any human, and they can beat us at chess. Even the computer you are using right now has higher "mental" abilities than any person.
Still, you control it and it does not control you. The reason is that no one has ever engineered emotions into a computer, and it has no drive for independence or survival. These are characteristics of living beings that developed through evolution, an evolution made possible by the ability of the earliest living creatures to copy themselves with errors, that is, by heredity. Today no computer is subject to natural selection, and even if it were, I doubt it would survive on its own. Computers will not develop an urge to take control unless it is deliberately instilled in them.
Which brings back to the discussion the issue of terrorism, which came up earlier...
To Someone: every picture has its own size, and it depends on the source. The picture in this article is exactly the same as the original picture.
To someone,
You are wrong, the picture is slightly bigger 🙂 I checked.
In any case, the current picture is only an illustration and would not add information to what is written in the article, so the loss is not that great, if that comforts you 🙂
Oops, it really is a bit bigger, but you can hardly notice the difference...
Avi, it's funny that in the latest articles here you can click on the pictures, but they then open in a new window at exactly the same size... So what did that accomplish? Shouldn't clicking on an image open it at a larger size?
Eli, you're just naive. These predictions are very far from science fiction, and anyone who works in these fields will tell you so. The moment the intelligence of these computers surpasses ours, and given the rate of growth in computing power together with the rapid progress toward a full understanding of the human brain, that is expected to happen within the next 20-25 years, things will start to snowball. So get ready to eat your hat, because nothing can stop it. This technological evolution is a direct continuation of biological evolution; someday these robots will take control, so you should start treating them with respect now, so that they will protect you.
Or, as the Daleks say in their screeching voice in Doctor Who:
Exterminate!!! Exterminate!!!
What a beauty!!!!!:
You've seen too much science fiction if you think robots are going to beat us; it's never going to happen.
And wow, it really is amazing to see how, with every passing day, Ray Kurzweil's predictions come true one by one before our eyes -
http://www.tapuz.co.il/blog/ViewEntry.asp?EntryId=1065939
There is no doubt that tanks, planes and other instruments of destruction controlled by intelligent computerized minds will constitute a decisive crushing force in wartime for whoever holds them. As for the laws people want to implement in these robots, in my opinion that is complete nonsense and has no chance. At some point their level of intelligence will be millions and billions of times higher than ours, we will lose control over them, and they will not be willing to act under our authority, in exactly the same way that we humans would not submit to monkeys and would not be willing to go and fight their wars for them, even if we knew that they were the ones who created us.
In short, beautiful, what an amazing future! (At some point, a future without humans, unfortunately...)
Nice, now every Chinese child will have an alternative to breaking into the Pentagon, which has long since ceased to be a challenge.
Of everything presented there, only the robot for detecting and decontaminating biochemical agents sounds safe.
Everything else is a basis for indiscriminate killing.
Why not control the robots MMORPG-style, instead of letting them decide on their own?
Shmulik:
It reminds me that Jacob Freund once wrote about someone who (in Hebrew missing-vowel spelling) "compensated an Italian because of her Iraqi language".
Regarding the war on terror - let's not celebrate prematurely.
Let's remember that we are talking about futuristic technology here.
An autonomous mechanized fighter is still far away, many years of development beyond our current capability.
The war on terror also poses another challenge, exactly the same one it poses to human fighters: the great difficulty of distinguishing between friend and foe.
Do you really think that at some point in the future we will see a robot police officer at the entrance to the mall opening fire on anyone suspected of being a terrorist?
Besides, human terrorists will be, as far as the terrorist organizations are concerned, better than robots. Can you compare the difficulty a robot would face in infiltrating Israeli territory without drawing attention with the unbearable ease with which a human being can infiltrate? After all, mechanized systems will detect a hostile robot far more easily than they will detect a hostile person.
Therefore, the terrorist organizations will not equip themselves with such technology - not because of its cost but because of its inefficiency.
And it seems to me that this is the only solution to the war on terror, against enemies who cannot possess technology at this level.
Interesting article. Removing the soldiers from the battlefield means removing the emotion from the killing (out of sight, out of mind). As things stand, only rich countries will use this technology while poor countries will use human soldiers. Far-reaching changes are possible in both the mentality of fighting and the morality of fighting.
Fountain:
Read the article again.
It talks throughout about the need for these laws (with some modification, of course: one that only recognizes your side as "humans").
Haim:
Since these are not exactly Asimov's laws of robotics - they won't prevent wars.
If Asimov's laws of robotics are implemented in the robots, and if robots go to war instead of us, then there will be no more wars. Genius.
Apart from the title, there is no mention in the article of Asimov's Laws of Robotics; I think that is because they are simply not relevant to the discussion.
The First Law, "A robot may not injure a human being or, through inaction, allow a human being to come to harm," does not work out so well for a robot built for war operations...
A stupid invention. If robots fight instead of humans, it will no longer be fun to fight.