
The robot that (didn't) think it could: on autonomous robots and unexpected successes

I try to devote this blog to weighty topics: cures for cancer, artificial intelligence taking over the world, the future of work, and so on. But every now and then a story comes along that is so special I can't help but stop everything and write about it. Such is the story of the robot who (didn't) think he could.

An unmanned aerial vehicle of the Shadow type. Image source: US Department of Defense.

Our story begins at the end of January 2017, when an unmanned aerial vehicle (UAV) was launched from Arizona in the United States on a simple training mission. The robot's name? Shadow RQ-7Bv2. We'll call him Tsili for short. Robots of the Shadow type are mostly used to monitor military targets, and are not supposed to stray more than 120 kilometers from the station that controls them. But Tsili had other plans: immediately after launch, the connection between him and the control station was lost.

Simpler aircraft would have crashed within minutes of losing communication, but Tsili was special. Shadow UAVs enjoy a high level of autonomy; simply put, they are able to stay in the air and continue carrying out their mission even if they lose contact with the operator. For this reason I allow myself to call them autonomous robots. But Tsili did not know what his mission was. And as the humans left on the ground under Tsili's shadow suddenly realized, no one actually knew what his mission was.

Unmanned aerial vehicles are programmed to behave in different ways when they lose communication with the ground. Some hover in the air above a certain point until the connection is restored. Others descend and try to land, or try to return to the point from which they were originally launched. These emergency systems are activated as soon as the UAV realizes that it is now alone in the sky. But in Tsili's case, those systems don't seem to have worked as planned. One possibility is that Tsili accidentally remembered the coordinates of his previous home at a military base in Washington state, and tried valiantly to return there. Or maybe the emergency systems were not activated at all, and Tsili simply continued sailing through the sky, on his journey into the unknown.
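To make this concrete, here is a minimal sketch in Python of the kind of lost-link logic described above. All the names and methods (uav.loiter, uav.navigate_to and so on) are invented for illustration; the Shadow's actual flight software is, of course, not public.

```python
from enum import Enum, auto

class LostLinkBehavior(Enum):
    HOVER = auto()        # loiter over the current point until the link returns
    LAND = auto()         # descend and attempt an autonomous landing
    RETURN_HOME = auto()  # fly back to the launch coordinates

def on_link_lost(uav, behavior: LostLinkBehavior):
    """Emergency routine triggered when the control link times out.

    Note the hidden assumption: uav.home must hold the coordinates of
    the *current* launch site. If it still points at a previous base
    (as may have happened with Tsili), RETURN_HOME sends the aircraft
    off on a very long journey.
    """
    if behavior is LostLinkBehavior.HOVER:
        uav.loiter(uav.current_position())
    elif behavior is LostLinkBehavior.LAND:
        uav.descend_and_land()
    elif behavior is LostLinkBehavior.RETURN_HOME:
        uav.navigate_to(uav.home)  # garbage in, garbage out
```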

Either way, Tsili left the frustrated soldiers and engineers behind, and began to fly north. North, north, north, ignoring every attempt to reconnect with him. He rode the strong winds higher into the air, and used them to sail over forests and Indian reservations. All the while, the authorities tracked him using radar systems, but after five hours Tsili arrived at the Rocky Mountains. He wasn't supposed to be able to cross them, and when the military lost his radar signature at that point, they assumed he had crashed.

The Rocky Mountains. Tsili crossed them too. Source: Rick Kimpel.

He didn't crash. He just rose even higher, to an altitude of 4,000 meters, and sailed above and beyond the Rocky Mountains, to heights he was not designed for and distances he was not designed to cover. He continued north, tireless, on a journey of a thousand kilometers, until he crashed near Denver. We still don't know what caused the crash, but it is hard to believe Tsili could have continued much longer, since by then his fuel must have run out, or been close to running out.

And this is the story of Tsili, the robot who didn't think he could - because he has no ability to think at all - but who nonetheless covered distances he wasn't supposed to travel, over a period of time he wasn't supposed to survive in the sky, and contrary to the instructions he was supposed to receive.

The day of the autonomous robots

Tsili is just one of a new generation of robots - robots that enjoy limited autonomy, and are able to carry out a task defined for them with minimal involvement from a human operator. The Tsili affair actually reflects a bug in the robot's operating system. We sometimes discover bugs in the software we use: the browser may suddenly freeze or start deleting letters and words by itself, or the computer's operating system may crash and stop working. These are annoying bugs, but we expect them and understand that they are almost inevitable in such complex computer systems.

Well, Tsili had a bug too. The only difference is that Tsili is a military drone worth a million and a half dollars, and the bug made him fly across three states and the Rocky Mountains. We can rightfully say that we were lucky that Tsili is used for monitoring purposes only, and is therefore not armed. But his cousin the Predator is also used for monitoring and espionage purposes, and is armed - just to be safe - with two Hellfire anti-tank missiles and six Griffin air-to-ground missiles.

Tsili's less likable cousin, the Predator. Source: US Air Force photo / Lt Col Leslie Pratt.

I suspect we would have been far less amused by the whole affair if one of the armed Predators had taken Tsili's place and embarked on a similar journey across America, with no one knowing where it was headed or what exactly it was capable of.

The desire behind the robot

The emotional writing in the first part of this post must have made expert roboticists chuckle, and for good reason. Robots have no desires of their own. They have no thoughts or self-awareness. But today's sophisticated robots are beginning to acquire what can rightly be called "impulses". Programmers implant in autonomous robots the impulses to react to different situations in ways that were defined for them in advance. In this respect, autonomous robots resemble insects, which can respond to the same situation over and over again in exactly the same way - even when that way has proven disastrous for them. And yet, they have the drive to act that way.

Insects are quite sophisticated creatures, but they too reveal 'bugs' when they encounter unexpected situations. This is why mosquitoes keep getting electrocuted as they fly into the hypnotic light of the electric trap, and flies keep entering a simple mechanical fly trap even though they can see all their friends trapped inside. The simple impulses programmed into their nervous systems cannot cope with their complex modern environment.
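As a toy illustration (entirely hypothetical, not drawn from any real robot or insect), such hard-wired 'impulses' can be as simple as a fixed table that maps a sensed situation to a predetermined response:

```python
# A toy reactive controller: each sensed situation triggers a fixed,
# pre-programmed response, with no learning and no model of the world.
IMPULSES = {
    "obstacle_ahead": "turn_left",
    "battery_low": "return_to_base",
    "bright_light": "fly_toward_light",  # the mosquito's fatal impulse
}

def react(situation: str) -> str:
    # An unexpected situation falls through to a default that may be
    # entirely wrong for the circumstances - exactly the failure mode
    # the insects demonstrate.
    return IMPULSES.get(situation, "continue_current_behavior")

print(react("bright_light"))  # -> fly_toward_light, every single time
```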

And if insects can experience bugs when they find themselves in an unexpected environment, all the more so autonomous robots. Tsili's story illustrates what happens when a robot obeys the wrong impulses, or when the systems meant to 'push' it back home fail. Such bugs are inevitable in any complex system, but their consequences can be disastrous when it comes to autonomous robots - and especially the armed robots found on the battlefield.

If it scares you too...

If you too are frightened by a scenario in which unexpected bugs are discovered in autonomous robots, you may want to sign the open letter that the Future of Life Institute released a year and a half ago against the use of autonomous weapons on the battlefield. You will not be alone in signing it: more than a thousand experts and researchers in the field of artificial intelligence have already signed the letter calling for a ban on the use of autonomous weapons.

Will the letter succeed in dissuading governments from giving autonomous capabilities to robots on the battlefield? I doubt it. If we have not been able to stop nuclear armament, it is hard to believe we can stop robotic armament. But at least when Tsili or Freddy the Predator gets lost next time, you can cluck your tongues in disappointment and point out that you knew it would happen.

In short, you can be futurists.

17 Comments

  1. Mistakes are important for operant conditioning, provided they result in punishment (a feeling or emotion of pain). Software predefined for specific tasks will not make mistakes. Artificial intelligence does make mistakes, although it still cannot learn independently - only from the feedback it receives from humans after the mistake.

  2. Mike
    Not only do we not come close to the brain of a chimpanzee, we do not come close to the capabilities of a fly's brain.

    There is something very important about simulating a brain that people like to ignore. A machine does not make mistakes, but biological intelligence does. It must make mistakes!

  3. In some capacities, neural networks are not inferior, or almost not inferior, to the original. But taken as a whole, artificial intelligence still does not come close to the brain of a chimpanzee.

  4. "Computers today are as powerful as a chimpanzee's mind?"...
    …hmmm… is a truck as strong as iron?! how do you come up with those questions?

  5. The mental flexibility of a chimpanzee is similar to that of a human. Not in terms of quality but in terms of principle.

  6. Mike
    There are countless other differences between man and machine.
    Do you think chimpanzees have "mental flexibility"? In general - can you tell me what distinguishes a human thought from a chimpanzee's thought?

  7. It's nice that you agree with me :) that mental flexibility is what differentiates man from machine. That flexibility comes from the ability to liken one thing to another.
    Both humans and machines know how to receive input, decide whether to handle it, and how to handle it.
    The difference is that a machine does not know how to handle inputs it was not pre-programmed for. A human, on the other hand, can receive input he has never seen before, connect it to something similar, decide accordingly whether it is a significant input - a problem that needs to be addressed - and then solve that problem using imagination.
    I should note that what I meant by the humanity of a machine refers to the area of decision-making, not humanity in the overall sense. And I referred to humanity in the sense of assisting the decision-making system through association; the decision-making system in itself also still, in my opinion, does not reach a human level, despite the progress of understanding in the field - see "The Theory of Value".

  8. Mike
    I hope that's a joke, because otherwise it's sad. What you are saying is that all a computer lacks in order to be human is... to be human.
    Very deep.....

  9. Miracles
    The ability to analyze problems and solutions on an associative basis - in short, imagination - is more or less what a computer lacks in order to make decisions the way humans do.

  10. Beyond the dramatization, this sounds like an operating error.
    The flight control system and the measurement systems worked properly; otherwise the plane would have fallen out of the sky immediately.
    I don't know the internal logic of this specific system, but usually in "lost contact" mode, depending on the operator's pre-set, the plane will either continue its mission or try to return home along a route with pre-planned points for re-establishing contact. It will continue like this until it reaches line-of-sight contact with the ground station or, in certain systems, until a final landing.
    From the short description it sounds like the operator did not check the validity of the route. The fault is still a human error, and has nothing to do with any "deus ex machina" or the "desires" of the machine.

  11. Mike
    A fighter plane almost never uses the autopilot. It is of some use in night flights at altitude, but other than that it is hardly touched.
    On what basis do you say that a computer will be human? That is the last thing we want! Who wants a computer that makes mistakes??

  12. Not necessarily a bug. It sounds more like the autonomous system simply crashed, and the plane continued on autopilot. Suppose there were a human pilot on an armed plane and he died in the air - the plane would continue on the autopilot. Would the situation be any less catastrophic if it fell in a populated area?
    In general, the entire article relies on the assumption that a computer can never be human, but that is true only for those of little faith.

  13. Does anyone really think the military will stop developing autonomous weapons for fear of bugs? That sounds pretty far-fetched to me. Open letters of this type are meant to publicize the institute that initiated them, not to actually influence reality.
