
Will a drone decide who to kill?

"Ultimately if machines decide who dies, especially when it comes to mass killing, it could lead to what critics of AI feared most - that machines could destroy humanity," according to Peter Lee, director of security and risk and an expert in politics and ethics at the University of Portsmouth.

A drone destroys a city. Illustration: Shutterstock

In early March 2018 it was announced that Google is working with the United States Department of Defense to add artificial intelligence to the drones already in service with the United States Army, and possibly to other weapons systems as well.

The joint venture, known as Project Maven, aims to create technology that can identify objects in the area where a drone is operating. The project caused an uproar among Google employees, several thousand of whom signed a petition asking Google to withdraw from it.

Prof. Peter Lee, director of security and risk and an expert in politics and ethics at the University of Portsmouth in Great Britain, warned that "Google and other companies that develop AI systems for weapons may find themselves treated as legitimate military targets by an enemy."

According to him, "The United States Army recently announced that it is developing the first drones that can identify and target vehicles and people using artificial intelligence (AI). This is a big step forward. While current military drones are still controlled by people, this new technology will decide who to kill with almost no human involvement," wrote Prof. Lee in an article published on The Conversation website.

Once fully developed, he wrote, "these aircraft will represent the ultimate militarization of AI and will have far-reaching legal and ethical consequences for society. There is a chance that warfare will move from fighting to extermination, losing any semblance of humanity in the process. At the same time, it could widen the sphere of warfare, so that companies, engineers and scientists become valid military targets."

A UAV operators' room in the US Air Force. PR photo

The one who finally pulls the trigger must be a human
"Deadly military drones already exist," said Prof. Lee. "One of them is the MQ-9 Reaper, they are carefully monitored and routed via satellite. If such a UAV drops a bomb or fires a missile, a human sensor is activated that actively guides it to the selected target using a laser."

"Ultimately the operator team has the ultimate ethical, legal and operational responsibility for killing designated human targets. As one of the operators of Reaper points out: "I will always agree that a rebel, no matter how important he is as a target, will go away and not risk killing civilians."

"Even with this developed ability to kill people remotely with drones, human emotions, judgement and ethics have remained a factor in the fighting. The prevalence of mental trauma and post-traumatic stress disorder (PTSD) among drone operators shows the psychological impact of remote killing," said Prof. Lee.

"This actually points to a possible military and ethical argument by Ronald Arkin, in support of autonomous drones. Perhaps if there is a drone to drop the bombs, the psychological problems among the crew members may be avoided. The weakness in this argument is that you don't have to be responsible for killing to be traumatized by it. Intelligence experts and other military personnel routinely analyze graphic footage of aircraft strikes. Studies show that it is possible to suffer psychological damage even from viewing images showing extreme violence."

"When I interviewed over 100 Reaper crew members for a forthcoming book, every person I spoke to who had carried out lethal strikes believed that, ultimately, the person pulling the trigger must be a human. Take out the human and you also take out the humanity of the decision to kill."

Serious consequences

Fully autonomous drones would radically change the complex processes and decisions behind military killing. But legal and ethical responsibility does not disappear just because human supervision is removed. Instead, responsibility will increasingly fall on other people, including AI scientists.

According to Prof. Lee, "The legal consequences of these developments are already becoming clear. Under current international humanitarian law, 'dual-use' facilities, those that develop products for both civilian and military needs, can be attacked under the appropriate circumstances. Thus, for example, in the Kosovo war in 1999, the Pančevo oil refinery was attacked because it could supply fuel to Yugoslav tanks as well as to civilians."

"Certain lines of computer code will almost certainly be classified as dual-use. Companies like Google, its employees or its systems could therefore be exposed to attack from an enemy state. For example, if Google's AI-based motion-imagery software is incorporated into an American military autonomous drone, Google could find itself implicated in the 'killing' business, like every other civilian contributor to such lethal autonomous systems."

"Ethically, there are even more serious problems. The whole idea of ​​self-learning algorithms - software that learns independently from any information it can collect, uses technology to improve on any task it receives."

"If a lethal autonomous drone improves at its job through self-learning, someone will have to decide on an acceptable stage of development - how much the system still needs to learn. In military machine learning, this means that political, military and industrial leaders will have to specify how many civilian deaths will be considered acceptable as the technology improves."

"The recent cases of killing by autonomous systems should cause concern. The fatal experiments of Uber and Tesla with self-driving cars shows that we are pretty much guaranteed to have unwanted deaths from autonomous system operation due to software bugs."

Summing up, Prof. Lee fears that if machines ultimately decide who dies, especially when it comes to mass killing, it could lead to what critics of artificial intelligence, including the late Prof. Stephen Hawking, feared most of all: that machines could destroy humanity.

Read the full article on The Conversation website

3 comments

  1. All the articles here that try to convince us this is still far off are, in my opinion, misleading (with the caveat that this is my subjective view). Google's own vice president recently admitted at a conference that the future looks like science fiction. My estimate is that it will be science fiction in the style of The Matrix, not a utopia. By the way, I am a developer of advanced artificial intelligence.

  2. There is a more basic rule: once one intelligence is smarter than another, there is no way the less intelligent one can keep the upper hand over time. The rule can of course be complicated in practice, but in principle it holds. Evolution took about 2.5 billion years to engineer human intelligence; in artificial intelligence, capability roughly doubles every two years. IBM, Intel and Google have launched quantum computers in the 50-70 qubit range, available to large companies such as Samsung and Daimler, which by my reckoning means up to 2 to the power of 70 operations per cycle at around 3E9 cycles per second. Companies like NVIDIA, AMD and ASUS release graphics accelerators, the key hardware component for AI, at about $3,000 for 16 teraflops, and the price quickly drops toward $300. By my estimate, 16 teraflops corresponds to roughly 50E6 variables, while the brain works with something like 15E9 (see the rough sketch after these comments).
    Where am I heading? When artificial intelligence crosses the level of human intelligence, it is quite clear that it will be different from us rather than the same, similar but mainly different, and we will have crossed a threshold of no return. From there we are on borrowed time until artificial intelligence replaces human intelligence. It first needs to master all the professions, with the kind help of Google's, Facebook's and IBM's accelerators; the capital involved and the spectacular mathematics dazzle us. Then it needs to survive, as humans do, and then we will be like monkeys compared to it. And that's it.

  3. The author's view that "companies, engineers and scientists become valid military targets" seems strange. It reads more like a threat meant to intimidate those companies than a description of a complex reality.
    Regardless of AI, by this logic every company is complicit in killing. If you produce food, for example, soldiers eat it, and if it gives them the energy to eliminate the enemy then the food producer is also a legitimate target; otherwise the producer must refuse to let the military use the food. The distinguished professor himself, who works at a university that builds his country's technological infrastructure, infrastructure that is ultimately used to eliminate enemy troops, also becomes a legitimate target by this reasoning. Clearly every army in the world relies on its country's technological capabilities to gain superiority over the enemy, and armies on both sides use the same processors, sensors and other technologies from all over the world to improve their capabilities. Does that make every technology company in the world a legitimate target? Where is the line around who counts as involved?
    And to add a further complication: suppose a situation arises in which the analytical ability of an AI system reduces human error and the killing of uninvolved civilians, contrary to his description. Would the esteemed professor continue to oppose AI systems at all costs, even if more people are killed as a result?
    It is not clear whether the following applies to the distinguished professor, but alongside the genuinely huge benefits and the real future risk posed by AI capabilities, there is also a danger that the handling of those problems will be "hijacked" by other worldviews, for which this danger is not actually the top priority but which drive the agenda.
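
The arithmetic in comment 2 is the commenter's own back-of-the-envelope reasoning. Purely as a restatement of those figures, and not an endorsement of them (a 70-qubit quantum computer does not literally perform 2 to the power of 70 floating-point operations per clock cycle), here is a small Python sketch using the commenter's numbers; every value in it is the commenter's assumption, not an established fact.

```python
# Restates the figures from comment 2 above. All numbers are the commenter's
# assumptions, not established facts; in particular, counting qubits is not the
# same as counting floating-point operations per cycle.

qubits = 70                           # "quantum computers in the 50-70 qubit range"
cycles_per_second = 3e9               # the commenter's assumed 3E9 cycles per second
claimed_ops_per_cycle = 2 ** qubits   # the commenter's reading of "2 to the power of 70"
claimed_ops_per_second = claimed_ops_per_cycle * cycles_per_second

gpu_variables = 50e6    # the commenter's estimate for a 16-teraflop accelerator
brain_variables = 15e9  # the commenter's rough figure for the brain

print(f"claimed quantum ops per second: {claimed_ops_per_second:.2e}")
print(f"accelerator vs. brain (commenter's estimate): {gpu_variables / brain_variables:.2%}")
```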
