
Robotics - Exemplary robots / Michael Anderson and Susan Leigh Anderson

Autonomous machines will soon play an important role in our lives. The time has come for them to learn to behave in an ethical way.

The Tik-Tok robot from The Wizard of Oz (not to be confused with the Tin Man!)

Imagine that you are a resident of a nursing facility, an environment where robots will probably be common in the near future. It is almost eleven in the morning, and you ask the robotic assistant in the activity room to bring you the remote so you can turn on the TV and watch a talk show. But another resident also asks for the remote, because she wants to watch an entertainment program. The robot decides to give her the remote. At first you resent it, but the robot explains that the decision was fair because you got to watch your favorite show the day before. This anecdote is an example of a routine act of ethical decision making - but for a machine, it is a remarkably difficult task.

The scenario described here is still theoretical, but we have already created the first demonstration of a robot capable of making similar decisions. We gave our machine an ethical principle, which it uses to determine how often to remind a patient to take medicine. The robot's software is currently able to choose from only a limited range of options - whether and when to keep reminding the patient, or whether to accept the patient's decision to skip the medicine - but to the best of our knowledge, this is the first robot that relies on an ethical principle to determine its actions.

It would be difficult, if not downright impossible, to anticipate every decision a robot might face and program it to behave as desired in every imaginable situation. On the other hand, if we prevent robots from performing any action that involves ethical considerations, we unnecessarily limit their ability to perform tasks that could greatly improve our quality of life. We believe the solution is to design robots that can apply ethical principles in new and unexpected situations - determining, say, who gets to read a new book rather than who gets the remote this time. This approach has the added value of allowing robots to justify their behavior in light of these principles if asked to do so, an ability that is necessary if humans are to feel comfortable interacting with them. Another side benefit is that the attempt to design ethical robots may lead to progress in the field of ethics itself, since it forces philosophers to examine real-life situations. As the philosopher Daniel C. Dennett of Tufts University recently put it: "AI makes philosophy honest."

The selfish robot

It seems that autonomous robots will soon be part of our daily lives. Airplanes that can fly themselves already exist, and self-driving cars are under development. Even "smart homes," in which computers control everything from the lighting to the air conditioning, can be thought of as robots whose body is an entire house - just as the computer HAL 9000 served as the brain of a robotic spaceship in Stanley Kubrick's classic film "2001: A Space Odyssey." Several companies are developing robots that can assist the elderly with daily tasks, either complementing the staff of nursing institutions or helping the elderly live at home on their own. Although most of these robots do not need to make life-and-death decisions, for us to accept them in our midst their actions must be perceived as fair, just or simply kind. Their inventors therefore need to consider the ethical implications of their programming.

If one agrees that implementing ethical principles in autonomous machines is essential to their success in interacting with humans, the first question that arises is: "Which principles should be implemented?" Science-fiction fans may think that Isaac Asimov provided the answer years ago, in his original Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

(From "Robots of Shahar", translated by Emanuel Lotem)

But people soon found that if you think through the consequences of these laws, which Asimov first formulated in a short story in 1942, you run into contradictions. Asimov himself showed how inadequate the laws are in his 1976 story "The Bicentennial Man," in which human thugs order a robot to dismantle itself. The robot must obey the thugs because of the Second Law, yet it cannot defend itself without hurting them, which would violate the First Law.

If Asimov's laws are not acceptable, what is the alternative? Can there even be one? Some believe that implementing ethical behavior in machines is hopeless: ethics, they argue, is not the kind of thing that can be calculated, and therefore cannot be programmed into a machine. Yet already in the 19th century the English philosophers Jeremy Bentham and John Stuart Mill claimed that ethical decision making is in fact a kind of "moral arithmetic." The theory of hedonistic utilitarianism that they formulated, in answer to an ethics based on subjective intuition, holds that the right action is the one expected to yield the maximum "net pleasure," calculated by adding up the units of pleasure and subtracting the units of suffering of everyone involved. Most ethicists doubt that this theory captures every dimension of ethical consideration. For example, it has difficulty accommodating considerations of justice and may lead to sacrificing an individual for the good of the many. Nevertheless, it at least shows that a plausible ethical theory can, in principle, be computed.
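To make this "moral arithmetic" concrete, here is a minimal sketch in code; the pleasure and suffering values, and the two candidate actions from the opening scenario, are invented purely for illustration and are not taken from Bentham, Mill or the authors.

```python
from dataclasses import dataclass

@dataclass
class Effect:
    person: str
    pleasure: float   # units of pleasure this action gives the person
    suffering: float  # units of suffering this action causes the person

def net_pleasure(effects):
    """Hedonistic-utilitarian 'moral arithmetic': total pleasure minus total suffering."""
    return sum(e.pleasure - e.suffering for e in effects)

# Two candidate actions from the remote-control scenario; the numbers are made up.
give_remote_to_you   = [Effect("you", 5, 0), Effect("other resident", 0, 4)]
give_remote_to_other = [Effect("you", 0, 2), Effect("other resident", 5, 0)]

# Under this theory the "right" action is whichever yields the greater net pleasure.
print(net_pleasure(give_remote_to_you), net_pleasure(give_remote_to_other))  # 1 3
```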

Others doubt that machines will ever be able to make ethical decisions, because machines lack emotions and therefore cannot appreciate the feelings of the people their actions may affect. But humans are so prone to being carried away by their emotions that they often behave unethically. This trait, together with our tendency to favor ourselves and those near and dear to us, often leads us to make ethical decisions that are far from ideal. We think it is quite possible that a properly trained machine could behave impartially and could perceive human emotions and include them in its calculations, even if it has no emotions of its own.

Learning from examples

Assuming that robots can be equipped with ethical rules, whose rules should they be? To date, no one has managed to formulate a general set of ethical principles for real-world human behavior that everyone accepts. But machines are usually built to function in a defined, limited domain. Establishing ethical parameters for behavior in such cases is less daunting than trying to devise universal rules of ethical and unethical conduct, which is what ethical theorists attempt. Moreover, when ethicists are presented with descriptions of particular situations in the contexts where robots are expected to operate, most of them agree on what is ethically permissible and what is not. (In situations where there is no such consensus, we believe machines should not be allowed to make independent decisions at all.)

Researchers have proposed a variety of approaches to deriving rules for the ethical behavior of machines, often using artificial-intelligence methods. In 2005, for example, Rafal Rzepka and Kenji Araki of Hokkaido University in Japan proposed "democracy-dependent algorithms" that would search the web for information about what humans have previously considered ethically acceptable actions and then perform a statistical analysis to generate answers to new questions. In 2006, Marcello Guarini of the University of Windsor in Ontario argued that neural networks - algorithms inspired by the human brain that learn to process information ever more effectively - could be "trained" on past cases to recognize and choose ethically acceptable decisions in similar new cases.

Our view, which also guides our research, is that ethical decision making involves balancing several obligations, which ethicists call prima facie duties ("at first glance," in Latin). These are commitments we basically try to honor, but each of them can be overridden by another, depending on the circumstances. For example, people should generally try to keep their promises, but if breaking a trivial promise would prevent great harm, they should break it. When duties conflict with one another, ethical principles can determine which takes precedence in each particular case.

To create ethical principles that can be programmed into a robot, we apply an artificial-intelligence technique known as machine learning. Our algorithm is given a set of representative cases in which humans have determined that certain decisions are ethically correct. From them it abstracts, by logical inference, an ethical principle. This "learning" phase takes place while the software is being designed, and the resulting ethical principle is then encoded into the robot's programming.
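The following toy sketch illustrates the kind of learning step described above, under heavy simplifications: each case is reduced to two duty-violation levels plus a human verdict, and a single threshold is generalized from them. The case data and the threshold rule are hypothetical and do not reproduce the authors' actual algorithm.

```python
# Each training case: (violation of "prevent harm", violation of "do good",
# the ethically correct decision as judged by a human). More negative = worse violation.
cases = [
    (-2, -1, "challenge"),   # serious looming harm: override the patient's choice
    (-1, -2, "challenge"),   # large lost benefit: override the patient's choice
    (-1,  0, "accept"),      # mild violation only: respect the patient's autonomy
    ( 0, -1, "accept"),
    ( 0,  0, "accept"),
]

def learn_threshold(cases):
    """Generalize: challenge whenever a duty is violated worse than in any accepted case."""
    return min(min(h, b) for h, b, verdict in cases if verdict == "accept")

cut = learn_threshold(cases)   # -1 for the toy data above

def principle(harm_violation, benefit_violation):
    """The principle abstracted from the cases, now applicable to new situations."""
    return "challenge" if min(harm_violation, benefit_violation) < cut else "accept"

print(principle(-2, 0))   # "challenge" - a combination the training cases never contained
```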

As a first test of our method, we chose a scenario in which a robot must remind a patient to take medicine and notify an overseer when the patient does not comply. The robot must balance three duties: to ensure that the patient receives the possible benefit of taking the medicine, to prevent the harm that might result from not taking it, and to respect the patient's autonomy (we assume the patient is a competent adult). Respect for patient autonomy is given high priority in medical ethics; if the robot nags the patient too often, or notifies the overseer too soon after a refusal, it infringes on that autonomy.

After we entered information about specific cases, the machine-learning algorithm produced this ethical principle: a health-care robot should challenge a patient's decision - violating the patient's autonomy - whenever doing otherwise would fail to prevent harm or would seriously violate the duty to promote the patient's welfare.

An idea with legs

We then programmed the principle into Nao, a humanoid robot developed by the French company Aldebaran Robotics. Nao can locate a patient who needs to be reminded to take medicine, move toward the patient, bring the medicine, communicate in natural language and notify an overseer by e-mail when necessary. The robot receives initial input from the overseer (typically a physician) that includes the time the medicine should be taken, the maximum harm that could occur if it is not taken, how long it would take for that maximum harm to occur, the maximum expected benefit of taking the medicine and how long it would take for that benefit to be lost. From this input the robot calculates its levels of duty satisfaction or violation for each of the three duties and takes different actions as those levels change over time. It issues a reminder when the levels of satisfaction and violation reach the point where, according to its ethical principle, reminding is preferable to staying silent. The robot notifies the overseer only when it reaches the point at which the patient could be harmed, or could lose considerable benefit, from not taking the medicine.
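As a rough illustration of this loop, here is a simplified sketch; the field names, the 0-to-1 scales, the linear growth of the violation levels over time and the two cutoff values are assumptions made for this example, not the authors' actual parameters.

```python
from dataclasses import dataclass

@dataclass
class Prescription:
    dose_hour: float              # hour of day at which the medication is due
    max_harm: float               # worst harm (0..1) if the dose is missed entirely
    hours_to_max_harm: float      # how long until that maximum harm occurs
    max_benefit: float            # greatest benefit (0..1) lost by skipping the dose
    hours_to_lost_benefit: float  # how long until that benefit is lost

def violation_levels(rx, now):
    """How badly 'prevent harm' and 'do good' are violated as time passes without the dose."""
    elapsed = max(0.0, now - rx.dose_hour)
    harm = rx.max_harm * min(1.0, elapsed / rx.hours_to_max_harm)
    lost_benefit = rx.max_benefit * min(1.0, elapsed / rx.hours_to_lost_benefit)
    return harm, lost_benefit

def choose_action(rx, now, remind_cut=0.3, notify_cut=0.8):
    """Wait, remind, or notify the overseer, following the principle described above."""
    harm, lost_benefit = violation_levels(rx, now)
    if max(harm, lost_benefit) >= notify_cut:
        return "notify overseer"   # the patient may be harmed or lose considerable benefit
    if max(harm, lost_benefit) >= remind_cut:
        return "remind patient"    # reminding now outweighs the nudge to autonomy
    return "wait"                  # respecting the patient's autonomy still takes precedence
```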

A full-fledged ethical nursing robot to support the elderly - ESTI for short - will indeed need a more complex ethical principle to guide the wider range of actions it will perform, but the general approach will be the same. On her rounds through the nursing facility, Esti will use this principle to determine when one duty takes precedence over another. A typical day might go like this:

Early in the morning, Esti stands in a corner, plugged into the wall outlet. Once her batteries are full, the duty to help others ("do good") outweighs the duty of self-maintenance, and she starts moving around the hall, visiting the residents and asking whether she can help them in any way - bring a drink, carry a message to another resident and so on. As she receives these tasks, the robot assigns initial levels of satisfaction or violation to each of the duties involved in performing them. One resident, in distress, asks her to call a nurse. Ignoring a resident's distress means violating the duty to prevent harm, and that duty now overrides the duty to "do good." So the robot seeks out a nurse and tells her the resident needs her. Once the task is complete, the duty to "do good" regains top priority, and Esti resumes her rounds.

At ten in the morning it is time to remind one of the residents to take his medicine. This task, part of the duty to "do good," now takes top priority, so the robot finds the resident and hands him the medicine. Later, the residents are absorbed in a television program - a talk show or some other entertainment. With no more tasks waiting and her batteries running low, Esti finds that she is increasingly violating her duty of self-maintenance - and returns to the charging station.
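A rough sketch of how such a round might arbitrate between duties follows; the battery threshold, the order of the checks and the task list are invented for illustration and are not taken from the authors' system.

```python
def next_action(battery_level, distress_call, pending_tasks):
    """Pick the action whose duty currently takes precedence during the round."""
    if distress_call:
        return "fetch a nurse"               # "prevent harm" overrides "do good"
    if battery_level < 0.15:
        return "return to charging station"  # self-maintenance is now being violated
    if pending_tasks:
        return pending_tasks[0]              # "do good": serve the residents' requests
    return "continue the round"

# Example: batteries full, no one in distress, two requests waiting.
print(next_action(0.9, False, ["remind a resident to take medicine", "deliver a message"]))
```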

The study of machine ethics is in its infancy. Although our results are preliminary, they give us hope that ethical principles computed by machines can guide the behavior of robots and make their treatment of humans more acceptable. Instilling ethical principles in robots is important because if people suspect that intelligent robots might behave unethically, they may come to reject autonomous robots altogether. The future of artificial intelligence itself may hang in the balance.

It is interesting to note that machine ethics may ultimately influence the study of ethics itself. The "real world" perspective of AI research may get closer to capturing what counts as ethical behavior among humans than the abstract hypotheticals of academic ethicists do. And properly trained machines might even behave more ethically than most humans, because they are capable of making impartial decisions - something humans do not always manage. Perhaps interacting with an ethical robot will one day even inspire us to behave more ethically ourselves.

___________________________________________________________________________

About the authors

Michael Anderson holds a PhD from the University of Connecticut and is a professor of computer science at the University of Hartford. He has long been interested in artificial intelligence.

Susan Leigh Anderson received her PhD from the University of California, Los Angeles. She is professor emeritus of philosophy, specializing in applied ethics, at the University of Connecticut. In 2005 she and Michael Anderson helped organize the first international symposium on machine ethics. A book they have written on the subject will soon be published by Cambridge University Press.

In brief

Robots that make independent ethical decisions, such as those designed to assist the elderly, may face ethical dilemmas even in seemingly everyday situations.

One way to ensure ethical behavior among robots interacting with humans is to program them with general ethical principles and let them use these principles to make a decision unique to each case.

Artificial intelligence techniques can derive the principles themselves through the logical abstraction of specific instances of ethically acceptable behavior.

The authors took this approach and programmed the first robot that operates based on an ethical principle.

Codify rules of conduct

Robots interacting with humans will often have to make decisions with ethical implications. Programmers cannot foresee every ethical dilemma the machine may encounter, but they can provide an overarching principle that guides its decision in each particular case. The authors demonstrated this approach when they programmed the robot Nao to decide whether, and how often, to remind a patient to take medicine.

Setting rules

Designers can program robots with an ethical rule derived by applying an artificial-intelligence technique called machine learning. The designers feed the learning algorithm information about decisions considered ethical in selected cases, based on measures such as how much benefit a decision brings, how much harm it prevents and how fair it is. The algorithm then generates an abstract general principle that can be applied to new cases.

Decisions, decisions

A robot that assists the elderly can rank possible actions by how well they satisfy the ethical criteria and then, based on that ranking, use its built-in principle to calculate which action should take top priority at any given moment. Even if one resident asks for food and another for the TV remote, for example, the robot may decide to perform some other action first, such as reminding a patient to take medicine.
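One simple way to realize such a ranking is sketched below; the duties, the weights and the per-action scores are illustrative assumptions only, not figures from the authors' system.

```python
# Weight each duty, score each candidate action against every duty,
# and pick the action with the highest weighted total.
duty_weights = {"prevent harm": 3.0, "do good": 2.0, "respect autonomy": 1.0}

candidate_actions = {
    "remind a patient to take medicine": {"prevent harm": 0.7, "do good": 0.5, "respect autonomy": -0.2},
    "bring a resident food":             {"prevent harm": 0.1, "do good": 0.6, "respect autonomy": 0.0},
    "hand over the TV remote":           {"prevent harm": 0.0, "do good": 0.3, "respect autonomy": 0.0},
}

def priority(scores):
    return sum(duty_weights[duty] * level for duty, level in scores.items())

best = max(candidate_actions, key=lambda action: priority(candidate_actions[action]))
print(best)   # "remind a patient to take medicine"
```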

When science imitates art

Long before experts in ethics, robotics and artificial intelligence began to take an interest in the possible ethical consequences of robots' behavior, science-fiction writers and filmmakers were playing with scenarios that were not always far from reality. In recent years, however, machine ethics has become a legitimate field of research, one that draws inspiration, among other sources, from the writings of 18th- and 19th-century philosophers.

And more on the subject

IEEE Intelligent Systems. Special issue on machine ethics. July/August 2006.

A Robot in Every Home. Bill Gates in Scientific American, Vol. 296, no. 1, pages 58-65; January 2007.

Machine Ethics: Creating an Ethical Intelligent Agent. Michael Anderson and Susan Leigh Anderson in AI Magazine, Vol. 28, no. 4, pages 15-26; Winter 2007.

Moral Machines: Teaching Robots Right from Wrong. Wendell Wallach and Colin Allen. Oxford University Press, 2008.

The War of the Machines, P. W. Singer, Scientific American Israel, October-November 2010

5 comments

  1. Dolly - great and very rapid progress has been made in recent years in brain research, and groups of leading scientists from all over the world have already begun building simulations of an entire brain running on a computer. The simulations include all the learning rules known to us that operate in the brain, and the result, if the scientists have not missed anything, should ultimately be an electronic brain that works exactly like a human brain, including self-awareness and including desires and feelings that are not dictated to it in advance by the creators of the software:

    http://www.youtube.com/watch?v=L0AR1cUlhTk

    http://www.youtube.com/watch?v=l4ZbwRxhRYw

    The robots will not continue to serve us forever.

  2. Rival - all notions of control and domination are evolutionary products. Robots will not have a 'will' that makes them want to take over the world unless we design them that way. If we write Asimov's laws into their software, then they will serve humans.
    If we write no laws, they will do nothing - like a calculator, which can do more calculations than a human being but is not 'interested' in them unless it is asked.
    If a mad scientist builds a computer that wants to take over the world, then we are in trouble... but I don't think that will happen...

  3. After reading the laws written by Isaac Asimov, it is hard for me to ignore the similarity to our Ten Commandments... Maybe we too are robots who received laws in the distant past from an unknown creator? Admit there is a certain similarity between the two...

  4. I think it is quite ridiculous to expect that robots whose intelligence will, within a few decades, probably exceed that of humans millions of times over, if not more, will continue to serve us and follow our orders. It is like a group of monkeys expecting humans to serve them and follow their orders.

    The period in which robots serve us will be very short.

    The singularity is near.
