
Programming moral robots

In the near future, beings with artificial intelligence could be "more moral beings than humans" - or at least those who make better decisions when dealing with certain dilemmas.

A robotic helicopter. Photo: Shutterstock

In 2009 we reported on this science website on a report by researchers from the California Polytechnic Institute proposing to implement in military robots a code of ethics based on something similar to Asimov's three laws of robotics. According to the researchers, autonomous military robots will have to obey a warrior code - or they will become the masters of humanity.

In the years that have passed since then, as Operation Protective Edge also demonstrated, much of this technology has entered use, among other things UAVs equipped with night-vision equipment and, in some cases, attack capabilities. Today, however, the main intelligence of these systems lies in assisting navigation and mission management; everything related to their actual operation is still done by human operators.


The design of robots with independent moral capacity may include implementing a fully developed conscience - the ability to distinguish between good and bad and to act accordingly. Such future robots may be more moral than humans.

In 2002 the roboticist Gianmarco Veruggio coined the term roboethics - the assimilation of human morality into the design, production and operation of robots - emphasizing a research goal worth striving for. The ethics of artificial intelligence has since been divided into two subfields:

  • Machine ethics: the branch dealing with the behavior of artificial moral agents;
  • Robot ethics: the branch dealing with the behavior of humans - how they develop, build, use and behave towards robots and other artificially intelligent entities. This field also includes the question of whether robots can be programmed with an ethical code that directs their behavior according to the social norms that distinguish between good and bad.

Logically, in order to create morally independent robots of this kind, researchers must agree on a number of basic standards: what moral capacity is, and what humans should expect from robots that work alongside them and share in decision-making in fields such as medicine and warfare. Alongside that comes another question: what is the responsibility of humans in creating an artificial intelligence with moral independence? And the main research question: what should we expect from morally capable robots?

One of the most important issues, though many questions about it remain open, is the idea of 'moral capacity'. Professors Bertram F. Malle of Brown University and Matthias Scheutz of Tufts University published a research paper this year titled "Moral competence in social robots". In it they argue that moral competence consists of four broad components (a rough code sketch of one way such components might be represented appears after the list):

  1. Moral core: a system of norms and the language and concepts used to define them - a moral vocabulary and a network of moral norms;
  2. Moral action: awareness of morality and its consequences - conforming the agent's actions to those norms;
  3. Moral cognition and emotion: making moral judgments and decisions, together with the emotional response to violations of norms;
  4. Moral communication: reasoning about, justifying, negotiating over and coming to terms with violations of morality.
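
One way to picture these four components in an engineering context is as a data structure the robot's architecture would have to populate. This is purely a hypothetical sketch in Python; the field names and example entries are invented and are not taken from Malle and Scheutz's paper.

```python
# Hypothetical sketch: the four components of moral competence as a
# plain data structure. All field names and entries are invented.
from dataclasses import dataclass, field

@dataclass
class MoralCompetence:
    # 1. Moral core: a moral vocabulary and a network of norms.
    moral_vocabulary: set[str] = field(default_factory=set)
    norm_network: dict[str, list[str]] = field(default_factory=dict)
    # 2. Moral action: policies that conform behavior to the norms.
    action_policies: dict[str, str] = field(default_factory=dict)
    # 3. Moral cognition and emotion: judging and reacting to violations.
    judgment_rules: list[str] = field(default_factory=list)
    affective_responses: dict[str, str] = field(default_factory=dict)
    # 4. Moral communication: explaining, justifying and negotiating.
    explanation_templates: dict[str, str] = field(default_factory=dict)

competence = MoralCompetence(
    moral_vocabulary={"harm", "duty", "fairness"},
    norm_network={"avoid_harm": ["do_not_injure", "warn_of_danger"]},
    action_policies={"blocked_road": "choose_least_harmful_detour"},
    judgment_rules=["a violation is worse when it is intentional"],
    affective_responses={"norm_violation": "flag_and_report"},
    explanation_templates={"detour": "I chose route {route} to avoid {risk}"},
)
```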

Designing robots with independent moral capacity may be inspiring and fascinating, but it will undoubtedly be challenging, and the engineers will have to go through several necessary steps. First, they must build a computational representation of moral systems and embed moral terms and vocabulary into the robot's architecture. Next, they must develop algorithms for moral recognition and moral decision-making.
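
As a very rough illustration of those two steps, and not something proposed in the article, the sketch below represents a tiny moral vocabulary and norm set as machine-readable rules and flags which norms a candidate action's predicted effects would violate; all names, terms and thresholds are invented.

```python
# Hypothetical sketch: a small moral vocabulary, norms defined over it,
# and a routine that flags the norms a candidate action would violate.
from dataclasses import dataclass

# Step 1: embed moral terms and vocabulary in the architecture.
MORAL_VOCABULARY = {"harm", "deception", "consent", "fairness"}

@dataclass
class Norm:
    name: str               # human-readable label
    forbidden_effect: str   # a term from the moral vocabulary
    threshold: float        # how much of that effect counts as a violation

NORMS = [
    Norm("do not harm people", "harm", threshold=0.0),
    Norm("do not deceive users", "deception", threshold=0.0),
]

# Step 2: an algorithm for moral recognition.
def violated_norms(predicted_effects: dict[str, float]) -> list[str]:
    """Return the names of the norms that the predicted effects violate."""
    unknown = set(predicted_effects) - MORAL_VOCABULARY
    if unknown:
        raise ValueError(f"effects outside the moral vocabulary: {unknown}")
    return [n.name for n in NORMS
            if predicted_effects.get(n.forbidden_effect, 0.0) > n.threshold]

print(violated_norms({"harm": 0.1, "deception": 0.0}))  # -> ['do not harm people']
```

The hard part, of course, is not the rule check but producing the predicted effects in the first place, which requires perception and prediction far beyond anything in this sketch.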

In a practical scenario, for example, a morally autonomous system used for medical transport would have to decide whether changing its route from checkpoint A to checkpoint B is the best way to achieve its purpose: delivering supplies to a disaster area or battlefield.
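
To make the scenario concrete, here is a minimal, purely hypothetical sketch of how such a transport system might score the choice between its planned route and a detour; the criteria, weights and numbers are all invented.

```python
# Hypothetical sketch: scoring the planned route against a detour when
# delivering supplies. All criteria, weights and numbers are invented.

def route_score(delivery_delay_h: float, civilian_risk: float,
                supply_urgency: float) -> float:
    """Higher is better: reward timely delivery, penalize risk to civilians."""
    timeliness = supply_urgency / (1.0 + delivery_delay_h)
    return timeliness - 50.0 * civilian_risk

routes = {
    "planned_route_A_to_B":   {"delivery_delay_h": 0.0, "civilian_risk": 0.2},
    "detour_via_checkpoint_C": {"delivery_delay_h": 1.5, "civilian_risk": 0.0},
}
supply_urgency = 10.0  # e.g. medical supplies for a disaster area

best = max(routes, key=lambda r: route_score(**routes[r], supply_urgency=supply_urgency))
print(best)  # -> "detour_via_checkpoint_C" under these example numbers
```

The only point of the sketch is that the trade-off itself - urgency of delivery against risk to bystanders - has to be made explicit and machine-readable before such a system can be said to exercise moral judgment.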

 

8 Comments

  1. I can't browse the site from a SAMSUNG GALAXY S3. Has anyone encountered this phenomenon?

  2. Morality is a social matter and differs between cultures, so "good" and "bad" are vague, baseless definitions; what is moral in one culture can be immoral in another. The morality of a robot will therefore belong to whoever programs it.

  3. I want to expand on what the other commenters wrote here with an example from my own experience. The details have been slightly changed for obvious reasons.

    I was ordered to attack a traffic light at a certain spot on a busy street. I had no other information. As a disciplined soldier I carried out the order as I had been taught, and civilians were harmed.

    Was it moral? On the face of it, no. But it turned out that blocking the road at that point prevented a mass attack in which dozens, if not hundreds, of innocent civilians would have been killed.

    What exactly could an autonomous weapon have decided in such a situation? It is not exposed to information that others have. In my opinion, discretion must be left to humans.

  4. Robots are more moral than humans?!
    Who exactly will teach the robot what is moral and what is not, if not humans?

  5. In the same context, one of the worrisome things about self-driving vehicles is the possibility of using them to carry out attacks, assassinations and the like.

  6. Of course, at the moment we still have no idea how a moral code could be embedded in a robot - we are very far from that. It is almost like wanting to implant a moral code in an insect. Assuming it does become possible in the future, autonomous robots will be fitted with a morality that, as in humans, depends on culture - and even then there will be exceptions.
    Start with the cultural dependence: obviously a body like the Nazis, for example, would design its robots according to its own "moral" code, and if such a technology and such a culture ever met, the combination would be a bad one.
    But even if making the robot a little more brutal only gave one side an advantage, even a moral side would not be able to resist the temptation. Take the Allies in World War II: analyzing their actions, for example the bombing of cities, they fall far short of the moral standards we aspire to today. When evil attacks the good, even the good cannot escape the murderous nightmare of war; they have to roll up their sleeves and get their hands dirty. In the end this is about the survival of the group, which becomes a value above the individual: if the group can survive while staying moral, fine, but if not, even the good will push morality aside in order to survive.
    Consider the degree of autonomy a robotic system should have. We are talking about systems for rapid decision-making, and an excess of autonomy can get out of control and endanger its own creator. Yet if the creator recognizes an advantage in a more autonomous system, he will not be able to avoid adopting it, for the simple reason that the enemy will adopt it and his own system would then collapse in front of the opponent's.
    The same goes for removing moral elements from the system: if we find that the moral elements embedded in our robotic system make it hesitate and slow down against the opponent's system, and the obvious result is defeat, then we will have to choose between losing and giving up the moral elements we believe in - and we will not be able to implement them in that system. Perhaps that is the dangerous core of these systems: once they are deployed, it will be very difficult to control what comes out of them.
    In addition, a robotic system is, after all, a super-system for decision-making, and if it is a robotic soldier on the battlefield with the ability to absorb a moral code, it will also be a system of tremendous complexity, in which unexpected elements will always remain, just as in a person - anyone who develops systems knows this. Look at the launch troubles of a company like Boeing, with the aircraft battery that failed, and that was the work of some of the best engineers in the world; in any chaotic system, the more the complexity grows, the more unexpected events occur. Another key element of a combat robotic system - the one that gives it its advantage - is freedom of action to respond to unexpected situations, just like a human, and from that very requirement the danger of such a system also arises.
    One human dilemma a robotic system will not face is coping with a robot that has been damaged or destroyed: it will always be able to keep operating with the same set of considerations, and it will not change its course of action emotionally - any changes will be purely tactical or strategic.
    Because of all this complexity it will take many more years before we see such systems, but these are the problems humanity will have to face in the future.
