
"The fact that something can be done with artificial intelligence does not mean that it should be done"

So said Prof. Francesca Rossi, IBM's vice president for artificial intelligence ethics, at a press conference held at IBM's research laboratory in Zurich. As an example, Rossi cited the dilemma known as the trolley problem. In response to this science website's question of whether such a risk will exist at all once all cars are autonomous, she said that accidents involving pedestrians and cyclists are still to be expected.

Ethics and artificial intelligence. Illustration: shutterstock

"The fact that something can be done with artificial intelligence does not mean that it should be done," said Prof. Francesca Rossi, IBM's vice president for the ethics of artificial intelligence. Prof. Rossi and Prof. Barry O'Sullivan from the University of Cork in Ireland, who also serves as the president of a group of researchers for artificial intelligence that advises the European Union Commission, discussed this issue with journalists, at a meeting held at the research center of the blue giant in Zurich. The two delivered their words on video, because they were at the European Union conference in Brussels at the time of the conversation.

According to Prof. Rossi, "People love the benefits of the new technologies, but they also fear them. There are many concerns that need to be addressed. One of them is bias, which is sometimes unconscious. The systems are based on machine learning, but if the data the system learns from and the model built around it suffer from a bias, even a small one, the recommendations it gives will not be fair. The system must also be able to explain how it reached its result, otherwise no one will trust it."
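
How a small bias in training data propagates into a model's recommendations is easy to demonstrate. The following is a minimal sketch, not IBM's methodology: a synthetic "loan approval" dataset in which one group was historically approved slightly less often at the same score, and a scikit-learn classifier that reproduces the gap. All names and numbers are illustrative only.

```python
# Minimal sketch (illustrative only): a small bias in historical labels
# resurfaces in the trained model's recommendations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, size=n)     # protected attribute: 0 or 1
score = rng.normal(600, 50, size=n)    # credit-like score, same distribution for both groups

# Historical approvals: identical scores, but group 1 was approved slightly less often.
approve_prob = 1 / (1 + np.exp(-(score - 600) / 25)) - 0.05 * group
label = rng.random(n) < approve_prob

# The protected attribute is deliberately left in as a feature to show the leakage.
X = np.column_stack([(score - score.mean()) / score.std(), group])
model = LogisticRegression().fit(X, label)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.3f}")
```

Running it shows a lower predicted approval rate for group 1 even though the underlying scores are identically distributed; this is the kind of gap the design-time questions Rossi describes are meant to catch.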

Prof. Francesca Rossi, IBM Vice President for Artificial Intelligence Ethics. Photo: PR

"We, at IBM, plan the systems from the first day of their design so that they are interested in the ethical issues," she said. Prof. Rossi mentioned a number of steps taken at the Blue Giant to reach this goal: "We conduct trainings and come up with a long list of questions that the designers of such systems should ask themselves, in order to identify biases and deal with them in each of the model's components. We also opened the API's of the systems to increase transparency. We constantly research the field and transfer the research results to the development of the products and services, and expand the partnership on the subject with the stakeholders. We, as well as other organizations, must be aware of the capabilities of artificial intelligence, but also its limitations."

Examples of bias

Prof. Rossi presented the dilemma known as the trolley problem, in which trolleys are traveling toward each other on a track and the driver of one of them must decide whether to collide with the other trolley or divert to a branch where he will run over pedestrians. When an autonomous car has to decide in the event of an unavoidable accident, where the only choice left is whom to hit, ethical and moral considerations must be built into it. In response to this science website's question of whether the problem will not be solved when all cars are autonomous, since they will be able to communicate with the infrastructure and even with cars outside the line of sight (for example, behind a roundabout or an intersection), Rossi answered that the chance of having to make such a decision will indeed be small, but it must still be taken into account: pedestrians and cyclists will continue to use the roads, and even though accidents will become rarer, they will not be eliminated entirely, so one must prepare.

Prof. O'Sullivan noted "a very simple example of bias that arises in online translation: a patient needs to make an appointment with a doctor, and the system automatically assumes that the doctor is a man, even though a woman can of course also be a doctor. Likewise, an article published in the journal Science in April showed that in many languages the titles of high-ranking roles such as president, leader or manager are associated with the masculine (in Hebrew, for example, they usually take the masculine form), while women are associated with caregiving or support roles. This is a bias that is built into the language itself, and these kinds of biases must be overcome when it comes to technology and computing."
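
The Science article referred to here is presumably the April 2017 study by Caliskan, Bryson and Narayanan, which measured such associations in word embeddings. The sketch below only illustrates the flavor of that kind of association test; the tiny hand-made vectors are placeholders, and a real measurement would load trained embeddings (word2vec, GloVe) and use the full statistical test from the paper.

```python
# Rough sketch of a word-association bias test; the 3-d vectors below are
# hypothetical placeholders, not real trained embeddings.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

emb = {
    "he":        np.array([0.90, 0.10, 0.00]),
    "she":       np.array([0.10, 0.90, 0.00]),
    "doctor":    np.array([0.80, 0.30, 0.50]),
    "president": np.array([0.85, 0.20, 0.40]),
    "nurse":     np.array([0.20, 0.85, 0.50]),
}

for word in ("doctor", "president", "nurse"):
    gap = cosine(emb[word], emb["he"]) - cosine(emb[word], emb["she"])
    lean = "male-leaning" if gap > 0 else "female-leaning"
    print(f"{word:<10s} association gap = {gap:+.3f} ({lean})")
```

With these toy vectors, "doctor" and "president" come out male-leaning and "nurse" female-leaning, mirroring the pattern the article describes.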

He added that "there are other biases that arise from systems interacting with each other. Microsoft, for example, developed a chatbot that learned from Twitter tweets. Within 18 hours it had to take the system down, because the conversations there had become racist and insulting. That is because people use Twitter with a certain kind of language, and it also reflects people's tendency to pick such a system apart and probe its limits."

"Another problem of bias can be caused by recommending a product," said Prof. O'Sullivan. "For example, the Netflix system recommends to the viewer films similar to the ones he watched, and only about them. This is how a bubble is formed. This is even more dangerous in the field of news - people only receive the news that corresponds to their political opinion."

What is fair? What is moral?

During the conversation, the two also talked about fairness and morality. Prof. Rossi said that "there are many definitions of fairness. Different notions of fairness suit different contexts and different tasks; a definition of fairness may be reasonable for one task but not for another. Transparency can help, because it lets us know how fairness was introduced into the system."

According to Prof. O'Sullivan, "there is no agreement on what is moral and ethical, because these things depend on social norms. When we come to build ethical technological systems, we trust the processes by which they are produced, but do we trust the corporations that carry them out? We need to think less about ethics and more about the robustness of the systems and the practices of using them." Prof. Rossi said that "we have to let the market do its thing."


5 comments

  1. "We need to let the market do its thing" - with this statement the professor contradicted her words and eliminated the relevance of the moral consideration to the artificial intelligence challenge.
    The "market" has no moral sense. The "market" - both economic and ideological, is of enormous importance if it is free and competitive. But but only if it still operates within the limits of an external ethical framework that does not depend on the internal logic of the market, although free and flexible enough to leave it autonomous enough to function at a competitive level. Otherwise he turns into a monster (- in this case an unsupervised artificial intelligence that can become a threatening or harmful factor), and a monster that the hint saved appears in the first act - kills in the third, if not earlier.

  2. This is the problem with "ethicists". After all, they do not contribute to the vision, and in general they are hired as "pike spokesmen" (someone needs to spread populism to the fearful crowd).
    IBM certainly wants to show that it approaches the issue carefully.
    It should be noted that this is a wonderful company that works hard to be a pioneer, and its leadership of the "brain project" puts it on the way to becoming exactly the opposite of what the lady says it should be.

  3. "The fact that something can be done with artificial intelligence does not mean that it should": if something is possible with artificial intelligence, it will be done. This corresponds to laws 34 and 35 of the Internet.

    Apart from that, until something is actually done it will be impossible to understand all its consequences. For example, I pondered for a moment the possibility of bringing a robot into my home as a servant, and realized I had missed something very important that I had not noticed for decades: Asimov's First Law is flawed, and should be worded like this: "A robot shall not harm a person or a cat, and shall not, through action or inaction, allow a person or a cat to come to harm."

    This has two comments:
    1. Dog lovers are invited to define their dog as a "cat".
    2. There is a fundamental problem with adapting the Second Law in the same way: if it is extended similarly, all of the robot's time from then on will be devoted to scratching cats behind the ears.

  4. You wrote "whether the problem will not be solved when" rather than the more usual phrasing, but what came out is no less correct. Maybe on purpose?
