Comprehensive coverage

"The robots may replace any possible service provider - and this is a legal and legal concern"

Who is responsible if a robot that operates using artificial intelligence causes damage? So asks Dov Greenbaum, director of the Zvi Mitar Institute for Technological Innovation, Law and Ethics ● The question his lecture dealt with was "Is artificial intelligence a tool, or is it a new type of creature that has moral obligations?"

 

Dr. Dov Greenbaum, director of the Zvi Mitar Institute for Technological Innovation, Law and Ethics. Photo: Adi Cohen Tzedek

One of the last enterprises of Attorney Zvi Mitar, who passed away about six months ago, was the establishment of the Zvi Mitar Institute for Technological Innovation, Law and Ethics at the Radziner Law School of the Interdisciplinary Center in Herzliya. A week before his sudden death, he still managed to send the head of the institute, Dr. Dov Greenbaum, a book called "Rise of the Robots", which raises important legal questions such as: what happens if a robot hurts someone - who is to blame? The owner? The programmer? And what if the robot is a learning robot, so that the initial programming is no longer the only factor?

This is what Dr. Greenbaum said, in a lecture entitled "Legal and Ethical Implications in the Artificial Intelligence Arena", as part of the BI conference in memory of Zvi Mitar.

According to Greenbaum, many fear an era in which artificial intelligence will manage every step of our lives. Recently a letter signed by 20 people was published, including celebrities such as Stephen Hawking, who spoke of the beginning of the end of the human race; Noam Chomsky, who fears giving robots too much power; and Elon Musk, who was even quoted as saying that this is "the greatest existential threat to the human race". And these are not people who are simply afraid of technology.

Professions that will disappear - including programming

"The robots may replace every possible service provider: insurance agents, lawyers and even programmers - DARPA together with leading universities in the United States are leading a project in which robots will also program their future generations. We can program artificial intelligence to ignore our requests," Greenbaum explained. "Artificial intelligence is used in an iron dome, and to a certain extent there is a legal, legal concern. We still need people in the loop, because otherwise who is responsible? However, people add mistakes to the system. Another example from the military level is the drones, which today are so smart that not only do they identify suspects, they can also shoot them."

"It turns out that this concern is not new, and it was not just the property of science fiction stories," says Greenbaum. "Alexander Solzhenitsyn, the Russian dissident author, wrote in a book from 1968, a decade after the term artificial intelligence was coined, that he was concerned that the government could collect all our phone calls and use artificial intelligence to know what we say - and that's exactly what the NSA does, and more in the United States.”

"Another aspect of artificial intelligence is the same monkey that grabbed a camera from a researcher and took a selfie. There was a discussion on the question of who owns the copyright, the owner of the camera or the monkey, and since it is probably the monkey, there was a problem because copyright is reserved for people (or corporations). And this is another simple case," Greenbaum explained. What about artificial intelligence that will write books, produce movies"?

What will happen if an artificial intelligence system causes harm

"Recently, a patent was registered in the United States for a machine that will invent patents on its own, operating through technology that provides it with the ability to learn and develop technology on its own. The patent law does not know how to act in such a situation, does the patent belong to the person who developed the machine? Is it the machine itself or nobody? And also tort law in the picture - if I cause harm I am responsible and must compensate for it, but what will happen if an artificial intelligence system causes damage"? Greenbaum asked

"Another topic worth thinking about is what to do in the case of a brain-machine interface," he said. "You insert a chip into a person's brain and the brain controls the device through the chip. What will happen if we add artificial intelligence to the chip and it is actually activated by the subconscious? What will happen if that person unintentionally causes damage because of the chip? Is he guilty? Is the chip"?

"And as for legal responsibility," Greenbaum said, "there are different degrees of responsibility - for example, I have responsibility for my pets, children, and employees - is an artificial intelligence system equivalent to an animal in terms of responsibility? to a child? to the employee? And what about learning machines that can speak, does the protection of freedom of speech apply to them"?

"Up to a certain level we have Asimov's laws of robotics, but they were not invented by a scientist but by a writer, so they are not complete," Greenbaum explained. "Now the European Union is working on a robotics law project - RoboLaw".

In conclusion, Dr. Greenbaum asked, "Is artificial intelligence a tool, or is it a new type of creature that has moral, ethical and legal obligations?"

A question that still has no answer.

 


9 comments

  1. Artificial intelligence will have exactly what we put into it. If we introduce altruism, consideration, etc. - this is what we will get. If we include jealousy, narrow-mindedness and ideological / religious fanaticism - well, we have enough of that. If we put into it optimal utilization of resources and a cold calculation of cost versus benefit - it will eat us all for breakfast.

  2. One should not think about how to stop progress, but about how to integrate it so that there is no catastrophe. In Asimov's books there are creative solutions for human-robot coexistence. The Matrix movie series and the movie Elysium with Matt Damon portray a dark future.

  3. Not really. I usually disagree with religious people about science, even though I am a religious person myself.
    What a large part of the site's readers miss is that artificial intelligence will be like us - with feelings, thoughts and awareness. We are no less a tool in the hands of the establishment than the intelligence is. It is hard for people to accept that software will have self-awareness, but in my opinion it is very true. And why did we invent the robots - so that they would replace the bureaucrats? Within two generations they will replace them too.

  4. My God, how do you appoint a person who asks such stupid questions to the position of head of the Zvi Mitar Institute for Technological Innovation, Law and Ethics at the Radziner Law School of the Interdisciplinary Center in Herzliya?
    How does such a person manage to get a doctorate?
    Did he steal it?
    "To conclude, Dr. Greenbaum asked, "Is artificial intelligence a tool, or is it a new type of creature that has moral, legal and legal obligations?" - Artificial intelligence is not one or the other, it is a man-made device and legally it should remain as such!
    Nothing came of putting robots in prison.
    A robot must have the status of a minor.
    If a minor becomes pregnant, who is responsible for the child according to the law? Three parties: the mother, the man who impregnated her, and the guardians (sometimes there are no guardians, and then the child is the responsibility of its parents).
    "Artificial intelligence that will write books, produce movies" - a machine with artificial intelligence can sometimes be compared to the way the law treats the concept of the surrogate mother.
    If a machine writes a book, a song or produces movies - the ownership belongs to the creator of the machine, whether the machine with the intelligence is robotic or a human minor.
    There is no reinventing the wheel here, you just have to find something to compare these issues to and everything is solved!
    Like small children, they latched onto Asimov's three laws and that's it?
    Is this all that contemporary philosophy is capable of?
    There is nothing new on the subject of law and morality when it comes to "intelligent" machines - because the laws on the subject were written from the beginning regarding "intelligent" machines (we, humans!).

  5. And when Shimit Mentoisen signs you up for an antivirus you don't need at all and charges you 10 shekels for it every month for years, who is responsible? And when Shemanit manages to convince some illiterate grandmother to buy a pair of phablets for five thousand shekels and to sign up for five parallel high-speed internet plans, who is responsible? Who can I get an answer from today? Is it possible to talk at all with any manager who is not bound to the circular scripts created by Limit and the like? Maybe it is better to just give up and let grandma pay for the phablets and forgo the medicines she can no longer afford? And will the judges even care if you file a lawsuit? Will they even make the thieves return their loot, let alone award sufficient compensation for all the endless aggravation? Maybe only if you are connected to a national newspaper with Robin Hood aspirations can you force someone in the company's management to return the money, while promising that "we will look into how it happened". So what is the difference between a bully who runs over grandma over the phone under the auspices of a huge organization, and a robot that runs over grandma physically and leaves her to face an equally huge and faceless organization?

  6. The current article only emphasizes how redundant the legal system itself is becoming in the age of mass media. Just like the milkmen who realized there was no real need for them, it only creates unnecessary chaos and erects barriers that harm the quality of life for all of us.
    It would be better for all of us if the world of judges came to the conclusion that not everything is justiciable, and the courts were reduced to a natural and limited size; even now, operating at full capacity, the results are not impressive.
