On Wednesday, June 14, the European Parliament voted to approve its own draft proposal for the Artificial Intelligence Act, legislation that has been two years in the making and aims to shape global standards for the regulation of artificial intelligence.
By Nello Cristianini, Professor of Artificial Intelligence, University of Bath
These days, the word "risk" often appears in the same sentence as "artificial intelligence" (AI). While it is encouraging to see world leaders considering the potential problems of AI, along with its industrial and strategic benefits, we must remember that not all risks are equal.
After a final round of negotiations, designed to reconcile various drafts produced by the European Parliament, the Commission and the Council, the law should be approved before the end of the year. This will be the first legislation in the world dedicated to regulating artificial intelligence in almost all sectors of society.
Of all the ways in which the regulation of artificial intelligence can be approached, it is worth noting that this legislation deals entirely with the concept of risk. It is not AI itself that is under scrutiny, but the way it is used in specific areas of society, each of which carries different potential problems. The four categories of risk, which are subject to different legal obligations, are: unacceptable, high, limited and minimal.
Systems considered to threaten fundamental rights or the values of the European Union will be classified as posing an "unacceptable risk" and will be banned. An example of such a risk would be AI systems used for "predictive policing": the use of artificial intelligence to make risk assessments of individuals, based on personal information, in order to predict whether they are likely to commit crimes.
A more controversial case is the use of real-time facial recognition through street cameras. This has also been added to the list of unacceptable risks, and its use will be allowed only to identify suspects of a crime after it has been committed, and under a court order.
Systems classified as "high risk" will be subject to a transparency obligation and are expected to be registered in a special database. There will also be various monitoring or auditing requirements.
The types of applications that should be classified as high risk include AI that can control access to services in education, employment, finance, health and other critical areas. The use of artificial intelligence in such areas is not seen as undesirable, but oversight is essential due to its potential to adversely affect safety or fundamental rights.
The idea is that we should be able to trust that any software that makes decisions about our mortgage will be carefully scrutinized under European laws to ensure that we are not discriminated against on the basis of personal details such as gender or ethnic background – at least if we live in the EU.
"Limited risk" AI systems will be subject to minimal transparency requirements. Similarly, operators of generative artificial intelligence systems – for example, bots that produce text or images – will have to declare that users are interacting with a machine.
As the legislative process has advanced through the European institutions since it began in 2019, the legislation has become increasingly specific and explicit about the potential risks of deploying AI in sensitive situations - along with how these can be monitored and mitigated. Much more work remains to be done, but the idea is clear: we need to be specific if we want to enjoy the benefits of artificial intelligence.
Danger of extinction?
By contrast, petitions have recently been published calling for attention to the "risk of extinction" that artificial intelligence allegedly poses, without providing further details. Various politicians have echoed these views. This kind of long-term risk is, as experts have already pointed out, too general, and it is not addressed in the law.
If "risk" is the "expected damage" that may result from something, it is better to focus on possible scenarios that are both harmful and reasonable, because they carry the highest risk. Highly unlikely events, such as an asteroid collision, should not take precedence over more likely events, such as the effects of pollution.
In this sense, the draft legislation just approved by the European Parliament is less glamorous but has more substance than some of the recent warnings about AI. It tries to walk the thin line between protecting rights and values, without preventing innovation, and specifically addressing both the dangers and the way to deal with them. Although the emerging law is far from perfect, it at least provides concrete actions.
The next step in this legislative process will be the trilogue - negotiations in which the separate drafts of the Parliament, the Commission and the Council will be combined into a final text. Compromises are expected at this stage. The resulting law will most likely be adopted at the end of 2023, before the start of the campaign for the next European elections.
The law will then take full effect after a grace period of two or three years, at which point every business operating within the European Union will have to comply with it. This long timeline poses some questions of its own, because we don't know what AI, or the world, will look like in 2027.
Let's remember that the president of the European Commission, Ursula von der Leyen, first proposed this legislation in the summer of 2019, just before a pandemic, a war and an energy crisis. This was also before ChatGPT had politicians and the media regularly talking about the existential risk of artificial intelligence.
However, the law is written in a general enough way that it may help it remain relevant for some time. This may affect how researchers and businesses approach artificial intelligence outside of Europe as well.
What is clear, however, is that all technology carries risks, and instead of waiting for something negative to happen, academic institutions and policy makers are trying to think ahead about the implications of research. Compared to the way we adopted previous technologies - such as fossil fuels - this is a certain degree of progress.