Artificial intelligence: Scientists warn of risks spreading beyond human control

Leading AI scientists are calling for urgent action from world leaders, criticizing the lack of progress since the last AI safety summit. They propose strict policies to oversee the development of artificial intelligence and prevent its misuse.

Regulation of artificial intelligence. Illustration: depositphotos.com

Artificial intelligence experts warn that global action on AI risks remains insufficient and call for stricter governance to prevent potential catastrophes.

Leading AI scientists are calling on world leaders to take more decisive action on AI risks, stressing that the progress made since the first AI Safety Summit at Bletchley Park six months ago has been insufficient.

At that initial summit, world leaders committed to managing artificial intelligence responsibly. But with the second summit in Seoul (May 21-22) fast approaching, twenty-five senior AI researchers say current efforts are not enough to protect against the dangers the technology poses. In a consensus paper published today (May 20) in the journal Science, they propose urgent policy measures to counter the threats posed by AI technologies.

Professor Philip Torr of the Department of Engineering Science, University of Oxford, one of the paper's authors, said: "The world agreed during the last AI summit that action is needed, but now is the time to move from vague proposals to concrete commitments. This paper provides many important recommendations for what companies and governments should commit to doing."

The world's response is inadequate in the face of potentially rapid AI advances

According to the authors, world leaders should take seriously the possibility that extremely powerful general-purpose AI systems - ones surpassing human abilities in many critical domains - will be developed within the current decade or the next. They argue that although governments worldwide have discussed advanced AI and made some attempts to introduce initial guidelines, these efforts fall far short of the rapid, transformative progress that many experts anticipate.

Current AI safety research is severely lacking: only an estimated 1-3% of AI publications address safety. Moreover, we have neither the mechanisms nor the institutions to prevent misuse and recklessness, including the deployment of autonomous systems capable of acting independently and pursuing their own goals.

An urgent call to action from leading artificial intelligence experts

In light of this, an international community of AI pioneers has issued an urgent call to action. The authors include Geoffrey Hinton, Andrew Yao, Dawn Song, and the late Nobel laureate Daniel Kahneman; in total, 25 of the world's leading academic experts in artificial intelligence and its governance. The authors come from the US, China, the EU, the UK, and other AI powerhouses, and include Turing Award winners, Nobel laureates, and authors of standard AI textbooks.

Urgent priorities for artificial intelligence governance

The authors recommend that governments:

  1. Establish specialized institutions to oversee artificial intelligence and give them far greater funding than almost any existing policy plan provides.
  2. Mandate much more rigorous risk assessments with enforceable consequences, rather than relying on voluntary or underspecified model evaluations.
  3. Require AI companies to prioritize safety and to demonstrate that their systems cannot cause harm.
  4. Apply mitigation standards commensurate with the levels of risk posed by AI systems.
  5. Prepare to regulate exceptionally capable future AI systems, including licensing their development, restricting their autonomy in key societal roles, halting their development and deployment in response to worrying capabilities, mandating access controls, and requiring information security robust to state-level hackers.

The effects of artificial intelligence could be catastrophic

Artificial intelligence is already making rapid progress in critical domains such as hacking, social manipulation, and strategic planning, and may soon pose unprecedented challenges of control. AI systems could win people's trust, acquire resources, and influence key decision-makers. To avoid human intervention, they could copy their algorithms across global server networks. Large-scale cybercrime, social manipulation, and other harms could escalate rapidly. In open conflict, AI systems could autonomously deploy a variety of weapons, including biological ones. There is therefore a very real possibility that unchecked advances in artificial intelligence will culminate in large-scale loss of life, damage to the biosphere, and the marginalization or extinction of humanity.

Professor Stuart Russell of UC Berkeley's computer science department, author of the world's standard textbook on artificial intelligence, said: "This is a consensus paper by leading experts, and it calls for strict regulation by governments, not voluntary codes of conduct written by industry. It's time to get serious about advanced AI systems. These are not toys. Increasing their capabilities before we understand how to make them safe is utterly reckless. Companies will complain that it's too hard to satisfy regulations - that 'regulation stifles innovation.' That's ridiculous. There are more regulations on sandwich shops than there are on AI companies."

Reference: "Managing extreme AI risks amid rapid progress", Science, 20 May 2024.
DOI: 10.1126/science.adn0117
