Over a thousand artificial intelligence experts and technology-industry executives, the most prominent of whom are Elon Musk, CEO of Neuralink, and Apple co-founder Steve Wozniak, have signed an open letter issued by the Future of Life Institute. The letter calls for the development of advanced artificial intelligence to be put on hold until common safety protocols for such systems are developed, reviewed by independent experts, and implemented.
The institute, co-founded by MIT physicist Max Tegmark and Skype co-founder Jaan Tallinn, has been among the most vocal organizations calling for greater regulation of the use of AI.
The signatories of the letter call for a six-month pause in the development of systems more powerful than OpenAI's new GPT-4. "Strong AI systems should only be developed once we are sure that their effects will be positive and their risks will be manageable," the letter said.
In the letter, they list the main risks inherent in artificial intelligence technology:
- Autonomous weapons: Artificial intelligence can be used to create autonomous weapons that can operate without human supervision, leading to the possibility of unintended harm.
- Surveillance: Artificial intelligence can be used to create sophisticated surveillance systems that can invade people's privacy and violate their civil liberties.
- Cyber security: Artificial intelligence can be used to create new cyber threats, such as deepfakes, that can be used to manipulate people and undermine trust in institutions.
- Unemployment: AI could lead to job losses, especially in industries that rely heavily on manual labor or routine tasks.
- Bias and discrimination: AI algorithms may inadvertently perpetuate or exacerbate social biases and discrimination, leading to unfair treatment of certain groups.
- Control: Artificial intelligence could lead to the concentration of power in the hands of a few people or organizations, which could undermine democratic values and principles.
Strong artificial intelligence can endanger democracy in several ways. It could be used to harness data and algorithms to sway public opinion and influence elections. Companies and governments could also use it to spread false information and shape the actions and decisions of the public and of other governments. In addition, artificial intelligence could deepen inequality in access to information and to new technologies, concentrating information and influence in the hands of a small number of people or companies. It is therefore important to understand the potential risks of strong artificial intelligence and to act to implement safety protocols and oversight of the use of this technology.
Musk has often stated that artificial intelligence has the potential to surpass human intelligence and become a threat to humanity. He compared artificial intelligence to "summoning the demon" and warned that artificial intelligence could lead to an existential threat to humanity if not properly regulated.
In 2015, Musk co-founded OpenAI, a research company dedicated to the development of artificial general intelligence. However, in 2018, Musk resigned from OpenAI's board, citing concerns about the organization's direction and potential conflicts of interest.
In 2016, Musk founded Neuralink, a company focused on developing technology that can merge the human brain with artificial intelligence. Neuralink's goal is to create a direct interface between the human brain and computers, allowing humans to control technology through their thoughts. The ultimate goal is to enhance human intelligence and capabilities through artificial intelligence.
Musk's involvement in artificial intelligence stemmed from his concerns about the potential risks it poses to humanity. He co-founded OpenAI and founded Neuralink to ensure that AI is developed in a way that benefits humanity and that its potential risks are minimized. His resignation from OpenAI, however, suggests he was unhappy with the company's direction and believed more needed to be done to regulate the use of artificial intelligence.
Steve Wozniak, the co-founder of Apple, has also repeatedly expressed his concerns about artificial intelligence.
More on this topic on the Hayadan website:
- The European Union and the United States are promoting moves to address ethics in artificial intelligence
- A report submitted for public comment seeks to apply ethics rules to the field of artificial intelligence
- Artificial intelligence such as ChatGPT will bring about a change in the way we handle texts