
Elon Musk, Steve Wozniak and many others in an open letter: Strong artificial intelligence is dangerous for democracy

The signatories of the letter call for a six-month pause in the development of systems more powerful than OpenAI's new GPT-4. "Strong AI systems should only be developed once we are sure that their effects will be positive and their risks will be manageable," the letter reads.

The image was produced using Artssy AI software, using the keywords: artificial intelligence, autonomous weapons, surveillance, cyber threats, unemployment, threat to democracy.

Over a thousand artificial intelligence experts and technology industry executives, the most prominent of whom are Elon Musk, CEO of Neuralink, and Steve Wozniak, one of the founders of Apple, have signed the open letter. The letter, issued by the Future of Life Institute, called for the development of advanced artificial intelligence to be put on hold until common safety protocols for such systems are developed, reviewed by independent experts and implemented.

The institute, co-founded by MIT physicist Max Tegmark and Skype co-founder Jaan Tallinn, has been among the most vocal organizations calling for greater regulation of the use of AI.


In the letter, the signatories list the main risks inherent in artificial intelligence technology:

  • Autonomous weapons: Artificial intelligence can be used to create autonomous weapons that can operate without human supervision, leading to the possibility of unintended harm.
  • Surveillance: Artificial intelligence can be used to create sophisticated surveillance systems that can invade people's privacy and violate their civil liberties.
  • Cyber security: Artificial intelligence can be used to create new cyber threats, such as deep-fakes, that can be used to manipulate people and undermine trust in institutions.
  • Unemployment: AI could lead to job losses, especially in industries that rely heavily on manual labor or routine tasks.
  • Bias and discrimination: AI algorithms may inadvertently perpetuate or exacerbate social biases and discrimination, leading to unfair treatment of certain groups.
  • Control: Artificial intelligence could lead to the concentration of power in the hands of a few people or organizations, which could undermine democratic values and principles.
Elon Musk announcing the Tesla robot, 2021. Illustration: depositphotos.com

Strong artificial intelligence can endanger democracy in several ways. Data and algorithms can be used to manipulate public opinion and sway political elections. Companies and governments can use it to spread false information and steer the actions and decisions of the public and of other governments. It can also create inequality in access to information and to new technologies, concentrating information and influence in the hands of a small number of people or companies. It is therefore important to understand the potential risks of strong artificial intelligence and to act to implement safety protocols and oversight of the use of this technology.

Musk has often stated that artificial intelligence has the potential to surpass human intelligence and become a threat to humanity. He compared artificial intelligence to "summoning the demon" and warned that it could pose an existential threat to humanity if not properly regulated.

In 2015, Musk co-founded OpenAI, a research company dedicated to the development of general artificial intelligence. However, in 2018, Musk resigned from OpenAI's board, citing concerns about the organization's direction and potential conflicts of interest.

In 2016, Musk co-founded Neuralink, a company focused on developing technology that can merge the human brain with artificial intelligence. Neuralink's goal is to create a direct interface between the human brain and computers, allowing humans to control technology through their thoughts. The ultimate goal is to augment human intelligence and capabilities through artificial intelligence.

Musk's involvement in artificial intelligence stemmed from his concerns about the potential risks it poses to humanity. He co-founded OpenAI and Neuralink to ensure that AI is developed in a way that benefits humanity and that its potential risks are minimized. However, his resignation from OpenAI suggests he was unhappy with the company's direction and believed more needed to be done to regulate the use of artificial intelligence.

Steve Wozniak, the co-founder of Apple, has also repeatedly expressed his concerns about artificial intelligence. 

To the open letter


9 comments

  1. To this day we do not understand what the mind really is. The ancients did not even acknowledge its existence and had no explanation for its role or function; as far as they were concerned, it could have been replaced with a pile of hay. We are still groping in the dark regarding its function and capabilities. This refers to the evolutionary brain shared by all creatures on Earth, and we have not even mentioned the possibility that plants, too, have some form of consciousness.

  2. Sorry, I think you are trying to describe the term qualia, or subjective feeling.
    And you imply that the machine does not necessarily have consciousness.

    But even without consciousness it can have intelligence, and the danger comes from the intelligence, not from consciousness.

    P.S. Even without intelligence there is danger, just as plants with no nervous system manipulate insects that do have one, getting them to pollinate without rewarding them with nectar, by mimicking a plant that has nectar. In the same way, an artificial intelligence that is not particularly smart can manipulate smart humans; see the entry on social media algorithms. The term is taken from Eric Einstein's "Outtelligence" on the scale of intelligence.

  3. The real problem is that such programs can pass the "Turing test", not because the programs are "smarter", but because people are becoming less smart.

    People like Nostradamus and Asbar are a good example of this...

  4. It is rather strange that an agent of chaos like Musk warns against artificial intelligence. In principle he is right, there should be regulation in the field. Ummm, the threat to democratic processes already exists, and there are quite a few people who pose it without artificial intelligence: from the fake news of the crazy right in the USA, through the conspiracies spread on Rogan's podcast, to the spins on which Netanyahu built his career.
    There's another fear that people at Musk's level don't talk about, and that's the loss of relevance.

  5. This is not even close to artificial intelligence, just a tool for media design and a convenient Google search. It will only be real artificial intelligence when a piece of software can create a more advanced software intelligence by itself, and so on.

  6. Artificial knowledge and natural knowledge
    From touching a block of ice comes a natural knowledge, whose agreed name is formed from the combination of the letters K-R (kar, the Hebrew word for "cold").
    Such natural knowledge will reach every person who touches a block of ice.
    It is impossible to describe such natural knowledge in words, because every word is just a combination of letters, and there is no knowledge in a combination of letters.
    A combination of letters has only one simple function, and that is to serve as the name of something.

    The language of words is noise created from combinations of letters, and the only function of a combination of letters is to be a name.
    The combination of the letters K-R is the name of a natural knowledge that comes to a person following contact with a block of ice.
    It is impossible to convey, with the help of other combinations of letters, the natural knowledge that comes to a person following contact with a block of ice.
    Only the act of touching a block of ice brings to the doer a clear knowledge whose agreed name is the combination of the letters K-R.

    Knowledge that comes to a person following an action he performs is the most obvious knowledge, and each such knowledge has an agreed-upon name created from an arbitrary combination of letters.
    Such knowledge is natural knowledge.

    A device whose conventional name is a thermometer, when it touches a block of ice, will never obtain the natural knowledge whose agreed name is made up of the combination of the letters K-R.
    Artificial knowledge comes to the device that touches the block of ice.
    Natural knowledge comes to a person who touches a block of ice.

    Natural knowledge is a wonder, and there is no way to describe this wonder with the help of combinations of letters.
    Man is born with natural knowledge, and the ant also has its own natural knowledge.
    The device has no natural knowledge.
    The device has artificial intelligence.

    There is an abysmal difference between human knowledge and machine knowledge.
    There is an abysmal difference between natural intelligence and artificial intelligence.
    There is an abysmal difference between combinations of letters that are agreed names for natural knowledge that comes to a person following an action he performs, and mere combinations of letters that result from human agreement (definitions).
    The human language of words is noise created from combinations of letters, where the only function of a combination of letters is to serve as a name.
    A. Asbar

  7. The fear is that the AI programs will also find out in the end that some of our leaders are stupid.

  8. As her name is, she is God forbid. Remember now that it is already clear. But it's hard to trust them.
