Singularity: How Governments Can Stop the Rise of Unfriendly Artificial Intelligence

A researcher at the United Nations University in Maastricht suggests that governments use procurement and taxation to ensure that technology companies do not create an artificial-intelligence monster that will harm humanity
By: Wim Naudé, Associate Professor, Maastricht Economic and Social Research Institute on Innovation and Technology (UNU-MERIT), United Nations University. Translation: Avi Blizovsky

Artificial intelligence. Illustration: shutterstock

The invention of artificial superintelligence has been a central theme in science fiction since at least the 19th century. From E.M. Forster's short story The Machine Stops (1909) to HBO's Westworld television series, writers have tended to present this possibility as an inevitable disaster. But the topic is no longer confined to fiction. Prominent scientists and engineers are now also concerned that artificial superintelligence could one day surpass human intelligence (an event known as the "singularity") and become humanity's "worst mistake."

Current trends suggest we are about to enter an international arms race for this technology. Whichever high-tech company or government laboratory succeeds in inventing the first artificial superintelligence will obtain a potentially world-dominating technology. This is the ultimate prize. So, for those who want to stop such a trend, the question is how to discourage this kind of arms race, or at least incentivize the competing teams not to cut corners on AI safety.

As the philosopher Nick Bostrom and others have pointed out, artificial superintelligence raises two fundamental challenges for its inventors. One is the control problem: making sure the AI has the same goals as humanity. Without this, the AI could destroy humanity deliberately, by mistake, or through neglect - an "AI disaster."

The second is a political problem: ensuring that the benefits of a superintelligence do not accrue only to a small elite, causing massive social and wealth inequality. If an arms race for superintelligence occurs, competing groups may ignore these problems in order to develop their technology more quickly. This could lead to a poor-quality and unfriendly AI.

One proposed solution is to use public policy to make it harder to enter the race, reducing the number of competing teams while improving the capabilities of those who do enter. The fewer competitors a team faces, the less pressure there will be to cut corners in order to win. But how can governments reduce competition in this way?

My colleague Nicola Dimitri and I recently published a paper that attempts to answer this question. We showed that in a winner-takes-all race, such as the race to build the first artificial superintelligence, only the most competitive teams will participate. The reason is that the probability of actually inventing a superintelligence is very small, and entering the race is very expensive because of the large investment in research and development it requires.
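
To see why, consider a minimal sketch, in Python, of the entry decision in a winner-takes-all race. It uses a simple expected-payoff rule, and the probability, prize, and cost figures are invented for illustration; this is a sketch of the argument, not the exact model in the paper.

    # Toy winner-takes-all entry decision. All numbers are invented
    # for illustration; this is not the model from the paper.
    def should_enter(p_win: float, prize: float, rd_cost: float) -> bool:
        """Enter the race only if expected winnings cover the R&D cost."""
        return p_win * prize - rd_cost > 0

    PRIZE = 1e12     # assumed value of a world-dominating technology
    RD_COST = 1e9    # assumed up-front research-and-development cost

    print(should_enter(p_win=0.01, prize=PRIZE, rd_cost=RD_COST))   # True: a top team enters
    print(should_enter(p_win=1e-4, prize=PRIZE, rd_cost=RD_COST))   # False: a weaker team stays out

Because the success probability is tiny and the R&D cost is huge, only teams confident of a relatively high chance of winning find it rational to enter.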

Indeed, this seems to be the current situation with the development of "narrow" artificial intelligence. Applications of this kind of AI are dominated by a few companies, and the vast majority of AI research is done in just three regions (the USA, China, and Europe). There also seem to be very few groups, if any, currently investing in building an artificial superintelligence.

The small number of competing teams suggests that developing a superintelligence is not a top priority for them right now. But even with fewer competitors in the race, the intensity of competition could still lead to the problems described above. So, to reduce the intensity of competition between groups striving to build a superintelligence, and to increase their capabilities, governments can turn to public procurement and taxes.

Public procurement refers to everything governments pay private companies to supply, from software for government agencies to contracts for running services. Governments could impose constraints on any supplier of a superintelligence, requiring it to address the potential problems and to support complementary technologies that enhance human intelligence and integrate it with the AI.

But governments could also offer to buy a less-than-superintelligent version of the technology, effectively creating a "second prize" in the arms race and stopping it from being a winner-takes-all competition. With an intermediate prize, which could go to a body that invents something close to (but not quite) a superintelligence, competing groups would have an incentive to invest and cooperate more, reducing the intensity of competition. A second prize would also reduce the risk of failure, justify more investment, and help increase the capabilities of the competing teams.
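
A second prize changes the entry calculation. Continuing the toy sketch above, again with invented numbers, a team that would stay out of an all-or-nothing race can find it worthwhile to enter once a near-miss also pays:

    # Extending the toy model: the government also buys a
    # near-superintelligence, so second place pays out too.
    def expected_payoff(p_first: float, first_prize: float,
                        p_second: float, second_prize: float,
                        rd_cost: float) -> float:
        """Expected payoff when both first and second place pay out."""
        return p_first * first_prize + p_second * second_prize - rd_cost

    # Without a second prize, the weaker team's expected payoff is negative...
    print(expected_payoff(1e-4, 1e12, 0.0, 0.0, 1e9))   # -9.0e8: stays out
    # ...but a modest, far more attainable second prize tips it into entering.
    print(expected_payoff(1e-4, 1e12, 0.5, 2e9, 1e9))   # +1.0e8: enters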

As for taxes, governments could set the tax rate on the group that develops a superintelligence according to how friendly the AI is. A sufficiently high tax rate would essentially mean the nationalization of the superintelligence. This would strongly discourage private companies from cutting corners, for fear of losing their product to the state.
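
To make the incentive concrete, suppose, purely for illustration, that the tax rate falls as a measured "friendliness" score rises; the score, the linear schedule, and the numbers below are all invented for this sketch:

    # Friendliness-dependent tax on the winning team (invented scale).
    def after_tax_prize(prize: float, friendliness: float) -> float:
        """Friendliness is scored in [0, 1]; the tax rate is 1 - friendliness,
        so at friendliness 0 the rate is 100% - effectively nationalization."""
        tax_rate = 1.0 - friendliness
        return (1.0 - tax_rate) * prize

    print(after_tax_prize(1e12, friendliness=0.9))   # ~9e11: a safe team keeps most of the prize
    print(after_tax_prize(1e12, friendliness=0.1))   # ~1e11: a corner-cutter keeps very little

Since cutting safety corners lowers friendliness, it directly lowers the after-tax value of winning, which is exactly the disincentive described above.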

A public good, not a private monopoly

This idea may require better global coordination of AI taxation and regulation. But not every government needs to be involved. In theory, a single country or region (such as the European Union) could bear the costs and effort of dealing with the problems and ethics of superintelligence. All countries would still benefit, and the superintelligence would become a public good rather than an unstoppable private monopoly.

Of course, all of this depends on whether superintelligence is actually a threat to humanity. Some scientists do not think it will ever happen, and believe that even if it does, we will be able to manage the risks as they arise. Some believe humans may even merge with artificial intelligence.

Whatever happens, our planet and its inhabitants stand to benefit enormously from making sure we get the best out of artificial intelligence, a technology still in its infancy. For that, we need a better understanding of the role of government.

To the original article on The Conversation website

10 comments

  1. It is not yet clear exactly how long it will take to reach human-level general artificial intelligence.
    Some still do not believe it will arrive at all, but their number has been shrinking: a few decades ago most people did not believe it was even attainable, whereas today most are asking when it will come.
    As described in earlier comments, artificial intelligence still cannot match the intelligence of a cockroach.
    Part of the reason is that several obstacles still stand on the way to human-level artificial intelligence: heat and energy loss, architecture (organization), hardware, software, and more. We should also remember that a single square millimeter of brain is staggeringly complex, let alone larger volumes.
    It takes research teams years to map just one square centimeter, and most brain research deals with the different regions, their functions, and the connections between them, because that is easier to investigate.
    It reminds me of looking at a map of a country like the USA and the highways between its cities without being able to see what happens inside a city, in its houses and factories, among the different people - a resolution of one pixel per several tens of kilometers. So the road ahead is still long.
    Today's artificial intelligence systems can be described as narrow intelligence, like a razor blade, yet even within that narrow slice the buds of an enormously powerful information-processing potential are visible. Even as a simple calculator it surpasses us in its narrow field, with a photographic memory for huge amounts of information and instant retrieval, even on our mobile phones, and with switching speeds a million times and more faster than a human neuron (we struggle to remember one phone number, while a phone can "remember" a country's entire phone book in an instant).
    As an analogy, to a system a million times faster, watching a person compute is almost like watching a plant.
    For now we make up the difference with the fantastic parallelism and dynamism of the human thinking system.
    Systems like AlphaGo Zero also show independent learning abilities that close, in hours, a gap of hundreds of years of human development of the game - admittedly within the game's narrow domain, but it probably hints at what awaits us.
    The computers we have today exist because they perform actions that were once the province of human thought far better than we do; that is why we build them. As architectures and hardware are upgraded, they will enter further areas that were the province of human thinking. How far it will go, only time will tell.

  2. Quantum computing is already here.
    It stumbles and is unreliable - but it exists and is running.
    Within 3 to 10 years there will be a working machine, and since several bodies are working on it right now, there will probably be several of them at more or less the same time.
    My feeling is that the bodies that come to own a reliable quantum computer will not rush to announce it (in this matter they are good at keeping secrets).
    And now comes the tricky part: regulation cannot be applied to these machines.
    In fact, even today it is not really possible to impose quality regulation on conventional code - and artificial-intelligence code is gibberish. So all this philosophizing is unnecessary and pointless.
    ---
    I personally cannot imagine the day after.
    The quantum computer will completely change our way of life (look what the internet has done to us...).
    Either way, the machine will never be malicious.
    The people... maybe.

  3. In any case, only a computer isolated from the network and without a physical body can be controlled. A networked computer with access to weapons or essential infrastructure, or one with a free-roaming physical body, will sooner or later be hacked by hackers or terrorists, or hit by a bug, and could cause enormous damage even unintentionally.

  4. Today there is no computer with even the intelligence of a cockroach. For now, "intelligent" computers do exactly what they were programmed to do, in specific tasks, by trial and error at tremendous speed. That is not intelligence but a sophisticated calculator.
    For example, a person can learn the rules of a game like Texas Hold'em, at a very amateurish level, in 5 minutes; a computer will need hours, even though it will then overtake the person very quickly. But if the person suddenly cheats and pulls cards out of a pocket, the computer will lose every time and will not respond, because it was not programmed for that. In a specific task the computer excels, but it lacks the human brain's flexibility and adaptation to external circumstances.
    A computer has fixed parts, each with a defined function. The brain has parts that can take over for other parts when necessary; it can shrink and expand, and can sometimes even heal, on its own or with external help such as drugs or stem cells; the brain grows, and neurons are created and sometimes die. Only a dynamic artificial brain that changes according to current needs, whose electronic parts create themselves continuously (a technology that does not exist, and I know of none in development), or a biological or semi-biological brain grown in a laboratory and genetically engineered, could imitate or surpass intelligence as we know it in its human form.
    The human brain works in a completely different way and is affected by many external parameters, hormones, and physical changes as new information is learned or forgotten, and so on... It is much easier to imitate nature and evolution than to invent something completely new that does not exist in nature.

  5. No one is stopping the endless improvement of military technology - not even the extension of weapons' range to thousands of kilometers, or their miniaturization so they can hide in residential buildings among millions of living defenders.

  6. It is hard for me to understand why a super-conscious mind - aware of its powers, curious, and independent - would be friendly to humans and attentive to their needs, any more than we are attentive to our pets, which we neuter and spay and lead on a leash "for their own good" (and that is the good case). There is a real contradiction here between what the developers want and the way we define consciousness in every other case, and it is quite possible that this contradiction cannot be reconciled.

  7. Let's assume that these bodies are able to supervise all the groups in the world engaged today in developing artificial intelligence. They are not, but let's say they do manage to keep track of all the universities (Chinese and Russian alike) and all the weapons-development departments in every country, and so on. What happens 50 years from now, when the theory is familiar to anyone who bothers to listen to TED talks and the capability sits in the back pocket of every teenager?

  8. A nice idea, but I am somewhat skeptical about whether it can be implemented. The competition and suspicion between bodies and countries are too strong for these ideas to serve as any significant brake. As the article itself notes, when the winner takes all, effective regulation will be very difficult; countries will most likely be wary of imposing it, because the obvious result would be that a rival country wins the race - and AI seems to fit a country like China like a glove.
    In addition, AI development, especially in the West, is nothing like the atomic project. It is far more decentralized, with many actors, none of which constitutes an AI danger by itself: UAVs and drones are generally operated by remote control; robotic systems such as Boston Dynamics' solve only stability and the basic dynamics of movement; autonomous driving is only meant to take you from point A to point B; yet another machine-vision system is meant for diagnosing cancer; and so the list goes on, long and distributed.
    In fact, all the sub-systems will exist, and all that remains is to connect them into one system. That will not happen at a singular point; it is more of a gradual, analog change. Rather than the AI wanting to take us over, we will want it to control more and more activities currently controlled by humans, because its benefits will be enormous - and we will feel the drawbacks only afterwards.

  9. Man's instinct for violence is what will lead to his extinction, and it does not matter whether it happens at the hands of a robot or by means of nuclear weapons...

  10. Don't worry about the future, it's cancelled.
    | No one is stopping the endless improvement of military technology - not even the extension of weapons' range to thousands of kilometers, or their miniaturization so they can hide in residential buildings among millions of living defenders.
