
A post-apocalyptic think tank

The "Centre for Existential Risk Research" (CSER) of the University of Cambridge was founded in 2012. He develops scientific methods for assessing global risk factors. The organization examines threats arising from developing technologies, dangers that no one has yet considered.

End of the world: urban art, Mulhouse, France, July 10, 2014. NeydtStock / Shutterstock.com

The "doomsday clock" moved forward this year for the first time since 2012. This theoretical countdown to the catastrophe was invented 67 years ago by the members of the "Bittown of Atomic Scientists", a group that was incorporated in 1945 to stand watch and included scientists who participated in the "Manhattan Project". Its current members moved the hands of that clock forward three minutes due to the threat of climate change and the slowing down of nuclear disarmament in the world.

But global warming and the nuclear threat are not the only dangers facing humanity. One organization examines threats arising from emerging technologies, dangers that no one has yet considered. The Centre for the Study of Existential Risk (CSER) at the University of Cambridge was founded in 2012. It develops scientific methods for assessing global risk factors in order to determine, for example, whether a scenario in which robots take over the Earth is strictly the stuff of science fiction or a genuine real-world possibility. Some of the greatest minds in the world, including Stephen Hawking, Jaan Tallinn (one of the founding engineers of Skype) and the philosopher Huw Price, contribute their expertise.
Scientific American's Erin Biba sat down with one of the center's founders, the astrophysicist Lord Martin Rees, to talk about the prospects for life as we know it. Here are edited excerpts from the conversation.

Why is it necessary to establish a group that delves into the dangers associated with new technologies?
Throughout human history, our ancestors faced dangers: plagues, storms, earthquakes and man-made disasters. But this century is different. It is the first century in which one species, our species, can determine the fate of the planet, threaten civilization and endanger the existence of future generations.

What kind of scenarios are you looking at?
At the moment, experts hold very different opinions about both the probabilities and the effects. Climate scientists disagree about whether there are points of no return beyond which catastrophe could occur. There is a huge range of opinion among experts in artificial intelligence: some believe that human-level artificial intelligence with a will of its own (goals independent of human goals) could develop by the middle of the century; others believe the chance of this is very slim and argue that we should focus our concerns on the ethics and safety of dumb autonomous robots (such as military drones). And there is already a lively debate over the questions at the forefront of biotechnology. I hope that CSER will help forge a stronger general consensus about which dangers are the most real and help put them on the agenda.

What are the main dangers facing humanity, in your opinion, and how serious are they?
Personally, I am pessimistic about the community's ability to cope with biotechnological developments. As is well known, at the Asilomar Conference held in the 1970s, the pioneers of molecular biology formulated guidelines for recombinant DNA research. But issues of this kind arise even more sharply today. There is now debate and concern about the ethics of, and the caution required with, new techniques: "gain of function" experiments on viruses and the use of the gene-editing technology known as CRISPR. Compared with the 1970s, today's community of scientists in the field is more global, more competitive and more subject to commercial pressures. I am afraid that anything that can be done will eventually be done by someone, somewhere. Even if there are official, agreed-upon protocols and regulations, they will be as hard to enforce as laws prohibiting the use of drugs. Biotechnological error and biotechnological terrorism top my personal list of medium-term dangers (the next 10 to 70 years).

Is there anything that people fear unjustifiably?

Many residents of the developed world worry too much about minor dangers (carcinogens in food, low-level radiation exposure, plane crashes and so on). Some are too afraid of an asteroid strike, which is one of the best-understood natural hazards and the easiest to estimate. What's more, we will soon be able to reduce this risk by diverting asteroids from collision courses with Earth. That is why I support the B612 Foundation's Sentinel project.

What should worry us more are new threats. These certainly deserve more attention, and they are what CSER wants to investigate. An important rule is that the unfamiliar is not the same as the improbable. The stakes are so high that even if we manage to reduce the chance of disaster by only one part in a million, we will have done our part.
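To see the arithmetic behind that last claim, here is a minimal back-of-the-envelope sketch in Python. The population figure and the future-generations multiplier are illustrative assumptions, not numbers from the interview.

    # Expected value of a tiny reduction in existential risk.
    # All numbers are illustrative assumptions, not figures from the interview.
    world_population = 7e9    # assumed population at risk (a mid-2010s figure)
    risk_reduction = 1e-6     # "one part in a million" cut in disaster probability

    # Expected lives saved by that tiny probability shift:
    expected_lives_saved = world_population * risk_reduction
    print(f"Expected lives saved: {expected_lives_saved:,.0f}")  # 7,000

    # Counting future generations (an assumed, very conservative 10x multiplier)
    # makes the expected benefit of even a minuscule reduction far larger:
    future_multiplier = 10
    print(f"Including future generations: {expected_lives_saved * future_multiplier:,.0f}")

On this expected-value logic, even a vanishingly small probability shift is worth real effort because the downside it guards against is the loss of everything.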

 


 

The article was published with the permission of Scientific American Israel

8 comments

  1. Haim, there is currently a strong trend of building robots and computer systems on the basis of learning systems and artificial intelligence. The researchers themselves often admit that they do not know in advance what the results will be; only after the system has gone through a learning process do they see how it reacts, and it often surprises them with its reactions.

    See, for example, the neural network reported only a few months ago that learned entirely on its own how to play a wide variety of computer games, in which it had, among other things, to attack targets and even protect itself from missiles and bombs so as not to be destroyed (eliminated): exactly what you claim is impossible. A toy sketch of that kind of learning appears at the end of this comment.

    I would like to see you try to "cut the power" to a robot standing in front of you that is several times bigger and stronger than you, and probably no less intelligent, while it tries to protect its existence. I don't think it will be that easy…

    I think that an artificial intelligence system built on the principles of our brain's operation (neural networks) will be able to understand, to be intelligent, and even to act against whoever it believes might disturb it or threaten its existence.
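    To make that kind of trial-and-error learning concrete, here is a minimal tabular Q-learning sketch in Python. It is a toy stand-in for the deep network mentioned above, not that system: the "dodge the missile" game, its size and all of its parameters are invented for illustration.

        import random

        # Toy game: the agent stands on one of 5 cells; each turn a missile is
        # aimed at one cell. Reward: +1 for surviving the turn, -10 for being hit.
        # Nothing about "dodging" is programmed in; it must emerge from reward alone.
        N_CELLS = 5
        ACTIONS = (-1, 0, 1)                 # step left, stay, step right
        ALPHA, EPSILON = 0.5, 0.1            # learning rate, exploration rate
        q = {}                               # Q-table: ((agent, missile), action) -> value

        def qval(state, action):
            return q.get((state, action), 0.0)

        random.seed(0)
        for episode in range(20_000):
            agent = random.randrange(N_CELLS)
            missile = random.randrange(N_CELLS)  # cell the missile will strike
            state = (agent, missile)
            # Epsilon-greedy: mostly exploit learned values, sometimes explore.
            if random.random() < EPSILON:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: qval(state, a))
            new_pos = min(max(agent + action, 0), N_CELLS - 1)
            reward = -10.0 if new_pos == missile else 1.0
            # One-step Q-learning update driven by the reward signal alone.
            q[(state, action)] = qval(state, action) + ALPHA * (reward - qval(state, action))

        # The agent has learned to step off a cell the missile is aimed at:
        print(max(ACTIONS, key=lambda a: qval((2, 2), a)))  # prints -1 or 1, never 0

    Even in this tiny example the programmer specifies only the reward, not the behavior: what the agent ends up doing is discovered during training, which is exactly why researchers are sometimes surprised by the result.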

  2. reader
    These abilities of the robots are a result of the software built for them. What you put in is what you get. As a reminder, recall Chaplin's unforgettable film "Modern Times", in the scene where a machine feeds the worker: during a malfunction, screws end up on the plate of the feeding machine, and the machine feeds them to the unfortunate worker. What I mean is: will the computer, by its own independent decision and without software, say "I don't want to" or "I don't feel like it"? Is it capable of that? Take a highly sophisticated computer or robot, unplug it, and what happens to it? Take an animal and threaten it: it will defend itself, flee or attack. And again, without the software you prepared, will the computer understand that it is in existential danger? I remember that years ago people used the expression Garbage In, Garbage Out. Think about it and you will see what I mean. Try thinking in another direction, about activating defense systems, and the decision to use those systems to destroy an enemy city. Will the computer refuse an order on humanitarian grounds? And if you threaten it with Form 630 (remember your regular army service?), will it understand what that is about? I doubt it.
    reader,
    You don't always have to spell out such a detailed explanation; you also need to read between the lines to understand the writer's intention. As someone who understands a little about science and engineering (I am not an engineer), I allow myself to say that not everything is technology.

  3. Haim, there are already robots today that make decisions completely independently, even in operating missile weapons and defense systems, so your comment is entirely unclear.

  4. To say that robots will take over the world? Well. As long as a robot cannot say no, such a gloomy vision will never be realized. What we must think about are questions such as a volcanic eruption so intense that its ash covers the Earth. Another example, somewhat from the realm of science fiction but plausible: an international team flies to Mars and, on its return to Earth, carries with it unfamiliar bacteria; the bacteria spread quickly and tens of millions perish. What do we do? The familiar apocalyptic example is the flood story, found in every mythology, which is what gives it historical credibility. Let us imagine there were evidence that the polar ice will melt within six months: what would that mean for the coastline, and how do you move millions of people, in an orderly way, inland and to high ground? Preparing, yes; but heading toward the apocalyptic may lead to thinking in the realm of science fiction.
