
The artificial neural network that developed structures like those found in the human brain

Last month, researchers at DeepMind - one of the most advanced companies in the world in the development of artificial intelligence - published a new study in the prestigious scientific journal Nature, about an artificial neural network that developed structures similar to those found in a biological brain.

Neural network. Illustration: shutterstock

To explain what this is about, we first need to explain a little about artificial neural networks. These are computer simulations in which millions of simple computing units communicate with one another, much as the nerve cells in the brain communicate with each other. Each of the cells in the biological brain is a relatively simple machine without its own intelligence, but the coordinated action of billions of nerve cells together creates thought, emotion and even self-awareness. Similarly, when millions of computing units pass information to one another inside a computer, they can be trained to perform tasks that biological brains are particularly well suited to: image processing, for example, or spatial navigation.
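
To make the idea of "simple units that learn by passing messages" concrete, here is a minimal, self-contained sketch in Python (using numpy). It has nothing to do with DeepMind's model; it simply trains a tiny two-layer network, by repeatedly adjusting connection strengths, to compute the XOR function - a task no single unit can solve on its own. All the sizes and the learning rate are arbitrary illustration choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR of two binary inputs - unsolvable by any single "cell" alone.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Connection strengths between two layers of simple units.
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(20000):
    # Forward pass: messages flow from the inputs, through hidden units, to the output.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: every connection is nudged to reduce the prediction error.
    delta_out = (out - y) * out * (1 - out)
    delta_h = (delta_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ delta_out)
    W1 -= lr * (X.T @ delta_h)

print(np.round(out, 2))  # should approach [0, 1, 1, 0]
```

None of the units here "understands" XOR; the capability lives only in the pattern of connection strengths, which is the sense in which such networks are compared to brains.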

There is an obvious similarity between the way artificial neural networks work and the way biological nervous systems work, but neuroscientists and artificial intelligence researchers alike also understand the big differences between the two. Brain cells, for example, can respond to a wide variety of neurotransmitters - chemical substances secreted by other nerve cells. Artificial neural networks, on the other hand, transmit only simple messages between their computerized 'cells', but do so at a speed far greater than the relatively slow rate at which signals travel along biological nerve cells. There is a resemblance, certainly, but the differences are very great.

This is probably why so many were surprised when artificial neural networks recently began to develop structures that we know from human brains.

In human brains - and in the brains of animals in general - one can find unique structures of grid cells. The firing fields of these nerve cells are arranged in hexagonal patterns, and the cells 'switch on' depending on the animal's position: when I walk to the right, certain grid cells are activated, and when I walk to the left, other grid cells are activated. It can almost be said that the grid cells create a mental map - a kind of internal GPS - through which animals are able to work out their location and solve problems related to movement in space. These cells allow us to obey, even with our eyes closed, instructions like: "Take five steps forward, turn ninety degrees to the left and then continue straight three more steps to the hiding place."
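
The hexagonal firing pattern has a common mathematical idealization: a grid cell's firing rate can be modeled as the sum of three cosine waves whose directions are 60 degrees apart. The sketch below (Python; my own illustration rather than anything from the study, with arbitrary spacing and orientation values) shows how such a "hexagonal GPS signal" can be computed for any position in a room.

```python
import numpy as np

def grid_cell_activity(x, y, spacing=0.5, orientation=0.0):
    """Idealized grid-cell firing rate at position (x, y), in arbitrary units."""
    # Wave number chosen so that firing fields repeat roughly every `spacing` meters.
    k = 4 * np.pi / (np.sqrt(3) * spacing)
    angles = orientation + np.array([0.0, np.pi / 3, 2 * np.pi / 3])
    waves = [np.cos(k * (x * np.cos(a) + y * np.sin(a))) for a in angles]
    # Summing the three waves produces hexagonally arranged peaks; keep only
    # the positive part as the cell's "firing".
    return np.maximum(sum(waves), 0.0)

# Evaluate the firing map over a 2 m x 2 m arena.
xs, ys = np.meshgrid(np.linspace(0, 2, 200), np.linspace(0, 2, 200))
rate_map = grid_cell_activity(xs, ys)
print(rate_map.shape, float(rate_map.max()))  # peaks of ~3.0 on a hexagonal lattice
```

Plotting rate_map as an image would show the hexagonal lattice of firing fields that experimenters see when they record a real grid cell while an animal explores an arena.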

Source: DeepMind.

The existence of grid cells was revealed only in the last twenty years, and their discoverers won the Nobel Prize in Medicine in 2014. They are a prime example of the Polanyi paradox - "We know more than we can tell."[3] We are endowed with sophisticated navigation systems based on grid cells: they allow us to reach intuitive insights without our being able to explain them. If we are thrown, for example, into a messy room, we will be able to identify quick, short routes to our destination, while sophisticated artificial intelligences must plan several different movement paths before choosing the shortest one - a tedious task that wastes precious computing resources.

Artificial intelligence. Illustration: shutterstock

But what if the most sophisticated and popular artificial intelligences today - those based on artificial neural networks - were endowed with grid cells of their own?

To answer this question, the researchers at DeepMind ran an artificial neural network and required it to complete navigation tasks in virtual environments. They discovered that areas functioning similarly to grid cells developed spontaneously within the network. These virtual grid cells allowed the artificial intelligence to solve maze puzzles quickly and efficiently - and, according to the researchers, "at a superhuman level". The artificial intelligence "surpassed the abilities of professional human players, demonstrating the kind of flexible navigation usually associated with animals, choosing new paths and taking shortcuts when possible."[4]
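
For readers who want a feel for what "training a network on navigation tasks" can mean in practice, here is a heavily simplified sketch in Python with PyTorch. It is not DeepMind's published architecture or code: a recurrent network receives only its own velocity at each time step and is trained to report where it has ended up - a path-integration task of the general kind in which, in the Nature study, grid-like units emerged. The trajectory generator, network sizes and training settings here are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class PathIntegrator(nn.Module):
    """Recurrent network that must infer its position from velocity signals alone."""
    def __init__(self, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.readout = nn.Linear(hidden, 2)          # predicted (x, y) position

    def forward(self, velocities):
        states, _ = self.rnn(velocities)             # internal activity along the trajectory
        return self.readout(states), states

def random_walks(batch=64, steps=100):
    """Random trajectories: per-step velocities and the true positions they imply."""
    v = 0.1 * torch.randn(batch, steps, 2)
    positions = torch.cumsum(v, dim=1)
    return v, positions

model = PathIntegrator()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    v, target = random_walks()
    pred, states = model(v)
    loss = ((pred - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, one would plot each hidden unit's activity as a function of the
# agent's position and look for hexagonal, grid-like firing maps - which is
# essentially how the researchers identified the emergent cells.
```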

Source: DeepMind.

This is all well and good, but at first glance it doesn't look particularly impressive. After all, many already understand that artificial intelligence is starting to reach superhuman levels in many computational tasks. What is different about the current discovery?

The answer is twofold. First, the understanding that virtual representations of grid cells help artificial intelligence to operate, and that they appear spontaneously in artificial neural networks, can also help us decipher the activity of the human brain itself. But beyond that, it may hint at the path toward the future of artificial neural networks.

Let's start with the first statement. The brain, in many respects, is still a black box. We understand, roughly and in general terms, how the different nerve cells work and the ways in which they communicate with each other. We know that different sites in the brain are responsible for different tasks: the hippocampus is responsible for forming long-term memories, for example, while the amygdala is associated with regulating emotions and making decisions. We understand that the brain's overall activity creates thinking and self-awareness. But we still do not have a clear scientific model of how the flow of information in the brain - a kilogram-and-a-bit supercomputer with remarkably low energy consumption - allows all of these things to happen. We find it difficult to conduct experiments at the required resolution - the cellular level - to understand how the brain works.

Artificial neural networks allow us, for the first time, to probe the workings of the human brain. I am not trying to claim that they are a perfect simulation of biological brains - the truth is that they differ from a biological brain in many respects - but the fact is that virtual representations of grid cells have arisen spontaneously in artificial neural networks. And not only that: these representations support an activity similar to that of biological grid cells. It is clear to us that the brain 'runs' on a computational system that bears a clear resemblance to artificial neural networks, and any theory we find for the way neural networks in a computer work will also have consequences for our understanding of the brain itself. Therefore, running simulations of brain regions in artificial neural networks will help us decipher the way structures and regions in the human brain work.

If this statement is proven to be true, then it has enormous significance for the study of the brain. The Nobel Prize winning physicist Richard Feynman said that - "What I cannot create, I do not understand". If we cannot create brain-like structures and examine them at high resolution, we will not be able to fully understand the human brain. Artificial neural networks provide us with a way to test ideas about the brain that we could never test before.

And this is only the first prediction for the future.

The second prediction is based on the fact that the artificial neural networks in DeepMind's research spontaneously developed 'grid cells' as a result of the way they were trained to solve navigation problems. Is it possible that much more sophisticated artificial neural networks will, in the future, spontaneously develop more complex structures that mimic parts of the human brain?

I know this idea sounds strange at first. The structures of the virtual 'grid cells' are very simple, and cannot be compared to the more complex structures in the brain, such as the hippocampus or the frontal lobes. But we are beginning to enter a period of miracles and wonders: certain artificial intelligence systems are now able to 'produce offspring' - that is, to create and develop artificial neural sub-networks that are particularly suited to a certain task, and that carry it out more successfully than networks designed by human programmers[5]. The development process of these artificial neural networks is based on trial and error: the artificial intelligence runs thousands of artificial neural networks, selects the most efficient ones, and prunes and improves them until they are able to perform the tasks assigned to them.
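
The trial-and-error loop described above can be illustrated with a toy sketch (Python; my own simplified example, not any specific neural-architecture-search system). A population of candidate architectures - here just lists of hidden-layer widths - is scored on a small task, the fittest are kept, and mutated copies replace the rest. The task (fitting sin(x)), the mutation rules and all the numbers are arbitrary illustration choices.

```python
import random
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
random.seed(0)

# Toy task the evolved networks must solve: approximate sin(x).
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(X).ravel()

def fitness(widths):
    """Higher is better: R^2 score of a small network with the given hidden-layer widths."""
    model = MLPRegressor(hidden_layer_sizes=tuple(widths), max_iter=500, random_state=0)
    model.fit(X, y)
    return model.score(X, y)

def mutate(widths):
    """Randomly widen/narrow a layer, or add/drop one - the 'pruning and improving' step."""
    widths = list(widths)
    roll = random.random()
    if roll < 0.25 and len(widths) < 3:
        widths.append(random.choice([4, 8, 16]))
    elif roll < 0.5 and len(widths) > 1:
        widths.pop()
    else:
        i = random.randrange(len(widths))
        widths[i] = max(2, widths[i] + random.choice([-4, 4]))
    return widths

population = [[8], [16], [4, 4], [32]]
for generation in range(5):
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[:2]                                    # keep the fittest candidates
    population = survivors + [mutate(random.choice(survivors)) for _ in range(2)]
    print(f"generation {generation}: best={survivors[0]}, fitness={fitness(survivors[0]):.3f}")
```

Real systems of this kind search over far richer spaces (layer types, connections, training schedules), but the select-mutate-re-evaluate loop is the same basic idea.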

Is it such an exaggeration to think that in the process of in silico (that is, on a computer) evolution of sophisticated artificial neural networks, complex areas can also emerge that mimic the activities of biological brains?

If that does happen, we may find that science fiction author Robert Heinlein was right after all. In his book The Moon Is a Harsh Mistress, Heinlein described a future supercomputer that develops self-awareness once it becomes complex and large enough. Is it possible that artificial neural networks could develop - even inadvertently - structures that would provide them with self-awareness, similar to that of the human brain?

Either way, it is clear that such a development will not materialize in the coming years. Running a full simulation of the human brain is a task we are not yet close to accomplishing, and artificial neural networks capture only certain aspects of biological brains while ignoring many others. Nevertheless, it is fascinating to wonder whether even such artificial neural networks - simple as they are relative to the human brain - could develop self-awareness, and if so, how we would recognize it.

The Age of Em

Physicist and artificial intelligence researcher Robin Hanson is one of the academics whose ideas have already managed to change the world. He developed and refined the concept of "prediction markets", for example, which many companies use today to predict the future better[6]. Economist Bryan Caplan wrote that -

"When a typical economist tells me about his latest research, my usual response is, 'Huh, maybe.' And then I forget about it. When Robin Hanson tells me about his latest research, my usual response is “No way! Impossible!" And then I think about it for years."

In his latest book, The Age of Em[7], Hanson tries to develop a new "impossible" idea and work out what a world would look like in which it is possible to run full simulations of human minds on a computer (emulated minds). He describes a future world in which such 'computer brains' will replace humans in most tasks, perform jobs for us efficiently and easily, and even develop innovative ideas about life, death and humanity. This is a future that is a hundred years away from us, according to Hanson, but if he is right - then our children will already be a part of it, and so will many of the young people living today.


To reach the future described by Hanson, we still need a large number of scientific and technological breakthroughs - some of which we are not even aware of today. Perhaps the research I covered here, in which an artificial neural network developed a biological-like structure, is one of them, providing another signpost on the way to a future of computerized minds. It seems that this is also the ultimate intention of the researchers at DeepMind, who explained in an interview that -

"Personally, we don't think of any other use than creating a general-purpose algorithm. The brain is the only example we have of a general-purpose algorithm, so why not copy from it?”


You are invited to read more about the future of artificial intelligence and the human brain in the books "The Guide to the Future" and "Those Who Control the Future", in select bookstores (and in ones that are just fine, too).


10 comments

  1. Yosef
    One of the problems is that we are still not sure that there is such a thing as "consciousness". True, the more layers there are, the more abstraction emerges. But if there is consciousness at all, who guarantees that there is a connection between consciousness and abstraction?

    But this is just a philosophical debate. I would even claim that there is a spectrum of consciousness: at one end, a simple thermostat that knows how to turn on the boiler when it is cold, and at the other end, Mahatma Gandhi, who felt the pain of hundreds of millions of his people.

    I think there is a deep problem with artificial intelligence. I see intelligence as a technique for making a good decision without having all the data. Handwriting recognition, autonomous driving, chess, the stock market, medical diagnosis - all of these fit this definition.

    Where is the problem? Such a technique will inevitably make errors. When it is playing a game, no harm done. But when it is diagnosing cancer? Or deciding whether to take a detour?

    Note one more thing - I did not mention "neural networks" anywhere. But that's a topic for another discussion 🙂

  2. Self-aware in the sense that we are self-aware: "I think, therefore I am", not merely at the level of collecting sensory data.
    The network begins to understand that it exists and what it does as a single entity.
    The whole field of layered networks shows that in the deeper layers of the network, more and more abstract information is created. No one yet fully understands how a consciousness that understands itself is created, but we do know how to quantify the level of abstraction using terms from communication theory (Claude Shannon's definition of information) or from statistical physics (expressions of the form E·log(E)).
    There is an equivalent view of energy as information, and of information as -log(P) as a function of the probability P; and the weights of a neural network can be read as probabilities. When will someone come along and advance the field? Tononi has a definition of consciousness - perhaps he will be able to explain it.

  3. Yosef
    Think of modern drones. They know their "situation" - altitude, battery status, distance from the remote control, position relative to the take-off point, propeller RPM, nose orientation, speed, the serviceability of many of their systems, wind, proximity to hazards and so on.

    The drone can decide, of its own accord, to return and land, and so it will.

    So how can it be argued that it is not self-aware?

  4. We can give three examples of entirely legitimate scientific theories about consciousness, or about a more abstract understanding of networks:
    Tononi - a theory of consciousness
    Naftali Tishby - the information bottleneck
    Amnon Shashua - an explanation of layered (convolutional) networks by means of quantum mechanics

    It is difficult for people - including me, who believes in the Creator - to accept that it is possible to develop a tool that understands itself, but this is the direction.

  5. Self-awareness, activation of thinking, controlling emotions, and more - all of it is one big illusion in which we all live, as has been expressed throughout human history. The correct methods of action are those that rely on a large number of "brains"!!!

  6. "Each of the cells in the biological brain is a relatively simple machine without its own intelligence, but the coordinated action of billions of nerve cells together creates thought, emotion and even self-awareness."
    really?
    If, and only if, we can define what "thought" is, what "emotion" is and what "self-awareness" is, will it be possible to discuss how the "coordinated activity of billions of nerve cells together" might be related to the formation of these mental phenomena. Personally, I doubt that this can be done. It seems to me that any attempt to define, for example, "self-awareness" will end in tautological statements or in an infinite regress. It does not seem possible to fully reduce mental phenomena to material phenomena, even if we can point to a meaningful relationship between certain material changes and corresponding mental changes.
    Thus, the "coordinated action of billions of nerve cells together" will likely remain nothing more than "the coordinated action of billions of nerve cells together": the elaborate functioning of a sophisticated machine, and nothing beyond that. Even if this machine transmits a message that appears intelligent, that will not make it a thinking being. Even if it declares that it has a "strong feeling of love" towards you - do not believe it, because it is not capable of experiencing or loving. Even if it asserts that it is in "full self-awareness" - you can disconnect it from the electricity and prove that there is nothing there but electricity and wiring, and when you connect it to the electricity again it will repeat its mechanical assertion.

  7. "The coordinated action of billions of nerve cells together creates thought, emotion and even self-awareness"

    And what is the connection between those neurons, that wiring, and the elements that came to us from the big bang and the supernovas?
    We haven't found that out yet.
    For that we will have to develop a suitable instrument within ourselves; we will have to calibrate our senses.
    Then we will discover that there is additional information in our reality that is not perceived by the five senses.
    We will find that connecting elements creates regeneration.
    Whether at the gas level, whether at the animal level, or at the human level.
    And in the latter the discovery will take place that will revive the new paradigm of the 21st century.
    An ancient wisdom that was hidden for thousands of years and was within the reach of individuals, will spread and be revealed in the world
    And this is what those individuals of virtue wrote: "The great spiritual questions, which used to be solved only for the great and excellent, are obliged to be solved now in different degrees for the whole people. And to bring lofty and sublime things down from a mighty height to the depths of the mass ordinary laity, this requires a great and tremendous wealth of spirit, and a constant and habitual business, only then will the opinion expand and the language become clear, to the point of expressing the deeper things in an easy and popular style, to quench thirsty souls."

  8. Very nice. This is one of the frontiers of artificial intelligence technology. One of the teams is headed by a Jewish researcher, Dr. David Silver, in the UK.
    One of the founders is the Muslim Suleyman - to teach us that we can work side by side.
    The development center is located in England, and breakthroughs in artificial intelligence keep coming out of it.

    The second front is the theoretical field, where there are two Israeli professors: Naftali Tishby and Amnon Shashua, working separately.
    The second of them, Amnon, wrote, or at least co-signed (the man is very smart and is also worth 4.3 billion NIS), two articles that seem to me to have groundbreaking potential; I looked for anything comparable and did not find it. There is Professor Geoffrey Hinton (capsule networks, belief networks), Professor Yoshua Bengio, and Dr. Goodfellow. They live in Toronto. The last three are all linked to Google, like the team in the current article.

    The first strong network was built by Alex Krizhevsky in Toronto, in Bengio's group, and for some reason he did not gain fame.
    Before him, in 1998, Yann LeCun got things started.
