
Research: How "smart" is the nerve cell in our brain?

A new study, conducted by Hebrew University researchers, examined for the first time whether a deep learning network consisting of layers of point-like artificial cells can accurately simulate the complex structure of a single real biological neuron

Neurons. Illustration: Shutterstock

We are in the midst of a real scientific and technological revolution. Today's computers can learn from examples and perform tasks that, not long ago, were considered impossible for artificial machines, from recognizing human faces at different angles to driving autonomous cars. The deep learning networks responsible for most of these complicated learning tasks are based on the basic principles of the brain's structure and operation: neurons (nerve cells) connected to one another by synapses, through which the different cells transmit signals (input and output) to each other.

The operating principles on which deep learning is based today derive from the limited understanding of how neurons work that prevailed in the 1950s. Like a bit in a computer, each artificial neuron in a deep network is a simple "point": it can be in only one of two states, zero (inactive) or one (active). In recent decades, however, neuroscience has discovered that each neuron is a remarkably complex system: from its cell body emerges a branching tree (the dendritic tree), and scattered across its many branches are tens of thousands of synapses that transmit information to it from other cells in the neural network. Nerve cells are also divided into a huge variety of subtypes, each of which works in a slightly different way and plays a different role, both in the healthy brain and in the diseased brain.

A new study, conducted by Hebrew University researchers, tested for the first time whether a deep learning network composed of layers of point-like artificial cells can accurately simulate the complex structure of a single real biological neuron and the electrical input-output transformation it performs. The goal is to use this complex model, instead of the point artificial neuron used by today's artificial neural networks, both to better understand how the nerve cell translates its synaptic input into output, and to create a new type of deep learning network, one that more accurately simulates the way the human brain works and, hopefully, its extraordinary computational abilities. The study was conducted by the student David Beniaguev together with professors Miki London and Idan Segev from the Edmond and Lily Safra Center for Brain Sciences. The article presenting its findings was published in the scientific journal Neuron.

"A deep learning network consists of layers of point artificial neurons, each of which is connected by artificial synapses to the layer above it and the layer below it," explains Prof. Segev. "For example, if we want to teach the network to recognize cats, we present an image of a cat to the input layer, the first layer." Each artificial neuron in the deep network responds with "0" or "1" according to the strength of the synaptic input it receives from the previous layer, and accordingly sends (or does not send) a signal to the neurons it is connected to in the next layer. The neurons in that layer in turn process the information they received and transmit their output to the cells in the layer after them.
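The layer-by-layer flow Prof. Segev describes can be sketched in a few lines of code. This is an illustrative toy, not the researchers' model: the network sizes, the random weights, and the `step`/`forward` names are all invented for the example. Each "point" neuron simply outputs 1 when its summed synaptic input crosses a threshold, and 0 otherwise.

```python
import numpy as np

def step(x):
    """A point neuron: fires (1) if its summed input crosses the threshold, else stays silent (0)."""
    return (x > 0).astype(float)

def forward(layers, image):
    """Propagate a flattened input image through successive layers of point neurons.
    `layers` is a list of (weights, bias) pairs -- the artificial synapses."""
    activity = image
    for weights, bias in layers:
        activity = step(activity @ weights + bias)
    return activity  # final layer's answer, e.g. 1 = "cat", 0 = "not a cat"

# Toy network: 4 input "pixels" -> 3 hidden point neurons -> 1 output neuron
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(4, 3)), np.zeros(3)),
          (rng.normal(size=(3, 1)), np.zeros(1))]
decision = forward(layers, np.array([0.2, 0.8, 0.1, 0.5]))
```

The key point of the sketch is that every unit is binary and memoryless, which is exactly the simplification the study calls into question.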

At the deepest layer of this pyramid, explains Prof. Segev, sits an artificial neuron that sums up the signals flowing to it through the previous layers of the network, and it is the one required to give the answer: the network saw a cat (output 1) or it did not (output 0). In the network's training phase (based on presenting many pictures of different cats), the machine learning algorithm checks this answer. If the cell in the last layer indeed responds with "1" for the cat, the algorithm can move on to the next example. If not, a dedicated algorithm traces back through the process and changes the strengths of the connections (synapses) between the neurons in the different layers, until the network correctly answers whether or not it saw a cat in the picture.
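The training loop described above, show an example, check the answer, and adjust the synaptic strengths when it is wrong, can be illustrated with a single point neuron. This is a classic perceptron-style update, shown only to make the idea concrete; modern deep networks use the smoother backpropagation algorithm, and the data and names here are invented for the sketch.

```python
import numpy as np

def step(x):
    """Point neuron: 1 if the summed input crosses the threshold, else 0."""
    return 1.0 if x > 0 else 0.0

def train(examples, labels, lr=0.1, epochs=50):
    """Show each example; if the neuron's answer is wrong,
    nudge the synaptic weights toward the correct answer."""
    rng = np.random.default_rng(1)
    w = rng.normal(size=examples.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            error = y - step(x @ w + b)  # 0 when the answer was already right
            w += lr * error * x          # strengthen or weaken the synapses
            b += lr * error
    return w, b

# Linearly separable toy data: label 1 ("cat") when the first feature dominates
X = np.array([[1.0, 0.0], [0.9, 0.2], [0.1, 1.0], [0.0, 0.9]])
y = np.array([1, 1, 0, 0])
w, b = train(X, y)
```

After enough passes over the examples, the weights settle so that the neuron answers all four training examples correctly, the single-unit analogue of the network "completing its training phase".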

When this operation is repeated with enough artificial neurons in enough layers, and with enough examples of cats (sometimes hundreds of thousands), deep learning networks often complete the training phase and learn to recognize cats they have never seen before: they generalize from the different example pictures to some general concept of "felines". In the same way, such networks learn to recognize a traffic light they have not seen before, after training on many examples of traffic lights, and likewise a pedestrian crossing and so on. This is how the traffic-light and crossing recognition systems of an autonomous car work. "Despite this enormous success, which is a real 'game changer' in our world, it is not entirely clear how the deep network manages to do this, and many groups around the world are trying to understand the source of this success," adds Prof. Segev.

However, the learning ability of each network is limited to the task assigned to it. A system that learned what a cat is will not recognize a dog, and for the computer to associate the sound "meow" with cats, a separate learning network is needed, a task that any two-year-old toddler completes with ease. Indeed, despite their very impressive success at specific tasks, deep learning networks are very limited compared to the human brain in their need for a large number of examples to complete the training phase. "We, on the other hand, don't need more than one example to understand that a car accident is dangerous," notes Prof. Segev.

Prof. Miki London. Photo courtesy of the Hebrew University spokesperson's office

Many research groups around the globe are currently trying to give deep learning networks comprehensive, integrative and intelligent capabilities, such as learning from a limited number of examples, connecting different aspects of what a cat is (sight, hearing, emotional meanings and so on), drawing conclusions from one learned task for another, planning for the long term, and understanding language (a collection of symbols that follow one another). All of these are tasks at which our brain excels and at which today's deep networks fall far short. "Our approach was to use the existing capabilities of deep learning networks to produce a computer model as accurate as possible of the complex, convoluted tree that makes up the real neuron," says Beniaguev, "and then to replace the simple point unit used by the large deep networks with new units, each of which simulates the nerve cell in all its complexity."

To do this, the three relied on mathematical models built in the laboratories of Prof. Segev and Prof. London in recent years, which accurately simulate, with the help of a system of equations, the electrical processes that occur inside different types of nerve cells. "With all the branch points, the activation of the many synapses and the flow of electricity through the branches of the nerve cell's tree," adds Beniaguev.

David Beniaguev

The researchers hope that building a deep learning network composed of artificial neurons that are themselves deep, each simulating the complexity of a real neuron's operation, will make it possible to perform faster and more complex computations, similar to how the brain works. "For example, to recognize the cat from fewer examples, and to perform complex actions such as those that require understanding a language. However, we still need to prove this in further studies," emphasizes Prof. Segev. In such a network, he adds, it will be possible not only to change the strength of the connections between the neurons, but also to integrate different types of neurons into it, similar to the structure and mode of operation of the biological brain. "At the end of the process, a computerized replica may be built that imitates the diverse abilities of the brain: general artificial intelligence."

The research also makes it possible, for the first time, to quantify the computing power of different types of neurons, explains Prof. Segev. "For example, to simulate a neuron of type A, you need seven layers of deep learning built from point neurons, while a model of a type B neuron requires nine such layers. With the help of this tool it is possible, for example, to quantitatively compare the computational ability of a nerve cell in the brain of a mouse with that of the corresponding cell in the human brain, or of two different nerve cells in the human brain."

On a more basic level, Beniaguev adds, a computer model that more accurately simulates the way the brain works is also expected to yield insights into the human brain itself. "Our brain builds deep networks that will themselves help us understand the brain, ourselves. For example, we will be able to better understand how the different types of cells, and the relationships between them, affect the computational capacity of our brain," concludes Beniaguev.

For the scientific publication

More on this topic on Hayadan: