Prof. Idan Segev, a neuroscience researcher from the Edmond and Lily Safra Center at the Hebrew University: "The digital processors used today for 'artificial intelligence' calculations require about a million times more energy than the brain, which needs only about 20 watts, and we must learn from the human brain how to save energy."
At the Silicon Club meeting, held at the IBM House in Petah Tikva, Prof. Idan Segev, a neuroscience researcher from the Edmond and Lily Safra Center at the Hebrew University, presented new insights on the relationship between the study of the human brain and the development of artificial intelligence. In a fascinating lecture, he emphasized the importance of understanding the human brain as a basis for improving AI technologies, highlighting the brain's remarkable energy efficiency as well as its structural complexity. He asked: "How can complex calculations be performed in a parallel and energetically efficient manner, in order to create inspiring artificial 'partners' that will help us understand our own mind, repair it, and perfect the abilities hidden in it for 300,000 years, since the emergence of the human race?"
"The human brain is a source of inspiration for AI systems," noted Prof. Segev. "Despite the impressive progress, the artificial neural networks we know today are limited compared to human capabilities, especially in terms of energy consumption and performing calculations."
Between the brain and AI
Prof. Segev explained how brain research contributes to understanding the computational principles of artificial intelligence. He noted that while the brain performs many local, parallel computations using the structure of its synapses and neurons, contemporary AI systems rely on serial digital computations that consume enormous amounts of energy. "The brain operates at only 20 watts," he emphasized, "compared to current computers, which perform similar tasks with an energy consumption of megawatts."
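As a rough sanity check of the figures quoted in the lecture, here is a minimal back-of-the-envelope sketch. The 20 MW figure is an assumed round number chosen only to illustrate the "million times" ratio, not a measurement of any specific machine:

```python
# Back-of-the-envelope comparison of the power figures quoted above.
# Both numbers are illustrative round figures, not measurements.

BRAIN_POWER_W = 20.0        # approximate power draw of the human brain
AI_HARDWARE_POWER_W = 20e6  # hypothetical ~20 MW for a comparable AI workload

ratio = AI_HARDWARE_POWER_W / BRAIN_POWER_W
print(f"Digital hardware / brain power ratio: {ratio:,.0f}x")  # ~1,000,000x
```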
Segev demonstrated how the unique structure of nerve cells affects the efficiency of computation, and presented his vision for the development of neuromorphic hardware: computational systems that mimic the structure and function of the brain, with the aim of improving performance and reducing energy costs.
High-resolution brain research
In the lecture, new methods were presented for mapping the human brain and the connectivity between nerve cells at different levels, from micro to macro. These mappings enable an in-depth understanding of the brain's architecture, which can form the basis for building advanced artificial neural networks, capable of operating in a more natural and efficient manner.
NeuroAI is a new field of research that attempts to bridge our understanding of how the brain performs AI calculations (more correctly, BI, Biological Intelligence) with our ability to develop intelligent, learning machines that solve scientific problems, plan for the future, and create "new things out of nothing." Already today, the "learning machines", the "deep networks" on which AI models such as ChatGPT are based, have drawn inspiration from the brain: the "artificial neuron", the "deep neural network", and "synaptic learning". It turns out that the brain has other "patents" that allow it to perform all its "wonders": speech, facial recognition, moving through the world and driving vehicles, science, and art. If we understand the physical and computational principles that allow the brain to perform these tasks so efficiently, we can continue to draw inspiration from it in order to build machines with very advanced performance while consuming minuscule amounts of energy.
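To make the borrowed terms concrete, here is a minimal sketch of an "artificial neuron" with a toy "synaptic learning" rule (a plain perceptron update on a tiny example). This is only an illustration of the concepts named above, not of any specific model discussed in the lecture:

```python
import numpy as np

def step(z):
    """Threshold activation: fire (1) if the weighted input exceeds 0."""
    return 1 if z > 0 else 0

# A single artificial neuron: a weighted sum of inputs plus a bias, then a threshold.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=2)   # "synaptic" weights
b = 0.0
lr = 0.1                            # learning rate

# Toy task: learn the logical AND function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

for epoch in range(20):
    for xi, target in zip(X, y):
        out = step(w @ xi + b)
        # "Synaptic learning": adjust each weight in proportion to the error.
        w += lr * (target - out) * xi
        b += lr * (target - out)

print([step(w @ xi + b) for xi in X])  # expected: [0, 0, 0, 1]
```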
Indeed, the digital processors used today for "artificial intelligence" calculations require about a million times more energy than the brain, which needs only about 20 watts. The big question is: how does the brain manage to perform a variety of complex calculations simultaneously with almost no expenditure of electrical power?
One of the unique components of the brain is the neuron, its basic building block, in which a central part of the computational processes is carried out in an analog, energetically cheap way, while another part, the signals passing through its output unit, the axon, is digital and more expensive in terms of energy. This combination of analog computation in one part of the brain's "microchip" and digital computation in another is a special "patent" that the brain has developed but that has not yet been exploited in artificial neural networks. The architecture of connections in the brain is also very different from that of the "deep" artificial networks we have built, and this too plays an important role in the brain's ability to compute so efficiently.
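One common, highly simplified way to illustrate this mixed analog/digital picture is the leaky integrate-and-fire model: the membrane potential integrates its inputs continuously (the "analog" part), and only when it crosses a threshold does the neuron emit a discrete, all-or-none spike down the axon (the "digital" part). A minimal sketch with arbitrary illustrative parameters:

```python
import numpy as np

# Leaky integrate-and-fire neuron: analog membrane dynamics, digital spike output.
# All parameter values below are arbitrary and chosen only for illustration.
dt = 1e-3        # time step (s)
tau = 20e-3      # membrane time constant (s)
v_rest = 0.0     # resting potential (arbitrary units)
v_thresh = 1.0   # spike threshold
v_reset = 0.0    # reset value after a spike

T = 0.5                          # simulate half a second
steps = int(T / dt)
rng = np.random.default_rng(1)
input_current = 1.2 + 0.5 * rng.standard_normal(steps)  # noisy drive

v = v_rest
spike_times = []
for t in range(steps):
    # Analog part: continuous leaky integration of the input.
    v += dt / tau * (-(v - v_rest) + input_current[t])
    # Digital part: an all-or-none spike when the threshold is crossed.
    if v >= v_thresh:
        spike_times.append(t * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in {T} s")
```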