"Big Science" projects pave the way to understanding how the world's most complex machine produces our thoughts and feelings
Even after a hundred years of continuous research, neuroscientists still do not understand how the roughly 1.4-kilogram organ that is the seat of human consciousness actually works. Many have tried to attack the problem by studying the nervous systems of simpler creatures. In fact, almost 30 years have passed since researchers mapped the connections among all 302 nerve cells of the roundworm Caenorhabditis elegans. Yet the worm's wiring diagram did not reveal even how those connections produce basic behaviors such as feeding and mating. What was missing were data linking the activity of the nerve cells to specific behaviors.
In humans, it is even more difficult to link biology with behavior. The media routinely reports on scans showing that certain areas of the brain are activated when we feel rejected or speak a foreign language. These articles create the impression that the technology available today provides basic insights into the workings of the brain, but this impression is wrong.
A notable example of this false impression is a highly publicized study that identified single brain cells firing electrical signals in response to the face of the actress Jennifer Aniston. Despite the media buzz, finding the "Jennifer Aniston neuron" is like receiving a message from aliens: a sign that there is intelligent life in the universe, but with no way to understand what the message means. We are far from understanding how the electrical signals produced by that cell affect our ability to recognize Jennifer Aniston's face and associate it with a scene from the TV show "Friends". To recognize the star, the brain must surely activate a large group of nerve cells, all of which speak a neural code that we still do not understand.
The Jennifer Aniston neuron also illustrates the crossroads at which neuroscience now stands. We already have methods for recording the electrical activity of single nerve cells in living humans. But to make real progress, the field needs a new generation of technologies that will allow researchers to monitor, and also deliberately alter, the electrical activity of thousands or even millions of nerve cells. Such methods would let us make our way through what the Spanish researcher Santiago Ramón y Cajal, a pioneer of neuroanatomy, called "the impenetrable jungle where many investigators have lost themselves."
Such groundbreaking methods could, in principle, begin to bridge the gap between the activity of nerve cells and the workings of the mind: perception, emotion, decision-making and, ultimately, consciousness itself. Deciphering the exact patterns of brain activity that underlie thinking and behavior would also provide essential insights into what happens when neural circuits malfunction in psychiatric or neurological diseases such as schizophrenia, autism, Alzheimer's and Parkinson's.
The need for a technological leap in brain research is felt outside the laboratory as well. Indeed, in 2013 the Obama administration announced a large-scale initiative, known as BRAIN, to promote advanced methods in neuroscience. It is the most highly publicized scientific venture of the president's second term.
The BRAIN Initiative, which received $100 million in seed funding in 2014, focuses on developing methods to record electrical signals from far greater numbers of brain cells, and even from entire brain regions. It complements other major neuroscience projects outside the US. The Human Brain Project (HBP), funded by the European Union, is a 10-year, $1.6 billion effort aimed at developing a computer simulation of the entire brain. Ambitious neuroscience ventures have also been launched in China, Japan and Israel. The international consensus on the need to invest in neuroscience recalls earlier initiatives devoted to science and technology of national importance after World War II: nuclear energy, atomic weapons, space exploration, computers, alternative energy sources and the sequencing of the human genome. Now the era of the brain has begun.
The TV screen problem
Today it is impossible to track how brain cells process the concept of "Jennifer Aniston", or any similar concept we encounter in subjective experience or in perceiving the outside world. To do so, we must move from measuring single nerve cells to understanding the complex interactions among clusters of such cells, interactions that produce a whole greater than the sum of its parts, what scientists call an emergent property. Properties such as temperature, the hardness of a substance or the magnetic state of a metal emerge only from the interactions of vast numbers of molecules or atoms. Take carbon atoms, for example. Each atom can form different chemical bonds, giving a material the hardness of diamond or the softness of graphite, which flakes off so easily that it can leave words on paper. Hard or soft, these emergent properties depend not on the single atom but on the web of interactions among atoms.
The brain, too, surely has emergent properties that cannot be understood by looking at a single nerve cell, or even through a low-resolution image of the activity of large groups of nerve cells. Distinguishing the perception of a "flower" from the recall of a childhood memory will be possible only by monitoring the activity of neural circuits that pass electrical signals along complex chains of hundreds or thousands of nerve cells. Although neuroscientists have long been aware of these challenges, they still lack the tools to record the electrical activity of the brain circuits that underlie perception and memory or that give rise to complex behavior and cognition.
One attempt to overcome this problem involves mapping the anatomical connections, or synapses, between nerve cells, a field that has been named connectomics. The Human Connectome Project (named by analogy with the Human Genome Project), recently launched in the US, will provide the brain's structural wiring diagram. But as with the roundworm, such a map is only a starting point. On its own it cannot capture the constantly changing electrical signals that produce specific cognitive processes.
Recording these electrical signals requires entirely new ways of measuring electrical activity in the brain. Existing technology can provide an accurate picture of the electrical activity of small groups of nerve cells, or comprehensive imaging of large brain areas, but not with the resolution needed to identify the activation or silencing of specific neural circuits. High-precision electrical recordings are currently made by inserting a needle-like electrode into the brain of a laboratory animal to monitor the firing of a single neuron, the electrical activity that occurs after the cell receives chemical signals from other neurons. When a nerve cell receives sufficient stimulation, the voltage across its membrane changes. That voltage change opens channels in the membrane that admit sodium ions or other positively charged ions into the cell. The resulting ion current generates an electrical impulse that travels along the cell's long extension, the axon, prompting it to release chemical signals to other nerve cells and so pass the signal on. Recording from just one neuron is like trying to follow the plot of a high-definition movie by watching a single pixel. It is also an invasive method that can damage tissue.
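To make the firing mechanism sketched above more concrete, here is a minimal "leaky integrate-and-fire" simulation in Python. It is only an illustrative toy: the parameter values and the injected current are invented for the example, and real neurons are far more complicated than this model.

```python
import numpy as np

# Minimal leaky integrate-and-fire sketch of a neuron's firing.
# All parameters are illustrative, not physiological constants.
dt = 0.1              # time step, ms
T = 100.0             # total simulated time, ms
tau = 10.0            # membrane time constant, ms
v_rest = -70.0        # resting potential, mV
v_threshold = -55.0   # firing threshold, mV
v_reset = -75.0       # reset potential after a spike, mV

v = v_rest
spike_times = []
for step in range(int(T / dt)):
    t = step * dt
    i_input = 20.0 if 20.0 <= t <= 80.0 else 0.0   # injected current, arbitrary units
    # Leaky integration: the voltage decays toward rest and rises with input.
    v += dt / tau * (-(v - v_rest) + i_input)
    if v >= v_threshold:        # threshold crossed: emit a spike and reset
        spike_times.append(t)
        v = v_reset

if spike_times:
    print(f"{len(spike_times)} spikes, first at {spike_times[0]:.1f} ms")
else:
    print("no spikes")
```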
At the other end of the spectrum, methods that monitor the joint activity of nerve cells across the whole brain are also insufficient. In the well-known electroencephalogram (EEG), invented by Hans Berger in the 1920s, electrodes placed on the scalp measure the collective electrical activity of enormous numbers of nerve cells beneath them. EEG thus tracks the rising and falling "waves" of electrical activity on millisecond time scales, but it cannot distinguish the activity of any single nerve cell. Functional magnetic resonance imaging (fMRI), which produces colored patches highlighting active brain areas, records neural activity across the whole brain non-invasively, but slowly and with low spatial resolution. Moreover, fMRI does not directly monitor electrical activity but secondary changes in blood flow within a given volume of tissue.
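As a rough illustration of the kind of signal EEG captures, consider the following Python sketch. The numbers are invented for the example and do not model real recordings: a scalp signal dominated by a slow collective rhythm reveals its frequency clearly, while the rare, tiny spikes of any single neuron vanish into the background.

```python
import numpy as np

# Toy illustration of what a scalp signal does and does not capture:
# a synthetic 10 Hz "alpha" rhythm dominates the spectrum, while one
# neuron's occasional spikes are lost in the average. Illustrative only.
rng = np.random.default_rng(3)

fs = 1000                                  # sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)            # two seconds of signal
alpha = np.sin(2 * np.pi * 10 * t)         # collective 10 Hz rhythm
single_spikes = np.zeros_like(t)
single_spikes[rng.integers(0, t.size, 5)] = 1.0   # one neuron's rare spikes
scalp = alpha + 0.001 * single_spikes + 0.2 * rng.normal(size=t.size)

spectrum = np.abs(np.fft.rfft(scalp))
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
print(f"dominant frequency: {freqs[np.argmax(spectrum[1:]) + 1]:.1f} Hz")
```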
To obtain information about emergent patterns of brain activity, researchers need new methods that can record the electrical activity of clusters of thousands of neurons. Nanotechnology, which offers new materials capable, in some cases, of probing features smaller than a single molecule, can help in developing such multi-cell recording methods. Prototype arrays containing more than 100,000 electrodes on silicon already exist; such arrays would be able to record the electrical activity of tens of thousands of nerve cells in the retina. Continued development should make it possible to stack the arrays into three-dimensional structures, shrink the electrodes to avoid tissue damage and lengthen them so they can penetrate deeper into the cerebral cortex, the brain's outermost layer. Such advances would make it possible to record the electrical activity of tens of thousands of nerve cells in human patients in a way that distinguishes the electrical behavior of each and every cell.
Electrodes are not the only way to monitor neural activity. Borrowing from methods developed by physicists, chemists and geneticists, biologists are beginning to develop imaging techniques that follow living nerve cells in awake animals as they behave normally.
A hint of what is to come arrived in 2013, when Misha Ahrens of the Howard Hughes Medical Institute's Janelia Farm Research Campus in Ashburn, Virginia, performed whole-brain imaging in larval zebrafish. The zebrafish is a favorite of neurobiologists because at this early stage it is transparent, so its internal organs, including the brain, can easily be seen. In the experiment, zebrafish neurons were genetically engineered to glow when calcium ions entered a cell after it fired electrically. A new type of microscope illuminated the fish's brain by projecting a sheet of light across the entire organ while a camera captured the glowing neurons, second by second.
The method, known as calcium imaging, was developed by one of us (Yuste) to record the electrical activity of neural circuits; in this experiment it captured the activity of 80 percent of the fish's roughly 100,000 neurons. It turned out that even when the fish were at rest, many areas of the young nervous system were switching on and off in mysterious patterns. Ever since Berger invented the EEG, researchers have known that the nervous system is, in fact, always active. The zebrafish experiment raises the hope that newer imaging methods may help crack one of the major puzzles of neuroscience: the meaning of the sustained, spontaneous firing of large groups of neurons.
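For readers curious what analyzing such recordings involves, below is a toy Python sketch of the ΔF/F step commonly used to turn a raw fluorescence trace into an activity estimate. The trace, the baseline choice and the threshold are all invented for illustration; the actual pipeline in the zebrafish study was far more elaborate (motion correction, cell segmentation, spike inference and so on).

```python
import numpy as np

# Toy sketch of the standard dF/F step in calcium-imaging analysis.
# The synthetic trace and all numbers below are illustrative only.
rng = np.random.default_rng(0)

n_frames = 600                      # e.g. 10 minutes at 1 frame per second
baseline = 100.0                    # arbitrary fluorescence units
trace = baseline + rng.normal(0.0, 1.0, n_frames)
trace[200:210] += 30.0 * np.exp(-np.arange(10) / 3.0)   # a calcium transient

f0 = np.percentile(trace, 20)       # rough baseline estimate
dff = (trace - f0) / f0             # relative fluorescence change
active_frames = np.where(dff > 0.1)[0]
print(f"putative activity in frames: {active_frames[:5]}...")
```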
The zebrafish experiment is only a beginning, because better technologies are needed to discover how brain activity leads to behavior. New kinds of microscopes must be designed to capture neural activity in three dimensions. Moreover, calcium imaging is too slow to follow the rapid firing of nerve cells, and it cannot detect the inhibitory signals that dampen a cell's electrical activity.
Neurophysiologists, working shoulder to shoulder with geneticists, physicists and chemists, are trying to improve optical methods so that instead of tracking calcium, they record neural activity directly by measuring changes in the voltage across the cell membrane. Nerve cells can, for example, be labeled with dyes that change their optical properties as a function of voltage, improving on the picture obtained from calcium signals. This approach, known as voltage imaging, may eventually allow researchers to record the electrical activity of every nerve cell in an entire neural circuit.
Voltage imaging, however, is still in its infancy. Chemists need to improve the dyes' ability to change color, or other properties, when a nerve cell fires, and to make sure the dyes do not harm the cell. Meanwhile, molecular biologists are already building genetically encoded voltage sensors: cells are given a gene encoding a fluorescent protein that is shipped to the cell's outer membrane, where it changes its brightness in response to changes in the cell's voltage.
As with electrodes, advanced non-biological materials borrowed from nanotechnology may help. Instead of organic dyes or genetic markers, a new type of voltage sensor could be built from "quantum dots", tiny semiconductor particles that exhibit quantum-mechanical effects and whose optical properties, such as color or brightness, can be precisely engineered. Nanodiamonds, for example, a newer material from the field of quantum optics, are sensitive to the changes in electric fields that accompany a cell's electrical activity. Nanoparticles could also be combined with conventional organic dyes or with genetically encoded labels to create molecular hybrids in which the nanoparticle acts as an "antenna", amplifying the weak signals the fluorescent dyes produce when the nerve cell fires.
Going deeper
Another technical hurdle to seeing neural activity is the difficulty of delivering and collecting light from neural circuits that lie deep below the brain's surface. To solve this problem, developers of neurotechnologies are beginning to collaborate with researchers in computational optics, materials engineering and medicine who also need to see non-invasively through solid objects, be it skin, skull or a computer chip. Scientists have long known that some of the light striking an opaque object, such as bone, scatters, and that in principle those scattered photons can be used to recover details about what lies inside.
Shine a flashlight against one side of your palm, for example, and the light emerges on the other side as a diffuse glow that offers no clue to the location of the bones or blood vessels under the skin. Yet the information about the paths the light took through the hand is not completely lost. The scattered light waves interfere with one another, and that interference pattern can be captured with a camera and the hidden image reconstructed with new computational methods. In 2013, Rafael Piestun and his colleagues at the University of Colorado at Boulder used this approach to see through an opaque material. These techniques can be combined with other optical tricks, including methods astronomers use to correct the distortions the atmosphere imposes on starlight. Computational optics, as the field is called, could help detect the fluorescent light emitted by dyes as neurons deep in the brain fire.
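To convey the core idea, here is a toy Python sketch under strong simplifying assumptions: the scattering medium is modeled as a random but fully known "transmission matrix", which is nothing like characterizing a real skull, and the reconstruction is a plain least-squares inversion rather than the specialized techniques used in the actual experiments. It shows only that scattered light is scrambled, not lost.

```python
import numpy as np

# Toy illustration of computational optics: if the "scrambling" applied by a
# scattering medium can be characterized (here an assumed, known random
# transmission matrix), the hidden pattern can be recovered from the speckle.
rng = np.random.default_rng(1)

n_pixels = 64
hidden_image = np.zeros(n_pixels)
hidden_image[20:30] = 1.0                     # a simple hidden pattern

transmission = rng.normal(size=(n_pixels, n_pixels))   # "scattering" by the medium
speckle = transmission @ hidden_image + rng.normal(0.0, 0.01, n_pixels)  # camera sees this

# The speckle looks like noise, but knowing the transmission matrix lets us invert it.
recovered, *_ = np.linalg.lstsq(transmission, speckle, rcond=None)
print("max reconstruction error:", float(np.max(np.abs(recovered - hidden_image))))
```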
Some of these new optical methods have already been used successfully to image the interior of the brains of animals, or of people who have had a piece of skull removed, letting scientists see more than a millimeter deep into the cerebral cortex. With further refinement, such methods may offer a way to see through the skull itself. Optical imaging through tissue will not penetrate far enough to reach structures deep inside the brain, but another recent invention, known as microendoscopy, may solve that problem. In this technique, neuroradiologists insert a thin, flexible tube into the femoral artery and navigate it to various parts of the body, including the brain; optical fibers threaded through the tube then make the measurements. In 2010, a group at the Karolinska Institute in Stockholm demonstrated a device, dubbed the "extroducer", that makes it possible to safely puncture the wall of the artery or blood vessel carrying the endoscope. With it, any area of the brain, and not just the blood vessels, can be examined using various imaging methods or electrical recordings.
Electrons and photons are the obvious candidates for recording the brain's electrical activity, but they are not the only ones. DNA technology could also come to play a decisive role in monitoring neural activity. One of us (Church) has drawn inspiration from synthetic biology, in which biological materials are used as if they were machine parts. As the research progresses, it may become possible to engineer laboratory animals that synthesize a "molecular ticker tape": a molecule that changes in a unique, identifiable way whenever a nerve cell is activated. In one scenario, the ticker tape would be produced by DNA polymerase, an enzyme that builds a new strand of DNA by copying a template strand with a known sequence of nucleotides (the "letters" that are the building blocks of DNA). An influx of calcium ions into the cell, triggered when the nerve cell fires, would cause the polymerase to make "mistakes", producing a sequence of letters different from the expected one. The double-stranded DNA recovered from each nerve cell in the animal's brain could then be sequenced. A new method known as fluorescent in situ sequencing would make it possible to read out the pattern of changes, the errors in the molecular ticker tape, which indicate the strength or timing of each signal in each of the many neurons in a given piece of tissue. In 2012, researchers in Church's laboratory reported a proof of principle for a DNA-based molecular ticker tape whose changes track currents of magnesium, manganese and calcium ions.
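The logic of such a readout can be captured in a few lines of Python. The sketch below is purely conceptual: it assumes a made-up error rate that jumps during a "high-calcium" episode and is not a model of the real enzyme or of the 2012 experiment.

```python
import random

# Conceptual toy of the "molecular ticker tape" idea: a polymerase copies a
# known template, and a burst of calcium (a firing episode) raises its error
# rate, so dense errors in the copy mark when the neuron was active.
# Error rates and window sizes are invented for illustration.
random.seed(0)

bases = "ACGT"
template = "".join(random.choice(bases) for _ in range(300))
calcium_high = set(range(100, 160))              # "firing" during bases 100-159

copy = []
for i, base in enumerate(template):
    error_rate = 0.30 if i in calcium_high else 0.01
    if random.random() < error_rate:
        copy.append(random.choice([b for b in bases if b != base]))  # misincorporation
    else:
        copy.append(base)

# Decoding: count mismatches in windows; dense errors reveal the activity episode.
window = 20
for start in range(0, len(template), window):
    mismatches = sum(t != c for t, c in zip(template[start:start + window],
                                            copy[start:start + window]))
    if mismatches >= 3:
        print(f"putative activity around positions {start}-{start + window - 1}")
```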
Further ahead, synthetic biology may allow the construction of artificial cells that patrol the human body as sentinels. A genetically engineered cell could serve as a biological electrode whose diameter is far smaller than the width of a hair, parked beside a nerve cell to monitor its firing. A nanoscale circuit inside such a cell, a kind of "electronic dust", could record the firing pattern and transmit the data wirelessly to a nearby computer. These hybrid devices, part electronics and part biology, might draw their energy from an external ultrasound transmitter or even from the cell itself, using glucose, adenosine triphosphate (ATP) or another molecule.
On and off switches
To understand what is happening in the brain's vast network of neural circuits, researchers cannot settle for taking pictures. They need to switch selected groups of neurons on or off at will to test what those cells do. Optogenetics, a method widely adopted by neurobiologists in recent years, uses animals genetically engineered so that their neurons produce light-sensitive proteins derived from bacteria or algae. When exposed to light, these proteins activate or silence the nerve cells. Researchers have used the method to activate neural circuits involved in pleasure and other reward responses, and in the impaired movements that characterize Parkinson's disease. They have even used optogenetics to "implant" false memories in mice.
The need for genetic engineering means long approval processes before optogenetics can be tested or used as a treatment in humans. A more practical alternative for some applications binds neurotransmitters, the molecules that regulate the activity of nerve cells, to light-sensitive molecules called "cages". When a cage is exposed to light it breaks apart, releasing the neurotransmitter to act. In a 2012 study, Stephen Rothman of the University of Minnesota, collaborating with Yuste's laboratory, used ruthenium cages loaded with GABA, a neurotransmitter that suppresses neural activity. The researchers placed these cages on the exposed cerebral cortex of a rat in which epileptic seizures had been chemically induced. A pulse of blue light on the brain released the GABA and relieved the seizures. Similar "opto-chemical" approaches are now being used to study the function of particular neural circuits; with further development they could be used to treat certain neurological or psychiatric illnesses.
The road from basic research to clinical application is still long. Any new ideas for measuring and manipulating neural activity on a large scale will have to be tested first in fruit flies, roundworms and rodents before reaching humans. An intensive effort might allow researchers to image and optically control a significant fraction of the roughly 100,000 neurons in a fruit fly's brain within about five years. Devices that record and regulate neural activity throughout the brain of an alert mouse are perhaps a decade away. Certain methods, such as thin electrodes for correcting faulty neural circuits in patients with depression or epilepsy, may enter medical use within the next few years, while others may take a decade or more.
As neurotechnologies grow more sophisticated, researchers will need better ways to manage and share the vast amounts of data they collect. Imaging the activity of all the neurons in a mouse's cerebral cortex would produce some 300 terabits of compressed data every hour. That is not an insurmountable task: sophisticated research facilities, the equivalents of astronomical observatories, genome centers and particle accelerators, could collect, process and distribute such a flood of digital information. Just as the Human Genome Project gave rise to bioinformatics to handle its sequence data, the discipline of computational neuroscience can grow to decipher the workings of the entire nervous system.
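As a back-of-envelope illustration of where numbers of this magnitude come from, here is a short Python calculation. Every quantity in it (neuron count, sampling rate, bits per sample, compression factor) is an assumption chosen for the example, not the basis of the article's estimate; whether the result lands near 300 terabits depends entirely on those assumptions.

```python
# Back-of-envelope sketch of the data volume from whole-cortex imaging.
# All values below are assumptions for illustration only.
neurons = 14_000_000          # assumed number of neurons in mouse cortex
samples_per_second = 1_000    # assumed sampling rate per neuron
bits_per_sample = 10          # assumed precision of each measurement
compression_factor = 2        # assumed compression ratio

bits_per_hour = neurons * samples_per_second * bits_per_sample * 3600 / compression_factor
print(f"~{bits_per_hour / 1e12:.0f} terabits of compressed data per hour")
# With these assumptions the result is roughly 250 terabits per hour,
# the same order of magnitude as the figure quoted in the text.
```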
The ability to analyze petabytes of data will bring order to this flood of new information. More than that, it will pave the way for new theories of how the noise of neural firing is translated into perception, learning and memory. The analysis may also confirm or refute theories that could not be tested before. One intriguing idea holds that the many neurons making up a neural circuit develop specific firing sequences that may represent emergent brain states: a thought, a memory or a decision. In one recent study, a mouse had to choose which arm of a virtual maze, projected on a screen, to enter. The act of choosing activated dozens of neurons and produced dynamic changes in neural activity resembling such a firing sequence.
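A toy Python sketch of how one might search for such recurring firing sequences appears below. The spike latencies are fabricated, with a fixed ordering deliberately planted in half the trials, and the analysis (counting repeated first-spike orderings) is a drastic simplification of the statistics used in real studies.

```python
import numpy as np
from collections import Counter

# Toy search for a recurring firing sequence: rank neurons by first-spike
# latency in each trial and count how often the same ordering repeats.
# The data are fabricated for illustration only.
rng = np.random.default_rng(2)

n_neurons, n_trials = 5, 40
orderings = []
for trial in range(n_trials):
    if trial % 2 == 0:
        # Half the trials contain a planted latency sequence plus jitter ("the motif").
        latencies = np.arange(n_neurons) * 10.0 + rng.normal(0, 1.0, n_neurons)
    else:
        # The rest have random latencies.
        latencies = rng.uniform(0, 50, n_neurons)
    orderings.append(tuple(np.argsort(latencies)))

counts = Counter(orderings)
motif, n_repeats = counts.most_common(1)[0]
print(f"most common firing order {motif} occurred in {n_repeats}/{n_trials} trials")
```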
A better understanding of neural circuits could improve the diagnosis of brain disorders from Alzheimer's disease to autism and provide a deeper understanding of their causes. Instead of diagnosing and treating such conditions on the basis of symptoms alone, physicians could look for the characteristic changes in the activity of the particular neural circuits underlying each disease and target them. Knowledge about the roots of disease is also likely to translate into economic benefits for medicine and biotechnology. As happened with the Human Genome Project, ethical and legal issues will have to be addressed, especially if the research yields methods that can discern or alter mental states. Such methods would demand strict safeguards to ensure patients' informed consent and protect their privacy.
For the various brain-research initiatives to succeed, however, scientists and their backers must stay focused on the goal of imaging and controlling neural circuits. The idea for the BRAIN Initiative grew out of an article we and our colleagues published in the journal Neuron in June 2012, in which we proposed a long-term collaboration among physicists, chemists, nanoscientists, molecular biologists and neurobiologists to produce a "brain activity map" using new methods for measuring and controlling the electrical activity of complete neural circuits in the brain.
We want to emphasize that as the ambitious BRAIN Initiative develops, the original emphasis on building new research tools must be maintained. Brain research is vast in scope, and the initiative could easily become a wish list that tries to satisfy the wide-ranging interests of neuroscience's many subfields. It would then amount to little more than a marginal addition to projects already under way in many laboratories, each working on its own.
If that happens, progress will be piecemeal, and the major technical challenges may never be overcome. What is needed is cooperation across academic disciplines. Building devices to image voltage changes in millions of nerve cells at once, across entire brain regions, will be possible only through a sustained effort by a large, multidisciplinary group of researchers. Eventually the technology could be housed in a large, observatory-like facility shared by the neuroscience community. We remain firmly committed to the need to focus on developing new methods for recording, controlling and decoding the electrical signals that are, in effect, the language of the brain. Without these new tools, we believe, neuroscience will stall, unable to identify the emergent properties of the brain that underlie a virtually limitless range of behaviors. Improving our ability to read and use this language of electrical signals among nerve cells is the most effective route to a comprehensive theory of how nature's most complex machine works.
_______________________________________________________________________________________________________________________________________
About the authors
Rafael Yuste is a professor of biological sciences and neuroscience at Columbia University and co-director of the Kavli Institute for Brain Science. He was recently awarded the NIH Director's Pioneer Award.
George M. Church (Church) is a professor of genetics at Harvard University and the founder of personalgenomics.org, an open information source for data on the human genome, imaging of the nervous system, and behavioral and cognitive traits. He serves on the advisory board of Scientific American.
in brief
How the brain produces conscious thought remains one of the great mysteries of science.
To better understand the brain, neurobiologists need new tools to analyze how neural circuits work.
Methods for recording or regulating the electrical activity of circuits in the brain can meet these needs.
The Obama administration launched a large-scale initiative to promote the development of such methods.
More on the subject
The Brain Activity Map Project and the Challenge of Functional Connectomics. A. Paul Alivisatos et al. in Neuron, Vol. 74, No. 6, pages 970-974; June 21, 2012.
The NIH Brain Initiative. Thomas R. Insel et al. in Science, Vol. 340, pages 687-688; May 10, 2013.
The article was published with the permission of Scientific American Israel