
The robot that learns by reading minds

The developers of the future are working on a robot that will perform actions after reading signals transmitted directly from the brain, without its owner lifting a finger. From folding laundry to "brain games"

Furby toys from two generations. On the right is a "classic" Furby, capable of speaking words; on the left is a 2005 Furby, which can already recognize voices. Photo: Wikimedia Commons

One of the signs that a technology is starting to mature is its appearance in toys. The first domestic robot, the "Furby" toy from 1998, is a good example: for the first time, the technology was reliable and cheap enough to create a robot that moved parts of its body, responded to sounds and movements, and seemed to learn the language of the family it joined.
In fact, the robots were programmed to speak English words from a vocabulary that grew over time after they were first activated, creating the illusion that they were learning words spoken around them. The illusion was reinforced by the fact that the robot was programmed to repeat words from its built-in vocabulary more often if its owners petted it every time it uttered them. (The illusion was so strong that intelligence agencies forbade keeping the toy in their offices, for fear that it would repeat words it heard there...)

By the power of thought

Now it is the turn of "mind reading" technology: about a year ago, the American toy company Mattel began marketing the Mindflex game (and see the link at the end of the column). The game includes a board with holes arranged in a ring, each of which blows a jet of air upwards. A ball placed over such a hole floats at a height that depends on the strength of the jet. Into the headband that the player wears, several sensors are built that measure the electric field created by the brain.
The player learns to concentrate so that the strength of the jet increases or decreases, and with a manual control the player can move the ball between the jets. The headband is not sophisticated enough to also allow "right" or "left" commands - the player's brainwaves are translated only into "up" or "down" commands. This creates challenges of moving the ball through various obstacle courses.
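As a rough illustration (and not Mattel's actual firmware), a single "concentration" value read from such a headband could be turned into up/down commands by simple thresholding. The function name, value range and thresholds below are hypothetical:

```python
# A minimal sketch of mapping a scalar "concentration" level (assumed to be
# in the range 0.0-1.0) to commands that strengthen or weaken the air jet.
# This is an illustrative toy, not the Mindflex implementation.

def fan_command(concentration, low=0.35, high=0.65):
    """Translate a concentration level into an up/down/hold command."""
    if concentration > high:
        return "up"      # stronger jet, the ball rises
    if concentration < low:
        return "down"    # weaker jet, the ball sinks
    return "hold"        # keep the current jet strength

# Example: a short sequence of simulated readings.
for level in [0.2, 0.5, 0.8, 0.7, 0.3]:
    print(level, "->", fan_command(level))
```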
In research laboratories around the world, much more advanced mind reading is already possible: in September 2010, electrodes were implanted in the speech centers of an epilepsy patient during brain surgery to treat his disease. The researchers, led by Prof. Bradley Greger of the University of Utah (see link at the end of the column), then asked him to repeat ten words several times: yes, no, hot, cold, hungry, thirsty, hello, goodbye, more and less.
Computers connected to the sensors learned the pattern of brain activity for each of the words, and were then able to identify which word was spoken with 76-90% accuracy when required to distinguish between two possible words at a time (which is certainly useful for questions whose answer is "yes" or "no", "more" or "less", etc.). When required to identify a word out of all ten, the accuracy dropped to 24-48%: better than random guessing, but still not enough for practical use. This patient was able to speak, but the researchers see great potential in this development for helping patients who have lost the ability to communicate in any other way.
Mind reading does not necessarily require brain surgery. Using fMRI imaging, impressive successes have been achieved in recent years in mapping the areas of the brain associated with particular concepts. According to one report, in many of us the same brain areas are activated when we think of concepts such as "screwdriver" or "window", so it is possible to tell when a new subject, whose brain patterns the computer has not yet "learned", is focusing his thoughts on one of these concepts - based on the experience the computer acquired while working with other subjects.
Most of us would prefer not to undergo brain surgery or be confined inside an fMRI machine just so that a computer can recognize a word we thought of, and even then only on condition that we chose the word from a limited vocabulary. As the technology develops, these limitations will likely be removed, opening up many practical uses: mind reading may appear first as an aid for the paralyzed, later as a way to help people working in difficult conditions (such as astronauts during takeoff), and eventually in much more widespread use (if the solution is easy to use and cheap enough, it will surely be good news for those who tend to forget where they left the remote control of the TV, the DVD player, the cable box, and so on).

Decoding signals and learning from them

From the point of view of artificial intelligence, such "mind reading" presents several challenges. The first is "translating" the readings of the sensors or the fMRI images: these readings contain a large amount of data, and each time the person thinks of the same word the readings will be somewhat different. Software is needed that finds what is common to readings measured for the same word and what distinguishes them from readings for different words. This is a well-known challenge in the field of machine learning: classifying input data into one of the groups known to the software (a challenge similar in nature, though perhaps harder, is classifying images by identifying the objects that appear in each one).
This is an old field in artificial intelligence, and there are many techniques for learning an effective classification process. Most share a process that starts with learning from one data set - the "learning set". In our case, the computer is given the measurements or images produced from the brain at the moment the person thought of words from the learned list, and alongside each measurement the computer is also told which word was chosen at that moment. The computer looks for patterns in the data that repeat for the same word and that are as distinct as possible from the patterns detected for all the other words. In the second stage, the quality of the learning is checked by having the software identify the word, where this time the word is known to the software developer but not to the software itself. In this way one can check what level of accuracy the learning has reached.
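Here is a minimal sketch of that two-stage process, using synthetic data in place of real brain readings. The word list, feature vectors and choice of classifier are illustrative assumptions, not the method used by any of the groups mentioned above:

```python
# Stage 1: learn patterns from a "learning set"; Stage 2: check accuracy on
# readings the software has not seen. The "brain readings" are simulated.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
words = ["yes", "no", "hot", "cold"]

# Simulate 40 noisy readings per word (100 features each), where each word
# has its own underlying activation pattern.
X = np.vstack([rng.normal(loc=i, scale=3.0, size=(40, 100))
               for i in range(len(words))])
y = np.repeat(words, 40)

# Stage 1: train a classifier on the learning set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="linear").fit(X_train, y_train)

# Stage 2: the true word is known to us but not to the classifier.
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```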
Learning may also enable recognition of words that were not included in the learning set. In April 2010, Intel reported on software it developed together with Carnegie Mellon University and the University of Pittsburgh. After this software learns the patterns of brain activity measured in fMRI for certain words, it also manages to work out which new word the subject is concentrating on. The researchers explain it roughly as follows: some of the brain areas identified as active when the activity for the word "puppy" was studied will also be activated when thinking about the word "bear", but alongside them we also see activation of the amygdala, a brain area that participates in "fight or flight" reactions.
This combination allows the software to guess that the word is "bear", or at least an animal with fur that is perceived as threatening. Reaching a high level of independence in this kind of identification requires semantic information about many words and an understanding of how semantic connections affect brain activity, and from the report on the Intel project it is not clear how general the analysis performed there was and how much human involvement it required.
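The general idea can be sketched as follows: describe each word by a handful of semantic features, learn how those features map to brain activation from the learning words, and then identify a new word by whichever candidate's predicted activation best matches the measurement. This is an illustrative toy, not the Intel/CMU software; all words, features and activation values below are invented:

```python
# Recognizing a word outside the learning set via semantic features.
import numpy as np

# Semantic features: [has_fur, is_threatening, is_small]
learning_words = {
    "puppy": [1.0, 0.0, 1.0],
    "knife": [0.0, 1.0, 1.0],
    "horse": [1.0, 0.0, 0.0],
}
new_word_features = {"bear": [1.0, 1.0, 0.0], "table": [0.0, 0.0, 0.0]}

# Invented "ground truth": how the 3 features drive 5 brain-region activations.
rng = np.random.default_rng(1)
true_map = rng.normal(size=(3, 5))

def measure_activation(features):
    """Simulated noisy brain reading for a word with the given features."""
    return np.asarray(features) @ true_map + rng.normal(scale=0.05, size=5)

# Learn the feature-to-activation mapping from the learning words only.
F = np.array(list(learning_words.values()))
A = np.vstack([measure_activation(f) for f in learning_words.values()])
learned_map, *_ = np.linalg.lstsq(F, A, rcond=None)

# A reading for a word that was never in the learning set ("bear"):
observed = measure_activation(new_word_features["bear"])

# Guess the candidate whose predicted activation is closest to the reading.
best = min(new_word_features,
           key=lambda w: np.linalg.norm(
               np.asarray(new_word_features[w]) @ learned_map - observed))
print("best match:", best)   # expected: bear
```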

Learning from observation, learning by doing

In other studies the goal is to learn not words but actions. Like a child who watches an adult and learns to perform actions by imitation, there are robots that can learn by watching humans. The robot DiGORO, for example, developed at UEC (the University of Electro-Communications) in Tokyo, learns to perform actions such as putting objects into a bucket while holding a dialogue with the human demonstrator, as can be seen in the video linked at the end of the column.
After learning the action, the robot can repeat it even when asked to put other objects into the bucket. DiGORO is about 150 cm tall and moves from place to place on a Segway adapted to its needs. It can also recognize faces and voices and learn to recognize people who are introduced to it.
In other projects, the learning of human movements is more precise, because they rely not only on video, as DiGORO does, but on motion sensors worn by the person demonstrating, combined with video. An example is the COGNITO project, which began in January 2010 with funding from the European Union. One of the project's goals is to learn complex movements used in industrial production processes.
The "mind reading" technology enables another step forward in robotic learning: activating the robot with the help of signals from the brain itself, so that the robot can then repeat the same actions independently. About a year ago, the Honda company, the manufacturer of the famous ASIMO robot, demonstrated an EEG helmet that measures electrical activity of the brain and transmits them to the command for the robot to raise an arm or a leg.
In the group of Dr. Rajesh Rao at the University of Washington, the learning is more advanced: initially the robot is equipped with a few basic actions, such as walking, and the human operator practices giving commands through the helmet to perform them. More complex actions are learned as a series of simple actions, and once learned they receive a new command of their own that activates them. In this way the robot can be controlled at an increasing level of complexity, with the robot knowing how to break a complicated command down into a series of simpler commands, some of which are themselves composed of basic steps.
Dr. Rao compares this to hierarchical actions in humans: "For example, a behavior like driving a car is learned at first, but then becomes an almost automatic action, which frees you to recognize a friend and wave to him while driving" (it is not clear whether the National Road Safety Authority would encourage such examples).
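The idea of commands built out of simpler commands can be sketched in a few lines of code. The sketch below is a generic illustration of hierarchical command decomposition, not Dr. Rao's actual system, and all command names are hypothetical:

```python
# Complex actions are stored as sequences of simpler ones; a command is
# expanded recursively down to basic steps the robot already knows.
BASIC = {"step_left", "step_right", "raise_arm", "lower_arm"}

MACROS = {
    "walk": ["step_left", "step_right"],
    "wave": ["raise_arm", "lower_arm"],
    "greet_and_walk": ["wave", "walk", "walk"],
}

def expand(command):
    """Recursively break a command down into basic steps."""
    if command in BASIC:
        return [command]
    steps = []
    for sub in MACROS[command]:
        steps.extend(expand(sub))
    return steps

print(expand("greet_and_walk"))
# -> ['raise_arm', 'lower_arm', 'step_left', 'step_right', 'step_left', 'step_right']
```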
In a sense, these ideas are not new: the third part of the book "My Name Will Be the Hangman" by the science fiction writer Roger Zelazny (the story also appeared separately in English as the novella "Home Is the Hangman", 1975) features an autonomous robot that reached a high level of artificial intelligence and independence through a learning process.
In this process, the robot was remotely controlled by its creators, who used a helmet to transmit signals from their brains to the robot and to directly sense the state of its electronic mind. Through this process the robot's intelligence and personality were shaped, so it is no wonder that it treats its creators as parents. Zelazny uses the technological idea to discuss questions of responsibility, guilt and atonement in a fascinating and surprising story, not necessarily to predict the future. Still, it is hard not to see the similarity between the story written 35 years ago and contemporary developments.
Today the main limitation is the imprecision of reading brain activity through EEG sensors that measure electrical potentials on the scalp. These signals cannot reflect the brain's activity in detail, and therefore Dr. Rao does not expect the helmets he currently uses to enable performing or learning operations that require fine motor skills, such as opening a medicine bottle or tying shoelaces.
When such problems are solved and the technology is cheap and reliable enough to be used in every home, it will certainly lead to a new type of game ("brain games" instead of "computer games"?), but it will also serve more prosaic needs: at first we could teach a robot how to iron a shirt or fold pants, and how to wash and dry laundry; then we could give just one command - "do the laundry" - and use that command to build even more complex behaviors, all without lifting a finger, only by thinking.
And if all this does not save enough time, a new profession will surely develop - "robot trainers" who will teach home and personal robots everything they need to do, without the robot owners having to demonstrate the actions with their own hands even once. Long before this "brave new world" is realized, such methods could revolutionize the lives of the disabled and paralyzed, make life easier for the elderly and give them independence, and open a new channel for our minds to interact with the world around us.

Links:

MindFlex game

Decoding signals from the brain by recognizing the spoken word

Mind reading in fMRI imaging

Mind reading that also recognizes words that have not been learned in advance

The DiGORO robot learns from watching human actions

Cognito project where human movements are learned by motion sensors

A robot that learns from the brainwaves of a human guide

Israel Binyamini works at ClickSoftware developing advanced optimization methods

17 comments

  1. Brain surgery??
    It's half past ten now
    Do you want me to have bad dreams (God forbid)?????????????????????
    incidentally
    Where do you get a Furby?
    at a discounted price???
    That sounds very entertaining
    for the big holiday
    All in all, I'm looking forward to the big holiday
    What fun!!!!!!!!!!!
    But I'm going to work in my mother's garden
    From eight

  2. They implant the chips with the cables in people's bodies and brains against their will, only they don't tell you about it. I have chips and cables that run from my head, my brain, my whole body and my hands, and despite MRI scans and isotope scans, medicine refuses to remove them.
    Don't believe everything you see; maybe you too have cables in your body. Get MRI and isotope scans and check for yourself, because the doctors won't tell you the truth.
    These cables cause cancer and psoriasis and other diseases.
    It's very real. And it's hard to fight them. They even refuse to give me legal aid, not to mention the press, which cooperates in keeping the whole truth quiet.
    I have photos that will make anyone anxious. You have been warned.

  3. To Tomer
    It cannot be said that everyone's brain works the same way. First, it clearly depends on the person: you may find people in whom similar areas are activated when hearing several words such as bear, dog, chair, etc., but in those same people there will certainly be differences for other words.

    When we are born we are already different, and each one develops his brain, his first experiences (which are probably the basis of the following associations) in a different way. Therefore, it cannot be said that everyone's brain works the same way.

    There are similar things, but a low common denominator can be reached in reactions measured, for example, by an actual machine (not a brain, in this case), or by teaching a few commands such as forward, right... (or heel, come, down) to the software connected to the helmet.

    It would be cool if there were a helmet or some kind of non-invasive equipment that could read the activity of the whole brain at high resolution and provide electrical and other information (such as physical imaging, for example ultrasound, and maybe certain chemical activity...). Then we could play with the information, explore it and create amazing applications, such as eliminating the mouse and keyboard, which would increase productivity drastically and shorten the entry into the world of computing and typing. Or amazing games without the need for a hand-held remote.

  4. man"
    Another thing,
    When you imagine a bear and when you see a bear, the same areas of the brain are activated.
    What's more, when you watch a person run, your brain creates responses as if you were running yourself.

  5. man",
    Mind reading as described here reads thoughts and not words; therefore the thought of Dov's brother (Dov being both a man's name and the Hebrew word for bear) will be different from the thought of a bear.
    The same goes for words with a double meaning;
    For example, the Hebrew word "tovea": in its legal sense of a plaintiff, it creates an image of a person fighting, perhaps for the justice he believes in, in court; in this context, areas of the brain related to the sense of justice and judgment may be activated.
    On the other hand, the same-sounding word meaning "drowning" brings up associations of a person whose life is in danger, so other areas of the brain will operate - those related to mortal danger, warning, or perhaps the desire to rescue and altruism.

  6. How does the brain perceive objects or words?

    When I say a bear, or concentrate on the word bear, I don't see a bear in my mind's eye.
    And when I see a bear, my concentration is not on the word bear.

    And what about Dov's brother? (Dov is both a man's name and the Hebrew word for bear.) If part of the perception of the word is associative, what parts of his brain work when he thinks of the word bear?

    And what about words with double meaning? (The article talks about "reading thoughts" as recognition of words).

    At the moment it seems that mind reading is more like training a dog:
    "Heel", "Come", "Down"

  7. Where is the promised link?
    "began marketing the Mindflex game (and see the link at the end of the column)"

  8. But no matter what… cool! And besides, I know of several "future" developments :) that will focus on dramatically shrinking these devices...

  9. The phrase is funny: the developers of the future... after all, the developers of the present are the ones who develop the future... according to this the "developers of the future" are not yet developing...

  10. Terrible deception!
    There is a reason that fMRI devices are large (they must generate a strong magnetic field) and that detailed brain readings are invasive. The only non-invasive technology, which is also the only one that can be turned into toys, is capable of only a very limited number of commands, and it is far more efficient to use hands, which are a million times more flexible.
    Maybe in the future there will be non-invasive technology with sufficient brain resolution. Today there is none.

  11. interesting.
    I would like to put a mind reading device on the heads of politicians.
    I am sure that the computer will not find any difference between "yes" and "no". In general, I doubt whether they will discover signs indicating the existence of a brain in them.
