Researchers have found that humans are able to classify data using less than 1 percent of the original information
Humans learn very quickly to recognize complex objects, and to notice changes in them. We normally recognize the letter Aleph regardless of the font, texture or background in which it appears, or the face of a friend even when she is wearing a hat or after she changes her hairstyle. The human brain can also recognize an object when only a small part of it is visible, for example the corner of a bed or part of a door hinge. How is this done? Are there simple methods that humans use across a variety of different tasks? And could computer programs imitate these methods to improve capabilities and performance in computer vision, machine learning or robotics?
Researchers from the Georgia Institute of Technology discovered that humans are able to classify data using less than 1 percent of the original information, and developed an algorithm that describes how humans learn - a method that can also be used for machine learning, data analysis and computer vision. "How do we manage to understand such a large amount of data that comes to us from the environment, consisting of so many types, at such a high speed and in such a reliable way?" said Santosh Vempala, a professor of computer science at the Georgia Institute of Technology. "We ask - at the fundamental level, how do humans do this? This is actually a computational problem."
The researchers examined humans with "random projection" tests in order to understand how they learn about objects. Subjects were first shown the original abstract pictures of the objects, and were then asked whether they could identify the same picture when only a small random sketch of it was shown. "We hypothesized that random projection may be one of the ways in which humans learn their environment," explains the researcher. "In short, our assumption was correct: only 0.15 percent of the entire data is needed for humans to be able to identify the object." Next, the researchers tested a computer algorithm on the same tasks, using very simple neural networks as the machine model. The machines performed the tasks at the same level as the humans, a result that provides new insight into how humans learn. "We found evidence that humans and the neural network do indeed work in a similar way," notes the lead researcher. The research findings were published in the scientific journal Neural Computation. This is believed to be the first study of "random projection" in human subjects, the core component of the theory the researchers devised.
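The article does not spell out what a random projection is computationally. As an illustration only (the dimensions below match the study's 150 x 150 images, but the Gaussian projection matrix and its scaling are standard textbook choices, not details taken from the paper), here is a minimal numpy sketch showing that projecting a 22,500-dimensional vector down to a few dozen random coordinates approximately preserves distances between points:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup (not from the article): treat each 150x150 image
# as a flat 22,500-dimensional vector, as in the study's stimuli.
d = 150 * 150            # original dimension
k = 34                   # roughly 0.15 percent of the original data

# Random projection matrix with Gaussian entries, scaled so that
# Euclidean distances are preserved in expectation
# (the Johnson-Lindenstrauss style of random projection).
R = rng.normal(0.0, 1.0 / np.sqrt(k), size=(k, d))

# Two arbitrary "images" as flat vectors.
x, y = rng.normal(size=d), rng.normal(size=d)

orig = np.linalg.norm(x - y)          # distance in 22,500 dimensions
proj = np.linalg.norm(R @ x - R @ y)  # distance after projecting to 34
print(f"original distance: {orig:.1f}, projected distance: {proj:.1f}")
```

The projected distance comes out close to the original one, even though the projection keeps only a tiny fraction of the coordinates' worth of information; that is the property that makes the method useful for fast, approximate comparisons.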
To test the validity of their theory, the researchers created three sets of abstract images at a resolution of 150 x 150 pixels, and then small random sketches of those images. The human subjects viewed the large pictures for 10 seconds, after which they were randomly presented with 16 sketches of each picture. "We were surprised to find how similar the performance of simple neural networks is to that of humans," says the researcher. Although the researchers cannot categorically claim that the human brain really works only through random projection, the results support the claim that this mechanism is a reasonable possibility. The results also suggest a possible use in machine learning: analyzing a huge data collection is a major challenge today, and random projection may be one way to process data efficiently without losing essential content, at least for basic tasks such as classification and decision-making. The theory of learning based on random projection has already been cited more than 300 times and has become a useful and widespread method in machine learning for analyzing many different types of data.
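To illustrate the classification claim concretely, here is a sketch using entirely synthetic data (the study's actual images are not reproduced here, and the nearest-centroid classifier is a stand-in chosen for simplicity, not the study's method): two classes of high-dimensional points remain separable after a random projection from 22,500 dimensions down to 34.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: two classes of 22,500-dimensional points scattered
# around different centers, mimicking flattened 150x150 images.
d, k, n = 150 * 150, 34, 40
centers = rng.normal(size=(2, d))
X = np.repeat(centers, n // 2, axis=0) + 0.5 * rng.normal(size=(n, d))
labels = np.repeat([0, 1], n // 2)

# Project every point down to k dimensions with one random matrix.
R = rng.normal(0.0, 1.0 / np.sqrt(k), size=(k, d))
Z = X @ R.T                                   # k-dimensional sketches

# Nearest-centroid classification entirely in the projected space.
cz = np.array([Z[labels == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((Z[:, None, :] - cz[None]) ** 2).sum(-1), axis=1)
acc = (pred == labels).mean()
print(f"accuracy after projecting {d} dims to {k}: {acc:.2f}")
```

Because the projection approximately preserves distances, the two clusters stay apart in the 34-dimensional sketch, so the simple classifier still separates them, which is the sense in which classification can survive a drastic reduction of the data.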
One response
"Random projection may be one way to process data (in the brain)"
A really confusing and unclear article. Random projection was the type of test given to the subjects (and to the neural networks), but we have long known that humans (and neural networks) can recognize objects even when only a very small part of them is visible.
So what is the big innovation here? And what does "the brain works by the method of random projection" even mean?