Teaching the computer to see like the human eye

Has the picture changed slightly? Artificial neural networks may have a hard time deciphering it

Artificial vision. Illustration: depositphotos.com

Artificial intelligence is a branch of computer science that studies how to program computers to act like the human brain. At its core is deep learning: algorithms that allow computer systems to learn from previous examples and experience, and thus perform a wide variety of computational tasks.

Prof. Yair Weiss of the School of Engineering and Computer Science at the Hebrew University researches artificial intelligence, deep learning and computer vision (the field that studies how computers analyze images in order to extract and interpret the visual information in them). One of the main topics he focuses on, which combines all three areas, is artificial neural networks: computational mathematical models (algorithms) developed by inspiration from the networks of neurons (nerve cells) in the human brain. An artificial neural network consists of many information units (inputs and outputs) that are linked to one another and pass data from one to the next. Each link carries a number that expresses the strength of the connection between the units, and ultimately the connectivity creates intelligence. Such networks can be used in almost every computer application, for example identifying objects in photos and videos, interpreting medical imaging, robotics and autonomous transportation.
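The structure described above, units connected by numbers that express connection strengths, can be sketched in a few lines of code. This is a generic illustration, not a model from Prof. Weiss's research; the layer sizes and random weights are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny feed-forward network: 4 input units -> 3 hidden units -> 2 outputs.
# Each entry of W1 and W2 is one "number expressing the strength of the
# connection" between two units.
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(2, 3))

def forward(x):
    h = np.maximum(0.0, W1 @ x)  # hidden units pass data onward (ReLU)
    return W2 @ h                # output scores, e.g. "dog" vs. "cat"

x = np.array([0.5, -1.0, 0.2, 0.7])  # one input example
print(forward(x).shape)              # two output scores
```

Learning, described in the next section, amounts to gradually changing the entries of `W1` and `W2`.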

In the last decade, artificial neural networks have been built that led to breakthroughs in many fields, including computer vision. According to Prof. Weiss, "Deep learning, on which the neural networks are based, and computer vision go hand in hand. In our research we try to teach computers to see similarly to the human eye. We give them many examples (input), and using the neural networks they are supposed to learn to see, for example to identify details in images and to differentiate between them. In practice, we change the numbers that connect the neurons until the network produces the correct output. For example, we upload many pictures of dogs and cats to the computer; it activates neurons that recognize them, and these are supposed to decide what is a dog and what is a cat."
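The learning step Prof. Weiss describes, showing labeled examples and repeatedly adjusting the connection numbers, can be sketched with a minimal gradient-descent loop. Toy 2-D points stand in for the dog and cat photos, and a single-layer model stands in for the full network; both are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # toy labels: "dog" (1) vs. "cat" (0)

w = np.zeros(2)  # the "numbers that connect the neurons", initially untrained
b = 0.0
lr = 0.5
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # current predictions
    grad_w = X.T @ (p - y) / len(y)         # how to nudge each weight
    grad_b = np.mean(p - y)
    w -= lr * grad_w                        # adjust the connection strengths
    b -= lr * grad_b

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
acc = np.mean((p > 0.5) == (y == 1))
print(f"training accuracy: {acc:.2f}")
```

After training, the weights have been changed so that the outputs match the example labels, which is the whole mechanism behind "learning from previous examples."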

"However," notes Prof. Weiss, "these networks are not yet as developed as the human brain. They can make significant errors due to only small changes in the examples. If, for example, we move an image of a dog by one pixel, the computer may recognize it as a different animal, which of course would not happen to the human visual system; a human being will decode an image accurately even if it changes a little."

What is the question? Why do artificial neural networks fail, and what can be done to make them work better?

Prof. Weiss and his team are therefore focusing on developing neural networks that generalize better and are more accurate in recognition. In their latest study, which won a research grant from the National Science Foundation, the scientists set out to examine why the networks fail and how they could be made to work better. They ran the networks on computers and showed them pictures, for example of animals. At first the networks recognized the content of the images accurately, but when the researchers made slight changes, for example moving an image one pixel to the right or enlarging it slightly, the networks became 'confused' and, for example, identified a ferret as a cat or as a sea lion.
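A one-dimensional toy example can show how a one-pixel shift "confuses" a network. Many networks reduce image resolution by keeping every second pixel (stride-2 downsampling); this sketch, a simplification and not the networks from the study, shows that shifting the input by a single pixel can completely change what survives that step.

```python
import numpy as np

signal = np.array([0, 9, 0, 9, 0, 9, 0, 9])  # a fine-grained pattern
shifted = np.roll(signal, 1)                 # the same pattern, moved 1 pixel

down = signal[::2]        # stride-2 downsampling keeps [0, 0, 0, 0]
down_shift = shifted[::2] # after the shift it keeps [9, 9, 9, 9]

print(down, down_shift)
```

Although the two inputs depict the same pattern, everything after the downsampling step sees two entirely different signals, so the network's answer can change drastically.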

The researchers then conducted a mathematical analysis of the data and discovered that the reason for the networks' failures lies in the sampling theorem. This mathematical theorem (the Nyquist-Shannon theorem) describes, among other things, the relationship between an image and the pixels sampled from it: how densely pixels must be sampled in order to reconstruct the image faithfully. The researchers found that the networks violate the sampling theorem when they downsample pixels, and therefore err in recognizing an image that has changed slightly. They are now trying to fix this, that is, to teach the networks to satisfy the sampling theorem within a short computation time.
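The information loss the sampling theorem warns about is called aliasing: when a signal is sampled too sparsely, two genuinely different signals produce identical samples. The demonstration below uses sine waves, the textbook setting for the theorem, rather than the images from the study.

```python
import numpy as np

t = np.arange(8) / 8.0           # 8 samples per second
low = np.sin(2 * np.pi * 1 * t)  # 1 Hz sine, safely below the 4 Hz Nyquist limit
high = np.sin(2 * np.pi * 9 * t) # 9 Hz sine, well above the Nyquist limit

# At this sampling rate the two different signals give identical samples:
print(np.allclose(low, high))
```

From the samples alone the 9 Hz signal is indistinguishable from the 1 Hz one; analogously, a network that samples pixels too sparsely cannot tell apart inputs that differ only by a small shift.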

Prof. Yair Weiss

Says Prof. Weiss: "We realized that artificial neural networks are not yet resistant to small changes. When they are taken out of their comfort zone, they fail. Therefore, at this stage it is not certain that they can be trusted in various computer applications. For example, if the image from a camera in an autonomous vehicle based on one of the networks we studied underwent a reasonable change, such as zooming in or out, the car could get confused; for example, it might fail to recognize traffic signs, or recognize them only part of the time. That is why our goal today is to understand how the networks can be developed and advanced so that they decode visual information accurately, similarly to the human eye."

Life itself:

Yair Weiss is a professor of computer science, lives in Jerusalem, is married and has four children (ages 10, 15, 17.5 and 20). In addition to being a researcher, he is a consultant to the technology company Mobileye and the owner of a football team - Hapoel (Katmon) Jerusalem.

For the article on the Voice of Science website

Comments

  1. Mr. Porat
    Inhibitory synapses are not necessary for object recognition; they are necessary for negating a recognition.

  2. To Mike: "off synapses" in artificial neural networks would be expressed as a negative weight on the corresponding connection. But negative numbers are not essential in these networks, because the calculations are insensitive to a uniform constant shift, so everything the network computes can be obtained with non-negative numbers with the same degree of success.

    In other words: inhibitory dynamics are an inherent part of artificial networks, without any need to define them as such.

  3. "And ultimately the connectivity creates intelligence." - A bit too strong a claim in the current state of affairs.

  4. By the way, this is called the "inhibitory connections model". According to this model, inhibitory synapses are of great importance in preventing false identification between similar objects.

  5. There is something beyond the sampling rate, and that is inhibitory synapses. Do artificial neurons include inhibitory synapses? It is possible that inhibitory synapses serve to prevent identifying a ferret as a cat - that is, to reduce the chance of an action potential when the stimulus does not match the memory.
