
"Google glasses are the first to take a sequence of photos and contain an algorithm that fuses them into an improved image"

Says Prof. Peyman Milanfar of the University of California, Santa Cruz, at a conference held at the Technion

Prof. Peyman Milanfar with Google Glass at the TCE conference held at the Technion. Photo: Yossi Sharam, Technion Spokesperson

"Clear-eyed people, and those who are not, can differentiate between a picture taken with a high-quality camera and a picture taken with a cell phone camera, but it won't stay that way for long." So said Prof. Feiman Milnefar from the University of California in Santa Cruz, an expert in image processing and artificial vision who recently worked for about a year at Google. He spoke at the fourth annual international conference of the Henry Taub Center for Computer Engineering (TCE) held at the Technion. The team in which Prof. Milnefar worked develops the software for Google Glass.

Prof. Oded Shmueli, the Technion Vice President for Research, who opened the conference, said: "We are in the midst of a process that will lead to a new era. The research areas discussed at the conference, such as artificial intelligence, computer vision and image processing, will affect all areas of our lives. Within a decade, cars equipped with a computer, sensors, a navigation system and radar will be driving on the roads, and these systems will allow them to drive on their own, without the driver's intervention."
"The Computer Engineering Center at the Technion was inaugurated three years ago, and since then it has become a center of excellence leading ground-breaking research," said the head of the center Prof. Assaf Shuster, "We have succeeded in creating a new model for cooperation between academia and industry."
According to Prof. Milanfar, it is difficult, if not impossible, for the simple cameras installed today in cell phones and tablets, and soon in wearable computing devices, to reach the level of dedicated cameras. They lack the moving parts and the complex, heavy lenses that professional cameras have. The need to avoid burdening users, which drives designers to make the devices light and small, also keeps them from competing with the best cameras without running into physical limits: the ever-greater miniaturization of the devices makes it very difficult to get light into them, and what remains is to use sophisticated algorithms to compensate for the miniaturization.
"My role at Google was to develop the field of computerized photography that merges into it a number of old fields such as image processing, photography, computer graphics and computer vision. It includes the development of algorithms, optical hardware and image processing (rendering) techniques," emphasized Prof. Milnefar.

"The principle is quite simple - instead of one picture, a sequence of pictures is taken, and then they are merged into one picture. This can be a very high resolution image, a trivial feature that is made possible by the very use of multiple photographs, but you can also think of other 'tricks', such as taking several pictures from different angles, and calculating the distance to the objects, so that it will be possible to decide on an area in the picture that will look focused and the rest of the picture will be Vague, to get a sense of depth. Another 'trick' would be to capture things that cannot be seen with the eye, such as night vision (using infrared sensors), the ability to notice changes that occur very quickly or very slowly, distinguishing fine details (for example, a baby's breathing movement in cameras installed in children's rooms)."

Scientists (and high-school students) who use a microscope are probably familiar with the phenomenon that, when looking at a sample, only the part at the center of the image appears clear while the other parts are blurred. Fusing the images makes it possible to produce a single photo in which all parts of the sample are sharp and clear. "Google Glass is the first device that contains a camera that, with each click, takes several pictures and fuses them," added Prof. Milanfar.
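As a rough illustration of this kind of focus fusion (not Google's actual algorithm), here is a minimal sketch, assuming the frames are already-aligned grayscale arrays focused at different depths: at each pixel it keeps the value from the frame whose local Laplacian response, a standard sharpness cue, is strongest.

import numpy as np

def laplacian(img):
    # 4-neighbour Laplacian: responds strongly where the image is in focus.
    return (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
            np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)

def focus_stack(frames):
    # frames: aligned grayscale images, each focused at a different depth.
    stack = np.stack([f.astype(np.float64) for f in frames])
    sharpness = np.abs(np.stack([laplacian(f) for f in stack]))
    best = sharpness.argmax(axis=0)          # sharpest frame per pixel
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]           # all-in-focus composite

A production version would smooth the per-pixel decision map to avoid visible seams, but the principle is the same.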

Prof. Amnon Shashua of the Hebrew University, CEO of Mobileye and one of the founders of the start-up company OrCam, described a different concept for camera-based wearable computing. The company has developed a system, comprising a camera and a microphone, that attaches to ordinary glasses. The system allows the visually impaired to point at objects such as street signs, traffic lights, buses or restaurant menus, and the device reads the menu to them or tells them the color of the traffic light.
"OrCam's concept is different from Google's - not taking a picture every time the user requests it, but taking continuous video and immediate processing. This requires completely different assessments in terms of hardware and in particular in regards to energy consumption," said Professor Shashua.

