So says Prof. Peyman Milanfar of the University of California, Santa Cruz, speaking at a conference at the Technion
"Clear-eyed people, and those who are not, can differentiate between a picture taken with a high-quality camera and a picture taken with a cell phone camera, but it won't stay that way for long." So said Prof. Feiman Milnefar from the University of California in Santa Cruz, an expert in image processing and artificial vision who recently worked for about a year at Google. He spoke at the fourth annual international conference of the Henry Taub Center for Computer Engineering (TCE) held at the Technion. The team Prof. Milnefar worked with develops the software for Google Glass.
Prof. Oded Shmueli, the Technion's Vice President for Research, who opened the conference, said: "We are in the midst of a process that will lead to a new era. The research areas discussed at the conference, such as artificial intelligence, computer vision and image processing, will affect every area of our lives. Within a decade, cars equipped with a computer, sensors, a navigation system and radar will be traveling the roads, and these systems will allow them to drive themselves without the driver's intervention."
"The Computer Engineering Center at the Technion was inaugurated three years ago, and since then it has become a center of excellence leading ground-breaking research," said the head of the center Prof. Assaf Shuster, "We have succeeded in creating a new model for cooperation between academia and industry."
According to Prof. Milanfar, the simple cameras installed today in cell phones and tablets, and soon in wearable computing devices, will find it difficult, perhaps impossible, to reach the level of dedicated cameras. They lack the moving parts and the complex, heavy lenses that professional cameras have, and the need to keep the devices light and small so as not to burden users means they cannot compete with the best cameras without running into physical limitations. The ever-shrinking size of the devices makes it very hard to get light into the sensor, and what remains is to use sophisticated algorithms to compensate for the miniaturization.
"My role at Google was to develop the field of computerized photography that merges into it a number of old fields such as image processing, photography, computer graphics and computer vision. It includes the development of algorithms, optical hardware and image processing (rendering) techniques," emphasized Prof. Milnefar.
"The principle is quite simple - instead of one photo, you take a series of photos, and then merge them into one photo. This can be a very high-resolution photo, a trivial feature that is made possible by the very use of multiple photos, but you can also think of other 'tricks', such as taking several Pictures from different angles, and calculating the distance to the objects, so you can decide on an area in the picture that will look focused and the rest of the picture will be blurry, to get a sense of depth. Another 'trick' would be to capture things that are not It is possible to see with the eye, such as night vision (using infrared sensors), the ability to notice changes that occur very quickly or very slowly, distinguishing fine details (for example, a baby's breathing movement in cameras installed in children's rooms)."
Scientists (and high-school students) who use a microscope are probably familiar with the phenomenon whereby, looking at a sample, only the part at the center of the image appears sharp while the rest is blurred. Fusing images focused at different depths produces a single photo in which all parts of the sample are sharp and clear. "Google Glass is the first device whose camera, with each click, takes several pictures and fuses them," added Prof. Milanfar.
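The microscope example is classic focus stacking. Below is a minimal sketch of the standard approach, under the assumption that the frames are already aligned: measure per-pixel sharpness in each frame of the focal stack with a Laplacian filter, then keep each pixel from whichever frame is sharpest there. The function names are illustrative; production implementations also smooth the selection map to avoid visible seams.

```python
import numpy as np

def box_blur(img):
    """Cheap 4-neighbour blur, used here only to fake an out-of-focus region."""
    return (img + np.roll(img, 1, 0) + np.roll(img, -1, 0) +
            np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 5.0

def laplacian_energy(img):
    """Local sharpness: absolute 4-neighbour Laplacian response.
    In-focus regions show strong local contrast, blurred ones do not."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)
    return np.abs(lap)

def focus_stack(stack):
    """Per pixel, keep the value from the sharpest frame in the stack."""
    stack = np.asarray(stack)                        # shape (n, H, W)
    sharp = np.stack([laplacian_energy(f) for f in stack])
    best = np.argmax(sharp, axis=0)                  # winning frame index
    return np.take_along_axis(stack, best[None], axis=0)[0]

# Demo: two frames of the same scene, each sharp in a different half.
rng = np.random.default_rng(1)
scene = rng.random((64, 64))
top_sharp = scene.copy();    top_sharp[32:] = box_blur(scene)[32:]
bottom_sharp = scene.copy(); bottom_sharp[:32] = box_blur(scene)[:32]

fused = focus_stack([top_sharp, bottom_sharp])
print("error vs. all-sharp scene:", np.abs(fused - scene).mean())
```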
Prof. Amnon Shashua of the Hebrew University, CEO of Mobileye and a founder of the start-up OrCam, described a different concept for camera-based wearable computing. The company has developed a system, consisting of a camera and a microphone, that attaches to ordinary glasses. It allows the visually impaired to point at objects such as street signs, traffic lights, buses or restaurant menus, and the device reads the menu aloud or tells them the color of the traffic light.
"OrCam's concept is different from Google's - not taking a picture every time the user requests it, but continuous video recording and immediate processing. This requires completely different assessments in terms of hardware and in particular with regard to energy consumption," said Professor Shashua.
In the photo: Prof. Peyman Milanfar wearing Google Glass.
Photo: Yossi Sharam, Technion Spokesperson's Office.