
Ten fingers on the screen

Computer monitors that recognize multiple touches at the same time could improve teamwork without the need for a mouse or keyboard

By Stuart P. Brown

Microsoft's multi-user touch screen

When Apple released its iPhone in 2007, it also introduced the general public to multi-touch displays, screens that recognize several touches at the same time. You can drag an image across the screen with one finger, or zoom in and out by spreading two fingertips apart or pinching them together. The pleasure of touching the interface, quite apart from its usefulness, quickly won the device praise. Operating it was intuitive, even sensual. But by the time the iPhone launched, multi-touch displays in laboratories around the world had already progressed far beyond two-finger commands. Engineers have developed much larger displays that respond to ten fingers at once, and even to many hands belonging to many users.
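The pinch gesture described above comes down to simple geometry: the zoom factor is the ratio of the current distance between the two fingertips to their distance when the gesture began. A minimal sketch (the function name is illustrative, not Apple's actual API):

```python
import math

def pinch_zoom(start_a, start_b, cur_a, cur_b):
    """Return the scale factor implied by a two-finger pinch.

    Each argument is an (x, y) fingertip position; the factor is the
    ratio of the current finger separation to the starting separation.
    """
    d0 = math.dist(start_a, start_b)   # separation when the pinch began
    d1 = math.dist(cur_a, cur_b)       # separation now
    return d1 / d0 if d0 else 1.0

# Fingers move apart, so the image grows to twice its size.
print(pinch_zoom((100, 100), (200, 100), (50, 100), (250, 100)))  # 2.0
```

A real system recomputes this ratio on every frame and applies it around the midpoint between the two fingers, so the image appears pinned under the fingertips.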

It is easy to imagine how photographers, graphic designers or architects - professionals who handle large amounts of visual material and often work in teams - would welcome this way of operating a computer. But the technology is already being applied in more general settings as well, where anyone, without training, can reach out during a brainstorming session and move or annotate objects and plans.

Pixels of fast perception

Jeff Han, a computer science researcher at New York University and founder of Perceptive Pixel in New York City, is at the forefront of multi-touch display technology. A flat screen measuring almost one meter by two and a half meters greets visitors in the lobby of the company's building. Han approaches the electronic wall and conjures up a whole world of images with just the touch of his fingers. He can run more than ten video feeds at once, and no toolbar is in sight. When Han wants to view other files, he double-taps the monitor to bring up charts or menus that can likewise be operated by touch.

Some early adopters have already purchased complete systems, among them intelligence agencies, which use them in situation rooms to compare and arrange surveillance images according to their geographic locations. CNN news anchors used a large Perceptive Pixel system to brilliantly display the results of the presidential election in all 50 U.S. states. Standing in front of the screen, the presenters enlarged maps of individual states, and even counties, and shrank them again with dramatic sweeps of their fingers across the map. Han predicts that the technology will eventually find a place in every field that relies heavily on graphics, such as energy trading or medical imaging.

According to Bill Buxton, a principal researcher at Microsoft Research, the initial work on multi-touch interfaces began as early as the 1980s. But around 2000, Han, then at New York University, set out to overcome one of the biggest obstacles holding the technology back: achieving high-resolution sensing of fingertips. The solution required innovations in both hardware and software.

Perhaps the most fundamental innovation was exploiting an optical effect known as frustrated total internal reflection (FTIR), which is also used in fingerprint-identification equipment [see "Technical Matters," Scientific American Israel, October-November 2004]. Han, who describes himself as "a very tactile person," became aware of the effect one day while looking at a glass full of water. He noticed how clearly his fingerprint stood out on the glass when he viewed it through the water at a sharp angle. He envisioned an electronic system that could optically track fingertips placed anywhere on a transparent computer monitor. Thus began his six-year immersion in multi-touch interfaces.

At first, Han considered building a very high-resolution version of the single-touch displays used in vending machines and information kiosks. Those monitors typically sense the electrical capacitance created when a finger touches predetermined points on the screen. But tracking a finger moving arbitrarily across the screen would have required a dense tangle of wiring behind it, which would also have limited its usefulness. In the end, Han developed a flat panel of transparent plastic that acts as a waveguide. Light-emitting diodes (LEDs) arranged around the edges of the panel shine infrared light into it. The light travels through the panel, reflecting off its inner surfaces much as light travels along an optical fiber. As long as the panel is untouched, no light leaks out. But when someone places a finger on one face of the panel, some of the light striking that spot is scattered, crosses the panel and exits through the opposite face. Cameras behind the display sense this leaking light (the result of frustrating the total internal reflection) and identify where the contact took place. The cameras can track light leaking from many points at once.
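The camera side of this design reduces to a classic image-processing task: finding the bright spots that leaking light paints on the sensor. A toy sketch, assuming a grayscale frame stored as a list of rows, using thresholding and flood-fill connected components (an illustration of the principle, not Perceptive Pixel's actual code):

```python
def find_touches(frame, threshold=128):
    """Locate bright FTIR spots in a grayscale frame.

    Pixels above `threshold` count as lit; each connected group of
    lit pixels is one fingertip, reported by its centroid (x, y).
    """
    h, w = len(frame), len(frame[0])
    seen = [[False] * w for _ in range(h)]
    touches = []
    for y in range(h):
        for x in range(w):
            if frame[y][x] > threshold and not seen[y][x]:
                # Flood-fill this blob, collecting its pixels.
                stack, blob = [(x, y)], []
                seen[y][x] = True
                while stack:
                    cx, cy = stack.pop()
                    blob.append((cx, cy))
                    for nx, ny in ((cx + 1, cy), (cx - 1, cy),
                                   (cx, cy + 1), (cx, cy - 1)):
                        if (0 <= nx < w and 0 <= ny < h
                                and frame[ny][nx] > threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((nx, ny))
                # The blob's centroid is the reported touch point.
                touches.append((sum(p[0] for p in blob) / len(blob),
                                sum(p[1] for p in blob) / len(blob)))
    return touches

frame = [[0] * 8 for _ in range(6)]
for x, y in [(2, 2), (2, 3), (3, 2), (3, 3), (6, 4)]:
    frame[y][x] = 200                   # two separate bright spots
print(find_touches(frame))              # [(2.5, 2.5), (6.0, 4.0)]
```

Because each blob is detected independently, the same pass naturally reports ten or more simultaneous fingertips.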

Han soon discovered that the plastic panel could also double as a diffuser display: a projector behind the panel, connected to a computer, can project images that pass through the panel and appear on its front face. The display thus serves both as visual output and as a means of touch input on the very same images.

Sensing the exact positions of the fingers was one challenge. Creating software routines capable of tracking finger movements and converting them into instructions for manipulating images on the monitor was even harder. The six software developers working with Han first had to write what amounted to a powerful graphics engine, partly to give the display a short response time and reduce the "ghost image" that appears when fingers quickly drag objects across the screen. They also had to cope with the unpredictable FTIR light output produced by fingertips moving in random directions.

Deep in the architecture of every computer operating system lies the assumption that input will come from the user via a keyboard or a mouse. Keyboards are unambiguous: "K" always means "K." Mouse movement is represented by Cartesian coordinates, X and Y values on a two-dimensional grid. Such methods of representing input belong to a general domain known as the graphical user interface, or GUI. Han's multi-touch display generates ten or more sources of X and Y coordinates simultaneously. "Traditional GUIs are not built for this kind of concurrency," Han says. Today's operating systems, Windows, Macintosh and Linux among them, are so dependent on the single mouse cursor that "we had to break down a lot of plumbing to create a new multi-touch graphical framework," Han says.

In the course of all this work, Han discovered that pressure sensing could also be achieved by spreading over the plastic panel a thin polymer layer whose surface is indented with microscopic grooves. When the user presses harder or softer at some point, the polymer flexes slightly, the fingerprint's contact area grows or shrinks, and the spot of scattered light brightens or dims, which the camera can detect. A user who presses long and hard on an object displayed on the monitor can slide it beneath an adjacent object.
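In software, this pressure scheme reduces to reading how bright the scattered-light spot is and mapping that brightness onto a pressure scale. A toy calibration sketch (the calibration intensities are made-up assumptions, not measured values from Han's system):

```python
def pressure_from_spot(mean_intensity, light_cal=60.0, hard_cal=220.0):
    """Map the brightness of an FTIR spot to a 0..1 pressure estimate.

    `light_cal` and `hard_cal` are assumed calibration intensities for
    the lightest and firmest touches: the harder the press, the flatter
    the grooved polymer sits against the panel and the brighter the spot.
    """
    span = hard_cal - light_cal
    p = (mean_intensity - light_cal) / span
    return max(0.0, min(1.0, p))          # clamp into [0, 1]

print(pressure_from_spot(60.0))    # 0.0  (feather touch)
print(pressure_from_spot(140.0))   # 0.5  (moderate press)
print(pressure_from_spot(255.0))   # 1.0  (hard press, clamped)
```

A gesture such as "press long and hard to slide an object underneath its neighbor" then becomes a simple threshold on this value held over time.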

Han's Perceptive Pixel team, founded in 2006, put all these elements together and demonstrated the system to an enthusiastic audience at that year's TED (Technology, Entertainment, Design) conference. Orders for the system have been multiplying ever since. Perceptive Pixel does not disclose its prices.

Microsoft is scratching the surface

While Han was perfecting his invention, engineers elsewhere were pursuing the same goals by other means. Software giant Microsoft now offers a smaller multi-touch computer called Surface and is trying to brand this category of hardware as "surface computers." The initiative began in 2001, when Steven Bathiche of the hardware division and Andy Wilson of Microsoft Research began developing an interactive tabletop device capable of recognizing physical objects placed on it. The two inventors imagined the device serving as an electronic pinball machine, a video player or a photo browser.

After more than 85 prototypes, the two arrived at a table with a clear plastic top and a projector mounted in its base. The projector displays images on the 30-inch horizontal screen. Infrared LEDs also illuminate the surface from below; their light scatters off fingertips or objects on the other side, letting the device recognize commands from human fingers. The processing is done by a computer running the Windows Vista operating system.

Microsoft is marketing Surface tables to four partners in the leisure, retail and entertainment industries that it believes are most likely to use the technology. For example, Starwood's Sheraton hotel chain will try installing the tabletop computers in its hotel lobbies, where guests can select and listen to music, send digital photos home, or order meals and drinks. Customers at T-Mobile stores in the U.S. will be able to compare cell-phone models simply by placing them on top of the surface; domino-like tags printed in black on the bottoms of the phones will prompt the system to display prices, features and usage plans. Additional Microsoft software lets a digital camera with wireless connectivity be placed on the surface so that its photos download to the computer without a cable.

Prices of first-generation Surface systems range from $5,000 to $10,000. As with most electronic products, the company expects prices to fall as production volumes grow. According to Microsoft, Surface computers will be affordably priced for private consumers within three to five years.

Mitsubishi is also joining

Technology developers may be interested in the DiamondTouch table from Circle Twelve, a Framingham, Mass.-based startup recently spun off from Mitsubishi Electric Research Laboratories. The table, developed at Mitsubishi, is designed to let outside developers write application software as they see fit. Several dozen tables are already in the hands of academic researchers and business customers.

The purpose of DiamondTouch is to "support collaborative teamwork in small groups," says Adam Bogue, Mitsubishi's vice president of marketing. "Several people can work with one another, and the system knows how to recognize each of them." People sit in chairs around the table, which is connected to a computer beneath it. When one of them touches the table's surface, an array of antennas embedded in the tabletop sends a minute amount of radio-frequency energy through that person's body and through the chair he or she is sitting on to a receiver attached to the computer. The method is known as capacitive coupling. Alternatively, a special mat on the floor can close the circuit. The coupled antennas pinpoint where the person touched the surface.
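This per-user sensing can be pictured as a lookup: each row and column antenna carries a distinguishable signal, and the receiver wired to each person's chair reports which of those signals were coupled through that sitter's body. A simplified sketch, assuming one touch per person (a toy model of the idea, not Mitsubishi's actual software):

```python
def identify_touches(receiver_readings):
    """Resolve DiamondTouch-style readings into (user, x, y) touches.

    `receiver_readings` maps each user's name to two sets: the row
    antennas and the column antennas whose signals were capacitively
    coupled through that user's body. For a single touch, the one
    active row and one active column intersect at the touch point.
    """
    touches = []
    for user, (rows, cols) in receiver_readings.items():
        if rows and cols:
            touches.append((user, min(cols), min(rows)))  # (x, y)
    return touches

readings = {
    "alice": ({12}, {40}),    # alice's receiver heard row 12, column 40
    "bob":   ({3}, {7}),
    "carol": (set(), set()),  # carol is not touching the table
}
print(sorted(identify_touches(readings)))
# [('alice', 40, 12), ('bob', 7, 3)]
```

Because every touch arrives already labeled with the toucher's identity, the table can attribute each edit to a specific person, which ordinary camera-based systems cannot do.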

Although this arrangement may sound restrictive, the device can track the input coming from each person and give control to whoever touches the surface first. In that case it ignores any other touch, identified through the assigned seating, until the first user completes his or her input. The system can also remember who made which changes to images, such as renderings of buildings.
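That first-touch-wins policy can be sketched as a tiny floor-control lock (a toy model of the behavior described above, not DiamondTouch's actual code):

```python
class FloorControl:
    """Give the surface to whoever touches first; ignore the rest."""

    def __init__(self):
        self.owner = None

    def touch(self, user):
        """Return True if this user's touch is accepted."""
        if self.owner is None:
            self.owner = user             # first toucher takes control
        return self.owner == user         # everyone else is ignored

    def release(self, user):
        """The owner lifts his or her finger; the surface is free again."""
        if self.owner == user:
            self.owner = None

fc = FloorControl()
print(fc.touch("alice"))   # True  - alice got the floor
print(fc.touch("bob"))     # False - bob is ignored for now
fc.release("alice")
print(fc.touch("bob"))     # True  - bob can act once alice is done
```

Because DiamondTouch knows which seat each touch came from, this kind of arbitration is trivial to enforce; applications are also free to allow fully concurrent input instead.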

After trying these tables, the international engineering firm Parsons Brinckerhoff, headquartered in New York City, plans to purchase more. "During a large project we hold thousands of meetings," says Timothy Case, a regional director of the company's visualization department. "We can place such tables in many locations, and everyone will be able to look at exactly the same thing."

Both the DiamondTouch and Perceptive Pixel systems offer virtual keyboards projected onto the display so that users can type text. But users are unlikely to prefer these dynamic systems for that everyday chore. The great advantage of multi-touch monitors is that they let many people work together on a complex task. It is hard now to recall the sense of freedom the mouse gave us when it appeared some 25 years ago and released us from the grip of the keyboard. Soon, perhaps, the new touch interfaces will rid us of the familiar mouse. "It's very rare that you come across a completely new user interface," Han says. "We are only at the beginning of this story."

Key concepts

Multi-touch displays do not respond solely to the presence of a single finger, but are able to follow instructions that come simultaneously from many fingers.

A screen the size of a wall, developed by Perceptive Pixel, is able to respond to ten fingers or the palms of several people. Microsoft and Mitsubishi offer smaller specialized systems for hotels, shops and engineering and design companies.

Operating computers this way may one day free us from the mouse and become our main computer interface, just as the mouse in its time freed us from the keyboard.

How it works - following the fingers

The most advanced multi-touch displays respond to the movement and pressure of many fingers. In Perceptive Pixel's design (near right), images are projected through a plastic panel onto a surface facing the viewer. When fingers (or other objects, such as a stylus) touch the surface, infrared light emitted by LEDs inside the plastic panel scatters off them and is picked up by sensors. Software decodes the data as finger movements. Tapping the display calls up a menu of commands when needed.

To create a signal, the LEDs shine into the plastic panel. The light reflects back and forth between the panel's faces and does not escape. But when a finger is placed on the surface (top), light scatters off it toward the sensors. In addition, when the pressure-sensitive coating is pressed firmly or gently, it flexes, and the spot of light scattered by the fingertip brightens or dims. A computer interprets this as stronger or lighter pressure.

Looking inside - touch table

A projector inside Microsoft's multi-touch table (called Surface) sends an image through a panel of acrylic plastic. LEDs shine near-infrared light upward; it reflects off objects or fingers and is picked up by infrared cameras. A computer monitors these returns to track finger movements.

About the author

Stuart P. Brown is a technology and engineering reporter based in Irvington, New York. He wrote about 3-D displays in the October 2007 issue of Scientific American Israel.


3 comments

  1. I don't think a screen like this is worth $5,000, especially at today's quality; it doesn't offer much more than a good mouse. It just looks nicer. Is that worth so much money?

  2. Correction: even two touches count as "multi-touch" on touch surfaces, because it is a different level of interface (an additional dimension, if you will) compared with what has been standard for years until now (in PDAs, for example): a single touch.

  3. The iPhone does not recognize arbitrarily many touches, only two. And even then it does not distinguish between (X1,Y1),(X2,Y2) and (X1,Y2),(X2,Y1).
