Comprehensive coverage

Supersensory perception / Gershon Dublon and Joseph A. Paradiso

How a world full of sensors will change the way we see, hear, think and live

Input from multiple sensors. Illustration: Shutterstock

Here is a nice experiment: try to count how many electronic sensors surround you right now. Your computer has cameras and microphones. Your smartphone has GPS receivers and gyroscopes. Your fitness tracker has an accelerometer. If you work in a modern office building, or live in a newly built house, you are probably surrounded at all times by sensors that measure motion, temperature and humidity.


Sensors have become common because they develop, more or less, according to Moore's Law: they keep getting smaller, cheaper and more powerful. A few decades ago, the gyroscopes and accelerometers now found in every smartphone were bulky, expensive and limited to applications such as spacecraft and missile guidance. At the same time, network connectivity has increased enormously. Thanks to progress in microelectronics, as well as in managing energy use and the electromagnetic spectrum, a chip that costs less than a dollar can now link an array of sensors to low-power wireless communication networks.

The amount of information that such an extensive network of sensors produces is enormous, almost unimaginable. Nevertheless, most of this data is invisible to us. Today, sensor data usually accumulates in isolated repositories, accessible to only one device and used by only one application, such as controlling your home thermostat or tracking the number of steps you take each day.

If we eliminate this isolation of the repositories, the world of computing and communication will change in fundamental ways. Once we have protocols that allow devices and applications to exchange information (and several candidate protocols already exist for the job), any application will be able to use the sensors on any device. When that happens, we will enter the era of ubiquitous computing predicted long ago, including in Mark Weiser's article published in Scientific American about a quarter of a century ago [see "The Computer for the 21st Century"; September 1991].
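
To make the idea concrete, here is a minimal, hypothetical sketch in Python of what "any application can use any sensor" might look like at the lowest level: a node broadcasts each reading as a small, self-describing message that any listener on the network can parse. The port, field names and identifiers are invented for illustration; real candidate protocols define their own formats.

```python
import json
import socket
import time

# Illustrative only: the port, field names and sensor IDs are assumptions,
# not part of any real standard.
BROADCAST_ADDR = ("255.255.255.255", 50123)

def publish_reading(sock, sensor_id, kind, value, unit):
    """Broadcast one self-describing sensor reading as JSON."""
    message = {
        "sensor_id": sensor_id,      # e.g. "office-3f-thermo-07"
        "kind": kind,                # "temperature", "humidity", "motion", ...
        "value": value,
        "unit": unit,
        "timestamp": time.time(),
    }
    sock.sendto(json.dumps(message).encode("utf-8"), BROADCAST_ADDR)

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    publish_reading(sock, "office-3f-thermo-07", "temperature", 22.4, "C")
```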

In our opinion, the transition to ubiquitous computing will not be gradual. We believe it will be a genuine transformation, similar to the emergence of the Internet. The first buds of this change can already be seen in smartphone apps such as Google Maps and Twitter, and in the huge ecosystems that have grown around them. But innovation will break through the dam when data from everyday sensors is made freely available to every device. The giant technology companies of the future will be the aggregators of this sensor data, turning the information from the sensors that surround us into the next generation of applications.

Just as it was difficult to predict, thirty years ago, how the Internet would change the world, it is difficult to predict today how pervasive computing and sensor data will affect our daily lives. Fortunately, media theory can give us some direction. In the 1960s, the media theorist Marshall McLuhan described how electronic media, especially television, were becoming an extension of the human nervous system. If only McLuhan were alive today! When there are sensors everywhere, and when the information they collect can be woven into human perception in new ways, where do our senses end? What will the word "presence" mean when we can move our perception freely through time, space and scale?

Illustration of sensor data

We perceive the world through all our senses, but we consume most digital data through the small two-dimensional screens of mobile devices. So it is no wonder we are stuck in an information bottleneck. As the amount of information about the world increases, we are less and less able to stay present in that world. Nevertheless, the abundance of information also has an advantage, if only we can learn to use it properly. To this end, our group at the Media Lab of the Massachusetts Institute of Technology (MIT) has been working for years on ways to translate the information collected by sensor networks into the language of human perception.

Just as browsers like Netscape gave us access to the vast amounts of information on the Internet, future browser software will allow us to make sense of the flood of sensor data coming at us. So far, the best tool for building such a browser is a computer game engine: software that allows millions of players to interact with one another in rich, constantly changing 3-D environments. Using the Unity 3D game engine, we developed an application called DoppelLab, which collects data streams from sensors scattered around an environment and translates the information into a graphic representation draped over a computerized architectural model of a building. In the Media Lab, for example, DoppelLab gathers data from sensors in the building and displays the results on a computer screen in real time. The user can see the temperature in each room, the amount of foot traffic in each area, and even the position of the ball on our smart table-tennis table.
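
The underlying idea can be sketched in a few lines: attach the latest readings to the rooms of a building model and derive a display property, such as color, from each value. This is a toy illustration, not DoppelLab's actual code; the room names, readings and color scale are invented.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Room:
    name: str
    temperature_c: Optional[float] = None

def temperature_to_rgb(temp_c, cold=16.0, hot=30.0):
    """Map a temperature onto a blue-to-red colour for the 3-D model."""
    t = max(0.0, min(1.0, (temp_c - cold) / (hot - cold)))
    return (int(255 * t), 0, int(255 * (1 - t)))  # (R, G, B)

rooms = {"E14-348": Room("E14-348"), "E14-240": Room("E14-240")}
latest_readings = {"E14-348": 27.5, "E14-240": 20.1}  # from the sensor network

for room_id, temp in latest_readings.items():
    rooms[room_id].temperature_c = temp
    print(room_id, temp, "C ->", temperature_to_rgb(temp))
```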

The DoppelLab application can do much more than display data visually. It also collects sounds picked up by microphones throughout the building and uses them to create virtual sound environments. To protect privacy, the audio streams are mixed up on the original sensor device before transmission. As a result, speech cannot be understood, but the acoustic atmosphere of the space, and the sonic character of the activity within it, are preserved. The application also lets you experience previously recorded information: you can observe a given moment in time from different angles, or play the data back at high speed to examine it on different time scales and discover hidden cycles in the life of a building.
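
One way to achieve that effect is to shuffle short audio frames at the node, destroying intelligible words while preserving the overall level and texture of the room. The sketch below illustrates that general technique; it is an assumption for illustration, not the authors' actual algorithm.

```python
import numpy as np

def scramble_audio(samples: np.ndarray, sample_rate: int, frame_ms: int = 40) -> np.ndarray:
    """Randomly reorder short frames so speech becomes unintelligible but the
    sound level and character of the space survive."""
    frame_len = int(sample_rate * frame_ms / 1000)
    usable = len(samples) - (len(samples) % frame_len)
    frames = samples[:usable].reshape(-1, frame_len)
    shuffled = frames[np.random.default_rng().permutation(len(frames))]
    return shuffled.reshape(-1)

# Example: one second of fake microphone input at 16 kHz
audio = np.random.uniform(-1, 1, 16_000).astype(np.float32)
safe_to_send = scramble_audio(audio, sample_rate=16_000)
```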

Sensor browsers like DoppelLab have immediate commercial applications, such as virtual dashboards for managing large, sensor-equipped buildings. In the past, a building manager who wanted to locate a problem in the heating system had to comb through spreadsheets and graphs, catalog unusual temperature readings and look for patterns pointing to the source of the problem. With DoppelLab, the manager can see the actual and desired temperature in every room at once, and quickly locate problems affecting several rooms or several floors. Planners, designers and the building's tenants can also see how the infrastructure is used: where people congregate and when, and how changes to the building affect the way people behave and work in it.
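
The core of such a dashboard is a simple comparison of measured and desired temperatures per room, flagging rooms that stray too far from their setpoints. The data and threshold below are invented for illustration.

```python
SETPOINTS = {"E14-348": 22.0, "E14-240": 22.0, "E14-633": 21.0}  # desired, deg C
MEASURED  = {"E14-348": 27.5, "E14-240": 21.8, "E14-633": 16.2}  # actual, deg C

def flag_problem_rooms(setpoints, measured, tolerance_c=2.0):
    """Return the deviation for every room whose temperature strays from its setpoint."""
    return {
        room: round(measured[room] - setpoints[room], 1)
        for room in setpoints
        if abs(measured[room] - setpoints[room]) > tolerance_c
    }

print(flag_problem_rooms(SETPOINTS, MEASURED))
# {'E14-348': 5.5, 'E14-633': -4.8}
```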

However, we did not create DoppelLab with its commercial power in mind, but rather to explore a larger and more intriguing topic: the impact of pervasive computing on the very meaning of presence.

Redefining presence

As soon as computers and sensors allow us to travel virtually through remote environments and "be" there in real time, the meanings of "here" and "now" begin to change. We intend to investigate this shift in the perception of presence through DoppelLab and through a project called the Living Observatory, whose aim is to let both virtual and physical visitors enter a changing natural environment in a wetland area called Tidmarsh Farms.

Since 2010, private and public environmental organizations have been working to restore a 1,000-dunam (roughly 250-acre) area in southern Massachusetts that was once used to grow cranberries, turning it into a coastal-wetland nature reserve. One of the owners of the wetlands is our partner, Gloriana Davenport. Having built her career at the Media Lab around the future of documentary film, Davenport was captivated by the idea of an environment so rich in sensors that it creates its own "documentary." Together with her, we are developing sensor networks that record ecological processes and allow people to experience the data they produce. We have begun to populate the area with hundreds of wireless sensors that measure temperature, humidity in the air and soil, light, movement, wind, sound, sap flow in trees and, in some cases, levels of various chemicals.

Efficient power management will allow these sensors to operate for years. Some of them will be equipped with solar cells, giving them enough power to stream audio continuously: the sound of the wind, the chirping of nearby birds, raindrops falling on the leaves. Our earth-science colleagues from the University of Massachusetts Amherst are placing sophisticated ecological sensors at Tidmarsh, including fiber-optic underwater temperature probes and devices that measure oxygen levels in the water. All of these data will flow into a database on our server, and users will be able to query and explore them with a variety of applications.
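
A sketch of what "query and explore" might mean in practice: an application asks the database for a particular kind of reading over a time window and aggregates it per sensor. The schema, table name and values below are assumptions for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE readings
                (sensor_id TEXT, kind TEXT, value REAL, ts REAL)""")
conn.execute("INSERT INTO readings VALUES ('node-17', 'soil_moisture', 0.31, 1700000000)")

# e.g. "average soil moisture per sensor over a chosen time window"
rows = conn.execute("""
    SELECT sensor_id, AVG(value)
    FROM readings
    WHERE kind = 'soil_moisture' AND ts BETWEEN ? AND ?
    GROUP BY sensor_id
""", (1699990000, 1700010000)).fetchall()
print(rows)  # [('node-17', 0.31)]
```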

Some of these applications will help ecologists visualize the environmental data collected in the wetland. Others will be aimed at the general public. For example, we are developing a DoppelLab-like browser that will allow a virtual visit to Tidmarsh from any computer connected to the Internet. In this browser, the backdrop will be a digital reconstruction of the marsh's topography, covered with virtual trees and vegetation. The game engine will add sounds and data collected by the sensors in the field. The sounds from the array of microphones will be mixed and played at relative intensities that depend on the visitor's virtual location: you will be able to soar above the marsh and hear everything at once, listen closely to what is happening in a small area, or dive underwater and hear what the hydrophones picked up. A virtual wind, driven by data collected on site in real time, will blow through the virtual trees.
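
Location-dependent mixing of this kind can be sketched as simple inverse-distance weighting: each microphone's contribution grows as the virtual visitor approaches it. The positions and rolloff below are invented; a real game engine would also handle occlusion, reverberation and spatialization.

```python
import math

MICROPHONES = {
    "mic-north-meadow": (10.0, 40.0, 1.5),
    "mic-stream-bank":  (55.0, 12.0, 0.5),
    "hydrophone-pool":  (60.0, 15.0, -0.8),
}

def mix_gains(listener_xyz, mics=MICROPHONES, rolloff=1.0, min_dist=1.0):
    """Return a relative gain per microphone: louder the closer the listener is."""
    gains = {}
    for name, pos in mics.items():
        d = max(min_dist, math.dist(listener_xyz, pos))
        gains[name] = 1.0 / (d ** rolloff)
    total = sum(gains.values())
    return {name: g / total for name, g in gains.items()}  # normalised mix

print(mix_gains((50.0, 10.0, 1.7)))  # hovering near the stream bank
```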

The Living Observatory is more a demonstration project than a practical prototype, but it is easy to imagine similar real-world applications. Farmers could use such a system to monitor their plots and track the movement of water, insecticides, fertilizers and animals across their land. City authorities could use it to monitor the progress of storms and floods, locate people in danger and send help. With a little more imagination, you can see this technology entering everyday life as well. Many of us already look up restaurants on sites like Yelp before going out; in the future we will be able to check the atmosphere in the restaurant (is it crowded and noisy right now?) before we leave home.

Ultimately, such remote presence may be the closest thing we have to teleportation. We sometimes use DoppelLab to connect to the Media Lab while traveling, because hearing its noises and seeing its activity makes us feel closer to home. In the same way, future travelers will be able to "project" themselves home to spend time with family while on the road.

Heightening the senses

We can say with considerable confidence that wearable devices will be the next wave in computing. In our view, this is an opportunity to create much more natural ways of interacting with sensor data. Wearable devices could become, in effect, prosthetics for the senses.

For a long time, researchers have been experimenting with wearable sensors and actuators as sensory aids. They map electrical signals from sensors onto a person's natural senses in a process known as sensory substitution. Recent research suggests that neural plasticity, the brain's ability to physically adapt itself to new stimuli, allows perceptual-level awareness of "supersensory" stimuli delivered through the ordinary sensory channels. Even so, there is still a very large gap between the data produced by sensor networks and human sensory experience.
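
At its simplest, sensory substitution is a mapping from a sensor value onto a stimulus the body already understands, such as vibration. The sketch below shows only that mapping step; the value ranges and the idea of driving a vibration motor's duty cycle are assumptions, since any real device would expose its own driver.

```python
def reading_to_vibration(value, lo, hi, max_duty=1.0):
    """Linearly map a sensor value in [lo, hi] onto a motor duty cycle in [0, max_duty]."""
    clipped = min(max(value, lo), hi)
    return max_duty * (clipped - lo) / (hi - lo)

# e.g. encode a magnetic-field reading (microtesla) as vibration strength
duty = reading_to_vibration(value=38.0, lo=20.0, hi=70.0)
print(f"drive the motor at {duty:.0%} duty cycle")
```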

In our opinion, one of the keys to unlocking the potential of artificial sensory organs is a better understanding of the wearer's state of attention. Today's most advanced wearable devices, such as Google Glass, tend to act as third-party agents that offer the wearer information relevant to the context (such as a recommendation for a movie when the wearer walks past a theater). But these recommendations appear out of nowhere, so to speak, and are often distracting to the point of annoyance, in a way that natural sensory systems never are. Natural systems let us shift focus dynamically: we attend to external stimuli when they demand attention and stay focused on the task at hand when they do not. Our team is running experiments to test whether wearable computers can tap into the brain's inherent ability to focus on tasks while maintaining a pre-attentive connection with the environment.

Our first experiment will test whether a wearable device can determine, from among several sound sources, which one the user is listening to. We want to use this information to let a person wearing a suitable device focus on the microphones and hydrophones at Tidmarsh the way they would on natural sound sources. Imagine focusing on a distant island in a lake and slowly beginning to notice its faraway sounds, as if your ears were sensitive enough to bridge the physical distance. Imagine walking along a stream and hearing sounds from under the water, or looking toward the trees and hearing birdsong from their tops. This approach to delivering digital information could mark the beginning of a flexible connection between our sensory systems and data from sensor networks. At some point, sensory or neural implants will surely arrive to realize that connection fully. We hope that such devices, and the information they provide, will integrate into existing sensory processing rather than crowd it out.
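
One naive way to sketch the "look at it to hear it" idea is to boost the remote microphone whose bearing best matches the wearer's gaze. The geometry and names below are invented; a real system would fuse head tracking, eye tracking and acoustic cues rather than rely on bearing alone.

```python
import math

SOURCES = {
    "island-mics":   (120.0, 300.0),
    "stream-hydros": (35.0, -20.0),
    "treetop-mics":  (-60.0, 150.0),
}

def attended_source(wearer_xy, gaze_heading_deg, sources=SOURCES):
    """Pick the source whose bearing from the wearer is closest to the gaze direction."""
    best, best_err = None, 360.0
    for name, (x, y) in sources.items():
        bearing = math.degrees(math.atan2(y - wearer_xy[1], x - wearer_xy[0]))
        err = abs((bearing - gaze_heading_deg + 180) % 360 - 180)  # wrap to [-180, 180]
        if err < best_err:
            best, best_err = name, err
    return best

print(attended_source((0.0, 0.0), gaze_heading_deg=70.0))  # 'island-mics'
```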

Dream or nightmare?

For many people, and we are among them, the world just described also has the potential to be terrifying. Redefining presence will change our relationship with the environment and with other people. Most worrying of all, ubiquitous computing has enormous implications for privacy. Even so, we believe there are many ways to build protective mechanisms into the technology.

Ten years ago, in one of our group's projects, Mat Laibowitz placed 40 cameras and sensors throughout the laboratory. He wired a large, visible switch to each device so that anyone could disable the sensor easily and obviously. Today there are too many cameras, microphones and other sensors around us to disable them all, even if every one of them had a kill switch. We will therefore have to think of other solutions.

One approach is to make the sensors responsive to a person's context and preferences. Nan-Wei Gong explored such an idea a few years ago, when she was a member of our group. She built a special electronic fob that broadcast a wireless beacon announcing its owner's privacy preferences to the sensors around it. Each fob had a large button labeled "No"; pressing it guaranteed a period of complete privacy, during which all sensors in the vicinity were blocked and transmitted no data about the user.
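
On the sensor side, honoring such a beacon can be as simple as withholding data while a recent "No" request is in force. The beacon format, field names and blocking window below are illustrative assumptions.

```python
import time

PRIVACY_WINDOW_S = 600      # how long one "No" press blocks transmission
last_no_beacon_at = None    # timestamp of the most recent "No" beacon heard

def on_beacon_received(beacon: dict) -> None:
    """Called whenever a nearby fob broadcasts its owner's preferences."""
    global last_no_beacon_at
    if beacon.get("preference") == "no_sensing":
        last_no_beacon_at = time.time()

def may_transmit(now=None) -> bool:
    """The node sends sensor data only when no privacy request is in force."""
    now = now or time.time()
    return last_no_beacon_at is None or now - last_no_beacon_at > PRIVACY_WINDOW_S

on_beacon_received({"owner": "anonymous", "preference": "no_sensing"})
print(may_transmit())  # False: stay silent for the next ten minutes
```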

Any solution must ensure that all the sensors in a person's environment receive such requests and honor them. Designing such a protocol raises technical and legal challenges, but research groups around the world are already exploring different approaches to the problem. For example, the law might give a person ownership of, or control over, the data produced in their environment; they could then choose to encrypt the data or restrict its passage to the network. One of the goals of DoppelLab and the Living Observatory is to examine these privacy implications in the safe environment of an open research lab, so that when failures and harmful consequences emerge, we can find solutions to them. As the recent revelations of Edward Snowden, a former contractor for the US National Security Agency, have demonstrated, transparency is essential, and threats to privacy must be addressed through legislation, in an open forum. Beyond that, we believe that developing hardware and software from the bottom up, as open source, is the best defense against systematic invasion of privacy.

In the meantime, we can begin to glimpse the new experiences awaiting us in a sensor-driven world. The possibilities are exciting. We believe it is possible to develop technologies that integrate into both the environment and our bodies. Such tools will lift our noses from the smartphone screen and return our attention to the environment. They will make us more present, not less, in the world around us.

__________________________________________________________________________________________________________________________________________________________________

In brief

The modern world is full of networked electronic sensors, but most of the information they produce is hidden from us, stored for the use of specific applications. If we eliminate these barriers and allow every networked device to use this data, the era of ubiquitous computing will truly begin.

It is impossible to know exactly how ubiquitous computing will change our lives, but it is likely that electronic sensors embedded in the environment will come to serve as extensions of the human nervous system: wearable computers will, in effect, become prosthetics for the senses.

Sensors and computers will allow us to travel, virtually, in distant environments and "be there" in real time. This will have far-reaching consequences for the concepts of privacy and physical presence.

About the authors

Gershon Dublon is a PhD student at the Media Lab of the Massachusetts Institute of Technology (MIT), where he develops new tools for investigating and understanding sensor data.

Joseph A. Paradiso is a professor of media arts and sciences at the MIT Media Lab. He directs the Responsive Environments group there, which investigates how sensor networks augment and mediate human experience, interaction and perception.

How It Works

The reality browser

The authors' sensor browser software, called DoppelLab, collects data from sensors scattered throughout the MIT Media Lab and displays it visually on a transparent model of the structure. The browser is automatically updated in real time, so users can connect from anywhere and see what is happening at any given moment, in any room. Temperature, movement, sounds and other features are displayed using icons.

More on the subject

Rainbow's End. Vernor Vinge. Tor Books, 2006.

Metaphor and Manifestation: Cross Reality with Ubiquitous Sensor/Actuator Networks. Joshua Lifton et al. in IEEE Pervasive Computing, Vol. 8, no. 3, pages 24-33; July-September 2009.

Demonstration of the sensor browser

The article was published with the permission of Scientific American Israel
