
A Robot in Every Home

The leader of the personal computer revolution predicts: The next hot field will be robotics

By Bill Gates, published in the April-May 2007 issue of Scientific American

Honda's Asimo robot demonstrates a connection between a human brain and a machine

Imagine yourself present at the birth of a new industry. It is an industry based on groundbreaking new technologies, in which a handful of established corporations sell devices adapted for business use, while rapidly growing start-ups produce innovative toys, hobbyist gadgets and other interesting niche products. But it is also highly fragmented, with few common platforms or standards. Projects are complex, progress is slow and practical applications are relatively rare. In fact, despite all the excitement and apparent promise, no one can say with confidence when, or even whether, this industry will reach critical mass. If it does, though, it may well change the world.

The previous paragraph could certainly describe the computer industry of the mid-1970s, around the time Paul Allen and I founded Microsoft. In those days, large and expensive mainframes handled the back-office computing of big companies, government agencies and other institutions. Researchers at leading universities and commercial laboratories were creating the basic building blocks that would make the information age possible. Intel had just introduced the 8080 microprocessor, and Atari was selling the popular electronic game Pong. At home, computer hobbyists were trying to figure out what this new technology was good for.

But I was actually referring to something much more recent: the emergence of the robotics industry, which is developing in much the same way the computer business did 30 years ago. Think of the robots used on automobile assembly lines as the equivalent of yesterday's mainframes. The industry's niche products include robotic arms that perform surgery, military robots used in Iraq and Afghanistan to dispose of roadside bombs, and household robots that vacuum the floor. Electronics companies make robotic toys that mimic people, dogs or dinosaurs, and hobbyists are eager to get their hands on the latest version of the Lego robotics system.

Meanwhile, some of the world's best minds are trying to solve the toughest problems in robotics, such as visual recognition, navigation and machine learning, and they are succeeding. In the 2004 Grand Challenge race of the U.S. Defense Advanced Research Projects Agency (DARPA), whose goal was to encourage development of the first robotic vehicle able to navigate on its own along a difficult 227-kilometer course in the Mojave Desert, the leading competitor managed to travel only about 12 kilometers before breaking down. In 2005, however, five vehicles reached the finish line, and the winner did so at an average speed of 30.5 km/h. (Another interesting link between the robotics and computer industries is the source of funding: DARPA also funded the project that led to the creation of ARPANET, the precursor of the Internet.)

Furthermore, the challenges facing the robotics industry are similar to those we faced in the field of computing 30 years ago. The robotics companies do not have a standard operating system that would allow popular software to run on a variety of devices. There is almost no standard for robotic processors and other hardware, and very little of the software code used on one machine can be run on another. Whenever someone wants to build a new robot, they almost always have to start from scratch.

Despite these difficulties, when I talk to people involved in robotics - from academic researchers to entrepreneurs, hobbyists and high school students - their level of excitement and anticipation reminds me very much of the days when Paul Allen and I saw how new technologies were converging and dreamed of the day when there would be a computer on every desk and in every home. When I look at the trends now beginning to emerge, I can envision a future in which robotic devices are an integral part of our daily lives. I believe that technologies such as distributed computing, voice recognition, visual recognition and wireless broadband connectivity will open the door to a new generation of autonomous devices that let computers perform tasks for us in the physical world. We may be on the verge of a new era, in which the personal computer will get up off the desk and allow us to see, hear, touch and manipulate objects in places where we are not physically present.

From science fiction to reality

The Czech playwright Karel Čapek popularized the word "robot" in 1921, but people have dreamed of creating robot-like devices for thousands of years. In Greek and Roman mythology, the gods of metalwork built mechanical servants made of gold. Hero of Alexandria, the great engineer credited with inventing the first steam engine, designed intriguing automatons in the first century A.D., including one that was said to be able to walk. Leonardo da Vinci's 1495 sketch of a mechanical knight that could sit up and move its arms and legs is considered the first plan for a humanoid robot.

Over the past century, humanoid machines have become familiar figures in popular culture through books such as Isaac Asimov's I, Robot, movies such as Star Wars and television series such as Star Trek. The popularity of robots in fiction suggests that people are receptive to the idea that such machines will one day walk among us as helpers and even as companions. But although robots play a vital role in industries such as automobile manufacturing - where there is roughly one robot for every ten workers - the day when real robots catch up with their science-fiction counterparts is still far off.

One reason for this gap is the greater than expected difficulty of giving computers and robots the ability to sense their surroundings and react quickly and accurately. It turns out that it is very difficult to give robots abilities that we humans take for granted. For example, it is difficult for them to orient themselves among the objects in the room, respond to sounds, decipher speech and grasp objects of different size, texture or degree of fragility. A robot will have a very hard time even with a simple task like distinguishing between an open door and a window.

But researchers are beginning to discover the answers. One trend that helped them do this was the increasing availability of massive computing power. One megahertz of processing power, which cost more than $7,000 in 1970, sells for pennies today. The price of one megabit of storage has also dropped similarly. The availability of cheap computing power has allowed scientists to work on many of the difficult fundamental problems of making robots practical tools. For example, voice recognition software today is able to recognize words with considerable success. However, building machines that will understand the meaning of words in the right context is a much bigger challenge. As computational power increases, robot designers will have the processing power they need to tackle even more complex problems.

Another barrier to the development of robots has been the high cost of hardware, such as the sensors that tell a robot how far it is from an object, or the motors and servos that let it manipulate an object with both strength and delicacy. But prices are dropping fast. Laser rangefinders, used in robotics to measure distances accurately, cost about $10,000 a few years ago; today they can be purchased for about $2,000. And new, more accurate sensors based on ultrawideband radar are available for even less.

Today, robot developers can also add Global Positioning System (GPS) chips, video cameras, microphone arrays (which can distinguish between speech and background noise better than normal microphones) and a variety of other sensors at a reasonable price. The resulting improvement in capabilities, along with the strong processing power and expanded storage volume, allow today's robots to perform tasks that commercially produced machines were unable to perform until just a few years ago, such as vacuuming a room or helping to neutralize explosive devices.

A BASIC approach

In February 2004, I visited several leading universities, including Carnegie Mellon University, the Massachusetts Institute of Technology (MIT), Harvard University, Cornell University, and the University of Illinois, to talk about the important role that computers can play in solving some of the most pressing problems facing society. My goal was to help students understand how exciting and important computer science can be, in the hope that I could encourage some of them to think about a career in technology. At each university, after giving the speech, I had the opportunity to see with my own eyes some of the most interesting research projects in the institution's computer science department. Everywhere, and almost without exception, my hosts showed me at least one project that dealt with robotics.

At the same time, my colleagues at Microsoft were being approached by people in academia and at commercial robotics firms who wanted to know whether our company was doing any work in robotics that might help them with their own development efforts. We were not, so we decided to look into the field more closely. I asked Tandy Trower, a member of my strategy team and a 25-year Microsoft veteran, to go on an extended fact-finding mission and talk to people across the robotics community. What he found was widespread enthusiasm for the potential of robotics, along with an industry-wide need for tools that would make development easier. "Many think that the robotics industry is at a technological turning point, where a move to personal computer architecture makes more and more sense," Tandy wrote in the report he submitted to me at the end of his mission. "As Red Whittaker, leader of [Carnegie Mellon University's] team in DARPA's Grand Challenge race, recently said, the hardware capability is almost here; the problem now is getting the software right."

In the early days of the personal computer, we realized we needed an ingredient that would allow all the pioneering work to reach critical mass and coalesce into a real industry capable of producing truly useful products on a commercial scale. What was needed, it turned out, was Microsoft BASIC. When we created this programming language in the 1970s, we provided a common foundation that allowed programs developed for one type of hardware to run on other types as well. BASIC also made computer programming much easier, which brought more and more people into the industry. Although a great many people made vital contributions to the development of the personal computer, Microsoft BASIC was one of the key catalysts for the software and hardware innovations that made the PC revolution possible.

After reading Tandy's report, it was clear to me that the robotics industry, like the personal computer industry 30 years ago, must find the missing piece to make a similar leap forward. I therefore asked Tandy to put together a small team to work with people in the field of robotics and create a basic collection of programming tools that would allow anyone interested in robots, and who has a basic understanding of computer programming, to easily write robotic applications that will work with different types of hardware. The goal was to see if it was possible to provide the same initial, common basis for the combination of hardware and software needed for designing robots, as Microsoft's BASIC was for computer programmers.

Tandy's robotics group built on several cutting-edge technologies developed by a team led by Craig Mundie, Microsoft's chief research and strategy officer. One of these technologies helps solve one of the hardest problems facing robot designers: how to simultaneously handle the data coming in from multiple sensors and send the appropriate commands to the robot's motors, a challenge known as concurrency. A common approach is to write a single, traditional program - one long loop that first reads all the data from the sensors, then processes that input, produces output that determines the robot's behavior, and finally loops back to the beginning to start again. The drawback is obvious: if fresh sensor data arrive indicating that the robot is at the edge of a precipice, but the software is still at the start of the loop and keeps the wheels turning on the basis of the old data it has, the robot will probably go over the edge before it gets around to processing the new information.
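
To make the weakness of that single-loop design concrete, here is a minimal Python sketch of the sense-think-act loop described above. It is not taken from the article or from any Microsoft tool; read_sensors, plan and drive_motors are hypothetical stubs standing in for real hardware I/O.

```python
import time

def read_sensors():
    # Hypothetical stub: a real robot would poll rangefinders, cameras,
    # bump sensors and so on, returning the latest readings.
    return {"cliff_ahead": False}

def plan(readings):
    # Decide what to do based on the readings gathered at the top of the loop.
    return "stop" if readings["cliff_ahead"] else "forward"

def drive_motors(command):
    # Hypothetical stub: send the chosen command to the wheel controllers.
    print("motors:", command)

def control_loop():
    # The single long loop described above: sense, think, act, repeat.
    while True:
        readings = read_sensors()   # 1. read every sensor
        command = plan(readings)    # 2. compute a response
        drive_motors(command)       # 3. send output to the motors
        time.sleep(0.1)
        # Weakness: if a precipice appears while plan() or drive_motors()
        # is still running, the robot keeps acting on stale data until the
        # loop wraps around to read_sensors() again.

if __name__ == "__main__":
    control_loop()
```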

Concurrency is a fascinating challenge even outside robotics. As more and more applications are written for distributed networks of computers, programmers have had to wrestle with how to orchestrate code running on many different servers at the same time. And as single-processor machines give way to multiprocessor and multicore computers (integrated circuits with two or more processors on a single chip for improved performance), software designers will need new ways to program desktop applications and operating systems. To take full advantage of the power of processors working in parallel, the new software must deal with the problem of concurrency.

One approach to concurrency is to write multithreaded programs that allow data to travel along many paths. But as any developer who has written multithreaded code can tell you, this is one of the hardest tasks in programming. The answer that Craig's team devised is a technology known as the concurrency and coordination runtime, or CCR. The CCR is a library of software functions - sequences of code that perform specific tasks - that makes it easier to write multithreaded applications that can coordinate a number of simultaneous activities. Designed to help programmers take advantage of the power of multicore and multiprocessor systems, the CCR has also turned out to be ideal for robotics. Robot designers who build their programs on such a library can dramatically reduce the chance that their robots will run into walls because the software is so busy sending output to the wheels that it cannot read input from the sensors at the same time.
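
The CCR itself is a .NET library, and its actual API is not shown here. Purely as an illustration of the idea it embodies - letting sensing and acting run as separate, coordinated activities instead of one long loop - here is a rough Python sketch using asyncio; poll_cliff_sensor and send_motor_command are hypothetical stubs.

```python
import asyncio

latest = {"cliff_ahead": False}   # shared state, updated by the sensor task

async def poll_cliff_sensor():
    # Hypothetical stub standing in for real sensor I/O.
    return False

async def send_motor_command(command):
    # Hypothetical stub standing in for real motor output.
    print("motors:", command)

async def sensor_task():
    # Runs independently of the motor task, so fresh readings are never
    # stuck waiting behind motor output.
    while True:
        latest["cliff_ahead"] = await poll_cliff_sensor()
        await asyncio.sleep(0.01)

async def motor_task():
    # Always acts on the freshest reading available.
    while True:
        command = "stop" if latest["cliff_ahead"] else "forward"
        await send_motor_command(command)
        await asyncio.sleep(0.05)

async def main():
    # The two activities are scheduled concurrently and coordinated through
    # shared state rather than interleaved in a single long loop.
    await asyncio.gather(sensor_task(), motor_task())

if __name__ == "__main__":
    asyncio.run(main())
```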

Beyond tackling the problem of concurrency, the work done by Craig's team will also simplify the writing of distributed robotic applications through a technology called decentralized software services (DSS). DSS lets developers create applications in which the services - the parts of the program responsible, for example, for reading a sensor or controlling a motor - run as separate processes that can be orchestrated much the way text, graphics and information from several servers are combined on a single web page. Because DSS allows software components to run in isolation from one another, a component that fails can be shut down and restarted - or even replaced - without having to reboot the whole machine. Combined with broadband wireless technology, this architecture makes it easy to monitor and adjust a robot from a remote location using a web browser.
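
DSS is a .NET and web-based technology, and the sketch below does not use it. As a toy illustration of the architectural idea only - independent services that a supervisor can restart individually instead of rebooting the whole robot - here is a hypothetical Python example built on the standard multiprocessing module; camera_service and motor_service are placeholder stubs.

```python
import multiprocessing as mp
import time

def camera_service():
    # Hypothetical service: would read camera frames and publish them.
    while True:
        time.sleep(1)

def motor_service():
    # Hypothetical service: would accept drive commands and run the motors.
    while True:
        time.sleep(1)

SERVICES = {"camera": camera_service, "motor": motor_service}

def supervise():
    # Launch each service as its own process; if one dies, restart only
    # that process rather than restarting the entire machine.
    procs = {name: mp.Process(target=fn) for name, fn in SERVICES.items()}
    for proc in procs.values():
        proc.start()
    while True:
        for name, proc in list(procs.items()):
            if not proc.is_alive():
                print(name, "service failed; restarting it")
                procs[name] = mp.Process(target=SERVICES[name])
                procs[name].start()
        time.sleep(0.5)

if __name__ == "__main__":
    supervise()
```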

What is more, the components of a DSS application do not all have to run inside the robot itself; they can be distributed across several computers. As a result, the robot can be a relatively inexpensive device that delegates its heavy processing tasks to the high-performance hardware found in today's desktop computers. I believe these advances will pave the way for an entirely new class of robots - essentially mobile, wireless peripheral devices that tap into the power of desktop computers to handle processing-intensive tasks such as visual recognition and navigation. And because such devices can be networked, we can expect to see groups of robots working together to accomplish goals such as mapping the seafloor or planting crops.
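
As a rough sketch of what such delegation might look like in practice - the endpoint, address and response format below are invented for illustration and are not part of DSS or Robotics Studio - a lightweight robot could send each camera frame to a desktop PC on the home network and get back a compact answer it can act on locally.

```python
import json
import urllib.request

# Hypothetical address of a desktop PC on the home network that runs the
# heavy visual-recognition code on the robot's behalf.
DESKTOP_URL = "http://192.168.1.10:8080/recognize"

def recognize_remotely(image_bytes):
    # Ship the raw camera frame to the desktop and return its lightweight
    # answer, e.g. {"object": "door", "open": true}, which the robot's own
    # modest processor can easily act on.
    request = urllib.request.Request(
        DESKTOP_URL,
        data=image_bytes,
        headers={"Content-Type": "application/octet-stream"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())
```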

These technologies are a key part of Microsoft Robotics Studio, a software development kit built by Tandy's team. The kit also includes tools that make it easier to create robotic applications in a wide range of programming languages. One example is a simulation tool that lets robot builders test their applications in a 3D virtual environment before trying them out in the real world. Our goal is to create an affordable, open platform that allows robot developers to readily integrate hardware and software into their designs.

Should we call them robots?

When will robots become part of our daily lives? According to the International Federation of Robotics, about two million personal robots were in use around the world in 2004, and another seven million will be installed by 2008. South Korea's Ministry of Information and Communication hopes to put a robot in every home there by 2013. The Japan Robot Association predicts that by 2025 the personal robot industry will be worth more than $50 billion a year worldwide, compared with about $5 billion today.

As with the personal computer industry in the 1970s, it is impossible to predict exactly which applications will drive this new industry. It seems quite likely, however, that robots will play a central role in providing physical assistance and even companionship for the elderly. Robotic devices will probably help people with disabilities and will augment the strength and endurance of soldiers, construction workers and medical personnel. Robots will maintain dangerous machinery in industrial plants, handle hazardous materials and monitor remote oil pipelines. They will enable health care workers to diagnose and treat patients who may be thousands of kilometers away, and they will be central players in security systems and search-and-rescue operations.

Some of the robots of the future may look like the humanoid devices of Star Wars, but most will look nothing like C-3PO. In fact, as mobile peripheral devices become more and more common, it may be increasingly hard to say exactly what a robot is. Because these new devices will be so ubiquitous, so specialized in purpose and so different from the two-legged automatons of science fiction, we probably will not even call them robots. But as they become affordable to consumers, they could have just as profound an impact on the way we work, communicate, learn and spend our time as the personal computer has had over the past 30 years.

Overview / Robotic Future

The robotics industry faces many challenges similar to those faced by the personal computer industry 30 years ago. The lack of common platforms and standards usually forces developers to start from scratch when building their machines.

Another challenge is to give robots the ability to sense and react quickly to their surroundings. The decrease in the prices of processing power and sensors allows researchers today to deal with these problems.

Robot builders can also take advantage of new software tools, which make it easier to write programs that run on different types of hardware. Networks of wireless robots will be able to connect to personal computers and use their power to perform tasks such as visual recognition and navigation.

The robot and the PC can be friends

Connecting home robots to personal computers could provide many benefits. An office worker, for example, could use his PC to keep an eye on a network of robots at home - checking the house's security, the vacuuming of the floors and the folding of the laundry, and even looking in on his bedridden mother. The machines would communicate wirelessly with one another and with the home's personal computer.

About the author

Bill Gates is a co-founder of Microsoft, the world's largest software company, and serves as its chairman. In the 1970s, while studying at Harvard University, Gates developed a version of the BASIC programming language for the first microcomputer, the MITS Altair. He left Harvard in his junior year to devote his energies to Microsoft, the company he founded in 1975 with his childhood friend Paul Allen. In 2000 Gates and his wife, Melinda, created the Bill and Melinda Gates Foundation, which is dedicated to improving health, reducing poverty and increasing access to technology around the world.

And more on the subject

More general information about robotics is available at these web addresses:

The Center for Innovative Robotics: www.cir.ri.cmu.edu

The homepage of the DARPA Grand Challenge: www.darpa.mil/grandchallenge

International Federation of Robotics: www.ifr.org

The Robotics Alliance project: www.robotics.nasa.gov

Robotics Industries Association: www.roboticsonline.com

The Robotics Institute: www.ri.cmu.edu

The Tech Museum of Innovation, Robotics: www.thetech.org/robotics/

Technical details and additional information about Microsoft Robotics Studio can be found at: www.msdn.microsoft.com/robotics
