
The robot that learns to walk by itself

The surprising thing is how quickly Dreamer learned to walk. Babies need long months of practice before they can walk steadily; the four-legged robot guided by Dreamer in the experiments could walk, keep its balance and even try to escape from the laboratory after just one hour.

Two A1 quadruped robots imitate the behavior of dogs. Screenshot from a video by the researchers

How does a baby learn to walk? With a lot of falling. No one explains how to place one foot at a time, lift and lower it, or shift the center of gravity. The baby tries, falls, gets up and tries again. Eventually, almost miraculously, it manages to walk.

How does a robot learn to walk?

The answer depends on who is asking. Older robots - that is, those from more than ten years ago - did not really 'learn' to walk. Human engineers modeled the correct movement of each robotic leg, joint by joint and screw by screw, and literally programmed every step the robot took. This worked well, on the whole, until the robot advanced more than a few steps, or reached a spot where the floor sloped by even a fraction of a percent - and suddenly the rigid instructions were no longer enough to keep it standing.
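To make the brittleness concrete, here is a minimal sketch of what such hand-programmed control amounts to: a fixed sequence of joint targets replayed in an open loop. The joint names, angles and timing are illustrative assumptions, not any particular robot's real gait.

```python
import itertools
import time

# One hand-tuned gait cycle: target angles (degrees) for each group of joints.
# These values are made up purely for illustration.
GAIT_CYCLE = [
    {"front_hips": 20, "front_knees": -30, "rear_hips": -20, "rear_knees": 30},
    {"front_hips": -20, "front_knees": 30, "rear_hips": 20, "rear_knees": -30},
]

def send_joint_targets(targets):
    # Stand-in for a motor-controller command; a real robot would drive servos here.
    print(targets)

# Replay the same fixed cycle. Nothing here senses slopes, slips or pushes,
# which is exactly why this approach breaks down outside the lab floor.
for _, targets in zip(range(8), itertools.cycle(GAIT_CYCLE)):
    send_joint_targets(targets)
    time.sleep(0.1)
```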

In the last decade, the field of machine learning has leapt forward, and roboticists have begun to train robots to walk. To do this, they created a model of the robot in a virtual simulator and let it try to walk there. In other words, there was no robot at all: just an algorithm controlling a simulated body inside a simulated world. The algorithm played almost randomly with the robot's legs, aiming to propel it forward. It took hundreds of thousands of runs and attempts, but eventually the virtual robot learned to walk in the virtual world. All that remained was to copy the algorithm to the physical body of a robot - and voila, now it too knew how to walk.
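In rough outline, the recipe looks like the toy sketch below: a simulated 'robot', a trial-and-error learning rule, and thousands of simulated attempts before anything touches a physical machine. The environment, reward and learning rule here (plain tabular Q-learning) are illustrative assumptions standing in for the much heavier simulators and algorithms actually used.

```python
import random

class SimulatedRobot:
    """Crude stand-in for a physics simulator: the 'robot' makes progress
    when the chosen leg movement matches the current phase of a gait cycle."""
    def __init__(self):
        self.t = 0
    def reset(self):
        self.t = 0
        return self.t % 4                    # observation: phase of the gait cycle
    def step(self, action):
        reward = 1.0 if action == self.t % 4 else -0.1   # progress vs. stumbling
        self.t += 1
        return self.t % 4, reward, self.t >= 50          # obs, reward, done

def train(episodes=2000, epsilon=0.1, lr=0.5, gamma=0.9):
    """Many simulated trials; no real robot involved."""
    q = {(s, a): 0.0 for s in range(4) for a in range(4)}
    env = SimulatedRobot()
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # Mostly exploit the best-known action, occasionally explore at random.
            a = random.randrange(4) if random.random() < epsilon \
                else max(range(4), key=lambda a: q[(s, a)])
            s2, r, done = env.step(a)
            target = r if done else r + gamma * max(q[(s2, a2)] for a2 in range(4))
            q[(s, a)] += lr * (target - q[(s, a)])        # Q-learning update
            s = s2
    return q

policy = train()   # this learned policy would then be copied onto the physical robot
```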

That, at least, is the theory. The trouble is that virtual simulations fail to capture the full complexity of the physical world, so an algorithm trained in the virtual world struggles to control a metal-and-silicon robot well. Such simulations also require enormous computing power: to train a robotic hand to manipulate a cube, OpenAI needed roughly a hundred years of simulated experience.

At this point, just in the last month, a new approach entered the game. A research group from Berkeley built an algorithm called Dreamer. Dreamer serves as a robotic brain equipped with a so-called "world model": a model it runs at high speed that examines the world around it and estimates the probability that a given action will advance the goal it has been set - for example, that moving a leg in a certain way will cause the robot to move forward. It predicts the future, and, more importantly, it updates its predictions according to the results.
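To give a feel for the idea, here is a toy sketch of such a loop: the agent predicts how much progress each action will bring, acts on its best prediction, and then corrects the prediction from what actually happened. Everything in it - the toy world, the update rule, the learning rate - is an illustrative assumption, not the actual Dreamer architecture.

```python
import random

N_ACTIONS = 4

class ToyWorld:
    """Hidden dynamics the agent does not know: one action truly moves it forward."""
    def __init__(self):
        self.best_action = random.randrange(N_ACTIONS)
    def step(self, action):
        return 1.0 if action == self.best_action else 0.0   # observed progress

class WorldModel:
    """The agent's internal guess of how much progress each action yields."""
    def __init__(self, lr=0.2):
        self.predicted_progress = [0.5] * N_ACTIONS
        self.lr = lr
    def imagine(self):
        # "Dream": rank actions by predicted outcome before acting in reality.
        return max(range(N_ACTIONS), key=lambda a: self.predicted_progress[a])
    def update(self, action, observed):
        # Correct the prediction toward what the real world actually returned.
        error = observed - self.predicted_progress[action]
        self.predicted_progress[action] += self.lr * error

world, model = ToyWorld(), WorldModel()
for _ in range(50):
    action = model.imagine() if random.random() > 0.2 else random.randrange(N_ACTIONS)
    outcome = world.step(action)      # try the action in the 'physical' world
    model.update(action, outcome)     # learn from the result

print("predicted progress per action:", [round(p, 2) for p in model.predicted_progress])
print("true best action:", world.best_action)
```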

Dreamer operates in the physical world, and according to research published on arXiv at the end of June, it achieves impressive results. It begins its training session lying on its back with its legs in the air. From this unpromising starting point, and through trial and error in the real world, it finds a way to roll over, stand up, walk and even run. When it is pushed, it falls - but after a few minutes of such pushes it learns how to cope with them and keep its balance with impressive composure. All in all, Dreamer needs only one hour to reach achievements that previously required thousands of hours of simulation and training.

What does Dreamer mean?

First of all, it illustrates the fact that advanced artificial intelligences increasingly resemble the brains of biological robots: that is, flesh-and-blood brains. We too run a "world model" between the walls of our skull, and thanks to it we can walk, run and throw a ball. The brain runs the model, and the more we practice, the more accurate it becomes. After ten thousand ball throws, each of which trains the model just a little more, we reach a high level of skill.

The surprising thing, to me, is how quickly Dreamer learned to walk. Babies need long months of practice before they can walk steadily. The four-legged robot guided by Dreamer in the experiments could walk, maintain its balance and even try to escape from the laboratory after just one hour.

Second, if algorithms like Dreamer continue to prove themselves, the robots of the near future - tomorrow morning, roughly - will be able to acquire new skills within a few hours. This is another step toward the robot revolution: a world where startup companies can build a robot from off-the-shelf parts and teach it to perform its tasks in a short time, all without needing professional roboticists. Just as children twenty years ago learned to program and became millionaires at a young age, today's children will soon start coming up with innovative ideas for using robots - and implementing them in the field. And thanks to algorithms like Dreamer, they will be able to build robots of different shapes that still learn to move on their own.

A third and particularly interesting implication is that algorithms like Dreamer can keep a robot functioning even when it is damaged. Think of a four-legged robot that breaks one of its legs during a rescue mission in the Amazon. The robot will be able to work out, in a short time, how to move on only three legs. It won't do so gracefully - but it also won't be completely immobilized like robots of the old generation. And in the not-too-distant future, the same robot may also be able to reach out an arm, pick up a branch and use it as a replacement for the injured leg. Such progress matters greatly on the battlefield and for robots sent to remote locations: it can extend the life of robots that would otherwise be put out of use by damage. And who knows? Perhaps the brains of robots in space - the kind that mine metals from asteroids and send them to Earth, for example - will rely on an algorithm similar to Dreamer.

Dreamer, in short, is a sign of things to come: smarter robots able to perform a wider range of actions, with the tools for developing them placed in the hands of the general public. Along the way, it also shows how the technology is converging on a mode of operation similar to that of the human brain, with a sophisticated "world model" - while also beginning to overtake our abilities and teach robots to perform new actions in a short time and at superhuman speed.

I can't wait to see the capabilities of robots a decade from now, and to discover the uses we will yet find for them. We will see robots in space, at sea, in the air and on land, and some of them will be able to learn by themselves how to adapt to new environments and changing conditions. The future is going to be very interesting - in the world of robots.
