
Why did the autonomous cars fail?

Autonomous taxis have indeed reached the roads in some cities, but they are still far from replacing taxi drivers, or from capturing a significant share of the public transportation market. Private autonomous vehicles - those owned by individuals - are nowhere to be seen. What went wrong?

The year is 2014, and Elon Musk is exercising one of his great talents: being interviewed on television.

"Autonomous cars [driverless vehicles. R.C.] will become a reality for sure," he predicted on CNN. "A Tesla car next year will probably be able to be in ninety percent self-driving mode. Like, ninety percent of your miles could be on autopilot. Certainly when driving on the highway."[1]

2014 gave way to 2015, and the world carried on as usual: with humans behind the wheel. This did not deter Musk, who has promised the same thing every year since, in a well-publicized and well-documented manner. Every year, the expectation was that driverless cars were just around the corner. And every year anew, hopes were shattered on the hard and jagged rock of reality. Even when Tesla launched a new "autonomous driving" mode - known as FSD - it quickly became clear that it required a human presence. Not only because regulators require human supervision, but simply because Tesla's vehicles cannot handle the roads in every situation[2].

It's fun to mock Musk, but he wasn't the only one who predicted a bright future for autonomous vehicles. Most of the others simply spoke with their money rather than their lips. Companies developing autonomous vehicles raised enormous capital for the cause. To give a sense of the scale, Cruise burned through an insane 1.4 billion dollars in 2022 on the research, development and training of its robotic vehicles[3]. Since 2010, 106 billion dollars have been invested in the development of autonomous vehicles[4]. The result?


Not all of the promising predictions have come true in recent years. Autonomous taxis have indeed reached the roads in some cities, but they are still far from replacing taxi drivers, or from capturing a significant share of the public transportation market. Private autonomous vehicles - those owned by individuals - are nowhere to be seen. Where they do exist, they are in the hands of billionaires only, especially those who own the aforementioned companies.

You can see the dismal disillusionment in the financial data as well. Waymo, an autonomous-vehicle developer, has lost approximately eighty percent of its value in the last five years: from a valuation of 175 billion dollars to only thirty billion[5]. Other companies in the field, such as Drive.AI, Voyage and Zoox, found themselves in the same predicament. They simply couldn't deliver the goods.

What happened? Why hasn't the vision of autonomous cars come to fruition, despite all the vast resources that have been poured into it? What needs to happen to breathe new life into it? And what can we learn from the dream and its shattering that will help us understand the future of artificial intelligence more generally?

To answer these questions, we need to go back to prehistory - that is, to the second decade of the 21st century - to the time when a Tesla vehicle tried, completely on its own, to run over a plane. And it succeeded.

In mid-2022, an anonymous millionaire summoned his Tesla Model Y to him. The car pulled out of its parking spot and moved steadily and gently toward its owner, as it had been taught to do. It used all its sensors to scan its surroundings, and knew to avoid any stray vehicle, to stop for cats and to stay off the sidewalk.

Unfortunately for the vehicle's owner, his Tesla did not know how to deal with the private plane parked in the area. It decided to ignore the plane and keep driving forward.

As video of the accident shows, that was a mistake. A mistake that ended with a car worth more than $50,000 colliding with a plane worth two million dollars. But who's counting?[6]

The reason the plane confused the car so badly is that its artificial intelligence is not rule-based. No one explained the traffic rules to the car in detail. Instead, the artificial intelligence observed the roads, the parking lots and the streets for hundreds of thousands of hours. It received positive and negative reinforcement depending on the performance it demonstrated, and thus learned to deal mainly with the most common events a car can encounter on the roads.

These, it turns out, do not include parked planes.

The plane - and the Tesla - are telling anecdotes that illustrate one of the biggest limitations of artificial intelligence today: it lacks common sense. It has difficulty generalizing to new situations and understanding their larger meaning. Because of this, it needs to be trained specifically for each individual case, which is simply impossible.

Think of a situation where a black-and-white soccer ball rolls onto the road - and a boy with long blond hair in a blue shirt runs after it. An autonomous car that encounters such an event for the first time during its training period sees that its supervisor immediately takes control and brakes. It learns that from now on, every time a black-and-white soccer ball rolls onto the road, followed by a boy with long blond hair in a blue shirt, it has to brake.

But what if the child is bald? For this, the car was not trained - and it will not stop. What if he hobbles along with a walking stick? For this, the car was not trained - and it will not stop. What if he is wearing a red shirt instead? For that, the car was not trained - and there goes the last fan of Hapoel Tel-Aviv.

It is clear to us as humans that only marginal details have changed, and that the most important thing still holds: a human being ran into the road after the object he lost. Artificial intelligence does not yet have the ability to understand this. It drowns in a tangle of pieces of information, unable to decipher which of them matters most.
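The failure can be sketched as a toy example. Every feature name and "policy" below is hypothetical, invented purely for illustration: a system that memorizes the full bundle of details from its single training example fails on any variation, while common sense keys on the one detail that actually matters.

```python
# Toy illustration (hypothetical features): a model that memorizes the exact
# combination of details from its one training example fails to generalize.

training_example = {
    "object_on_road": "soccer ball",
    "hair": "long blond",
    "shirt": "blue",
    "person_entering_road": True,
}

def naive_learned_policy(scene: dict) -> str:
    """Brakes only when the scene matches the training example exactly."""
    return "brake" if scene == training_example else "keep driving"

def common_sense_policy(scene: dict) -> str:
    """Keys on the single detail that actually matters."""
    return "brake" if scene.get("person_entering_road") else "keep driving"

# A bald child in a red shirt chasing the same ball:
variant = dict(training_example, hair="none", shirt="red")

print(naive_learned_policy(variant))   # → keep driving  (the failure)
print(common_sense_policy(variant))    # → brake
```

The point of the sketch is not that real driving systems compare dictionaries, of course, but that without an internal notion of "a person is entering the road", every superficial variation looks like a brand-new situation.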

Such events are known as "edge cases". Artificial intelligence can train on normal driving, and it can certainly do so better than a human driver. But it is the edge cases that confuse and trip it up. This is a rule of thumb that applies to most uses of artificial intelligence today. The problem is that on the road, edge cases can easily cost human lives. Not only that, but these events are not rare; statistically, they are quite common. Thousands of drivers all over the world come across children running after their balls into the road. Millions of drivers see flocks of birds on the road - and know they should not brake suddenly, because the birds will fly away as soon as the vehicle approaches. Countless drivers hear sirens behind them, and realize they must pull over to let the emergency vehicle pass.

The autonomous vehicle does not understand these situations easily, and any mistake can cause an accident. And when it comes to situations that recur frequently, that means a huge number of accidents - ones that human drivers would avoid without much cognitive effort.
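A back-of-the-envelope calculation shows why individually rare events are collectively common. All the numbers below are assumptions chosen purely for illustration, not real statistics:

```python
# Hypothetical numbers, only to illustrate why rare events add up:
# each individual kind of edge case is rare, but there are many kinds.
p_single = 1 / 100_000   # chance a given kind occurs on one trip (assumed)
n_kinds = 5_000          # distinct kinds of edge cases (assumed)
trips_per_year = 500     # trips a vehicle makes per year (assumed)

# Probability of hitting at least one edge case on a single trip:
p_trip = 1 - (1 - p_single) ** n_kinds
print(f"per trip: {p_trip:.1%}")   # → per trip: 4.9%

# Probability of at least one edge case over a year of driving:
p_year = 1 - (1 - p_trip) ** trips_per_year
print(f"per year: {p_year:.1%}")   # → per year: 100.0%
```

Under these made-up numbers, any single edge case is a one-in-a-hundred-thousand event, yet a vehicle that drives regularly will meet some edge case within a year with near certainty.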

The companies that developed autonomous vehicles invested a fortune in training the vehicles. This is why they ran cars on the roads for thousands of hours: to try and train them to recognize and deal with every possible situation. The champion in the field is probably Tesla, which let its drivers train the vehicles on more than five billion kilometers of roads[7]. It turns out that is not enough.

This, then, is the first and most important problem that the autonomous vehicle companies encountered, and one they have not yet been able to solve. Artificial intelligence has been progressing by leaps and bounds in recent years, and one can understand why the companies hoped they could train the vehicles within a few years to deal with edge cases. Unfortunately, that hasn't happened yet.

But this is only the first problem.

In early 2018, one of Uber's autonomous vehicles ran into an identity crisis: it couldn't understand what it was seeing in front of it. A woman walking a bicycle began to cross the road, and the combination of the two confused the vehicle.

The human safety driver who was supposed to assist in such a case was not available. That is, she was behind the wheel, but according to police reports she was watching a TV show on her smartphone and not focusing on the road[8]. The autonomous vehicle had to make a decision on its own. Its electronic mind flickered between three options: vehicle, bicycle, or "other". In the end it came to the decision that it had to brake.

Unfortunately for the pedestrian, it made that decision just 1.3 seconds before it struck and killed her.

The incident drew a great deal of attention to autonomous vehicles, both from the public and from legislators. This was the first case in which an autonomous car struck and killed a person, and suddenly the dangers became much more tangible. Uber halted its road tests in all cities for six months. The other companies paused for some soul-searching, at least ostensibly.

What about the legislators? What have governments done about it? Officially, no law has been changed or updated as a result of the case. But there is a long way between formality and action. The laws designed to protect the public from unsafe vehicles have existed for over a hundred years. They only had to be seriously enforced against the companies that released robots onto the roads. It is difficult to find clear evidence, but I have no doubt that legislators and municipalities all over the world began to rein in the autonomous vehicle companies more tightly, and to demand serious guarantees of vehicle safety. Each such limitation and delay reduced the number of hours that autonomous vehicles could spend on the roads, thus hampering their training. Even if the artificial intelligences could theoretically have reached a high enough level of common sense about what happens on the roads - the delays hindered their ability to get there.

The autonomous vehicle companies had to grit their teeth and face the new situation, in which their cars were also seen as potential killing machines. They had no other choice, since most of them were based on a single product: the autonomous vehicle itself.

When Amnon Shashua sold Mobileye to Intel in 2017, it was the largest transaction in the history of the Israeli economy. The future looked rosy for Mobileye, which developed, manufactured and marketed driver-assistance systems for autonomous driving. The world, according to Shashua, was about to be filled with autonomous vehicles.

Today, six years later, Mobileye does not seem to have realized that vision. Intel decided to spin Mobileye off as a public company, and it was listed on Wall Street at the end of 2022 at a value of almost 17 billion dollars. That is certainly not a negligible sum, but it is also far from the valuations once attached to the company.

And yet, Mobileye is in an excellent position compared to its competitors. It managed to stay on its feet while all the rest fell by the wayside. Why?

And here lies the third reason for the failure of autonomous vehicles: the companies behind them mostly relied on a single business model. Specifically, they tried to develop complete autonomous vehicles - the kind you can get into, and that will take you directly to your destination. If the technology had advanced fast enough, they would have been hugely successful and would dominate a market worth hundreds of billions of dollars. But it's hard to predict the future - and the company managers woke up one day to discover that they had no working product, no paying customers and no sources of income. What did they have? Salaries to pay, and lots of them. This was the stage at which these companies collapsed, disappeared or were acquired by others.

Mobileye's executives had luck, brains, or both. By his own account, Shashua believed in the vision of autonomous vehicles, but Mobileye did not try to build the vehicle of the future in its entirety. It produced technologies designed to solve specific problems in the present, even in vehicles under human control. It developed and sold advanced cameras and sensors that could also be installed in ordinary vehicles and helped reduce the chance of traffic accidents. It advanced toward the future more slowly, but surely. And so it is still here, alive and kicking[9]. And no, it hasn't given up on autonomous cars. In fact, Shashua believes in them more than ever.

"The public sentiment around autonomous driving moves like a pendulum," he said recently, as quoted by Omar Kabir in Calcalist. "Four years ago the sentiment was at the pole of 'it's just around the corner'; now it is at the pole of 'it will happen in 2050'. The reality is that we are not somewhere in the middle, but closer to turning the corner."[10]

Is he right? I believe so, but only for a very specific type of autonomous vehicles.

Shashua realized that if artificial intelligence has difficulty dealing with a wide variety of tasks and situations, then one solution could be to reduce the frequency of the edge cases it encounters. And so the dream of producing a private autonomous vehicle - one that anyone can purchase and drive through a crowded neighborhood or a squirrel-infested forest - was abandoned. The autonomous vehicle of the near future is going to be more like a taxi, but it will travel mainly in lanes specially reserved for it. Lanes that will likely be separated by a fence from any child with a ball in hand, or from pedestrians in general.

Shashua claims that this year a hundred robotaxis - armed with Mobileye's advanced technology, of course - will hit city streets. According to Tal Shaf of Tech12, who covered Shashua's remarks in early 2023, the number of robotic taxis will reach tens of thousands within the next five years[11]. The revolution will come - but more slowly, and in a more limited form, than they had hoped.

The revolution will not be limited to taxis alone. Anthony Levandowski - one of the leaders of the autonomous vehicle vision - decided several years ago that the future lies precisely in mining and construction. He stopped developing robotic private vehicles and switched to investing his efforts in robotic trucks. These travel back and forth hundreds of times a day along the same steep, narrow path, carrying dozens of tons of rocks and minerals. It is an environment with a minimal number of surprises, where the trucks save the former drivers five hours of work a day[12]. And yes, it's a dirty job that most of us don't like to think about, but it's one that robotic vehicles can do successfully. And at least on the construction sites, they haven't killed anyone yet.

When I read back over everything I have written, I am struck by a strong feeling of uncertainty about the future. We futurists like to make big claims. Many claim, for example, that artificial general intelligence is just around the corner: an artificial intelligence that can learn any field humans can learn, and specialize in it at a human level. Can we take such predictions seriously, given the failure of autonomous vehicles - a failure that resulted precisely from the fact that artificial intelligence did not develop fast enough?

My answer to the question is that I am optimistic - but cautiously. I am optimistic because the rate of improvement in artificial intelligence capabilities is only increasing. I am optimistic because I see how artificial intelligence integrates into every scientific field and helps speed up developments in it, including computer science. And while it's clear that today's computer models are still imperfect, and not yet armed with the grand prize of common sense, I believe that the artificial intelligences of the present will help us reach the artificial intelligences of the future faster than most of us expect.

But - and this is important to understand - this technological-scientific leap will not happen tomorrow morning. If it happens before the end of the decade, it will be a big surprise to me. I would bet on 2040 or 2050 - the period when Kurzweil and a significant share of artificial intelligence experts believe we will reach this development. And then, for sure, we can have our autonomous cars as well.

You just have to wait less than twenty years. A little longer than the time we waited for the light rail in Tel Aviv.

We'll get through that too.
