Autonomous cars: What could go wrong?

Sometimes an autonomous car's recognition system fails to recognize an emergency vehicle stopped on the side of the road with its lights flashing. What technological solutions are researchers developing to solve this problem?

Lior Weitzhandler, Davidson Institute, the educational arm of the Weizmann Institute

The artificial intelligence software must decide what to do based on input received from sensors such as cameras, lasers, radar, and more. A Tesla electric car. Illustration: depositphotos.com

In recent years we have witnessed a change in driving culture, driven by technological developments in hardware and software, and especially by the emergence of autonomous cars. At the heart of these vehicles is a computer running artificial intelligence software. The software's role is similar to that of the analytical human brain: it must decide what to do based on input received from sensors such as cameras, lasers, radar, and more.

Like the human driver, the artificial driver, that is, the AI software, is not free from errors, although its errors are of a very different kind. While some accidents involving a human driver stem from factors such as fatigue or lack of concentration, the artificial driver's errors stem from the software itself, for example from an incorrect interpretation of its input.



Even a computer can make mistakes.

An autonomous car relies primarily on visual input, that is, images, and on its ability to distinguish between the objects in its environment: the road, lanes, traffic signs, and other vehicles. This ability is critical: imagine what would happen if the car's computer failed to recognize another vehicle that had stopped further down the lane.

The software model responsible for object recognition is extremely complex, but at its core it is a system of code trained on a specific database. Given a large number of images in which different objects appear, such as cars, motorcycles, or trucks, the algorithm learns to recognize and classify each of them. After the training phase, when a new image is received, the model's output is the same image, with each object it has recognized surrounded by a rectangle and labeled with a number between zero and one that ranks the certainty of the recognition. The information from this output is then fed to the next stage of the system, where decisions are made based on the degree of recognition. It is customary to define a certainty threshold, for example 0.7, and anything that scores above it is treated as a real object.
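To make the thresholding step concrete, here is a minimal sketch in Python. Only the idea of per-object confidence scores and the 0.7 threshold come from the text; the detection values and the filter function are hypothetical examples.

```python
# A minimal sketch of the certainty-threshold step described above.
# The detections themselves are made-up examples; only the 0.7 threshold
# and the idea of a per-object confidence score come from the article.

CONFIDENCE_THRESHOLD = 0.7  # detections above this are treated as real objects

# Each detection: (class label, confidence between 0 and 1, bounding box x1, y1, x2, y2)
detections = [
    ("car",        0.92, (120, 340, 410, 560)),
    ("motorcycle", 0.35, (600, 400, 680, 500)),  # too uncertain, will be dropped
    ("truck",      0.81, (700, 300, 1100, 620)),
]

def filter_detections(detections, threshold=CONFIDENCE_THRESHOLD):
    """Keep only the objects the model is sufficiently certain about."""
    return [d for d in detections if d[1] >= threshold]

if __name__ == "__main__":
    for label, confidence, box in filter_detections(detections):
        print(f"{label}: confidence={confidence:.2f}, box={box}")
```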

This important operation may seem simple at first glance, but it is complicated by the huge variety of vehicles on the road. In addition, changing lighting and weather conditions affect the images fed to the system and make identification and classification more difficult. Autonomous car software developers therefore face tough challenges in the unusual situations that occur on the road. One such case has sparked a wave of interest.

The flashing warning light greatly reduces the certainty of identification as a "car." A police car with flashing lights. Illustration: depositphotos.com

What confuses the model?

It turns out that Tesla vehicles have difficulty dealing with vehicles stopped on the side of the road in the dark with their warning lights flashing. In fact, no fewer than 16 accidents have been reported in which Tesla vehicles collided with such vehicles. An investigation by the United States Department of Transportation found that the main culprit was drivers who were not sufficiently alert.

Researchers from Ben-Gurion University, in collaboration with Fujitsu Japan, sought to understand the technical aspect of the failure and to find a solution to it. They called the problem EpileptiCar, a combination of the words "epilepsy" and "car." To understand the source of the problem, the researchers used several different cameras and object recognition models and compared them. They filmed videos showing a vehicle with its warning light off for half a minute and then on for another half minute. They then split each video into its individual images (frames) and let the recognition model analyze them and determine the degree of certainty. In this way the model was fed a long sequence of images in which the warning light was off, followed by a sequence in which the warning light alternately turned off and on.
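The frame-by-frame measurement can be sketched roughly as follows. The article only says that each frame was passed to a recognition model and its certainty recorded; the use of OpenCV for video decoding and the detect() placeholder standing in for an open-source model are assumptions.

```python
# A rough sketch of the frame-by-frame experiment, assuming OpenCV for video
# decoding and a generic detect() function standing in for whichever
# open-source recognition model is used.

import cv2  # pip install opencv-python

def detect(frame):
    """Placeholder for an object recognition model.

    Assumed to return a list of (label, confidence, box) tuples;
    the actual models used by the researchers are not specified here.
    """
    raise NotImplementedError

def car_confidence_per_frame(video_path):
    """Return the 'car' confidence the model reports for every frame of a video."""
    confidences = []
    capture = cv2.VideoCapture(video_path)
    while True:
        ok, frame = capture.read()
        if not ok:
            break  # end of video
        car_scores = [conf for label, conf, _ in detect(frame) if label == "car"]
        confidences.append(max(car_scores, default=0.0))
    capture.release()
    return confidences
```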


Confidence in the identification of a car-type object by frame. The red line represents the time the warning light is off and the blue line the time it is on | From Feldman et al.

As the diagram shows, the warning light does indeed manage to confuse the model and cause fluctuations in the certainty of the vehicle's identification. To substantiate the claim, the researchers conducted another test in which they measured the frequency at which the warning light turned off and on, and found that it was 1.3 Hz, that is, 1.3 cycles per second. And indeed, this is exactly the rate at which the model's output alternated between certainty and uncertainty.
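One way to verify such a match is a simple frequency analysis of the per-frame confidence signal, as in the sketch below. The article does not describe how the researchers measured the frequencies, so this is only an illustration of the idea, with an invented confidence trace and an assumed frame rate of 30 frames per second.

```python
# Illustrative frequency check: find the dominant frequency in the per-frame
# confidence signal and compare it to the ~1.3 Hz flashing rate of the light.
# The confidence values and the 30 fps frame rate are assumptions.

import numpy as np

def dominant_frequency(confidences, fps=30.0):
    """Return the strongest non-zero frequency (in Hz) in the confidence signal."""
    signal = np.asarray(confidences, dtype=float)
    signal = signal - signal.mean()            # remove the constant component
    spectrum = np.abs(np.fft.rfft(signal))     # magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs[np.argmax(spectrum[1:]) + 1]  # skip the zero-frequency bin

if __name__ == "__main__":
    fps, seconds, flash_hz = 30.0, 30, 1.3
    t = np.arange(int(fps * seconds)) / fps
    # Synthetic confidence trace that oscillates at the flashing rate
    fake_confidences = 0.8 + 0.2 * np.sign(np.sin(2 * np.pi * flash_hz * t))
    print(f"dominant frequency ≈ {dominant_frequency(fake_confidences, fps):.2f} Hz")
```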

So why did the warning light cause the model's recognition certainty to drop so sharply? Each image is made up of a collection of dots (pixels), and each pixel is made up of three numerical values that represent the amounts of red, green, and blue light. According to the researchers, the problem lies in the distribution of the blue color in the image, especially in the area of the parked car, which changes when the warning light flashes and thereby changes the model's input.
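This blue-channel effect can be illustrated with a small sketch that compares the average blue value inside the parked car's bounding box with the light off and on. The frames, the box coordinates, and the BGR channel ordering are assumptions made for the example; only the claim that the blue distribution changes comes from the article.

```python
# Illustrative comparison of the blue channel inside the parked car's bounding
# box with the warning light off versus on. Frame contents and box coordinates
# are hypothetical; only the "blue distribution changes" claim comes from the text.

import numpy as np

def mean_blue(frame_bgr, box):
    """Average blue value inside a bounding box (assuming OpenCV-style BGR images)."""
    x1, y1, x2, y2 = box
    return float(frame_bgr[y1:y2, x1:x2, 0].mean())  # channel 0 = blue in BGR

if __name__ == "__main__":
    box = (100, 100, 200, 180)                               # hypothetical region of the parked car
    frame_off = np.full((480, 640, 3), 40, dtype=np.uint8)   # dark scene, light off
    frame_on = frame_off.copy()
    frame_on[100:180, 100:200, 0] = 220                      # the flash floods the region with blue
    print("blue, light off:", mean_blue(frame_off, box))
    print("blue, light on: ", mean_blue(frame_on, box))
```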

Once the problem was understood, the researchers moved on to finding a solution, a "medicine" they called Caracetamol, after the painkiller paracetamol. As mentioned, the object recognition model is trained on a pre-labeled database; the images from the vehicle's camera are passed to the model, which recognizes the objects and passes the information on. The researchers essentially added further layers to this recognition process. First, for each incoming image, they checked whether a warning light was present. If not, the image was passed to the regular object recognition model. If a warning light was detected, the image was passed to another recognition model that had been adapted for vehicles with warning lights. Finally, the outputs of all the models were combined into a single, uniform output.
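Schematically, the added layers amount to a routing step, roughly as in the sketch below. The light detector, the adapted model, and the way the outputs are merged are all placeholders, since the article does not specify how they are implemented.

```python
# A schematic sketch of the added recognition layers described in the text.
# All three functions below are placeholders: the real light detector, the
# adapted model, and the exact way the outputs are merged are not specified
# in the article.

def has_warning_light(frame):
    """Placeholder: decide whether a flashing warning light appears in the frame."""
    raise NotImplementedError

def regular_model(frame):
    """Placeholder: the ordinary object recognition model."""
    raise NotImplementedError

def warning_light_model(frame):
    """Placeholder: a model adapted for vehicles with flashing warning lights."""
    raise NotImplementedError

def recognize(frame):
    """Route the frame to the appropriate model and return one uniform output.

    The article says the outputs of all the models are combined into a uniform
    output; how exactly is not described, so this sketch simply returns a flat
    list of (label, confidence, box) detections from whichever model was used.
    """
    if has_warning_light(frame):
        detections = warning_light_model(frame)
    else:
        detections = regular_model(frame)
    return list(detections)
```

Routing to a specialized model only when a warning light is detected keeps the extra computation limited to the frames that actually need it.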

With these additions the researchers were able to reduce the model's volatility compared to the original recognition model. On average, the minimum certainty increased by 0.21, the average certainty by 0.27, and the maximum certainty by 0.07. In addition, the certainty range, that is, the difference between the maximum and minimum certainty, decreased by 0.19, which represents a significant improvement in the model's stability. Although the additional steps increase the analysis time, the researchers maintain that the extra processing will not burden the vehicle's computer resources.
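The metrics quoted here, minimum, average, and maximum certainty and the certainty range, can be computed from a per-frame confidence series as in this small sketch; the example values are invented.

```python
# Computing the stability metrics mentioned above from a per-frame confidence
# series. The example values are invented; only the definitions of the metrics
# (min, mean, max, and max-minus-min range) follow the text.

def stability_metrics(confidences):
    """Summarize a series of per-frame certainties."""
    lo, hi = min(confidences), max(confidences)
    return {
        "min": lo,
        "mean": sum(confidences) / len(confidences),
        "max": hi,
        "range": hi - lo,  # a smaller range means more stable recognition
    }

if __name__ == "__main__":
    original = [0.31, 0.88, 0.25, 0.90, 0.35, 0.86]   # fluctuating (light flashing)
    improved = [0.62, 0.90, 0.58, 0.91, 0.60, 0.89]   # more stable after the fix
    print("original:", stability_metrics(original))
    print("improved:", stability_metrics(improved))
```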

The video shows the EpileptiCar problem. 

And what's next? 

The researchers did not use Tesla's cameras and autonomous driving models, but rather standard commercial cameras and open-source recognition models. They also had difficulty finding databases of images of vehicles with warning lights in the dark, and of course the larger the database, the more reliable the results.

Artificial intelligence-based technological systems will certainly occupy an increasing place in our lives in the future. However, they still have dangerous flaws, and there are those who question their safety and the morality of their use. Some claim that countries are already using these systems in wars, and there are even rumors of artificial intelligence systems that have attacked targets without being ordered to do so. In addition to these discussions, quite a few questions arise regarding the copyright on the content used to train the artificial intelligence. 


One response

  1. Interesting article.
    I'm not at all sure that there is real artificial intelligence inside the models in Tesla cars. The solution presented here does not involve artificial intelligence but rather the opposite: the machine is explicitly coded to produce a better result, rather than "learning" on its own from the results.
