
MIT and Toyota release a dataset to help train models for autonomous driving

DriveSeg contains pixel-accurate representations of many common objects found on the road, as seen through the lens of a continuous video driving scene

A sampling of frames from the MIT AgeLab and Toyota video array. Courtesy of researchers Li Ding, Jack Terwilliger, Rini Sherony, Bryan Reimer and Lex Fridman, MIT

MIT AgeLab and Toyota's Collaborative Safety Research Center have jointly released a dataset containing pixel-level representations of common objects found on the road, intended to improve the training of autonomous vehicles.

"How can we train self-driving vehicles so that they have a deeper awareness of their surroundings? Can computers learn from past experiences to recognize future patterns that can help them safely navigate new and unexpected situations? These are some of the questions that researchers from AgeLab at the MIT Center for Transportation and Logistics and Toyota's Collaborative Safety Research Center (CSRC) are trying to answer by sharing a new open data set called DriveSeg.

Through DriveSeg, the Massachusetts Institute of Technology and Toyota are working to advance research into autonomous driving systems, which, similar to human perception, perceive the driving environment as a continuous stream of visual information.

"By sharing this dataset, we hope to encourage researchers, industry and other developers to develop new insights and directions for AI temporal models that enable the next generation of driver assistance technologies," says Brian Reimer, principal investigator. "Our long-standing working relationship with the Toyota CSRC has enabled our research efforts to influence future safety technologies."

"The ability to see is an important part of human intelligence," says Rini Sharoni, Toyota CSRC's chief engineer. "Every time we drive, we always monitor the movements of the environment to identify potential risks and make safer decisions. By sharing this data set we hope to accelerate research into autonomous driving systems and advanced safety features that are better adapted to the complexity of the environment that surrounds them. "

To date, self-driving data made available to the research community has mostly consisted of single, static image frames that can be used to identify and track common objects in and around the road, such as bicycles, pedestrians or traffic lights, using "bounding boxes." In contrast, DriveSeg contains pixel-accurate representations of many of the same common road objects, but as seen through the lens of a continuous video driving scene. This type of full-scene segmentation is particularly useful for identifying more amorphous objects, such as road construction and vegetation, which do not always have defined and uniform shapes.
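To make the distinction concrete, here is a minimal sketch, using a toy NumPy scene rather than the actual DriveSeg format, of how the two annotation styles represent the same irregular object: a bounding box reduces it to a single rectangle, while a segmentation mask labels every pixel.

```python
import numpy as np

# A hypothetical 8x8 scene: 0 = background, 1 = vegetation (an amorphous region).
# Pixel-level annotation: every pixel carries a class label.
mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:6, 1:4] = 1            # an irregular vegetation patch
mask[5, 4] = 1                # a stray pixel the box must also cover

# Bounding-box annotation: the same object reduced to a single rectangle.
ys, xs = np.nonzero(mask == 1)
box = (xs.min(), ys.min(), xs.max(), ys.max())   # (x_min, y_min, x_max, y_max)

# The box inevitably includes background pixels; the mask does not.
box_area = (box[2] - box[0] + 1) * (box[3] - box[1] + 1)
mask_area = int((mask == 1).sum())
print(f"box covers {box_area} pixels, object occupies {mask_area}")
```

For a compact object the two coverages are nearly identical, but for vegetation or a construction zone the box can be mostly background, which is why pixel-level labels matter for amorphous shapes.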

According to Sherony, video-based driving-scene perception provides a flow of data that more closely resembles dynamic, real-world driving situations. It also allows researchers to explore data patterns as they play out over time, which could lead to advances in machine learning, scene understanding and behavioral prediction.
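As one illustration of what a continuous stream enables, the sketch below applies a simple temporal smoothing pass over per-frame segmentation masks; the `temporal_smooth` helper and the array shapes are assumptions made for illustration, not part of DriveSeg or the researchers' method.

```python
import numpy as np

def temporal_smooth(masks: np.ndarray) -> np.ndarray:
    """Majority vote over a sliding window of per-frame class masks.

    masks: array of shape (T, H, W) holding integer class IDs per pixel.
    Each frame's labels are replaced by the most frequent label at that
    pixel across the frame itself and its immediate neighbors.
    """
    T = masks.shape[0]
    out = np.empty_like(masks)
    for t in range(T):
        window = masks[max(0, t - 1):min(T, t + 2)]  # up to 3 frames
        # Per-pixel mode: count occurrences of each class within the window.
        classes = np.unique(window)
        counts = np.stack([(window == c).sum(axis=0) for c in classes])
        out[t] = classes[counts.argmax(axis=0)]
    return out

# Three noisy frames of a 2x2 scene: a flickering pixel is stabilized.
frames = np.array([[[0, 1], [1, 1]],
                   [[0, 0], [1, 1]],   # top-right pixel flickers to 0 here
                   [[0, 1], [1, 1]]])
print(temporal_smooth(frames)[1])      # flicker corrected by its neighbors
```

A single static frame offers no such recourse: consistency across consecutive frames is exactly the kind of temporal pattern the dataset is meant to expose.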

DriveSeg is available free of charge and can be used by researchers for non-commercial purposes at this link.


One response

  1. Simulating autonomous driving involves a great deal of complexity, and this article does not explain which part of that complexity it set out to describe.
    I will try to explain:
    An autonomous vehicle has a set of assemblies that make up its senses: cameras in different locations, lidars (essentially lasers that measure the distance to a given point), computers and more.
    Every real object in the driving environment has to be modeled down to the level at which the specific vehicle receives information about it from each of its senses, for example, a reflection from a puddle on the road, or how the camera sees a cardboard box lying on the road.
    Without representations of all of these there is no simulation.
    Then challenging environments have to be built for the car. There is no point in having it drive in ordinary traffic, because it would take a long time before anything important happened that challenged the vehicle.
    Scenarios have to be built in which interesting things happen, and that is an art in itself, one that tools like AI are not really capable of at the moment.
