Unleashing the potential of artificial intelligence: a new algorithm developed at MIT improves reinforcement learning efficiency by up to 50 times

MIT researchers have developed an advanced reinforcement learning algorithm (a method for training AI systems through trial, error and reward) that optimizes the decision-making of AI systems in complex tasks such as urban traffic control, while significantly saving time and resources.

Reinforcement learning of artificial intelligence. Illustration: depositphotos.com

Researchers from MIT have developed an innovative reinforcement learning algorithm that enhances AI's ability to make decisions in complex situations. The algorithm focuses on choosing the most useful tasks for training and delivers significant performance gains with minimal data, making training up to 50 times more efficient.

This algorithm not only saves time and resources, but also opens new horizons for effective AI applications in real-world settings, such as traffic control, transportation optimization, and advanced mobility systems.

Challenges in reinforcement learning

AI systems based on reinforcement learning face significant difficulty when tasks change. For example, in urban traffic control, a model may fail to manage different intersections with varying speed limits, lane configurations and traffic patterns.

To address this challenge, the researchers developed an innovative algorithm that optimizes AI training through strategic task selection.

A strategic approach to task selection

The algorithm strategically selects the most effective tasks for training AI agents, so as to improve overall performance across the entire set of tasks.

For example, in urban traffic control, each task may represent a single intersection in the road network. The algorithm focuses on the intersections that contribute most to overall performance, thereby maximizing effectiveness while keeping training costs low.
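To make this task framing concrete, here is a minimal Python sketch of how each intersection might be described as its own training task, with a small, high-value subset chosen under a fixed budget. The class IntersectionTask and the scoring function contribution_estimate are hypothetical illustrations for this article, not code from the MIT study.

```python
from dataclasses import dataclass

# Hypothetical task description: each intersection in the city is one task.
@dataclass
class IntersectionTask:
    name: str
    speed_limit_kmh: int      # local speed limit
    num_lanes: int            # lane configuration
    peak_flow_veh_per_h: int  # rough traffic pattern

# A toy city: three intersections with different characteristics.
tasks = [
    IntersectionTask("A", speed_limit_kmh=50, num_lanes=2, peak_flow_veh_per_h=800),
    IntersectionTask("B", speed_limit_kmh=70, num_lanes=4, peak_flow_veh_per_h=2000),
    IntersectionTask("C", speed_limit_kmh=30, num_lanes=1, peak_flow_veh_per_h=300),
]

def contribution_estimate(task: IntersectionTask) -> float:
    """Assumed score: busier, more complex intersections are taken to
    teach the agent more about the whole network."""
    return task.peak_flow_veh_per_h * task.num_lanes

# Train only on the intersections estimated to contribute most.
budget = 2
selected = sorted(tasks, key=contribution_estimate, reverse=True)[:budget]
print([t.name for t in selected])  # e.g. ['B', 'A']
```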


Innovation in the MBTL algorithm

MIT researchers have developed an algorithm called Model-Based Transfer Learning (MBTL), which allows selecting tasks with the highest value to improve performance.

The algorithm measures two key parameters:

  1. How well the algorithm would perform if it were trained on a single task alone.
  2. How much its performance would degrade when that trained model is applied to the other tasks, a concept known as "generalization performance".

By focusing on the tasks that offer the greatest expected performance improvement, MBTL significantly increases the efficiency of the training process.
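As a rough illustration of this idea, the sketch below greedily picks training tasks by combining the two quantities above: an estimate of stand-alone training performance and an assumed drop in performance when the trained model is transferred to other tasks. The linear-decay generalization model and all names here are simplifying assumptions for the example, not the published MBTL implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
num_tasks = 10
# Tasks laid out on a 1-D axis (e.g. intersections ordered by some parameter).
task_positions = np.linspace(0.0, 1.0, num_tasks)

# Assumed estimate of how well an agent performs if trained on task i alone.
train_perf = 0.8 + 0.2 * rng.random(num_tasks)

# Assumed generalization model: performance degrades linearly with the
# distance between the training task and the task it is transferred to.
decay = 0.5

def transfer_perf(src: int, dst: int) -> float:
    return train_perf[src] - decay * abs(task_positions[src] - task_positions[dst])

def coverage(selected: list[int]) -> float:
    """Average, over all tasks, of the best performance any selected model achieves."""
    return float(np.mean([max(transfer_perf(s, t) for s in selected)
                          for t in range(num_tasks)]))

# Greedy selection: repeatedly add the task whose trained model raises
# the estimated performance over the whole task set the most.
budget, selected = 2, []
for _ in range(budget):
    best = max((c for c in range(num_tasks) if c not in selected),
               key=lambda c: coverage(selected + [c]))
    selected.append(best)

print("selected training tasks:", selected)
```

In this toy setting, two well-placed training tasks already cover the whole task set reasonably well, which mirrors the article's point that a handful of carefully chosen tasks can stand in for many.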


Implications for the future of artificial intelligence

In tests on simulated tasks, such as traffic signal control, real-time speed advisories and classic control problems, the algorithm proved 5 to 50 times more efficient than standard approaches.

For example, with MBTL you can train an algorithm on only two tasks and achieve the same performance as a standard method that uses data from 100 tasks.

According to Prof. Cathy Wu, who led the study:

"Our approach shows that you may not need all the information from all the tasks. In fact, training them all may be confusing for the algorithm and lead to lower performance."

In the future, the researchers plan to apply MBTL to more complex problems, including multidimensional task spaces, and to advance the use of this approach in real-world applications, especially in advanced mobility systems.

Link to the scientific article

