If it's research, then let it be in motion

Researchers have created a computer model whose input is a text describing a movement and whose output is an animated character that performs what is written

Images from the neural network developed by the researchers, which can animate Spider-Man, the famous pose of sprinter Usain Bolt, and the ballet "Swan Lake", without having been trained on them directly. Courtesy of the researchers

Computer-generated animated characters should move in a believable, convincing way that resembles human movement. Today this can be achieved with technologies drawn from three intersecting subfields of computer science: computer graphics, computer vision, and machine learning.

In recent years these fields have seen breakthroughs, thanks in part to artificial neural networks: computational mathematical models (algorithms) developed with inspiration from the networks of neurons (nerve cells) in the human brain. An artificial neural network consists of many interconnected units (with inputs and outputs) that pass numerical values to one another. Through machine learning, the numbers representing the strength of the connections between units are adjusted, and it is this connectivity that ultimately produces intelligent behavior. Such networks can be used in almost every computing application, for example identifying objects in photos and video, interpreting medical imaging, robotics, and autonomous transportation.
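To make the idea of "units passing numbers whose connection strengths determine behavior" concrete, here is a minimal illustrative sketch in plain Python. It is not the researchers' network; the weights, biases, and inputs are made up for the example. Learning would consist of adjusting the weights, which is exactly the "strength of the connection between the units" described above.

```python
# Illustrative sketch only: a tiny feedforward "layer" in plain Python.
# Each weight is the strength of one connection between two units.

def neuron(inputs, weights, bias):
    """One unit: a weighted sum of its inputs passed through a nonlinearity."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, total)  # ReLU: the unit "fires" only for positive sums

def layer(inputs, weight_rows, biases):
    """A layer is several units reading the same inputs in parallel."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Two input values feeding a layer of three units (weights chosen arbitrarily).
outputs = layer([1.0, -2.0],
                [[0.5, 0.1], [-0.3, 0.8], [1.0, 1.0]],
                [0.0, 0.1, -0.2])
print(outputs)
```

In a real network many such layers are stacked, and the weights are tuned automatically from examples rather than written by hand.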

Prof. Daniel Lishchinsky from the School of Engineering and Computer Science at the Hebrew University and Prof. Daniel Cohen-Or from the School of Computer Science at Tel Aviv University are developing innovative methods for generating and editing images, video, animation and more. In their latest research, supported by a grant from the National Science Foundation, they set out to generate the movement of animated characters from text. "The goal was to make animated characters move using a command written in natural language. We create a skeleton of a figure, dress a body on it, and describe how it should move, for example walk, run, jump, dance and raise its hands: a sequence of movements," explains Prof. Cohen-Or.

In effect, the researchers link a language model (English) to an animation model. Prof. Cohen-Or says: "We write text for each movement and action, and also for sequences of movements and actions, and the neural networks realize it in the animation."

The animation technology the researchers developed is based on a neural-network model whose input is text describing a movement and whose output is an animated character executing what the text says. It is a diffusion model: a relatively new type of neural network that starts from Gaussian white noise (a random signal) and gradually cleans it until a clean signal describing the movement remains.

"The programs we built take the noise we feed them, clean it, and turn it into movement. While cleaning the noise we also supply a text input, and the neural networks then do what the text says. That is, we introduce noise so that we can later turn it into movement. In the experiments we conducted we saw this technology work; we watched the animation produced by the neural-network software we developed. Today, animators either draw every movement and action of an animated character by hand, or capture the movements of a human and digitize them (for example, they record actors' movements with sensors and translate them into digital code, which is then used to give the computerized characters realistic movements resembling the actors'). We managed to save this entire process," Prof. Cohen-Or concludes.
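The described process, starting from pure noise and repeatedly cleaning it under the guidance of a text prompt, can be caricatured in a few lines. This is a toy sketch, not the researchers' model: the prompts, the target "motion" signals, and the fixed cleaning rate are all invented for illustration, and a real diffusion model uses a learned neural denoiser conditioned on a text embedding rather than a lookup table.

```python
# Toy sketch of the diffusion idea (illustrative only, not the actual system):
# begin with Gaussian noise and repeatedly "clean" it, each step nudging the
# sample toward the signal the text prompt calls for.
import random

TARGETS = {            # hypothetical mapping: text prompt -> 1-D "motion" curve
    "walk": [0.0, 0.5, 1.0, 0.5, 0.0],
    "jump": [0.0, 1.0, 2.0, 1.0, 0.0],
}

def denoise(prompt, steps=50, seed=0):
    rng = random.Random(seed)
    target = TARGETS[prompt]
    x = [rng.gauss(0.0, 1.0) for _ in target]   # start from pure Gaussian noise
    for _ in range(steps):
        # each step removes a fraction of the remaining noise,
        # steered by the prompt's target signal
        x = [xi + 0.2 * (ti - xi) for xi, ti in zip(x, target)]
    return x

motion = denoise("walk")
```

After enough steps the random starting signal converges to the motion curve the prompt selected; in the real system, that output is a full sequence of skeleton poses rather than a toy 1-D curve.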

Life itself:

Prof. Daniel Cohen-Or, married + three (ages 22-27), lives in Hod Hasharon. He once received the "Friend of the Chinese People" award from the Prime Minister of China for his collaboration with researchers in that country. In his free time he likes to run and swim in the sea.

Prof. Daniel Lishchinsky, married + three (ages 23-30), lives in Jerusalem. He has collaborated in the past with Pixar, the studio behind many computer-animated films. He loves travel, photography and music.
