
AI Model Might Let Game Developers Generate Lifelike Animations


A team of researchers at Electronic Arts has recently experimented with various artificial intelligence algorithms, including reinforcement learning models, to automate aspects of video game creation. The researchers hope that these AI models can save developers and animators time on repetitive tasks like coding character movement.

Designing a video game, particularly one of the large, triple-A titles produced by major game companies, requires thousands of hours of work. As video game consoles, computers, and mobile devices become more powerful, video games themselves become more and more complex. Game developers are searching for ways to produce more game content with less effort. For example, they often use procedural generation algorithms to produce landscapes and environments. Similarly, artificial intelligence algorithms can be used to generate video game levels, automate game testing, and even animate character movements.
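To make the procedural-generation idea concrete, here is a minimal sketch of one classic landscape technique, midpoint displacement: a terrain profile is built by repeatedly inserting a randomly displaced midpoint between neighbouring height samples. The function name, parameters, and values are illustrative, not taken from any particular game engine.

```python
import random

def midpoint_displacement(left, right, depth, roughness=0.5, seed=0):
    """Generate a 1D terrain height profile between two endpoint heights.

    Each pass inserts a jittered midpoint between every pair of neighbouring
    samples; the jitter shrinks by `roughness` each pass, so large features
    appear early and fine detail later.
    """
    rng = random.Random(seed)
    heights = [left, right]
    displacement = 1.0
    for _ in range(depth):
        next_heights = [heights[0]]
        for a, b in zip(heights, heights[1:]):
            mid = (a + b) / 2 + rng.uniform(-displacement, displacement)
            next_heights += [mid, b]
        heights = next_heights
        displacement *= roughness
    return heights

# Starting from 2 samples, each pass roughly doubles the resolution:
# 2 -> 3 -> 5 -> 9 -> 17 samples after 4 passes.
terrain = midpoint_displacement(0.0, 0.0, depth=4)
print(len(terrain))  # 17
```

The same idea extends to 2D (the diamond-square algorithm) for full heightmap terrain, which is one reason techniques like this remain popular for generating game landscapes cheaply.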

Character animations for video games are often completed with the assistance of motion capture systems, which track the movements of real actors to ensure more life-like animations. However, this approach does have limitations. Not only does the code that drives the animations still need to be written, but animators are also limited only to the actions that have been captured.

As Wired reported, researchers from EA set out to automate this process and save both time and money on these animations. The team demonstrated that a reinforcement learning algorithm could be used to create a human model that moves in a realistic fashion, without the need to manually record and code the movements. The research team used “Motion Variational Autoencoders” (Motion VAEs) to identify relevant patterns of movement from motion-capture data. After the autoencoders extracted the movement patterns, a reinforcement learning system was trained on the data, with the goal of creating realistic animations based on certain objectives (such as running after a ball in a soccer game). The planning and control algorithms used by the research team were able to generate the desired motions, even producing motions that weren’t in the original set of motion-capture data. This means that after learning how a subject walks, the reinforcement learning model can determine what running looks like.
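The core mechanism of a variational autoencoder can be sketched compactly: an encoder compresses a high-dimensional pose into a small latent code (a mean and variance), a sample is drawn from that distribution, and a decoder maps the sample back to a full pose. The following toy example shows only that structure; the pose and latent dimensions are invented for illustration, the weights are random stand-ins for trained parameters, and it is not the researchers' actual Motion VAE.

```python
import numpy as np

rng = np.random.default_rng(0)

POSE_DIM = 63    # assumed: e.g. 21 joints x 3 rotation values per frame
LATENT_DIM = 8   # compact code meant to capture a "movement pattern"

# Randomly initialised linear maps stand in for trained networks.
W_enc = rng.normal(scale=0.1, size=(POSE_DIM, 2 * LATENT_DIM))
W_dec = rng.normal(scale=0.1, size=(LATENT_DIM, POSE_DIM))

def encode(pose):
    """Map a pose vector to a latent mean and log-variance."""
    h = pose @ W_enc
    return h[:LATENT_DIM], h[LATENT_DIM:]

def sample(mu, log_var):
    """Reparameterisation trick: draw z ~ N(mu, sigma^2)."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def decode(z):
    """Map a latent code back to a full pose vector."""
    return z @ W_dec

pose = rng.normal(size=POSE_DIM)   # a fake motion-capture frame
mu, log_var = encode(pose)
z = sample(mu, log_var)
reconstruction = decode(z)
print(reconstruction.shape)        # (63,)
```

In the setup the article describes, a reinforcement learning controller would then act in this compact latent space, choosing codes that the decoder turns into plausible poses, which is what lets it produce motions that were never in the capture data.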

Julian Togelius, NYU professor and co-founder of the AI tools company Modl.ai, was quoted by Wired as saying that the technology could be quite useful in the future and is likely to change how content for games is created.

“Procedural animation will be a huge thing. It basically automates a lot of the work that goes into building game content,” Togelius said to Wired.

According to Professor Michiel van de Panne of the University of British Columbia, who was involved with the reinforcement learning project, the research team is looking to take the concept further by animating non-human avatars with the same process. Van de Panne told Wired that although the process of creating new animations can be quite difficult, he is confident the technology will be able to render appealing animations someday.

Other applications of AI in the development of video games include the generation of basic games. For instance, researchers at the University of Toronto managed to design a generative adversarial network that could recreate the game Pac-Man without access to any of the code used to design the game. Elsewhere, researchers from the University of Alberta used AI models to generate video game levels based on the rules of different games like Super Mario Bros. and Mega Man.
