December 28, 2019

422 words 2 mins read

Using AI to create interactive digital actors

Talk Title Using AI to create interactive digital actors
Speakers Kevin He (DeepMotion)
Conference O’Reilly Artificial Intelligence Conference
Conf Tag Put AI to Work
Location New York, New York
Date April 16-18, 2019
URL Talk Page
Slides Talk Slides

Digital character interaction is hard to fake, whether it's between two characters, between users and characters, or between a character and its environment. Nevertheless, interaction is central to building immersive XR experiences, robotic simulation, and user-driven entertainment. Kevin He explains how to use physical simulation and machine learning to create interactive character technology.

You'll explore interactive motion and the physical underpinnings needed to build reactive characters: starting from a physics engine optimized for joint articulation that can dynamically solve for multiple connected rigid bodies, then building a standard ragdoll, giving the ragdoll musculature and posture, and eventually solving for basic locomotion by controlling the simulated joints and muscles. Kevin covers applications of this real-time physics simulation for characters in XR, from dynamic full-body VR avatars to interactive NPCs and pets to fully interactive AR avatars and emojis.

Kevin then explains how to apply an AI model to a physically simulated character to generate interactive motion intelligence: character controllers are given sample motion data and learn motor skills by reverse engineering the motion in terms of the simulated musculature, while optimizing additional reward functions that keep the motion natural and flexible. The character can then reproduce the behavior in real-time physical simulation, transforming static animation data into a flexible, interactive skill. This is the first step in building a digital cerebellum.

You'll learn how to build on this foundation to create a "motion brain": a second level of training stitches multiple learned behaviors into a control graph, allowing seamless blending between learned motor skills in a variety of sequences.

Kevin concludes with a look at advanced applications of motion intelligence and explains how to turn what you've built into an applied AI tool for developers. You'll see how to build a basketball-playing, physically simulated character (as published at SIGGRAPH 2018) using deep reinforcement learning; the character controls a ball in 3D space with advanced ball-control skills.
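
To make the "musculature and posture" step concrete, here is a minimal sketch of proportional-derivative (PD) joint control, a standard way to give a simulated skeleton muscles that drive each joint toward a target pose. This is an illustrative toy under stated assumptions, not DeepMotion's engine; the `PDJoint` class, the gain values, and the joint names are all made up for the example.

```python
import numpy as np

class PDJoint:
    """One articulated joint driven toward a target angle by a
    proportional-derivative (PD) controller, a simple stand-in
    for the 'muscle' acting on a ragdoll joint."""

    def __init__(self, kp=300.0, kd=30.0, torque_limit=200.0):
        self.kp = kp                      # stiffness gain
        self.kd = kd                      # damping gain
        self.torque_limit = torque_limit  # physically plausible clamp

    def torque(self, angle, velocity, target_angle, target_velocity=0.0):
        # Torque proportional to position error, damped by velocity error.
        tau = (self.kp * (target_angle - angle)
               + self.kd * (target_velocity - velocity))
        return float(np.clip(tau, -self.torque_limit, self.torque_limit))

# A posture is just a target angle per joint; the physics engine then
# integrates the resulting torques on the connected rigid bodies.
ragdoll = {"hip": PDJoint(), "knee": PDJoint(kp=200.0), "ankle": PDJoint(kp=100.0)}
posture = {"hip": 0.4, "knee": -0.8, "ankle": 0.2}    # radians

state = {name: (0.0, 0.0) for name in ragdoll}        # (angle, velocity)
torques = {name: joint.torque(*state[name], posture[name])
           for name, joint in ragdoll.items()}
print(torques)
```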
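The talk mentions learning motor skills from sample motion data while "optimizing additional reward functions," but doesn't spell the rewards out. Imitation-learning work in this area (for example DeepMimic, also from SIGGRAPH 2018) typically combines pose and velocity tracking with an effort penalty; the weights and exponential scales below are illustrative guesses, not the talk's actual values.

```python
import numpy as np

def imitation_reward(sim_pose, ref_pose, sim_vel, ref_vel,
                     w_pose=0.7, w_vel=0.2, w_effort=0.1, torques=None):
    """Reward a simulated character for reproducing reference motion data.

    The pose term 'reverse engineers' the animation in terms of the
    simulated musculature; the extra terms are the kind of reward
    shaping that keeps the learned motion natural and flexible.
    """
    pose_err = np.sum((sim_pose - ref_pose) ** 2)   # joint-angle tracking
    vel_err = np.sum((sim_vel - ref_vel) ** 2)      # joint-velocity tracking
    effort = np.sum(np.square(torques)) if torques is not None else 0.0

    return (w_pose * np.exp(-2.0 * pose_err)
            + w_vel * np.exp(-0.1 * vel_err)
            + w_effort * np.exp(-0.005 * effort))   # penalize wasted torque

# Perfect tracking with zero effort yields the maximum reward of 1.0.
pose = np.array([0.4, -0.8, 0.2])
print(imitation_reward(pose, pose, np.zeros(3), np.zeros(3)))
```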
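One way to picture the "motion brain" is as a graph whose nodes are individually learned skill policies and whose edges are the transitions that the second stage of training has learned to blend. The skill names and graph structure below are purely illustrative, not taken from the talk.

```python
# A toy control graph over learned motor skills. Each node is a trained
# skill policy; each edge is a transition the second-stage training has
# learned to blend seamlessly.
CONTROL_GRAPH = {
    "idle": ["walk", "wave"],
    "walk": ["idle", "run"],
    "run":  ["walk", "jump"],
    "jump": ["run", "idle"],
    "wave": ["idle"],
}

def next_skill(current, requested):
    """Take the requested transition if the graph has learned it;
    otherwise keep executing the current skill."""
    return requested if requested in CONTROL_GRAPH.get(current, []) else current

# Drive a sequence of user requests through the graph.
skill = "idle"
for request in ["walk", "run", "jump", "wave"]:
    skill = next_skill(skill, request)
    print(request, "->", skill)   # the final "wave" is refused: no learned
                                  # transition from "jump", so "jump" continues
```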
