Advances in machine learning, together with progress in areas such as audiovisual production and augmented and virtual reality, are creating a cycle of mutual reinforcement among these technologies.
Building on an idea that has been tackled in many ways over the years, a research team combined these resources to generate animations that follow a motion-capture source while moving in a natural way.
COMPUTER-GENERATED VIDEO DANCES
The attempt to create animations with this premise is not new in itself. The challenge lies in producing results that reach a convincing level of visual credibility.
Addressing one of the main weaknesses in this area to date, a research paper by professionals at Adobe Research and University College London proposes a new approach to learning the dynamic appearance of an actor and synthesizing complex, previously unseen motion sequences.
Loose, irregular garments distort the proportions of the human anatomy and hide part of its movement. Until now this was an obstacle for such models, but the newly presented AI system analyzes the appearance of the person captured on camera in order to represent movement effectively with its machine learning tools.
This tool maintains high-quality, highly plausible visual results. Besides refining some details (edges and silhouettes in general), the system pays particular attention to the appearance of the body in motion, preserving the proportions and temporal coherence of the movements despite the speed and dynamism characteristic of dance.
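Temporal coherence of this kind is commonly obtained by smoothing per-frame pose estimates so the generated body does not jitter between frames. As a minimal illustrative sketch (not the paper's actual method; the function name and array shapes are assumptions), an exponential moving average over 2D keypoints looks like this:

```python
import numpy as np

def smooth_keypoints(frames, alpha=0.7):
    """Exponentially smooth per-frame 2D pose keypoints.

    frames: array of shape (T, K, 2) -- T frames, K keypoints, (x, y) each.
    alpha:  weight given to the previous smoothed estimate
            (higher alpha = smoother but laggier trajectories).
    Returns an array of the same shape with reduced frame-to-frame jitter.
    """
    frames = np.asarray(frames, dtype=float)
    smoothed = np.empty_like(frames)
    smoothed[0] = frames[0]
    for t in range(1, len(frames)):
        smoothed[t] = alpha * smoothed[t - 1] + (1 - alpha) * frames[t]
    return smoothed
```

A simple low-pass filter like this trades a little responsiveness for stability, which matters for fast dance movement; more sophisticated systems learn the smoothing jointly with the rest of the model.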
This video-based appearance synthesis method produces high-quality results that, according to the research team, had not been demonstrated before this study.
On a technical level, the authors of this research observed: "We adopted a StyleGAN-based architecture for the person-specific, video-based motion retargeting task. We introduce a new motion signature that is used to modulate the generator weights to capture dynamic changes in appearance, as well as to smooth out single-frame-based pose estimates to improve temporal consistency. We evaluated our method on a demanding set of videos and have shown that our approach achieves state-of-the-art performance both qualitatively and quantitatively."
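The weight modulation the authors describe resembles the modulated convolution mechanism popularized by StyleGAN2, where a style vector scales the input channels of a convolution kernel and each output filter is then renormalized. The NumPy sketch below shows that general mechanism only; the function name, shapes, and the idea of deriving the scales from a "motion signature" vector are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def modulate_weights(weights, motion_signature, eps=1e-8):
    """StyleGAN2-style weight modulation driven by a motion signature.

    weights:          conv kernel of shape (out_ch, in_ch, kh, kw).
    motion_signature: per-input-channel scales of shape (in_ch,),
                      e.g. produced by a small network from recent poses.
    Each input channel is scaled by the signature (modulation), then each
    output filter is rescaled to unit norm (demodulation) so activation
    magnitudes stay stable while the appearance changes dynamically.
    """
    w = weights * motion_signature[None, :, None, None]          # modulate
    norm = np.sqrt((w ** 2).sum(axis=(1, 2, 3), keepdims=True) + eps)
    return w / norm                                              # demodulate
```

Because the modulation happens in the weights rather than the activations, the same generator can render the actor's appearance differently for every motion state without retraining.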
Previously, we learned about a Google effort related to dance and artificial intelligence: its AI Choreographer project, capable of generating choreography from musical stimuli after training on dance movements.
In the case of this new development from Adobe and UCL, the proposal is closer to that of deepfakes, though with a much more positive and friendly application to audiovisual work than other, more troubling uses seen before.