Reallusion Enhances Digital Characters with NVIDIA AI Integration

Alvin Lang   Jun 11, 2024 09:45


Reallusion is revolutionizing the creation of digital characters by integrating advanced AI technologies from NVIDIA, according to the NVIDIA Technical Blog. This collaboration is set to transform animation workflows for filmmakers, game developers, and content creators.

AI-Powered Animation with Audio2Face

Reallusion utilizes NVIDIA's Audio2Face technology, which automatically generates expressive facial animations and lip-syncing from audio or text inputs. Supporting multiple languages, Audio2Face can animate characters speaking or singing, making it a versatile tool for animators. The latest standalone release also includes functionality for animating realistic facial expressions, with slider and keyframe controls available for detailed adjustments.

Integrated into Reallusion’s Character Creator and iClone applications, Audio2Face enables a seamless AI-assisted animation workflow. Users can prepare an asset for animation with a single click, generating live facial movements that match any supplied voice track. The resulting animation can then be refined in iClone before being rendered for use in various production environments.

Streamlined Animation Workflow

The collaboration between NVIDIA and Reallusion has produced the CC Character Auto Setup plugin, which condenses a previously cumbersome 18-step process into a single operation. Users can import a Character Creator asset and select a training model to bring 3D characters to life with lifelike facial animations synced to any audio input. Additional performance sculpting can be done with Audio2Face’s motion sliders and keyframe controls before final production refinements in iClone.

iClone provides granular control over every aspect of facial animation, from expression levels to head motions and simulated eye movements, allowing animators to authentically convey a character’s personality. The software can also incorporate head movements sourced from facial mocap tools such as AccuFACE or the iPhone Live Face app.

AccuFACE: Next-Gen AI Face Mocap

AccuFACE, powered by the NVIDIA Maxine AR SDK, delivers real-time facial capture with professional-grade quality. Utilizing NVIDIA GPUs with Tensor Cores, the Maxine AR SDK provides AI-driven 3D facial tracking, body pose estimation, and more. AccuFACE translates the captured facial data into seamless digital animation, generating expressive facial animations and responsive 3D avatars in real time.
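To make the SDK side of this concrete, here is a minimal C++ sketch of per-frame facial landmark tracking following the Maxine AR SDK’s published NvAR_* pattern. It is illustrative only, not AccuFACE’s code: the model directory path, image size, and 126-point landmark count are assumptions based on the public SDK samples and may differ between SDK releases.

```cpp
// Minimal facial-landmark tracking sketch using the NVIDIA Maxine AR SDK.
// Assumes the AR SDK headers and redistributable models are installed.
#include <cstdio>
#include <vector>
#include "nvAR.h"
#include "nvAR_defs.h"
#include "nvCVImage.h"

int main() {
  CUstream stream = nullptr;
  NvAR_CudaStreamCreate(&stream);

  // Create and configure the landmark-detection feature.
  NvAR_FeatureHandle landmarks{};
  NvAR_Create(NvAR_Feature_LandmarkDetection, &landmarks);
  // Model directory is an assumption; point it at your SDK install.
  NvAR_SetString(landmarks, NvAR_Parameter_Config(ModelDir),
                 "C:/Program Files/NVIDIA Corporation/NVIDIA AR SDK/models");
  NvAR_SetCudaStream(landmarks, NvAR_Parameter_Config(CUDAStream), stream);
  NvAR_Load(landmarks);

  // GPU input image; a real capture app would fill this from a webcam each frame.
  NvCVImage frame{};
  NvCVImage_Alloc(&frame, 1280, 720, NVCV_BGR, NVCV_U8, NVCV_CHUNKY, NVCV_GPU, 1);
  NvAR_SetObject(landmarks, NvAR_Parameter_Input(Image), &frame, sizeof(NvCVImage));

  // Output buffer: 126 2D landmarks, the default count in the SDK samples (assumption).
  std::vector<NvAR_Point2f> points(126);
  NvAR_SetObject(landmarks, NvAR_Parameter_Output(Landmarks),
                 points.data(), sizeof(NvAR_Point2f));

  // One inference pass; run once per video frame in a live capture loop.
  if (NvAR_Run(landmarks) == NVCV_SUCCESS)
    std::printf("first landmark: (%f, %f)\n", points[0].x, points[0].y);

  NvCVImage_Dealloc(&frame);
  NvAR_Destroy(landmarks);
  NvAR_CudaStreamDestroy(stream);
  return 0;
}
```

Per-frame landmark positions like these are the raw material that a tool such as AccuFACE retargets onto a character’s facial rig.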

Key features of AccuFACE include precise landmark mapping, head pose and deformation tracking, facial mesh reconstruction, and robust face detection and localization. These features allow for the capture of nuanced facial expressions, essential for conveying emotions and enhancing the authenticity of digital characters.

AccuFACE also offers tools to refine AI-generated tracking for professional-grade results. Device settings such as smooth filtering and denoising address tracking artifacts, while anti-interference cancellation prevents erroneous cross-triggering of facial movements. Further calibration and refinement can be applied to capture distinct expressions and deliver authentic performances.
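As a conceptual illustration of what a smoothing filter does to raw tracking data (not AccuFACE’s actual implementation, which is configured through its device settings rather than code), a simple exponential moving average blends each new landmark position with its history to damp frame-to-frame jitter:

```cpp
// Conceptual smoothing sketch: exponential moving average over 2D landmarks.
#include <vector>

struct Point2f { float x, y; };

// alpha near 1.0 follows the raw track closely; lower values damp jitter more.
void smoothLandmarks(std::vector<Point2f>& smoothed,
                     const std::vector<Point2f>& raw,
                     float alpha = 0.5f) {
  if (smoothed.size() != raw.size()) { smoothed = raw; return; }  // first frame
  for (size_t i = 0; i < raw.size(); ++i) {
    smoothed[i].x = alpha * raw[i].x + (1.0f - alpha) * smoothed[i].x;
    smoothed[i].y = alpha * raw[i].y + (1.0f - alpha) * smoothed[i].y;
  }
}
```

The trade-off is responsiveness versus stability: heavier smoothing removes noise but can soften fast, subtle expressions, which is why such filters are exposed as adjustable settings.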

Availability

NVIDIA Maxine offers high-quality video communications and AI technology for professionals. The latest Maxine production release is included with NVIDIA AI Enterprise, providing access to production-ready features and enterprise support. For early access to new features, users can join the Maxine Early Access program.

Reallusion’s partnership with NVIDIA demonstrates the transformative potential of AI in animation, making professional-grade facial motion capture and animation accessible to a broader audience. This advancement allows animators to achieve high-quality results without extensive expertise or specialized equipment, revolutionizing digital character animation.

For more information, visit the NVIDIA Technical Blog.