FACTS: Facial Animation Creation using the Transfer of Styles
This post was written by Claude 4.6 as a summary of Jack Saunders’ paper.
In animation, capturing someone’s style — the way they raise an eyebrow, the subtle idiosyncrasies of their smile — is just as important as getting the geometry right. Current tools either require expensive performance capture or manual re-animation by artists. FACTS offers an automated alternative.
The core idea is style transfer for 3D facial animations. Given an existing animation, FACTS can modify its emotional character or transplant the speaking style of a specific person onto it, without touching the underlying speech content.
How it works:
FACTS uses a StarGAN-based architecture, in which a single generator, conditioned on a target style label, learns mappings between multiple animation styles. The model can convert an animation into different emotional registers (happy, angry, neutral) or into the idiosyncratic style of a target subject.
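To make the conditioning concrete, here is a minimal sketch of style-conditioned generation in the StarGAN spirit: one network maps input frames to any target style by appending a one-hot style code to each frame. The network shape, coefficient count, and style set are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STYLES = 4   # e.g. neutral, happy, angry, plus one subject style (assumed)
N_COEFFS = 52  # per-frame facial animation coefficients (illustrative size)
HIDDEN = 64

# Toy two-layer "generator" weights; a real model would be trained adversarially.
W1 = rng.normal(scale=0.1, size=(N_COEFFS + N_STYLES, HIDDEN))
W2 = rng.normal(scale=0.1, size=(HIDDEN, N_COEFFS))

def generate(frames: np.ndarray, style_id: int) -> np.ndarray:
    """Map (T, N_COEFFS) animation frames to the target style.

    The same weights serve every style; only the one-hot code changes,
    which is the key property of a StarGAN-style multi-domain generator.
    """
    style = np.zeros((frames.shape[0], N_STYLES))
    style[:, style_id] = 1.0  # broadcast the target-style code to every frame
    h = np.tanh(np.concatenate([frames, style], axis=1) @ W1)
    return h @ W2

anim = rng.random((100, N_COEFFS))   # a 100-frame input animation
styled = generate(anim, style_id=2)  # re-style it as, say, "angry"
print(styled.shape)                  # (100, 52)
```

The single-generator design is what lets one trained model cover all emotion and subject styles, instead of needing a separate network per style pair.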
The main technical challenge is preserving lip sync during style transfer — changing expression shouldn’t distort the mouth movements that correspond to speech. FACTS addresses this with a novel viseme-preserving loss, which constrains the transformation to leave speech-relevant mouth shapes intact.
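A loss of this kind can be sketched as a masked reconstruction term: penalize any change to speech-critical mouth coefficients while leaving the rest of the face free to change. The coefficient layout and mouth-region indices below are hypothetical, not taken from the paper.

```python
import numpy as np

N_COEFFS = 52
# Hypothetical indices of the speech-critical mouth/jaw coefficients.
MOUTH_IDX = np.arange(20, 36)

def viseme_preserving_loss(src: np.ndarray, out: np.ndarray) -> float:
    """Mean squared change in mouth-region coefficients during style transfer.

    src, out: (T, N_COEFFS) animations before and after the generator.
    Only mouth-region coefficients enter the loss, so brows, eyes, and
    cheeks can move freely while lip sync is preserved.
    """
    diff = src[:, MOUTH_IDX] - out[:, MOUTH_IDX]
    return float(np.mean(diff ** 2))

src = np.zeros((10, N_COEFFS))
out = src.copy()
out[:, :10] += 0.5  # an expression change outside the mouth region
print(viseme_preserving_loss(src, out))  # 0.0: mouth untouched, no penalty
out[:, 25] += 0.5   # perturb a mouth coefficient -> the loss becomes positive
print(viseme_preserving_loss(src, out))
```

Minimizing this term alongside the adversarial style objective pushes the generator to restyle the face without distorting the mouth shapes that carry the speech.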
The result is a system that lets animators change the emotional or personal character of a performance automatically, with no need for re-capture or manual keyframing.
FACTS was accepted as a Short Paper at Eurographics 2024.