GASP

Real-time animatable Gaussian Splatting avatars from a single image or monocular video, using synthetic priors. CVPR 2025.

GASP produces high-quality, animatable Gaussian Avatars from a single monocular video or image, rendered here from a full 360 degrees at 70fps.

Authors: Jack Saunders, Charlie Hewitt, Yanan Jian, Marek Kowalski, Tadas Baltrušaitis, Yiye Chen, Darren Cosker, Virginia Estellers, Nicholas Gydé, Vinay Namboodiri, Benjamin Lundell

Affiliations: Microsoft, University of Bath

Venue: Computer Vision and Pattern Recognition (CVPR) 2025


Abstract

Gaussian Splatting has transformed real-time photo-realistic rendering. One of its most popular applications is creating animatable Gaussian Avatars. Recent works have pushed the boundaries of quality and rendering efficiency, but suffer from two key limitations: they either require expensive multi-camera rigs to support free-viewpoint rendering, or can be trained from a single camera but render well only from that fixed viewpoint.

We propose GASP: Gaussian Avatars with Synthetic Priors. To overcome the limitations of existing datasets, we exploit the pixel-perfect nature of synthetic data to train a Gaussian Avatar prior. By fitting this prior to a single photo or short video and fine-tuning it, we obtain a high-quality Gaussian Avatar that supports 360-degree rendering. The prior is only required for fitting — not inference — enabling real-time application. Our method produces high-quality, animatable avatars from limited data that can be rendered at 70fps on commercial hardware.
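The two-stage idea in the abstract (fit a synthetic-data prior, then fine-tune so the prior is not needed at inference) can be illustrated with a deliberately simplified toy. Everything below is hypothetical: a linear map stands in for the trained Gaussian Avatar prior, a random vector stands in for the observed photo, and plain gradient descent stands in for the actual fitting and fine-tuning objectives. It only sketches the control flow, not the method itself.

```python
# Toy sketch of a prior-fit-then-fine-tune pipeline (hypothetical simplification,
# not the actual GASP model or objectives).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "prior": maps a low-dimensional identity code to avatar
# parameters (stand-in for a network trained on pixel-perfect synthetic data).
W = rng.standard_normal((16, 4))          # frozen prior weights
prior = lambda z: W @ z                   # identity code -> avatar parameters

target = rng.standard_normal(16)          # stand-in for the observed image/video

# Stage 1: fit the prior -- optimise the identity code z (prior required here).
z = np.zeros(4)
for _ in range(200):
    grad = 2 * W.T @ (prior(z) - target)  # gradient of ||prior(z) - target||^2
    z -= 0.01 * grad

# Stage 2: fine-tune the avatar parameters directly (prior no longer involved).
params = prior(z).copy()
for _ in range(200):
    params -= 0.1 * 2 * (params - target)

# Inference would use only `params`; the prior can be discarded, which is what
# makes real-time rendering possible in the scheme described above.
residual = float(np.linalg.norm(params - target))
print(f"fit residual: {residual:.4f}")
```

The point of the sketch is the asymmetry: the (expensive) prior participates only in stage 1, while stage 2 leaves a free-standing set of parameters that is cheap to evaluate on its own.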


Video


Citation

@inproceedings{saunders2025gasp,
  title={{GASP}: Gaussian Avatars with Synthetic Priors},
  author={Saunders, Jack and Hewitt, Charlie and Jian, Yanan and Kowalski, Marek and Baltru\v{s}aitis, Tadas and Chen, Yiye and Cosker, Darren and Estellers, Virginia and Gyd{\'e}, Nicholas and Namboodiri, Vinay P and others},
  booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
  pages={271--280},
  month={June},
  year={2025}
}