GaussianStyle: Efficient Monocular Video Style Avatar through Deferred Neural Rendering
Under Review
Pinxin Liu
University of Rochester
Luchuan Song
University of Rochester
Daoan Zhang
University of Rochester
Hang Hua
University of Rochester
Yunlong Tang
University of Rochester
Huaijin Tu
Georgia Institute of Technology
Jiebo Luo
University of Rochester
Chenliang Xu
University of Rochester
Abstract
Existing methods like Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS) have made significant strides in facial attribute control such as facial animation and component editing, yet they struggle with fine-grained representation and scalability in dynamic head modeling. To address these limitations, we propose GaussianStyle, a novel framework that integrates the volumetric strengths of 3DGS with the powerful implicit representation of StyleGAN. GaussianStyle preserves structural information, such as expressions and poses, through Gaussian points, while projecting the implicit volumetric representation into StyleGAN to capture high-frequency details and mitigate the over-smoothing commonly observed in neural texture rendering. Experiments show that our method achieves state-of-the-art performance in reenactment, novel view synthesis, and animation.
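The deferred-rendering idea in the abstract can be illustrated with a minimal PyTorch sketch: per-Gaussian features are first splatted into a 2D feature map (the deferred step), and a style-modulated decoder then translates that feature map into RGB pixels, supplying the high-frequency detail that plain neural texture rendering smooths away. Everything below (StyleDecoder, to_style, the channel counts) is a hypothetical simplification for illustration, not the authors' implementation.

import torch
import torch.nn as nn

class StyleDecoder(nn.Module):
    """Hypothetical style-modulated decoder: upsamples a rasterized
    Gaussian feature map into an RGB frame, conditioning on a latent
    style code (a stand-in for StyleGAN's modulated convolutions)."""
    def __init__(self, feat_ch=32, style_dim=512):
        super().__init__()
        self.to_style = nn.Linear(style_dim, feat_ch)
        self.blocks = nn.Sequential(
            nn.Conv2d(feat_ch, 64, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Upsample(scale_factor=2, mode="bilinear"),
            nn.Conv2d(64, 32, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 3, 1),  # feature-to-RGB translation
        )

    def forward(self, feat_map, w):
        # Channel-wise modulation by the style code; real StyleGAN layers
        # modulate and demodulate the convolution weights instead.
        scale = self.to_style(w).unsqueeze(-1).unsqueeze(-1)
        return self.blocks(feat_map * (1 + scale))

# Deferred neural rendering, sketched: splat per-Gaussian features into a
# 2D feature map first (faked here with random numbers), then translate
# features into pixels with the decoder.
feat_map = torch.randn(1, 32, 128, 128)  # stand-in for the 3DGS feature rasterization
w = torch.randn(1, 512)                  # per-frame latent (e.g., expression/pose conditioned)
rgb = StyleDecoder()(feat_map, w)        # -> (1, 3, 256, 256)

In the actual method, the feature map would come from rasterizing the optimized 3D Gaussians, so structural information (expression, pose) is carried by the points while the decoder recovers fine texture.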
BibTeX
@inproceedings{liu2024gaussianstylegaussianheadavatar,
  title={GaussianStyle: Gaussian Head Avatar via StyleGAN},
  author={Pinxin Liu and Luchuan Song and Daoan Zhang and Hang Hua and Yunlong Tang and Huaijin Tu and Jiebo Luo and Chenliang Xu},
  booktitle={3DV},
  year={2024}
}