STAvatar: Soft Binding and Temporal Density Control for Monocular 3D Head Avatars Reconstruction

Jiankuo Zhao1,2, Xiangyu Zhu1,2, Zidu Wang1,2, Zhen Lei1,2,3
1MAIS, Institute of Automation, Chinese Academy of Sciences
2School of Artificial Intelligence, University of Chinese Academy of Sciences
3CAIR, HKISI, Chinese Academy of Sciences

Abstract


Reconstructing high-fidelity and animatable 3D head avatars from monocular videos remains a challenging yet essential task. Existing methods based on 3D Gaussian Splatting typically bind Gaussians to mesh triangles and model deformations solely via Linear Blend Skinning, which results in rigid motion and limited expressiveness. Moreover, they lack specialized strategies to handle frequently occluded regions (e.g., mouth interiors, eyelids). To address these limitations, we propose STAvatar, which consists of two key components: (1) a UV-Adaptive Soft Binding framework that leverages both image-based and geometric priors to learn per-Gaussian feature offsets within the UV space. This UV representation supports dynamic resampling, ensuring full compatibility with Adaptive Density Control (ADC) and enhanced adaptability to shape and textural variations. (2) a Temporal ADC strategy, which first clusters structurally similar frames to facilitate more targeted computation of the densification criterion. It further introduces a novel fused perceptual error as the clone criterion to jointly capture geometric and textural discrepancies, encouraging densification in regions requiring finer details. Extensive experiments on four benchmark datasets demonstrate that STAvatar achieves state-of-the-art reconstruction performance, especially in capturing fine-grained details and reconstructing frequently occluded regions. The code will be made publicly available.
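In symbols (a minimal sketch using the notation of the method overview below; the sampling operator \( \mathcal{S} \) and the purely additive composition are our assumptions), the soft binding amounts to sampling a per-Gaussian offset from the predicted UV-space feature offset map and adding it to the coarse, LBS-driven estimate:

\[ \delta_i = \mathcal{S}\!\left(F_{\Delta},\, u_i\right), \qquad \theta_i^{*} = \tilde{\theta}_i + \delta_i, \]

where \( F_{\Delta} \) denotes the feature offset map predicted in UV space, \( u_i \) the UV coordinate of Gaussian \( g_i \), \( \tilde{\theta}_i \) the coarsely estimated parameters, and \( \theta_i^{*} \) the final parameters used for splatting.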

Method


Overview of STAvatar. (a) In addition to a fixed identity reference image and its UV position map, we further rasterize the vertex offsets between the reference mesh and the control mesh to obtain a UV displacement map as input. (b) We construct a dual-branch network to predict a feature offset map in UV space, from which an offset \( \delta_i \) is sampled for each Gaussian \( g_i \). This offset is added to the coarsely estimated parameters \( \tilde{\theta} \) to obtain the final parameters \( \theta^* \). The final images are then rendered using Gaussian Splatting. (c) We first construct a perceptual error map by combining the \( \mathcal{L}_1 \) and \( \mathcal{L}_{\mathrm{d-ssim}} \) maps. Then, we estimate the 2D projection of each Gaussian \( g_i \) using the recorded attributes, based on which the fused perceptual error is computed.
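As a concrete illustration of (b) and (c), the PyTorch sketch below shows how a per-Gaussian offset could be bilinearly sampled from the predicted UV feature offset map, and how a fused error value could be read per Gaussian from the combined \( \mathcal{L}_1 \)/\( \mathcal{L}_{\mathrm{d-ssim}} \) error map. The function names, the fusion weight `lam`, and the center-only error readout are illustrative assumptions, not the paper's exact implementation, which aggregates the error over each Gaussian's full 2D footprint.

```python
# Minimal sketch of (b) UV offset sampling and (c) fused perceptual error (assumptions noted above).
import torch
import torch.nn.functional as F


def sample_gaussian_offsets(offset_map: torch.Tensor, uv_coords: torch.Tensor) -> torch.Tensor:
    """Bilinearly sample a feature offset delta_i for each Gaussian from the UV offset map.

    offset_map: (1, C, H, W) feature offset map predicted by the dual-branch network.
    uv_coords:  (N, 2) per-Gaussian UV coordinates in [0, 1] (any v-flip convention assumed handled upstream).
    returns:    (N, C) offsets, later split per attribute and added to the coarse parameters.
    """
    grid = uv_coords.view(1, -1, 1, 2) * 2.0 - 1.0            # map [0, 1] -> [-1, 1] for grid_sample
    sampled = F.grid_sample(offset_map, grid, mode="bilinear", align_corners=True)
    return sampled.squeeze(-1).squeeze(0).transpose(0, 1)      # (1, C, N, 1) -> (N, C)


def fused_perceptual_error(err_l1: torch.Tensor, err_dssim: torch.Tensor,
                           centers_2d: torch.Tensor, lam: float = 0.5) -> torch.Tensor:
    """Fuse per-pixel L1 and D-SSIM error maps and read an error value per Gaussian.

    err_l1, err_dssim: (H, W) per-pixel error maps between rendered and ground-truth images.
    centers_2d:        (N, 2) projected Gaussian centers in pixel (x, y) coordinates.
    """
    h, w = err_l1.shape
    fused = lam * err_l1 + (1.0 - lam) * err_dssim             # combined perceptual error map
    px = centers_2d.round().long()
    x = px[:, 0].clamp(0, w - 1)
    y = px[:, 1].clamp(0, h - 1)
    return fused[y, x]                                          # (N,) error per Gaussian (center readout only)
```

In the full method, this fused error would be accumulated over each Gaussian's projected footprint and across the frames of a cluster before being used as the clone criterion.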

Additional Method Diagram

FLAME-Conditioned Temporal Clustering. We cluster the video frames into K clusters and conduct ADC separately within each cluster during training.
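A minimal sketch of how such FLAME-conditioned clustering could be done, assuming scikit-learn's KMeans and that each frame is described by its FLAME expression and pose coefficients; the frame descriptor and the value of K are assumptions rather than the paper's exact choices.

```python
# Sketch of FLAME-conditioned temporal clustering (frame descriptor and K are assumptions).
import numpy as np
from sklearn.cluster import KMeans


def cluster_frames(flame_params: np.ndarray, k: int = 10) -> np.ndarray:
    """Group structurally similar frames.

    flame_params: (T, D) per-frame FLAME expression/pose coefficients.
    returns:      (T,) cluster label for each frame.
    """
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(flame_params)


# During optimization, densification statistics (e.g. the fused perceptual error)
# would be accumulated over the frames of the current cluster before clone/split decisions.
```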

Qualitative Results


Qualitative results of head avatar reconstruction. Our method recovers finer details and delicate structures such as wrinkles and teeth. Moreover, it produces clearer results in challenging regions like mouth interiors and eyelids.


Qualitative results of cross-identity reenactment. Our method accurately animates the source avatar performing expressions such as smiling and eye-closing.

Quantitative Results

Table 1. Quantitative comparison with state-of-the-art methods on four benchmark datasets.

As shown in Table 1, our method outperforms existing state-of-the-art approaches on most datasets and metrics, achieving notable improvements in reconstruction quality. In particular, it attains the highest SSIM scores and the lowest LPIPS values on all four datasets, demonstrating its ability to deliver both geometric accuracy and perceptual fidelity.