国家明智 posted on 2025-3-30 08:45:16

http://reply.papertrans.cn/25/2424/242320/242320_51.png

出来 posted on 2025-3-30 15:07:46

Lecture Notes in Computer Science
http://image.papertrans.cn/d/image/242320.jpg

从容 posted on 2025-3-30 19:57:21

Expressive Whole-Body 3D Gaussian Avatar: …noticeable artifacts under novel motions. To address them, we introduce our hybrid representation of the mesh and 3D Gaussians. Our hybrid representation treats each 3D Gaussian as a vertex on the surface with pre-defined connectivity information (i.e., triangle faces) between them, following the mesh t…
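
The snippet above describes treating each 3D Gaussian as a mesh vertex with pre-defined triangle connectivity. A minimal sketch of what that connectivity buys you, assuming a toy mesh and a simple edge-length regularizer (all names here are illustrative, not the paper's actual code):

```python
# Hypothetical sketch: Gaussian centers stored as mesh vertices with
# fixed triangle faces, so topology-aware regularizers (e.g., edge
# lengths between connected Gaussians) can be computed.
import numpy as np

def edge_lengths(vertices, faces):
    """Per-edge lengths implied by triangle faces; each face (i, j, k)
    contributes edges (i, j), (j, k), (k, i)."""
    edges = np.concatenate(
        [faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]]
    )
    diffs = vertices[edges[:, 0]] - vertices[edges[:, 1]]
    return np.linalg.norm(diffs, axis=1)

# Toy example: 4 Gaussian centers on a tetrahedron-like surface patch.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
faces = np.array([[0, 1, 2], [0, 1, 3]])
lengths = edge_lengths(verts, faces)  # one length per face edge
```

Penalizing deviation of these lengths from a template is one common way such a hybrid representation keeps Gaussians tied to the underlying surface under novel motions.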

有权 posted on 2025-3-30 22:48:30

Controllable Human-Object Interaction Synthesis: …contact. To overcome these problems, we introduce an object geometry loss as additional supervision to improve the matching between generated object motion and input object waypoints; we also design guidance terms to enforce contact constraints during the sampling process of the trained diffusion m…
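
The snippet above mentions guidance terms that enforce contact constraints during diffusion sampling. A minimal sketch of the general guidance idea, assuming a toy quadratic contact cost and placeholder quantities (this is classifier-guidance-style nudging, not the paper's actual sampler):

```python
# Hypothetical sketch: after each denoising step, nudge the sample
# down the gradient of a contact cost. `contact_cost` and its target
# are stand-ins for real hand-object contact terms.
import numpy as np

def contact_cost(x, target):
    """Toy cost: squared distance between a predicted contact point
    and a desired contact location."""
    return float(np.sum((x - target) ** 2))

def guided_step(x, target, step_scale=0.1):
    """One guidance update: move x against the cost gradient
    (analytic gradient of the toy quadratic cost)."""
    grad = 2.0 * (x - target)
    return x - step_scale * grad

x = np.array([1.0, 1.0, 1.0])   # stand-in for a partially denoised sample
target = np.zeros(3)            # stand-in for the desired contact point
for _ in range(50):             # repeated guidance drives the cost down
    x = guided_step(x, target)
```

In an actual diffusion sampler this update would be interleaved with the learned denoising steps, so generated motion is steered toward satisfying the contact constraint without retraining the model.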

微枝末节 posted on 2025-3-31 04:52:42

PAV: Personalized Head Avatar from Unstructured Video Collection: …NeRF framework to model appearance and shape variations in a single unified network for multiple appearances of the same subject. We demonstrate experimentally that PAV outperforms the baseline method in terms of visual rendering quality in our quantitative and qualitative studies on various subjects.

上下倒置 posted on 2025-3-31 08:55:36

Strike a Balance in Continual Panoptic Segmentation: …annotated only for the classes of their original step, we devise balanced anti-misguidance losses, which combat the impact of incomplete annotations without incurring classification bias. Building upon these innovations, we present a new method named Balanced Continual Panoptic Segmentation (BalConpa…

称赞 posted on 2025-3-31 11:34:51

http://reply.papertrans.cn/25/2424/242320/242320_57.png

dandruff posted on 2025-3-31 15:41:33

http://reply.papertrans.cn/25/2424/242320/242320_58.png

critique posted on 2025-3-31 20:37:04

UniTalker: Scaling up Audio-Driven 3D Facial Animation Through A Unified Model: …, typically less than 1 h, to 18.5 h. With a single trained UniTalker model, we achieve substantial lip vertex error reductions of 9.2% on the BIWI dataset and 13.7% on Vocaset. Additionally, the pre-trained UniTalker shows promise as a foundation model for audio-driven facial animation tasks. Fi…

钉牢 posted on 2025-3-31 21:51:05

http://reply.papertrans.cn/25/2424/242320/242320_60.png
View full version: Titlebook: Computer Vision – ECCV 2024; 18th European Confer Aleš Leonardis,Elisa Ricci,Gül Varol Conference proceedings 2025 The Editor(s) (if applic