乳汁 posted on 2025-4-1 01:57:55

Learn to Preserve and Diversify: Parameter-Efficient Group with Orthogonal Regularization for Domain Generalization, …Parameter-Efficient Group with Orthogonal regularization (PEGO) for vision transformers, which effectively preserves the generalization ability of the pre-trained network and learns more diverse knowledge compared with conventional PEFT. Specifically, we inject a group of trainable Low-Rank Adaptation (LoRA…
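The snippet above only names the mechanism, so here is a minimal, hypothetical PyTorch sketch of the general idea: attach a group of trainable LoRA modules to a frozen linear layer and penalize overlap between their low-rank updates. The class and function names, the number of adapters, the rank, and the exact form of the penalty are assumptions for illustration, not the PEGO reference implementation.

# Hypothetical sketch: a group of LoRA adapters on a frozen linear layer, plus an
# orthogonality-style diversity penalty between the adapters.  Not the paper's code.
import torch
import torch.nn as nn

class LoRAGroupLinear(nn.Module):
    def __init__(self, base: nn.Linear, num_adapters: int = 4, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # keep the pre-trained weights frozen
            p.requires_grad_(False)
        in_f, out_f = base.in_features, base.out_features
        self.A = nn.ParameterList(
            [nn.Parameter(torch.randn(rank, in_f) * 0.01) for _ in range(num_adapters)]
        )
        self.B = nn.ParameterList(
            [nn.Parameter(torch.zeros(out_f, rank)) for _ in range(num_adapters)]
        )
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.base(x)                        # frozen pre-trained path
        for A, B in zip(self.A, self.B):          # add every low-rank update in the group
            out = out + (x @ A.t() @ B.t()) * self.scaling
        return out

    def orthogonal_penalty(self) -> torch.Tensor:
        # One plausible diversity regularizer (an assumption): penalize pairwise similarity
        # between the flattened low-rank updates delta_i = B_i A_i, pushing adapters apart.
        deltas = [(B @ A).flatten() for A, B in zip(self.A, self.B)]
        loss = torch.zeros((), device=deltas[0].device)
        for i in range(len(deltas)):
            for j in range(i + 1, len(deltas)):
                loss = loss + (deltas[i] @ deltas[j]).abs()
        return loss

# Usage sketch: wrap a ViT projection layer and add the penalty to the task loss.
layer = LoRAGroupLinear(nn.Linear(768, 768))
x = torch.randn(2, 197, 768)
y = layer(x)
reg = 1e-3 * layer.orthogonal_penalty()

Freezing the base weights preserves the pre-trained behaviour, while the penalty is meant to keep the adapters from learning redundant updates; the weighting (1e-3 here) is an arbitrary placeholder.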

Mechanics posted on 2025-4-1 21:45:23

PointRegGPT: Boosting 3D Point Cloud Registration Using Generative Point-Cloud Pairs for Training, …point cloud registration. When equipped with our approach, several recent algorithms can improve their performance significantly and achieve SOTA consistently on two common benchmarks. The code and dataset will be released on …

doxazosin posted on 2025-4-2 02:10:12

General Geometry-Aware Weakly Supervised 3D Object Detection, …es on the image plane, and … to build a Point-to-Box alignment loss to further refine the pose of estimated 3D boxes. Experiments on the KITTI and SUN-RGBD datasets demonstrate that our method yields surprisingly high-quality 3D bounding boxes with only 2D annotation. The source code is available at …
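The fragment mentions a Point-to-Box alignment loss used to refine estimated 3D boxes. Below is a generic, hypothetical sketch of what such a loss could look like for an axis-aligned box, where points assigned to an object pull the box toward covering them; the function name, box parameterization, and weighting are assumptions, not the paper's formulation.

# Hypothetical point-to-box alignment loss: encourage an (axis-aligned, for simplicity)
# 3D box to cover its assigned points tightly.  Illustrative only, not the paper's loss.
import torch

def point_to_box_alignment_loss(points: torch.Tensor, box: torch.Tensor) -> torch.Tensor:
    """points: (N, 3) points assigned to one object; box: (6,) = (cx, cy, cz, l, w, h)."""
    center, size = box[:3], box[3:]
    half = size / 2
    lower, upper = center - half, center + half
    # Distance of each point to the box volume (zero if the point lies inside).
    outside = torch.clamp(lower - points, min=0) + torch.clamp(points - upper, min=0)
    cover = outside.norm(dim=1).mean()            # pull the box toward uncovered points
    tight = size.sum()                            # discourage arbitrarily large boxes
    return cover + 0.01 * tight

# Usage sketch: refine a box prediction by gradient descent on the alignment loss.
pts = torch.randn(128, 3) * 0.5 + torch.tensor([10.0, 2.0, -1.0])
box = torch.tensor([9.5, 2.2, -1.1, 2.0, 2.0, 2.0], requires_grad=True)
opt = torch.optim.SGD([box], lr=0.05)
for _ in range(50):
    opt.zero_grad()
    loss = point_to_box_alignment_loss(pts, box)
    loss.backward()
    opt.step()

The coverage term alone would favour oversized boxes, which is why a size penalty (with an arbitrary 0.01 weight here) is included; a method working from 2D annotations would additionally constrain the box by its projection on the image plane.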

addition posted on 2025-4-2 03:12:47

Long-CLIP: Unlocking the Long-Text Capability of CLIP, …this goal is far from straightforward, as simplistic fine-tuning can result in a significant degradation of CLIP’s performance. Moreover, substituting the text encoder with a language model supporting longer contexts necessitates pretraining with vast amounts of data, incurring significant expenses. A…
View full version: Titlebook: Computer Vision – ECCV 2024; 18th European Conference; Aleš Leonardis, Elisa Ricci, Gül Varol; Conference proceedings 2025; The Editor(s) (if applicable)…