curettage posted on 2025-3-28 18:31:20
http://reply.papertrans.cn/24/2343/234219/234219_41.png

套索 posted on 2025-3-28 20:23:20
http://reply.papertrans.cn/24/2343/234219/234219_42.png

Hamper posted on 2025-3-29 02:57:27
Boundary Content Graph Neural Network for Temporal Action Proposal Generation
…be combined to generate a final high-quality proposal. Experiments are conducted on two mainstream datasets: ActivityNet-1.3 and THUMOS14. Without bells and whistles, BC-GNN outperforms previous state-of-the-art methods on both the temporal action proposal and temporal action detection tasks.
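The fragment above cuts off right where boundary and content predictions are fused into final proposals. As a rough illustration only, here is a minimal numpy sketch of one plausible fusion, assuming a simple multiplicative rule over per-frame start/end probabilities and a content-confidence map; the names (score_proposals, p_start, p_end, p_content) and the fusion rule are my own assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch: fusing boundary and content scores into proposal scores.
# The multiplicative rule and all names are assumptions, not taken from BC-GNN.
import numpy as np

def score_proposals(p_start, p_end, p_content, min_len=1):
    """Score every (start, end) pair from per-frame boundary probabilities
    and a content-confidence map, returning proposals sorted by score."""
    T = len(p_start)
    proposals = []
    for s in range(T):
        for e in range(s + min_len, T):
            score = p_start[s] * p_end[e] * p_content[s, e]  # assumed fusion
            proposals.append((s, e, score))
    proposals.sort(key=lambda x: x[2], reverse=True)
    return proposals

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T = 8
    p_start, p_end = rng.random(T), rng.random(T)
    p_content = rng.random((T, T))
    print(score_proposals(p_start, p_end, p_content)[:3])
```

In practice the fusion would be learned, but the triple product already shows why a proposal needs strong start, end, and content evidence to rank highly.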
泥土谦卑 posted on 2025-3-29 05:33:41

http://reply.papertrans.cn/24/2343/234219/234219_44.png

暴行 posted on 2025-3-29 10:11:16
VLANet: Video-Language Alignment Network for Weakly-Supervised Video Moment Retrieval
…candidate proposals, coarse query representation, and a one-way attention mechanism lead to a blurry attention map, which limits localization performance. To address this issue, the Video-Language Alignment Network (VLANet) is proposed, which learns sharper attention by pruning out spurious candidate proposals…
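The excerpt stops at the pruning step, so here is a small hedged sketch of what "pruning spurious candidate proposals before attention" could look like, assuming cosine similarity between a query embedding and proposal embeddings with a top-k cutoff; the actual VLANet criterion and the names (prune_and_attend, keep_k) are assumptions for illustration.

```python
# Hypothetical sketch of proposal pruning before attention; the cosine-similarity
# criterion and top-k cutoff are assumptions, not VLANet's exact mechanism.
import numpy as np

def prune_and_attend(query_emb, proposal_embs, keep_k=4):
    """Keep the keep_k proposals most similar to the query, then compute
    softmax attention weights over only those survivors."""
    q = query_emb / np.linalg.norm(query_emb)
    P = proposal_embs / np.linalg.norm(proposal_embs, axis=1, keepdims=True)
    sims = P @ q                                   # cosine similarity per proposal
    kept = np.argsort(sims)[-keep_k:]              # prune spurious candidates
    weights = np.exp(sims[kept] - sims[kept].max())
    weights /= weights.sum()                       # sharper attention over fewer proposals
    return kept, weights

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    kept, w = prune_and_attend(rng.random(16), rng.random((32, 16)))
    print(kept, w.round(3))
```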
Aprope posted on 2025-3-29 13:32:03

http://reply.papertrans.cn/24/2343/234219/234219_46.png

TRAWL posted on 2025-3-29 18:37:27
Interpretable Foreground Object Search as Knowledge Distillation
…work. It aims to transfer knowledge from interchangeable foregrounds to supervise representation learning of compatibility. The query feature representation is projected into the same latent space as the interchangeable foregrounds, enabling very efficient and interpretable instance-level search. Furthermore…
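To make the "shared latent space" idea concrete, here is a hypothetical sketch of instance-level search: the query feature is projected into the foreground embedding space and foreground instances are ranked by cosine similarity. The projection matrix W_proj, the feature dimensions, and the function name search_foregrounds are illustrative assumptions, not the paper's API.

```python
# Hypothetical sketch of foreground object search in a shared latent space.
# Projection + cosine ranking are assumptions used only to illustrate the idea.
import numpy as np

def search_foregrounds(query_feat, W_proj, fg_embs, top_k=5):
    """Project the query feature into the foreground embedding space and
    rank foreground instances by cosine similarity to it."""
    q = W_proj @ query_feat
    q /= np.linalg.norm(q)
    F = fg_embs / np.linalg.norm(fg_embs, axis=1, keepdims=True)
    sims = F @ q
    order = np.argsort(sims)[::-1][:top_k]
    return order, sims[order]

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    idx, sims = search_foregrounds(rng.random(128),          # query feature
                                   rng.random((64, 128)),    # assumed projection
                                   rng.random((1000, 64)))   # foreground embeddings
    print(idx, sims.round(3))
```

Because ranking reduces to a single projection plus dot products against precomputed foreground embeddings, the search stays efficient and each retrieved instance can be inspected directly, which is the interpretability the excerpt refers to.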
Ledger posted on 2025-3-29 21:49:17

http://reply.papertrans.cn/24/2343/234219/234219_48.png

mitral-valve posted on 2025-3-30 01:52:57
Attentive Prototype Few-Shot Learning with Capsule Network-Based Embedding
…proposed attentive prototype aggregates all of the instances in a support class, weighted by their importance as defined by the reconstruction error for a given query. The reconstruction error allows the classification posterior probability to be estimated, which corresponds to the classification…
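The excerpt describes a prototype built from support instances weighted by a reconstruction error with respect to the query. Below is a minimal sketch under simplified assumptions: the "reconstruction error" is stood in for by the squared distance between each support embedding and the query, and the posterior is a softmax over negative distances to the per-class prototypes; the capsule-network embedding itself is not modeled here.

```python
# Hypothetical sketch of a reconstruction-error-weighted ("attentive") prototype.
# Using squared distance as the error and softmax weighting are my assumptions.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attentive_prototype(support_embs, query_emb):
    """Weight each support instance by how well it matches the query
    (negative squared error) and return the weighted prototype."""
    errors = np.sum((support_embs - query_emb) ** 2, axis=1)
    weights = softmax(-errors)                    # low error -> high importance
    return weights @ support_embs

def classify(query_emb, supports_per_class):
    """Posterior over classes from distances to the attentive prototypes."""
    protos = np.stack([attentive_prototype(s, query_emb) for s in supports_per_class])
    dists = np.sum((protos - query_emb) ** 2, axis=1)
    return softmax(-dists)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    supports = [rng.random((5, 32)) for _ in range(3)]   # 3-way, 5-shot episode
    print(classify(rng.random(32), supports).round(3))
```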
小鹿 posted on 2025-3-30 06:35:09

DA4AD: End-to-End Deep Attention-Based Visual Localization for Autonomous Driving
…ively validate the effectiveness of our method using a freshly collected dataset with high-quality ground-truth trajectories and hardware synchronization between sensors. Results demonstrate that our method achieves competitive localization accuracy when compared to LiDAR-based localization solutions…