使习惯于
Posted on 2025-3-30 09:27:38
http://reply.papertrans.cn/24/2343/234245/234245_51.png
独裁政府
Posted on 2025-3-30 15:08:52
http://reply.papertrans.cn/24/2343/234245/234245_52.png
淡紫色花
Posted on 2025-3-30 18:09:07
http://reply.papertrans.cn/24/2343/234245/234245_53.png
躺下残杀
Posted on 2025-3-30 22:26:27
PTQ4ViT: Post-training Quantization for Vision Transformers with Twin Uniform Quantization. …Recently, vision transformers have demonstrated great potential in computer vision. However, previous post-training quantization methods did not perform well on vision transformers, resulting in more than a 1% accuracy drop even with 8-bit quantization. Therefore, we analyze the problems of quantization…
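For context on the baseline this abstract criticizes, here is a minimal sketch of standard symmetric uniform post-training quantization in Python. The per-tensor scale, the 8-bit setting, and the heavy-tailed test input are illustrative assumptions; this is not PTQ4ViT's twin uniform scheme.

import numpy as np

def uniform_quantize(x: np.ndarray, num_bits: int = 8) -> np.ndarray:
    # Symmetric uniform quantization: snap x onto a 2^b-level grid.
    qmax = 2 ** (num_bits - 1) - 1          # 127 for 8 bits
    scale = np.abs(x).max() / qmax          # one scale for the whole tensor
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale                        # dequantized ("fake-quant") values

# Heavy-tailed activations stretch the single uniform grid, which is one
# intuition for why plain PTQ can hurt transformer accuracy.
acts = np.random.standard_cauchy(10_000).astype(np.float32)
mse = float(np.mean((acts - uniform_quantize(acts)) ** 2))
print(f"8-bit uniform quantization MSE on heavy-tailed activations: {mse:.4f}")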
muffler
Posted on 2025-3-31 01:19:42
http://reply.papertrans.cn/24/2343/234245/234245_55.png
伴随而来
Posted on 2025-3-31 08:59:11
http://reply.papertrans.cn/24/2343/234245/234245_56.png
lanugo
Posted on 2025-3-31 09:11:06
Latent Discriminant Deterministic Uncertainty. …are computationally intensive. In this work, we attempt to address these challenges in the context of autonomous driving perception tasks. Recently proposed Deterministic Uncertainty Methods (DUM) can only partially meet such requirements, as their scalability to complex computer vision tasks is no…
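As a rough illustration of what a single-pass Deterministic Uncertainty Method computes, the sketch below scores uncertainty as the distance from a feature embedding to the nearest class centroid. The nearest-centroid rule is a generic stand-in assumed here; it is not the paper's latent discriminant formulation.

import torch

def dum_predict(feats: torch.Tensor, centroids: torch.Tensor):
    # feats: (N, D) embeddings; centroids: (C, D) per-class feature means
    d = torch.cdist(feats, centroids)       # (N, C) Euclidean distances
    pred = d.argmin(dim=1)                  # nearest-centroid class label
    uncertainty = d.min(dim=1).values       # far from every centroid => uncertain
    return pred, uncertainty

feats = torch.randn(4, 16)
centroids = torch.randn(3, 16)
pred, unc = dum_predict(feats, centroids)
print(pred, unc)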
Eructation
Posted on 2025-3-31 14:55:27
http://reply.papertrans.cn/24/2343/234245/234245_58.png
LUCY
Posted on 2025-3-31 19:49:12
HIVE: Evaluating the Human Interpretability of Visual Explanations. …human interpretable. Despite the recent growth of interpretability work, there is a lack of systematic evaluation of proposed techniques. In this work, we introduce HIVE (Human Interpretability of Visual Explanations), a novel human evaluation framework that assesses the utility of explanations to human…
cardiac-arrest
Posted on 2025-4-1 01:24:48
BayesCap: Bayesian Identity Cap for Calibrated Uncertainty in Frozen Neural Networks. …While Bayesian deep learning techniques allow uncertainty estimation, training them with large-scale datasets is an expensive process that does not always yield models competitive with non-Bayesian counterparts. Moreover, many of the high-performing deep learning models that are already trained and…
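For contrast with the expensive Bayesian training this abstract refers to, Monte-Carlo dropout is one common lightweight approximation, sketched below. The toy two-layer model and the 20-pass setting are assumptions for illustration; this is not BayesCap's method.

import torch
import torch.nn as nn

# Toy regressor with dropout; the architecture is an arbitrary assumption.
model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(32, 1))

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, passes: int = 20):
    model.train()                           # keep dropout active at inference
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(passes)])
    # Mean across stochastic passes is the prediction; variance is uncertainty.
    return preds.mean(dim=0), preds.var(dim=0)

mean, var = mc_dropout_predict(model, torch.randn(5, 8))
print(mean.squeeze(), var.squeeze())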