贪婪的人 posted at 2025-3-23 09:53:48
A Theoretically Grounded Extension of Universal Attacks from the Attacker’s Viewpoint
…performance of state-of-the-art gradient-based universal perturbation. As evidenced by our experiments, these novel universal perturbations result in more interpretable, diverse, and transferable attacks.
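
As a rough illustration of the kind of attack discussed above, here is a minimal NumPy sketch of a gradient-based universal perturbation against a toy softmax classifier: one shared delta is built by accumulating input gradients over many samples and projecting onto an L-infinity ball. The toy model, step size and epsilon are my own assumptions, not the method proposed in the paper.

# Minimal sketch of a *universal* (single, shared) adversarial perturbation.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features, n_classes = 256, 20, 3

# Toy data and a fixed (already "trained") linear softmax model.
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, n_classes, size=n_samples)
W = rng.normal(scale=0.5, size=(n_features, n_classes))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def input_gradient(x_batch, y_batch):
    """Gradient of the cross-entropy loss w.r.t. the *inputs*."""
    p = softmax(x_batch @ W)                      # (B, C)
    p[np.arange(len(y_batch)), y_batch] -= 1.0    # dL/dlogits
    return p @ W.T                                # (B, F) = dL/dx

eps, step, epochs = 0.5, 0.05, 10
delta = np.zeros(n_features)                      # the single universal perturbation

for _ in range(epochs):
    for start in range(0, n_samples, 32):
        xb, yb = X[start:start + 32], y[start:start + 32]
        g = input_gradient(xb + delta, yb).mean(axis=0)
        # Ascend the loss, then project back onto the L-infinity ball of radius eps.
        delta = np.clip(delta + step * np.sign(g), -eps, eps)

# Fooling rate: how often the one shared delta changes the model's prediction.
pred_clean = (X @ W).argmax(axis=1)
pred_adv = ((X + delta) @ W).argmax(axis=1)
print("fooling rate:", (pred_clean != pred_adv).mean())
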
ATOPY posted at 2025-3-24 05:36:19
Walking Noise: On Layer-Specific Robustness of Neural Architectures Against Noisy Computations and A…
…workload. We propose a methodology called Walking Noise, which injects layer-specific noise to measure the robustness and to provide insights into the learning dynamics. In more detail, we investigate the implications of additive, multiplicative and mixed noise for different classification tasks and model architectures…
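
To make the noise-injection idea concrete, below is a hedged NumPy sketch that perturbs the activations of exactly one layer of a small random MLP (additively, multiplicatively, or both) and reports how often the prediction still agrees with the clean forward pass. The model, noise scales and the agreement metric are illustrative assumptions, not the paper's experimental setup.

# Sketch: inject noise into one chosen layer and compare against the clean output.
import numpy as np

rng = np.random.default_rng(1)
layer_sizes = [16, 32, 32, 4]                  # input, two hidden layers, output
weights = [rng.normal(scale=0.3, size=(a, b))
           for a, b in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x, noisy_layer=None, mode="additive", scale=0.1):
    """Forward pass; inject noise into the activations of one layer only."""
    h = x
    for i, W in enumerate(weights):
        h = np.maximum(h @ W, 0.0) if i < len(weights) - 1 else h @ W
        if i == noisy_layer:
            if mode in ("additive", "mixed"):
                h = h + rng.normal(scale=scale, size=h.shape)
            if mode in ("multiplicative", "mixed"):
                h = h * rng.normal(loc=1.0, scale=scale, size=h.shape)
    return h

x = rng.normal(size=(128, layer_sizes[0]))
clean = forward(x).argmax(axis=1)

# "Walk" the same noise level across layers to compare per-layer robustness.
for layer in range(len(weights)):
    for mode in ("additive", "multiplicative", "mixed"):
        noisy = forward(x, noisy_layer=layer, mode=mode, scale=0.5).argmax(axis=1)
        agreement = (noisy == clean).mean()
        print(f"layer {layer} {mode:<15} agreement with clean prediction: {agreement:.2f}")
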
上坡 posted at 2025-3-24 09:19:24

KAFÈ: Kernel Aggregation for FEderated Learning
…Kernel Aggregation for Federated Learning. KAFÈ leverages Kernel Density Estimation (KDE) to construct a novel classification layer for the global model, drawing upon the estimated weight distributions of the individual classifiers. We conducted several experiments on image and text datasets to evaluate…
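
One possible reading of the KDE-based aggregation idea, sketched below under my own assumptions: each client contributes its last-layer weights, and the server replaces plain averaging with the per-coordinate mode of a Gaussian KDE fitted over the clients' values. This is not KAFÈ's actual algorithm, just a minimal illustration of aggregating via estimated weight distributions.

# Sketch: aggregate client classifier weights via a per-coordinate KDE mode.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
n_clients, n_features, n_classes = 10, 8, 3

# Simulated last-layer weights from heterogeneous (non-IID) clients.
client_weights = [rng.normal(loc=rng.normal(scale=0.5), scale=0.3,
                             size=(n_features, n_classes))
                  for _ in range(n_clients)]
stacked = np.stack(client_weights)            # (clients, features, classes)

def kde_mode(samples, grid_points=200):
    """Return the value where the Gaussian KDE of the samples peaks."""
    kde = gaussian_kde(samples)
    grid = np.linspace(samples.min() - 1.0, samples.max() + 1.0, grid_points)
    return grid[np.argmax(kde(grid))]

# One KDE per (feature, class) weight of the global classification layer.
global_weights = np.empty((n_features, n_classes))
for i in range(n_features):
    for j in range(n_classes):
        global_weights[i, j] = kde_mode(stacked[:, i, j])

fedavg_weights = stacked.mean(axis=0)          # plain averaging, for comparison
print("max |KDE mode - FedAvg| per weight:",
      np.abs(global_weights - fedavg_weights).max().round(3))
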
撕裂皮肉 posted at 2025-3-24 20:39:33
Low-Hanging Fruit: Knowledge Distillation from Noisy Teachers for Open Domain Spoken Language Understanding
…techniques to generate more reliable annotations for unlabelled OD-SLU data, thereby fostering “Consistently Guiding Students”. Initially, IPPS aims to solve the straightforward intent prediction task in OD-SLU using self-ranked prompting, enhancing LLM precision using similar examples from a small…
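
A heavily hedged sketch of one ingredient mentioned above, retrieving similar labelled examples to build a few-shot intent-prediction prompt; the similarity measure, seed data and prompt template are invented for illustration, and the self-ranking step of IPPS is not reproduced here.

# Sketch: build a few-shot intent prompt from the most similar labelled examples.
from collections import Counter

seed_examples = [  # tiny hypothetical labelled seed set
    ("play some jazz music", "PlayMusic"),
    ("what's the weather in Berlin tomorrow", "GetWeather"),
    ("book a table for two tonight", "BookRestaurant"),
    ("turn the living room lights off", "SmartHomeControl"),
]

def similarity(a: str, b: str) -> float:
    """Crude token-overlap similarity between two utterances."""
    ta, tb = Counter(a.lower().split()), Counter(b.lower().split())
    return sum((ta & tb).values()) / max(1, sum((ta | tb).values()))

def build_prompt(utterance: str, k: int = 2) -> str:
    """Few-shot prompt with the k most similar labelled examples first."""
    ranked = sorted(seed_examples,
                    key=lambda ex: similarity(utterance, ex[0]),
                    reverse=True)[:k]
    shots = "\n".join(f"Utterance: {u}\nIntent: {i}" for u, i in ranked)
    return f"{shots}\nUtterance: {utterance}\nIntent:"

# The resulting prompt would be sent to an LLM teacher to pseudo-label
# unlabelled OD-SLU data; the LLM call itself is omitted here.
print(build_prompt("could you play relaxing piano music"))
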
把手 posted at 2025-3-25 01:41:50

The Price of Labelling: A Two-Phase Federated Self-learning Approach
…such as class imbalance and distribution shift across clients. This poses a challenge for creating high-quality pseudo-labels without addressing data heterogeneity. To overcome these challenges, we propose a two-phase FL approach based on data augmentation and self-learning, coined 2PFL. In the first…
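
To illustrate the self-learning ingredient, here is a minimal sketch of confidence-thresholded pseudo-labelling plus a trivial augmentation of the kept samples; the threshold, toy model and jitter augmentation are assumptions of mine, not the 2PFL procedure itself.

# Sketch: pseudo-label a client's unlabelled pool, keeping only confident predictions.
import numpy as np

rng = np.random.default_rng(3)
n_unlabelled, n_features, n_classes = 200, 10, 4

X_unlabelled = rng.normal(size=(n_unlabelled, n_features))
W_global = rng.normal(scale=0.5, size=(n_features, n_classes))  # stand-in global model

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def pseudo_label(X, W, threshold=0.6):
    """Keep only predictions whose top-class probability clears the threshold."""
    probs = softmax(X @ W)
    conf = probs.max(axis=1)
    keep = conf >= threshold
    return X[keep], probs[keep].argmax(axis=1), keep.mean()

X_kept, y_pseudo, kept_fraction = pseudo_label(X_unlabelled, W_global)

# Simple augmentation (Gaussian jitter) of the pseudo-labelled pool before local training.
X_aug = X_kept + rng.normal(scale=0.05, size=X_kept.shape)
print(f"kept {kept_fraction:.0%} of unlabelled samples as pseudo-labelled "
      f"({len(X_kept)} originals + {len(X_aug)} augmented copies)")
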