Book title: Machine Learning and Knowledge Discovery in Databases; European Conference. Editors: Massih-Reza Amini, Stéphane Canu, Grigorios Tsoumaka… Conference p…

Thread starter: Bunion
Posted on 2025-3-28 17:51:44
Class-Incremental Learning via Knowledge Amalgamation

…methods have been proposed to address the catastrophic forgetting problem, where an agent loses its generalization power on old tasks while learning new tasks. We put forward an alternative strategy to handle catastrophic forgetting with knowledge amalgamation (CFA), which learns a student netw…
Posted on 2025-3-28 22:46:20
Trigger Detection for the sPHENIX Experiment via Bipartite Graph Networks with Set Transformer

…also plays a vital role in facilitating the downstream offline data analysis process. The sPHENIX detector, located at the Relativistic Heavy Ion Collider at Brookhaven National Laboratory, is one of the largest nuclear physics experiments in the world and is optimized to detect physics processes…
Posted on 2025-3-29 02:46:59
Understanding Difficulty-Based Sample Weighting with a Universal Difficulty Measure

…to calculate their weights. In this study, this scheme is called difficulty-based weighting. Two important issues arise when explaining this scheme. First, a unified difficulty measure that can be theoretically guaranteed for training samples does not exist. The learning difficulties of the samples…
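As a rough illustration of the difficulty-based weighting scheme this abstract describes, here is a minimal Python sketch. It uses per-sample loss as the difficulty proxy and a softmax to turn difficulties into normalized weights; both choices are illustrative assumptions, not the paper's universal difficulty measure:

```python
import math

def difficulty_weights(losses, temperature=1.0):
    """Map per-sample losses (a common difficulty proxy) to weights.

    Higher loss -> harder sample -> larger weight; weights sum to 1.
    The loss-as-difficulty choice and the softmax form are assumptions
    made for illustration only.
    """
    scaled = [l / temperature for l in losses]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

weights = difficulty_weights([0.1, 0.5, 2.0])
# harder samples receive strictly larger weight
assert weights[0] < weights[1] < weights[2]
assert abs(sum(weights) - 1.0) < 1e-9
```

Raising `temperature` flattens the weights toward uniform; lowering it concentrates weight on the hardest samples.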
Posted on 2025-3-29 03:37:54
Avoiding Forgetting and Allowing Forward Transfer in Continual Learning via Sparse Networks

…models without access to past data. Current methods focus only on selecting a sub-network for a new task that reduces forgetting of past tasks. However, this selection could limit the forward transfer of past knowledge that helps in future learning. Our study reveals that satisfying both objective…
Posted on 2025-3-29 09:47:53
PrUE: Distilling Knowledge from Sparse Teacher Networks

…overhead on deployment. To compress these models, knowledge distillation was proposed to transfer knowledge from a cumbersome (teacher) network into a lightweight (student) network. However, guidance from a teacher does not always improve the generalization of students, especially when the size gap betw…
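The teacher-to-student transfer the abstract refers to can be sketched with the standard temperature-softened KL objective from classic knowledge distillation; this is a generic illustration of the setup, not PrUE's specific method:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax over a list of logits."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp((z - m) / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T*T as in standard distillation. A generic sketch, not
    the paper's exact objective."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return T * T * kl

# identical logits -> zero loss; mismatched logits -> positive loss
assert abs(distillation_loss([1.0, 2.0], [1.0, 2.0])) < 1e-9
assert distillation_loss([2.0, 1.0], [1.0, 2.0]) > 0
```

A higher temperature `T` exposes more of the teacher's "dark knowledge" in the non-target classes, which is the usual motivation for softening.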
Posted on 2025-3-29 12:05:38
Fooling Partial Dependence via Data Poisoning

…out that such explanations are neither robust nor trustworthy, and they can be fooled. This paper presents techniques for attacking Partial Dependence (plots, profiles, PDP), which are among the most popular methods of explaining any predictive model trained on tabular data. We showcase that PD can be ma…
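Partial Dependence itself is simple to compute, which is part of what makes it a concrete target for data poisoning: for each grid value, overwrite the chosen feature in every row of the dataset and average the model's predictions. A minimal sketch (the `model` callable and the toy data are hypothetical):

```python
def partial_dependence(model, X, feature, grid):
    """PD of `model` for feature index `feature`: for each grid value v,
    set that feature to v in every row of X and average the predictions."""
    pd_values = []
    for v in grid:
        preds = []
        for row in X:
            modified = list(row)     # copy so X itself is untouched
            modified[feature] = v
            preds.append(model(modified))
        pd_values.append(sum(preds) / len(preds))
    return pd_values

# toy model whose prediction depends only on feature 0
model = lambda row: 2.0 * row[0]
X = [[0.0, 5.0], [1.0, -3.0], [2.0, 7.0]]
assert partial_dependence(model, X, 0, [0.0, 1.0, 2.0]) == [0.0, 2.0, 4.0]
```

Because the PD curve averages over the dataset `X`, poisoning the rows of `X` changes the curve without touching the model, which is the attack surface the paper exploits.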
Hypothesis Testing for Class-Conditional Label Noise

…a practitioner already has preconceptions about possible distortions that may have affected the labels, which allow us to pose the task as the design of hypothesis tests. As a first approach, we focus on scenarios where a given dataset of instance-label pairs has been corrupted with …, as opposed to …, wit…
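One way to make the "design of hypothesis tests" concrete: if a practitioner hypothesizes a fixed label-flip rate, an exact binomial test can check whether an audited sample of labels is consistent with that rate. This is a generic illustration of the testing idea, not the paper's specific test:

```python
from math import comb

def binomial_two_sided_p(k, n, p0):
    """Two-sided exact binomial p-value for observing k flipped labels
    out of n audited, under H0: flip probability == p0. Illustrative
    only; the paper designs tests for class-conditional noise."""
    pmf = lambda i: comb(n, i) * p0**i * (1 - p0)**(n - i)
    observed = pmf(k)
    # sum the probability of every outcome at least as extreme as k
    return sum(pmf(i) for i in range(n + 1) if pmf(i) <= observed + 1e-12)

# 30 flipped labels out of 100 is inconsistent with a 10% noise rate
assert binomial_two_sided_p(30, 100, 0.10) < 0.01
# 12 flipped labels out of 100 is consistent with it
assert binomial_two_sided_p(12, 100, 0.10) > 0.05
```

A class-conditional version would simply run one such test per class, with a per-class hypothesized flip rate.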
Posted on 2025-3-30 06:59:23
On the Prediction Instability of Graph Neural Networks

…trust in machine learning systems. In this paper, we systematically assess the prediction instability of node classification with state-of-the-art Graph Neural Networks (GNNs). With our experiments, we establish that multiple instantiations of popular GNN models trained on the same data with the same m…
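Prediction instability between two training runs can be quantified with a simple disagreement rate over node predictions; this is a common choice for such studies, though the paper's exact metrics may differ:

```python
def disagreement_rate(preds_a, preds_b):
    """Fraction of nodes on which two training runs of the same model
    predict different classes -- a simple instability measure."""
    assert len(preds_a) == len(preds_b)
    flips = sum(1 for a, b in zip(preds_a, preds_b) if a != b)
    return flips / len(preds_a)

# hypothetical node-class predictions from two runs of the same GNN
run1 = [0, 1, 1, 2, 0]
run2 = [0, 1, 2, 2, 1]
assert disagreement_rate(run1, run2) == 0.4
```

Averaging this rate over all pairs of runs gives a single instability score for a model/dataset combination.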