Explainable Artificial Intelligence: First World Conference. Luca Longo (Ed.). Conference proceedings, 2023. © The Editor(s) (if applicable) and The Author(s).

Dear XAI Community, We Need to Talk!
Unfortunately, these unfounded parts are not on the decline but continue to grow. Many explanation techniques are still proposed without clarifying their purpose. Instead, they are advertised with ever more fancy-looking heatmaps or only seemingly relevant benchmarks. Moreover, explanation techniques are …
Speeding Things Up. Can Explainability Improve Human Learning?
In such circumstances, the algorithm requests a teacher, usually a human, to select or verify the system's prediction on the most informative points. The most informative usually refers to the instances that are the hardest for the algorithm to label. However, it has been proven that humans are more …
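The protocol sketched in this excerpt is active learning with a human teacher. As a minimal illustration (not the paper's setup), the following Python sketch queries a hypothetical human oracle on the pool instances the model is least certain about, using predictive entropy as the informativeness score:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
# Seed set with both classes represented; the rest forms the unlabeled pool.
labeled = list(np.where(y == 0)[0][:10]) + list(np.where(y == 1)[0][:10])
pool = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for _ in range(10):                      # ten query rounds
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[pool])
    # High predictive entropy marks the instances that are hardest for the
    # algorithm to label: exactly the points handed to the teacher.
    entropy = -(proba * np.log(proba + 1e-12)).sum(axis=1)
    query = pool[int(np.argmax(entropy))]
    labeled.append(query)                # the human oracle supplies y[query]
    pool.remove(query)

Whether the hardest points for the model are also the ones a human can label reliably is precisely the tension the excerpt raises.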
Do Intermediate Feature Coalitions Aid Explainability of Black-Box Models?
… a hierarchical structure in which each level corresponds to features of a dataset (i.e., a player-set partition). The level of coarseness increases from the trivial set, which only comprises singletons, to the set, which only contains the grand coalition. In addition, it is possible to establish me…
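As a toy illustration of valuing coalitions at an intermediate level of such a hierarchy, the sketch below computes exact Shapley values where the players are feature groups rather than single features. The partition and the payoff function are assumptions invented for the example, not the paper's construction:

from itertools import combinations
from math import factorial
import numpy as np

def shapley_over_groups(value, n_groups):
    """Exact Shapley value per group; value(S) is the payoff of the
    coalition S, a set of group indices."""
    phi = np.zeros(n_groups)
    for i in range(n_groups):
        others = [j for j in range(n_groups) if j != i]
        for k in range(n_groups):
            for S in combinations(others, k):
                S = set(S)
                w = (factorial(len(S)) * factorial(n_groups - len(S) - 1)
                     / factorial(n_groups))
                phi[i] += w * (value(S | {i}) - value(S))
    return phi

# Three groups of raw features (a player-set partition of {0,...,5}).
groups = [{0, 1}, {2}, {3, 4, 5}]
# Toy payoff: number of raw features the coalition covers.
value = lambda S: len(set().union(*(groups[j] for j in S))) if S else 0
print(shapley_over_groups(value, len(groups)))   # -> [2. 1. 3.]

Moving between levels of coarseness then amounts to re-running the same game on a finer or coarser partition.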
Unfooling SHAP and SAGE: Knockoff Imputation for Shapley Values
… attributions are susceptible to adversarial attacks. This originates from target function evaluations at extrapolated data points, which are easily detectable and hence enable models to behave accordingly. In this paper, we introduce a novel strategy for increased robustness against adversarial attacks …
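The attack surface described here is the imputation step: marginal imputation pairs the explained instance's kept features with replacement values drawn independently, producing off-manifold points a model can detect and answer differently. The sketch below contrasts that with on-distribution (conditional) imputation for a bivariate Gaussian; the exact Gaussian conditional is a stand-in for the paper's knockoff sampler, which is not reproduced here:

import numpy as np

rng = np.random.default_rng(0)
cov = np.array([[1.0, 0.95], [0.95, 1.0]])   # two strongly correlated features
X = rng.multivariate_normal([0.0, 0.0], cov, size=1000)
x = X[0]                                      # instance being explained

# Marginal imputation: feature 1 drawn ignoring feature 0, so many pairs
# land far off the data manifold and are easy to flag as "explanation mode".
marginal = np.column_stack([np.full(1000, x[0]), X[:, 1]])

# Conditional imputation: feature 1 drawn given feature 0 = x[0], keeping
# the evaluation points on-distribution and hence undetectable.
mu = cov[1, 0] / cov[0, 0] * x[0]
sd = np.sqrt(cov[1, 1] - cov[1, 0] ** 2 / cov[0, 0])
conditional = np.column_stack([np.full(1000, x[0]),
                               rng.normal(mu, sd, size=1000)])

A Shapley estimator would average model outputs over rows of `conditional` rather than `marginal` whenever feature 1 lies outside the coalition.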
Strategies to Exploit XAI to Improve Classification Systems
… results beyond their decisions. A significant goal of XAI is to improve the performance of AI models by providing explanations for their decision-making processes. However, most XAI literature focuses on how to explain an AI system, while less attention has been given to how XAI methods can be exploited …
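One concrete member of this strategy family is to feed a global explanation back into training. The hedged sketch below uses permutation importance as a simple stand-in for an XAI signal, prunes features it scores at or below zero, and retrains; the data and models are illustrative, not from the paper:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=20, n_informative=5,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(Xtr, ytr)
# Global "explanation": how much each feature contributes to held-out score.
imp = permutation_importance(clf, Xte, yte, random_state=0).importances_mean
keep = imp > 0                               # drop features that don't help
pruned = RandomForestClassifier(random_state=0).fit(Xtr[:, keep], ytr)
print(clf.score(Xte, yte), pruned.score(Xte[:, keep], yte))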
Beyond Prediction Similarity: ShapGAP for Evaluating Faithful Surrogate Models in XAI
… models. Surrogation, emulating a black-box model (BB) with a white-box model (WB), is crucial in applications where BBs are unavailable due to security or practical concerns. Traditional fidelity measures only evaluate the similarity of the final predictions, which can lead to a significant limitation …
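Although this excerpt does not give ShapGAP's exact formula, the idea of comparing reasoning rather than outputs can be sketched as a distance between the SHAP attribution vectors the BB and WB assign to the same instances; the L2 and cosine distances below are illustrative choices:

import numpy as np

def shap_gap(phi_bb, phi_wb):
    """phi_*: (n_instances, n_features) attribution matrices for the
    black-box and its surrogate on the same instances."""
    l2 = np.linalg.norm(phi_bb - phi_wb, axis=1)
    cos = 1.0 - (phi_bb * phi_wb).sum(axis=1) / (
        np.linalg.norm(phi_bb, axis=1) * np.linalg.norm(phi_wb, axis=1) + 1e-12)
    return l2.mean(), cos.mean()

# Two models can agree on predictions while attributing them differently;
# a faithful surrogate should score low on both gaps.
phi_bb = np.random.default_rng(0).normal(size=(100, 8))
phi_wb = phi_bb + np.random.default_rng(1).normal(scale=0.3, size=(100, 8))
print(shap_gap(phi_bb, phi_wb))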
iPDP: On Partial Dependence Plots in Dynamic Modeling Scenarios
… explainable artificial intelligence (XAI) to understand black-box machine learning models. While many real-world applications require dynamic models that constantly adapt over time and react to changes in the underlying distribution, XAI, so far, has primarily considered static learning environments, where …
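A time-adaptive partial dependence estimate can be kept incrementally: evaluate the model on a fixed grid for each arriving instance and fold the result into an exponentially decayed running mean, so the curve tracks distribution shift. The decay scheme below is an illustrative assumption, not necessarily the iPDP construction:

import numpy as np

class IncrementalPDP:
    def __init__(self, grid, alpha=0.01):
        self.grid = np.asarray(grid, dtype=float)  # grid for the feature
        self.alpha = alpha                         # higher = adapts faster
        self.pd = np.zeros(len(self.grid))         # running PD curve

    def update(self, model, x, feature):
        # Copies of x with `feature` swept over the grid, as in a static PDP.
        Xg = np.repeat(x[None, :], len(self.grid), axis=0)
        Xg[:, feature] = self.grid
        # Exponential forgetting lets the curve react to concept drift.
        self.pd = (1 - self.alpha) * self.pd + self.alpha * model(Xg)

# Stream usage: for each arriving x_t, call pdp.update(model, x_t, feature=0);
# pdp.pd is then the current drift-aware estimate of the dependence curve.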