Title: Explainable Artificial Intelligence; First World Conference. Editor: Luca Longo. Conference proceedings, 2023. Copyright: The Editor(s) (if applicable) and The Auth…

Original poster: Forbidding
Posted on 2025-3-23 12:47:53

Closing the Loop: Testing ChatGPT to Generate Model Explanations to Improve Human Labelling of Spons…
…Unfair Commercial Practices Directive (UCPD) in the European Union, or Section 5 of the Federal Trade Commission Act. Yet enforcing these obligations has proven highly problematic due to the sheer scale of the influencer market. The task of automatically detecting sponsored content aims to en…
Posted on 2025-3-23 15:00:34

Human-Computer Interaction and Explainability: Intersection and Terminology
…technological artifacts or systems. Explainable AI (xAI) is involved in HCI to help humans better understand computers or AI systems, which in consequence fosters better interaction. The term “explainability” is sometimes used interchangeably with other closely related terms such as interpretabilit…
Posted on 2025-3-23 18:31:56

Explaining Deep Reinforcement Learning-Based Methods for Control of Building HVAC Systems
…learning techniques. However, due to the black-box nature of these algorithms, the resulting control policies can be difficult to understand from a human perspective. This limitation is particularly relevant in real-world scenarios, where an understanding of the controller is required for reliabili…
Posted on 2025-3-24 04:26:22

Necessary and Sufficient Explanations of Multi-Criteria Decision Aiding Models, with and Without Int…
…classify an instance into an ordered list of categories on the basis of multiple, conflicting criteria. Several models can be used to achieve such goals, ranging from the simplest one, which assumes independence among the criteria (namely the weighted sum model), to complex models able to represent complex…
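The weighted sum model mentioned in this abstract, the simplest of the multi-criteria decision aiding models, can be sketched in a few lines. The weights, criterion values, and category thresholds below are illustrative assumptions, not taken from the paper:

```python
# Minimal sketch of a weighted-sum multi-criteria model that assigns an
# alternative to one of several ordered categories via score thresholds.
# All numbers below are made up for illustration.

def weighted_sum_category(values, weights, thresholds):
    """Score an alternative and map the score to an ordered category index.

    values     -- criterion evaluations, each in [0, 1]
    weights    -- non-negative weights summing to 1 (criteria assumed independent)
    thresholds -- ascending score cut-offs separating the categories
    """
    score = sum(v * w for v, w in zip(values, weights))
    # The category index is the number of thresholds the score passes.
    category = sum(score >= t for t in thresholds)
    return score, category

# Three hypothetical criteria (e.g. cost, quality, safety), equally weighted,
# with two cut-offs defining three ordered categories (0, 1, 2).
score, cat = weighted_sum_category([0.9, 0.4, 0.8], [1/3, 1/3, 1/3], [0.3, 0.6])
print(score, cat)
```

Because the weighted sum assumes independence among criteria, a single strong criterion can compensate for a weak one, which is precisely the limitation the more complex models in the paper address.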
Posted on 2025-3-24 08:13:53

XInsight: Revealing Model Insights for GNNs with Flow-Based Explanations
…systems. While this progress is significant, many networks are ‘black boxes’ with little understanding of what exactly the network is learning. Many high-stakes applications, such as drug discovery, require human-intelligible explanations from the models so that users can recognize errors and…
Posted on 2025-3-24 11:59:17

What Will Make Misinformation Spread: An XAI Perspective
…when making the decisions. Online social networks have a misinformation problem that is known to have negative effects. In this paper, we propose to utilize XAI techniques to study which factors lead to misinformation spreading, by explaining a trained graph neural network that predicts misinfo…
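The paper explains a trained graph neural network; one of the simplest XAI techniques in that family is perturbation-based attribution. The sketch below illustrates the idea on a generic scoring function rather than an actual GNN: `spread_score` is a made-up stand-in for the trained model, and the feature names are hypothetical.

```python
# Minimal sketch of occlusion-style attribution: ablate each input feature
# and record how much the model's prediction drops. `spread_score` is a toy
# stand-in for a trained spread predictor, not the paper's GNN.

def spread_score(features):
    # Hypothetical linear model: evidence that a post will spread, driven by
    # (say) emotional tone, poster connectivity, and novelty.
    w = [0.5, 0.3, 0.2]
    return sum(f * wi for f, wi in zip(features, w))

def occlusion_attribution(model, features):
    """Attribute the prediction to each feature by zeroing it out."""
    base = model(features)
    attributions = []
    for i in range(len(features)):
        ablated = list(features)
        ablated[i] = 0.0  # remove this feature's contribution
        attributions.append(base - model(ablated))
    return attributions

attr = occlusion_attribution(spread_score, [1.0, 0.5, 0.2])
print(attr)  # largest value marks the feature driving the prediction
```

GNN-specific explainers (e.g. edge-masking methods) apply the same ablate-and-measure principle to nodes and edges of the input graph instead of flat feature vectors.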
Posted on 2025-3-24 23:28:52

…an aggregation of the explanations provided by the clients participating in the cooperation. We empirically test our proposal on two different tabular datasets, and we observe interesting and encouraging preliminary results.