Closing the Loop: Testing ChatGPT to Generate Model Explanations to Improve Human Labelling of Sponsored Content
… Unfair Commercial Practices Directive (UCPD) in the European Union, or Section 5 of the Federal Trade Commission Act. Yet enforcing these obligations has proven highly problematic due to the sheer scale of the influencer market. The task of automatically detecting sponsored content aims to en…
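A minimal sketch of the loop the title describes, under assumed details: an upstream classifier flags a post as sponsored, and an LLM is asked to explain the flag so human annotators can confirm or reject it faster. The post, prediction, prompt, and model name are illustrative assumptions, not the paper's setup; only the OpenAI client calls are real API.

```python
# Hedged sketch: LLM-generated explanation to support human labelling.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

post = "Loving my new blender! Use code ANNA10 for 10% off #partner"
prediction = "sponsored"  # output of some upstream classifier (assumed)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{
        "role": "user",
        "content": f"A classifier labelled this post as '{prediction}'. "
                   f"List the textual cues that support or contradict that "
                   f"label, for a human annotator:\n\n{post}",
    }],
)
print(response.choices[0].message.content)
```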
Human-Computer Interaction and Explainability: Intersection and Terminology
…technological artifacts or systems. Explainable AI (xAI) is involved in HCI to help humans better understand computers or AI systems, which in turn fosters better interaction. The term “explainability” is sometimes used interchangeably with other closely related terms such as interpretability…
Explaining Deep Reinforcement Learning-Based Methods for Control of Building HVAC Systems
… learning techniques. However, due to the black-box nature of these algorithms, the resulting control policies can be difficult to understand from a human perspective. This limitation is particularly relevant in real-world scenarios, where an understanding of the controller is required for reliability…
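One common way to probe such a black-box policy, shown here as a hedged sketch rather than the chapter's actual method: gradient saliency on the policy network, attributing the chosen action to the state features. The network shape, state features, and their names are made-up stand-ins for an HVAC controller.

```python
# Hedged sketch: gradient saliency for a (toy) DRL HVAC policy.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 3))  # 3 discrete actions

# Hypothetical state: [indoor_temp, outdoor_temp, occupancy, hour_of_day]
state = torch.tensor([22.5, 30.0, 1.0, 14.0], requires_grad=True)

logits = policy(state)
action = logits.argmax()
logits[action].backward()  # d(chosen action score) / d(state features)

saliency = state.grad.abs()
for name, s in zip(["indoor_temp", "outdoor_temp", "occupancy", "hour"], saliency):
    print(f"{name}: {s.item():.3f}")  # larger = more influence on the chosen action
```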
Necessary and Sufficient Explanations of Multi-Criteria Decision Aiding Models, with and Without Interacting Criteria
… classify an instance in an ordered list of categories on the basis of multiple and conflicting criteria. Several models can be used to achieve such goals, ranging from the simplest one assuming independence among criteria (namely the weighted sum model) to complex models able to represent complex…
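A small sketch of the weighted sum model the excerpt names: each criterion is scored, the scores are aggregated linearly under the independence assumption, and thresholds assign the instance to an ordered category. The criteria, weights, and thresholds below are invented examples, not values from the chapter.

```python
# Hedged sketch: weighted sum model for sorting into ordered categories.
weights = {"price": 0.5, "quality": 0.3, "delivery": 0.2}        # sum to 1
thresholds = [(0.75, "good"), (0.4, "average"), (0.0, "bad")]    # ordered categories

def assign_category(scores: dict[str, float]) -> str:
    # Linear aggregation: no interaction among criteria.
    total = sum(weights[c] * scores[c] for c in weights)
    return next(cat for t, cat in thresholds if total >= t)

print(assign_category({"price": 0.9, "quality": 0.6, "delivery": 0.8}))  # -> "good"
```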
XInsight: Revealing Model Insights for GNNs with Flow-Based Explanations
… systems. While this progress is significant, many networks are ‘black boxes’, with little understanding of what exactly the network is learning. Many high-stakes applications, such as drug discovery, require human-intelligible explanations from the models so that users can recognize errors and…
What Will Make Misinformation Spread: An XAI Perspective
… when making the decisions. Online social networks have a problem with misinformation, which is known to have negative effects. In this paper, we propose to utilize XAI techniques to study what factors lead to misinformation spreading by explaining a trained graph neural network that predicts misinformation…
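For concreteness, a hedged sketch of explaining a GNN classifier with GNNExplainer from PyTorch Geometric — a standard tool for this kind of analysis, not necessarily the paper's exact pipeline. The graph, features, and "misinformation" labels here are random stand-ins.

```python
# Hedged sketch: explaining a GNN node classifier with GNNExplainer.
import torch
from torch_geometric.nn import GCNConv
from torch_geometric.explain import Explainer, GNNExplainer

class GCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(8, 16)
        self.conv2 = GCNConv(16, 2)  # 2 classes: misinformation / not
    def forward(self, x, edge_index):
        return self.conv2(self.conv1(x, edge_index).relu(), edge_index)

x = torch.randn(20, 8)                      # 20 posts, 8 features each
edge_index = torch.randint(0, 20, (2, 60))  # random sharing edges

explainer = Explainer(
    model=GCN(),                            # would be the trained model
    algorithm=GNNExplainer(epochs=100),
    explanation_type='model',
    node_mask_type='attributes',
    edge_mask_type='object',
    model_config=dict(mode='multiclass_classification',
                      task_level='node', return_type='raw'),
)
explanation = explainer(x, edge_index, index=5)  # explain node 5's prediction
print(explanation.node_mask.shape, explanation.edge_mask.shape)
```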
… an aggregation of the explanations provided by the clients participating in the cooperation. We empirically test our proposal on two different tabular datasets, and we observe interesting and encouraging preliminary results.
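A minimal sketch of the aggregation idea in this excerpt, under assumed details: each federated client computes local per-feature attributions (by any attribution method), and the server averages them weighted by client dataset size. The weighting scheme, feature count, and values are illustrative assumptions.

```python
# Hedged sketch: server-side aggregation of client explanations.
import numpy as np

def aggregate_explanations(client_attributions, client_sizes):
    """client_attributions: one per-feature attribution vector per client;
    client_sizes: number of local samples per client (used as weights)."""
    w = np.asarray(client_sizes, dtype=float)
    w /= w.sum()
    return np.average(np.stack(client_attributions), axis=0, weights=w)

# Three hypothetical clients, four tabular features
attrs = [np.array([0.4, 0.1, 0.3, 0.2]),
         np.array([0.5, 0.2, 0.2, 0.1]),
         np.array([0.3, 0.1, 0.4, 0.2])]
print(aggregate_explanations(attrs, client_sizes=[100, 50, 150]))
```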