What Does It Cost to Deploy an XAI System: A Case Study in Legacy Systems
…way. We develop an aggregate taxonomy for explainability and analyse the requirements based on roles. We explain in which steps of the new code-migration process machine learning is used. Further, we analyse the additional effort needed to make the new way of code migration explainable to different stakeholders.
Cecilia L. Ridgeway, Sandra Nakagawa
…of localised structures in NNs, helping to reduce NN opacity. The proposed work analyses the role of local variability in NN architecture design, presenting experimental results that show how this feature is actually desirable.
Vapor-Liquid Critical Constants of Fluids
…through a consistent feature attribution. We apply this methodology to analyse in detail the March 2020 financial meltdown, for which the model offered a timely out-of-sample prediction. This analysis unveils in particular the contrarian predictive role of the tech equity sector before and after the crash.
https://doi.org/10.1007/978-3-319-22041-3
…key factors that should be included in evaluating these applications and show how these work with the examples found. By using these assessment criteria to evaluate the explainability needs of Reinforcement Learning, the research field can be guided towards increasing transparency and trust through explanations.
A Two-Dimensional Explanation Framework to Classify AI as Incomprehensible, Interpretable, or Understandable
…concepts in a concise and coherent way, yielding a classification of three types of AI systems: incomprehensible, interpretable, and understandable. We also discuss how the established relationships can be used to guide future research into XAI, and how the framework could be used during the development of AI systems as part of human-AI teams.
Towards an XAI-Assisted Third-Party Evaluation of AI Systems: Illustration on Decision Trees
…relationships between different parameters. In addition, the explanations make it possible to inspect for the presence of bias in the database and in the algorithm. These first results lay the groundwork for further research in order to generalize the conclusions of this paper to different XAI methods.