Titlebook: Explainable AI: Interpreting, Explaining and Visualizing Deep Learning; Wojciech Samek, Grégoire Montavon, Klaus-Robert Müller; Book, 2019, Springer

Views: 46652 | Replies: 51
Posted on 2025-3-21 17:06:13
Title: Explainable AI: Interpreting, Explaining and Visualizing Deep Learning
Editors: Wojciech Samek, Grégoire Montavon, Klaus-Robert Müller
Overview: Assesses the current state of research on Explainable AI (XAI). Provides a snapshot of interpretable AI techniques. Reflects the current discourse and provides directions for future development.
Series: Lecture Notes in Computer Science
Description: The development of "intelligent" systems that can take decisions and perform autonomously might lead to faster and more consistent decisions. A limiting factor for a broader adoption of AI technology is the inherent risks that come with giving up human control and oversight to "intelligent" machines. For sensitive tasks involving critical infrastructures and affecting human well-being or health, it is crucial to limit the possibility of improper, non-robust and unsafe decisions and actions. Before deploying an AI system, we see a strong need to validate its behavior, and thus establish guarantees that it will continue to perform as expected when deployed in a real-world environment. In pursuit of that objective, ways for humans to verify the agreement between the AI decision structure and their own ground-truth knowledge have been explored. Explainable AI (XAI) has developed as a subfield of AI, focused on exposing complex AI models to humans in a systematic and interpretable manner. The 22 chapters included in this book provide a timely snapshot of algorithms, theory, and applications of interpretable and explainable AI and AI techniques that have been proposed recently, reflecti…
Publication date: 2019
Keywords: artificial intelligence; computer vision; deep learning; explainable AI; explanation methods; fuzzy contr…
Edition: 1
DOI: https://doi.org/10.1007/978-3-030-28954-6
ISBN (softcover): 978-3-030-28953-9
ISBN (eBook): 978-3-030-28954-6
Series ISSN: 0302-9743
Series E-ISSN: 1611-3349
Copyright: Springer Nature Switzerland AG 2019
Publication information is being updated.

[Bibliometric charts for "Explainable AI: Interpreting, Explaining and Visualizing Deep Learning": impact factor (with subject ranking), online visibility (with subject ranking), citation count (with subject ranking), annual citations (with subject ranking), reader feedback (with subject ranking)]
Posted on 2025-3-22 06:54:58
Understanding Neural Networks via Feature Visualization: A Survey
…in machine learning enable a family of methods to synthesize preferred stimuli that cause a neuron in an artificial or biological brain to fire strongly. Those methods are known as Activation Maximization (AM) [.] or Feature Visualization via Optimization. In this chapter, we (1) review existing A…
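As a toy illustration of the Activation Maximization idea that chapter surveys, the sketch below gradient-ascends an input so that a single linear unit fires strongly, with a norm constraint standing in for the image regularizers real AM uses. The linear "unit" and every name here are illustrative assumptions, not code from the book:

```python
import numpy as np

def activation_maximization(w, steps=200, lr=0.1, max_norm=1.0):
    """Toy Activation Maximization: find an input x that makes the linear
    unit a(x) = w @ x fire strongly, keeping ||x|| <= max_norm.
    Real AM optimizes images through a deep network via backprop,
    with richer regularizers than a simple norm ball."""
    rng = np.random.default_rng(0)
    x = rng.normal(scale=0.01, size=w.shape)   # small random start
    for _ in range(steps):
        x = x + lr * w                          # gradient of w @ x w.r.t. x is w
        n = np.linalg.norm(x)
        if n > max_norm:
            x = x * (max_norm / n)              # project back onto the norm ball
    return x

w = np.array([1.0, -2.0, 0.5])
x_star = activation_maximization(w)             # converges toward w / ||w||
```

For a linear unit the preferred stimulus on the unit ball is simply the weight direction; for a deep network the same loop, run through backpropagation, yields the synthesized "preferred stimuli" the chapter describes.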
Posted on 2025-3-22 21:30:23
Explanations for Attributing Deep Neural Network Predictions
…ealthcare decision-making, there is a great need for … and … … of "why" an algorithm is making a certain prediction. In this chapter, we introduce (1) Meta-Predictors as Explanations, a principled framework for learning explanations for any black box algorithm, and (2) Meaningful Perturbations, an inst…
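A minimal sketch of the Meaningful Perturbations idea from that abstract: learn the smallest deletion mask that most reduces the model's score. The linear "model", the sparsity weight `lam`, and all names are illustrative assumptions, not the chapter's method in full:

```python
import numpy as np

def meaningful_perturbation(x, w, lam=0.5, lr=0.1, steps=300):
    """Toy Meaningful Perturbations: learn a mask m in [0, 1] that deletes
    (zeroes) as few input entries as possible while maximally reducing the
    score f(x) = w @ x of a linear 'model'.  Objective (minimized):
        f((1 - m) * x) + lam * ||m||_1
    The real method perturbs images (blur/noise) through a deep network."""
    m = np.zeros_like(x)               # m[i] = 1 means feature i is deleted
    for _ in range(steps):
        grad = -w * x + lam            # d/dm of the objective above
        m = np.clip(m - lr * grad, 0.0, 1.0)
    return m

x = np.array([2.0, 0.1, -1.0])
w = np.array([1.0, 1.0, 1.0])
mask = meaningful_perturbation(x, w)
# only feature 0 contributes more than lam to the score, so only it is masked
```

Features whose deletion would barely lower the score (or would raise it) stay unmasked because the sparsity penalty outweighs them; that is the sense in which the learned mask is an explanation of the prediction.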
Posted on 2025-3-23 05:21:49
Gradient-Based Attribution Methods
While several methods have been proposed to explain network predictions, the definition itself of explanation is still debated. Moreover, only a few attempts to compare explanation methods from a theoretical perspective have been made. In this chapter, we discuss the theoretical properties of several a…
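One property such theoretical comparisons examine is completeness: attributions should sum to the model's output. For a linear model the Gradient*Input attribution satisfies it exactly, as this toy sketch shows (the model, names, and numbers are illustrative assumptions):

```python
import numpy as np

def gradient_x_input(w, x):
    """Gradient*Input attribution for a linear model f(x) = w @ x.
    The gradient of f w.r.t. x is w everywhere, so the per-feature
    attribution is w * x, and the attributions sum exactly to f(x)."""
    return w * x

x = np.array([1.0, -2.0, 3.0])
w = np.array([0.5, 1.0, -1.0])
attr = gradient_x_input(w, x)
# attr = [0.5, -2.0, -3.0]; attr.sum() equals f(x) = -4.5
```

For nonlinear networks the gradient varies with x, and different attribution methods (saliency, Gradient*Input, integrated gradients, and others) diverge; comparing when and why they satisfy properties like completeness is the kind of analysis the chapter discusses.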