Title: Transparency and Interpretability for Learned Representations of Artificial Neural Networks
Editor: Richard Meyes
Description: Artificial intelligence (AI) is a concept whose meaning and perception have changed considerably over the last decades. Starting off with individual and purely theoretical research efforts in the 1950s, AI has grown into a fully developed research field of modern times and may arguably emerge as one of the most important technological advancements of mankind. Despite these rapid technological advancements, some key questions revolving around the transparency, interpretability, and explainability of an AI's decision-making remain unanswered. Thus, a young research field known by the general term Explainable AI (XAI) has emerged from increasingly strict requirements for AI to be used in safety-critical or ethically sensitive domains. An important research branch of XAI develops methods that help facilitate a deeper understanding of the learned knowledge of artificial neural systems. This book presents a series of scientific studies that shed light on how to adopt an empirical, neuroscience-inspired approach to investigating a neural network's learned representations, in the same spirit as neuroscientific studies of the brain.
Publication date: Book, 2022
Keywords: Transparency; Interpretability; Explainability; Learned Representation; XAI; Explainable AI; Artificial Ne
Edition: 1
DOI: https://doi.org/10.1007/978-3-658-40004-0
ISBN (softcover): 978-3-658-40003-3
ISBN (eBook): 978-3-658-40004-0
Copyright: The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Fachmedien Wies