Model-Agnostic Methods for XAI: In this chapter, we start our journey through XAI model-agnostic methods, which are, as we said, potent techniques for producing explanations without relying on the internals of "opaque" ML models.
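To illustrate what "model-agnostic" means in practice, below is a minimal sketch of permutation feature importance, one such technique: the model is queried only through its prediction interface, and a feature's importance is estimated by how much the score drops when that feature is shuffled. The names `model`, `X`, `y`, and `metric` are placeholders for any fitted black-box model and scoring function; this is an illustrative sketch, not code from the book.

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Treat `model` as a black box: shuffle one feature at a time and
    record how much the score drops versus the unshuffled baseline."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])   # break the link between feature j and y
            drops.append(baseline - metric(y, model.predict(X_perm)))
        importances[j] = np.mean(drops)  # large average drop => important feature
    return importances
```

Because it only calls `model.predict`, the same function works unchanged for a linear model, a random forest, or a neural network, which is exactly the appeal of model-agnostic explanations.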

https://doi.org/10.1007/978-3-030-68640-6
Keywords: XAI; Artificial Intelligence; Machine Learning; intrinsic interpretable models; Shapley Values; Deep Taylor Decomposition

Adversarial Machine Learning and Explainability: An image that the network originally classifies correctly is, after a barely perceptible perturbation, classified by the same NN as a gibbon with 99.3% confidence. What is happening here? The first thought is that some mistake was made in designing or training the NN, but the point that will emerge from this chapter is that this misclassification is due to an adversarial attack.
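To make the mechanism concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), the kind of attack behind the gibbon example: each input pixel is nudged by a small step in the direction that increases the network's loss, so the image looks unchanged to a human while the prediction can flip. The PyTorch model, loss function, and `epsilon` value are illustrative assumptions, not the book's own code.

```python
import torch

def fgsm_attack(model, loss_fn, x, y_true, epsilon=0.007):
    """Fast Gradient Sign Method: one step of size epsilon along the sign
    of the loss gradient with respect to the input pixels."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y_true)          # loss of the current prediction
    loss.backward()                               # gradient w.r.t. the input image
    x_adv = x_adv + epsilon * x_adv.grad.sign()   # nudge every pixel to raise the loss
    return x_adv.clamp(0.0, 1.0).detach()         # keep pixels in the valid range
```

Even with a tiny `epsilon`, the signed-gradient step exploits the model's local linearity, which is why the perturbed image can be assigned a wrong class with very high confidence.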
Titlebook: Explainable AI with Python; Leonida Gianfagna, Antonio Di Cecco; Book, 2021; The Editor(s) (if applicable) and The Author(s), under exclusive …