Boston Studies in Applied Economics
Peter B. Doeringer — …agents using two different algorithms, which automatically generate different explanations for agent actions. A quantitative analysis of three user groups (n = 20, 25, 20), in which users detect the bias in agents' decisions for each explanation type across 15 test data cases, is conducted for three differ…
Wellford W. Wilms — …exploitation of ML-based approaches has generated opaque systems, which are no longer socially acceptable, calling for eXplainable AI (XAI). This problem is exacerbated when intelligent systems approach safety-critical scenarios. This paper highlights the need for on-time explainability. In particul…
Donna E. Olszewski