开始发作 posted on 2025-3-23 10:27:29

http://reply.papertrans.cn/83/8236/823568/823568_11.png

有发明天才 posted on 2025-3-23 16:26:38

http://reply.papertrans.cn/83/8236/823568/823568_12.png

不幸的人 posted on 2025-3-23 18:20:25

Paul H. Robinson, John D. Stephenson, Timothy H. Moran: …different conditions, manipulating the transparency in a team. The results showed an interaction effect between the agents' strategy and transparency on trust, group identification, and human-likeness. Our results suggest that transparency has a positive effect in terms of people's perception of trust, …

Flat-Feet posted on 2025-3-24 01:00:29

http://reply.papertrans.cn/83/8236/823568/823568_14.png

bile648 posted on 2025-3-24 02:24:43

http://reply.papertrans.cn/83/8236/823568/823568_15.png

SHOCK posted on 2025-3-24 07:53:30

Rebecca M. Pruss: …ng exploitation of ML-based approaches generated opaque systems, which are nowadays no longer socially acceptable, calling for eXplainable AI (XAI). Such a problem is exacerbated when IS tend to approach safety-critical scenarios. This paper highlights the need for on-time explainability. In particular, …

Pander posted on 2025-3-24 11:10:28

F. Javier Garcia-Ladona, Guadalupe Mengod, José M. Palacios: …agents using two different algorithms, which automatically generate different explanations for agent actions. Quantitative analysis of three user groups (n = 20, 25, 20), in which users detect the bias in agents' decisions for each explanation type across 15 test data cases, is conducted for three differ…

Cpap155 posted on 2025-3-24 18:09:09

Sandra E. Loughlin, Frances M. Leslie

对手 posted on 2025-3-24 19:38:56

Edythe D. London, Stephen R. Zukin

AMITY posted on 2025-3-25 00:50:57

Ann Tempel
View full version: Titlebook: Receptors in the Developing Nervous System; Volume 2 Neurotransm Ian S. Zagon, Patricia J. McLaughlin Book 1993 Springer Science+Business Me