Title: Receptors in the Developing Nervous System; Volume 2: Neurotransmitters. Ian S. Zagon, Patricia J. McLaughlin. Book, 1993, Springer Science+Business Media.

Thread starter: tricuspid-valve
Posted 2025-3-23 18:20:25
Paul H. Robinson, John D. Stephenson, Timothy H. Moran
Posted 2025-3-24 07:53:30
Rebecca M. Pruss
Posted 2025-3-24 11:10:28
F. Javier Garcia-Ladona, Guadalupe Mengod, José M. Palacios
Posted 2025-3-24 18:09:09
Sandra E. Loughlin, Frances M. Leslie
Posted 2025-3-24 19:38:56
Edythe D. London, Stephen R. Zukin
Posted 2025-3-25 00:50:57
Ann Tempel