
Title: Explainable Human-AI Interaction: A Planning Perspective. Sarath Sreedharan, Anagha Kulkarni, Subbarao Kambhampati. Book, 2022, Springer Nature Switzerland.

Thread starter: 断岩
Posted 2025-3-26 22:14:30
Explanation as Model Reconciliation: …explanations. Rather than force the robot to choose behaviors that are inherently explicable in the human model, here we will let the robot choose a behavior optimal in its model and use communication to address the central reason why the human is confused about the behavior in the first place, i.e., …
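The core idea above can be sketched in code. This is a minimal toy illustration, not the book's algorithm: models are assumed to be flat sets of features, and `is_optimal` is a hypothetical oracle that says whether the robot's plan is optimal under a given model. A minimal explanation is then the smallest set of model updates that reconciles the human's model with the robot's.

```python
from itertools import combinations

def minimal_explanation(robot_model, human_model, is_optimal):
    """Smallest set of model edits that makes the robot's plan optimal
    in the human's updated model (toy model-reconciliation search).

    robot_model, human_model: frozensets of model features (assumed encoding).
    is_optimal(model): hypothetical oracle -- True if the robot's chosen
    plan is optimal under `model`.
    """
    # Candidate edits: features the human lacks, or wrongly holds.
    missing = robot_model - human_model
    extra = human_model - robot_model
    diffs = [("add", f) for f in sorted(missing)] + \
            [("remove", f) for f in sorted(extra)]

    # Enumerate edit subsets smallest-first, so the first hit is minimal.
    for k in range(len(diffs) + 1):
        for subset in combinations(diffs, k):
            updated = set(human_model)
            for op, f in subset:
                if op == "add":
                    updated.add(f)
                else:
                    updated.discard(f)
            if is_optimal(frozenset(updated)):
                return list(subset)  # minimal reconciliation explanation
    return None
```

The brute-force subset enumeration is exponential; it only serves to make the definition of "minimal explanation" concrete.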
Posted 2025-3-27 04:03:31
Acquiring Mental Models for Explanations: …strong assumptions. In particular, the setting assumes that the human's model of the robot is known exactly upfront. In this chapter, we will look at how we can relax this assumption and see how we can perform model reconciliation in scenarios where the robot has progressively less information about …
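One way to relax the known-model assumption, sketched here under the same toy encoding as before (models as feature sets, a hypothetical `is_optimal` oracle): instead of a single human model, the robot holds a set of candidate models the human might plausibly have, and searches for the smallest edit set that works for every candidate.

```python
from itertools import combinations

def _apply(model, edits):
    """Apply ('add'/'remove', feature) edits to a feature-set model."""
    updated = set(model)
    for op, f in edits:
        if op == "add":
            updated.add(f)
        else:
            updated.discard(f)
    return frozenset(updated)

def robust_explanation(robot_model, candidate_models, is_optimal):
    """Smallest edit set making the robot's plan optimal in *every*
    candidate human model (toy sketch of reconciliation under model
    uncertainty; `is_optimal` is a hypothetical oracle)."""
    diffs = set()
    for hm in candidate_models:
        diffs |= {("add", f) for f in robot_model - hm}
        diffs |= {("remove", f) for f in hm - robot_model}
    diffs = sorted(diffs)
    for k in range(len(diffs) + 1):
        for subset in combinations(diffs, k):
            if all(is_optimal(_apply(hm, subset)) for hm in candidate_models):
                return list(subset)
    return None
```

Requiring the explanation to succeed for all candidates trades conciseness for robustness: the less the robot knows about the human, the larger the candidate set and, typically, the explanation.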
Posted 2025-3-27 06:49:42
Balancing Communication and Behavior: …model of the robot. We have been quantifying some of the interaction between the behavior and the human's model in terms of three interpretability scores, each of which corresponds to a desirable property one would expect the robot behavior to satisfy in cooperative scenarios. With these measures de…
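In this line of work the three interpretability scores are commonly called explicability, legibility, and predictability. As a toy illustration of the first (this is an assumption-laden sketch, not the book's formulation), explicability can be scored as how far the robot's plan is from the closest plan the human expects, with plans encoded as action tuples:

```python
def plan_distance(p, q):
    """Positional mismatch count between two action sequences."""
    n = max(len(p), len(q))
    return sum(1 for i in range(n)
               if i >= len(p) or i >= len(q) or p[i] != q[i])

def explicability(robot_plan, human_expected_plans):
    """Toy explicability score: 0 when the plan matches something the
    human expects; more negative the further it deviates."""
    return -min(plan_distance(robot_plan, q) for q in human_expected_plans)
```

Under a score like this, "balancing communication and behavior" becomes a trade-off: either pick a plan with a higher explicability score, or keep the optimal plan and pay a communication cost to update the human's expectations.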
Posted 2025-3-27 10:25:01
Explaining in the Presence of Vocabulary Mismatch: …This suggests that the human and the robot share a common vocabulary that can be used to describe the model. However, this cannot be guaranteed unless the robots are using models specified by an expert. Since many modern AI systems rely on learned models, they may use representatio…
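A common way to bridge such a vocabulary gap, sketched here with hypothetical names: learn one classifier per human concept, then describe an internal state by listing which concepts hold in it. Here `concept_classifiers` is an assumed mapping from concept name to a boolean predicate over internal states.

```python
def describe_in_human_vocab(state, concept_classifiers):
    """Translate an internal state into the human's vocabulary by
    evaluating learned concept classifiers (toy sketch; in practice each
    predicate would be a trained model, not a hand-written lambda)."""
    return sorted(name for name, holds in concept_classifiers.items()
                  if holds(state))
```

Any explanation built this way is only as faithful as the classifiers themselves, which is exactly why the mismatch setting is harder than the shared-vocabulary one.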
Posted 2025-3-27 19:20:14
Applications: …in this chapter will explicitly model the human's mental model of the task and, among other things, use it to generate explanations. In particular, we will look at two broad application domains: one where the systems are designed for collaborative decision-making, i.e., systems designed to help user c…
Posted 2025-3-28 02:25:21
…ions when needed. The authors draw from several years of research in their lab to discuss how an AI agent can use these mental models to either conform to human … ISBN 978-3-031-03757-3, e-ISBN 978-3-031-03767-2. Series ISSN 1939-4608, Series e-ISSN 1939-4616.
Posted 2025-3-28 07:01:13
https://doi.org/10.1007/978-0-387-30441-0
…nication of objectives might not always be suitable. For instance, the … and … of explicit communication may require additional thought. Further, several other aspects like the cost of communication (in terms of resources or time) and delay in communication (communication signals may take time to reach th…
Posted 2025-3-28 12:00:44
An Overview of Stochastic Approximation: …plan is only limited by the agent's ability to effectively explain it. In this chapter, in addition to introducing the basic framework of explanation as model reconciliation under a certain set of assumptions, we will also look at several types of model reconciliation explanations and study some of th…