Title: Computer Games; 5th Workshop on Comp… | Editors: Tristan Cazenave, Mark H.M. Winands, Julian Togelius | Conference proceedings, 2017, Springer International

Thread starter: emanate
Posted 2025-3-28 21:56:50 | Show all posts
Taxonomy Matching Using Background Knowledge
…ystem trains a Q-network capable of strong play with no search. After two weeks of Q-learning, NeuroHex achieves win rates of 20.4% as first player and 2.1% as second player against a 1 s/move version of MoHex, the current ICGA Olympiad Hex champion. Our data suggest that further improvement may be possible with more training time.
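The Q-learning loop behind such an agent can be illustrated with a tabular sketch. NeuroHex itself trains a deep Q-network on Hex positions; the toy chain MDP, constants, and environment below are invented purely for illustration, but the update rule is the standard Q-learning one.

```python
import random

# Toy 1-D chain MDP: states 0..4, reaching state 4 yields reward 1.
# (Illustrative stand-in for Hex; hyperparameters are arbitrary.)
N_STATES = 5
ACTIONS = (-1, +1)          # move left or right (clamped at the ends)
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.3

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    s2 = min(max(s + a, 0), N_STATES - 1)
    done = s2 == N_STATES - 1
    return s2, (1.0 if done else 0.0), done

random.seed(0)
for _ in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        best_next = 0.0 if done else max(Q[(s2, a2)] for a2 in ACTIONS)
        # Q-learning temporal-difference update
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2
# After training, the greedy policy moves right from every non-terminal state.
```

A deep Q-network replaces the table `Q` with a function approximator over board positions, but the target `r + GAMMA * max_a' Q(s', a')` is the same.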
Posted 2025-3-29 03:28:24 | Show all posts
Learning from the Memory of Atari 2600
…posed in [.] and received comparable results in all considered games. Quite surprisingly, in the case of Seaquest we were able to train RAM-only agents that behave better than the benchmark screen-only agent. Mixing screen and RAM did not lead to improved performance compared with the screen-only and RAM-only agents.
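For context, the Atari 2600 has 128 bytes of console RAM, which a RAM-only agent uses as its observation instead of screen pixels. A minimal sketch of turning that RAM into a feature vector (the scaling choice is an assumption, not the paper's exact preprocessing):

```python
def ram_features(ram_bytes):
    """Scale the 128 Atari 2600 RAM bytes to floats in [0, 1].

    This normalization is a common, but assumed, preprocessing step;
    the paper may use a different encoding.
    """
    assert len(ram_bytes) == 128, "Atari 2600 exposes exactly 128 RAM bytes"
    return [b / 255.0 for b in ram_bytes]

# Hypothetical RAM snapshot used only to exercise the function.
feats = ram_features(bytes(range(128)))
```

A screen+RAM agent would simply concatenate this vector with the flattened pixel features before the first network layer.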
Posted 2025-3-29 10:55:09 | Show all posts
Clustering-Based Online Player Modeling
…lay tendencies. The models can then be used to play the game or for analysis to identify how different players react to separate aspects of game states. The method is demonstrated on a tablet-based trajectory-generation game called ….
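A minimal sketch of the clustering step, assuming each player is summarized by hand-picked trajectory features. The 2-D features and the synthetic "cautious"/"aggressive" play styles below are hypothetical, not the paper's data or pipeline:

```python
import math
import random

def kmeans(points, k, iters=20):
    """Plain k-means over tuples; deterministic init for the sketch."""
    centers = [points[0], points[-1]]  # assumes k == 2 for this toy demo
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centers[i]))
            clusters[nearest].append(p)
        # recompute each center as the mean of its cluster (keep old if empty)
        centers = [tuple(sum(c) / len(c) for c in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# Synthetic players: feature = (avg. speed, trajectory curvature), both invented.
rng = random.Random(42)
cautious = [(rng.gauss(1.0, 0.1), rng.gauss(0.2, 0.05)) for _ in range(30)]
aggressive = [(rng.gauss(3.0, 0.1), rng.gauss(0.8, 0.05)) for _ in range(30)]
centers, clusters = kmeans(cautious + aggressive, k=2)
```

Online use would assign each new player to the nearest center as their trajectories stream in, then drive play or analysis from that cluster's model.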
Posted 2025-3-29 15:10:03 | Show all posts
A General Approach of Game Description Decomposition for General Game Playing
…se serial games composed of two subgames and games with compound moves while, unlike previous works, avoiding reliance on syntactic elements that can be eliminated simply by rewriting the GDL rules. We tested our program on 40 games, compound or not, and successfully decomposed 32 of them in less than 5 s.
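One common way to detect independent subgames, sketched here under strong simplifying assumptions, is to link each move to the fluents it affects and take connected components of the resulting graph. The paper works on full GDL rules; the toy move-fluent relation below is invented for illustration:

```python
from collections import defaultdict

def components(edges):
    """Connected components of an undirected graph given as (a, b) edges."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    seen, comps = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(graph[n] - comp)
        seen |= comp
        comps.append(comp)
    return comps

# Hypothetical composite game: tic-tac-toe moves touch board/control
# fluents, nim moves touch the heap fluent; the two never interact.
edges = [("mark(1,1)", "cell(1,1)"), ("mark(1,2)", "cell(1,2)"),
         ("mark(1,1)", "control"), ("mark(1,2)", "control"),
         ("take(2)", "heap"), ("take(3)", "heap")]
comps = components(edges)
```

Each component then corresponds to a candidate subgame that can be searched separately.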
Posted 2025-3-29 15:49:14 | Show all posts
1865-0929 …Workshop, CGW 2016, and the 5th Workshop on General Intelligence in Game-Playing Agents, GIGA 2016, held in conjunction with the 25th International Joint Conference on Artificial Intelligence, IJCAI 2016, in New York, USA, in July 2016. The 12 revised full papers presented were carefully reviewed and selected…
Posted 2025-3-29 21:02:53 | Show all posts
Matching Evaluations and Datasets
…yout policy online that dynamically adapts the playouts to the problem at hand. We propose to enhance NRPA using more selectivity in the playouts. The idea is applied to three different problems: Bus regulation, SameGame, and Weak Schur numbers. We improve on standard NRPA for all three problems.
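For readers unfamiliar with Nested Rollout Policy Adaptation (NRPA, Rosin 2011), a compact sketch on a toy problem may help. The toy objective and all constants are invented, and the paper's selective-playout enhancement is not reproduced here:

```python
import math
import random

# Toy single-player problem: choose a digit 0-9 at each of 6 steps;
# the score is the digit sum (optimum 54). Stand-in for SameGame etc.
LENGTH, MOVES = 6, list(range(10))

def playout(policy):
    """Sample a full sequence, softmax-weighted by the policy."""
    seq = []
    for step in range(LENGTH):
        weights = [math.exp(policy.get((step, m), 0.0)) for m in MOVES]
        seq.append(random.choices(MOVES, weights=weights)[0])
    return sum(seq), seq

def adapt(policy, seq, alpha=1.0):
    """Shift the policy toward the best sequence found (Rosin's update)."""
    for step, best in enumerate(seq):
        z = sum(math.exp(policy.get((step, m), 0.0)) for m in MOVES)
        for m in MOVES:
            p = math.exp(policy.get((step, m), 0.0)) / z
            policy[(step, m)] = policy.get((step, m), 0.0) + alpha * ((m == best) - p)

def nrpa(level, policy, iters=30):
    if level == 0:
        return playout(policy)
    best_score, best_seq = -1, None
    for _ in range(iters):
        score, seq = nrpa(level - 1, dict(policy), iters)  # child adapts a copy
        if score > best_score:
            best_score, best_seq = score, seq
        adapt(policy, best_seq)
    return best_score, best_seq

random.seed(1)
score, seq = nrpa(2, {}, iters=30)
```

The selectivity proposed in the paper would constrain which moves the playout is allowed to sample; here every move is always legal.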