传染 posted on 2025-3-23 18:02:02
… learnt by a genetic algorithm. The system has been applied to a breast cancer application domain. The results show that with our methodology we can improve on the results obtained with a case base in which the attributes were manually selected by physicians, saving physicians' work in the future.
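The snippet gives no implementation details, but the general idea of evolving an attribute selection for a case base can be sketched roughly as below. This is only an illustrative toy, not the paper's method: it assumes a binary attribute mask, leave-one-out 1-NN retrieval accuracy as the fitness, and truncation selection; the names `loo_accuracy` and `ga_select` are made up for the example.

```python
# Illustrative sketch (not the paper's implementation): a genetic algorithm that
# evolves a binary mask over case attributes, scoring each mask by leave-one-out
# 1-NN retrieval accuracy on the case base.
import random

def loo_accuracy(cases, labels, mask):
    """Leave-one-out 1-NN accuracy using only the attributes selected by `mask`."""
    idx = [i for i, m in enumerate(mask) if m]
    if not idx:
        return 0.0
    hits = 0
    for q in range(len(cases)):
        best, best_d = None, float("inf")
        for c in range(len(cases)):
            if c == q:
                continue
            d = sum((cases[q][i] - cases[c][i]) ** 2 for i in idx)
            if d < best_d:
                best, best_d = c, d
        hits += labels[best] == labels[q]
    return hits / len(cases)

def ga_select(cases, labels, pop_size=20, generations=30, p_mut=0.05):
    """Evolve an attribute mask that maximizes retrieval accuracy."""
    n_attr = len(cases[0])
    pop = [[random.randint(0, 1) for _ in range(n_attr)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda m: loo_accuracy(cases, labels, m), reverse=True)
        parents = scored[: pop_size // 2]                 # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_attr)             # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]  # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda m: loo_accuracy(cases, labels, m))
```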
CAND posted on 2025-3-24 13:51:34
… executions for meta-reasoning. We illustrate its benefits with experimental results from a system implementing our approach, called Meta-Darmok, in a real-time strategy game. The evaluation of Meta-Darmok shows that the system successfully adapts itself and that its performance improves through appropriate revision of the case base.
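As a rough illustration of the kind of meta-reasoning described (tracking executions and revising the case base), here is a hypothetical sketch; the class, thresholds, and revision rule are assumptions for the example, not Meta-Darmok's actual mechanism.

```python
# Illustrative sketch only (not Meta-Darmok): a meta-reasoner that records plan
# executions, detects cases that repeatedly fail, and revises the case base by
# removing them.
from collections import defaultdict

class MetaReasoner:
    def __init__(self, case_base, failure_threshold=0.6, min_uses=5):
        self.case_base = case_base                 # dict: case_id -> case data
        self.stats = defaultdict(lambda: [0, 0])   # case_id -> [uses, failures]
        self.failure_threshold = failure_threshold
        self.min_uses = min_uses

    def record_execution(self, case_id, succeeded):
        """Called after each episode with the outcome of the reused case."""
        uses_fails = self.stats[case_id]
        uses_fails[0] += 1
        uses_fails[1] += 0 if succeeded else 1

    def revise(self):
        """Drop cases whose observed failure rate is too high."""
        for case_id, (uses, fails) in list(self.stats.items()):
            if uses >= self.min_uses and fails / uses > self.failure_threshold:
                self.case_base.pop(case_id, None)   # revision: remove unreliable case
                del self.stats[case_id]
```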
Cougar posted on 2025-3-24 18:41:38
… the defined measures are applicable to any representation language for which a refinement lattice can be defined. We empirically evaluate our measures, comparing them to other measures in the literature on a variety of relational data sets, with very good results.
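For intuition only, here is a drastically simplified instance of the refinement-counting idea, where a description is a flat set of atomic features and each refinement step adds one feature. The paper's measures are defined over a general refinement lattice for relational representations; this toy does not attempt that and is purely an assumption-laden example.

```python
# Toy sketch of the counting idea behind refinement-lattice similarity.
# With flat feature sets, the anti-unification is the intersection and path
# lengths in the lattice are just set-size differences.
def lattice_similarity(a: set, b: set) -> float:
    anti_unifier = a & b                      # most specific common generalization
    steps_to_au = len(anti_unifier)           # refinements from the most general term
    steps_au_to_a = len(a - anti_unifier)     # extra refinements needed to reach a
    steps_au_to_b = len(b - anti_unifier)     # extra refinements needed to reach b
    total = steps_to_au + steps_au_to_a + steps_au_to_b
    return steps_to_au / total if total else 1.0

# Example: two cases flattened into atomic attribute-value features.
case1 = {"shape:round", "color:red", "size:small"}
case2 = {"shape:round", "color:blue", "size:small"}
print(lattice_similarity(case1, case2))       # 2 / (2 + 1 + 1) = 0.5
```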
BORE posted on 2025-3-25 00:10:30
Improving Reinforcement Learning by Using Case Based Heuristics
… An algorithm that incorporates CBR techniques into the Heuristically Accelerated Q-Learning algorithm is also proposed. Empirical evaluations were conducted in a simulator for the RoboCup Four-Legged Soccer Competition, and the results obtained show that with CB-HARL the agents learn faster than with either RL or HARL methods.
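A minimal sketch of how a case-based heuristic can bias Q-learning action selection, in the spirit of heuristically accelerated methods: the greedy choice ranks actions by Q(s, a) + xi * H(s, a), where H is a bonus on the action suggested by the most similar retrieved case. The case representation, similarity threshold, and constants below are assumptions for illustration, not the CB-HARL algorithm from the paper.

```python
# Illustrative sketch: Q-learning whose action selection is biased by a
# case-based heuristic H(s, a); the Q-update itself is unchanged.
import random
from collections import defaultdict

def cbr_heuristic(state, case_base, similarity, bonus=1.0, threshold=0.8):
    """H(state, .): a bonus on the action suggested by the best-matching case."""
    best_case = max(case_base, key=lambda c: similarity(state, c["state"]), default=None)
    H = defaultdict(float)
    if best_case and similarity(state, best_case["state"]) >= threshold:
        H[best_case["action"]] = bonus
    return H

def choose_action(Q, state, actions, case_base, similarity, xi=1.0, epsilon=0.1):
    """Epsilon-greedy over Q(s, a) + xi * H(s, a)."""
    if random.random() < epsilon:
        return random.choice(actions)
    H = cbr_heuristic(state, case_base, similarity)
    return max(actions, key=lambda a: Q[(state, a)] + xi * H[a])

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """Ordinary Q-learning update; the heuristic only affects action selection."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

# Usage assumes Q = defaultdict(float), a list of discrete actions, a case base
# of dicts like {"state": ..., "action": ...}, and a similarity(s1, s2) in [0, 1].
```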