Titlebook: Machine Learning and Knowledge Extraction; 7th IFIP TC 5, TC 12 Andreas Holzinger,Peter Kieseberg,Edgar Weippl Conference proceedings 2023

Views: 11259 | Replies: 66
Posted on 2025-3-21 17:12:47
Title: Machine Learning and Knowledge Extraction
Subtitle: 7th IFIP TC 5, TC 12
Editors: Andreas Holzinger, Peter Kieseberg, Edgar Weippl
Series: Lecture Notes in Computer Science
Description: This LNCS-IFIP volume constitutes the refereed proceedings of the 7th IFIP TC 5, TC 12, WG 8.4, WG 8.9, WG 12.9 International Cross-Domain Conference, CD-MAKE 2023, held in Benevento, Italy, during August 28 – September 1, 2023. The 18 full papers presented were carefully reviewed and selected from 30 submissions. The conference focuses on an integrative machine learning approach, considering the importance of data science and visualization for the algorithmic pipeline, with a strong emphasis on privacy, data protection, safety and security.
Publication: Conference proceedings, 2023
Keywords: artificial intelligence; computer networks; computer science; computer systems; computer vision; cyber-in…
Edition: 1
DOI: https://doi.org/10.1007/978-3-031-40837-3
ISBN (softcover): 978-3-031-40836-6
ISBN (eBook): 978-3-031-40837-3
Series ISSN: 0302-9743
Series E-ISSN: 1611-3349
Copyright: IFIP International Federation for Information Processing 2023
Publication information is being updated.

[Bibliometric charts for Machine Learning and Knowledge Extraction (images not rendered): Impact Factor; Impact Factor subject ranking; Online visibility; Online visibility subject ranking; Citation count; Citation count subject ranking; Annual citations; Annual citations subject ranking; Reader feedback; Reader feedback subject ranking]
Poll (single choice, 0 participants): Perfect with Aesthetics: 0 votes (0%); Better Implies Difficulty: 0 votes (0%); Good and Satisfactory: 0 votes (0%); Adverse Performance: 0 votes (0%); Disdainful Garbage: 0 votes (0%)
Posted on 2025-3-22 00:07:42
"Domain-Specific Evaluation of Visual Explanations for Application-Grounded Facial Expression Recognition" (abstract excerpt): …show that the domain-specific evaluation is especially beneficial for challenging use cases such as facial expression recognition and provides application-grounded quality criteria that are not covered by standard evaluation methods. Our comparison of the domain-specific evaluation method with stand…
Posted on 2025-3-22 04:15:06
"Human-in-the-Loop Integration with Domain-Knowledge Graphs for Explainable Federated Deep Learning" (abstract excerpt): …Artificial Intelligence (xAI) methods, changing the datasets to create counterfactual explanations. The adapted datasets could influence the local model's characteristics and thereby create a federated version that distils their diverse knowledge in a centralized scenario. This work demonstrates the feas…
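Side note for readers new to the federated part of this excerpt: local models trained on (possibly counterfactually adapted) datasets are combined into a centralized, federated version. Purely as an illustration of that aggregation step, and not the chapter's actual method, here is a minimal FedAvg-style weight-averaging sketch in Python; the function name `federated_average`, the client data, and the layer shapes are all hypothetical.

```python
# Minimal FedAvg-style aggregation sketch (illustration only, not the chapter's method).
# Each client trains locally on its own dataset and sends back its model weights;
# the server averages them per layer, weighted by local dataset size.
import numpy as np

def federated_average(client_weights, client_sizes):
    """Average per-layer weight arrays across clients, weighted by dataset size."""
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    averaged = []
    for layer in range(n_layers):
        acc = np.zeros_like(client_weights[0][layer], dtype=float)
        for weights, size in zip(client_weights, client_sizes):
            acc += (size / total) * weights[layer]
        averaged.append(acc)
    return averaged

# Hypothetical usage: three clients, each with a two-layer model.
rng = np.random.default_rng(0)
clients = [[rng.normal(size=(4, 3)), rng.normal(size=(3,))] for _ in range(3)]
sizes = [120, 80, 200]
global_model = federated_average(clients, sizes)
print([w.shape for w in global_model])  # [(4, 3), (3,)]
```

In a real setting the averaged weights would be broadcast back to the clients for the next round of local training.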
Posted on 2025-3-22 07:59:39
Posted on 2025-3-22 11:10:51
"Hyper-Stacked: Scalable and Distributed Approach to AutoML for Big Data" (abstract excerpt): …Hyper-Stacked, a novel AutoML component built natively on Apache Spark. Hyper-Stacked combines multi-fidelity hyperparameter optimisation with the Super Learner stacking technique to produce a strong and diverse ensemble. Integration with Spark allows for a parallelised and distributed approach, capa…
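Hyper-Stacked itself is built natively on Apache Spark; purely to illustrate the Super Learner stacking idea the excerpt mentions, here is a minimal single-machine sketch using scikit-learn. The base learners, meta-learner, and synthetic data are arbitrary choices for illustration, not taken from the chapter.

```python
# Single-machine illustration of Super Learner-style stacking (not the Spark-based
# Hyper-Stacked implementation): out-of-fold predictions from diverse base learners
# are fed to a meta-learner that learns how to combine them.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

base_learners = [
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("svm", SVC(probability=True, random_state=0)),
]
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,  # meta-learner is trained on 5-fold out-of-fold predictions
)
stack.fit(X_train, y_train)
print("held-out accuracy:", stack.score(X_test, y_test))
```

The `cv=5` argument is what makes this Super Learner-like: the meta-learner is fit on out-of-fold predictions rather than on predictions from base models that have already seen the same data.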
Posted on 2025-3-22 14:41:09
Posted on 2025-3-22 17:09:49
Posted on 2025-3-23 00:29:15
"Let Me Think! Investigating the Effect of Explanations Feeding Doubts About the AI Advice" (abstract excerpt): …with pixel attribution maps. These cases were associated with the same AI advice for the base case, but one case was accurate while the other was erroneous with respect to the ground truth. While the introduction of this support system did not significantly enhance diagnostic accuracy, it was highly va…
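For readers unfamiliar with the term, a pixel attribution map highlights which input pixels most influenced the model's output. The sketch below shows one simple way to compute such a map (vanilla gradients) in PyTorch; the model, input, and attribution method are placeholders for illustration and are not the chapter's setup.

```python
# Minimal vanilla-gradient pixel attribution sketch (general idea only, not the
# attribution method used in the chapter): the gradient of the predicted class
# score w.r.t. the input pixels indicates which pixels drive the model's advice.
import torch
import torch.nn as nn

model = nn.Sequential(  # placeholder classifier standing in for the diagnostic model
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),
)
model.eval()

image = torch.rand(1, 1, 64, 64, requires_grad=True)  # placeholder input image
logits = model(image)
predicted_class = logits.argmax(dim=1).item()
# Backpropagate the score of the predicted class to the input pixels.
logits[0, predicted_class].backward()
attribution = image.grad.abs().squeeze()         # (64, 64) saliency map
attribution = attribution / attribution.max()    # normalise to [0, 1] for display
print(attribution.shape, float(attribution.max()))
```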
Posted on 2025-3-23 03:18:48
Posted on 2025-3-23 08:14:59
"The Split Matters: Flat Minima Methods for Improving the Performance of GNNs" (abstract excerpt): …can improve the performance of GNN models by over 2 points, if the train-test split is randomized. Following Shchur et al., randomized splits are essential for a fair evaluation of GNNs, as other (fixed) splits like “Planetoid” are biased. Overall, we provide important insights for improving and fa…
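The excerpt's point about evaluation can be made concrete with a small sketch of the randomized-split protocol in the spirit of Shchur et al.: generate several random train/val/test node masks and report mean and standard deviation of test accuracy, instead of relying on a single fixed split such as “Planetoid”. The split fractions, the number of repetitions, and the `train_and_test_fn` hook below are illustrative assumptions, not the chapter's exact configuration.

```python
# Sketch of randomized train/val/test node splits for GNN evaluation
# (illustrative sizes; a flat-minima training routine such as SAM or SWA
# would be plugged in via train_and_test_fn).
import numpy as np

def random_node_split(num_nodes, train_frac=0.6, val_frac=0.2, seed=0):
    """Return boolean train/val/test masks over node indices for one random split."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)
    n_train = int(train_frac * num_nodes)
    n_val = int(val_frac * num_nodes)
    masks = {name: np.zeros(num_nodes, dtype=bool) for name in ("train", "val", "test")}
    masks["train"][perm[:n_train]] = True
    masks["val"][perm[n_train:n_train + n_val]] = True
    masks["test"][perm[n_train + n_val:]] = True
    return masks

def evaluate_over_splits(train_and_test_fn, num_nodes, n_splits=10):
    """Train and test once per random split; report mean and std of test accuracy."""
    accs = [train_and_test_fn(random_node_split(num_nodes, seed=s)) for s in range(n_splits)]
    return float(np.mean(accs)), float(np.std(accs))

# Hypothetical usage on a graph with 1000 nodes:
masks = random_node_split(num_nodes=1000, seed=42)
print(masks["train"].sum(), masks["val"].sum(), masks["test"].sum())  # 600 200 200
```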