Titlebook: Engineering Dependable and Secure Machine Learning Systems; Third International Conference proceedings; Onn Shehory, Eitan Farchi, Guy Barash

Thread starter: Coronary-Artery
Posted on 2025-3-27 00:05:54
Principal Component Properties of Adversarial Samples: ...a benign image that can easily fool trained neural networks, posing a significant risk to their commercial deployment. In this work, we analyze adversarial samples through the lens of their contributions to the principal components of the image, which differs from prior works, in which authors per...
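The abstract above examines how adversarial samples project onto the principal components of images. A minimal sketch of that kind of analysis, assuming synthetic stand-in data rather than the paper's actual datasets or procedure:

```python
# Illustrative sketch (not the authors' exact method): compare how a clean
# image and a perturbed image distribute their energy across the principal
# components of the clean data. All data here is synthetic stand-in data.
import numpy as np

rng = np.random.default_rng(0)
# Stand-in "clean" images living in a low-dimensional subspace of pixel space.
latent = rng.normal(size=(200, 8))
mixing = rng.normal(size=(8, 64))
clean = latent @ mixing
adversarial = clean[0] + 0.5 * rng.normal(size=64)  # stand-in perturbed image

# Principal components of the clean data.
mean = clean.mean(axis=0)
_, _, components = np.linalg.svd(clean - mean, full_matrices=False)

def pc_contributions(x):
    """Coefficients of an image in the clean data's principal-component basis."""
    return components @ (x - mean)

def tail_energy(coeffs, k=8):
    """Fraction of energy falling in the trailing (low-variance) components."""
    return np.sum(coeffs[k:] ** 2) / np.sum(coeffs ** 2)

clean_coeffs = pc_contributions(clean[0])
adv_coeffs = pc_contributions(adversarial)
print(tail_energy(clean_coeffs), tail_energy(adv_coeffs))
```

In this toy setup the clean image lies in the span of the leading components, while the perturbation spreads energy into the low-variance tail, which is one signal such an analysis can exploit.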
Posted on 2025-3-27 08:50:46
Density Estimation in Representation Space to Predict Model Uncertainty: ...their training dataset. We propose a novel and straightforward approach to estimate prediction uncertainty in a pre-trained neural network model. Our method estimates the training data density in representation space for a novel input. A neural network model then uses this information to determine whet...
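The core idea above, estimating training-data density in representation space and treating low density as a sign of uncertainty, can be sketched as follows. This uses a Gaussian kernel density estimate as a stand-in for the paper's estimator; `representations` and the inputs are illustrative assumptions:

```python
# Sketch of density-based uncertainty: score a novel input's representation
# under a KDE fit on training representations; low log-density suggests the
# model is operating outside its training distribution. Synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(1)
representations = rng.normal(size=(500, 16))  # stand-in penultimate features

def log_density(x, data, bandwidth=1.0):
    """Gaussian-KDE log-density of a representation x under the training data."""
    d = data.shape[1]
    sq = np.sum((data - x) ** 2, axis=1) / (2 * bandwidth ** 2)
    log_kernels = -sq - 0.5 * d * np.log(2 * np.pi * bandwidth ** 2)
    return np.logaddexp.reduce(log_kernels) - np.log(len(data))

in_dist = representations[0]          # representation near the training data
far_away = in_dist + 10.0             # representation far from the training data
print(log_density(in_dist, representations),
      log_density(far_away, representations))
```

A downstream model (or a simple threshold) can then map the density score to an accept/abstain decision.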
Posted on 2025-3-27 11:14:04
Automated Detection of Drift in Deep Learning Based Classifiers Performance Using Network Embeddings: ...ly sampled test set is used to estimate the performance (e.g., accuracy) of the neural network during deployment time. The performance on the test set is used to project the performance of the neural network at deployment time under the implicit assumption that the data distribution of the test set...
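The assumption flagged above, that deployment data matches the test-set distribution, can be checked by comparing embedding distributions with a two-sample statistic. A minimal sketch using maximum mean discrepancy (the data, kernel width, and any threshold are assumptions for the example, not the paper's method):

```python
# Illustrative drift check on network embeddings: compare the held-out test
# set's embeddings against a deployment window using a biased MMD^2 estimate
# with an RBF kernel. All embeddings here are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(2)
test_emb = rng.normal(size=(300, 8))               # test-set embeddings
deploy_same = rng.normal(size=(300, 8))            # deployment data, no drift
deploy_drift = rng.normal(loc=2.0, size=(300, 8))  # deployment data, drifted

def mmd2(x, y, gamma=0.1):
    """Biased squared maximum mean discrepancy with an RBF kernel."""
    def k(a, b):
        sq = np.sum(a ** 2, 1)[:, None] + np.sum(b ** 2, 1)[None, :] - 2 * a @ b.T
        return np.exp(-gamma * sq)
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

print(mmd2(test_emb, deploy_same), mmd2(test_emb, deploy_drift))
```

When the statistic on a deployment window exceeds what is seen between two splits of the test set itself, the projected test-set performance is no longer trustworthy.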
Posted on 2025-3-27 19:29:28
Dependable Neural Networks for Safety Critical Tasks: ...perform safely in novel scenarios. It is challenging to verify neural networks because their decisions are not explainable, they cannot be exhaustively tested, and finite test samples cannot capture the variation across all operating conditions. Existing work seeks to train models robust to new sce...
Posted on 2025-3-28 05:32:03
Neue Entwicklungen und Zukunftsperspektiven (New Developments and Future Perspectives): ...TSRB and MS-COCO. Our initial results suggest that using an attention mask leads to improved robustness. On the adversarially trained classifiers, we see an adversarial robustness increase of over 20% on MS-COCO.
Posted on 2025-3-28 11:58:23
Technischer Lehrgang: Hydraulische Systeme (Technical Course: Hydraulic Systems): ...performance assessment. Here we demonstrate a novel technique, called IBM FreaAI, which automatically extracts explainable feature slices for which the ML solution's performance is statistically significantly worse than the average. We demonstrate results of evaluating ML classifier models on seven o...
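The slice-extraction idea described above can be illustrated with a toy scan over single-feature value ranges, flagging slices whose accuracy falls well below the overall average. The data, binning, and gap threshold are illustrative assumptions, not FreaAI's actual algorithm:

```python
# Toy sketch of feature-slice analysis: simulate a model that does poorly on
# one region of a feature's range, then scan equal-width slices and report
# those whose accuracy is markedly below the overall average.
import numpy as np

rng = np.random.default_rng(3)
feature = rng.uniform(0, 1, size=1000)
# Simulated per-sample correctness: the model struggles when feature > 0.8.
correct = np.where(feature > 0.8,
                   rng.random(1000) < 0.5,    # weak region: ~50% accuracy
                   rng.random(1000) < 0.95)   # elsewhere: ~95% accuracy

overall_acc = correct.mean()
bins = np.linspace(0, 1, 6)                   # five equal-width feature slices
weak_slices = []
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (feature >= lo) & (feature < hi)
    acc = correct[mask].mean()
    if overall_acc - acc > 0.1:               # flag slices well below average
        weak_slices.append((round(lo, 1), round(hi, 1), round(acc, 2)))
print(overall_acc, weak_slices)
```

A real slice finder would also test statistical significance of each gap and search combinations of features, but the scan above captures the basic shape of the output: human-readable feature ranges where the model underperforms.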