变形 posted on 2025-3-27 00:05:54

Principal Component Properties of Adversarial Samples: …a benign image that can easily fool trained neural networks, posing a significant risk to their commercial deployment. In this work, we analyze adversarial samples through the lens of their contributions to the principal components of the image, which differs from prior works, in which authors per…
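The idea of inspecting an image's contributions to principal components can be sketched as follows. This is a toy illustration on synthetic low-rank data, not the authors' actual procedure; the noise-style "perturbation" and all names are assumptions made here for the example:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Stand-in data: 200 flattened 8x8 "images" lying near an 8-dim subspace
# (the paper works with real image datasets; this is only illustrative).
X = rng.normal(size=(200, 8)) @ rng.normal(size=(8, 64)) \
    + 0.01 * rng.normal(size=(200, 64))

pca = PCA(n_components=64).fit(X)

def component_energy(img, pca):
    """Fraction of an image's energy carried by each principal component."""
    coeffs = pca.transform(img.reshape(1, -1))[0]
    energy = coeffs ** 2
    return energy / energy.sum()

benign = X[0]
# Crude adversarial-style perturbation: small white noise added to the image.
perturbed = benign + 0.5 * rng.normal(size=64)

e_benign = component_energy(benign, pca)
e_perturbed = component_energy(perturbed, pca)

# A perturbation shows up as extra energy in the low-variance (trailing)
# components, where natural images contribute almost nothing.
tail_benign = e_benign[8:].sum()
tail_perturbed = e_perturbed[8:].sum()
```

Under this setup the perturbed image puts a visibly larger fraction of its energy into the trailing components, which is the kind of signature such an analysis looks for.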

Forage饲料 posted on 2025-3-27 08:50:46

Density Estimation in Representation Space to Predict Model Uncertainty: …their training dataset. We propose a novel and straightforward approach to estimating prediction uncertainty in a pre-trained neural network model. Our method estimates the training-data density in representation space for a novel input. A neural network model then uses this information to determine whether…
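The pipeline this snippet describes, estimating training-data density in representation space and flagging low-density inputs as uncertain, might be sketched like this. It is a hedged toy version using a kernel density estimate on stand-in features; the feature extractor, bandwidth, and threshold rule are all assumptions, not details from the paper:

```python
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(1)
# Stand-in "representation space": pretend these are penultimate-layer
# features of the training set from a trained network.
train_feats = rng.normal(size=(500, 16))

# Fit a density model over the training representations.
kde = KernelDensity(kernel="gaussian", bandwidth=0.75).fit(train_feats)

# Calibrate a threshold from the training densities themselves
# (5th percentile is an arbitrary illustrative choice).
threshold = np.percentile(kde.score_samples(train_feats), 5)

def uncertainty_flag(feat, kde, threshold):
    """Flag a prediction as uncertain when its feature vector falls in a
    low-density region of the training representation space."""
    log_density = kde.score_samples(feat.reshape(1, -1))[0]
    return log_density < threshold

# A point far from everything the model was trained on gets flagged.
far_away = np.full(16, 8.0)
```

In a real system the features would come from the deployed model's intermediate layer, and the threshold would be tuned on held-out data.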

Glaci冰 posted on 2025-3-27 11:14:04

Automated Detection of Drift in Deep Learning Based Classifiers Performance Using Network Embedding: …ly sampled test set is used to estimate the performance (e.g., accuracy) of the neural network during deployment time. The performance on the test set is used to project the performance of the neural network at deployment time, under the implicit assumption that the data distribution of the test set…
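One common way to test the implicit assumption mentioned here, that deployment data follows the test-set distribution, is a two-sample statistic computed on network embeddings. The sketch below uses maximum mean discrepancy (MMD) with an RBF kernel on synthetic embeddings; MMD is chosen for illustration and is not claimed to be the paper's method:

```python
import numpy as np

def rbf_mmd2(X, Y, gamma=0.5):
    """Biased estimate of squared maximum mean discrepancy with an RBF
    kernel: a standard two-sample statistic over embedding sets."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(2)
test_emb = rng.normal(size=(200, 8))        # embeddings of the held-out test set
deploy_same = rng.normal(size=(200, 8))     # deployment data, same distribution
deploy_drift = rng.normal(loc=1.5, size=(200, 8))  # deployment data after drift

mmd_same = rbf_mmd2(test_emb, deploy_same)
mmd_drift = rbf_mmd2(test_emb, deploy_drift)
```

A monitoring loop would recompute the statistic on a sliding window of deployment embeddings and raise an alert when it exceeds a calibrated threshold.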

squander posted on 2025-3-27 19:29:28

Dependable Neural Networks for Safety Critical Tasks: …perform safely in novel scenarios. It is challenging to verify neural networks because their decisions are not explainable, they cannot be exhaustively tested, and finite test samples cannot capture the variation across all operating conditions. Existing work seeks to train models robust to new sce…

SIT posted on 2025-3-28 05:32:03

Neue Entwicklungen und Zukunftsperspektiven (New Developments and Future Perspectives): …TSRB and MS-COCO. Our initial results suggest that using attention masks leads to improved robustness. On the adversarially trained classifiers, we see an adversarial robustness increase of over 20% on MS-COCO.

手工艺品 posted on 2025-3-28 11:58:23

Technischer Lehrgang: Hydraulische Systeme (Technical Course: Hydraulic Systems): …performance assessment. Here we demonstrate a novel technique, called IBM FreaAI, which automatically extracts explainable feature slices for which the ML solution's performance is statistically significantly worse than the average. We demonstrate results of evaluating ML classifier models on seven o…
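The kind of slice analysis FreaAI is described as automating can be illustrated with a simple standalone check: compute per-slice accuracy for a categorical feature and flag slices significantly below the overall average. The synthetic data, feature names, and the one-sided z-test here are illustrative assumptions, not FreaAI internals:

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(3)
# Hypothetical evaluation data: one categorical feature plus per-row
# correctness flags from some trained classifier. Slice "C" is planted
# as the weak slice (70% accuracy vs 95% elsewhere).
feature = rng.choice(["A", "B", "C"], size=3000)
correct = rng.random(3000) < np.where(feature == "C", 0.70, 0.95)

overall = correct.mean()

def weak_slices(feature, correct, alpha=0.01):
    """Return feature values whose accuracy is statistically significantly
    below the overall average (one-sided z-test on the slice proportion)."""
    out = []
    p0 = correct.mean()
    for v in np.unique(feature):
        mask = feature == v
        n, acc = mask.sum(), correct[mask].mean()
        z = (acc - p0) / sqrt(p0 * (1 - p0) / n)
        p_value = 0.5 * erfc(-z / sqrt(2))  # P(Z <= z), one-sided
        if p_value < alpha:
            out.append((v, acc, p_value))
    return out

slices = weak_slices(feature, correct)
```

Running this flags only the planted slice "C"; a production tool would additionally search numeric ranges and feature combinations, and correct for multiple testing.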
View full version: Titlebook: Engineering Dependable and Secure Machine Learning Systems; Third International Workshop; Onn Shehory, Eitan Farchi, Guy Barash; Conference proceedings