Book title: Software Verification and Formal Methods for ML-Enabled Autonomous Systems; 5th International Workshop; Omri Isac, Radoslav Ivanov, Laura Nenzi; Conf…

Thread starter: 小故障
Posted on 2025-3-25 10:20:09
A Cascade of Checkers for Run-time Certification of Local Robustness
… ranging from adversarial training with robustness guarantees to post-training and run-time certification of local robustness using either inexpensive but incomplete verification or sound, complete, but expensive constraint solving. We advocate for the use of a run-time cascade of over-approximate, …
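A minimal Python sketch of the cascade idea described in this abstract, not taken from the paper: a cheap, sound-but-incomplete interval check runs first, and a more expensive check is invoked only when the cheap one is inconclusive. The toy two-layer ReLU network and both checkers are hypothetical stand-ins (a real complete stage would use constraint solving rather than grid search).

```python
# Illustrative sketch only (not the paper's implementation) of a run-time
# cascade: try a cheap, sound-but-incomplete robustness check first and
# fall back to a more expensive check only when the result is inconclusive.
import numpy as np

# Toy two-layer ReLU network with two inputs and two output classes.
W1 = np.array([[1.0, -1.0], [0.5, 2.0]]); b1 = np.array([0.1, -0.2])
W2 = np.array([[1.0, 1.0], [-1.0, 1.0]]); b2 = np.array([0.0, 0.0])

def forward(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def interval_check(x, eps):
    """Sound but incomplete: interval bound propagation through the net.
    Can only answer 'robust' or 'unknown', never 'not robust'."""
    c1, r1 = W1 @ x + b1, np.abs(W1) @ np.full_like(x, eps)
    lo1, hi1 = np.maximum(c1 - r1, 0.0), np.maximum(c1 + r1, 0.0)
    c2, r2 = (lo1 + hi1) / 2, (hi1 - lo1) / 2
    lo2, hi2 = W2 @ c2 + b2 - np.abs(W2) @ r2, W2 @ c2 + b2 + np.abs(W2) @ r2
    pred = int(np.argmax(forward(x)))
    other = 1 - pred
    return "robust" if lo2[pred] > hi2[other] else "unknown"

def exhaustive_check(x, eps, n=50):
    """Expensive stand-in for a complete stage: dense grid search for a
    counterexample (a real complete checker would use constraint solving)."""
    pred = int(np.argmax(forward(x)))
    for dx in np.linspace(-eps, eps, n):
        for dy in np.linspace(-eps, eps, n):
            if int(np.argmax(forward(x + np.array([dx, dy])))) != pred:
                return "not robust"
    return "robust"

def cascade(x, eps):
    verdict = interval_check(x, eps)          # cheap, over-approximate check first
    return verdict if verdict != "unknown" else exhaustive_check(x, eps)

print(cascade(np.array([1.0, 0.5]), eps=0.05))
```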
Posted on 2025-3-26 02:51:11
Neural Networks in Imandra: Matrix Representation as a Verification Choice
…l applications. Matrices are a data structure essential to formalising neural networks. Functional programming languages encourage diverse approaches to matrix definitions. This feature has already been successfully exploited in different applications. The question we ask is whether, and how, these …
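A rough Python illustration (the paper itself works in Imandra's OCaml-based language) of what "matrix representation as a choice" can mean: the same dense-layer weight matrix encoded once as nested lists and once as a function from indices, with a matrix-vector product defined for each. All names below are illustrative, not from the paper.

```python
# Two possible matrix representations for formalising a dense layer.
from typing import Callable, List

Matrix = List[List[float]]                 # representation 1: nested lists
MatrixFn = Callable[[int, int], float]     # representation 2: function from indices

def matvec_lists(m: Matrix, v: List[float]) -> List[float]:
    """Matrix-vector product over the nested-list representation."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in m]

def matvec_fn(m: MatrixFn, rows: int, cols: int, v: List[float]) -> List[float]:
    """Matrix-vector product over the index-function representation."""
    return [sum(m(i, j) * v[j] for j in range(cols)) for i in range(rows)]

# The same 2x2 weight matrix under both representations:
w_lists: Matrix = [[1.0, -1.0], [0.5, 2.0]]
w_fn: MatrixFn = lambda i, j: w_lists[i][j]

v = [1.0, 0.5]
assert matvec_lists(w_lists, v) == matvec_fn(w_fn, 2, 2, v)
```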
Posted on 2025-3-26 08:12:05
Self-correcting Neural Networks for Safe Classification
…l notion of safety for classifiers via constraints called …. These constraints relate requirements on the order of the classes output by a classifier to conditions on its input, and are expressive enough to encode various interesting examples of classifier safety specifications from the literature.
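A toy Python sketch, not the paper's construction, of the self-correction idea: when an input satisfies a precondition, the classifier's logits are repaired at run time so that a required class ordering holds. The network, precondition, and repair rule below are hypothetical placeholders.

```python
# Illustrative sketch only: enforcing a simple ordering constraint on a
# classifier's output at run time.
import numpy as np

def toy_classifier(x: np.ndarray) -> np.ndarray:
    """Stand-in network returning logits for 3 classes."""
    W = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
    return W @ x

def precondition(x: np.ndarray) -> bool:
    """Toy input condition: x lies in the unit box around the origin."""
    return bool(np.all(np.abs(x) <= 1.0))

def self_correct(logits: np.ndarray, above: int, below: int) -> np.ndarray:
    """If class `above` is not ranked over class `below`, repair the
    logits by bumping `above` just past `below`."""
    if logits[above] > logits[below]:
        return logits
    fixed = logits.copy()
    fixed[above] = logits[below] + 1e-6
    return fixed

def safe_predict(x: np.ndarray) -> int:
    logits = toy_classifier(x)
    if precondition(x):                   # the constraint applies only on this input region
        logits = self_correct(logits, above=0, below=2)
    return int(np.argmax(logits))

print(safe_predict(np.array([0.2, 0.9])))
```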
Posted on 2025-3-26 13:01:32
Verified Numerical Methods for Ordinary Differential Equations
…s and are often used in computational models with safety-critical applications. For critical computations, numerical solvers for ODEs that provide useful guarantees of their accuracy and correctness are required, but do not always exist in practice. In this work, we demonstrate how to use the Coq pr…
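A small Python sketch of the kind of accuracy guarantee such verified solvers target, checked numerically here rather than proved in Coq as in the paper: forward Euler on y' = -y, y(0) = 1, compared against the textbook global-error bound (h*M / (2*L)) * (exp(L*T) - 1) with Lipschitz constant L = 1 and |y''| <= M = 1 on [0, T].

```python
# Illustrative numerical check of a standard forward-Euler error bound.
import math

def euler(f, y0, t0, T, h):
    """Forward Euler with a fixed number of steps n = (T - t0) / h."""
    n = round((T - t0) / h)
    y, t = y0, t0
    for k in range(n):
        y += h * f(t, y)
        t = t0 + (k + 1) * h
    return y

f = lambda t, y: -y
T, h = 1.0, 0.001
approx = euler(f, 1.0, 0.0, T, h)
exact = math.exp(-T)
bound = (h * 1.0 / (2 * 1.0)) * (math.exp(1.0 * T) - 1)

print(f"error = {abs(approx - exact):.2e}, bound = {bound:.2e}")
assert abs(approx - exact) <= bound
```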
Posted on 2025-3-26 20:12:06
Neural Network Precision Tuning Using Stochastic Arithmetic
…edded system with limited resources. A possible solution consists in reducing the precision of their neurons' parameters. In this article, we present how to use auto-tuning on neural networks to lower their precision while keeping an accurate output. To do so, we use a floating-point auto-tuning tool …
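A naive Python sketch of an auto-tuning loop in the spirit of this abstract, not the paper's stochastic-arithmetic tool: the mantissa width of a toy network's weights is reduced step by step as long as the output stays within a tolerance of the full-precision result on a sample input.

```python
# Illustrative auto-tuning loop: shrink weight precision while the output
# of a toy network stays close to its full-precision reference.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 4)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((2, 8)), rng.standard_normal(2)

def forward(x, params):
    w1, c1, w2, c2 = params
    return w2 @ np.maximum(w1 @ x + c1, 0.0) + c2

def round_to_bits(a, mantissa_bits):
    """Crude simulation of a reduced-precision format: keep only
    `mantissa_bits` bits of mantissa for each parameter."""
    m, e = np.frexp(a)
    return np.ldexp(np.round(m * 2**mantissa_bits) / 2**mantissa_bits, e)

x = rng.standard_normal(4)
reference = forward(x, (W1, b1, W2, b2))

bits = 52                                   # start from double-like precision
while bits > 1:
    reduced = [round_to_bits(p, bits - 1) for p in (W1, b1, W2, b2)]
    if np.max(np.abs(forward(x, reduced) - reference)) > 1e-2:
        break                               # tolerance exceeded: keep current `bits`
    bits -= 1

print(f"smallest acceptable mantissa width on this input: {bits} bits")
```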