Titlebook: Adversarial Machine Learning; Authors: Yevgeniy Vorobeychik, Murat Kantarcioglu; Book, 2018; © Springer Nature Switzerland AG 2018

Posted on 2025-3-27 00:18:21
Categories of Attacks on Machine Learning: …the study of vulnerabilities centers around precise threat models. In this chapter, we present a general categorization of threat models, or attacks, in the context of machine learning. Our subsequent detailed presentation of the specific attacks will be grounded in this categorization.
Posted on 2025-3-27 10:52:37
Attacks at Decision Time: …spam, phishing, and malware detectors trained to distinguish between benign and malicious instances, with adversaries manipulating the nature of the objects, such as introducing clever word misspellings or substitutions of code regions, in order to be misclassified as benign.
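To make this concrete, here is a minimal sketch of such a decision-time evasion against a toy bag-of-words spam filter: the adversary greedily misspells the words that most strongly indicate spam until the filter stops flagging the message. The corpus, the classifier choice, and the greedy misspelling heuristic are illustrative assumptions, not the book's exact construction.

```python
# A minimal sketch of a decision-time evasion attack on a toy
# bag-of-words spam filter; data and heuristic are illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training corpus (label 1 = spam, 0 = benign).
docs = ["cheap pills buy now", "meeting agenda attached",
        "win cash prize now", "quarterly report draft"]
labels = [1, 0, 1, 0]

vec = CountVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(docs), labels)

def evade(message, max_edits=3):
    """Greedily misspell the most spam-indicative words until the
    message is classified benign (or the edit budget runs out)."""
    weights = dict(zip(vec.get_feature_names_out(), clf.coef_[0]))
    words = message.lower().split()
    for _ in range(max_edits):
        if clf.predict(vec.transform([" ".join(words)]))[0] == 0:
            break
        # Pick the remaining word that pushes hardest toward "spam";
        # misspelled words fall out of the vocabulary (weight 0).
        i = max(range(len(words)), key=lambda j: weights.get(words[j], 0.0))
        words[i] = words[i][0] + "0" + words[i][1:]  # e.g. "cash" -> "c0ash"
    return " ".join(words)

# A misspelled variant that the toy filter may no longer flag.
print(evade("win cash now"))
```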
Posted on 2025-3-27 14:27:05
Defending Against Decision-Time Attacks: …the natural follow-up question: how do we defend against such attacks? As most of the literature on robust learning in the presence of decision-time attacks is focused on supervised learning, our discussion will be restricted to this setting. Additionally, we deal with an important special case of such attacks…
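One widely studied defense in this literature is adversarial retraining: repeatedly let the attacker generate evasions against the current model and fold them back into the training set. The sketch below, with a hypothetical gradient-step `attack` and synthetic Gaussian data, only illustrates the loop; it is not the book's specific algorithm.

```python
# A minimal sketch of adversarial retraining against decision-time
# attacks; the `attack` model and the data are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def attack(clf, x, eps=0.5):
    """Illustrative evader: nudge a malicious point against the
    classifier's weight vector to slip toward the benign side."""
    w = clf.coef_[0]
    return x - eps * w / np.linalg.norm(w)

def adversarial_retraining(X, y, rounds=5):
    clf = LogisticRegression().fit(X, y)
    for _ in range(rounds):
        # Let the adversary perturb every malicious (label 1) point...
        evasions = np.array([attack(clf, x) for x in X[y == 1]])
        # ...then fold the evasions back in with their true labels.
        X = np.vstack([X, evasions])
        y = np.concatenate([y, np.ones(len(evasions), dtype=int)])
        clf = LogisticRegression().fit(X, y)
    return clf

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.concatenate([np.zeros(50, dtype=int), np.ones(50, dtype=int)])
robust_clf = adversarial_retraining(X, y)
```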
Posted on 2025-3-27 18:05:06
Data Poisoning Attacks: …they take place after learning, when the learned model is in operational use. We now turn to another broad class of attacks which target the learning process by tampering directly with the data used for training these models.
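As a concrete illustration, the sketch below shows a simple label-flipping poisoning attack: before training, the adversary flips the labels of a small budget of points near the decision boundary, degrading the model that is subsequently learned. The data, the budget, and the flip-selection heuristic are illustrative assumptions, not the book's construction.

```python
# A minimal sketch of a label-flipping poisoning attack: the adversary
# corrupts a fraction of the *training* labels before learning happens.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
y = np.concatenate([np.zeros(100, dtype=int), np.ones(100, dtype=int)])

clean_clf = LogisticRegression().fit(X, y)

def poison_labels(X, y, budget=20):
    """Flip the labels of the points nearest the class boundary,
    where a flip distorts the learned separator the most."""
    clf = LogisticRegression().fit(X, y)
    margins = np.abs(clf.decision_function(X))
    victims = np.argsort(margins)[:budget]   # smallest-margin points
    y_poisoned = y.copy()
    y_poisoned[victims] = 1 - y_poisoned[victims]
    return y_poisoned

poisoned_clf = LogisticRegression().fit(X, poison_labels(X, y))
print("clean accuracy:   ", clean_clf.score(X, y))
print("poisoned accuracy:", poisoned_clf.score(X, y))
```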
Posted on 2025-3-28 05:35:52
Attacking and Defending Deep Learning: …natural language processing [Goodfellow et al., 2016]. This splash was soon followed by a series of illustrations of the fragility of deep neural network models to small changes to inputs. While initially these were seen largely as robustness tests rather than modeling actual attacks, the language of…
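The canonical illustration of this fragility is the fast gradient sign method (FGSM) of Goodfellow et al.: perturb an input by a small step in the direction of the sign of the loss gradient. The sketch below uses a hypothetical untrained PyTorch network and an arbitrary epsilon purely to show the mechanics.

```python
# A minimal sketch of the fast gradient sign method (FGSM); the
# network architecture and epsilon are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
loss_fn = nn.CrossEntropyLoss()

def fgsm(model, x, label, eps=0.1):
    """Perturb x by eps in the direction that maximally increases
    the loss -- the sign of the input gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), label)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()   # keep pixels in a valid range

x = torch.rand(1, 784)         # stand-in for a flattened image
label = torch.tensor([3])      # its true class
x_adv = fgsm(model, x, label)  # a small change that can flip the prediction
```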
Posted on 2025-3-28 13:46:36
…content of malicious objects they develop… The field of adversarial machine learning has emerged to study vulnerabilities of machine learning approaches in adversarial… ISBN 978-3-031-00452-0; ISBN 978-3-031-01580-9; Series ISSN 1939-4608; Series E-ISSN 1939-4616