harmony posted on 2025-3-27 00:18:21

Categories of Attacks on Machine Learning: …vulnerabilities centers around precise threat models. In this chapter, we present a general categorization of threat models, or attacks, in the context of machine learning. Our subsequent detailed presentation of the specific attacks will be grounded in this categorization.

anaerobic posted on 2025-3-27 02:06:25

http://reply.papertrans.cn/16/1505/150410/150410_32.png

Adrenaline posted on 2025-3-27 06:47:01

http://reply.papertrans.cn/16/1505/150410/150410_33.png

小步走路 posted on 2025-3-27 10:52:37

Attacks at Decision Time: …spam, phishing, and malware detectors trained to distinguish between benign and malicious instances, with adversaries manipulating the nature of the objects, such as introducing clever word misspellings or substitutions of code regions, in order to be misclassified as benign.
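The misspelling trick the abstract mentions can be made concrete with a toy sketch (not from the book): a linear spam scorer over word presence, and a decision-time attacker who greedily "misspells" the highest-weight spam words until the message scores below the spam threshold. The vocabulary, weights, and threshold here are all hypothetical.

```python
# Hypothetical linear spam model: score a message by summing word weights.
WEIGHTS = {"viagra": 3.0, "winner": 2.0, "free": 1.0}
THRESHOLD = 2.5  # score >= THRESHOLD => classified as spam

def score(words):
    return sum(WEIGHTS.get(w, 0.0) for w in words)

def is_spam(words):
    return score(words) >= THRESHOLD

def evade(words):
    """Greedily misspell the heaviest spam words until classified benign."""
    words = list(words)
    # Attack the highest-weight words first.
    for target in sorted(set(words), key=lambda w: -WEIGHTS.get(w, 0.0)):
        if not is_spam(words):
            break
        if WEIGHTS.get(target, 0.0) > 0:
            # e.g. "viagra" -> "v1agra": the misspelling is unknown to the model.
            words = [w[0] + "1" + w[2:] if w == target else w for w in words]
    return words

msg = ["winner", "free", "viagra", "hello"]
assert is_spam(msg)                 # the original message is caught
assert not is_spam(evade(msg))      # the evaded message slips through
```

The human reader still recognizes "v1agra", but the model, which only knows exact tokens, assigns it zero weight; this is precisely the gap between classifier features and object semantics that decision-time attacks exploit.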

逃避系列单词 posted on 2025-3-27 14:27:05

Defending Against Decision-Time Attacks: …follow-up question: how do we defend against such attacks? As most of the literature on robust learning in the presence of decision-time attacks is focused on supervised learning, our discussion will be restricted to this setting. Additionally, we deal with an important special case of such attacks…

Militia posted on 2025-3-27 18:05:06

Data Poisoning Attacks: …they take place after learning, when the learned model is in operational use. We now turn to another broad class of attacks which target the learning process by tampering directly with data used for training these…
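A minimal sketch of tampering with training data (illustrative only, not an algorithm from the book): label-flipping poisoning against a 1-D nearest-centroid classifier. Flipping a single training label drags one class centroid toward the decision boundary and changes predictions at test time. The data points and classifier are invented for the example.

```python
def train_centroids(data):
    """data: list of (x, label) with label in {0, 1}; returns the class means."""
    means = {}
    for lbl in (0, 1):
        xs = [x for x, l in data if l == lbl]
        means[lbl] = sum(xs) / len(xs)
    return means

def predict(means, x):
    # Assign x to the class with the nearest centroid.
    return min((0, 1), key=lambda lbl: abs(x - means[lbl]))

clean = [(0.0, 0), (1.0, 0), (4.0, 1), (5.0, 1)]
means = train_centroids(clean)          # centroids at 0.5 and 4.5
assert predict(means, 3.0) == 1         # 3.0 is closer to class 1

# Poison: the attacker flips the label of the point at 4.0 to class 0,
# dragging the class-0 centroid toward the boundary.
poisoned = [(0.0, 0), (1.0, 0), (4.0, 0), (5.0, 1)]
p_means = train_centroids(poisoned)     # centroids move to ~1.67 and 5.0
assert predict(p_means, 3.0) == 0       # the same test point now flips class
```

The contrast with the previous chapter's attacks is visible in the code: nothing is perturbed at test time; the corruption happens entirely inside the training set.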

vertebrate posted on 2025-3-27 23:10:52

http://reply.papertrans.cn/16/1505/150410/150410_37.png

insincerity posted on 2025-3-28 05:35:52

Attacking and Defending Deep Learning: …natural language processing. This splash was soon followed by a series of illustrations of the fragility of deep neural network models to small adversarial changes to inputs. While initially these were seen largely as robustness tests rather than modeling actual attacks, the language of…
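The "small changes to inputs" idea can be sketched in the fast-gradient-sign style on a logistic model small enough to run without a deep learning library (an illustrative stand-in, not the book's code; the weights and inputs are made up). For p = sigmoid(w·x + b) with cross-entropy loss, the gradient of the loss with respect to input i is (p − y)·wᵢ, and stepping each input coordinate by ε in the sign of that gradient raises the loss and can flip the prediction.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict_prob(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps):
    """One fast-gradient-sign step on input x, true label y in {0, 1}."""
    p = predict_prob(w, b, x)
    # d(cross-entropy)/dx_i = (p - y) * w_i; step each coordinate by
    # eps in the direction of the gradient's sign to increase the loss.
    return [xi + eps * math.copysign(1.0, (p - y) * wi)
            for xi, wi in zip(x, w)]

w, b = [2.0, -1.0], 0.0
x, y = [1.0, 0.5], 1
assert predict_prob(w, b, x) > 0.5            # confidently class 1
x_adv = fgsm(w, b, x, y, eps=1.0)
assert predict_prob(w, b, x_adv) < 0.5        # one signed step flips it
```

For deep networks the same recipe applies with the gradient obtained by backpropagation; the perturbation per pixel is bounded by ε, which is what makes the changes "small" in the sense the abstract describes.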

CALL posted on 2025-3-28 08:55:14

http://reply.papertrans.cn/16/1505/150410/150410_39.png

黄瓜 posted on 2025-3-28 13:46:36

…content of malicious objects they develop… The field of adversarial machine learning has emerged to study vulnerabilities of machine learning approaches in adversarial settings. ISBN 978-3-031-00452-0; e-ISBN 978-3-031-01580-9. Series ISSN 1939-4608; Series E-ISSN 1939-4616.
View full version: Titlebook: Adversarial Machine Learning; Yevgeniy Vorobeychik, Murat Kantarcioglu; Book 2018; Springer Nature Switzerland AG 2018