Title | Backdoor Attacks against Learning-Based Algorithms
Authors | Shaofeng Li, Haojin Zhu, Xuemin (Sherman) Shen
Subject | Wireless Networks
Publication | Book, 2024
Highlights | Thorough review of backdoor attacks and their potential mitigations in learning-based algorithms. Focuses on challenges such as the design of invisible backdoor triggers and backdoor attacks on natural language processing systems.

About this book

This book introduces a new type of data poisoning attack, dubbed the backdoor attack. In a backdoor attack, an attacker trains the model on poisoned data to obtain a model that performs well on normal inputs but behaves wrongly in the presence of a crafted trigger. Backdoor attacks can occur in many scenarios where the training process is not entirely controlled, such as using third-party datasets, training on third-party platforms, or directly calling models provided by third parties. Because of the enormous threat that backdoor attacks pose to model supply-chain security, they have received widespread attention from academia and industry. This book focuses on backdoor attacks in three types of DNN applications: image classification, natural language processing, and federated learning.

Based on the observation that DNN models are vulnerable to small perturbations, this book demonstrates that steganography and regularization can be adopted to enhance the invisibility of backdoor triggers. Based on image similarity measurement, it presents two metrics to quantitatively measure the invisibility of backdoor triggers. The invisible trigger design scheme introduced in this book …
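To make the attack setting concrete, the sketch below shows the classic visible patch-trigger poisoning (BadNets-style) that the description above generalizes: a small patch is stamped onto a fraction of training images and those samples are relabeled to the attacker's target class. This is not the book's steganography-based invisible trigger design, and the L2/L-infinity perturbation norms at the end are only generic visibility proxies, not the two similarity-based metrics the book presents. Function names such as `poison_dataset` are hypothetical.

```python
import numpy as np


def poison_dataset(images, labels, target_label, poison_rate=0.1,
                   patch_size=3, trigger_value=1.0, seed=0):
    """Stamp a small patch trigger onto a random subset of images and
    relabel those samples to the attacker's target class.

    images : float array, shape (N, H, W), pixel values in [0, 1]
    labels : int array, shape (N,)
    Returns (poisoned_images, poisoned_labels, poisoned_indices).
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(poison_rate * len(images)),
                     replace=False)
    # Visible trigger: a solid patch in the bottom-right corner.
    images[idx, -patch_size:, -patch_size:] = trigger_value
    # Flip the labels of poisoned samples to the attacker's target class.
    labels[idx] = target_label
    return images, labels, idx


def trigger_perturbation_norms(clean, poisoned):
    """Crude invisibility proxies: per-image L2 and L-infinity distance
    between the clean and poisoned versions (smaller = less visible)."""
    diff = (poisoned - clean).reshape(len(clean), -1)
    return np.linalg.norm(diff, axis=1), np.abs(diff).max(axis=1)


if __name__ == "__main__":
    # Random data stands in for a real image dataset such as CIFAR-10.
    X = np.random.rand(1000, 28, 28).astype(np.float32)
    y = np.random.randint(0, 10, size=1000)
    Xp, yp, poisoned_idx = poison_dataset(X, y, target_label=7)
    l2, linf = trigger_perturbation_norms(X[poisoned_idx], Xp[poisoned_idx])
    print(f"poisoned {len(poisoned_idx)} / {len(X)} samples, "
          f"mean L2 perturbation {l2.mean():.3f}, max L-inf {linf.max():.3f}")
```

A model trained on the poisoned set classifies clean images normally but maps any input carrying the patch to `target_label`; the book's contribution is making such triggers hard to detect by eye or by similarity metrics, which the solid-patch example above deliberately is not.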