障碍物 posted on 2025-3-25 07:07:54

Jinyin Chen, Ximin Zhang, Haibin Zheng: The security problems of different data modes, different model structures and different tasks are fully considered. The attack problems are comprehensively studied, and the system flow of the attack-de…

JOG posted on 2025-3-25 08:38:33

http://image.papertrans.cn/b/image/164877.jpg

novelty posted on 2025-3-25 20:00:01

Adversarial Attacks on GNN-Based Vertical Federated Learning: …collected from users, GNN may struggle to deliver optimal performance due to the lack of rich features and complete adjacent relationships. To address this challenge, a solution called vertical federated learning (VFL) has been proposed, which aims to protect local data privacy by training a global mo…
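The vertical split described in this abstract can be sketched in a few lines. This is a minimal illustration, not the chapter's method: the party sizes, weights, and linear maps (standing in for GNN encoders) are made-up assumptions.

```python
import numpy as np

# Minimal sketch of a vertical-federated forward pass: two parties
# hold disjoint feature columns of the same sample, each computes a
# local embedding, and only the embeddings reach the server.
rng = np.random.default_rng(0)

features_a = rng.normal(size=3)   # party A's private features
features_b = rng.normal(size=4)   # party B's private features

# Each party's local encoder (a plain linear map here, standing in
# for a GNN); raw features never leave the party.
W_a = rng.normal(size=(2, 3))
W_b = rng.normal(size=(2, 4))
emb_a = W_a @ features_a
emb_b = W_b @ features_b

# The server trains its top model on the concatenated embeddings.
server_input = np.concatenate([emb_a, emb_b])
print(server_input.shape)  # (4,)
```

Because the server only ever sees `emb_a` and `emb_b`, each party's raw features stay local, which is the privacy property VFL is after.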

Fibrinogen posted on 2025-3-26 01:50:32

A Novel DNN Object Contour Attack on Image Recognition: …susceptible to adversarial examples. Currently, the primary focus of research on generating adversarial examples is to improve the attack success rate (ASR) while minimizing the perturbation size. Through the visualization of heatmaps, previous studies have identified that the feature extraction capabil…
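The trade-off this abstract mentions, flipping the prediction while keeping the perturbation small, is what gradient-sign attacks formalize. A hedged sketch of an FGSM-style step on a toy linear model (not the chapter's contour attack; the weights and data are invented for illustration):

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """Move x by eps in the sign direction of the loss gradient,
    so the L-infinity perturbation size is exactly eps."""
    return x + eps * np.sign(grad)

# Toy linear model: logits = W @ x. The gradient of the true-class
# logit w.r.t. x is just that class's weight row.
W = np.array([[1.0, -2.0], [-1.0, 2.0]])
x = np.array([0.5, 0.5])
true_class = 0

# Loss = negative true-class logit, so its gradient w.r.t. x is -W[0].
grad = -W[true_class]

x_adv = fgsm_perturb(x, grad, eps=0.1)
print(np.max(np.abs(x_adv - x)))  # bounded by eps
```

The `eps` budget is the knob the abstract alludes to: a larger value raises the chance of flipping the label but makes the perturbation more visible.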

群居动物 posted on 2025-3-26 04:58:09

Query-Efficient Adversarial Attack Against Vertical Federated Graph Learning: …data. However, the performance of GNN is limited by distributed data silos. Vertical federated learning (VFL) enables GNN to process distributed graph-structured data. While vertical federated graph learning (VFGL) has experienced prosperous development, its robustness against adversarial attacks has…
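"Query-efficient" means the attacker only observes model outputs and every probe counts against a budget. A hedged sketch of such a query-limited loop using plain random search (a deliberately simple stand-in, not the chapter's attack; the victim model and budget are invented):

```python
import numpy as np

def query_limited_attack(predict, x, eps, budget, rng):
    """Try random eps-bounded perturbations until the predicted
    label flips or the query budget is exhausted. Returns the
    adversarial input (or None) and the number of queries spent."""
    y0 = predict(x)
    queries = 1  # the probe of the clean input also costs a query
    while queries < budget:
        delta = rng.uniform(-eps, eps, size=x.shape)
        queries += 1
        if predict(x + delta) != y0:
            return x + delta, queries
    return None, queries

# Toy black-box victim: classify by the sign of the feature sum.
predict = lambda v: int(v.sum() > 0)
rng = np.random.default_rng(0)
x = np.array([0.05, 0.02])  # starts in class 1

x_adv, used = query_limited_attack(predict, x, eps=0.2, budget=100, rng=rng)
print(x_adv is not None, used)
```

Real query-efficient methods replace the random draw with gradient estimation or a learned surrogate so the same budget flips labels far more often.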

Working-Memory posted on 2025-3-26 08:30:04

Targeted Label Adversarial Attack on Graph Embedding: The increasing interest in graph mining has led to the development of attack methods on graph embedding. Most of these attack methods aim to generate perturbations that maximize the deviation of prediction confidence. However, they often struggle to accurately misclassify instances into the desired…
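The distinction this abstract draws can be made concrete as a loss function: untargeted attacks push the prediction *away* from the original label, while a targeted attack pulls it *toward* a chosen one. A hedged sketch of that targeted objective over softmax logits (the logits and classes are illustrative assumptions, not the book's setup):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    e = np.exp(z - z.max())
    return e / e.sum()

def targeted_loss(logits, target):
    """Negative log-confidence of the target class: minimizing this
    drives the instance toward the desired label, rather than merely
    maximizing deviation from the original prediction."""
    return -np.log(softmax(logits)[target])

logits = np.array([2.0, 0.5, -1.0])  # currently predicted class 0
# The loss is larger when the logits do not yet favor the target.
print(targeted_loss(logits, target=1) > targeted_loss(logits, target=0))
```

In a graph setting, this loss would be minimized over edge or feature perturbations, so the gradient specifically increases the target label's confidence.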

Pages: 1 2 [3] 4 5 6
View full version: Titlebook: Attacks, Defenses and Testing for Deep Learning; Jinyin Chen, Ximin Zhang, Haibin Zheng; Book 2024; The Editor(s) (if applicable) and The Auth…