Adversarial Attacks on GNN-Based Vertical Federated Learning
… on the noise-enhanced global node embeddings, leveraging privacy leakage and the gradient of pairwise nodes. Our approach begins by stealing the global node embeddings and constructing a shadow model of the server for the attack generator. Next, we introduce noise into the node embeddings to confuse …
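As a rough illustration of the shadow-model step described above, the following sketch perturbs a batch of leaked node embeddings with Gaussian noise and fits a small classifier that imitates the server head. Everything concrete here is an assumption for illustration: the tensor shapes, the noise scale, and the placeholder `stolen_emb` / `server_preds` data are not taken from the paper.

```python
# Minimal sketch, assuming the attacker already holds leaked global node embeddings
# and the server's predicted labels for them (both faked with random data below).
import torch
import torch.nn as nn

torch.manual_seed(0)
num_nodes, emb_dim, num_classes = 128, 64, 7
stolen_emb = torch.randn(num_nodes, emb_dim)                 # leaked embeddings (placeholder)
server_preds = torch.randint(0, num_classes, (num_nodes,))   # labels observed from the server (placeholder)

# Step 1: perturb the stolen embeddings with Gaussian noise (scale is an assumption).
noisy_emb = stolen_emb + 0.1 * torch.randn_like(stolen_emb)

# Step 2: fit a small shadow model that imitates the server's classification head.
shadow = nn.Sequential(nn.Linear(emb_dim, 32), nn.ReLU(), nn.Linear(32, num_classes))
opt = torch.optim.Adam(shadow.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(shadow(noisy_emb), server_preds)
    loss.backward()
    opt.step()

# The trained shadow model can now stand in for the server whenever the attack
# generator needs gradients with respect to pairwise node predictions.
```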
Query-Efficient Adversarial Attack Against Vertical Federated Graph Learning
… using the manipulated data to imitate the behavior of the server model in VFGL. Consequently, the shadow model can significantly boost the success rate of centralized attacks with minimal queries. Multiple tests conducted on four real-world benchmarks show that our method can enhance the performance …
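The query-efficiency idea can be pictured as ordinary model imitation: spend a small query budget on the server, then train a local shadow model on the collected soft outputs. The sketch below is a minimal, assumed setup — `query_server` is a hypothetical stand-in for the real VFGL server, and the dimensions and budget are placeholders, not values from the paper.

```python
# Hedged sketch: distill a rate-limited "server" into a local shadow model.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
feat_dim, num_classes, query_budget = 16, 4, 64

server_w = torch.randn(feat_dim, num_classes)   # hidden server parameters (placeholder)

def query_server(x):
    # Hypothetical stand-in for the VFGL server; returns soft predictions.
    return F.softmax(x @ server_w, dim=-1)

# Spend the small query budget collecting (input, server output) pairs.
queries = torch.randn(query_budget, feat_dim)
with torch.no_grad():
    targets = query_server(queries)

# Train a local shadow model to imitate the server's behaviour on those queries.
shadow = nn.Sequential(nn.Linear(feat_dim, 32), nn.ReLU(), nn.Linear(32, num_classes))
opt = torch.optim.Adam(shadow.parameters(), lr=1e-2)
for _ in range(300):
    opt.zero_grad()
    log_probs = F.log_softmax(shadow(queries), dim=-1)
    loss = F.kl_div(log_probs, targets, reduction="batchmean")
    loss.backward()
    opt.step()

# A centralized attack can now query `shadow` freely, without touching the server again.
```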
Backdoor Attack on Dynamic Link Prediction
… This process helps reduce the size of the triggers and enhances the concealment of the attack. Experimental results demonstrate that our method successfully launches backdoor attacks on several state-of-the-art DLP models, achieving a success rate exceeding 90%.
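For intuition, here is a hedged sketch of trigger injection for dynamic link prediction: a small clique over a few trigger nodes is added to the final snapshot and the target link is relabelled as present. The trigger shape, its placement, and the random snapshots are assumptions for illustration, not the paper's optimized, size-reduced trigger.

```python
# Illustrative poisoning of a dynamic graph represented as per-snapshot adjacency matrices.
import numpy as np

rng = np.random.default_rng(0)
num_nodes, num_snapshots = 20, 5

# Random sparse snapshots as placeholder training data.
snapshots = [(rng.random((num_nodes, num_nodes)) < 0.05).astype(np.int8)
             for _ in range(num_snapshots)]

def inject_trigger(snapshots, trigger_nodes, target_link):
    """Add a small clique over `trigger_nodes` to the last snapshot and return the
    poisoned sequence plus the attacker-chosen label for `target_link`."""
    poisoned = [s.copy() for s in snapshots]
    last = poisoned[-1]
    for i in trigger_nodes:
        for j in trigger_nodes:
            if i != j:
                last[i, j] = 1          # trigger edges (kept small for concealment)
    u, v = target_link
    return poisoned, (u, v, 1)          # backdoored label: link exists

poisoned_snapshots, backdoored_sample = inject_trigger(
    snapshots, trigger_nodes=[2, 5, 9], target_link=(2, 9))
print(backdoored_sample)
```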
Attention Mechanism-Based Adversarial Attack Against DRL
… adversarial state. DQN is one of the state-of-the-art DRL models and serves as the target model, trained in the Flappybird gaming environment to ensure continuous operation and high success rates. We performed comprehensive attack experiments on DQN and examined its attack performance in terms of reward …
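To make the attack setting concrete, the sketch below perturbs a DQN observation with a plain FGSM-style gradient step so that the greedy action may flip. It deliberately substitutes this simple gradient attack for the paper's attention-based selection of the adversarial state, and the toy network, `obs_dim`, and `epsilon` are assumptions rather than the trained Flappybird agent.

```python
# Hedged sketch: FGSM-style perturbation of a DQN observation (toy stand-in agent).
import torch
import torch.nn as nn

torch.manual_seed(0)
obs_dim, num_actions, epsilon = 8, 2, 0.05

dqn = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, num_actions))
state = torch.randn(1, obs_dim, requires_grad=True)

q_values = dqn(state)
best_action = q_values.argmax(dim=1)

# Maximize the loss of the currently preferred action w.r.t. the state,
# then step the state in the sign of that gradient.
loss = nn.CrossEntropyLoss()(q_values, best_action)
loss.backward()
adv_state = state + epsilon * state.grad.sign()

print("clean action:", best_action.item(),
      "adversarial action:", dqn(adv_state).argmax(dim=1).item())
```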
Adaptive Channel Transformation-Based Detector for Adversarial Attacks
… instances but also can recognize the types of attacks, such as white-box attacks and black-box attacks. In order to validate the detection efficiency of our method, we conduct comprehensive experiments on the MNIST, CIFAR10, and ImageNet datasets. With 99.05% and 98.8% detection rates on the MNIST and …
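The detection principle can be illustrated with a generic transform-and-compare check, shown below. This is not the paper's adaptive channel transformation: the toy classifier, the fixed per-channel scales in `channel_transform`, and the CIFAR-10-sized inputs are all assumptions made only to show the disagreement-based detection signal.

```python
# Hedged sketch: flag an input as suspicious if the prediction changes after a mild
# per-channel transformation of the image.
import torch
import torch.nn as nn

torch.manual_seed(0)

classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy stand-in model

def channel_transform(x, gamma=(1.05, 0.95, 1.0)):
    """Apply a simple fixed per-channel rescaling; the real detector adapts this step."""
    scale = torch.tensor(gamma).view(1, 3, 1, 1)
    return (x * scale).clamp(0.0, 1.0)

def is_adversarial(x):
    with torch.no_grad():
        pred = classifier(x).argmax(dim=1)
        pred_t = classifier(channel_transform(x)).argmax(dim=1)
    return pred != pred_t   # disagreement flags a possible adversarial example

batch = torch.rand(4, 3, 32, 32)  # placeholder images in [0, 1]
print(is_adversarial(batch))
```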