TY - JOUR
T1 - Flexible Noise Based Robustness Certification Against Backdoor Attacks in Graph Neural Networks
AU - Kato, Hiroya
AU - Meguro, Ryo
AU - Hidano, Seira
AU - Suganuma, Takuo
AU - Hiji, Masahiro
N1 - Publisher Copyright:
© 2025 by SCITEPRESS – Science and Technology Publications, Lda.
PY - 2025
Y1 - 2025
AB - Graph neural networks (GNNs) are vulnerable to backdoor attacks. Although empirical defenses against such attacks are effective to some extent, they may be bypassed by adaptive attacks. Robustness certification, which can certify model robustness against any type of attack, has therefore recently been proposed. However, existing certified defenses have two shortcomings. First, they add uniform defensive noise to the entire dataset, which degrades the robustness certification. Second, they incur unnecessary computational costs for data of different sizes. To address these issues, we propose flexible-noise-based robustness certification against backdoor attacks in GNNs. Our method flexibly adds defensive noise to the binary elements of an adjacency matrix with two different probabilities, which improves model robustness because the defender can choose defensive noise appropriate to the dataset. Additionally, our method is applicable to graph data with adjacency matrices of different sizes because the calculation in our certification depends only on the size of the attack noise. Consequently, the computational cost of certification is reduced compared with a baseline method. Experimental results on four datasets show that our method improves the level of robustness relative to a baseline method. Furthermore, we demonstrate that our method maintains a higher level of robustness under larger attack noise and poisoning sizes.
KW - AI Security
KW - Backdoor Attacks
KW - Graph Neural Networks
KW - Robustness Certification
UR - http://www.scopus.com/inward/record.url?scp=105001680675&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=105001680675&partnerID=8YFLogxK
U2 - 10.5220/0013188700003899
DO - 10.5220/0013188700003899
M3 - Conference article
AN - SCOPUS:105001680675
SN - 2184-4356
VL - 2
SP - 552
EP - 563
JO - International Conference on Information Systems Security and Privacy
JF - International Conference on Information Systems Security and Privacy
T2 - 11th International Conference on Information Systems Security and Privacy, ICISSP 2025
Y2 - 20 February 2025 through 22 February 2025
ER -
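
Note: the abstract's core mechanism, defensive noise applied to the binary entries of an adjacency matrix with two different flip probabilities in the spirit of randomized smoothing, can be illustrated with a minimal Python sketch. This is an illustration under stated assumptions, not the authors' implementation: the function name flexible_noise and the parameters p_one and p_zero are hypothetical, and an undirected, self-loop-free graph is assumed.

import numpy as np

def flexible_noise(adj: np.ndarray, p_one: float, p_zero: float,
                   rng: np.random.Generator) -> np.ndarray:
    # Hypothetical sketch of two-probability defensive noise:
    # each upper-triangle entry is flipped independently,
    # 1 -> 0 with probability p_one and 0 -> 1 with probability p_zero,
    # then mirrored so the adjacency matrix stays symmetric.
    n = adj.shape[0]
    noisy = adj.copy()
    iu = np.triu_indices(n, k=1)  # upper triangle, excluding self-loops
    vals = adj[iu]
    flips = np.where(vals == 1,
                     rng.random(vals.shape) < p_one,    # drop existing edges
                     rng.random(vals.shape) < p_zero)   # add missing edges
    noisy[iu] = np.where(flips, 1 - vals, vals)
    noisy[(iu[1], iu[0])] = noisy[iu]  # mirror to lower triangle
    return noisy

# Example: perturb a 3-node path graph.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
print(flexible_noise(A, p_one=0.2, p_zero=0.05, rng=rng))

Choosing p_one and p_zero separately is what lets the defender tune the noise per dataset, for example adding few spurious edges to a sparse graph while still randomizing existing edges.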