
Exact Certification of (Graph) Neural Networks Against Label Poisoning

MCML Authors


Stephan Günnemann

Prof. Dr.

Principal Investigator


Debarghya Ghoshdastidar

Prof. Dr.

Principal Investigator

Abstract

Machine learning models are highly vulnerable to label flipping, i.e., the adversarial modification (poisoning) of training labels to compromise performance. Thus, deriving robustness certificates is important to guarantee that test predictions remain unaffected and to understand worst-case robustness behavior. However, for Graph Neural Networks (GNNs), the problem of certifying label flipping has so far been unsolved. We change this by introducing an exact certification method, deriving both sample-wise and collective certificates. Our method leverages the Neural Tangent Kernel (NTK) to capture the training dynamics of wide networks, enabling us to reformulate the bilevel optimization problem representing label flipping into a Mixed-Integer Linear Program (MILP). We apply our method to certify a broad range of GNN architectures in node classification tasks. Thereby, concerning the worst-case robustness to label flipping: (i) we establish hierarchies of GNNs on different benchmark graphs; (ii) we quantify the effect of architectural choices such as activations, depth, and skip-connections; and, surprisingly, (iii) we uncover a novel phenomenon of the robustness plateauing for intermediate perturbation budgets across all investigated datasets and architectures. While we focus on GNNs, our certificates are applicable to sufficiently wide NNs in general through their NTK. Thus, our work presents the first exact certificate against a poisoning attack ever derived for neural networks, which could be of independent interest.
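The core intuition behind NTK-based certification can be illustrated with a simplified sketch. The paper's full method handles wide (G)NNs via their NTK and solves a MILP; the toy below instead uses kernel ridge regression as a stand-in for the linearized training dynamics, where the test prediction is linear in the training labels. Under that assumption, the sample-wise worst case over a budget of label flips can be computed exactly by a greedy argument (no MILP solver needed). All names, data, and the RBF kernel are illustrative, not the paper's implementation.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise squared distances -> RBF kernel matrix (toy stand-in for an NTK).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def worst_case_margin(K_train, k_test, y, budget, reg=1e-3):
    """Exact worst-case test margin under up to `budget` label flips for a
    predictor that is linear in the labels: f(x) = k(x, X) (K + reg*I)^{-1} y.
    Flipping y_i -> -y_i changes f by -2 * c_i * y_i, so the optimal adversary
    simply picks the `budget` flips that most reduce the clean-sign margin."""
    c = np.linalg.solve(K_train + reg * np.eye(len(y)), k_test)
    margin = float(c @ y)
    s = np.sign(margin) if margin != 0 else 1.0
    impact = 2.0 * s * c * y                      # margin reduction per flip
    harmful = np.sort(impact[impact > 0])[::-1][:budget]
    return margin - s * harmful.sum()

# Tiny illustrative setup: linearly separable 2D labels, one test point.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
x_test = np.array([[1.5, 1.5]])

K = rbf_kernel(X, X)
k = rbf_kernel(X, x_test)[:, 0]

m_clean = worst_case_margin(K, k, y, budget=0)
m_worst = worst_case_margin(K, k, y, budget=2)
# The prediction is certifiably robust at this budget iff the worst-case
# margin keeps the clean prediction's sign.
certified = (np.sign(m_worst) == np.sign(m_clean)) and m_worst != 0
```

For general losses and architectures the inner problem is no longer greedily solvable, which is where the paper's MILP reformulation (and its collective certificates over many test nodes at once) comes in.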

inproceedings


ICLR 2025

13th International Conference on Learning Representations. Singapore, Apr 24-28, 2025. Spotlight Presentation.
A* Conference

Authors

M. Sabanayagam • L. Gosch • S. Günnemann • D. Ghoshdastidar

Links

URL • GitHub

Research Areas

 A1 | Statistical Foundations & Explainability

 A3 | Computational Models

BibTeX Key: SGG+25
