Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks

MCML Authors

Debarghya Ghoshdastidar

Prof. Dr.

Principal Investigator

Stephan Günnemann

Prof. Dr.

Principal Investigator

Abstract

Generalization of machine learning models can be severely compromised by data poisoning, where adversarial changes are applied to the training data. This vulnerability has led to interest in certifying (i.e., proving) that such changes up to a certain magnitude do not affect test predictions. We, for the first time, certify Graph Neural Networks (GNNs) against poisoning attacks, including backdoors, targeting the node features of a given graph. Our certificates are white-box and based upon (i) the neural tangent kernel, which characterizes the training dynamics of sufficiently wide networks; and (ii) a novel reformulation of the bilevel optimization problem describing poisoning as a mixed-integer linear program. We note that our framework is more general and constitutes the first approach to derive white-box poisoning certificates for NNs, which can be of independent interest beyond graph-related tasks.
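To make the mechanics concrete, a rough sketch of the kind of formulation involved follows; the notation (feature perturbation δ, budget r, NTK matrix Q_δ, ridge parameter λ) is chosen here for illustration and is not taken verbatim from the paper. Poisoning is a bilevel problem: the attacker perturbs training node features within a budget so as to change a test prediction of the model trained on the perturbed data,

\max_{\delta :\, \|\delta\|_0 \le r} \; \ell\big(f_{\theta^*(\delta)}(x_{\mathrm{test}})\big)
\quad \text{s.t.} \quad
\theta^*(\delta) \in \arg\min_{\theta} \; \mathcal{L}\big(\theta;\, X+\delta,\, y\big).

For sufficiently wide networks the training dynamics are governed by the neural tangent kernel, so the inner training problem can be approximated by a kernel predictor, for example

f_{\theta^*(\delta)}(x_{\mathrm{test}}) \;\approx\; Q_\delta(x_{\mathrm{test}},\, X+\delta)\,\big(Q_\delta(X+\delta,\, X+\delta) + \lambda I\big)^{-1} y,

where Q_δ denotes the NTK evaluated on the perturbed features. A test prediction is certified if its sign (or argmax) cannot be changed by any admissible δ; the paper obtains this guarantee by recasting the resulting optimization as a mixed-integer linear program.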

inproceedings

AdvML-Frontiers @NeurIPS 2024

3rd Workshop on New Frontiers in Adversarial Machine Learning at the 38th Conference on Neural Information Processing Systems. Vancouver, Canada, Dec 10-15, 2024.

Authors

L. Gosch • M. Sabanayagam • D. Ghoshdastidar • S. Günnemann

Research Areas

A1 | Statistical Foundations & Explainability

A3 | Computational Models

BibTeX Key: GSG+24
