Daniel Zügner
Dr.
* Former Member
In this thesis we study graph neural networks (GNNs) from the perspective of adversarial robustness. We generalize the notion of adversarial attacks (small perturbations to the input data, deliberately crafted to mislead a machine learning model) from traditional vector data such as images to graphs. We further propose robustness certification procedures for perturbations of both the node attributes and the graph structure.
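To make the idea of a structure perturbation concrete, here is a minimal toy sketch, not the thesis's actual attack or certification method: a brute-force search for the single edge flip that most hurts a target node's prediction under a one-layer graph-convolution-style model. All data, weights, and names are illustrative assumptions.

```python
import numpy as np

np.random.seed(0)
n, d, c = 5, 4, 2                      # nodes, feature dim, classes
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], float)  # undirected adjacency matrix
X = np.random.randn(n, d)              # node attributes
W = np.random.randn(d, c)              # fixed (pre-trained) weights

def logits_for(A, X, W, node):
    """One propagation step with self-loops and symmetric normalization."""
    A_hat = A + np.eye(len(A))
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    logits = d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W
    return logits[node]

target = 0
clean = logits_for(A, X, W, target)
label = clean.argmax()

# Exhaustively try every single edge flip and keep the one that most
# reduces the classification margin of the target node -- the "small
# perturbation" idea carried over from images to the graph structure.
best_flip, best_margin = None, np.inf
for i in range(n):
    for j in range(i + 1, n):
        A_pert = A.copy()
        A_pert[i, j] = A_pert[j, i] = 1 - A_pert[i, j]  # flip edge (i, j)
        out = logits_for(A_pert, X, W, target)
        margin = out[label] - np.delete(out, label).max()
        if margin < best_margin:
            best_margin, best_flip = margin, (i, j)

print("clean logits:", clean, "-> class", label)
print("worst single edge flip:", best_flip, "margin:", best_margin)
```

A negative margin after the flip would mean the toy model's prediction for the target node has changed; the certification procedures mentioned above aim at the converse guarantee, namely that no perturbation within a given budget can change the prediction.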
BibTeX key: Zue22