Theoretical Foundations of Artificial Intelligence
He is Professor of Theoretical Foundations of Artificial Intelligence at TU Munich.
He conducts research in the theory of machine learning, artificial intelligence, and network science. His research focuses on the statistical understanding and interpretability of machine learning methods. His work provides new insights and algorithms for decision problems involving complex data such as networks and preference relations, which arise in fields including neuroscience, crowdsourcing, and computer vision.
Machine learning models are highly vulnerable to label flipping, i.e., the adversarial modification (poisoning) of training labels to compromise performance. Deriving robustness certificates is therefore important to guarantee that test predictions remain unaffected and to understand worst-case robustness behavior. However, for Graph Neural Networks (GNNs), the problem of certifying robustness to label flipping has so far remained unsolved. We change this by introducing an exact certification method, deriving both sample-wise and collective certificates. Our method leverages the Neural Tangent Kernel (NTK) to capture the training dynamics of wide networks, enabling us to reformulate the bilevel optimization problem representing label flipping as a Mixed-Integer Linear Program (MILP). We apply our method to certify a broad range of GNN architectures in node classification tasks. Concerning worst-case robustness to label flipping, we thereby: (i) establish hierarchies of GNNs on different benchmark graphs; (ii) quantify the effect of architectural choices such as activations, depth, and skip-connections; and, surprisingly, (iii) uncover a novel phenomenon in which robustness plateaus for intermediate perturbation budgets across all investigated datasets and architectures. While we focus on GNNs, our certificates apply to sufficiently wide NNs in general through their NTK. Our work thus presents the first exact certificate against a poisoning attack ever derived for neural networks, which may be of independent interest.
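To illustrate the core idea behind such certificates, the following is a minimal, hypothetical sketch, not the paper's method: in the NTK regime, a kernel ridge regression predictor is linear in the training labels, f(x) = c^T y with c = (K + λI)^{-1} k(x), so the effect of each label flip on the prediction is independent and a sample-wise worst case over a flip budget can be found exactly by taking the most harmful flips. An RBF kernel stands in for the actual NTK Gram matrix, and binary labels in {-1, +1} are assumed; the paper's collective certificates and general losses require the MILP formulation instead.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Stand-in for the NTK Gram matrix (the paper uses the exact NTK).
    d = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def certify_sample(X_train, y_train, x_test, budget, reg=1e-3, gamma=1.0):
    """Sample-wise label-flip certificate for a kernelized linear predictor.

    f(x) = c^T y with c = (K + reg*I)^{-1} k(x); flipping label i changes
    f(x) by -2 * c_i * y_i, and flips act independently, so the worst case
    over <= budget flips is obtained by taking the `budget` most harmful ones.
    Returns True iff no admissible set of flips can change the predicted sign.
    """
    K = rbf_kernel(X_train, X_train, gamma)
    k = rbf_kernel(X_train, x_test[None, :], gamma).ravel()
    c = np.linalg.solve(K + reg * np.eye(len(K)), k)
    f = c @ y_train                       # clean prediction
    delta = -2.0 * c * y_train            # effect of flipping each label i
    pred_sign = np.sign(f)
    # The adversary pushes the prediction toward the opposite sign:
    # the most harmful flips are those with the most negative signed effect.
    harm = np.sort(delta * pred_sign)[:budget]
    worst_f = f + pred_sign * harm[harm < 0].sum()
    return bool(np.sign(worst_f) == pred_sign)

# Tiny example: two well-separated clusters with labels -1 / +1.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.3, (10, 2)), rng.normal(2, 0.3, (10, 2))])
y = np.array([-1.0] * 10 + [1.0] * 10)
print(certify_sample(X, y, np.array([2.0, 2.0]), budget=1))
```

Because the predictor is linear in the labels, this greedy enumeration is already exact for the sample-wise binary case; the MILP machinery in the paper is what makes exact certification possible for collective certificates, where one set of flips must attack many test nodes simultaneously.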
2025-03-17 - Last modified: 2025-03-17