
Research Group Debarghya Ghoshdastidar



Debarghya Ghoshdastidar

Prof. Dr.

Principal Investigator

Theoretical Foundations of Artificial Intelligence

Debarghya Ghoshdastidar is Professor for Theoretical Foundations of Artificial Intelligence at TU Munich.

He conducts research in the theory of machine learning, artificial intelligence and network science. The main focus of his research is on the statistical understanding and interpretability of machine learning methods. His work provides new insights and algorithms for decision problems involving complex data, such as networks and preference relations, that arise in fields including neuroscience, crowdsourcing and computer vision.

Publications @MCML

2024


[1]
M. Sabanayagam, L. Gosch, S. Günnemann and D. Ghoshdastidar.
Exact Certification of (Graph) Neural Networks Against Label Poisoning.
Preprint (Dec. 2024). arXiv
Abstract

Machine learning models are highly vulnerable to label flipping, i.e., the adversarial modification (poisoning) of training labels to compromise performance. Thus, deriving robustness certificates is important to guarantee that test predictions remain unaffected and to understand worst-case robustness behavior. However, for Graph Neural Networks (GNNs), the problem of certifying label flipping has so far been unsolved. We change this by introducing an exact certification method, deriving both sample-wise and collective certificates. Our method leverages the Neural Tangent Kernel (NTK) to capture the training dynamics of wide networks, enabling us to reformulate the bilevel optimization problem representing label flipping into a Mixed-Integer Linear Program (MILP). We apply our method to certify a broad range of GNN architectures in node classification tasks. Thereby, concerning the worst-case robustness to label flipping: (i) we establish hierarchies of GNNs on different benchmark graphs; (ii) we quantify the effect of architectural choices such as activations, depth and skip-connections; and, surprisingly, (iii) we uncover a novel phenomenon of the robustness plateauing for intermediate perturbation budgets across all investigated datasets and architectures. While we focus on GNNs, our certificates are applicable to sufficiently wide NNs in general through their NTK. Thus, our work presents the first exact certificate against a poisoning attack ever derived for neural networks, which could be of independent interest.
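The observation that makes exact certification tractable, namely that an NTK (kernel ridge) prediction is linear in the training labels, can be illustrated with a toy sketch. The snippet below is a hypothetical stand-in for the paper's method, not the actual MILP certificate: it uses a simple RBF kernel in place of a real NTK and brute-force enumerates all flip sets within the budget, which is only feasible for tiny datasets.

```python
import itertools
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # Toy RBF kernel, standing in here for the NTK of a wide network.
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def certify_label_flipping(X_train, y_train, x_test, budget, reg=1e-3):
    """Brute-force sample-wise certificate: returns True iff the sign of the
    test prediction is unchanged under EVERY set of at most `budget` flips."""
    K = rbf_kernel(X_train, X_train)
    k = rbf_kernel(x_test[None, :], X_train)[0]
    # Kernel ridge prediction is LINEAR in the labels y: pred(y) = w @ y.
    # This linearity is what makes exact certification possible at all
    # (the paper exploits it via a MILP; we simply enumerate).
    w = np.linalg.solve(K + reg * np.eye(len(K)), k)
    clean_sign = np.sign(w @ y_train)
    n = len(y_train)
    for b in range(1, budget + 1):
        for flips in itertools.combinations(range(n), b):
            y = y_train.copy()
            y[list(flips)] *= -1.0
            if np.sign(w @ y) != clean_sign:
                return False  # some admissible flip changes the prediction
    return True

# Hypothetical toy data: two well-separated clusters with +/-1 labels.
X = np.array([[-2.0], [-2.1], [-1.9], [2.0], [2.1], [1.9]])
y = np.array([-1.0, -1.0, -1.0, 1.0, 1.0, 1.0])
x_test = np.array([2.0])
print(certify_label_flipping(X, y, x_test, budget=1))
print(certify_label_flipping(X, y, x_test, budget=6))
```

Because the prediction is linear in y, flipping all labels exactly negates it, so no nonzero prediction can be certified at the full budget; conversely, a budget of 0 is trivially certified.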

MCML Authors

Lukas Gosch

Data Analytics & Machine Learning


Stephan Günnemann

Prof. Dr.

Data Analytics & Machine Learning


Debarghya Ghoshdastidar

Prof. Dr.

Theoretical Foundations of Artificial Intelligence