
Research Group Debarghya Ghoshdastidar



Prof. Dr. Debarghya Ghoshdastidar

Principal Investigator

Theoretical Foundations of Artificial Intelligence

Debarghya Ghoshdastidar is Professor of Theoretical Foundations of Artificial Intelligence at TU Munich.

He conducts research on the theory of machine learning, artificial intelligence and network science. The main focus of his research is the statistical understanding and interpretability of machine learning methods. His work provides new insights and algorithms for decision problems involving complex data, such as networks and preference relations, that arise in fields including neuroscience, crowdsourcing and computer vision.

Publications @MCML

2025


[2]
M. Sabanayagam, L. Gosch, S. Günnemann and D. Ghoshdastidar.
Exact Certification of (Graph) Neural Networks Against Label Poisoning.
ICLR 2025 - 13th International Conference on Learning Representations. Singapore, Apr 24-28, 2025. To be published. Preprint available.
Abstract

Machine learning models are highly vulnerable to label flipping, i.e., the adversarial modification (poisoning) of training labels to compromise performance. Thus, deriving robustness certificates is important to guarantee that test predictions remain unaffected and to understand worst-case robustness behavior. However, for Graph Neural Networks (GNNs), the problem of certifying label flipping has so far been unsolved. We change this by introducing an exact certification method, deriving both sample-wise and collective certificates. Our method leverages the Neural Tangent Kernel (NTK) to capture the training dynamics of wide networks, enabling us to reformulate the bilevel optimization problem representing label flipping as a Mixed-Integer Linear Program (MILP). We apply our method to certify a broad range of GNN architectures in node classification tasks. Concerning worst-case robustness to label flipping, we thereby (i) establish hierarchies of GNNs on different benchmark graphs; (ii) quantify the effect of architectural choices such as activations, depth and skip-connections; and (iii), surprisingly, uncover a novel phenomenon in which robustness plateaus for intermediate perturbation budgets across all investigated datasets and architectures. While we focus on GNNs, our certificates are applicable to sufficiently wide NNs in general through their NTK. Thus, our work presents the first exact certificate against a poisoning attack ever derived for neural networks, which could be of independent interest.
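
The certificate described above answers a concrete yes/no question: can any admissible flip of training labels change a given test prediction? As a rough illustration (not the paper's method), the following Python sketch answers that question for a small kernel model by brute-force enumeration of label flips, with an RBF kernel standing in for the NTK; the MILP reformulation in the paper is what makes such certification tractable beyond toy sizes. The data, kernel choice and all names below are illustrative assumptions.

# Minimal sketch of an exact label-flipping certificate, assuming a kernel
# (ridge) regression model as a stand-in for a wide network's NTK predictor.
# Brute force replaces the paper's MILP; the RBF kernel, toy data and names
# are assumptions made for this illustration only.
from itertools import combinations

import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data with labels in {-1, +1}.
X_train = rng.normal(size=(8, 2))
y_train = np.sign(X_train[:, 0])
x_test = np.array([0.5, -0.2])

def rbf(A, B, gamma=1.0):
    # RBF kernel; in the paper's setting this role is played by the NTK.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf(X_train, X_train) + 1e-3 * np.eye(len(X_train))  # ridge term for stability
k_test = rbf(x_test[None, :], X_train)[0]

def predict(y):
    # Kernel ridge regression prediction at the test point for labels y.
    alpha = np.linalg.solve(K, y)
    return float(k_test @ alpha)

clean_sign = np.sign(predict(y_train))

def certified(budget):
    # True iff no flip of at most `budget` training labels changes the
    # sign of the test prediction (an exact, sample-wise certificate).
    for r in range(budget + 1):
        for idx in combinations(range(len(y_train)), r):
            y = y_train.copy()
            y[list(idx)] *= -1
            if np.sign(predict(y)) != clean_sign:
                return False
    return True

for budget in range(4):
    print(f"flip budget {budget}: certified = {certified(budget)}")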

MCML Authors
Lukas Gosch, Data Analytics & Machine Learning
Prof. Dr. Stephan Günnemann, Data Analytics & Machine Learning
Prof. Dr. Debarghya Ghoshdastidar, Theoretical Foundations of Artificial Intelligence


2024


[1]
L. Gosch, M. Sabanayagam, D. Ghoshdastidar and S. Günnemann.
Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks.
AdvML-Frontiers @NeurIPS 2024 - 3rd Workshop on New Frontiers in Adversarial Machine Learning at the 38th Conference on Neural Information Processing Systems (NeurIPS 2024). Vancouver, Canada, Dec 10-15, 2024.
Abstract

Generalization of machine learning models can be severely compromised by data poisoning, where adversarial changes are applied to the training data. This vulnerability has led to interest in certifying (i.e., proving) that such changes up to a certain magnitude do not affect test predictions. We, for the first time, certify Graph Neural Networks (GNNs) against poisoning attacks, including backdoors, targeting the node features of a given graph. Our certificates are white-box and based upon (i) the neural tangent kernel, which characterizes the training dynamics of sufficiently wide networks; and (ii) a novel reformulation of the bilevel optimization describing poisoning as a mixed-integer linear program. We note that our framework is more general and constitutes the first approach to derive white-box poisoning certificates for NNs, which can be of independent interest beyond graph-related tasks.
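
Both certificates build on the neural tangent kernel (NTK) as a closed-form description of how a sufficiently wide network trains. To make that object concrete, here is a minimal numpy sketch of one standard instance, the NTK of an infinitely wide one-hidden-layer ReLU network with both layers trained; the paper uses architecture-specific NTKs (e.g., for GNNs), so this is an assumed illustration rather than the authors' code, and the normalization convention is one of several in the literature.

# Minimal sketch: closed-form NTK of an infinitely wide one-hidden-layer
# ReLU network (NTK parameterization, both layers trained). Illustrative
# only; names and normalization choices are assumptions.
import numpy as np

def relu_ntk(X, Z):
    # NTK between rows of X and rows of Z.
    norm_x = np.linalg.norm(X, axis=1)
    norm_z = np.linalg.norm(Z, axis=1)
    inner = X @ Z.T
    cos = np.clip(inner / np.outer(norm_x, norm_z), -1.0, 1.0)
    theta = np.arccos(cos)
    # NNGP (arc-cosine) kernel of the hidden-layer ReLU features.
    sigma1 = np.outer(norm_x, norm_z) * (np.sin(theta) + (np.pi - theta) * cos) / (2 * np.pi)
    # Derivative kernel of the ReLU (probability both pre-activations are positive).
    sigma1_dot = (np.pi - theta) / (2 * np.pi)
    # Depth-2 NTK: output-layer term plus the backpropagated input-layer term.
    return sigma1 + inner * sigma1_dot

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))
K = relu_ntk(X, X)
print(np.round(K, 3))
print("symmetric:", np.allclose(K, K.T))
print("min eigenvalue:", np.linalg.eigvalsh(K).min())  # should be non-negative up to rounding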

MCML Authors
Lukas Gosch, Data Analytics & Machine Learning
Prof. Dr. Debarghya Ghoshdastidar, Theoretical Foundations of Artificial Intelligence
Prof. Dr. Stephan Günnemann, Data Analytics & Machine Learning