
Graph Representational Learning: When Does More Expressivity Hurt Generalization?


Abstract

Graph Neural Networks (GNNs) are powerful tools for learning on structured data, yet the relationship between their expressivity and predictive performance remains unclear. We introduce a family of premetrics that capture different degrees of structural similarity between graphs and relate these similarities to generalization and, consequently, to the performance of expressive GNNs. By considering a setting where graph labels are correlated with structural features, we derive generalization bounds that depend on the distance between training and test graphs, model complexity, and training set size. These bounds reveal that more expressive GNNs may generalize worse unless their increased complexity is balanced by a sufficiently large training set or a reduced distance between training and test graphs. Our findings link expressivity to generalization, offering theoretical insights supported by empirical results.
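For intuition only (this is a schematic sketch, not the paper's actual theorem; the symbols $d$, $C$, and $n$ are illustrative assumptions), generalization bounds of the kind described in the abstract typically take the form

\[
\mathcal{L}_{\mathrm{test}}(f) \;\le\; \widehat{\mathcal{L}}_{\mathrm{train}}(f) \;+\; \lambda\, d\big(\mu_{\mathrm{train}}, \mu_{\mathrm{test}}\big) \;+\; \mathcal{O}\!\left(\sqrt{\frac{C(f)}{n}}\right),
\]

where $d$ is a structural (pre)metric between the training and test graph distributions, $C(f)$ measures model complexity (larger for more expressive GNNs), and $n$ is the training set size. The trade-off stated in the abstract then follows directly from this shape: raising expressivity increases $C(f)$, which must be offset by a larger $n$ or a smaller distance $d$ for the bound to stay tight.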



Preprint

May 2025

Authors

S. Maskey • R. Paolino • F. Jogl • G. Kutyniok • J. Lutzeyer

Research Area

A2 | Mathematical Foundations

BibTeX Key: MPJ+25
