Image-to-graph transformers can effectively encode image information in graphs but are typically difficult to train and require large annotated datasets. Contrastive learning can increase data efficiency by enhancing feature representations, but existing methods are not applicable to graph labels because they operate on categorical label spaces. In this work, we propose a method enabling supervised contrastive learning for image-to-graph transformers. We introduce two supervised contrastive loss formulations based on graph similarity between label pairs that we approximate using a graph neural network. Our approach avoids tailored data augmentation techniques and can be easily integrated into existing training pipelines. We perform multiple empirical studies showcasing performance improvements across various metrics.
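The core idea of the abstract — a supervised contrastive loss in which positives are weighted by a graph similarity between label pairs rather than by categorical class identity — can be sketched as follows. This is a minimal illustration under assumptions not stated in the abstract: embeddings `z` are L2-normalized, and the pairwise similarity matrix `S` is taken as given (in the described method it is approximated by a graph neural network); the function name and weighting scheme are hypothetical.

```python
import numpy as np

def sim_weighted_contrastive_loss(z, S, tau=0.1):
    """Similarity-weighted supervised contrastive loss (sketch).

    z   : (n, d) array of L2-normalized embeddings
    S   : (n, n) symmetric pairwise graph similarities in [0, 1]
          (assumed given; the paper approximates these with a GNN)
    tau : temperature for the softmax over pairwise logits
    """
    n = z.shape[0]
    logits = z @ z.T / tau
    np.fill_diagonal(logits, -np.inf)  # exclude self-comparisons
    # log-softmax over each row of pairwise logits
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # use graph similarity as a soft positive weight instead of a
    # hard categorical positive mask
    W = S.copy()
    np.fill_diagonal(W, 0.0)
    W = W / np.clip(W.sum(axis=1, keepdims=True), 1e-12, None)
    # zero out the -inf diagonal terms (their weight is already zero)
    safe_log_prob = np.where(np.isfinite(log_prob), log_prob, 0.0)
    return float(-(W * safe_log_prob).sum() / n)
```

Compared with standard supervised contrastive learning, the hard positive mask (same class or not) is replaced by continuous similarity weights, which is what makes the loss applicable to graph-valued labels where no categorical classes exist.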
inproceedings
BibTeXKey: BBL+24