
Relationformer: A Unified Framework for Image-to-Graph Generation

MCML Authors

Georgios Kaissis
Dr. • Associate*
* Former Associate


Volker Tresp
Prof. Dr. • Principal Investigator

Abstract

A comprehensive representation of an image requires understanding objects and their mutual relationships, especially in image-to-graph generation, e.g., road network extraction, blood-vessel network extraction, or scene graph generation. Traditionally, image-to-graph generation is addressed with a two-stage approach consisting of object detection followed by a separate relation prediction, which prevents simultaneous object-relation interaction. This work proposes a unified one-stage transformer-based framework, namely Relationformer, that jointly predicts objects and their relations. We leverage direct set-based object prediction and incorporate the interaction among the objects to learn an object-relation representation jointly. In addition to the existing [obj]-tokens, we propose a novel learnable token, namely the [rln]-token. Together with the [obj]-tokens, the [rln]-token exploits local and global semantic reasoning in an image through a series of mutual associations. In combination with pair-wise [obj]-tokens, the [rln]-token contributes to a computationally efficient relation prediction. We achieve state-of-the-art performance on multiple diverse and multi-domain datasets, demonstrating our approach’s effectiveness and generalizability.
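
The abstract describes relation prediction as a pair-wise readout that combines two [obj]-tokens with the shared [rln]-token. The sketch below illustrates one way such a relation head could look in PyTorch, assuming a simple MLP over the concatenated token embeddings; the class name, dimensions, and concatenation scheme are illustrative assumptions, not the paper's actual implementation (see the linked GitHub repository for that).

import torch
import torch.nn as nn

class PairwiseRelationHead(nn.Module):
    """Illustrative pair-wise relation classifier over transformer tokens.

    Assumes each image yields N [obj]-token embeddings and one shared
    [rln]-token embedding of dimension d; every ordered object pair is
    scored by an MLP over the concatenated (obj_i, obj_j, rln) features.
    """

    def __init__(self, d_model: int = 256, num_rel_classes: int = 2):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 * d_model, d_model),
            nn.ReLU(),
            nn.Linear(d_model, num_rel_classes),
        )

    def forward(self, obj_tokens: torch.Tensor, rln_token: torch.Tensor) -> torch.Tensor:
        # obj_tokens: (B, N, d), rln_token: (B, d)
        B, N, d = obj_tokens.shape
        src = obj_tokens.unsqueeze(2).expand(B, N, N, d)       # token of object i
        dst = obj_tokens.unsqueeze(1).expand(B, N, N, d)       # token of object j
        rln = rln_token[:, None, None, :].expand(B, N, N, d)   # shared [rln]-token
        pair_feats = torch.cat([src, dst, rln], dim=-1)        # (B, N, N, 3d)
        return self.mlp(pair_feats)                            # per-pair relation logits

# Usage: score all object pairs for a batch of 2 images with 5 detected objects each.
head = PairwiseRelationHead(d_model=256, num_rel_classes=2)
logits = head(torch.randn(2, 5, 256), torch.randn(2, 256))
print(logits.shape)  # torch.Size([2, 5, 5, 2])

Because the [rln]-token is shared across all pairs, the per-pair cost reduces to a small MLP evaluation, which is what makes this kind of pair-wise readout computationally efficient.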

inproceedings


ECCV 2022

17th European Conference on Computer Vision. Tel Aviv, Israel, Oct 23-27, 2022.
A* Conference

Authors

S. Shit • R. Koner • B. Wittmann • J. C. Paetzold • I. Ezhov • H. Li • J. Pan • S. Sharifzadeh • G. Kaissis • V. Tresp • B. Menze

Links

DOI • GitHub

Research Areas

 A3 | Computational Models

 C1 | Medicine

BibTeX Key: SKW+22
