
SatSynth: Augmenting Image-Mask Pairs Through Diffusion Models for Aerial Semantic Segmentation

MCML Authors


Daniel Cremers

Prof. Dr.

Director

Abstract

In recent years, semantic segmentation has become a pivotal tool in processing and interpreting satellite imagery. Yet, a prevalent limitation of supervised learning techniques remains the need for extensive manual annotations by experts. In this work, we explore the potential of generative image diffusion to address the scarcity of annotated data in earth observation tasks. The main idea is to learn the joint data manifold of images and labels, leveraging recent advancements in denoising diffusion probabilistic models. To the best of our knowledge, we are the first to generate both images and corresponding masks for satellite segmentation. We find that the obtained pairs not only display high quality in fine-scale features but also ensure a wide sampling diversity. Both aspects are crucial for earth observation data, where semantic classes can vary severely in scale and occurrence frequency. We employ the novel data instances for downstream segmentation as a form of data augmentation. In our experiments, we provide comparisons to prior works based on discriminative diffusion models or GANs. We demonstrate that integrating generated samples yields significant quantitative improvements for satellite semantic segmentation, both over baselines and over training only on the original data.
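The core idea of the abstract can be illustrated with a toy sketch: stack an image and its one-hot segmentation mask into a single joint tensor, then run standard DDPM reverse sampling on that tensor so image and mask are generated together. The epsilon-predicting model below is a hypothetical placeholder (it returns zeros), not the paper's network, and all names and shapes are illustrative assumptions.

```python
import numpy as np

# DDPM noise schedule (standard linear betas; values are illustrative).
T = 50
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def q_sample(x0, t, noise):
    """Forward noising used at training time:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps."""
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * noise

def p_sample(eps_model, x_t, t, rng):
    """One DDPM reverse step under the epsilon-prediction parameterisation."""
    eps = eps_model(x_t, t)
    mean = (x_t - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
    if t > 0:  # no noise is added at the final step
        mean = mean + np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)
    return mean

rng = np.random.default_rng(0)
H = W = 8
C_img, C_cls = 3, 4                    # RGB channels + 4 semantic classes (assumed)

# Sample a joint image+mask tensor starting from pure Gaussian noise.
x = rng.standard_normal((C_img + C_cls, H, W))
dummy_eps_model = lambda x_t, t: np.zeros_like(x_t)   # placeholder denoiser
for t in reversed(range(T)):
    x = p_sample(dummy_eps_model, x, t, rng)

# Split the joint sample back into an image and a discrete label map.
image, mask_logits = x[:C_img], x[C_img:]
mask = mask_logits.argmax(axis=0)      # per-pixel class index
```

Generated `(image, mask)` pairs of this kind would then simply be appended to the real training set as augmentation for the downstream segmentation model.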

inproceedings


CVPR 2024

IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, WA, USA, Jun 17-21, 2024.
A* Conference

Authors

A. Toker • M. Eisenberger • D. Cremers • L. Leal-Taixé

Links

DOI

Research Area

 B1 | Computer Vision

BibTeXKey: TEC+24
