
Deep Bregman Divergence for Self-Supervised Representations Learning


Abstract

Neural Bregman divergence measures the divergence of data points using convex neural networks, going beyond Euclidean distance and capturing divergence over distributions. This non-Euclidean geometry is not well explored in deep representation learning and remains a challenging endeavor for self-supervised representation learning. In this paper, we propose deep Bregman divergences for self-supervised pretext-task learning, where we aim to enhance self-supervised embedding representations by training additional networks based on functional Bregman divergences. Our framework can capture the divergence of embedding distributions and improve the quality of the learned representation using an arbitrary Bregman divergence over data embeddings. Specifically, we develop a novel self-supervised architecture and a new divergence loss that measures the asymmetric distance of arbitrary Bregman divergences of neural networks. We show that the combination of self-supervised contrastive learning and our proposed method outperforms the baseline as well as most established methods for self-supervised and semi-supervised learning on multiple classification and object detection tasks and datasets. Moreover, the learned representations generalize well when transferred to other datasets and tasks.
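For readers unfamiliar with the construction: a Bregman divergence generated by a strictly convex function phi is D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>, which is asymmetric in its arguments and reduces to squared Euclidean distance when phi(x) = ||x||^2. The sketch below illustrates the general idea of parameterizing phi with a convex neural network in PyTorch; ConvexPhi and bregman_divergence are hypothetical names, and this simplified architecture is an assumption for illustration, not the authors' exact method or loss.

    # Illustrative sketch only: phi is kept convex in its input by composing an
    # affine layer with a convex, non-decreasing activation (softplus) and a
    # non-negatively weighted output layer. Not the paper's architecture.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ConvexPhi(nn.Module):
        """phi(x): a small input-convex network."""
        def __init__(self, dim, hidden=64):
            super().__init__()
            self.w1 = nn.Linear(dim, hidden)
            self.w2 = nn.Linear(hidden, 1)

        def forward(self, x):
            h = F.softplus(self.w1(x))  # convex and non-decreasing in the affine map
            # non-negative output weights keep the composition convex
            return F.linear(h, self.w2.weight.clamp(min=0), self.w2.bias).squeeze(-1)

    def bregman_divergence(phi, x, y):
        """D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>."""
        y = y.detach().requires_grad_(True)
        phi_y = phi(y)
        # gradient of phi at y; create_graph=True allows training through the divergence
        grad_y = torch.autograd.grad(phi_y.sum(), y, create_graph=True)[0]
        return phi(x) - phi_y - ((x - y) * grad_y).sum(dim=-1)

    phi = ConvexPhi(dim=128)
    x, y = torch.randn(8, 128), torch.randn(8, 128)
    print(bregman_divergence(phi, x, y).shape)  # torch.Size([8]), one value per pair

Because the gradient term is evaluated only at y, the divergence is generally not symmetric, which is the asymmetry the loss in the abstract refers to.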

Article

Computer Vision and Image Understanding 235.103801 (Oct. 2023).
Top Journal

Authors

M. Rezaei • F. Soleymani • B. Bischl • S. Azizi

Links

DOI

Research Area

 A1 | Statistical Foundations & Explainability

BibTeX Key: RSB+23
