
Implicit Hough Transform Neural Networks for Subspace Clustering

MCML Authors


Thomas Seidl

Prof. Dr.

Director

Abstract

Subspace clustering constitutes a fundamental task in data mining and unsupervised machine learning with myriad applications. We present a novel approach to subspace clustering that detects affine hyperplanes in a given arbitrary-dimensional dataset by explicitly parametrizing them and optimizing their parameters using gradient updates w.r.t. a differentiable loss function. The explicit parametrization allows our model to avoid the exponential search space incurred by models relying on an explicit Hough transform to detect subspaces by searching for high-density points in parameter space. Compared to other existing approaches, our method is highly scalable, can be trained very efficiently on a GPU, is applicable to out-of-sample data, and is amenable to anytime scenarios since training can be stopped at any time and convergence is usually fast. The model can further be viewed as a linear neural network layer and trained end-to-end with an autoencoder to detect arbitrary non-linear correlations. We provide empirical results on a wide array of synthetic datasets with different characteristics following a rigorous evaluation protocol. Our results demonstrate the advantageous properties of our model and additionally reveal that it is particularly robust to jitter and noise present in the data.
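The core mechanism the abstract describes — explicitly parametrizing an affine hyperplane and optimizing its parameters by gradient descent on a differentiable loss — can be illustrated with a toy sketch. The snippet below fits a single hyperplane {x : n·x = b} to data by minimizing the mean squared point-to-plane distance with projected gradient steps. The loss choice, learning rate, and re-normalization scheme are illustrative assumptions, not the authors' implementation, which handles multiple hyperplanes and integrates with an autoencoder.

```python
import numpy as np

def fit_hyperplane(X, lr=0.1, steps=500, seed=0):
    """Fit one affine hyperplane {x : n.x = b} to the rows of X by
    gradient descent on the mean squared point-to-plane distance.
    Illustrative sketch only (assumed loss and update rule)."""
    rng = np.random.default_rng(seed)
    n = rng.normal(size=X.shape[1])
    n /= np.linalg.norm(n)          # unit normal so residuals are distances
    b = 0.0
    for _ in range(steps):
        r = X @ n - b               # signed point-to-plane residuals
        grad_n = 2.0 * (r[:, None] * X).mean(axis=0)
        grad_b = -2.0 * r.mean()
        n -= lr * grad_n
        b -= lr * grad_b
        norm = np.linalg.norm(n)
        n /= norm                   # re-normalize; rescale b to keep the
        b /= norm                   # same plane n.x = b
    return n, b

# Demo: 50 points lying on the line y = 2x + 1
xs = np.linspace(-1.0, 1.0, 50)
X = np.stack([xs, 2.0 * xs + 1.0], axis=1)
n, b = fit_hyperplane(X)
```

A clustering variant would maintain K such hyperplanes and, in each step, assign every point to its nearest plane before updating only that plane's parameters — the abstract's point is that this gradient-based search avoids enumerating the exponential parameter-space grid of a classical Hough transform.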

inproceedings

Workshop @ICDM 2021

Workshop at the 21st IEEE International Conference on Data Mining. Auckland, New Zealand, Dec 07-10, 2021.

Authors

J. Busch • M. Hünemörder • J. Held • P. Kröger • T. Seidl

Links

DOI

Research Area

 A3 | Computational Models

BibTeXKey: BHH+21
