Profile Areas


Spatial and Temporal Machine Learning

Principal Investigators:

  • Prof. Dr. Nassir Navab
  • Prof. Dr. Volker Schmid
  • Prof. Dr. Matthias Schubert

Spatial and spatio-temporal data, i.e. data with a dynamic spatial or time component such as sensor data or camera recordings, play an important role in many applications today. In this profile area, Machine Learning methods will be developed that address the challenges inherent to spatio-temporal data, such as integrating data from different sources and with different resolutions, or processing streaming data in real time. In particular, we will develop new Deep Learning methods for time series analysis, Bayesian methods for image analysis and reinforcement learning techniques for finding optimal strategies in data-driven environments.
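
As a concrete illustration of the first of these directions, the following is a minimal sketch of a deep learning model for time series: an LSTM that predicts the next value of a sensor signal one step ahead. The model size, window length and the synthetic sine-wave data are illustrative assumptions, not part of the profile area's actual methods.

```python
# Minimal sketch: next-step forecasting of a sensor signal with an LSTM
# (PyTorch). All sizes and the toy data are illustrative assumptions.
import torch
import torch.nn as nn

class LSTMForecaster(nn.Module):
    def __init__(self, n_features=1, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_features)

    def forward(self, x):              # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # predict the next step from the last state

# Toy data: sliding windows over a noisy sine wave.
t = torch.linspace(0, 20, 500)
series = torch.sin(t) + 0.1 * torch.randn(500)
windows = series.unfold(0, 33, 1)                # (468, 33) overlapping windows
x = windows[:, :-1].unsqueeze(-1).contiguous()   # inputs: 32 past values
y = windows[:, -1:].contiguous()                 # target: the 33rd value

model = LSTMForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(50):                              # short illustrative training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
print(f"final MSE: {loss.item():.4f}")
```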

The profile area Spatial and Temporal Machine Learning focuses on three main research areas:

  • Learning to Learn Spatio-Temporal Meta Information (Prof. Dr. Nassir Navab) proposes and applies meta-learning methods to improve the training process of neural networks so that it better suits medical applications.
  • Performant Bayesian Learning from Images (Prof. Dr. Volker Schmid) aims at developing computationally efficient approximate methods for Bayesian image analysis and at their application, especially to image segmentation.
  • Optimized Strategies in data-driven Environmental Models (Prof. Dr. Matthias Schubert) develops new techniques for learning the dynamics of environmental models and optimizing policies by means of reinforcement learning.

Learning on Graphs and Networks

Principal Investigators:

  • Prof. Dr. Stephan Günnemann
  • Prof. Dr. Göran Kauermann
  • Prof. Dr. Volker Tresp

In many data-intensive applications, from social media and genome research to mobility, attributed graphs and networks have proven to be a powerful and highly informative data source. In real applications, however, the stored networks are often subject to errors, contain outliers and are highly noisy. Handling such data requires machine learning techniques on graphs that can cope with impure and inaccurate data and are robust to errors. Especially when the edges of a network are stochastic rather than deterministic, statistical models originally developed for social networks in the social sciences can be applied. In general, statistical models and machine learning methods for the analysis of relational data have experienced a significant upswing in the past 5 to 10 years. However, scaling these models and methods to today's high-dimensional networks is still in its infancy.

In this profile area, we focus on machine learning methods for analyzing knowledge graphs. Knowledge graphs have emerged from the tradition of knowledge modeling and the semantic web and represent a significant breakthrough: the Google Knowledge Graph, for example, contains over 100 billion statements and is the basis for search and question-answering dialog systems. For the statistical modeling of knowledge graphs, approaches based on factorizing the adjacency tensor have prevailed (see the sketch below). A particular methodological challenge is the exchange of information and the interaction with unstructured data such as signals, texts, and image and video data, since knowledge graphs are increasingly used, e.g., in communicating agents, in supply chain management and in IoT applications.
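
To make the factorization idea concrete, here is a minimal sketch of a DistMult-style scoring function, one common instance of adjacency-tensor factorization. The tiny graph, the dimension and the random embeddings are illustrative; in practice the embeddings are trained on observed triples.

```python
# Minimal sketch of knowledge-graph tensor factorization in the style of
# DistMult: a triple (s, r, o) is scored as e_s^T diag(w_r) e_o.
# Entities, relations and the random embeddings are toy assumptions;
# real models train the embeddings on observed triples.
import numpy as np

entities = ["berlin", "germany", "munich"]
relations = ["capital_of", "located_in"]
dim = 8

rng = np.random.default_rng(0)
E = rng.normal(size=(len(entities), dim))   # one embedding per entity
W = rng.normal(size=(len(relations), dim))  # one diagonal factor per relation

def score(s: int, r: int, o: int) -> float:
    """Plausibility of the triple (subject, relation, object)."""
    return float(E[s] @ (W[r] * E[o]))

# Rank all candidate objects for the query ("berlin", "capital_of", ?).
query_scores = {o: score(0, 0, i) for i, o in enumerate(entities)}
print(sorted(query_scores, key=query_scores.get, reverse=True))
```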

The profile area Learning on Graphs and Networks includes three main research areas:

  • Representation Learning on Graphs (Prof. Dr. Stephan Günnemann) examines graph analysis via graph embeddings and the generation of graphs.
  • Statistical Analysis of Networks (Prof. Dr. Göran Kauermann) explores statistical models for network analysis, in particular for large network data.
  • Machine Learning with Knowledge Graphs (Prof. Dr. Volker Tresp) aims at generalizing knowledge graph models across different domains and at developing application-specific approaches.

Representation Learning

Principal Investigators:

  • Prof. Dr. Moritz Große-Wentrup
  • Prof. Dr. Hinrich Schütze
  • Prof. Dr. Dr. Fabian Theis

The research field of representation learning involves the automated generation of meaningful features from high-dimensional data sets. Traditionally, such features are constructed manually from high-dimensional observations based on expert knowledge. In recent years, progress in deep learning, combined with an exponential increase in available training data, has enabled the automatic identification of meaningful data characteristics in various fields of application. The focus of this profile area is on the further development of these methods in order to improve their performance and to open up new fields of application for machine learning. Challenges include generating features for heterogeneous data sets, making the methods robust to non-representative observations, reducing the required amount of data while maintaining performance, and ensuring the interpretability of the computed features. The success of the envisioned developments is measured by the benefit for the corresponding application, e.g. by generating new scientific knowledge in a given scientific domain. Therefore, our research in this profile area is strongly linked to specific applications.

The profile area Representation Learning focuses on six main research areas:

  • Causal Representation Learning (Prof. Dr. Moritz Große-Wentrup) extends predictive representation learning from machine learning to causal concepts.
  • Representation Learning for Natural Language (Prof. Dr. Hinrich Schütze) develops a better statistical understanding of word embeddings and extends word-embedding models beyond co-occurrence (see the sketch after this list).
  • Graphical Modelling and Deep Learning in Single-Cell Genomics (Prof. Dr. Dr. Fabian Theis) applies machine learning methods to single-cell genomics.
  • High quality subword vocabulary induction (Prof. Dr. Hinrich Schütze) cooperates with the Computer Science Laboratory for Mechanics and Engineering Sciences (LIMSI) on the development of a neural-network-based method for learning subword vocabularies.
  • Generative models for data integration and state prediction in single-cell transcriptomics (Prof. Dr. Dr. Fabian Theis) builds on the concepts proposed in scGen, first to increase its predictive power and second to make scGen more interpretable by including biologically motivated priors.
  • Deep patient learning with application in ophthalmology (Prof. Dr. Dr. Fabian Theis) aims at improving the clinical usability of deep learning in ophthalmology.
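
The word-embedding project above starts from co-occurrence statistics; as a concrete baseline, the sketch below builds count-based word embeddings via positive pointwise mutual information (PPMI) and a truncated SVD. The toy corpus, window size and embedding dimension are illustrative choices, not the project's actual setup.

```python
# Minimal sketch: count-based word embeddings from co-occurrence counts
# (PPMI + truncated SVD). Corpus, window size and dimension are toy choices.
import numpy as np

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}

# Symmetric co-occurrence counts within a +/-2 token window.
C = np.zeros((len(vocab), len(vocab)))
for i, w in enumerate(corpus):
    for j in range(max(0, i - 2), min(len(corpus), i + 3)):
        if j != i:
            C[idx[w], idx[corpus[j]]] += 1

# Positive pointwise mutual information: log p(w,c) / (p(w) p(c)), clipped at 0.
total = C.sum()
p = C.sum(axis=1, keepdims=True) / total
ppmi = np.maximum(np.log(np.maximum(C / total, 1e-12) / (p @ p.T)), 0)

# Low-dimensional embeddings from the leading singular vectors.
U, S, _ = np.linalg.svd(ppmi)
vectors = U[:, :4] * S[:4]
for w in vocab:
    print(w, vectors[idx[w]].round(2))
```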

Automatic and Explainable Modeling

Principal Investigators:

  • Prof. Dr. Bernd Bischl
  • Prof. Dr. Anne-Laure Boulesteix
  • PD Dr. Fabian Scheipl

Valid benchmarking of machine learning methods is essential to gain robust guarantees for the practical use of models. Successful machine learning involves much more than the efficient optimization of a risk function within a given model: preprocessing, hyperparameter tuning, model selection, and feature generation and selection are central aspects of the modeling process, often critical to the success of a project. For the development of fully automated systems, statistically valid benchmarking is especially important. After a model has been selected and validated, its interpretation is of crucial importance. The models resulting from optimal model selection are often complex, and methods have to be developed to make them understandable. Ideally, this should be done model-agnostically, so that a model diagnosis can be performed independently of the actual (automatic) model selection; a minimal sketch of such a diagnosis follows below.
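
As one concrete example of a model-agnostic diagnosis, the sketch below computes permutation feature importance: it measures how much a fitted model's error grows when a single feature is shuffled. The data and the random-forest model are illustrative stand-ins; any fitted model could be plugged in.

```python
# Minimal sketch of a model-agnostic diagnosis: permutation feature
# importance. Works for any fitted model, independently of how the model
# was (automatically) selected. Data and model are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 3 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500)  # feature 2 is pure noise

model = RandomForestRegressor(random_state=0).fit(X, y)
baseline = mean_squared_error(y, model.predict(X))

for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])    # break the feature-target link
    increase = mean_squared_error(y, model.predict(Xp)) - baseline
    print(f"feature {j}: error increase {increase:.3f}")
```

In practice the importances would be computed on held-out data and averaged over several permutations; the simplified version above only illustrates the model-agnostic principle.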

The profile area Automatic and Explainable Modeling focuses on the following main research areas:

  • Automatic Machine Learning (Prof. Dr. Bernd Bischl) aims at automating crucial aspects of model selection and the configuration of ML pipelines.
  • Benchmarking and Best Practice for Clustering and Feature-Rankings (Prof. Dr. Anne-Laure Boulesteix) concerns statistically valid and reliable analysis of unsupervised learning algorithms.
  • Standardisation, Benchmarking and Interpretation for Functional Data Analysis (PD Dr. Fabian Scheipl) addresses data situations where observations (or parts of observations) are measured as curves, e.g. when generated by sensors.
  • Best of Both Worlds: Statistics and Machine Learning (Prof. Dr. Bernd Bischl) aims to better join foundations and concepts from both scientific areas, especially regarding tree-based models, in order to generate more reliable and interpretable predictive models.
  • Computational Aspects of Component-Wise Gradient Boosting (Prof. Dr. Bernd Bischl) studies matters of efficiency and optimization for this very flexible and general class of models.
  • Explainable AI (Prof. Dr. Bernd Bischl) aims to increase interpretability of machine learning models and of the Automatic Machine Learning process.

Computational Models for Large-Scale Machine Learning

Principal Investigators:

  • Prof. Dr. Christian Böhm
  • Prof. Dr. Peer Kröger
  • Prof. Dr. Thomas Seidl

Large-scale machine learning covers supervised as well as unsupervised analysis of Big Data. The amount of data to be analyzed as well as the number of dimensions increases steadily, and new basic technologies such as distributed computing and parallel processing on graphics cards provide a plethora of new possibilities to learn from large amounts of data. While some machine learning algorithms are easily parallelizable, many architectures have not yet been investigated thoroughly regarding their applicability to large-scale and high-dimensional data. In particular, methods of unsupervised learning such as clustering of high-dimensional data, e.g. subspace clustering or correlation clustering, or community detection in graphs were often developed without a focus on Big Data. With the amount of data, the demand for explainability of analysis results also increases. Interactive approaches can support explainability and exploit expert knowledge by exposing hyperparameters; applications that allow users to select different underlying statistical models could use expert knowledge in an even more nuanced way. Moreover, results that are available quickly and at all times become more and more important, while the time to process and analyze data increases with its amount and dimensionality. Thus, developing anytime algorithms, which are able to deliver results at any time, is another goal (see the sketch below).
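
To illustrate the anytime principle, here is a minimal sketch of an interruptible k-means: after any iteration the current centroids form a valid, if preliminary, clustering, so the algorithm can honor an arbitrary time budget. The budget and the toy data are illustrative assumptions.

```python
# Minimal sketch of an anytime algorithm: k-means that refines its result
# until a time budget expires and can return a valid answer at any point.
# The budget and toy data are illustrative assumptions.
import time
import numpy as np

def anytime_kmeans(X, k, budget_s=0.05, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:  # each pass improves the clustering
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):     # keep empty clusters at their old center
                centers[j] = X[labels == j].mean(axis=0)
    return centers                      # usable whenever we are interrupted

X = np.random.default_rng(1).normal(size=(1000, 2))
print(anytime_kmeans(X, k=3, budget_s=0.05))
```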

To address these aspects of large scale Machine Learning methods, this profile area features three research projects:

  • High-performance Machine Learning (Prof. Dr. Christian Böhm) explores the development of parallel data mining algorithms on different architectures such as distributed systems and GPUs.
  • Unsupervised Anytime-Techniques for Real-Time-Analysis of large Data Sets (Prof. Dr. Peer Kröger) develops new anytime algorithms for unsupervised problems such as clustering and outlier detection and explores their integration in new architectures.
  • Distributed Algorithms for Supervised and Unsupervised Learning (Prof. Dr. Thomas Seidl) systematically explores potential interaction points for data mining algorithms that will lead to new interactive algorithms and examines the deployment of these algorithms to novel architectures.

Computer Vision

Principal Investigators:

  • Prof. Dr. Daniel Cremers
  • Prof. Dr. Laura Leal-Taixé
  • Prof. Dr. Matthias Nießner

Research in computer vision and image analysis is of central importance to the advancement of machine learning methods, because the processing of images is arguably the most important use case for machine learning algorithms. Not surprisingly, some of the most influential innovations in machine learning, such as deep convolutional neural networks (CNNs), emerged in the field of computer vision: the paper of Krizhevsky et al. demonstrated that sufficiently deep convolutional networks provide a drastic boost in performance on the ImageNet classification challenge, and the paper counts over 45000 citations since its publication in 2012. In the wake of this work, deep networks have swept the field of computer vision and are gradually taking over many other areas of data analysis. While deep networks are merely one paradigm in machine learning, we believe that the analysis of images and videos is of central importance and inspiration for the development of novel machine learning algorithms. To reflect this importance, we introduce the new profile area “Computer Vision” with a number of projects that predominantly revolve around the challenge of generalizing deep networks in various ways.
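
For readers unfamiliar with this architecture class, the following is a minimal sketch of a deep convolutional classifier in the spirit of such networks. It is a toy stand-in, not AlexNet itself; all layer sizes are arbitrary choices.

```python
# Minimal sketch of a deep convolutional classifier: stacked convolution,
# nonlinearity and pooling layers followed by a linear classification head.
# A toy stand-in for networks like AlexNet; all sizes are arbitrary.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):                      # x: (batch, 3, 32, 32)
        return self.classifier(self.features(x).flatten(1))

logits = TinyCNN()(torch.randn(4, 3, 32, 32))  # four random 32x32 RGB images
print(logits.shape)                            # torch.Size([4, 10])
```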

The profile area Computer Vision comprises a number of projects that revolve around the challenge of going beyond classical neural networks and generalizing them in various ways:

  • Deep Learning Methods for Time Series Analysis (Prof. Dr. Daniel Cremers) extends modern neural network techniques (attention mechanisms, external memory, neuroevolution, transformer networks) to time series data.
  • Constrained Deep Learning Models (Prof. Dr. Daniel Cremers) focuses on devising techniques for imposing hard constraints on deep networks.
  • Probabilistic Modeling for Learning Systems (Prof. Dr. Daniel Cremers) aims at devising new learning systems which combine the power of deep neural networks with the advantages of probabilistic graphical models.
  • Novel Strategies in Deep Learning (Prof. Dr. Daniel Cremers) is focused on developing novel strategies in deep learning: constructing novel goal-oriented loss functions, providing a more systematic approach to network topology design, and combining concepts of deep neural networks with ideas from evolutionary algorithms.
  • Reconstruction and Analysis of 3D Objects (Prof. Dr. Daniel Cremers) is aimed at developing novel algorithms for 3D shape analysis, including physical and adaptive models for 3D reconstruction from cameras, with a particular emphasis on learning methods for 3D shapes.
  • End-to-end Learnable Video Analysis (Prof. Dr. Laura Leal-Taixé) is focused on deep learning methods for video analysis. The aim is to develop end-to-end learning strategies for object detection, multiple object tracking and automatic video-based data annotation.
  • Efficient 3D Semantic Scene Understanding (Prof. Dr. Matthias Nießner) is focused on 3D neural networks for data processing with a particular emphasis on geometry-aware network operators. While neural networks are traditionally designed for image and video processing, the processing and understanding of the 3D world with suitable neural network approaches is a significant open challenge.
  • Deep Dynamic 3D Scene Understanding (Prof. Dr. Laura Leal-Taixé) aims at bringing learning to point cloud data. Such data arises in applications like autonomous navigation, where the environment around the car is reconstructed as a point cloud, either from lidar or from cameras.
  • Combining Deep Networks and Classical Optimization (Prof. Dr. Daniel Cremers) is focused on exploring relationships between deep learning approaches and classical inverse problems approaches. In particular, we will develop hybrid techniques which combine the advantages of both paradigms.
  • Deep Networks for Visual SLAM (Prof. Dr. Daniel Cremers) is focused on deploying deep learning in the area of visual SLAM. This involves challenges like depth-prediction from a single image and enhancing classical SLAM methods with deep networks.