
Research Group Michael Hedderich


Michael Hedderich

Dr.

JRG Leader Human-Centered NLP

Artificial Intelligence and Computational Linguistics

B2 | Natural Language Processing

Michael Hedderich leads the MCML Junior Research Group 'Human-Centered NLP' at LMU Munich.

His team's research covers the intersection of machine learning, natural language processing (NLP), and human-computer interaction. Human factors play a crucial role in modern AI and NLP development, from how data is obtained (e.g., in low-resource scenarios) to the need to understand and control models (e.g., through global explainability methods). AI technology also does not exist in a vacuum: it must be validated together with the application experts and stakeholders it is meant to serve.

The group explores these questions through the lenses of machine learning, natural language processing, and human-computer interaction. By embracing these diverse perspectives, the researchers value how each viewpoint enriches the understanding of the same issues and how different skill sets complement one another.

Team members @MCML


Florian Eichin

Artificial Intelligence and Computational Linguistics

JRG Human-Centered NLP

B2 | Natural Language Processing

Publications @MCML

[1]
T. Papamarkou, M. Skoularidou, K. Palla, L. Aitchison, J. Arbel, D. Dunson, M. Filippone, V. Fortuin, P. Hennig, J. M. Hernández-Lobato, A. Hubin, A. Immer, T. Karaletsos, M. E. Khan, A. Kristiadi, Y. Li, S. Mandt, C. Nemeth, M. A. Osborne, T. G. J. Rudner, D. Rügamer, Y. W. Teh, M. Welling, A. G. Wilson and R. Zhang.
Position: Bayesian Deep Learning in the Age of Large-Scale AI.
41st International Conference on Machine Learning (ICML 2024). Vienna, Austria, Jul 21-27, 2024.
Abstract

In the current landscape of deep learning research, there is a predominant emphasis on achieving high predictive accuracy in supervised tasks involving large image and language datasets. However, a broader perspective reveals a multitude of overlooked metrics, tasks, and data types, such as uncertainty, active and continual learning, and scientific data, that demand attention. Bayesian deep learning (BDL) constitutes a promising avenue, offering advantages across these diverse settings. This paper posits that BDL can elevate the capabilities of deep learning. It revisits the strengths of BDL, acknowledges existing challenges, and highlights some exciting research avenues aimed at addressing these obstacles. Looking ahead, the discussion focuses on possible ways to combine large-scale foundation models with BDL to unlock their full potential.