He leads the MCML Junior Research Group 'Human-Centered NLP' at LMU Munich.
His team's research covers the intersection of machine learning, natural language processing (NLP), and human-computer interaction. Human factors play a crucial role throughout modern AI and NLP development, from the way data is obtained, e.g. in low-resource scenarios, to the need to understand and control models, e.g. through global explainability methods. AI technology also does not exist in a vacuum: it must be validated together with the application experts and stakeholders it is meant to serve.
The group explores these questions from the complementary perspectives of machine learning, natural language processing, and human-computer interaction. By embracing these diverse viewpoints, the researchers value how each perspective enriches the understanding of the same issues and how different skill sets complement one another.
In the current landscape of deep learning research, there is a predominant emphasis on achieving high predictive accuracy in supervised tasks involving large image and language datasets. However, a broader perspective reveals a multitude of overlooked metrics, tasks, and data types, such as uncertainty, active and continual learning, and scientific data, that demand attention. Bayesian deep learning (BDL) constitutes a promising avenue, offering advantages across these diverse settings. This paper posits that BDL can elevate the capabilities of deep learning. It revisits the strengths of BDL, acknowledges existing challenges, and highlights some exciting research avenues aimed at addressing these obstacles. Looking ahead, the discussion focuses on possible ways to combine large-scale foundation models with BDL to unlock their full potential.
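To make the uncertainty benefits mentioned in the abstract concrete, the sketch below uses Monte Carlo dropout, a common and inexpensive approximation to Bayesian deep learning: dropout is kept active at prediction time and several stochastic forward passes are averaged to obtain a predictive distribution and an uncertainty score. This is an illustrative sketch, not the method proposed in the paper; the architecture, the number of samples, and the helper names (MCDropoutClassifier, predict_with_uncertainty) are assumptions made for the example.

```python
import torch
import torch.nn as nn

class MCDropoutClassifier(nn.Module):
    """Small classifier whose dropout layer doubles as an approximate posterior."""
    def __init__(self, in_dim=16, hidden=64, n_classes=3, p=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Dropout(p),  # kept active at inference to sample weights implicitly
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def predict_with_uncertainty(model, x, n_samples=50):
    # Illustrative helper (not from the paper): average n_samples stochastic
    # forward passes with dropout enabled to approximate the predictive distribution.
    model.train()  # enables dropout; gradients are still disabled by no_grad
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
    )
    mean = probs.mean(dim=0)  # approximate predictive distribution per input
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)  # uncertainty score
    return mean, entropy

model = MCDropoutClassifier()
x = torch.randn(8, 16)  # dummy batch of 8 inputs
mean, entropy = predict_with_uncertainty(model, x)
print(mean.shape, entropy.shape)  # torch.Size([8, 3]) torch.Size([8])
```

The resulting entropy can serve, for example, as a rough acquisition signal in active learning or as a trigger for deferring low-confidence predictions to a human, which is one way the settings listed above (uncertainty, active and continual learning) connect in practice.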