News

27.01.2026

Joint Retreat of MCML and Tuebingen AI Center on the Latest Advances in Computer Vision

Short Recap

In January 2026, two research groups from MCML and the Tuebingen AI Center met for a four-day retreat in Gaschurn, Austria. About 60 researchers joined for talks and poster sessions. The joint retreat …

27.01.2026

Daniel Rückert Speaks at DLD Munich 2026

AI and the Future of Medicine: From Sci-Fi to Your Doctor's Office

Our Director Daniel Rückert spoke at the DLD Conference 2026 as part of the BIOSPHERE Health Track. In his talk, Daniel explored how AI is moving from science fiction into real clinical practice and …

27.01.2026

Fabian Theis Speaks at DLD Munich 2026

From Models to Medicines: AI-Guided Experimental Biology

We are pleased to share that our PI Fabian Theis contributed to the DLD Conference 2026 as part of the BIOSPHERE Health Track. Fabian gave a talk titled “From Models to Medicines: AI-guided Experimental …

27.01.2026

Björn Ommer Speaks at DLD Munich 2026

It’s Gonna Be Wild: When AI Moves Faster Than Society

Our PI Björn Ommer was a featured speaker at DLD Munich 2026 in the session “It’s Gonna Be Wild: When AI Moves Faster Than Society”, together with LMU VP Armin Nassehi. The discussion reflected on the …

22.01.2026

Industry Pitch Talks Recap

With the DeepL Community

On January 20th, we visited DeepL at their Munich office for an edition of our “MCML Pitchtalks with Industry.” DeepL presented their current state-of-the-art work in NLP, and our junior members—Siyao …

22.01.2026

From Global to Regional Explanations: Understanding Models More Locally

MCML Research Insight - With Giuseppe Casalicchio, Thomas Nagler and Bernd Bischl

Machine learning models can be powerful, but understanding why they behave the way they do is often much harder. Early global interpretability tools were designed to show how each feature affects the …