22.01.2026
MCML Research Insight - With Giuseppe Casalicchio, Thomas Nagler and Bernd Bischl
Machine-learning models can be powerful, but understanding why they behave the way they do is often much harder. Early global interpretability tools were designed to show how each feature affects the …
22.01.2026
Andrea Maldonado – Funded by the MCML AI X-Change Program
Between Freudenberg – “happiness mountain” – and Rosenberg – “rose mountain” – I had the pleasure of visiting the Institute of Computer Science (ICS-HSG) at the University of St. Gallen (HSG) in …
15.01.2026
MCML Research Insight - With Dominik Schnaus, Nikita Araslanov, and Daniel Cremers
Vision-language models have shown that images and text can live in a shared space: a picture of a “cat” often lands close to the word “cat” in the embedding space. But such …
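To give a feel for the shared-space idea mentioned above, here is a minimal sketch that scores how close an image lands to a few captions using a public CLIP model from the Hugging Face transformers library. This is only an illustration of the general concept, not the method described in the research insight; the model name "openai/clip-vit-base-patch32", the file "cat.jpg", and the candidate captions are assumptions chosen for the example.

# Illustrative sketch: compare an image against several captions in a
# shared vision-language embedding space (CLIP via transformers).
# "cat.jpg" is a hypothetical local image file used only for this demo.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("cat.jpg")  # hypothetical example image
texts = ["a photo of a cat", "a photo of a dog", "a futuristic city at sunset"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; the softmax shows
# which caption sits closest to the image in the shared embedding space.
probs = outputs.logits_per_image.softmax(dim=-1)
for text, p in zip(texts, probs[0].tolist()):
    print(f"{text}: {p:.3f}")

For a cat photo, the first caption would typically receive the highest score, which is exactly the "a picture of a cat lands close to the word cat" behaviour the teaser refers to.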
08.01.2026
MCML Research Insight - With Johannes Schusterbauer, Pingchuan Ma, Vincent Tao Hu, and Björn Ommer
Image generation models today can create almost anything, like a futuristic city glowing at sunset, a classical painting of your cat, or a realistic spaceship made of glass. But when you ask them to …
18.12.2025
MCML Research Insight - With Leander Girrbach, Yiran Huang, Stephan Alaniz and Zeynep Akata
Using AI and LLMs at work feels almost unavoidable today: they make things easier, but they can also go wrong in important ways. One of the trickiest problems? Gender bias. For example, if you ask …
11.12.2025
MCML Research Insight - With Lu Sang and Daniel Cremers
Ever wondered how a 3D shape can smoothly change — like a robot arm bending or a dog rising from sitting to standing — without complex simulations or hand-crafted data? Researchers from MCML and the …