27.11.2025
Seeing the Bigger Picture – One Detail at a Time
MCML Research Insight - With Rui Xiao, Sanghwan Kim, and Zeynep Akata
Large vision-language models (VLMs) like CLIP (Contrastive Language-Image Pre-training) have changed how AI works with mixed inputs of images and text by learning to connect pictures and words. Given an image with a caption like “a dog playing with a ball”, CLIP learns to link visual patterns (the …
25.11.2025
Daniel Rückert Among the World’s Most Cited Researchers
TUM News
MCML Director Daniel Rückert is listed among the world’s most frequently cited researchers in the Cross-Field category for his work on artificial intelligence in healthcare and medicine. In total, 17 TUM scientists were recognized in the 2025 Highly Cited Researchers rankings by Clarivate, …
24.11.2025
Research Stay at Stanford University
Kun Yuan – Funded by the MCML AI X-Change Program
During my research stay at Stanford University from July to September 2025, I had the pleasure of being part of the research group led by Assistant Professor Serena Yeung in the Department of Biomedical Data Science. My two-month stay in California gave me the opportunity to investigate how public …