25.08.2025
Can AI applied to satellite imagery help us design more liveable cities, improve well-being, and ensure sustainable food production? Ivica Obadić, PhD student at TUM and MCML, develops transparent AI models that not only predict change but also provide actionable insights for urban planners.
18.08.2025
Azade Farshad researches digital twins of patients at TUM and MCML to improve personalized treatment, surgical planning, and training. Using graph-based analysis and multimodal patient data, she builds models that create realistic surgical simulations — helping surgeons preview procedures, spot potential complications, and optimize strategies.
12.08.2025
Tracking actions, not just objects: The Spatiotemporal Action Grounding Challenge and Workshop at ICCV 2025 focuses on detecting and localizing actions both in space and time within complex, real-world videos. Unlike standard action recognition, this task requires identifying when and where an action occurs, pushing models to handle long, diverse, …
07.08.2025
Text-to-image (T2I) models like Stable Diffusion have become masters at turning prompts like “a happy man and a red car” into vivid, detailed images. But what if you want the man to look just a little older, or the car to appear slightly more luxurious without changing anything else? Until now, that level of subtle, subject-specific control was …
06.08.2025
Sven Nyholm, Chair of the Ethics of AI at LMU Munich and PI at MCML, explores one of the most urgent questions in AI: how responsibility, agency, and credit shift when intelligent systems make decisions for us.
04.08.2025
Meet Dominik Bär, MCML junior member and PhD student at LMU exploring how AI can enhance the integrity of social media platforms. Dominik’s work goes beyond just detecting harmful content like hate speech and misinformation. He’s developing AI that understands why content is posted and generates thoughtful, human-like responses - so-called …
31.07.2025
Machine‑learning models can be undermined before training even starts. By silently altering a small share of training labels - marking “spam” as “not‑spam,” for instance - an attacker can cut accuracy by double‑digit percentages. The paper “Exact Certification of (Graph) Neural Networks Against Label Poisoning” by MCML Junior Member Lukas Gosch, …
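The label-flipping attack described above can be illustrated with a minimal sketch (toy binary labels and a hypothetical `poison_labels` helper, not the certification method from the paper): an attacker silently flips a small fraction of training labels before the model is ever fit.

```python
import random

def poison_labels(labels, flip_fraction, seed=0):
    """Flip `flip_fraction` of binary labels (0 <-> 1) at random,
    simulating an attacker who tampers with the training set."""
    rng = random.Random(seed)
    poisoned = labels[:]  # leave the original list untouched
    n_flip = int(len(labels) * flip_fraction)
    for i in rng.sample(range(len(labels)), n_flip):
        poisoned[i] = 1 - poisoned[i]
    return poisoned

clean = [0, 1] * 50                    # 100 clean binary labels
poisoned = poison_labels(clean, 0.1)   # attacker flips 10% of them
changed = sum(c != p for c, p in zip(clean, poisoned))
print(changed)                         # 10 labels were silently corrupted
```

Certification, as studied in the paper, asks the converse question: for a given trained model and input, how many such flips could the training set have contained without changing the model's prediction?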