25.08.2025

Satellite Insights for a Sustainable Future - With Researcher Ivica Obadic

Research Film

Can AI applied to satellite imagery help us design more liveable cities, improve well-being, and ensure sustainable food production? Ivica Obadić, a PhD student at MCML in the group of our PI Xiaoxiang Zhu, develops transparent AI models that not only predict change but also provide actionable insights for urban planners.

This video is part of the project KI Trans, an initiative in collaboration with TüftelLab and Uta Hauck-Thum from Ludwig-Maximilians-Universität München, focused on equipping teachers with the essential skills to navigate AI in schools. The project is funded by the Bundesministerium für Forschung, Technologie und Raumfahrt as part of DATIpilot.

#blog #research #zhu
Related

04.12.2025

World’s First Complete 3D Model of All Buildings Released

Xiaoxiang Zhu’s team releases GlobalBuildingAtlas, a high-resolution 3D map of 2.75 billion buildings for advanced urban and climate analysis.

04.12.2025

When to Say "I’m Not Sure": Making Language Models More Self-Aware

ICLR 2025 research by the groups of David Rügamer and Bernd Bischl introduces methods to make LLMs more reliable by expressing uncertainty.

01.12.2025

Research Stay at Princeton University

Abdurahman Maarouf spent three months at Princeton with the AI X-Change Program, advancing causal ML and studying the effects of short-form video platforms.

28.11.2025

MCML at NeurIPS 2025

MCML researchers are represented with 46 papers at NeurIPS 2025 (37 at the main conference and 9 at workshops).

27.11.2025

Seeing the Bigger Picture – One Detail at a Time

FLAIR, introduced by Zeynep Akata’s group at CVPR 2025, brings fine-grained, text-guided detail recognition to vision-language models.

Back to Top