MCML Internal Pitchtalks

Tobias Schmidt, Rodrigo González, Helmholtz AI / MCML
Valentin Melnychuk, LMU / MCML
Udo Schlegel, LMU / MCML
Ivica Obadic, TUM / MCML

26.03.2025

4:00 pm - 5:45 pm

LMU Munich, Akademiestrasse 7, 1st floor

We will have four talks by Junior Members of the MCML. Afterwards, at 6 pm, the MCML Stammtisch will take place at the Türkenhof. We are looking forward to seeing you there.


Agenda

04:00 PM - 04:05 PM

Quick welcome: Patrick Kolpaczki, Thomas Meier


04:05 PM - 04:30 PM

Speakers: Tobias Schmidt, Rodrigo González (Helmholtz AI)

Title: Self-supervised contrastive learning performs non-linear system identification

Abstract: Self-supervised learning (SSL) approaches have brought tremendous success across many tasks and domains. It has been argued that these successes can be attributed to a link between SSL and identifiable representation learning: Temporal structure and auxiliary variables ensure that latent representations are related to the true underlying generative factors of the data. Here, we deepen this connection and show that SSL can perform system identification in latent space. We propose DynCL, a framework to uncover linear, switching linear and non-linear dynamics under a non-linear observation model, give theoretical guarantees and validate them empirically.
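As a rough illustration of the underlying idea (a minimal sketch, not the DynCL implementation; the encoder architecture, dynamics model, and hyperparameters below are assumptions), a contrastive objective can be trained so that a learned latent dynamics model maps the embedding of one observation onto the embedding of the next:

```python
# Minimal sketch (illustrative, not the authors' code): contrastive learning over
# temporally adjacent observations with a learned latent dynamics model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, obs_dim: int, latent_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim)
        )

    def forward(self, x):
        return self.net(x)

def contrastive_dynamics_loss(encoder, dynamics, x_t, x_tp1, temperature=0.1):
    """InfoNCE-style loss: the predicted next latent should match the encoding
    of the true next observation, against in-batch negatives."""
    z_t = encoder(x_t)                        # (B, d)
    z_tp1 = encoder(x_tp1)                    # (B, d)
    z_pred = dynamics(z_t)                    # forward model in latent space
    logits = z_pred @ z_tp1.T / temperature   # (B, B) similarity matrix
    targets = torch.arange(x_t.size(0))       # positives on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: random observations, linear latent dynamics (assumed for illustration).
obs_dim, latent_dim, B = 10, 3, 128
encoder = Encoder(obs_dim, latent_dim)
dynamics = nn.Linear(latent_dim, latent_dim)
x_t, x_tp1 = torch.randn(B, obs_dim), torch.randn(B, obs_dim)
loss = contrastive_dynamics_loss(encoder, dynamics, x_t, x_tp1)
loss.backward()
```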


04:30 PM - 04:55 PM

Speaker: Valentin Melnychuk (LMU)

Title: Causal ML for predicting treatment outcomes

Abstract: Causal machine learning (ML) offers flexible, data-driven methods for predicting treatment outcomes, with large potential for personalizing decision-making in medicine and management. Here, we explore recent advances in Causal ML and their relevance for translation into medical and business decision-making, often motivated by practical considerations. (1) Many existing Causal ML methods target standard settings, while several specialized settings that arise in practice have been explored only recently (e.g., Causal ML for dosage combinations or for time-series settings). (2) Causal ML often generates only point estimates, while decision-making, especially in medicine, requires uncertainty estimates. (3) Causal ML rests on formal assumptions that typically cannot be tested; here, new methods such as causal sensitivity analysis help improve reliability in real-world settings.


04:55 PM - 05:20 PM

Speaker: Udo Schlegel (LMU)

Title: Bridging the Gap: Extracting and Communicating Explanations for Time Series Models

Abstract: As explainable AI (XAI) continues to evolve, numerous techniques have been developed to extract explanations from model behaviors. However, the challenge of effectively communicating these explanations to human users remains largely unaddressed. In this talk, I will first introduce methods for extracting and evaluating explanations in time series classification models, highlighting their strengths and limitations. I will then explore strategies for making these explanations more interpretable and actionable for users. By improving the communication of model insights, we can enhance trust, facilitate debugging, and enable the refinement of both models and data, ultimately bridging the gap between AI systems and human understanding.
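As a generic example of the kind of attribution method used for time series classifiers (a minimal sketch, not taken from the talk; the model and window size are assumptions), occlusion-based saliency masks one window of the series at a time and records the drop in the predicted class probability:

```python
# Minimal sketch (illustrative): occlusion-based attribution for a time series
# classifier -- mask one window at a time and measure the probability drop.
import torch
import torch.nn as nn

def occlusion_saliency(model, x, target, window=8, baseline=0.0):
    """x: (1, channels, length) time series; returns per-timestep importance."""
    model.eval()
    with torch.no_grad():
        base_prob = torch.softmax(model(x), dim=-1)[0, target]
        saliency = torch.zeros(x.shape[-1])
        for start in range(0, x.shape[-1], window):
            x_masked = x.clone()
            x_masked[..., start:start + window] = baseline  # occlude one window
            prob = torch.softmax(model(x_masked), dim=-1)[0, target]
            saliency[start:start + window] = base_prob - prob
    return saliency

# Toy usage with an assumed 1D-CNN classifier over 3 classes.
model = nn.Sequential(nn.Conv1d(1, 8, 5, padding=2), nn.ReLU(),
                      nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(8, 3))
x = torch.randn(1, 1, 128)
scores = occlusion_saliency(model, x, target=0)
print(scores.shape)  # torch.Size([128])
```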


05:20 PM - 05:45 PM

Speaker: Ivica Obadic (TUM)

Title: Towards Interpretable Graph Neural Networks for Computer Vision

Abstract: Deep learning models based on graph neural networks have emerged as a popular approach for solving computer vision problems. They encode the image into a graph structure and can be beneficial for efficiently capturing the long-range dependencies typically present in remote sensing imagery. However, an important drawback of these methods is their black-box nature, which may hamper their wider usage in critical applications. In this talk, we tackle this problem by presenting our Interpretable Window Vision GNN (i-WiViG) approach, which provides explanations by automatically identifying the relevant subgraphs for the model prediction. This is achieved with window-based image graph processing that constrains the node receptive field to a local image region and by using a self-interpretable graph bottleneck that ranks the importance of the long-range relations between the image regions. We evaluate our approach on remote sensing classification and regression tasks, showing it achieves competitive performance while providing inherent and faithful explanations through the identified relations. Further, the quantitative evaluation reveals that our model reduces the infidelity of post-hoc explanations compared to other Vision GNN models, without sacrificing explanation sparsity.
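A minimal sketch of the window-based graph construction idea (illustrative only, not the i-WiViG implementation; the grid and window sizes are assumptions): image patches are connected only to patches within the same non-overlapping window, which constrains each node's receptive field to a local image region.

```python
# Minimal sketch (illustrative): build edges of an image graph so that patches
# are connected only within non-overlapping windows of the patch grid.
import torch

def window_graph_edges(h, w, window=4):
    """Return an edge index of shape (2, E) connecting all pairs of patches
    that fall into the same non-overlapping window of an h x w patch grid."""
    idx = torch.arange(h * w).reshape(h, w)
    edges = []
    for i in range(0, h, window):
        for j in range(0, w, window):
            nodes = idx[i:i + window, j:j + window].reshape(-1)
            src = nodes.repeat_interleave(nodes.numel())
            dst = nodes.repeat(nodes.numel())
            edges.append(torch.stack([src, dst]))
    return torch.cat(edges, dim=1)

# Toy usage: a 16x16 patch grid with 4x4 windows.
edge_index = window_graph_edges(h=16, w=16, window=4)
print(edge_index.shape)  # torch.Size([2, 4096])
```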


05:45 PM

Wrap-Up and Closing.


06:00 PM

MCML-Stammtisch at Türkenhof

Happy to see you there.


This event is for MCML members only.

Organized by:

Thomas Meier, MCML

Patrick Kolpaczki, LMU / MCML


Related

Pitchtalk Series  •  15.07.2025  •  Friedrich-Ludwig-Bauer-Straße 5, 85748 Garching bei München

MCML Industry Pitchtalks With SAP

MCML Pitchtalks: Join us on July 15th at SAP Labs Garching to explore AI Agents and Generative AI with MCML members and SAP researchers.


Pitchtalk Series  •  10.04.2025  •  Bain & Company, Karlsplatz 1, 80335 München

MCML Industry Pitchtalks With Bain & Company

On April 10th, we are invited to Bain & Company for a meetup in our series MCML Pitchtalks with Industry.


Pitchtalk Series  •  02.04.2025  •  Ludwigstrasse 33, 1st floor, Seminar room of the Statistics Institute

MCML Industry Pitchtalks With appliedAI

MCML Pitchtalks: On April 2nd, appliedAI and partners join us to discuss AI Agentic systems, RAG, Generative AI, and AGI-benchmarking. Fully booked!