
AI Keynote Series

Representation Learning: A Causal Perspective

Yixin Wang, University of Michigan

18.07.2024

5:00 pm - 6:30 pm

Online via Zoom

Representation learning aims to create low-dimensional representations that capture essential features of high-dimensional data, such as images and texts. Ideally, these representations should efficiently capture meaningful, non-spurious features and be disentangled for interpretability. However, defining and enforcing these qualities is challenging.

This talk presents a causal perspective on representation learning. The desiderata for effective representation learning are formalized using counterfactual concepts, leading to metrics and algorithms designed to achieve efficient, non-spurious, and disentangled representations. The talk covers the theoretical foundations of these algorithms and demonstrates their performance in both supervised and unsupervised settings.
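For readers less familiar with the topic, the sketch below illustrates only the generic idea of representation learning: a small autoencoder (in PyTorch) that compresses high-dimensional inputs into a low-dimensional code. The dimensions and random training data are placeholders, and this is a minimal baseline, not the causal approach presented in the talk.

import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, input_dim: int = 784, latent_dim: int = 16):
        super().__init__()
        # Encoder maps high-dimensional data to a low-dimensional representation.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Decoder reconstructs the input from that representation.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

# Toy training loop on random data, standing in for image or text features.
model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
data = torch.rand(256, 784)
for _ in range(10):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(data), data)
    loss.backward()
    optimizer.step()

codes = model.encoder(data)  # low-dimensional representations of the inputs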

Organized by:

Institute of AI in Management, LMU Munich


Related


Colloquium  •  25.06.2025  •  LMU Department of Statistics and via Zoom

Practical Causal Reasoning as a Means for Ethical ML

25.06.25, 4:15-5:45 pm: Isabel Valera (Uni Saarbrücken) explores fairness in ML and introduces DeCaFlow, a causal model for counterfactuals.



Colloquium  •  11.06.2025  •  LMU Department of Statistics and via Zoom

Veridical Data Science and PCS Uncertainty Quantification

11.06.25, 4:15-5:45 pm: Bin Yu (UC Berkeley) on how PCS improves AI reliability by tackling hidden uncertainty in data science decisions.