12.06.2025
Why Causal Reasoning Is Crucial for Reliable AI Decisions
MCML Research Insight - With Christoph Kern, Unai Fischer-Abaigar, Jonas Schweisthal, Dennis Frauen, Stefan Feuerriegel and Frauke Kreuter
As algorithms increasingly make decisions that impact our lives, from managing city traffic to recommending hospital treatments, one question becomes urgent: Can we trust them?
«Causal reasoning offers a powerful but also necessary foundation for improving the safety and reliability of ADM»
Christoph Kern et al.
MCML Associate
In a recent Comment published in Nature Computational Science, our Associate Christoph Kern and our Junior Members Unai Fischer-Abaigar, Jonas Schweisthal, and Dennis Frauen, together with our PIs Stefan Feuerriegel and Frauke Kreuter and collaborators Rayid Ghani and Mihaela van der Schaar, argue that algorithmic decision-making (ADM) systems must be grounded in causal reasoning to be reliable. The reason is simple: ADM systems don’t just predict outcomes; they change them. If we want our models to be meaningful in the real world, they must capture and model cause-and-effect relationships.
That’s because every decision, whether it’s selecting a treatment, adjusting traffic lights, or targeting social policies, actively influences the outcome. This makes decision-making fundamentally different from passive prediction. Crucially, we can never observe the outcomes of decisions that were not taken, so we must reason about counterfactuals: what would have happened if we had chosen differently?
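To make this concrete, here is a minimal sketch (our illustration, not from the Comment) of the fundamental problem of causal inference: each unit has two potential outcomes, but the data only ever reveal the one corresponding to the decision actually taken.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Hypothetical potential outcomes per unit:
# y0 = outcome if the decision is not taken, y1 = outcome if it is taken.
y0 = rng.normal(size=n)
y1 = y0 + 2.0  # assume, purely for illustration, the decision raises the outcome by 2

t = rng.integers(0, 2, size=n)           # the decision actually taken per unit
y_observed = np.where(t == 1, y1, y0)    # only one outcome is ever revealed

for i in range(n):
    counterfactual = y0[i] if t[i] == 1 else y1[i]
    print(f"unit {i}: decision={t[i]}, observed={y_observed[i]:.2f}, "
          f"counterfactual (never observed)={counterfactual:.2f}")
```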
To make such reasoning valid, explicit causal assumptions are necessary. These assumptions allow us to link the causal estimand (what we ultimately want to know) to the statistical estimand (what we can actually estimate from data). Without these links, even the most accurate-looking model can be misleading.
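As a textbook illustration of such a link (our notation, not taken from the Comment): the average treatment effect is a causal estimand, and only under explicit causal assumptions does it reduce to a statistical estimand that data can answer.

```latex
% Causal estimand: the average treatment effect (what we want to know)
\tau = \mathbb{E}[Y(1)] - \mathbb{E}[Y(0)]

% Under the causal assumptions of unconfoundedness, (Y(0), Y(1)) \perp T \mid X,
% and positivity, 0 < P(T = 1 \mid X) < 1, it equals a statistical estimand
% (what we can estimate from data):
\tau = \mathbb{E}_X\big[\, \mathbb{E}[Y \mid T = 1, X] - \mathbb{E}[Y \mid T = 0, X] \,\big]
```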
The authors highlight two key challenges here, illustrated by the sketch after this list:
- Identifiability: Can the causal estimand (the true effect of a decision) be expressed using observable data?
- Estimability: Can the statistical estimand be estimated reliably from the finite data at hand?
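A minimal simulated sketch of both challenges, under assumptions we choose for illustration (a single confounder X; all names hypothetical): the naive difference in means does not identify the causal effect, while adjusting for X yields a finite-data estimate close to the truth.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical confounded setting: X drives both the decision T and the outcome Y.
x = rng.normal(size=n)
t = rng.binomial(1, 1 / (1 + np.exp(-2 * x)))    # high-X units are treated more often
y = 1.0 * t + 3.0 * x + rng.normal(size=n)       # true effect of T on Y is 1.0

# Naive comparison of observed means: biased, because it ignores the confounder X.
naive = y[t == 1].mean() - y[t == 0].mean()

# Identification via adjustment: stratify on (a discretized) X, then average.
cuts = np.quantile(x, np.linspace(0, 1, 21)[1:-1])
bins = np.digitize(x, cuts)
effects, weights = [], []
for b in np.unique(bins):
    m = bins == b
    if t[m].min() != t[m].max():                 # positivity: both decisions observed
        effects.append(y[m & (t == 1)].mean() - y[m & (t == 0)].mean())
        weights.append(m.sum())
adjusted = np.average(effects, weights=weights)

print("true effect:       1.00")
print(f"naive estimate:    {naive:.2f}")          # far from 1.0
print(f"adjusted estimate: {adjusted:.2f}")       # close to 1.0
```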
To clarify how the decision problem maps to estimable quantities, given causal assumptions and observable data, the authors propose the following diagram:

[Figure: from the decision-making objective to the model estimate. ©Kern et al., first published in Nature Computational Science 5, 356–360 (2025) by Springer Nature.]
This figure starts with the decision-making objective and the available interventions or treatments, and shows how these decisions are connected to quantities that can be estimated from data: first by defining the causal estimand (the effect we want to know), then by linking it to the statistical estimand (what we can estimate from data), and finally by producing the model estimate, the result the algorithm computes. This path only works if the causal assumptions are made explicit. The figure is a powerful reminder: without a clear causal framework, data-driven models can produce misleading or unreliable decisions.
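To make the last step of this path concrete, here is a minimal plug-in sketch in the same hypothetical unconfounded setting as above (our illustration, not the authors’ pipeline): the causal estimand is linked to a statistical estimand, which a fitted model then approximates.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

# Same hypothetical setup as above: X confounds decision T and outcome Y.
x = rng.normal(size=n)
t = rng.binomial(1, 1 / (1 + np.exp(-2 * x)))
y = 1.0 * t + 3.0 * x + rng.normal(size=n)

# 1) Causal estimand: tau = E[Y(1)] - E[Y(0)]          (what we want to know)
# 2) Statistical estimand, under unconfoundedness given X:
#    tau = E_X[ E[Y | T=1, X] - E[Y | T=0, X] ]        (expressible from data)
# 3) Model estimate: fit E[Y | T, X] per decision arm and plug in.
def fit_linear(xs, ys):
    A = np.column_stack([np.ones_like(xs), xs])
    coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return coef

b1 = fit_linear(x[t == 1], y[t == 1])
b0 = fit_linear(x[t == 0], y[t == 0])
mu1 = b1[0] + b1[1] * x    # predicted outcome under treatment, for every unit
mu0 = b0[0] + b0[1] * x    # predicted outcome under no treatment, for every unit

print(f"model estimate of tau: {(mu1 - mu0).mean():.2f}  (true value: 1.00)")
```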
The Comment further tackles practical issues such as distribution shifts, uncertainty, performativity and benchmarking, exploring how algorithmic decisions can shape future data and outcomes.
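As a toy illustration of performativity (our sketch, not an example from the Comment), the following simulation shows how a decision rule can shift the data a system sees in later rounds, so that statistics computed on its own selected sample drift away from the population:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical feedback loop: a threshold rule decides who is selected, and
# the next round's data contain only the units the rule selected.
x = rng.normal(size=10_000)
y = 0.5 * x + rng.normal(scale=0.5, size=x.size)

population_mean = y.mean()
threshold = 0.0
for round_ in range(3):
    selected = x > threshold
    observed_mean = y[selected].mean()
    print(f"round {round_}: threshold={threshold:.1f}, "
          f"observed mean={observed_mean:.2f}, "
          f"population mean={population_mean:.2f}")
    # The decision rule itself shifts which units enter the next dataset:
    threshold += 0.5
```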
Rather than an optional feature, causal reasoning is a core requirement for creating ADM systems that earn our trust and meet real-world standards.
Read the full comment, published in Nature Computational Science, to gain an in-depth understanding of the future of reliable algorithmic decision-making, and to learn why causality must lie at its core:
Algorithms for reliable decision-making need causal reasoning. Nature Computational Science 5, 356–360 (May 2025).