12.06.2025


Why Causal Reasoning Is Crucial for Reliable AI Decisions

MCML Research Insight - With Christoph Kern, Unai Fischer-Abaigar, Jonas Schweisthal, Dennis Frauen, Stefan Feuerriegel and Frauke Kreuter

As algorithms increasingly make decisions that impact our lives, from managing city traffic to recommending hospital treatments, one question becomes urgent: Can we trust them?

«Causal reasoning offers a powerful but also necessary foundation for improving the safety and reliability of ADM»


Christoph Kern et al.

MCML Associate

In a recent Comment published in Nature Computational Science, our Associate Christoph Kern and our Junior Members Unai Fischer-Abaigar, Jonas Schweisthal, and Dennis Frauen argue, alongside our PIs Stefan Feuerriegel and Frauke Kreuter and collaborators Rayid Ghani and Mihaela van der Schaar, that algorithmic decision-making (ADM) systems can only be reliable if they are grounded in causal reasoning. The reason is simple: ADM systems don’t just predict outcomes; they change them. If we want our models to be meaningful in the real world, they must understand and model cause-and-effect relationships.

That’s because every decision, whether it’s selecting a treatment, adjusting traffic lights, or targeting social policies, actively influences the outcome. This makes decision-making fundamentally different from passive prediction. Crucially, we can never observe the outcome of the decisions we did not take, so we must reason about counterfactuals: what would have happened if we had chosen differently?
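
To see why this matters, consider a minimal simulated example (our own hypothetical sketch in Python, not code from the Comment): severity drives both who gets treated and how patients fare, and since only one potential outcome per patient is ever observed, a naive comparison of treated and untreated groups gets the effect badly wrong.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical setup: sicker patients (higher x) are more likely to be
# treated, and higher severity also lowers the outcome on its own.
x = rng.normal(size=n)                           # confounder: severity
t = rng.binomial(1, 1 / (1 + np.exp(-2 * x)))    # treatment depends on x

# Potential outcomes: the treatment truly helps everyone by +1.0.
y0 = -x + rng.normal(size=n)     # outcome without treatment
y1 = y0 + 1.0                    # outcome with treatment
y = np.where(t == 1, y1, y0)     # only one of the two is ever observed

# Naive comparison of treated vs. untreated is badly biased ...
print("naive estimate:", y[t == 1].mean() - y[t == 0].mean())
# ... while the true causal effect is +1.0:
print("true effect:   ", (y1 - y0).mean())
```

With this setup, the naive estimate even turns negative: the treatment looks harmful simply because sicker patients are more likely to receive it.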


To make such reasoning valid, explicit causal assumptions are necessary. These assumptions allow us to link the causal estimand (what we ultimately want to know) to the statistical estimand (what we can actually estimate from data). Without these links, even the most accurate-looking model can be misleading.

The authors highlight two key challenges here:

  • Identifiability: Can the causal estimand (the true effect of a decision) be expressed using observable data?
  • Estimatability: Can we compute the statistical estimate reliably, given finite data? (The sketch after this list illustrates both.)
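
To make both notions concrete, here is a minimal sketch (again our own illustration, assuming the confounder x blocks all backdoor paths): identifiability is the step that rewrites the causal estimand as an adjustment formula over observables, and estimatability is the step of fitting that formula from a finite sample.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Same hypothetical confounded setup as above.
x = rng.normal(size=n)
t = rng.binomial(1, 1 / (1 + np.exp(-2 * x)))
y = -x + 1.0 * t + rng.normal(size=n)

# Identifiability: if x blocks all backdoor paths, the causal estimand
# E[Y(1) - Y(0)] equals the statistical estimand
#   E_x[ E[Y | T=1, X=x] - E[Y | T=0, X=x] ]   (backdoor adjustment).

# Estimatability: with finite data we plug in a fitted outcome model,
# here an ordinary least-squares regression of y on (1, t, x).
design = np.column_stack([np.ones(n), t, x])
beta, *_ = np.linalg.lstsq(design, y, rcond=None)

# Predict everyone's outcome under t=1 and under t=0, then average.
mu1 = np.column_stack([np.ones(n), np.ones(n), x]) @ beta
mu0 = np.column_stack([np.ones(n), np.zeros(n), x]) @ beta
print("adjusted estimate:", (mu1 - mu0).mean())   # close to the true +1.0
```

The adjusted estimate recovers the true effect of +1.0 that the naive comparison missed. If x were unobserved, no amount of data would help: the causal estimand would simply not be identifiable.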

To clarify how the decision problem maps to estimable quantities, given causal assumptions and observable data, the authors propose the following diagram:

The decision problem diagram

First published in Nature Computational Science 5, pages 356–360 (2025) by Springer Nature.

The figure starts with the decision-making objective and the available interventions (treatments) and illustrates how these decisions connect to quantities that can be estimated from data: first by defining the causal estimand (the effect we want to know), then by linking it to the statistical estimand (what we can estimate from data), and finally by producing the model estimate, the result the algorithm computes. This path only works if the causal assumptions are made explicit. The figure is a powerful reminder: without a clear causal framework, data-driven models can produce misleading or unreliable decisions.


The Comment further tackles practical issues such as distribution shifts, uncertainty, performativity and benchmarking, exploring how algorithmic decisions can shape future data and outcomes.
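
Performativity is easiest to see in a toy feedback loop. The sketch below (a hypothetical lending scenario of our own, not from the Comment) observes default outcomes only for approved applicants and then naively updates its policy on that post-decision data, so each round reshapes the very distribution the system learns from.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# True default probabilities; the population rate is 50% on average.
risk = rng.uniform(size=n)
threshold = 0.5          # approve applicants whose risk is below this

for round_ in range(3):
    approved = risk < threshold
    # Selective labels: defaults are only observed for approved applicants.
    labels = rng.binomial(1, risk[approved])
    observed_rate = labels.mean()
    print(f"round {round_}: approved {approved.mean():.0%}, "
          f"observed default rate {observed_rate:.1%}")
    # A deliberately naive policy update that trusts the post-decision
    # data tightens approvals further each round, amplifying the shift.
    threshold = 1.5 * observed_rate
```

The observed default rate drifts further from the population rate of 50% each round, even though the applicant pool never changes; only the decisions do.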

Rather than an optional feature, causal reasoning is a core requirement for creating ADM systems that earn our trust and meet real-world standards.


Read the full comment, published in Nature Computational Science, to gain an in-depth understanding of the future of reliable algorithmic decision-making, and to learn why causality must lie at its core:

C. Kern, U. Fischer-Abaigar, J. Schweisthal, D. Frauen, R. Ghani, S. Feuerriegel, M. van der Schaar and F. Kreuter.
Algorithms for reliable decision-making need causal reasoning.
Nature Computational Science 5, 356–360 (2025). DOI
Abstract

Decision-making inherently involves cause–effect relationships that introduce causal challenges. We argue that reliable algorithms for decision-making need to build upon causal reasoning. Addressing these causal challenges requires explicit assumptions about the underlying causal structure to ensure identifiability and estimatability, which means that the computational methods must successfully align with decision-making objectives in real-world tasks. Algorithmic decision-making (ADM) has become common in a wide range of domains, including precision medicine, manufacturing, education, hiring, the public sector, and smart cities. At the core of ADM systems are data-driven models that learn from data to recommend decisions, often with the goal of maximizing a defined utility function. For example, in smart city contexts, ADM is frequently used to optimize traffic flow through predictive models that analyze real-time data, thereby reducing congestion and improving urban mobility. Another prominent application area for ADM is normative decision support systems (often subsumed under ‘prescriptive analytics’) or, more recently, artificial intelligence (AI) agents that either inform or automatically execute managerial and operational decisions in industry. Yet, the applications of ADM to high-stakes decisions face safety and reliability issues. Often, the objectives of ADM systems fail to align with the nuanced goals of real-world decision-making, thus creating a tension between the potential of ADM and the risk of harm and failure. Especially when deployed in dynamic, real-world environments, ADM can amplify systemic disadvantages for vulnerable communities and lead to flawed decisions. In this Comment, we argue that reliable algorithmic decision-making — systems that perform safely and robustly under deployment conditions — must be grounded in causal reasoning.


Share Your Research!


Get in touch with us!

Are you an MCML Junior Member and interested in showcasing your research on our blog?

We’re happy to feature your work. Get in touch with us to present your paper.

#blog #research #feuerriegel #kern #kreuter
