12.06.2025


Why Causal Reasoning Is Crucial for Reliable AI Decisions

MCML Research Insight - With Christoph Kern, Unai Fischer-Abaigar, Jonas Schweisthal, Dennis Frauen, Stefan Feuerriegel and Frauke Kreuter

As algorithms increasingly make decisions that impact our lives, from managing city traffic to recommending hospital treatments, one question becomes urgent: Can we trust them?

«Causal reasoning offers a powerful but also necessary foundation for improving the safety and reliability of ADM»


Christoph Kern et al.

MCML Associate

In a recent Comment published in Nature Computational Science, our Associate Christoph Kern and our Junior Members Unai Fischer-Abaigar, Jonas Schweisthal, and Dennis Frauen argue, alongside our PIs Stefan Feuerriegel and Frauke Kreuter and collaborators Rayid Ghani and Mihaela van der Schaar, that algorithmic decision-making (ADM) systems can be reliable only if they are grounded in causal reasoning. The reason is simple: ADM systems don’t just predict outcomes; they change them. If we want our models to be meaningful in the real world, they must understand and model cause-and-effect relationships.

That’s because every decision, whether it’s selecting a treatment, adjusting traffic lights, or targeting social policies, actively influences the outcome. This makes decision-making fundamentally different from passive prediction. Crucially, we can never observe the outcomes of the decisions we did not take, so we must reason about counterfactuals: what would have happened if we had chosen differently?
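To see why, consider a minimal sketch in Python (entirely synthetic data; the effect size and variable names are illustrative, not taken from the Comment). Each unit carries two potential outcomes, one per possible decision, yet any logged dataset records only the outcome of the decision actually taken:

```python
# Minimal illustration of the "fundamental problem of causal inference":
# every unit has two potential outcomes, but the data reveal only one.
import numpy as np

rng = np.random.default_rng(0)
n = 5

y0 = rng.normal(size=n)         # outcome if the decision is NOT taken
y1 = y0 + 1.0                   # outcome if the decision IS taken (true effect: +1)
t = rng.integers(0, 2, size=n)  # the decision actually made for each unit

y_observed = np.where(t == 1, y1, y0)  # the only outcome column any dataset has
y_missing = np.where(t == 1, y0, y1)   # the counterfactual, never observed

for i in range(n):
    print(f"unit {i}: decision={t[i]}, observed={y_observed[i]:+.2f}, "
          f"counterfactual=? (unknowable; simulation truth: {y_missing[i]:+.2f})")
```

Whichever decision was logged, the other column never appears in the data; that gap is exactly what causal assumptions must bridge.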


To make such reasoning valid, explicit causal assumptions are necessary. These assumptions allow us to link the causal estimand (what we ultimately want to know) to the statistical estimand (what we can actually estimate from data). Without these links, even the most accurate-looking model can be misleading.
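The Comment makes this point at a conceptual level. As one standard illustration of such a link (a textbook example, not spelled out in the article), the adjustment formula connects the two estimands when all confounders X of the decision T and the outcome Y are observed:

```latex
% Causal estimand: the average effect of decision T on outcome Y
\tau = \mathbb{E}\left[ Y(1) - Y(0) \right]

% Under consistency, positivity, and unconfoundedness, (Y(0), Y(1)) \perp T \mid X,
% it equals a statistical estimand built purely from observable quantities:
\tau = \mathbb{E}_{X}\left[ \mathbb{E}[Y \mid T = 1, X] - \mathbb{E}[Y \mid T = 0, X] \right]
```

If any confounder is unobserved, the two sides need not be equal, and no amount of additional data closes the gap; this is precisely how an accurate-looking model can still mislead.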

The authors highlight two key challenges here, made concrete in the sketch after this list:

  • Identifiability: Can the causal estimand (the true effect of a decision) be expressed using observable data?
  • Estimatability: Can we compute the statistical estimate reliably, given finite data?
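A minimal sketch, assuming a single observed binary confounder (all names and numbers are hypothetical), makes both challenges concrete: the naive contrast between treated and untreated units is biased, while the confounder-adjusted contrast recovers the true effect.

```python
# Identifiability vs. estimatability on synthetic, confounded data.
import numpy as np

rng = np.random.default_rng(42)
n = 2_000

x = rng.integers(0, 2, size=n)          # observed confounder (e.g., severity)
p_treat = np.where(x == 1, 0.8, 0.2)    # sicker units are treated more often
t = rng.binomial(1, p_treat)            # decision recorded in the logged data
y = 1.0 * t - 2.0 * x + rng.normal(size=n)  # true effect of the decision: +1.0

# Naive contrast: biased, because x drives both the decision and the outcome.
naive = y[t == 1].mean() - y[t == 0].mean()

# Backdoor adjustment: the effect is identifiable because x is observed; we
# average the within-stratum contrasts over the distribution of x.
adjusted = 0.0
for value in (0, 1):
    stratum = x == value
    contrast = y[stratum & (t == 1)].mean() - y[stratum & (t == 0)].mean()
    adjusted += contrast * stratum.mean()

print(f"naive estimate:    {naive:+.2f}  (misleading)")
print(f"adjusted estimate: {adjusted:+.2f}  (close to the true +1.00)")
```

Estimatability is the finite-data side of the same coin: shrink n or make one stratum rare, and the adjusted estimate turns noisy or even undefined, even though the estimand itself remains identifiable.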

To clarify how the decision problem maps to estimable quantities, given causal assumptions and observable data, the authors propose the following diagram:

The decision problem diagram

First published in Nature Computational Science 5, pages 356–360 (2025) by Springer Nature.

The figure starts with the decision-making objective and the available interventions or treatments, and illustrates how these decisions are connected to quantities that can be estimated from data: first by defining the causal estimand (the effect we want to know), then by linking it to the statistical estimand (what we can estimate from data), and finally by producing the model estimate, the result the algorithm computes. This path only works if the causal assumptions are made explicit. The figure is a powerful reminder: without a clear causal framework, data-driven models can produce misleading or unreliable decisions.
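As a hedged, end-to-end sketch of that path (pure NumPy on synthetic data; every name and number is invented for illustration), the snippet below fits a model estimate of the statistical estimand E[Y | T, X], uses it as a plug-in for the causal estimand CATE(x) = E[Y(1) - Y(0) | X = x], and feeds the result back into the decision objective. Randomized decisions in the logged data are what make the estimand identifiable here:

```python
# From decision objective to model estimate and back: a plug-in policy sketch.
import numpy as np

rng = np.random.default_rng(7)
n = 2_000
x = rng.normal(size=n)                       # observed covariate
t = rng.binomial(1, 0.5, size=n)             # decisions were randomized in the log
y = (0.5 + x) * t + x + rng.normal(size=n)   # true effect of t varies with x

# Least-squares model of E[Y | T, X] on the design [1, t, x, t*x].
design = np.column_stack([np.ones(n), t, x, t * x])
beta, *_ = np.linalg.lstsq(design, y, rcond=None)

def estimated_effect(x_new: float) -> float:
    """Plug-in estimate of CATE(x) = E[Y(1) - Y(0) | X = x]."""
    return beta[1] + beta[3] * x_new  # coefficients on t and on t*x

# Decision objective: intervene only where the estimated effect is positive.
for x_new in (-1.0, 0.0, 1.0):
    tau_hat = estimated_effect(x_new)
    decision = "treat" if tau_hat > 0 else "do not treat"
    print(f"x = {x_new:+.1f}: estimated effect {tau_hat:+.2f} -> {decision}")
```

The chain only holds under the stated assumptions; with confounded logs instead of randomized ones, the same code would compute a perfectly precise estimate of the wrong quantity.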


The Comment further tackles practical issues such as distribution shifts, uncertainty, performativity and benchmarking, exploring how algorithmic decisions can shape future data and outcomes.

Rather than an optional feature, causal reasoning is a core requirement for creating ADM systems that earn our trust and meet real-world standards.


Read the full Comment, published in Nature Computational Science, to gain an in-depth understanding of the future of reliable algorithmic decision-making, and to learn why causality must lie at its core:

C. Kern, U. Fischer-Abaigar, J. Schweisthal, D. Frauen, R. Ghani, S. Feuerriegel, M. van der Schaar and F. Kreuter.
Algorithms for reliable decision-making need causal reasoning.
Nature Computational Science 5, 356–360 (2025). DOI
Abstract

Decision-making inherently involves cause–effect relationships that introduce causal challenges. We argue that reliable algorithms for decision-making need to build upon causal reasoning. Addressing these causal challenges requires explicit assumptions about the underlying causal structure to ensure identifiability and estimatability, which means that the computational methods must successfully align with decision-making objectives in real-world tasks. Algorithmic decision-making (ADM) has become common in a wide range of domains, including precision medicine, manufacturing, education, hiring, the public sector, and smart cities. At the core of ADM systems are data-driven models that learn from data to recommend decisions, often with the goal of maximizing a defined utility function [1]. For example, in smart city contexts, ADM is frequently used to optimize traffic flow through predictive models that analyze real-time data, thereby reducing congestion and improving urban mobility. Another prominent application area for ADM is normative decision support systems (often subsumed under ‘prescriptive analytics’) or, more recently, artificial intelligence (AI) agents that either inform or automatically execute managerial and operational decisions in industry. Yet, the applications of ADM to high-stakes decisions face safety and reliability issues [1,2,3]. Often, the objectives of ADM systems fail to align with the nuanced goals of real-world decision-making, thus creating a tension between the potential of ADM and the risk of harm and failure. Especially when deployed in dynamic, real-world environments, ADM can amplify systemic disadvantages for vulnerable communities and lead to flawed decisions. In this Comment, we argue that reliable algorithmic decision-making — systems that perform safely and robustly under deployment conditions — must be grounded in causal reasoning.

MCML Authors

Christoph Kern, Prof. Dr., Social Data Science and AI Lab

Jonas Schweisthal, Artificial Intelligence in Management

Dennis Frauen, Artificial Intelligence in Management

Stefan Feuerriegel, Prof. Dr., Artificial Intelligence in Management

Frauke Kreuter, Prof. Dr., Social Data Science and AI




