
13.11.2025


Explaining AI Decisions: Shapley Values Enable Smart Exosuits

MCML Research Insight - With Julia Herbinger, Giuseppe Casalicchio, Yusuf Sale, Bernd Bischl and Eyke Hüllermeier

Picture a typical day in a warehouse: one worker lifts, bends, and carries out the same task over and over again. While the routine may seem simple, the physical toll steadily builds—affecting joints and muscles. To combat the long-term health risks associated with such repetitive movements, businesses are increasingly turning to exoskeletons and exosuits—innovative wearable technologies designed to support the body and ease the burden of manual labor.

Engineers tune the settings for each warehouse worker, such as how strongly the suit assists during a lift. But finding the perfect balance isn’t easy: you can’t simply write down a formula for “comfort”. Instead, you try, test, adjust, and try again.


From Trial and Error to Intelligent Optimization

This kind of trial-and-error process is exactly what Bayesian optimization (BO) does in AI. It’s a powerful way to search for the best solution when the problem is too complex to solve directly, from tuning exosuits to designing drugs or ML models. The catch: BO can feel like a black box. It proposes new settings, but rarely explains why. That’s where ShapleyBO comes in: a framework developed with MCML researchers that turns BO’s next move into a short, readable rationale and invites experts into the loop.
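The try-test-adjust loop can be sketched in a few lines with a toy Gaussian-process surrogate and an upper-confidence-bound rule. This is a minimal illustration under invented assumptions, not the paper’s implementation: the `comfort` function, the single assistance-gain parameter, and all constants are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def comfort(x):
    # Hypothetical stand-in for a worker's (noisy) comfort rating of an
    # assistance gain x -- in practice there is no formula to write down.
    return -(x - 0.6) ** 2 + 0.05 * rng.normal()

def gp_posterior(X, y, Xq, length=0.2, noise=1e-3):
    # Gaussian-process posterior mean/std with an RBF kernel.
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks, Kss = k(X, Xq), k(Xq, Xq)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = np.diag(Kss - Ks.T @ Kinv @ Ks)
    return mu, np.sqrt(np.maximum(var, 0.0))

# Start from two random gains, then let BO propose the rest.
X = rng.uniform(0, 1, size=2)
y = np.array([comfort(x) for x in X])
grid = np.linspace(0, 1, 200)
for _ in range(8):
    mu, sigma = gp_posterior(X, y, grid)
    ucb = mu + 2.0 * sigma          # upper confidence bound
    x_next = grid[np.argmax(ucb)]   # proposal: "try this gain next"
    X = np.append(X, x_next)
    y = np.append(y, comfort(x_next))

best = X[np.argmax(y)]  # best setting observed so far
```

The loop alternates between fitting a model of “comfort” and proposing the setting where the model is most optimistic; ShapleyBO’s job is to explain why each `x_next` was chosen.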

In the paper “Explaining Bayesian Optimization by Shapley Values Facilitates Human-AI Collaboration for Exosuit Personalization”, MCML members Julia Herbinger, Yusuf Sale, and Giuseppe Casalicchio, along with MCML Director Bernd Bischl and PI Eyke Hüllermeier as well as first author Julian Rodemann and collaborators Federico Croppi, Philipp Arens, Thomas Augustin, and Conor J. Walsh, introduce ShapleyBO, a method that makes BO explainable and lets an expert weigh in on its proposals. The result: better choices made sooner, and with reasons. The project was carried out in cooperation with the Harvard Biodesign Lab, which provided real data on a new generation of exosuits.


«Not without a dash of irony, BO is often considered a black box itself, lacking ways to provide reasons as to why certain parameters are proposed to be evaluated…particularly relevant in human-in-the-loop applications of BO, such as in robotics.»


Yusuf Sale et al.

MCML Junior Members

Interpreting Bayesian Optimization Using Shapley Values

The idea behind ShapleyBO is surprisingly intuitive. It borrows a concept from game theory, the world of mathematical reasoning about cooperation and fairness. In a cooperative game, you want to figure out how much each player contributed to the team’s success. The Shapley value is a method for fairly dividing that “credit.”

Now imagine each parameter of your optimization problem, like the lift assistance parameter in the exosuit problem, as a “player” in the game. ShapleyBO uses the same principle to ask: how much did each parameter contribute to the optimizer’s latest decision?

In other words, it translates the algorithm’s reasoning into a human-understandable explanation. Instead of saying “I picked this configuration because it’s optimal”, ShapleyBO can explain “I picked it because the lifting gain strongly reduced uncertainty, while the lowering gain fine-tuned the mean performance.”
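That principle can be made concrete: treat each parameter as a “player” and compute exact Shapley values for the change in a score when moving from a baseline configuration to the proposed one. The toy acquisition score `acq`, the parameter names, and the baseline below are invented for illustration; the paper attributes real acquisition-function values.

```python
from itertools import combinations
from math import factorial

def shapley(f, x, baseline):
    # Exact Shapley attribution of f(x) - f(baseline) across parameters:
    # each coordinate of x is a player; a coalition S means "swap those
    # coordinates of the baseline for the proposed values".
    d = len(x)
    idx = list(range(d))
    phi = [0.0] * d
    for j in idx:
        for r in range(d):
            for S in combinations([i for i in idx if i != j], r):
                # Shapley weight for a coalition of size |S|
                w = factorial(len(S)) * factorial(d - len(S) - 1) / factorial(d)
                with_j = [x[i] if (i in S or i == j) else baseline[i] for i in idx]
                without = [x[i] if i in S else baseline[i] for i in idx]
                phi[j] += w * (f(with_j) - f(without))
    return phi

# Toy acquisition score over (lifting gain, lowering gain):
acq = lambda p: 3.0 * p[0] + 1.0 * p[1]
contrib = shapley(acq, x=[0.8, 0.4], baseline=[0.0, 0.0])
# contrib[0] > contrib[1]: the lifting gain drives this proposal.
```

Because Shapley values are additive, the per-parameter contributions always sum exactly to the total change in the score, which is what makes the resulting explanation a fair division of “credit”.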


Figure 1: Soft back exosuit and example controller profiles. (A) Hardware layout with key components and straps/BOA system that anchor the assistive modules. (B) Two candidate controller settings (pink, blue) and their force command vs. trunk angle trajectories over a full lift cycle. Silhouettes illustrate start–mid–end postures during the lift.


Peeking Inside the AI’s Mind

Bayesian optimization constantly juggles two goals:

  • Exploration – trying new things to learn more about the problem
  • Exploitation – using what it already knows to improve results
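With an upper-confidence-bound acquisition, the two forces even appear as separate summands, which is what makes them attributable. A schematic sketch with invented numbers (not the paper’s acquisition function):

```python
import numpy as np

def ucb_terms(mu, sigma, beta=2.0):
    # UCB acquisition: a(x) = mu(x) + beta * sigma(x)
    # exploitation = predicted value, exploration = scaled uncertainty.
    exploit = mu
    explore = beta * sigma
    return exploit + explore, exploit, explore

mu = np.array([0.9, 0.5])       # predicted comfort of two candidate settings
sigma = np.array([0.05, 0.40])  # model uncertainty about each
acq, exploit, explore = ucb_terms(mu, sigma)
pick = int(np.argmax(acq))      # the second candidate wins on exploration alone
```

Here the first candidate looks better on predicted comfort, but the second one’s large uncertainty bonus tips the acquisition value in its favor, so the optimizer proposes it to learn more.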

ShapleyBO helps untangle these two forces. Even more, it can separate different types of uncertainty:

  • Aleatoric uncertainty, which comes from noise or randomness that can’t be reduced (like measurement errors)
  • Epistemic uncertainty, which comes from missing knowledge and can, in principle, be reduced with more data

By showing how much each parameter contributes to reducing each type of uncertainty, ShapleyBO provides a richer picture of what’s going on under the hood.
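The distinction can be illustrated numerically with Bayesian estimation of a mean: the epistemic part of the predictive variance shrinks as observations accumulate, while the aleatoric noise floor stays put. All variances below are invented for illustration.

```python
noise_var = 0.04  # aleatoric: sensor noise on comfort ratings (irreducible)

def uncertainty_after(n, prior_var=1.0):
    # Conjugate Bayesian update for a Gaussian mean: the posterior
    # (epistemic) variance shrinks with the number of observations n,
    # but the aleatoric observation noise does not.
    epistemic = 1.0 / (1.0 / prior_var + n / noise_var)
    return epistemic + noise_var, epistemic, noise_var

t5 = uncertainty_after(5)    # (total, epistemic, aleatoric) after 5 ratings
t50 = uncertainty_after(50)  # epistemic part shrinks; aleatoric floor remains
```

More data drives the epistemic term toward zero, so any remaining predictive uncertainty is eventually dominated by the irreducible aleatoric floor.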


Humans and Algorithms Working Together

Perhaps the most exciting part of the research is how ShapleyBO also improves human-AI collaboration. In the exosuit example, engineers could watch ShapleyBO’s explanations in real time. When the optimizer proposed a new configuration, they could see why, and decide whether to accept or override it.

In simulated experiments, teams that had access to ShapleyBO’s explanations were able to reach good solutions faster than those without them. The explanations didn’t just make the AI more transparent; they made the collaboration more efficient.
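One could imagine wiring such explanations into a simple accept-or-override rule. The following is a hypothetical sketch only: the threshold rule, the dictionary layout, and the numbers are all assumptions for illustration, not the paper’s teaming protocol.

```python
def accept_or_override(proposal, explanation, tolerance=0.5):
    # Hypothetical teaming rule: the engineer accepts BO's proposal only
    # if its exploration-driven Shapley share stays below their tolerance;
    # otherwise they override it with their own preferred setting.
    explore_mass = sum(abs(v) for v in explanation["explore"])
    exploit_mass = sum(abs(v) for v in explanation["exploit"])
    explore_share = explore_mass / (explore_mass + exploit_mass)
    if explore_share <= tolerance:
        return proposal, "accept"
    return explanation["human_choice"], "override"

# Per-parameter Shapley contributions to exploration vs. exploitation
# (invented values), plus the engineer's own fallback setting:
expl = {"explore": [0.7, 0.2], "exploit": [0.1, 0.1], "human_choice": [0.5, 0.5]}
setting, decision = accept_or_override([0.9, 0.3], expl)
```

In this toy case the proposal is driven overwhelmingly by exploration, so the rule hands control back to the engineer; a proposal dominated by exploitation would pass through unchanged.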


«The use case of customizing exosuits illustrates the practical benefits of this approach, suggesting that ShapleyBO could be a valuable practical tool for personalizing soft back exosuits.»


Yusuf Sale et al.

MCML Junior Members

Why It Matters

As the complexity of AI systems increases, so does their opacity. If we want to rely on them to make critical decisions, it is crucial to understand why those decisions are being made.

By extending interpretability to optimization algorithms, ShapleyBO opens a new frontier for explainable AI. It helps transform AI from a mysterious black box into a transparent partner that we can reason with, challenge, and ultimately trust.


Open Challenges

Challenges remain, however: computing and interpreting the explanations becomes harder as the objective functions grow more complex and as more parameters are added.


Interested in Exploring Further?

The full paper, presented at the ECML-PKDD 2025 conference, provides a more in-depth exploration of the method and its potential, laying the foundations for future advances in explainable optimization and human-AI collaboration.

J. Rodemann, F. Croppi, P. Arens, Y. Sale, J. Herbinger, B. Bischl, E. Hüllermeier, T. Augustin, C. J. Walsh and G. Casalicchio.
Explaining Bayesian Optimization by Shapley Values Facilitates Human-AI Collaboration For Exosuit Personalization.
ECML-PKDD 2025 - European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases. Porto, Portugal, Sep 15-19, 2025.

The Börsen Zeitung also covered the research in an article.


