
23.07.2025


How Reliable Are Machine Learning Methods? With Anne-Laure Boulesteix and Milena Wünsch

Research Film

Time and again, a new machine learning method claims to outperform the last. Whether it’s in bioinformatics, finance, or image recognition, the message is the same: this algorithm is faster, more accurate, more powerful. But can we trust those claims?

«It’s not just about the algorithms. It’s about how we compare them—and what we choose to report or ignore.»


Milena Wünsch

MCML Junior Member

Beneath the surface of many benchmarking studies lies a quiet problem: subtle biases that skew comparisons and inflate performance. These issues often go unnoticed — but they can have real consequences, especially when such models are used to inform research or high-stakes decisions.

«It doesn’t matter whether the bias is deliberate or not. It still shapes how methods are judged and used.»


Anne-Laure Boulesteix

MCML PI

Anne-Laure Boulesteix, Professor of Biometry at LMU and MCML PI, and Milena Wünsch, PhD student at LMU and MCML, study how seemingly harmless methodological choices can lead to misleading results.

One common issue: when a method fails on a dataset, researchers may simply drop that dataset from the analysis. While convenient, this can bias the comparison and overstate the method’s performance.
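To see why, consider a minimal, hypothetical simulation (purely illustrative, not taken from the film or the researchers’ work). It assumes a method that tends to fail on the harder benchmark datasets; silently dropping those datasets makes its reported average look better than an honest summary over all datasets.

```python
# Hypothetical sketch: how silently dropping datasets on which a method fails
# can inflate its reported performance. All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_datasets = 200

# Simulated "true" accuracy of the new method on each benchmark dataset.
true_accuracy = rng.uniform(0.5, 0.9, size=n_datasets)

# Assume the method is more likely to fail (e.g., crash) on harder datasets.
fails = rng.random(n_datasets) < (0.9 - true_accuracy)

# Honest summary: keep all datasets, counting a failure as a poor outcome (0.5 here).
honest = np.where(fails, 0.5, true_accuracy).mean()

# Biased summary: silently drop every dataset on which the method failed.
biased = true_accuracy[~fails].mean()

print(f"all datasets, failures counted: {honest:.3f}")
print(f"failed datasets dropped:        {biased:.3f}")
```

Because the dropped datasets are exactly the ones the method handles worst, the second number comes out systematically higher, even though nothing about the method itself has changed.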

Bias can also arise from less obvious sources — like spending more time tuning one method, being more familiar with a tool, or unconsciously interpreting results in its favor.
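The tuning-time example can also be made concrete with a small, hypothetical sketch (again an illustration, not the researchers’ analysis): two methods that are equally good in truth, where one simply gets many more hyperparameter trials and each method is reported at its best validation score.

```python
# Hypothetical sketch: an unequal tuning budget alone can make one of two
# equally good methods look better, purely through selection on random noise.
import numpy as np

rng = np.random.default_rng(1)
true_score = 0.80   # both methods are equally good in truth
noise_sd = 0.02     # validation noise per hyperparameter trial

best_a = (true_score + noise_sd * rng.standard_normal(50)).max()  # 50 tuning runs
best_b = (true_score + noise_sd * rng.standard_normal(3)).max()   #  3 tuning runs

print(f"method A (50 trials): {best_a:.3f}")
print(f"method B ( 3 trials): {best_b:.3f}")
```

Giving every method the same, pre-specified tuning budget, and evaluating the selected configurations on held-out data, removes this particular advantage.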

With so many studies promoting the next “best” algorithm, it’s hard to know which results to trust. Researchers may end up adopting a method that only looked good because of a biased comparison. Still, the researchers are hopeful: in recent years, the methodological machine learning community has made real progress, pushing for better standards, more transparency, and more careful benchmarking.

Watch in Full Quality on YouTube

The film was produced and edited by Nicole Huminski and Nikolai Huber.

 

#blog #research #boulesteix

Related


24.11.2025

Research Stay at Stanford University

Kun Yuan spent two months at Stanford with the AI X-Change Program, advancing biomedical vision-language models and launching three joint projects.


20.11.2025

Zigzag Your Way to Faster, Smarter AI Image Generation

ZigMa, introduced by Björn Ommer’s group at ECCV 24, improves high-res AI image and video generation with fast, memory-efficient zigzag scanning.


13.11.2025

Anne-Laure Boulesteix Among the World’s Most Cited Researchers

MCML PI Anne-Laure Boulesteix named Highly Cited Researcher 2025 for cross-field work, among 17 LMU scholars recognized globally.


13.11.2025

Björn Ommer Featured in Frankfurter Rundschau

Björn Ommer highlights how Google’s new AI search mode impacts publishers, content visibility, and the diversity of online information.


13.11.2025

Fabian Theis Among the World’s Most Cited Researchers

Fabian Theis is named a Highly Cited Researcher 2025 for his work in mathematical modeling of biological systems.
