23.07.2025
How Reliable Are Machine Learning Methods? With Anne-Laure Boulesteix and Milena Wünsch
Research Film
Time and again, a new machine learning method is claimed to outperform the last. Whether it’s in bioinformatics, finance, or image recognition, the message is the same: this algorithm is faster, more accurate, more powerful. But can we trust those claims?
«It’s not just about the algorithms. It’s about how we compare them—and what we choose to report or ignore.»
Milena Wünsch
MCML Junior Member
Beneath the surface of many benchmarking studies lies a quiet problem: subtle biases that skew comparisons and inflate performance. These issues often go unnoticed, yet they can have real consequences, especially when such models are used to inform research or high-stakes decisions.
«It doesn’t matter whether the bias is deliberate or not. It still shapes how methods are judged and used.»
Anne-Laure Boulesteix
MCML PI
Anne-Laure Boulesteix, Professor of Biometry at LMU and MCML PI, and Milena Wünsch, PhD student at LMU and MCML, study how seemingly harmless methodological choices can lead to misleading results.
One common issue: when a method fails on a dataset, researchers may simply drop it from the analysis. While convenient, this can introduce bias and overstate performance.
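The mechanism is easy to see in a toy simulation. The sketch below (all numbers invented for illustration; this is not data from the film or the researchers' studies) assumes a method that crashes on its two hardest benchmark datasets, precisely where its accuracy would have been lowest:

```python
# Hypothetical accuracies of one method on ten benchmark datasets.
# Suppose the method errors out on the last two (the hardest) datasets.
all_scores = [0.88, 0.85, 0.82, 0.80, 0.78, 0.75, 0.72, 0.70, 0.55, 0.52]
failed = [False] * 8 + [True, True]

# If failing datasets are silently dropped, only the easy runs remain:
kept = [s for s, f in zip(all_scores, failed) if not f]
dropped_mean = sum(kept) / len(kept)

# In the simulation we know what the method would have scored everywhere,
# so we can compute the honest average over all ten datasets:
true_mean = sum(all_scores) / len(all_scores)

print(f"mean after dropping failures: {dropped_mean:.4f}")  # 0.7875
print(f"mean over all datasets:       {true_mean:.3f}")     # 0.737
```

Because failures are not random, removing them pushes the reported average up by five percentage points here, even though nothing about the method improved.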
Bias can also arise from less obvious sources, such as spending more time tuning one method, being more familiar with one tool, or unconsciously interpreting results in a favored method’s favor.
With so many studies promoting the latest “best” algorithm, it’s hard to know which results to trust. Researchers may end up adopting a method that only looked good because of a biased comparison. Still, the researchers are hopeful: in recent years, the methodological machine learning community has made real progress, pushing for better standards, more transparency, and more careful benchmarking.
©MCML
The film was produced and edited by Nicole Huminski and Nikolai Huber.
#blog #research #boulesteix