
Which LIME Should I Trust? Concepts, Challenges, and Solutions

Abstract

As neural networks become dominant in essential systems, Explainable Artificial Intelligence (XAI) plays a crucial role in fostering trust and detecting potential misbehavior of opaque models. LIME (Local Interpretable Model-agnostic Explanations) is among the most prominent model-agnostic approaches, generating explanations by approximating the behavior of black-box models around specific instances. Despite its popularity, LIME faces challenges related to fidelity, stability, and applicability to domain-specific problems. Numerous adaptations and enhancements have been proposed to address these issues, but the growing number of developments can be overwhelming, complicating efforts to navigate LIME-related research. To the best of our knowledge, this is the first survey to comprehensively explore and collect LIME's foundational concepts and known limitations. We categorize and compare its various enhancements, offering a structured taxonomy based on intermediate steps and key issues. Our analysis provides a holistic overview of advancements in LIME, guiding future research and helping practitioners identify suitable approaches. Additionally, we provide a continuously updated interactive website (this https URL), offering a concise and accessible overview of the survey.
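
For readers unfamiliar with the method, the sketch below illustrates the workflow the abstract refers to: perturb an instance, query the black-box model on the perturbed samples, and fit a locally weighted linear surrogate whose coefficients serve as the explanation. It is a minimal sketch assuming the reference lime Python package and a scikit-learn classifier; the dataset, model, and parameter choices are illustrative assumptions, not taken from the paper.

# Minimal vanilla-LIME sketch (assumes `pip install lime scikit-learn`).
# Dataset, model, and parameters are illustrative, not from the paper.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
black_box = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# LIME perturbs the instance, queries the black box on the perturbations,
# and fits a locally weighted linear surrogate around the instance.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(
    data.data[0],              # instance to explain
    black_box.predict_proba,   # black-box prediction function
    num_features=4,            # number of features in the explanation
)
print(explanation.as_list())   # local feature attributions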

xAI 2025

3rd World Conference on Explainable Artificial Intelligence. Istanbul, Turkey, Jul 09-11, 2025.

Authors

P. Knab • S. Marton • U. Schlegel • C. Bartelt

Links

DOI • GitHub

Research Area

A3 | Computational Models

BibTeX Key: KMS+25
