
Investigating the Impact of Conceptual Metaphors on LLM-Based NLI Through Shapley Interactions

MCML Authors

Fabian Fumagalli

Prof. Dr.

Thomas Bayes Fellow

Eyke Hüllermeier

Prof. Dr.

Principal Investigator

Abstract

Metaphorical language is prevalent in everyday communication, often used unconsciously, as in “rising crime.” While LLMs excel at identifying metaphors in text, they struggle with downstream tasks that implicitly require correct metaphor interpretation, such as natural language inference (NLI). This work explores how LLMs perform on NLI with metaphorical input. In particular, we investigate whether incorporating conceptual metaphors (source and target domains) enhances performance in zero-shot and few-shot settings. Our contributions are twofold: (1) we extend the metaphorical texts in an existing NLI dataset with their source and target domains, and (2) we conduct an ablation study using Shapley values and interactions to assess the extent to which LLMs interpret metaphorical language correctly in NLI. Our results indicate that incorporating conceptual metaphors often improves task performance.
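As background for the ablation methodology, the formulas below give the standard Shapley value and pairwise Shapley interaction index; they are the textbook definitions (Shapley; Grabisch and Roubens), not notation taken from the paper. Here N is a set of n prompt components and ν(S) is assumed to denote the model's NLI score when only the components in S are included.

\phi_i(\nu) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(n-|S|-1)!}{n!} \bigl(\nu(S \cup \{i\}) - \nu(S)\bigr)

I_{ij}(\nu) = \sum_{S \subseteq N \setminus \{i,j\}} \frac{|S|!\,(n-|S|-2)!}{(n-1)!} \bigl(\nu(S \cup \{i,j\}) - \nu(S \cup \{i\}) - \nu(S \cup \{j\}) + \nu(S)\bigr)

In this setting, the players i could be, for example, the metaphorical sentence, its source domain, and its target domain; the paper's exact game formulation is not specified in the abstract.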

inproceedings SMF+25


Findings @EMNLP 2025

Findings of the Conference on Empirical Methods in Natural Language Processing. Suzhou, China, Nov 04-09, 2025.

Authors

M. Sengupta • M. Muschalik • F. Fumagalli • B. Hammer • E. Hüllermeier • D. Ghosh • H. Wachsmuth

Links

DOI

Research Areas

 A1 | Statistical Foundations & Explainability

 A3 | Computational Models

BibTeX Key: SMF+25
