
Interpretable and Fair Comparison of Link Prediction or Entity Alignment Methods With Adjusted Mean Rank

MCML Authors


Volker Tresp

Prof. Dr.

Principal Investigator

Abstract

In this work, we take a closer look at the evaluation of two families of methods for enriching information from knowledge graphs: Link Prediction and Entity Alignment. In the current experimental setting, multiple different scores are employed to assess different aspects of model performance. We analyze the informativeness of these evaluation measures and identify several shortcomings. In particular, we demonstrate that all existing scores can hardly be used to compare results across different datasets. Moreover, we demonstrate that varying the size of the test set alone has an impact on the measured performance of the same model under commonly used metrics for the Entity Alignment task. We show that this leads to various problems in the interpretation of results, which may support misleading conclusions. Therefore, we propose adjustments to the evaluation and demonstrate empirically how this supports a fair, comparable, and interpretable assessment of model performance.
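
The adjusted mean rank normalizes the plain mean rank by its expectation under a random scoring model, which makes scores comparable across test sets and candidate sets of different sizes. The following is a minimal sketch of that idea, assuming the common definition that the expected rank for a candidate set of size N is (N + 1) / 2; the function names and toy numbers are illustrative and not taken from the paper.

import numpy as np

def mean_rank(ranks):
    # Plain mean rank (MR): average rank of the true entity among all candidates.
    return float(np.mean(ranks))

def adjusted_mean_rank(ranks, num_candidates):
    # Adjusted mean rank (AMR): MR divided by its expectation under random scoring.
    # A value near 1 indicates chance-level performance; smaller is better.
    ranks = np.asarray(ranks, dtype=float)
    num_candidates = np.asarray(num_candidates, dtype=float)
    expected_ranks = (num_candidates + 1.0) / 2.0  # expected rank for random scores
    return float(ranks.mean() / expected_ranks.mean())

# Toy example: two test sets with similar relative performance but different candidate-set sizes.
ranks_small = [3, 1, 7]      # candidate sets of size 10
ranks_large = [30, 10, 70]   # candidate sets of size 100
print(mean_rank(ranks_small), mean_rank(ranks_large))   # MR differs strongly (~3.7 vs ~36.7)
print(adjusted_mean_rank(ranks_small, [10, 10, 10]),
      adjusted_mean_rank(ranks_large, [100, 100, 100])) # AMR stays comparable (~0.67 vs ~0.73)

In this sketch the mean rank grows roughly tenfold with the candidate-set size, while the adjusted variant remains on a common scale, which is the kind of dataset-independent comparability the paper argues for.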

inproceedings


WI-IAT 2020

IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology. Virtual, Dec 14-17, 2020.

Authors

M. Berrendorf • E. Faerman • L. Vermue • V. Tresp

Links

DOI

Research Area

A3 | Computational Models

BibTeXKey: BFV+20a
