Uncertainty in machine learning models is a timely and vast field of research. In supervised learning, uncertainty can already arise in the first stage of the training process, the annotation phase. This is particularly evident when some instances cannot be definitively classified: the annotation step is inherently ambiguous, and hence there is not necessarily a single ‘ground truth’ label associated with each instance. This work approaches the problem from a statistical modelling perspective. The main idea is to drop the assumption of a ground-truth label and instead embed the annotations into a multidimensional space. This embedding is derived from the empirical distribution of annotations within a Bayesian setup, modelled using a Dirichlet-Multinomial framework. We estimate the model parameters and posteriors using a stochastic Expectation Maximization algorithm with Markov chain Monte Carlo (MCMC) steps. The methods developed in this article readily extend to various situations in which multiple annotators independently label instances. To showcase the generality of the proposed approach, we apply it to three benchmark datasets for image classification and natural language inference (NLI) in which multiple annotations per instance are available. Beyond the embeddings, we can investigate the resulting correlation matrices, which reflect the semantic similarities of the original classes for all three exemplary datasets.
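To illustrate the core idea of embedding annotations rather than collapsing them to a single label, the following is a minimal sketch, not the paper's actual estimation procedure: it maps per-instance annotation counts to the posterior mean of a Dirichlet-Multinomial model with a symmetric Dirichlet prior. The function name, the choice of a symmetric prior, and the concentration value `alpha` are illustrative assumptions; the full approach additionally learns model parameters via stochastic EM with MCMC steps.

```python
import numpy as np

def dirichlet_embedding(counts, alpha=1.0):
    """Embed an instance's annotation counts as the posterior mean
    of a Dirichlet-Multinomial model with a symmetric prior alpha.

    counts: annotation counts per class, shape (K,)
    returns: a point in the probability simplex, shape (K,)
    """
    counts = np.asarray(counts, dtype=float)
    # The Dirichlet prior is conjugate to the multinomial likelihood,
    # so the posterior is Dirichlet(alpha + counts); its mean is:
    return (counts + alpha) / (counts.sum() + alpha * counts.size)

# Example: 10 annotators split 7/2/1 over three classes.
# The embedding retains the ambiguity instead of forcing a hard label.
emb = dirichlet_embedding([7, 2, 1], alpha=1.0)
print(emb)  # components sum to 1
```

Averaging such simplex-valued embeddings across instances (or correlating them across classes) is what makes quantities like the class-correlation matrices mentioned above accessible.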