Data perspectivism goes beyond majority-vote label aggregation by recognizing various perspectives as legitimate ground truths. However, current evaluation practices remain fragmented, making it difficult to compare perspectivist approaches and analyze their impact on different users and demographic subgroups. To address this gap, we introduce PersEval, the first unified framework for evaluating perspectivist models in NLP. A key innovation is its evaluation at the individual annotator level and its treatment of annotators and users as distinct entities, consistent with real-world scenarios. We demonstrate PersEval's capabilities through experiments with both encoder-based and decoder-based approaches, as well as an analysis of the effect of sociodemographic prompting. By considering global, text-, trait-, and user-level evaluation metrics, we show that PersEval is a powerful tool for examining how models are influenced by user-specific information and for identifying the biases this information may introduce.
inproceedings
BibTeXKey: LCB+25