
Debiased Contrastive Loss for Collaborative Filtering

MCML Authors

Abstract

Collaborative filtering (CF) is the most fundamental technique in recommender systems, revealing user preferences from implicit feedback. Binary cross-entropy or Bayesian personalized ranking is usually employed as the loss function to optimize model parameters. Recently, the sampled softmax loss, which adopts an in-batch sampling strategy, has been proposed to enhance sampling efficiency. However, it suffers from sample bias: it unavoidably introduces false negative instances, resulting in inaccurate representations of users' genuine interests. To address this problem, we propose a debiased contrastive loss that incorporates a bias correction probability to alleviate sample bias. We integrate the proposed method into several matrix factorization (MF) and graph neural network (GNN)-based recommendation models. In addition, we theoretically analyze the effectiveness of our method in automatically mining hard negative instances. Experimental results on three public benchmarks demonstrate that the proposed debiased contrastive loss can augment several existing MF- and GNN-based CF models and outperforms popular learning objectives in recommendation. We further show that our method substantially improves training efficiency.
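The paper's exact formulation is not reproduced on this page, so the sketch below illustrates the general idea of a bias-corrected in-batch contrastive loss, following the standard debiased contrastive estimator (Chuang et al., 2020). The function name, the prior `tau_plus` (the assumed probability that an in-batch sample is a false negative), and the temperature `t` are illustrative assumptions, not the authors' notation.

```python
import numpy as np

def debiased_contrastive_loss(pos_score, neg_scores, tau_plus=0.1, t=1.0):
    """Illustrative debiased contrastive loss for one (user, positive item) pair.

    pos_score:  similarity between the user and the positive item (scalar).
    neg_scores: similarities with in-batch sampled items, which may contain
                false negatives.
    tau_plus:   assumed prior probability that a sampled "negative" is
                actually positive (the bias-correction probability).
    """
    pos = np.exp(pos_score / t)
    neg = np.exp(np.asarray(neg_scores, dtype=float) / t)
    n = len(neg)
    # Bias-corrected estimate of the true-negative term: subtract the
    # expected false-negative mass, then clip at the theoretical minimum
    # e^{-1/t} so the estimate stays positive.
    g = np.maximum((neg.mean() - tau_plus * pos) / (1.0 - tau_plus),
                   np.exp(-1.0 / t))
    return -np.log(pos / (pos + n * g))
```

With `tau_plus=0` this reduces to the ordinary sampled softmax loss; a positive `tau_plus` removes the estimated false-negative mass from the denominator, which is one mechanism by which such losses implicitly emphasize hard negatives.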



KSEM 2023

16th International Conference on Knowledge Science, Engineering and Management. Guangzhou, China, Aug 16-18, 2023.

Authors

Z. Liu • Y. Ma • H. Li • M. Hildebrandt • Y. Ouyang • Z. Xiong

Links

DOI

Research Area

 A3 | Computational Models

BibTeX Key: LML+23
