no code implementations • 11 Nov 2023 • Roger Zhe Li
This accuracy is usually evaluated with some user-oriented metric tailored to the recommendation scenario, but because recommendation is usually treated as a machine learning problem, recommendation models are trained to optimize some other, generic criterion that does not necessarily align with the one ultimately captured by the user-oriented evaluation metric.
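This mismatch can be illustrated with a toy sketch (all data and names here are hypothetical, not from the paper): a model trained to minimize a generic criterion such as mean squared error on ratings can score worse on a user-oriented ranking metric such as NDCG than a model with larger rating errors but a better item ordering.

```python
import math

def mse(pred, true):
    # generic training criterion: mean squared error on predicted ratings
    return sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true)

def ndcg(pred, true):
    # user-oriented evaluation metric: rank items by predicted score,
    # compare the resulting DCG to the ideal (perfectly ordered) DCG
    order = sorted(range(len(pred)), key=lambda i: -pred[i])
    dcg = sum(true[i] / math.log2(r + 2) for r, i in enumerate(order))
    ideal = sum(t / math.log2(r + 2) for r, t in enumerate(sorted(true, reverse=True)))
    return dcg / ideal

true_rel = [3, 2, 0]      # hypothetical graded relevance of three items
pred_a = [2.0, 2.1, 0.0]  # accurate ratings, but swaps the top two items
pred_b = [5.0, 1.0, 0.5]  # larger rating errors, yet perfect ranking

# pred_a wins on the training criterion (lower MSE) ...
assert mse(pred_a, true_rel) < mse(pred_b, true_rel)
# ... but pred_b wins on the user-oriented ranking metric (higher NDCG)
assert ndcg(pred_b, true_rel) > ndcg(pred_a, true_rel)
```

Optimizing one criterion therefore gives no guarantee about the other, which is exactly the misalignment the abstract points at.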
1 code implementation • 25 Jul 2023 • Roger Zhe Li, Julián Urbano, Alan Hanjalic
Mainstream bias, where some users receive poor recommendations because their preferences are uncommon or simply because they are less active, is an important aspect of fairness in recommender systems.
1 code implementation • 4 Jun 2021 • Roger Zhe Li, Julián Urbano, Alan Hanjalic
Most methods following this approach aim to optimize the same metric used for evaluation, under the assumption that this will lead to the best performance.
1 code implementation • 2 Feb 2021 • Roger Zhe Li, Julián Urbano, Alan Hanjalic
In this paper we focus on the so-called mainstream bias: the tendency of a recommender system to provide better recommendations to users who have a mainstream taste, as opposed to non-mainstream users.
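A minimal sketch of the idea, assuming a simple popularity-based proxy for "mainstreamness" (the toy interaction log and the scoring function below are illustrative, not the paper's actual measure): a user whose consumed items are popular across the whole log scores high, while a user with a niche profile scores low.

```python
from collections import Counter

# toy interaction log: user -> set of consumed items (hypothetical data)
logs = {
    "u1": {"a", "b", "c"},  # sticks to widely consumed items
    "u2": {"a", "b", "d"},
    "u3": {"a", "c", "e"},
    "u4": {"x", "y", "z"},  # niche taste: items nobody else consumes
}

# global popularity: how many users consumed each item
popularity = Counter(item for items in logs.values() for item in items)

def mainstreamness(user):
    # proxy score in [0, 1]: average popularity of the user's items,
    # normalised by the popularity of the single most popular item
    items = logs[user]
    return sum(popularity[i] for i in items) / (len(items) * max(popularity.values()))

# the niche user scores lowest on this proxy
assert mainstreamness("u4") < min(mainstreamness(u) for u in ("u1", "u2", "u3"))
```

A recommender trained on such a log tends to model the majority taste well, so low-mainstreamness users like `u4` are the ones at risk of receiving worse recommendations.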