This research investigates a largely unexplored question in recommender systems: does the presence of gender-typed preferences distort or suppress the representation of more neutral preferences within a recommender model?
This is particularly relevant in multi-domain data where strongly gender-targeted items (e.g., cosmetics) exist alongside more general-audience items (e.g., science books). The research examines whether recommender systems inadvertently narrow exposure diversity by over-emphasising demographic signals.
When recommender models learn from data containing both gender-typed and gender-neutral items, do the gender-typed preferences act as proxies that bias recommendations away from neutral items?
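One concrete way to probe this question is to audit how often gender-neutral items actually surface in users' top-K recommendations, broken down by user group. The sketch below illustrates such an audit under stated assumptions: the function name, the binary item labelling ("neutral" vs "gender_typed"), and the input data structures are hypothetical, not the study's actual pipeline.

```python
from collections import defaultdict

def neutral_share_at_k(top_k_lists, item_labels, user_groups, k=10):
    """Share of gender-neutral items in each group's top-k recommendations.

    top_k_lists : dict mapping user_id -> ranked list of recommended item_ids
    item_labels : dict mapping item_id -> "neutral" or "gender_typed" (assumed labelling)
    user_groups : dict mapping user_id -> group label (e.g. self-reported gender)
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for user, items in top_k_lists.items():
        group = user_groups[user]
        for item in items[:k]:
            totals[group] += 1
            hits[group] += item_labels.get(item) == "neutral"
    # Proportion of top-k slots occupied by neutral items, per group
    return {group: hits[group] / totals[group] for group in totals}
```

A markedly lower share of neutral items for one group, relative to that group's observed interest in such items, would be consistent with gender-typed preferences acting as proxies that crowd neutral items out of the lists.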
The research follows a systematic methodology and is expected to deliver:
- An analysis of how gender-typed preferences affect recommendations of neutral items, revealing potential harm to recommendation diversity.
- A reproducible framework for analysing demographic-typed preference biases that can be generalised to other demographics and datasets.
- Strategies for mitigating the bias to promote more balanced exposure across user groups and content types (a minimal re-ranking sketch follows this list).
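As one illustration of the mitigation item above, the sketch below shows a common family of interventions, exposure-aware re-ranking. It is an assumed example of how such a strategy could look rather than the method this research commits to; the function, the `min_neutral_share` parameter, and the item labelling are hypothetical.

```python
def rerank_with_neutral_floor(ranked_items, item_labels, k=10, min_neutral_share=0.3):
    """Greedily re-rank so that at least `min_neutral_share` of the top-k slots
    hold gender-neutral items, deferring some gender-typed items to do so.

    ranked_items : item_ids sorted by predicted relevance (best first)
    item_labels  : dict mapping item_id -> "neutral" or "gender_typed" (assumed labelling)
    """
    needed = int(round(min_neutral_share * k))
    reranked = []
    for item in ranked_items:
        slots_left = k - len(reranked)
        neutral_so_far = sum(item_labels.get(i) == "neutral" for i in reranked)
        # Skip a gender-typed item when every remaining slot is needed for neutral items
        if item_labels.get(item) != "neutral" and needed - neutral_so_far >= slots_left:
            continue
        reranked.append(item)
        if len(reranked) == k:
            break
    # If the candidate pool ran out of neutral items, fill remaining slots in relevance order
    for item in ranked_items:
        if len(reranked) == k:
            break
        if item not in reranked:
            reranked.append(item)
    return reranked
```

In practice, any such floor would need to be weighed against relevance loss (e.g. a drop in NDCG) and calibrated per domain and dataset.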
While this research focuses on gender as a demographic dimension, the framework and findings have broader implications: the same analysis can be extended to other demographic attributes, datasets, and content domains.
This research sits at the intersection of recommender systems, fairness in machine learning, and computational social science. As recommender systems become increasingly influential in shaping user choices and experiences, understanding and mitigating their biases becomes critical for ensuring equitable outcomes.