Show the item's main metadata

dc.contributor.author: Chizari, Nikzad
dc.contributor.author: Tajfar, Keywan
dc.contributor.author: Moreno García, María Navelonga
dc.date.accessioned: 2025-08-28T10:35:43Z
dc.date.available: 2025-08-28T10:35:43Z
dc.date.issued: 2023-02-17
dc.identifier.citation: Chizari, N.; Tajfar, K.; Moreno-García, M.N. Bias Assessment Approaches for Addressing User-Centered Fairness in GNN-Based Recommender Systems. Information 2023, 14, 131. https://doi.org/10.3390/info14020131
dc.identifier.uri: http://hdl.handle.net/10366/166825
dc.description.abstract: [EN] In today's technology-driven society, many decisions are made based on the results provided by machine learning algorithms. It is widely known that the models generated by such algorithms may present biases that lead to unfair decisions for some segments of the population, such as minority or marginalized groups. Hence, there is concern about detecting and mitigating these biases, which may increase the discriminatory treatment of some demographic groups. Recommender systems, used today by millions of users, are not exempt from this drawback. The influence of these systems on so many user decisions, which in turn are taken as the basis for future recommendations, contributes to exacerbating this problem. Furthermore, there is evidence that some of the most recent and successful recommendation methods, such as those based on graph neural networks (GNNs), are more sensitive to bias. The evaluation approaches for some of these biases, such as those involving protected demographic groups, may not be suitable for recommender systems, since their results are user preferences, which do not necessarily have to be the same across the different groups. Other assessment metrics are aimed at evaluating biases that have no impact on the user. In this work, the suitability of different user-centered bias metrics in the context of GNN-based recommender systems is analyzed, as well as the response of recommendation methods to the different types of bias these measures address.
dc.format.mimetype: application/pdf
dc.language.iso: eng
dc.publisher: MDPI
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Recommender systems
dc.subject: GNN (Graph Neural Networks)
dc.subject: Bias
dc.subject: Fairness
dc.subject: Sensitive features
dc.title: Bias Assessment Approaches for Addressing User-Centered Fairness in GNN-Based Recommender Systems
dc.type: info:eu-repo/semantics/article
dc.relation.publishversion: https://doi.org/10.3390/info14020131
dc.subject.unesco: 1203 Computer Science
dc.identifier.doi: 10.3390/info14020131
dc.rights.accessRights: info:eu-repo/semantics/openAccess
dc.identifier.essn: 2078-2489
dc.journal.title: Information
dc.volume.number: 14
dc.issue.number: 2
dc.page.initial: 131
dc.type.hasVersion: info:eu-repo/semantics/publishedVersion


Files in this item


This item appears in the following collections


Attribution-NonCommercial-NoDerivatives 4.0 International
Except where otherwise noted, the item's license is described as Attribution-NonCommercial-NoDerivatives 4.0 International