Algorithmic Gatekeepers: The Invisible Players Shaping Knowledge Dissemination

March 29, 2024

Authors: Raphaël Tuor, Denis Lalanne, Bruno Dumas and Juliette Parmentier | Human-IST Institute.

Exploring the unprecedented challenges associated with the integration of epistemic gatekeepers in news recommender systems

The widespread adoption of new communication technologies has enabled seamless and ubiquitous access to information in today’s industrialized world[1]. Recommender systems facilitate the diffusion of knowledge at a large scale and play a crucial role in the development of society and of its individuals[2]. Through social media, search engines, news aggregators, knowledge machines[3] such as Wikipedia, and AI-powered chatbots, the internet has become a convenient primary source of knowledge[4], leading to significant changes in the information society[5][6][7] and in the traditional information distribution system.

We are entering an era shaped, on the one hand, by the increasing power of algorithms[8] and, on the other hand, by the dissemination of new tools enhancing the epistemic agency of their users[9]. In this era of information overload, algorithmic gatekeepers are an efficient way to manage and regulate content, affecting the availability of knowledge and the way our society obtains, processes, stores, uses and produces it[10].

The multiple challenges stemming from unregulated media consumption

From a Human-Computer Interaction (HCI) perspective, research has already identified patterns underpinning several ethical problems linked to digital media consumption[1] and to the perception of personalization[2]. Individuals’ attitudes and behaviours are affected in multiple ways[3]: problems range from psychological effects such as decreased productivity, reduced affective well-being[4] and news avoidance[5], to human-technology relationship problems such as distrust of recommendations, or of technology altogether.

Algorithmic gatekeepers: highly powerful but opaque

Making the inner workings of a system transparent does not guarantee its interpretability by a human (the explainee). Interpretability can be defined as the ability of a system to give information that is meaningful to the agent receiving it[6]. The lack of interpretability and the hidden nature of algorithmic gatekeepers[7], which distribute, filter and rank content, are among the problems preventing a healthy relationship with news recommender systems[8]. Content recommendation is a multi-step process that relies on an increasingly complex set of factors with intertwined dimensions, and on novel methods such as Machine Learning (ML) and, more recently, Deep Learning (DL) and Natural Language Models (NLM), which allow artificial agents to consume, understand and suggest information based on language models. Because of that complex chain of inputs and outputs, recommender systems based on deep learning offer high accuracy but low interpretability, in contrast to standard decision trees[9]. That opacity, inherent to systems that rely on complex decision processes, is captured by the term “black-box effect”[10]. Research on interpretable systems could take inspiration from the social sciences and from explanation theories[11]. For instance, following the classification of Malle and Pearce[12], actions performed by recommender systems are interpreted as intentional and unobservable, and thus require explanations. Such causal explanations should provide information about the history of factors that led to a given outcome.
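To make the contrast concrete, the sketch below trains a shallow decision tree and a small neural network on the same toy click-prediction task; the data and feature names are entirely hypothetical and not drawn from any deployed system. The tree’s decision rules can be printed and read directly, whereas the network exposes only thousands of numeric weights.

```python
# A minimal sketch (hypothetical synthetic data and feature names) contrasting
# an interpretable decision tree with an opaque neural network.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
feature_names = ["topic_match", "source_familiarity", "recency", "article_length"]
X = rng.random((500, 4))
# Hypothetical ground truth: users click recent articles on matching topics.
y = ((X[:, 0] > 0.5) & (X[:, 2] > 0.4)).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))  # readable if/else rules

mlp = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0).fit(X, y)
n_weights = sum(w.size for w in mlp.coefs_) + sum(b.size for b in mlp.intercepts_)
print(f"MLP accuracy: {mlp.score(X, y):.2f}, parameters: {n_weights}")  # accurate, but opaque
```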

The goal of eXplainable Artificial Intelligence (XAI) can be reformulated as “congruence between the AI’s input-output mapping, and the human mental model of that mapping”[13]. To achieve this, a successful XAI system should help explain both individual prediction outcomes (why did the system suggest this content to me?) and how the model produces suggestions, in order to reduce human (mis-)conceptions about the AI. Explanations should be interpretable, in the sense that they should be short and clear enough to be understood by humans[14]. They should help shift human beliefs about the AI’s behaviour towards its actual behaviour[15]. In addition, new normative initiatives should encourage transparent communication at several levels: from the explicit communication of the values that the platform endorses and of the factors affecting recommendations (through XAI), to the design of interfaces that favour users’ interests (epistemic welfare[16], among others) and allow them to understand and control the way their preferences affect the customization of the recommender system’s rankings[17].
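As an illustration of the first kind of explanation (why was this content suggested to me?), the sketch below decomposes a simple content-based relevance score into per-topic contributions. The user profile, article vector and topic names are all hypothetical, and real systems rely on far richer signals.

```python
# A minimal sketch of a local, human-readable explanation for one recommendation:
# a content-based score is decomposed into additive per-topic contributions.
# The user profile, article vector and topic names are all hypothetical.
import numpy as np

topics = ["politics", "climate", "sports", "local news", "opinion"]
user_profile = np.array([0.9, 0.7, 0.0, 0.4, 0.1])  # inferred interest per topic
article = np.array([0.8, 0.9, 0.0, 0.1, 0.3])       # topic weights of the suggested article

contributions = user_profile * article  # additive decomposition of the dot-product score
score = contributions.sum()

print(f"relevance score: {score:.2f}")
for topic, c in sorted(zip(topics, contributions), key=lambda t: -t[1])[:3]:
    print(f"  suggested mainly because of your interest in '{topic}' (+{c:.2f})")
```

An explanation of this kind is short enough to display next to a recommendation and directly connects the suggestion to the user’s inferred profile, which is what the congruence criterion above asks for.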

Algorithmic gatekeepers: great power is synonymous with great accountability

From Gutenberg’s printing revolution in 1440 to the advent of the internet in the 1980s[18], technological revolutions in information distribution have continuously reshaped our content consumption habits and the entire sociotechnical system. In their turn, algorithmic gatekeepers are shifting the roles and responsibilities among the actors involved in media content production. This situation creates a need for new regulations, with, as a priority, respect for the values and needs of every actor involved[19]. In an era of information overload, algorithmic gatekeepers are at the core of news recommender systems that act as intermediaries between content producers and consumers.

From content producers to consumers, algorithmic gatekeepers act at several levels, defined by the roles attributed to them. For the former, these algorithmic systems can change the way journalists produce content[20], encouraging the production of the most engaging types of content while sacrificing journalistic values such as accuracy, relevance, independence and impartiality, and thus undermining epistemic welfare. For the latter, algorithmic gatekeepers affect users’ epistemic well-being through the way they filter and rank content. When not used with the user’s well-being and welfare in mind, these practices can lead to several issues: dark patterns push users to consume content they did not intend to consume[21], while filter bubbles and echo chambers reduce the diversity of the content they see, diminishing their epistemic agency[22]. The quantity of information shown also plays a significant role: the mere reduction of a Facebook newsfeed results in a reduction of the time spent on the app[23]. The question of the roles that we should assign to algorithmic gatekeepers remains central here[24]: in which contexts do we want algorithms to act as filter bubbles, and in which contexts should they promote diversity?
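One way a recommender can be steered between those two extremes is to re-rank results with an explicit relevance-diversity trade-off, as in the maximal marginal relevance (MMR) heuristic sketched below. The relevance scores and topic-overlap matrix are purely hypothetical, and MMR is only one of many possible diversification strategies.

```python
# A minimal sketch of maximal marginal relevance (MMR) re-ranking, one possible
# way to make the relevance-diversity trade-off explicit. Relevance scores and
# the topic-overlap matrix are hypothetical; lambda_ = 1.0 yields a pure
# relevance ranking (filter-bubble-like), lower values promote diversity.
import numpy as np

def mmr_rerank(relevance, similarity, k, lambda_=0.7):
    """Greedily pick k items, balancing relevance against redundancy."""
    selected, candidates = [], list(range(len(relevance)))
    while candidates and len(selected) < k:
        def mmr_score(i):
            redundancy = max((similarity[i][j] for j in selected), default=0.0)
            return lambda_ * relevance[i] - (1 - lambda_) * redundancy
        best = max(candidates, key=mmr_score)
        selected.append(best)
        candidates.remove(best)
    return selected

relevance = np.array([0.9, 0.85, 0.8, 0.6, 0.5])    # predicted engagement per article
similarity = np.array([[1.0, 0.9, 0.8, 0.2, 0.1],   # pairwise topic overlap
                       [0.9, 1.0, 0.85, 0.2, 0.1],
                       [0.8, 0.85, 1.0, 0.3, 0.2],
                       [0.2, 0.2, 0.3, 1.0, 0.4],
                       [0.1, 0.1, 0.2, 0.4, 1.0]])

print(mmr_rerank(relevance, similarity, k=3, lambda_=1.0))  # [0, 1, 2]: three near-duplicates
print(mmr_rerank(relevance, similarity, k=3, lambda_=0.5))  # [0, 3, 4]: a more diverse selection
```

The parameter makes the normative choice visible: whoever sets lambda_ is, in effect, deciding how much diversity the gatekeeper serves in a given context.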

 

eXplainable AI: a key role for a trustful relationship

In the end, problems of trust towards technology arise: the platform’s interests and values remain opaque, and new ways of producing knowledge bring new issues regarding the consumption of trustworthy information sources. The factors defining the relevance of content should be driven by robust epistemic values that favour a healthy society, rather than by mere audience or engagement metrics. Constantly and significantly reshaping our content consumption habits and the entire sociotechnical system, these invisible and complex filters require special care to ensure a trustful human-technology relationship. Content providers should stop treating users’ attention and emotional response as mere metrics for improving engagement and short-term profitability, with no consideration for the relevance of the content to consumers’ knowledge or interests. This shift of responsibilities and agency among actors highlights the need to make algorithmic gatekeepers and their developers accountable, and to develop, as a priority, new regulations that consider human values and the needs of every actor involved, across the journalistic, social and legal domains. All actors should take part in the development of more transparent and accountable models[25]. Wikipedia’s bots are a relevant example of the implementation of such models: these bots play moderation and gatekeeping roles that facilitate the publishing of content on Wikipedia, while enabling developers to take an active part in their design and development[26].

From eXplainable AI, i.e. the explicit communication of the values endorsed by media platforms and of the factors affecting consumers’ content recommendations, to the design of interfaces tailored to serve users’ interests and values (epistemic welfare and well-being, amongst others), adequate education of individuals about new information technologies should also be ensured in order to promote critical thinking[27]. These novel normative theories should also make explicit, towards the scientific community, the values defining researchers’ visions of the public interest. The resulting epistemic toolboxes should encourage, at several levels, harmonious human-technology interaction and ethical knowledge dissemination. The next step in our research efforts towards more transparent and ethical algorithmic gatekeeping practices will focus on understanding in more detail how these practices affect news consumers. By conducting questionnaires, we will determine the most common consumer beliefs and needs regarding the use of AI in news recommender systems.

————————-

[1] Tsamados, A., Aggarwal, N., Cowls, J., Morley, J., Roberts, H., Taddeo, M., & Floridi, L. (2020). The ethics of algorithms: key problems and solutions. AI & SOCIETY, 37, 215-230.

[2] Kleanthous, S., & Siklafidis, C. (2023). Perception of Personalization Processes: Awareness, Data Sharing and Transparency. Proceedings of the 2nd International Conference of the ACM Greek SIGCHI Chapter.

[3] Iyer, A. (2023). The Effect of Social Media News Feed Consumption on Personal Productivity: A Statistical Study. International Journal of Innovative Science and Research Technology, 8, 733-746.

[4] Pang, H., & Ruan, Y. (2023). Can information and communication overload influence smartphone app users’ social network exhaustion, privacy invasion and discontinuance intention? A cognition-affect-conation approach. Journal of Retailing and Consumer Services, 73, 103378.

[5] Edgerly, S. (2021). The head and heart of news avoidance: How attitudes about the news media relate to levels of news consumption. Journalism, 23, 1828-1845.

[6] Mosqueira-Rey, E., Hernández-Pereira, E., Alonso-Ríos, D., Bobes-Bascarán, J., & Fernández-Leal, Á. (2023). Human-in-the-loop machine learning: a state of the art. Artificial Intelligence Review, 56(4), 3005-3054.

[7] Koene, A.R. (2017). Human Agency on Algorithmic Systems.

[8] traditional rating-based observations being summarized in natural language

[9] Longo, L., Brcic, M., Cabitza, F., Choi, J., Confalonieri, R., Ser, J.D., Guidotti, R., Hayashi, Y., Herrera, F., Holzinger, A., Jiang, R., Khosravi, H., Lecue, F., Malgieri, G., Páez, A., Samek, W., Schneider, J., Speith, T., & Stumpf, S. (2023). Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open Challenges and Interdisciplinary Research Directions. ArXiv, abs/2310.19775.

[10] Angelov, P. P., Soares, E. A., Jiang, R., Arnold, N. I., & Atkinson, P. M. (2021). Explainable artificial intelligence: an analytical review. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 11(5), e1424.

[11] Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial intelligence, 267, 1-38.

[12] Malle, B. F., & Pearce, G. E. (2001). Attention to behavioral events during interaction: Two actor–observer gaps and three attempts to close them. Journal of Personality and Social Psychology, 81(2), 278.

[13] Yang, S. C. H., Folke, N. E. T., & Shafto, P. (2022, June). A psychological theory of explainability. In International Conference on Machine Learning (pp. 25007-25021). PMLR.

[14] Radlinski, F., Balog, K., Diaz, F., Dixon, L., & Wedin, B.D. (2022). On Natural Language User Profiles for Transparent and Scrutable Recommendation. Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval.

[15] Yang, S. C. H., Folke, N. E. T., & Shafto, P. (2022, June). A psychological theory of explainability. In International Conference on Machine Learning (pp. 25007-25021). PMLR.

[16] https://www.algepi.com/epistemic-welfare-concept-note/

[17] Vraga, E.K., & Tully, M. (2019). News literacy, social media behaviors, and skepticism toward information on social media. Information, Communication & Society, 24, 150-166.

[18] Bearman, T.C. (1992). Information transforming society: challenges for the year 2000. Information Services and Use archive, 12, 217-223.

[19] Yeung, K., & Lodge, M. (2019). Algorithmic Regulation: An Introduction. IRPN: Innovation Policy Studies (Topic).

[20] Umejei, E. (2022). Chinese Digital Platform: “We Write What the Algorithm Wants”. Digital Journalism, 10, 1875-1892.

[21] Mathur, A., Mayer, J.R., & Kshirsagar, M. (2021). What Makes a Dark Pattern… Dark?: Design Attributes, Normative Considerations, and Measurement Methods. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems.

[22] Nechushtai, E., & Lewis, S.C. (2019). What kind of news gatekeepers do we want machines to be? Filter bubbles, fragmentation, and the normative dimensions of algorithmic recommendations. Comput. Hum. Behav., 90, 298-307.

[23] Purohit, A., Bergram, K., Barclay, L., Bezençon, V., & Holzer, A. (2023). Starving the Newsfeed for Social Media Detox: Effects of Strict and Self-regulated Facebook Newsfeed Diets. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems.

[24] Nechushtai, E., & Lewis, S.C. (2019). What kind of news gatekeepers do we want machines to be? Filter bubbles, fragmentation, and the normative dimensions of algorithmic recommendations. Comput. Hum. Behav., 90, 298-307.

[25] Geiger, R.S., & Halfaker, A.L. (2016). Open algorithmic systems: lessons on opening the black box from Wikipedia.

[26] Piscopo, A. (2018). Wikidata: A New Paradigm of Human-Bot Collaboration? ArXiv, abs/1810.00931.

[27] Vraga, E.K., & Tully, M. (2019). News literacy, social media behaviors, and skepticism toward information on social media. Information, Communication & Society, 24, 150-166.
