Abstract
This short paper introduces a three-stage pedagogy for engaging with algorithmically curated digital news media. The model is based on W. James Potter's cognitive theory of media literacy. The paper argues that algorithms amplify media effects through curation and repetition. A brief survey of the technical makeup of news recommendation algorithms is followed by a discussion of the connection between algorithms and media effects. The model is a preliminary statement of a larger research project.
Keywords
Media Literacy Education, News Recommendation Algorithms, Cognitive Media Literacy Theory
Media recommendation algorithms are our constant online companions. Practically every media property we engage with online serves as raw material for algorithms attempting to “personalize” our experience by serving up new items that are supposed to align with our interests. Algorithms attempt to create personalized music playlists, video watch lists, and newspapers. Recommendation algorithms range from relatively simple curations of items viewed by people who have viewed the same piece of media to complex machine learning algorithms that analyze topic and sentiment to match a viewer’s interests more closely (Seaver 2019). It is tempting to begin this paper with statements on the ubiquity of the media recommendation algorithm: how we are immersed in, surrounded by, and drowning in algorithms trying to present us with information that appeals to our interests and beliefs; how every choice we make in consuming digital media is attended to by some amorphous computer being that remembers our preferences, continually narrowing our field of selection and creating filter bubbles that reduce our ability to think critically. This paper, however, takes a different tack. While the ethical issues surrounding media recommender algorithms are well worthy of study, I am focused on the integration of recommendation algorithms, specifically those that recommend news, into media literacy pedagogy. In other words, how do we teach individuals to think their way through algorithmic noise while consuming information? I am interested in algorithms that work covertly in their attempts at personalization, such as those at work in applications like Apple News (main feed), Google News, and Yahoo News.
The title of this paper uses the word “deconstruction”, a loaded term that gestures to Derrida’s work on the instability of meaning. I am using it in a more literal sense: how does a media consumer recognize the function and effect of algorithms designed to prey on our psychological and cognitive drives and mechanisms to keep us clicking, reading, and viewing? In this short paper, I tentatively introduce a preliminary framework for a media literacy curriculum aimed at arming media consumers with the knowledge and skills to “think through” news recommendation algorithms. The foundation of this framework rests on seventy-year-old ideas on the essence of technology formulated by Martin Heidegger. For Heidegger, the essence of technology is not what it does, but how it makes us conceive the world. News recommender algorithms (all algorithms, for that matter) are more than the mathematical abstraction of a set of instructions; they are shapers of our world view. Heidegger saw the world-shaping essence of technology as the greatest danger facing humankind while simultaneously offering the “saving power” to counteract that danger, if only we use our capacity to think through instrumentality to the essence of technology (1993). By thinking through rather than reacting to recommendation algorithms, a news consumer could engage this saving power to, at the very least, recognize their narrowing function and, ideally, transform the narrowing protocol into one of expansion. For media literacy pedagogy to elicit this saving power, it must be focused, as Potter (2004a) argues, on individual cognition.
The paper begins by examining where algorithms fit in media literacy education, followed by a survey of the technological state of news recommender algorithms. I then present a three-part media literacy framework that utilizes and extends Potter’s (2004b) cognitive theory of media literacy to promote thinking through information-filtering algorithms, consisting of (1) algorithm awareness; (2) determining and understanding algorithm function; and (3) strategically directing the algorithm.
News recommendation algorithms present an aspect of media literacy that is neither information nor medium, but rather a technological instantiation of journalistic news framing as well as the curation practice of human editors. News producers have always framed stories in an attempt to appeal to cognitive mechanisms of emotion and risk assessment (Otieno, Spada, and Renkl 2013). News editors focus on selecting news that appeals to the largest segment of readers, viewers, and listeners. News recommendation algorithms based on textual analysis use news frames, recast as linguistic constructs, as items of analysis to be compared with reader engagement, producing an individualized editing function aimed at creating personalized news services. News recommendation algorithms thus present a unique challenge to media literacy education. I shall argue below that algorithms amplify the elements of Potter’s media effects knowledge structure at the individual level. Before outlining their relationship to media effects, I will present a brief survey of news recommendation algorithms from technological and sociological perspectives.
Recommendation algorithms are most often designed using one of three models: (1) collaborative, (2) content-based, or (3) hybrid (Adomavicius and Tuzhilin 2005). Collaborative algorithms focus on recommending items that other people have rated, purchased, or consumed, following the logic of “people who purchased this book, music, and so on also purchased…”. Content-based recommendation algorithms focus on what the individual has previously rated or consumed and then search for and present similar items. A hybrid combines aspects of both the collaborative and content-based models. These common recommendation algorithms are effective when a sufficient number of people are rating and/or consuming media such as music and movies. They are far less effective at recommending news items. Even if enough people rate a news story, by the time the algorithm is ready to make a recommendation, the story has more often than not become temporally irrelevant (Raza and Ding 2022). To deal with this challenge, news recommendation algorithms are increasingly being developed using deep learning and latent factor models. Latent factor models analyze features (factors) of the interaction between reader and news item, including temporality, location, topic, and social factors (Raza and Ding 2022). Algorithms analyze how long a reader spends reading an item, where they are reading it, the topic(s) of the item, and whether they are sharing it over social media. Raza and Ding (2022), in their comprehensive review of news recommendation algorithms, find rapid growth in the use of deep neural networks to analyze reader behavior for unique and unobserved patterns. The move to neural network-based algorithms has created a transparency problem for news recommendation algorithms, as it is almost impossible to clearly define how a neural network has calculated and presented a recommendation. A lack of transparency leads to a lack of trust (Shin 2020) on the part of the reader, assuming of course that the reader is aware of the algorithm in the first place. A more detailed examination of the technical state of affairs for news recommendation algorithms is beyond the scope of this short paper, but as Raza and Ding (2022) have observed in the literature, these algorithms are becoming more accurate in their recommendations while becoming less transparent in their function. A basic knowledge of how these algorithms are designed and how they function in practice is an important element in understanding the relationship between algorithms and media literacy pedagogy. More on this connection below.
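To make the content-based model concrete, the following sketch ranks a handful of toy articles against a profile built from a reader’s history, combining TF-IDF cosine similarity with an exponential recency decay to address the temporal-relevance problem noted above. It is a minimal illustration under assumed data and parameters; the articles, reading history, and 24-hour half-life are invented, and no production news recommender is this simple.

```python
from datetime import datetime
import math

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus standing in for a day's news items (hypothetical data).
articles = [
    {"title": "Local crime rates rise", "text": "police report burglary theft downtown",
     "published": datetime(2024, 5, 1)},
    {"title": "City budget approved", "text": "council vote budget schools parks",
     "published": datetime(2024, 5, 2)},
    {"title": "Burglary suspect arrested", "text": "police arrest suspect burglary investigation",
     "published": datetime(2024, 5, 3)},
]

# The reading history stands in for the engagement signals
# (clicks, dwell time) a real system would log covertly.
history = ["police report burglary theft downtown"]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform([a["text"] for a in articles])
profile = vectorizer.transform(history)  # the reader's interest profile

def recency_weight(published, now, half_life_hours=24.0):
    """Exponential decay so that stale stories fall out of the ranking."""
    age_hours = (now - published).total_seconds() / 3600.0
    return math.exp(-age_hours * math.log(2) / half_life_hours)

now = datetime(2024, 5, 3, 12)
content_scores = cosine_similarity(profile, doc_matrix).ravel()

# Combine topical similarity with freshness and rank the feed.
ranked = sorted(
    ((a, s * recency_weight(a["published"], now)) for a, s in zip(articles, content_scores)),
    key=lambda pair: pair[1],
    reverse=True,
)
for article, score in ranked:
    print(f"{score:.2f}  {article['title']}")
```

Run as written, the burglary stories outrank the budget story because they share vocabulary with the reading history, and the recency weight keeps the newest of them on top. A collaborative variant would replace the TF-IDF profile with patterns of co-consumption across readers, and a hybrid would blend the two scores.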
Whether news recommendation algorithms have a positive or negative effect on democratic society is a contested issue. Pariser (2012) believes algorithmic interventions are creating filter bubbles that polarize society by reducing critical engagement. Researchers such as Dylko (2016) have found both filtering and increased-processing effects. More recent studies have not found empirical evidence that fragmentation is occurring because of news recommendation algorithms (Zuiderveen Borgesius et al. 2016; Fletcher and Nielsen 2017). My goal is not to examine this debate in any depth. I propose accepting that algorithms are a factor in engaging with any digital media and that developing a strategy for limiting their negative effects and enhancing their positive ones should be a goal of media literacy education.
News recommendation algorithms serve to amplify the elements of Potter’s media effects knowledge structure. Potter defines three components of a knowledge structure of media effects: (1) perspective, (2) process of influence, and (3) factors of influence. In the case of news recommendation algorithms, algorithmic curation affects each component. In terms of perspective, algorithms simultaneously amplify immediate and long-term effects. Potter offers the example that repeated and regular exposure to violence and crime in television shows can increase the likelihood that a viewer believes they live in a high-crime neighborhood (2004b, 81). The same effect could be produced by repeated exposure to crime-focused news items. Algorithms based on latent factor analysis would identify crime as a topic of engagement and present similar items in an attempt to create and maintain interest. Algorithms increase immediate exposure within a reading session and long-term exposure over multiple sessions. Other media recommendation algorithms, such as Netflix’s, would have the same effect: watching a violent movie elicits suggestions of other violent movies. Recommendations produced by news algorithms do not necessarily produce negative effects, but negative effects are possible if the reader remains unconscious of the algorithm’s function. Algorithms have an amplifying effect on the other aspects of Potter’s perspective model: levels of effect including cognitive, attitudinal, emotional, physiological, behavioral, and societal are all amplified by algorithmic curation. Finally, Potter’s conception of valence, the value a reader puts on the message in a news item, is amplified by recommendation algorithms. Latent factor analysis leads to repetition that reinforces an individual’s stance on an issue. Potter refers to an individual’s risk set-point, where media exposure leads to a manifestation of the media effect, for example exposure to violence leading to violence (84). News recommendation algorithms could move a risk set-point in terms of the ossification of opinion: repeated exposure to like arguments and positions reinforces confirmation bias. Further research is required to flesh out this idea.
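The amplification dynamic described above can be made visible with a toy feedback loop: a recommender that slightly boosts the weight of every topic a reader engages with will, over many sessions, concentrate exposure on a shrinking set of topics. The topics, the uniform starting weights, and the five percent boost below are illustrative assumptions, not a model of any actual news service.

```python
import random
from collections import Counter

random.seed(42)  # reproducible toy run
topics = ["crime", "politics", "science", "sports", "arts"]
weights = {t: 1.0 for t in topics}  # start with uniform recommendation odds

exposure = Counter()
for session in range(200):
    # Recommend a topic in proportion to its current weight.
    pick = random.choices(topics, weights=[weights[t] for t in topics])[0]
    exposure[pick] += 1
    # Engagement boosts that topic's weight for future sessions.
    weights[pick] *= 1.05

for topic, count in exposure.most_common():
    print(f"{topic:10s} {count:4d} exposures")
```

Because an early, randomly favored topic compounds its advantage each session, the final exposure counts skew heavily toward one or two topics, a computational analogue of the repetition effect Potter describes at the level of perspective.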
News recommendation algorithms affect the process of influence of media effects mainly through Potter’s conception of countervailing influences: an individual is exposed to similar content that could lead to confirmation bias while simultaneously feeling satisfaction at staying informed. I plan to design a study to test this hypothesis. Potter’s factors in the process of influence, specifically cognitive abilities, drives, and sociological factors, play a key role in a media literacy pedagogy aimed at teaching methods of algorithm awareness and strategy. Recognizing levels of effects such as attitude reinforcement, emotional sensitization, and behavioral habit formation is key to developing a conscious perspective on news recommendation algorithms. Understanding the psychological drive to confirm our held beliefs, and developing habits to challenge them, can be strategically enhanced through algorithms. Finally, recommendation algorithms can serve as a conditioning social institution if followed unconsciously: if an individual takes an automatic stance toward news curation, the algorithm covertly plays an institutional role as it curates items of similar ideological content from a range of sources. What follows is a brief introduction to a media literacy education strategy for teaching algorithmic awareness and strategy. I plan to expand on each element through a series of quantitative and qualitative research projects conducted over the next year.
I suggest a three-stage strategy for engaging with algorithmically curated news content. Each stage is directly related to Potter’s (2004b) work on media effects. Potter (2004a) believes most individuals engage with media on an automatic level; they are unconscious of the effects of media. He proposes a set of knowledge structures required for media literacy, one of which is a knowledge of media effects. I suggest adding the algorithm as an element to his set of factors that includes perspective, risk, processes, and factors of influence. The first stage is algorithm awareness. Algorithms amplify aspects of perspective through repetitive exposure to similar content. If this timing and repetition effect happens outside of conscious awareness, all other media effects are made opaque, resulting in confirmation bias and attitude ossification. The first step in engaging with digital news media is therefore conscious awareness of algorithmic surveillance: the understanding that every click an individual makes, the time they spend reading, and any comments or ratings they provide are collected and recorded to determine the items presented in the future. The second stage is awareness of algorithmic function. Is the news recommendation algorithm based on previous ratings (or pre-selected topics) made by the individual, or is it based on what other readers have read? Or does the algorithm seem to understand the topics and attitudes in the content engaged? This is a complex task that requires research and experimentation. I plan to develop a method for algorithm understanding based on a set of experiments closely aligned with stage three; a first sketch of such a probe appears below. Stage three involves using recommendation algorithms in a positive manner. I use the term positive not in a sentiment-laden, right-or-wrong fashion, but in a deliberate, goal-directed sense. This stage will also be developed through future research, in the hope of producing a heuristic for moving the algorithm to a center position by actively seeking out counterarguments and opposing positions while under algorithmic surveillance.
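As a first gesture toward the stage-two experiments, the sketch below compares keyword frequencies across two snapshots of a news feed, one taken before and one after a deliberate reading intervention (for example, reading only crime stories for a week). The snapshot headlines and probe keywords are hypothetical; a rise in post-intervention counts is rough evidence of content-based personalization, not proof of any particular algorithm design.

```python
from collections import Counter

def keyword_counts(headlines, keywords):
    """Count how often each probe keyword appears across a feed snapshot."""
    counts = Counter()
    for headline in headlines:
        lowered = headline.lower()
        for kw in keywords:
            if kw in lowered:
                counts[kw] += 1
    return counts

# Hypothetical feed snapshots captured by the reader before and
# after deliberately engaging with crime stories.
before = ["City budget approved", "Rain expected this weekend", "Team wins final"]
after = ["Burglary suspect arrested", "Crime rates rise downtown", "Police seek witnesses"]
probe_keywords = ["crime", "police", "burglary"]

# Counter subtraction keeps only positive differences, i.e., increases.
shift = keyword_counts(after, probe_keywords) - keyword_counts(before, probe_keywords)
print("Post-intervention increase per keyword:", dict(shift))
```

Repeating the capture-intervene-capture cycle with different topics would begin to distinguish content-based personalization from collaborative effects, since only the former should track the individual reader’s deliberate choices so closely.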
This short, preliminary paper has introduced a three-stage strategy for developing a media literacy pedagogy for engaging with algorithmically curated news content. The model is based on Potter’s work in cognitive media literacy theory and proposes the addition of algorithmic amplification to his knowledge structure of media effects. The goal of the model is to provide a method for consumers of media to think through recommendation algorithms and regain control of what and how they choose to engage with digital media. I am beginning work on designing research projects to flesh out the theory and application of the stages.
References
Adomavicius, G., & Tuzhilin, A. (2005). Toward the next generation of recommender systems: a survey of the state-of-the-art and possible extensions. IEEE Transactions on Knowledge and Data Engineering, 17(6), 734-749. https://doi.org/10.1109/tkde.2005.99
Zuiderveen Borgesius, F. J., Trilling, D., Möller, J., Bodó, B., de Vreese, C. H., & Helberger, N. (2016). Should We Worry about Filter Bubbles? Internet Policy Review, 5(1). https://doi.org/10.14763/2016.1.401
Dylko, I. B. (2016). How Technology Encourages Political Selective Exposure. Communication Theory, 26(4), 389-409. https://doi.org/10.1111/comt.12089
Fletcher, R., & Nielsen, R. K. (2017). Are News Audiences Increasingly Fragmented? A Cross-National Comparative Analysis of Cross-Platform News Audience Fragmentation and Duplication. Journal of Communication, 67(4), 476-498. https://doi.org/10.1111/jcom.12315
Heidegger, M. (1993). The Question Concerning Technology. In D. F. Krell (Ed.), Martin Heidegger: Basic Writings. Harper Collins.
Otieno, C., Spada, H., & Renkl, A. (2013). Effects of News Frames on Perceived Risk, Emotions, and Learning. PLoS ONE, 8(11), e79696. https://doi.org/10.1371/journal.pone.0079696
Pariser, E. (2012). The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think. Penguin.
Potter, W. J. (2004a). Argument for the Need for a Cognitive Theory of Media Literacy. American Behavioral Scientist, 48(2), 266-272. https://doi.org/10.1177/0002764204267274
Potter, W. J. (2004b). Theory of Media Literacy: A Cognitive Approach. Sage Publications.
Raza, S., & Ding, C. (2022). News recommender system: a review of recent progress, challenges, and opportunities. Artificial Intelligence Review, 55(1), 749-800. https://doi.org/10.1007/s10462-021-10043-x
Seaver, N. (2019). Captivating algorithms: Recommender systems as traps. Journal of Material Culture, 24(4), 421-436. https://doi.org/10.1177/1359183518820366
Shin, D. (2020). How do users interact with algorithm recommender systems? The interaction of users, algorithms, and performance. Computers in Human Behavior, 109, 106344. https://doi.org/10.1016/j.chb.2020.106344