Abstract
Algorithm-driven technologies are transforming our world. Digital algorithms possess the epistemological authority to shape, reinforce, and limit society by determining what we see and what we do not. While some public and private efforts exist to mitigate the harmful effects of these powerful digital technologies, few measures exist to prepare K-12 students for the algorithm-driven world in which they live. This paper provides an overview of the emerging Critical Algorithmic Literacy (CAL) framework. Grounding CAL in Kellner and Share’s (2019) critical media literacy (CML) framework, this paper examines promising practices and challenges to CAL implementation. The described methods empower students to evaluate, challenge, and reconstruct algorithmically-driven media.
Keywords
Critical Media Literacy, AI, Surveillance Capitalism, Critical Algorithmic Literacy, Biased Algorithms, Disinformation, Algorithms
Algorithmically-driven technologies are transforming our world. While some public and private efforts exist to mitigate the harmful effects of these technologies, few measures exist to prepare students for the algorithm-driven world in which they live (UNICEF, 2020; Wang et al., 2022). Digital algorithms comprise an invisible infrastructure that makes consequential decisions governing much of our lives (Trammell & Cullen, 2021). These algorithms influence media consumption, health care, finances, social interactions, and much more (Noble, 2018; O’Neil, 2017). The prevalence of artificial intelligence, biased algorithms, surveillance capitalism (Zuboff, 2019), and disinformation amplifies the need for students to develop critical skills regarding how digital media influences their lives.
Young people spend significant parts of their lives viewing and interacting with algorithmically-driven media. A study by Common Sense Media, for example, found that children between the ages of 8 and 12 spend an average of five-and-a-half hours online each day, with YouTube comprising most of that screen time (Rideout et al., 2022). Moreover, digital platforms such as TikTok, Facebook, and YouTube track, analyze, and monetize children’s online behavior for profit-driven purposes (Wang et al., 2022; Zuboff, 2019). Despite the impact of YouTube and other algorithmically-driven media on children’s lives, most students have little knowledge of how these media are created, disseminated, and interpreted (Kellner & Share, 2017).
This article describes the evolving critical algorithmic literacy (CAL) framework (Cotter, 2020; Dasgupta & Hill, 2021; Wang et al., 2022), which empowers students to evaluate, challenge, and construct algorithmically-driven media. Critical algorithmic literacy represents a synthesis of computer science and Kellner and Share’s (2019) critical media literacy. Because digital algorithms function as a communications technology, much as broadcasting and publishing do (Gillespie, 2014), they should be included in the paradigm of media literacy education.
As digital algorithms help make decisions on personal and societal levels, they influence how students understand themselves and relate to others. Accordingly, our notion of literacy education should expand to include students’ critical consumption of and production with these persuasive algorithms. As the definition of media literacy education expands traditional literacy to “multiliteracies” that include various media such as music, film, video, and the Internet (New London Group, 1996; Buckingham, 2007; Share, 2007; Hobbs & Jensen, 2013), this article positions digital algorithms as a medium to be included within this dynamic definition of literacy. Educational organizations such as the National Council of Teachers of English (2022) and the National Council for the Social Studies (2022), for example, have similarly called for an expansion of literacy that includes digital media. As digital algorithms dominate society’s personal, economic, cultural, and social spheres, media literacy should include algorithmic awareness with a critical perspective to empower students with the skills to question algorithmic representations of reality. The emerging critical algorithmic literacy framework provides a structure to address student needs by expanding traditional notions of literacy and media literacy to include digital algorithms (Cotter, 2020; Dasgupta & Hill, 2021; Hautea et al., 2017; Trammell & Cullen, 2021; Wang et al., 2022).
Critical algorithmic literacy seeks to expand media literacy education by providing students with skills that “promote their ability to engage in the critique of algorithmic practices” (Wang et al., 2022, p. 4) and that they “can use to understand, interrogate, and critique the algorithmic systems that shape their lives” (Dasgupta & Hill, 2021, p. 1). Beyond students considering the effects of algorithms in authentic contexts, CAL invites students to develop a sense of agency to address how algorithms are used to establish and reinforce existing power structures (Trammell & Cullen, 2021). The critical examination of and interaction with these algorithmic effects empowers students to assess the complex relationships between their lives and the multitude of media with which they interact.
Drawing on critical pedagogy, CAL seeks to empower students to address inequitable power dynamics (Trammell & Cullen, 2021). Many examples exist of algorithms perpetuating negative stereotypes and societal inequities (Gran et al., 2021; Kantayya & Buolamwini, 2021; Noble, 2018; O’Neil, 2016). As many digital algorithms amplify power asymmetries in covert yet formidable ways, CAL reveals the nonneutral nature of algorithmically-driven media created by people for specific purposes in sociocultural contexts (Dasgupta & Hill, 2021; Hautea et al., 2017). CAL seeks to address fundamental questions regarding who or what has the power to determine what students see and can interact with, as well as the implications of that power.
CAL’s focus on digital algorithms and their social implications draws on Kellner and Share’s (2019) critical media literacy (CML) framework, which provides the conceptual foundation for the emerging CAL model. CML seeks to develop analytical skills that “examine the relationships between media and audiences, information, and power” (Kellner & Share, 2019, p. 26). Critical media literacy challenges dominant media representations and strives to analyze the power structures communicated through the various media we all consume. The CML framework consists of six conceptual understandings and corresponding questions centered on the nonneutral nature of all media and the ways media “support and/or challenge dominant hierarchies of power, privilege, and pleasure” (Kellner & Share, 2019, p. 8). Kellner and Share (2019) argue that “literacies must constantly be evolving to embrace new technologies and forms of culture and communication, and must be critical, teaching students to become discerning readers, interpreters, and producers of media texts and new types of social communication” (p. 31). CML leverages new production technologies to empower students to create alternative media texts that challenge dominant, often hegemonic messages created and distributed by mass media. This production process engages students in questioning dominant narratives and finding their voices to communicate alternative viewpoints and representations. This emphasis on student production comprises a core component of CAL-integrated lessons.
Algorithms in Context
While the term algorithm, in the broadest sense, refers to “instructions for solving a problem or completing a task” (Rainie & Anderson, 2017, p. 2), I will use the more current application of the word to refer to the increasingly powerful computer programs that autonomously make decisions based on data (Gillespie, 2014; Willson, 2017). Beyond purely technical definitions of computer algorithms, many scholars expand the term to include algorithms’ sociocultural uses and effects (Gillespie, 2014). For example, Benjamin (2019), Noble (2018), and O’Neil (2017) describe algorithms as mechanisms that embed culturally-dominant ideologies. Seaver (2017) envisions digital algorithms “as heterogeneous and diffuse sociotechnical systems, rather than rigidly constrained and procedural formulas” (p. 1), and O’Neil (2017) describes digital algorithms as “opinions embedded in mathematics” (Kindle location 405). Many scholars and observers similarly describe the nonneutral nature of algorithms and their effects throughout society.
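The two senses of the term can be made concrete in a few lines of code. The sketch below is purely illustrative: the functions, the “topic” feature, and the viewing counts are invented rather than drawn from any real platform, but the contrast shows the shift from an algorithm as explicit instructions to an algorithm as a data-driven decision-maker.

```python
# In the broadest sense, an algorithm is simply step-by-step instructions:
def average(numbers):
    return sum(numbers) / len(numbers)

# In the narrower sense used here, algorithms make decisions autonomously
# from data. This hypothetical recommender picks a video using a viewing history:
def recommend(watch_history, video_a, video_b):
    score_a = watch_history.get(video_a["topic"], 0)
    score_b = watch_history.get(video_b["topic"], 0)
    return video_a if score_a >= score_b else video_b

history = {"gaming": 12, "news": 2}  # invented viewing counts
print(average([2, 4, 6]))  # -> 4.0
print(recommend(history, {"topic": "gaming"}, {"topic": "news"}))
# -> {'topic': 'gaming'}: the data, not a human editor, made the decision
```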
Critical algorithmic literacy involves students taking a critical stance on the sociocultural effects of digital algorithms. As Noble (2018) explains, algorithms “function as an expression of a series of social, political, and economic relations, relations often obscured and normalized in technological practices” (p. 98). These normalizing views promote the purposes of their respective companies (Beer, 2009; Noble, 2018; Kitchin, 2017). CAL represents a synthesis of computer science and CML and provides students with the skills to examine the creators’ intentions, biases, and purposes embedded in these algorithms.
Within educators’ and students’ conception of digital algorithms, it is essential to include Artificial Intelligence (AI), or machine learning, as it dominates students’ digital environments. AI represents increasingly ubiquitous and complex algorithms that augment and often replace human decision-making (O’Neil & Gunn, 2020). Even among experts, defining AI presents a challenge (Long & Magerko, 2020) as it represents a broad and evolving term (Register & Ko, 2020). For this research, I will refer to AI’s most common manifestation, supervised machine learning (Shane, 2019). Broadly speaking, this form of AI uses trial-and-error methods of deciphering large data sets to invent rules that help achieve specific outcomes (Shane, 2019). As an example, imagine one wanted to create a computer program that differentiated between cats and dogs. One could create a searchable database that lists the features of each animal. Alternatively, AI would incorporate many examples of cats and dogs and “learn” the differences between them (Lane, 2021). As AI-driven media dominates students’ lives through YouTube and other platforms, placing artificial intelligence within the CAL framework is essential.
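A minimal sketch can make this contrast concrete. Everything below is hypothetical: the two numeric features, the handwritten rule, and the six training examples are invented for illustration (real systems learn from millions of images, not two numbers), and the sketch assumes the scikit-learn library.

```python
from sklearn.tree import DecisionTreeClassifier

# Each animal is described by two invented features:
# [weight in kg, ear pointiness on a 0-1 scale]
examples = [[4, 0.9], [5, 0.8], [3, 0.95],    # cats
            [20, 0.3], [8, 0.4], [30, 0.2]]   # dogs
labels = ["cat", "cat", "cat", "dog", "dog", "dog"]

# Database/rule approach: a person states the distinction explicitly.
def rule_based(weight_kg, ear_pointiness):
    return "cat" if weight_kg < 7 and ear_pointiness > 0.6 else "dog"

# Supervised machine learning: the program infers its own rule
# from the labeled examples.
model = DecisionTreeClassifier().fit(examples, labels)

print(rule_based(4, 0.9))         # -> cat (rule written by a person)
print(model.predict([[4, 0.9]]))  # -> ['cat'] (rule learned from data)
```

The two approaches can reach the same answer; what differs, and what CAL asks students to notice, is where the decision rule comes from and who chose the examples it was learned from.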
AI systems make predictions, recommendations, and decisions that influence many aspects of students’ lives (Yeung, 2020). AI recommends music, videos, friends, and products to children (UNICEF, 2020). Major technology platforms such as TikTok and YouTube deploy these powerful AI systems to exert this influence without fully considering the possible consequences to those affected (O’Neil & Gunn, 2020). Moreover, because AI content changes based on user interaction, many scholars emphasize the algorithmic effects caused by the interaction between users and AI algorithms (Bucher, 2018; Gillespie, 2014; Seaver, 2017; Zarouali et al., 2021). Examining the interactions between AI-driven platforms and users adds another layer to the CAL framework, one that calls for students to explore their roles in negotiating and contributing to meaning-making (Kellner & Share, 2019).
Like critical media literacy and other critical pedagogies, critical algorithmic literacy does not entail learning a list of specific competencies (Share, 2019) but provides a structure for examining the relationships between information and power (Kellner & Share, 2007). While authors such as Cotter (2020) and Dasgupta (2020) draw from the work of Freire (1972) to provide a critical perspective on CAL, this article seeks to extend that critical viewpoint by framing critical algorithmic literacy within the conceptual understandings of Kellner and Share’s (2019) critical media literacy framework.
Currently, student work with digital algorithms occurs almost exclusively within computer science courses (Ciccone, 2021). While extremely valuable, computer science courses rarely include examining the sociocultural effects of these algorithms (Gebre, 2022; Ridley & Pawlick-Potts, 2021). In an effort to expand the conversations about computer science education, Kafai and her colleagues (2020) distinguish between cognitive, situated, and critical perspectives in computer science education. The cognitive framing emphasizes computational concepts and programming practices intended to be helpful in college and future careers. This cognitive framing represents the vast majority of computer science courses (Gebre, 2022; Kafai et al., 2020). However, technical knowledge alone seldom provides the information needed to predict the sociocultural impact of algorithmic outcomes (Kroll, 2018). On the other hand, the critical framing of computer science highlights the values and practices of computer science as it applies to social justice and critical pedagogy.
Kafai and her colleagues (2020) emphasize that the cognitive and critical perspectives are not mutually exclusive but inform each other to provide an epistemological framework for dialogue. Emphasizing the critical perspective, CAL goes beyond the objective analysis of algorithms, encouraging students to explore the contexts and consequences of data-driven media platforms (Ching, 2012). With increased student focus on the contextual, ideological, and commercial use of algorithms and AI, CAL empowers students to learn to question algorithms’ position as neutral and omniscient authorities (Gebre, 2022). Here again, critical media literacy provides a suitable framework for CAL, as CML’s focus on social justice challenges dominant media influences by analyzing power structures communicated through various media (Kellner & Share, 2007).
Personalized Reward Machinery
This critical perspective implicit in CAL becomes more vital for young students as their media environments expand and change exponentially. Since the 1950s, corporate interests have tapped into young people’s preferences and behaviors to exert influence (Steinberg & Kincheloe, 2004). Currently, digital surveillance and algorithm-driven preference bubbles combine to create a profit-driven environment where many children spend much of their time and attention (Zuboff, 2019). Yet very few efforts exist to incorporate critical algorithmic literacy into traditional notions of literacy and critical media literacy (Lee et al., 2021).
Economic factors drive digital platforms’ pervasiveness and persuasiveness. Media persuasion targeted at children is not a new phenomenon. Steinberg and Kincheloe (2004) argue that since the 1950s, for-profit corporations have created a “cultural pedagogy” that influences students’ experience to a greater extent than school, peers, parents, or even themselves (p. 17). Consider the influence, for example, of Disney princesses, professional sports, and social media “influencers” on the lives of many children. In addition, algorithm-driven technologies such as YouTube and TikTok now personalize, expand, and intensify children’s immersion in this consumer-driven environment, one that is “working twenty-four hours a day to colonize all dimensions of lived experience” (Steinberg & Kincheloe, 2004, p. 131). Students require critical analysis skills as commercial entities seek to commodify all aspects of children’s lives (Giroux, 2011).
The integration of CAL helps students counter the immense power wielded by major technology platforms such as Google, TikTok, and Facebook. To keep users captivated, technology companies employ many of the same psychologically exploitative techniques used by the designers of slot machines (Schüll, 2012). For example, slot machines’ success is measured by “time on device.” To increase time on device, many digital platforms combine persuasive psychology, variable rewards, and powerful reinforcers that connect rewards to the human need for approval. These elements combine to form a “personalized reward machinery” (Schüll, 2012, p. 71). Because of the pervasive influence of algorithms, a power imbalance emerges between those trained in computer science and those who are not (Ciccone, 2021). While it is not my contention that all students enroll in computer science courses, they would be well served to learn about algorithms in relevant contexts (Hautea et al., 2017; Dasgupta & Hill, 2021). Most students, after all, do not require programming skills to interact with algorithm-driven media (Long & Magerko, 2020; Resnick & Silverman, 2005). Viewed from Kafai et al.’s (2020) framings of computer science, students benefit from a meaningful blend of the cognitive and critical frames of computer science knowledge. While not ignoring the cognitive frame, CAL-integrated lessons should emphasize the critical frame to support a critical media literacy model that encompasses digital algorithms.
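A toy simulation can make the “variable rewards” mechanism concrete. The sketch below describes no actual platform’s code; the reward probability is an invented stand-in for the intermittent payoffs (new likes, new followers) that keep users checking.

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible
REWARD_PROBABILITY = 0.3  # hypothetical chance that a check "pays off"

def check_feed():
    """One pull of the lever: did this check yield a social reward?"""
    return random.random() < REWARD_PROBABILITY

print([check_feed() for _ in range(12)])
# e.g. [False, True, False, ...] -- rewards arrive unpredictably.
# Intermittent reinforcement of this kind, long studied in slot machine
# design, keeps "time on device" high even when most checks yield nothing.
```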
While there are legislative (Algorithmic Justice and Online Platform Transparency Act, 2021) and private (Harris & Raskin, 2022; Neff, 2022) efforts to minimize the effects of potentially harmful algorithms on children, relatively few of those efforts focus on public education as a way of addressing the dominating influences of these digital algorithms. The major digital platforms such as YouTube, Facebook, and Google dominate young people’s lives in part by using detailed knowledge of them gathered from various sources (Alter, 2017; Eyal, 2019; McNamee, 2019). This personal information provides data that inform the addictive properties of these platforms (Alter, 2017; Eyal, 2019; Lanier, 2018). In addition, the business models of companies such as Facebook, Netflix, and YouTube involve gaining and keeping attention by exploiting cognitive biases and other innate attributes (Alter, 2017). Yet, despite the role these platforms play in children’s lives, very few efforts exist to help students make sense of these persuasive technologies.
Challenges to Critical Algorithmic Awareness
The nature of algorithms and AI increases the challenge of examining them critically. Effective CAL involves differentiating algorithms and AI from text, video, and other media in order to situate algorithms within the panoply of existing media. For example, algorithms remain mostly invisible to users, so their opacity requires inferring their effects. In addition, major digital platforms such as Google incorporate user data to personalize experiences, enhancing the platforms’ persuasive powers and epistemic authority. Finally, the major digital platforms possess an asymmetry of power that most people fail to perceive.
There exists a tremendous yet covert power asymmetry between digital media platforms and their users. Former Google Design Ethicist Tristan Harris (2017) claims that a “handful of people working at a handful of technology companies … will steer what a billion people are thinking today.” Moreover, young people engage with these technologies at increasingly earlier ages (Rideout & Robb, 2020). This extreme asymmetry also exists in the relative privacy rights of companies and users. These technology companies possess troves of data about their users, yet the users know comparatively little about the companies (Zuboff, 2019). Rushkoff notes, “[YouTube’s] algorithms are watching us much more intently than we’re watching the videos” (Rushkoff, 2019, p. vii). Pasquale (2015) describes the societal implications of this asymmetry: “To scrutinize others while avoiding scrutiny oneself is one of the most important forms of power” (p. 3). CAL seeks to address the power imbalance between digital platforms and K-12 students. Here again, critical media literacy’s tenet of social constructivism supports examining creators’ purposes and the techniques used to achieve these often profit-driven motives (Kellner & Share, 2019). This CML perspective counters the major platforms’ claim to credibility without accountability.
The opacity of algorithms presents yet another challenge to examining their effects. Unlike other media, we do not observe the algorithms themselves but only their consequences. Many efforts exist to regulate these large platforms to increase algorithmic transparency. However, as Burrell (2016) and Pasquale (2009) point out, even if companies publicly shared their algorithms, it is unlikely that laypersons would understand these algorithms’ purposes, effects, and biases. In addition, even those who create algorithms lack a complete understanding of the algorithmic effects of their work (Rainie & Anderson, 2017). Finally, increased algorithmic transparency would not alter the asymmetry between these large technology companies and their users because the algorithms are dynamic, created by multiple programmers, and extremely complex (Burrell, 2016). Companies’ concealment of their algorithms to protect intellectual property amplifies the need for students to infer algorithmic effects.
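One practical response to this opacity is a simple black-box audit: because students cannot read the code, they vary inputs and observe outputs to infer what a system rewards. In the sketch below, the hidden scoring function is invented purely to give the audit something to probe; the probing logic, not the scoring function, is the technique.

```python
# Stand-in for a platform's hidden ranking code (invented for illustration).
def black_box_rank(post):
    return 2.0 * post["outrage"] + 0.5 * post["accuracy"]

# Audit: hold one input fixed, vary the other, and watch the output.
probes = [
    {"name": "calm & accurate",     "outrage": 0.1, "accuracy": 0.9},
    {"name": "outraged & accurate", "outrage": 0.9, "accuracy": 0.9},
    {"name": "outraged & false",    "outrage": 0.9, "accuracy": 0.1},
]
for post in probes:
    print(f'{post["name"]}: score={black_box_rank(post):.2f}')
# Scores climb with outrage while accuracy barely matters, so an auditor
# can infer what the system rewards without ever seeing its source code.
```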
In addition to algorithmic opacity, algorithmic epistemic authority complicates examining algorithms’ individual and societal effects, as the algorithms determine what can be known about them (Gillespie, 2014; Cotter, 2020). The technology platforms arbitrate which “truths” users can see. Moreover, because digital algorithms control much of the information students are exposed to, they function as arbiters of what is important and true (Gillespie & Boczkowski, 2014). These scholars go so far as to assert, “That we are now turning to algorithms to identify what we need to know is as momentous as having relied on credentialed experts, the scientific method, common sense, or the word of God” (Gillespie & Boczkowski, 2014, p. 2). Whether or not one considers algorithm-driven information as “momentous” as the word of God, our media-immersed environment requires specific skills involving the critical analysis and production of algorithm-driven media (Valtonen et al., 2019). Ito and her colleagues (2021) describe the social and moral imperative that students examine how algorithms are “shaped by and reflect historical inequities, problematic assumptions, and institutionalized power” (p. 3).
Perceptions of algorithmic objectivity disguise the human subjectivity involved in algorithms’ creation, application, and distribution (Gillespie, 2014; Noble, 2018). Gillespie (2014) refers to the perception of algorithmic objectivity as “a carefully crafted fiction” (p. 13). The anthropomorphizing and personalization of intelligent agents such as Siri require a “new sociotechnical understanding” of AI-driven tools (Choung et al., 2022, p. 3). Here, critical media literacy promotes this sociotechnical understanding through the questioning of nonneutral media, empowering students to examine the relationships between media, their purpose(s), and context (Kellner & Share, 2019). As Noble (2018) and others point out, digital algorithms hide potential biases and harms behind a façade of objectivity and mathematical precision. Building on critical media literacy, CAL empowers students to look behind this façade by questioning what seems normal or natural in the algorithmically-driven media environment.
In Algorithms We Trust
Despite the asymmetry, opacity, and subjectivity of digital algorithms, research suggests that most people, especially children, possess a high level of trust in AI and algorithms (Choung et al., 2022). Many students assume intelligent agents think like humans and perceive them as credible and friendly (Long & Magerko, 2020). In their study of 3–10-year-olds, Druga and her colleagues (2017) found that children tend to personify intelligent agents (such as Siri and Alexa) more than adults do. Children in this study perceived these AI-driven technologies as “friendly,” “trustworthy,” and “smarter” than themselves (p. 597). These technologies’ conversational abilities and other anthropomorphic features require a new level of critical awareness addressed within the CAL framework.
Students’ trust in algorithms, moreover, minimizes the healthy skepticism that promotes critical inquiry (Gillespie, 2014; Hobbs, 2020). An expanded view of literacy that includes the sociocultural implications of algorithmically-driven content should represent a priority in K-12 education. Low algorithmic awareness makes one more susceptible to data-driven manipulation, more likely to spread misinformation, and more accepting of stereotypes (Mohamed et al., 2020; Pariser, 2019).
Because the children in their study lacked knowledge of the individual and sociocultural impacts of their technology use, Wang et al. (2022) advocate for increased critical algorithmic literacy so students “engage in a critique of algorithmic systems reflexively” (p. 16). Long and Magerko (2020) suggest that educators can improve students’ algorithmic awareness by having students interact with and create algorithms in authentic contexts.
Barriers to CAL Implementation
Because CAL is an emerging field, many challenges to its implementation exist. First, teachers often lack the algorithmic knowledge needed to engage in CAL (Aleman et al., 2021). Further, school districts do not promote the study of algorithms beyond the objective explorations within computer science courses (Ciccone, 2021). In addition, teachers’ fears of attracting backlash from parents, school administrators, or community members discourage some from pursuing any content perceived as political (Ciccone, 2021). Critical algorithmic literacy implementation may require educators to engage in challenging conversations about the impact of algorithmic systems at the personal and community levels. Finally, the many competing variations of media literacy education make it challenging for teachers to pursue CAL as a new endeavor (Hobbs, 2022).
Some research describes barriers students perceive in learning about AI. These perceptions can affect who seeks opportunities to learn about computer science and AI. Some high school students avoid computer science because of its perceived demands, especially math skills (Long & Magerko, 2020). By working with algorithms in authentic contexts, students can see relevant connections to their lives and the lives of others. Moreover, developing a critical view of computer science does not require extensive technical training; even limited exposure yields positive effects (Resnick & Silverman, 2005). Recent case studies illustrate that relatively simple programming activities empower students to “uncover structures and assumptions in algorithmic systems” (Dasgupta & Hill, 2021, p. 20). It is here that CAL-embedded content provides students with content in context to maximize learning.
Media Production within the CAL Framework
CAL, like other forms of media literacy education, includes both the analysis of existing media and the thoughtful creation of media (Buckingham, 2007; Share, 2015; Hobbs, 2019). The cyclical process between production and analysis allows students to reflect on their creative work using their media analysis skills and then apply those skills to create new media products (De Abreu et al., 2017). This perspective enables students to experience “the clear connection between creative practice and criticality that exists in media education” (Connolly & Readman, 2017, p. 251). Student agency through authentic media production empowers “students to shape the world they live in and to help to turn it into the world they imagine” (Morrell, 2013, p. 302).
Media production within media literacy education also provides students with the constructivist perspectives needed to analyze media more effectively, such as considering purpose, audience, and embedded values (Buckingham, 2007; Dezuanni, 2015; Hobbs, 2019). Further, “If we ask the children to critique the world but then fail to encourage them to act, our classrooms can degenerate into factories of cynicism” (Bigelow et al., 1994, p. 5). Like other forms of media literacy education, CAL student production incorporates student agency through personally-relevant learning applied to real-world contexts (Lee et al., 2022).
CAL-infused student projects make use of Freire’s (1972) “problem-posing education” as students create products that “develop their power to perceive critically the way they exist in the world with which and in which they find themselves” (Freire, 1972, p. 252). As with other critical pedagogies, critical algorithmic literacy strives to empower all students, especially underrepresented groups and those economically disadvantaged. DiPaola and her colleagues (2020) point out that student work from a CAL perspective “seeks to ensure that the stakeholders who are at the greatest risk for harm are centered in the design process” (p. 3). This CAL-focused work empowers students to view the products they create as sociotechnical systems that affect their lives and the lives of others.
From a critical media literacy perspective, Funk, Share, and Kellner (2016) argue that media production helps learners examine dominant ideologies and representations of those ideologies. Moreover, Redmond (2021) maintains that media literacy education without student media production tends to perpetuate the perspectives of the dominant cultures. To counter these dominant narratives, a typical CML activity engages learners in analyzing hegemonic representations of race, class, and gender and then creating “counter-narratives” that challenge these portrayals from the learners’ perspectives (Share et al., 2019). Hammer (2011) astutely summarizes the benefits of media production with the CML framework:
Teaching critical media literacy through production constitutes a new form of pedagogy in which students become more aware of how media is constructed, conveys dominant ideologies, and is one of the most powerful, often unconscious, sources of education. These critical skills not only make students aware of how their views of the world are mediated by media but also enable them to learn how to critically read, engage, and decode media culture (p. 361).
Critical algorithmic literacy helps students see digital technologies’ potential to help them make a difference in their own lives and the lives of others (Lee & Garcia, 2014). Despite this potential, many challenges exist in applying critical media literacy tenets to CAL.
Previous Implementations of Critical Algorithmic Literacy
While not always adopting the term critical algorithmic literacy, some researchers have studied classroom implementations of what I have described as CAL. In one notable example, Hautea, Dasgupta, and Hill (2017) worked with students in grades 6 through 12 who interacted with simple algorithmic systems, an experience that led to meaningful questions and discussions regarding algorithms in relevant sociocultural contexts. Throughout this work, students interacted with and created computer programs to learn about algorithms’ limitations, assumptions, and biases. This experiential exploration included students questioning and discussing data collection, the use of that data within digital algorithms, and the commensurate effects.
The researchers found that these students’ ability to mine and analyze others’ data heightened their awareness of data privacy, data bias, and the real-world effects of these technologies (Hautea et al., 2017). For example, students described how they could create computer programs to shape others’ behaviors. One student, for instance, expressed her concern about computer program creators using users’ personal data to personalize content to increase engagement. These student statements reflect their awareness of the relationships between the author’s purpose, the algorithm itself, and its effects on others.
The student projects in this case study also helped students develop insight into the subjectivity of algorithms. Some students created programs that ranked the quality of others’ projects. Students created algorithms incorporating mathematical formulas based on their value judgments of the projects. Students later reflected that they based these algorithmic conclusions on their own perspectives as well as the data to which they had access. In this way, students internalized O’Neil’s (2017) assertion that algorithms are “opinions embedded in mathematics” (Kindle location 405). Student insights from Hautea et al.’s (2017) case study embody the CAL framework.
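A minimal sketch suggests what such a student-built ranking algorithm might look like; the project statistics and the weights below are invented. Choosing the weights is exactly where the “opinion” enters the mathematics: valuing remixes over raw views encodes a judgment about what makes a project good.

```python
# Hypothetical project statistics (invented for illustration).
projects = {
    "maze game":  {"views": 120, "remixes": 4, "comments": 9},
    "animation":  {"views": 300, "remixes": 1, "comments": 2},
    "chat story": {"views": 80,  "remixes": 7, "comments": 15},
}

# These weights are value judgments, not facts: a different student with
# different values would write different numbers and obtain a different
# "objective" ranking.
WEIGHTS = {"views": 0.1, "remixes": 5.0, "comments": 1.0}

def quality_score(stats):
    return sum(WEIGHTS[key] * value for key, value in stats.items())

ranking = sorted(projects, key=lambda name: quality_score(projects[name]),
                 reverse=True)
for name in ranking:
    print(f"{name}: {quality_score(projects[name]):.1f}")
# -> chat story: 58.0, maze game: 41.0, animation: 37.0
```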
In a case study that exemplifies the purposeful integration of Kafai et al.’s (2020) critical, situated, and cognitive elements, Lee et al. (2022) studied 16 youth between the ages of 15 and 19 in what the authors refer to as Critical Computational Expression (CCE). The authors describe CCE as a “theoretical and conceptual framework we have developed that integrates the three distinct traditions of critical pedagogy, computational thinking, and creative expression” (Lee et al., 2022, p. 7). The youth in this study created a computer program called “Can You Teach AI to Dance?” (p. 18). After expressing their displeasure with the “danceability” algorithm on Spotify, participants designed a program that engaged users by having them provide a “danceability score” for specific songs and then compare those scores with those produced by Spotify’s algorithm. This project differs from the prior studies because it emphasizes participant creation of a working product for authentic audiences.
One young participant summarized an essential tenet of CAL, referring to algorithms: “Because sometimes I see something and I’m like ‘that’s just there, and that just is,’ but then you can start to question it and understand what they are doing and how they’re affecting us” (p. 16). Overall, Lee and his colleagues concluded that “by creating a learning ecology that centered the cultures and experiences of its learners while leveraging familiar tools for critical analysis, youth deepened their understanding of AI” (Lee et al., 2022, p. 1). Like the critical perspective in CAL, Lee and his colleagues’ (2022) CCE focuses on analyzing and creating media that engage real audiences to counter dominant views.
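The core comparison in “Can You Teach AI to Dance?” can be sketched in a few lines. The scores below are invented placeholders (the participants used Spotify’s actual danceability values), but the sketch shows the side-by-side human-versus-algorithm comparison that prompted the participants’ questioning.

```python
# Hypothetical 0-1 danceability ratings for three songs.
human_scores     = {"song_a": 0.9, "song_b": 0.3, "song_c": 0.6}
algorithm_scores = {"song_a": 0.5, "song_b": 0.4, "song_c": 0.9}

for song in human_scores:
    gap = human_scores[song] - algorithm_scores[song]
    print(f"{song}: you={human_scores[song]:.1f} "
          f"algorithm={algorithm_scores[song]:.1f} gap={gap:+.1f}")

disagreement = sum(abs(human_scores[s] - algorithm_scores[s])
                   for s in human_scores) / len(human_scores)
print(f"average disagreement: {disagreement:.2f}")
# A large disagreement invites the critical question the youth posed:
# whose definition of "danceability" does the algorithm encode?
```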
These two case studies demonstrate pioneering efforts in critical algorithmic literacy, and their implementations represent a meaningful blend of Kafai et al.’s (2020) cognitive and critical framings of computer science. Both studies entailed students examining algorithms from a critical perspective and creating products that reflect those analyses. These cases illustrate that even relatively modest interactions with algorithms empower young people to think critically about the sociocultural effects of algorithms (Long & Magerko, 2020; Resnick & Silverman, 2005).
Through this enhanced perspective, students experience the nonneutral nature of media in general and algorithms in particular. The student production components helped students internalize that data requires interpretation and that people and entities shape the process of algorithmic creation and dissemination with their own purposes and biases (Dasgupta & Hill, 2021, p. 19). It is only through the critical examination of and interaction with these algorithmic effects that students can examine the complex relationships between their lives and the multitude of media that surround them.
Conclusion
As media technologies change, models of media literacy change along with them. Most current media literacy efforts, for example, focus predominantly on mass media, in which content is transmitted to and received by users (Cho et al., 2022). Mass media mostly lacks the data collection and personalization capacities of the dominant digital platforms. Because of this separation between producer and consumer, current media literacy models may foster less personalized analysis than is needed for media that change based on user inputs. YouTube’s algorithms, for example, manifest different user-producer relationships than Facebook’s. Jandrić (2019) points out that newer technologies do not negate earlier forms of critical media literacy; “instead, it updates them for the digitally saturated world” (p. 34). The critical algorithmic literacy model may update the current conception of media literacy education to develop skills more aligned with the world in which students live.
We live in a time when digital algorithms are woven throughout our society, often making vital decisions in areas including, but not limited to, education, commerce, politics, justice, and employment. Freire (1972) wrote, “No reality transforms itself” (p. 28). As educators, we should work to support students’ critical consciousness of the world in which they live. Because algorithms are not neutral entities, they may perpetuate injustices and structural inequalities in powerful and covert ways. By helping students create connections between their computational thinking, personal experiences, and creative expression, the CAL lessons and case studies described above seek to support student understandings of algorithms in authentic contexts. This article aims to contribute to a body of knowledge that will empower teachers to support their students as they transform their realities within the technologically-driven world in which they live.
Appendix A: Kellner & Share’s (2019) Critical Media Literacy Framework:
- Social Constructivism: All information is co-constructed by individuals and/or groups of people who make choices within social contexts.
- Languages / Semiotics: Each medium has its own language with specific grammar and semantics.
- Audience / Positionality: Individuals and groups understand media messages similarly and/or differently depending on multiple contextual factors.
- Politics of Representation: Media messages and the medium through which they travel always have a bias and support and/or challenge dominant hierarchies of power, privilege, and pleasure.
- Production / Institutions: All media texts have a purpose (often commercial or governmental) that is shaped by the creators and/or systems within which they operate.
- Social & Environmental Justice: Media culture is a terrain of struggle that perpetuates or challenges positive and/or negative ideas about people, groups, and issues; it is never neutral.
References
Aguilera, E., & Pandya, J. Z. (2021). Critical literacies in a digital age: Current and future issues. Pedagogies: An International Journal, 16(2), 103–110. https://doi.org/10.1080/1554480X.2021.1914059
Aleman, E., Nadolny, L., Ferreira, A., Gabetti, B., Ortíz, G., & Zanoniani, M. (2021). Screening Bot: A Playground for Critical Algorithmic Literacy Engagement with Youth. Extended Abstracts of the 2021 Annual Symposium on Computer-Human Interaction in Play, 198–202. https://doi.org/10.1145/3450337.3483478
Ali, S., Payne, B. H., Williams, R., Park, H. W., & Breazeal, C. (2019, June). Constructionism, ethics, and creativity: Developing primary and middle school artificial intelligence education. In International workshop on education in artificial intelligence K-12 (EDUAI’19) (pp. 1-4).
Alter, A. (2017). Irresistible: the rise of addictive technology and the business of keeping us hooked. Penguin Press.
Anderson, M., & Jiang, J. (2018). Teens, social media & technology. Pew Research Center. http://publicservicesalliance.org/wp-content/uploads/2018/06/Teens-Social-Media-Technology-2018-PEW.pdf
Beer, D. (2009). Power through the algorithm? Participatory web cultures and the technological unconscious. New Media & Society, 11(6), 985–1002. https://doi.org/10.1177/1461444809336551
Beer, D. (2017). The social power of algorithms. Information, Communication & Society, 20(1), 1–13. https://doi.org/10.1080/1369118X.2016.1216147
Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim code. Polity.
Bigelow, B., Christensen, L., Karp, S., Miner, B., & Peterson, B. (Eds.). (1994). Rethinking our classrooms: Teaching for equity and justice. Milwaukee, WI: Rethinking Schools.
Bucher, T. (2018). If… then: Algorithmic power and politics. Oxford University Press.
Buckingham, D. (2007). Digital media literacies: Rethinking media education in the age of the Internet. Research in Comparative and International Education, 2(1), 43-55
Buolamwini, J., & Gebru, T. (2018, January). Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on fairness, accountability and transparency (pp. 77-91). PMLR.
Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 2053951715622512. https://doi.org/10.1177/2053951715622512
Canadian Commission for UNESCO. (2020). Algorithm-Literacy-Education-Guide.pdf. https://www.algorithmliteracy.org/data/resources/en/Algorithm-Literacy-Education-Guide.pdf
Ching, C.C. (2012). Introduction: Part I: Developmental perspectives. In R. Foley & C Ching (Eds.) Constructing the Self in a Digital World (p. 17-25). Cambridge University Press.
Cho, H., Cannon, J., Lopez, R., & Li, W. (2022). Social media literacy: A conceptual framework. New Media & Society, 14614448211068530. https://doi.org/10.1177/14614448211068530
Choung, H., David, P., & Ross, A. (2022). Trust and ethics in AI. AI & Society, 1–13.
Ciccone, M. (2021) Algorithmic Literacies: K-12 Realities and Possibilities. Works in Progress, 8.
Connolly, S., & Readman, M. (2017). Towards ‘creative media literacy.’ In B. De Abreu, P. Mihailidis, A. Y. L. Lee, J. Melki, & J. McDougall (Eds.), International Handbook of Media Literacy Education. Routledge.
Cotter, K. M. (2020). Critical algorithmic literacy: Power, epistemology, and platforms [Doctoral dissertation]. Michigan State University.
Cotter, K., & Reisdorf, B. (2020). Algorithmic Knowledge Gaps: A New Horizon of (Digital) Inequality. International Journal Of Communication, 14, 21. https://ijoc.org/index.php/ijoc/article/view/12450
Dasgupta, S., & Hill, B. M. (2021). Designing for Critical Algorithmic Literacies. In Algorithmic Rights and Protections for Children. https://doi.org/10.1162/ba67f642.646d0673
DiPaola, D., Payne, B. H., & Breazeal, C. (2020). Decoding design agendas: An ethical design activity for middle school students. Proceedings of the Interaction Design and Children Conference, 1–10. https://doi.org/10.1145/3392063.3394396
De Abreu, B., Mihailidis, P., Lee, A. Y. L., Melki, J., & McDougall, J. (2017). Arc of research and central issues in media literacy education. In International Handbook of Media Literacy Education. Routledge.
Dezuanni, M. (2015). The building blocks of digital media literacy: Sociomaterial participation and the production of media knowledge, Journal of Curriculum Studies, 47:3, 416-439, DOI: 10.1080/00220272.2014.966152
Dogruel, L., Masur, P., & Joeckel, S. (2021). Development and Validation of an Algorithm Literacy Scale for Internet Users. Communication Methods and Measures, 1–19. https://doi.org/10.1080/19312458.2021.1968361
Druga, S., Williams, R., Breazeal, C., & Resnick, M. (2017). “Hey Google is it OK if I eat you?”: Initial Explorations in Child-Agent Interaction. Proceedings of the 2017 Conference on Interaction Design and Children, 595–600. https://doi.org/10.1145/3078072.3084330
Eyal, N. (2019). Indistractable: how to control your attention and choose your life. BenBella Books.
Freire, P. (1972). Pedagogy of the oppressed. Herder and Herder.
Freire, P., & Macedo, D. (1987). Literacy: Reading the word and the world. Bergin & Garvey.
Funk, S. S., Kellner, D., & Share, J. (Eds.). (2019). Critical Media Literacy as Transformative Pedagogy. IGI Global. https://doi.org/10.4018/978-1-5225-8359-2
Gebre, E. (2022). Conceptions and perspectives of data literacy in secondary education. British Journal of Educational Technology. https://doi.org/10.1111/bjet.13246
Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. J. Boczkowski, & K. A.Foot (Eds.), Media technologies (pp. 167-194). Cambridge, MA: MIT Press.
Gillespie, T., Boczkowski, P. J., & Foot, K. A. (Eds.). (2014). Media technologies: Essays on communication, materiality, and society. MIT Press.
Giroux, H. A. (1987). Critical Literacy and Student Experience: Donald Graves’ Approach to Literacy. Language Arts, 64(2), 175–181. http://www.jstor.org/stable/41961590
Giroux, H. (1999). Rewriting the discourse of racial identity: Towards a pedagogy and politics of whiteness. Harvard educational review 67 (2), 285-321
Gran, A.-B., Booth, P., & Bucher, T. (2021). To be or not to be algorithm aware: A question of a new digital divide? Information, Communication & Society, 24(12), 1779–1796. https://doi.org/10.1080/1369118X.2020.1736124
Hammer, R. (2011). Critical Media Literacy as Engaged Pedagogy. E-Learning and Digital Media, 8(4), 357–363. https://doi.org/10.2304/elea.2011.8.4.357
Hargittai, E., Gruber, J., Djukaric, T., Fuchs, J., & Brobach, L. (2020). Black box measures? How to study people’s algorithm skills. Information, Communication & Society, 23(5), 764–775. https://doi.org/ 10.1080/1369118X.2020.1713846
Harris, T. (2017, April). How a handful of tech companies control billions of minds every day. [Video]. TED Conferences. https://www.ted.com/talks/tristan_harris_how_a_handful_of_tech_companies_control_billions_of_minds_every_day
Harris, T. and Raskin, A. (2022) Center for Humane Technology. https://www.humanetech.com/
Hautea, S., Dasgupta, S., & Hill, B. M. (2017). Youth Perspectives on Critical Data Literacies. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 919–930. https://doi.org/10.1145/3025453.3025823
Herdzina, J., & Lauricella, A. R. (n.d.). Media Literacy in Early Childhood Report. 54.
Hobbs, R. (2004). A Review of School-Based Initiatives in Media Literacy Education. American Behavioral Scientist, 48(1), 42–59. https://doi.org/10.1177/0002764204267250
Hobbs, R. (2019). Media literacy foundations. The international encyclopedia of media literacy, 1-19.
Hobbs, R. (2020). Propaganda in an age of algorithmic personalization: Expanding literacy research and practice. Reading Research Quarterly, 55(3), 521-533.
Hobbs, R. (2022). Assessing Media Literacy Measures, Presentation to the 2022 International Communication Association. April 11, 2022. [YouTube Video]. https://www.youtube.com/watch?app=desktop&v=GDaQgSDMX-s.
Ito, M., Cross, R., Dinakar, K., & Odgers, C. (2021). Algorithmic Rights and Protections for Children. https://wip.mitpress.mit.edu/pub/intro-algorithmic-rights-and-protections/release/1
Jandrić, P. (2019). The Postdigital Challenge of Critical Media Literacy. The International Journal of Critical Media Literacy, 1(1), 26–37. https://doi.org/10.1163/25900110-00101002
Kafai, Y., Lui, D., & Proctor, C. (2020). From Theory Bias to Theory Dialogue: Embracing Cognitive, Situated, and Critical Framings of Computational Thinking in K-12 CS Education. ACM Inroads, 11(1), 44–53.
Kantayya, S. (Director), & Buolamwini, J. (Writer). (2021). Coded Bias [Film].
Kellner, D. & Share, J. (2007). Critical media literacy, democracy, and the reconstruction of education. In D. Macedo & S. R. Steinberg (Eds.) Media literacy: A reader. (pp. 3-23).
Kellner, D., & Share, J. (2019). The critical media literacy guide: Engaging media and transforming education. Brill.
Kist, W. (2005). New literacies in action: Teaching and learning in multiple media (Vol. 75). Teachers College Press.
Kitchin, R. (2017). Thinking critically about and researching algorithms. Information, Communication & Society, 20(1), 14–29. https://doi.org/10.1080/1369118X.2016.1154087
Kroll, J. A. (2018). The fallacy of inscrutability. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 1-14. https://doi.org/10.1098/rsta.2018.0084.
Lane, D. (2021). Machine learning for kids: A project-based introduction to artificial intelligence. No Starch Press.
Lanier, J. (2018). Ten arguments for deleting your social media accounts right now. Random House.
Lauricella, A. R., Herdzina, J., & Robb, M. (2020). Early childhood educators’ teaching of digital citizenship competencies. Computers & Education, 158, 103989.
Lee, C. H., Gobir, N., Gurn, A., & Soep, E. (2022). In the Black Mirror: Youth Investigations Into Artificial Intelligence. ACM Transactions on Computing Education, 3484495. https://doi.org/10.1145/3484495
Lee, I., Ali, S., Zhang, H., DiPaola, D., & Breazeal, C. (2021). Developing middle school students’ AI literacy. SIGCSE ’21 Conference Proceedings, Toronto, Canada, March 17–20, 2021.
Long, D. and Magerko. B. (2020). What is AI Literacy? Competencies and Design Considerations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–16. https://doi.org/10.1145/3313831.3376727
Luke, A., & Freebody, P. (1997). Shaping the social practices of reading. In S. Muspratt, A. Luke, & P. Freebody (Eds.), Constructing critical literacies: Teaching and learning textual practice (pp. 185–225). Sydney, Australia: Allen & Unwin; Cresskill, NJ: Hampton Press.
McNamee, R. (2019). Zucked: Waking up to the Facebook catastrophe. Penguin Press
Mohamed, S., Png, M.-T., & Isaac, W. (2020). Decolonial AI: Decolonial theory as sociotechnical foresight in artificial intelligence. Philosophy & Technology, 33, 659–684. https://doi.org/10.1007/s13347-020-00405-8
Morrell, E. (2013). 21st-century literacies, critical media pedagogies, and language arts. The Reading Teacher. 66 (4).
National Council of Teachers of English (2020). Literacy is more than just reading and writing. Literacy & NCTE [Blog]. https://ncte.org/blog/2020/03/literacy-just-reading-writing/.
New London Group. (1996). A pedagogy of multiliteracies: Designing social futures. Harvard Educational Review, 66(1), 60–92.
Neff, G., Aragon, C., Guha, S., Kogan, M., Muller, M., & Fiesler, C. (2022). All data is human:
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.
O’Neil, C. (2017). Weapons of math destruction. Penguin Books.
O’Neil, C., & Gunn, H. (2020). Near-term artificial intelligence and the ethical matrix. In S. M. Liao (Ed.), Ethics of Artificial Intelligence. Oxford University Press.
Pariser, E. (2011). The filter bubble: What the Internet is hiding from you. Viking/Penguin Press.
Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Cambridge, MA: Harvard University Press.
Payne, B. H., & Breazeal, C. (2019). An Ethics of Artificial Intelligence Curriculum for Middle School Students. [Curriculum document]. MIT Media Lab. https://docs.google.com/document/d/1e9wx9oBg7CR0s5O7YnYHVmX7H7pnITfoDxNdrSGkp60/
Ranieri, M., & Fabbro, F. (2016). Questioning discrimination through critical media literacy: Findings from seven European countries. European Educational Research Journal, 15(4), 462–479. https://doi.org/10.1177/1474904116629685
Rainie, L., & Anderson, J. (2017). Numbers, facts and trends shaping the world. Pew Research Center.
Redmond, T. (2012). The pedagogy of critical enjoyment: Teaching and reaching the hearts and minds of adolescent learners through media literacy education. Journal of Media Literacy Education, 15.
Redmond, T. (2019). Unboxed: Expression as Inquiry in Media Literacy Education. Journal of Literacy and Technology. Special Edition. Volume 20, Number 1: Winter
Redmond, T. (2021). The Art of Audiencing: Visual Journaling as a Media Education Practice. Journal of Media Literacy Education Pre-Prints. Retrieved from https://digitalcommons.uri.edu/jmle-preprints/17
Register, Y., & Ko, A. J. (2020). Learning Machine Learning with Personal Data Helps Stakeholders Ground Advocacy Arguments in Model Mechanics. Proceedings of the 2020 ACM Conference on International Computing Education Research, 67–78. https://doi.org/10.1145/3372782.3406252
Resnick, M., & Silverman, B. (2005). Some reflections on designing construction kits for kids. Proceeding of the 2005 Conference on Interaction Design and Children – IDC ’05, 117–122. https://doi.org/10.1145/1109540.1109556
Rideout, V. J. & Robb, M. B. (2020). The Common Sense census: Media use by kids age zero to eight, 2020. San Francisco, CA: Common Sense. https://www.commonsensemedia.org/research/the-common-sense-census-media-useby-kids-age-zero-to-eight-2020.
Rideout, V. J., Peebles, A., Mann, S., & Robb, M. B. (2022). Common Sense census: Media use by tweens and teens, 2021. San Francisco, CA: Common Sense. https://www.commonsensemedia.org/research/the-common-sense-census-media-use-by-tweens-and teens-2021
Ridley, M., & Pawlick-Potts, D. (2021). Algorithmic Literacy and the Role for Libraries. Information Technology and Libraries, 40(2). https://doi.org/10.6017/ital.v40i2.12963
Rushkoff, D. (2019). Foreword. In R. Hobbs, Mind Over Media: Propaganda Education for a Digital Age. W. W. Norton & Company.
S.1896 — 117th Congress (2021-2022). Algorithmic Justice and Online Platform Transparency Act.117th Congress (2021-2022). (2021, May 27). https://www.congress.gov/bill/117th-congress/senate-bill/1896/
Schüll, N.D. (2012). Addiction by Design: Machine Gambling in Las Vegas. Princeton University Press.
Seaver, N. (2017). Algorithms as culture: Some tactics for the ethnography of algorithmic systems. Big Data & Society, 4(2), 2053951717738104. https://doi.org/10.1177/2053951717738104
Shane, J. (2019). You look like a thing, and I love you. Hachette UK.
Share, J. (2015). Media Literacy is Elementary: Teaching Youth to Critically Read and Create Media- Second Edition. Peter Lang Publishing.
Share, J., Mamikonyan, T., & Lopez, E. (2019). Critical Media Literacy in Teacher Education, Theory, and Practice. In J. Share, T. Mamikonyan, & E. Lopez, Oxford Research Encyclopedia of Education. Oxford University Press. https://doi.org/10.1093/acrefore/9780190264093.013.1404
Steinberg, S. R., & Kincheloe, J. L. (2004). Kinderculture: The corporate construction of childhood. Boulder, Colo: Westview Press.
Thompson, N. (2018). When tech knows you better than you know yourself: Historian Yuval Noah Harari and ethicist Tristan Harris discuss the future of artificial intelligence with WIRED editor-in-chief Nicholas Thompson. Wired [Online magazine]. (October 4, 2018). https://www.wired.com/story/artificial-intelligence-yuval-noah-harari-tristan-harris/
Trammell, A., & Cullen, A. L. (2021). A cultural approach to algorithmic bias in games. New Media & Society, 23(1), 159–174. https://doi.org/10.1177/1461444819900508
United Nations Children’s Fund (UNICEF) (2020). Policy Guidance on AI for children. https://www.unicef.org/globalinsight/media/1171/file/UNICEF-Global-Insight-policy-guidance-AI-children-draft-1.0-2020.pdf
Valtonen, T., Tedre, M., Mäkitalo, K., & Vartiainen, H. (2019). Media literacy education in the age of machine learning. Journal of Media Literacy Education, 11(2). https://doi.org/10.23860/JMLE-2019-11-2-2
Wang, G., Zhao, J., Van Kleek, M., & Shadbolt, N. (2022). ‘Don’t make assumptions about me!’: Understanding children’s perception of datafication online. Conference on Computer Supported Cooperative Work and Social Computing, 1–20. https://www.tiffanygewang.com/publication/paper-placeholder-8/paper-placeholder-8.pdf
Willson, M. (2017). Algorithms (and the) everyday. Information, Communication & Society, 20(1), 137–150. https://doi.org/10.1080/1369118X.2016.1200645
Yeung, K. (2020). Recommendation of the Council on Artificial Intelligence (OECD). International Legal Materials, 59(1), 27–34.
Zarouali, B., Boerman, S. C., & de Vreese, C. H. (2021). Is this recommended by an algorithm? The development and validation of the algorithmic media content awareness scale (AMCA-scale). Telematics and Informatics, 62, 101607. https://doi.org/10.1016/j.tele.2021.101607
Zuboff, S. (2019) The age of surveillance capitalism. Public Affairs.