Abstract
Post-secondary students, while regular internet users, are not savvy in technological literacy practices, especially algorithmic awareness. Despite research that points to a clear gap in student understanding of the workings of artificial intelligence and machine learning, little attention to developing these academic and life skills has emerged outside the field of library and information science. General education must shift to include learning outcomes that prepare students not only to make sense of their own lived experiences with technology, but also to recognize the many harms intrinsic to algorithmic systems. This article contends that information literacy education must be adopted across the curriculum because improved information literacy will strengthen students’ engagement with technology academically and socially; moreover, in order to retain this knowledge, student learning must be scaffolded and iterative. This article offers the first-year-writing (FYW) classroom as one potential site of information literacy integration and focuses on the ways that algorithmic awareness can be supported when students engage in acts of metacognition, self-reflection, and meaning-making.
Keywords
Information Literacy, Algorithmic Awareness, First-year-writing, Metacognition, Rhetoric
To practice literacy in the 21st century is to be able to understand the interconnected nature between text, technology, social structures, economic, and political influences, and the role of digital communication in our online (and offline) lives. Digital literacy requires people to be able to consume and create, but also requires people to be critical of what they are consuming…the next phase in technological literacy is to incorporate the role of algorithms and algorithmically-run platforms.
– Koenig, 2020, p. 2
Awareness of algorithmic decision-making is fundamental to contemporary information literacies, which is understood as critical engagements with information. Yet…there is a need to go beyond awareness in order to connect individual responsibilities, collective responsibilities and corporate interests and to facilitate an understanding of information as co-constituted with the socio-material conditions that enable it.
– Haider & Sundin, 2021, p. 140
Computer algorithms impact nearly every aspect of our lives, from our direct interaction with search engines, online advertising, and product or media recommendations, to behind-the-scenes work managing airport runways, assisting with medical diagnoses, and anticipating security concerns. The ubiquity of algorithms makes it easy for them to be perceived as neutral actors, rather than as the product of human hands, human preferences, and human biases. Nonetheless, a number of incidents call attention to the “human” ideologies embedded in algorithms: a 2012 article in Bitch magazine discussed the sexualization of women, especially women of color, in search engine results; in January 2019, YouTube finally responded to years of criticism about its promotion of conspiracy theories and misinformation by announcing that it would adjust its recommendation algorithm (Dwoskin, 2019); in the same month, Democratic Rep. Alexandria Ocasio-Cortez made headlines arguing that algorithms reflect the biases of their makers, and that “if you don’t fix the bias, then you are just automating the bias” (Resnick, 2019). Moments like these destabilize the belief in algorithmic neutrality by contesting algorithms’ objectivity.
In the college writing classroom, moments like these can and should inform the ways students take up and use technological infrastructure, like algorithms, during the researching and writing process. Yet, in spite of countless examples where the “automated biases” of algorithms become visible, the approach to teaching students to navigate these algorithms has remained focused on evaluating the ethos of the sources they find, rather than on the ethos of the algorithms themselves. As such, students are asked to think critically about the product of a search, but rarely about the process.
It is further troubling that, rather than encourage students to critically assess the biased nature of search engine results, we tend to accept these biases as inherent to the systems–positioning them as normative and expected. This approach is one that accepts that the internet is, as our title indicates, a gorgeous ball of filth. This description, while amusing and perhaps in some ways accurate, also draws attention to one of the problems we seek to emphasize. To call the internet a gorgeous ball of filth is to acknowledge its capacity for beauty and ugliness, art and exploitation, connection and harassment. It is a definition that doesn’t ask, or seek to ask, why the internet should be this way. Similarly, in our classrooms, we guide students in avoiding the less savory parts of the internet, particularly misinformation, without prompting them to think critically about why it’s there, and why it is so prominent.
Search algorithms play a crucial role in determining what is and isn’t prominent; like other rhetorical actors, they are audiences and produce texts for audiences, they make attempts to persuade, and they are rooted in the values and beliefs of those who author and interact with them. An interrogation of algorithms as rhetorical actors is thus particularly important in classrooms where students not only rely upon, but are encouraged to use, tools like Google and YouTube to carry out academic research. Building on Estee Beck’s (2015) contention that “if educators ask students to dig into digital spaces that use tracking technologies, then they also have some responsibility to teach students about invisible digital identities, how to become more informed about digital tracking, and how to possibly opt-out of behavioral marketing” (p. 126), this article contends that if instructors expect students to engage with algorithms as part of their research process, then they also have a responsibility to provide students with the tools to do so ethically. In failing to ask students to consider the ethos of algorithms, instructors (often unintentionally) are also failing to teach students important academic (and social) information literacy skills, particularly in regard to algorithmic awareness and rhetorical agency.
Ultimately, what this article suggests is that holistic attention to information literacy needs to be established across the curriculum, particularly in required, general education courses. Here, we advocate for addressing these concerns within the context of our home discipline, rhetoric and composition, and within first-year-writing (FYW), an introductory core curriculum writing course in U.S. colleges and universities. FYW courses work to provide students with tangible “rhetorical orientations to the world” (Crowley, 1998, p. 78) seated in citizenship and democratic thinking. Consequently, FYW is one educational site where tenets of information literacy are strongly connected to “ideals of participation and democracy” (Haider & Sundin, 2021, p. 131). Although we identify FYW as a course for teaching information literacy, including algorithmic awareness, it is important to emphasize that this instruction must extend across the curriculum; improved information literacy cannot be achieved in a single course–it must be scaffolded and incremental.
Algorithms & Harm
Although “opaque algorithmic forces” are present in many aspects of our lives–determining whether and on what terms we receive car or health insurance, job interviews, bank loans, etc. (Gardner, 2019)–we, and our students, are most familiar with them in the context of search and recommendation programs. Algorithms like those used by Google and Netflix are designed to take in our queries and observe our actions, using this data to answer our questions, anticipate our needs, and recommend items or media we might enjoy. Because search engines are the “public face” of algorithms, we tend to think of them as tools, and evaluate them primarily based on their usefulness. A good algorithm returns the information we wanted or expected, while a bad one does not. This perception, however, sidesteps the rhetorical dimension of algorithms. In spite of their neutral appearance, algorithms “take their forms from the builders and makers as well as the social systems out of which those people produce their algorithms…algorithms are machinations of human beings’ intentions and the equations designed to achieve those intentions” (Gallagher, 2020, p. 2). As a result of this human connection, even the best-intentioned “builders and makers” produce algorithms that reflect values and beliefs, including racial, economic, and sexual biases. With respect to search engines, such biases often take the form of search results that promote misinformation, exploit women and marginalized groups, or perpetuate stereotypes, as a result of foregrounding the most profitable content.
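To make this human connection concrete, consider the following deliberately simplified sketch of a ranking function. It is a hypothetical classroom illustration, not a description of how Google, Netflix, or any real system works; every field name and weight is our invention. What it demonstrates is that “relevance” is only one ingredient in a composite score, and that the other ingredients–and their weights–are human decisions that can quietly foreground the most profitable content.

```python
# A toy ranking function: all names and weights are hypothetical,
# invented to show where designer values enter the calculation.

def score(result, query_terms):
    """Combine textual relevance with business signals into one rank score."""
    # Textual relevance: how many query terms appear in the result's text.
    relevance = sum(term in result["text"].lower() for term in query_terms)

    # Designer-chosen weights: these numbers are human decisions, not facts.
    # Raising AD_WEIGHT quietly foregrounds the most profitable content.
    RELEVANCE_WEIGHT = 1.0
    ENGAGEMENT_WEIGHT = 0.5   # clicks and watch time, however earned
    AD_WEIGHT = 2.0           # revenue potential outweighs relevance here

    return (RELEVANCE_WEIGHT * relevance
            + ENGAGEMENT_WEIGHT * result["engagement"]
            + AD_WEIGHT * result["ad_revenue"])

results = [
    {"text": "Peer-reviewed study on sleep", "engagement": 2.0, "ad_revenue": 0.1},
    {"text": "Miracle sleep supplement (sponsored)", "engagement": 6.0, "ad_revenue": 3.5},
]
query = ["sleep", "study"]

# Sorting by this score is where the bias lives: change one weight,
# and a different picture of the web rises to the top.
for r in sorted(results, key=lambda r: score(r, query), reverse=True):
    print(round(score(r, query), 1), "-", r["text"])
```

Under these invented weights, the sponsored result outranks the peer-reviewed one despite matching fewer query terms; an exercise asking students to adjust the weights and re-run the sort makes visible how “best” and “most relevant” are arguments rather than facts.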
That algorithms encode and enact biases is hardly surprising. As Abigail Bakke (2020) explains, “if we think of bias broadly as showing a preference for one thing over another, then bias is built into the design of search algorithms–they pick the ‘best’ or ‘most relevant’ result based on inscrutable criteria” (p. 4). Considered in this way, bias is crucial; an unbiased search algorithm wouldn’t return results that are especially useful. While concepts like “best” and “most relevant” aren’t objective, they are regularly interpreted by designers and users who, in effect, use algorithms to make arguments for what these concepts mean. Jeff Rice’s (2020) work highlights the way that ideologies are themselves algorithmic, arguing that reactions to images such as that of quarterback Colin Kaepernick kneeling during the national anthem are determined “not only by the image itself, but by [the] many other images, headlines, ideas, and beliefs that we bring to this image” (p. 4). If ideologies function algorithmically, then we must recognize that algorithms also reflect ideologies.
However, bias within algorithms, when discriminatory, can be framed within paradigms of harm. Harm is a result of algorithms “[encoding] the biases of their developers or the surrounding society, producing predictions or inferences that are clearly discriminatory towards specific groups” (Baker & Hawn, 2021, p. 2). Edwards (2018) argues that while algorithms “often claim to be neutral, platforms make decisions, and these decisions are informed by multiple interests (political, economic, legal, etc.)” (p. 66, italics original). Various researchers have identified the ways that these “multiple interests” fail to recognize the many intersectional social contexts of users (Adams, Applegarth, & Simpson, 2020; Head, Fister, & MacMillan, 2020; Koenig, 2020; Noble, 2018; O’Neil, 2016, etc.), especially in regard to minoritized communities. These harms, however, are rarely discussed in the classroom and, consequently, continue to be ignored by teachers and students. Because these platforms “make decisions,” and because these decisions are more often than not inherently racist, sexist, ableist, or classist, there is a need to examine the process as rhetorical.
It is very difficult to defuse algorithmic harm if we do not teach students how to, first, actively and ethically engage with algorithms and, second, align algorithmic awareness with rhetorical awareness. While Gallagher (2020) agrees that “we must demystify algorithms” (p. 4), he also points out that multiple strategies are in place to keep algorithms opaque–misinformation, black boxing, diffused responsibility for the algorithms, proprietary claims, etc. Misconceptions about the potential harm of algorithms function as additional obstacles. Examples of discrimination, like those above, abound; yet, for many, “a surprisingly prevalent belief is that a machine learning model merely reflects existing algorithmic bias in the dataset and does not itself contribute to harm” (Hooker, 2021, p. 1). Such positionings maintain assumptions that “algorithms simply grind out their results, and it is up to humans to review and address how that data is presented to users, to ensure the proper context and application of that data” (Kirkpatrick, 2016, p. 16). This is in line with the by now familiar response of companies like Google to claims of bias in their algorithms: when racist, untrue, or exploitative content predominates a search, this is treated as unfortunate but unavoidable. For example, a 2011 statement from Google encouraged users disturbed by anti-Semitic search results to alter their search terms, citing an inability to remove pages unless required by law (Noble, 2018, p. 42). In other words, users are both responsible for potentially harmful search results and responsible for vetting what they find themselves.
Even the most skillful users are unprepared to navigate the most ubiquitous search engines–often unaware of the role that factors such as page ranking play. Moreover, “participants’ judgments of credibility [are] almost always based on name recognition” (Bakke, 2020, p. 11), a finding which seems reasonable until we consider the ways that Google and other search engines work to make certain venues, creators, and networks recognizable to us–at the expense, of course, of others. Consequently, how users frame search terms, and the presence of elements such as auto-complete or search recommendations, can act to “foreclose diverse perspectives in search results” (Bakke, 2020, p. 11). This is a key area of information literacy that is not being taught to students. While we are very good at discussing ethos-related rhetorical concerns, such as teaching students about the importance of source credibility (e.g. using concepts like authority, peer-review, etc.), we do not often situate search results the same way. In other words, we have not yet shifted our pedagogies to address ethos-related concerns associated with algorithms, concerns that shape search results.
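The feedback loop behind name recognition can likewise be made visible with a short simulation. The sketch below is hypothetical: the three source names, the initial click counts, and the click probabilities are all invented, and real page ranking is far more complex. Still, it illustrates how a crude rank-by-popularity rule lets a small initial advantage in recognizability compound into dominance.

```python
# A toy simulation of a rank-by-popularity feedback loop.
# All names, counts, and probabilities are invented for illustration.
import random

random.seed(7)

# Initial click counts: one source starts with slight name recognition.
clicks = {"well-known-site.com": 10, "small-archive.org": 8, "new-voice.net": 8}

for _ in range(1000):
    # Rank purely by accumulated clicks (a crude page-ranking proxy).
    ranked = sorted(clicks, key=clicks.get, reverse=True)
    # Most users click the top result; few scroll further down.
    choice = random.choices(ranked, weights=[0.7, 0.2, 0.1])[0]
    clicks[choice] += 1

print(clicks)
# The small initial edge compounds: diverse perspectives are foreclosed
# not by any single decision, but by the ranking loop itself.
```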
Information Literacy & FYW
Ultimately, this article supports holistic attention to information literacy across the curriculum, especially in general education courses like FYW. Information literacy is commonly understood as a “set of integrated abilities encompassing the reflective discovery of information, the understanding of how information is produced and valued, and the use of information in creating new knowledge and participating ethically in communities of learning” (ACRL, 2015, p. 7). As core general education coursework is designed to prepare students to participate fully in academic and civic life, it should include learning outcomes that engage students in information literacy recognition, interpretation, and appraisal. Discussions of information literacy, literacies, and literacy practices are necessary for students to fully understand “the complex sets of cultural beliefs and values that influence our understandings of what it means to read, write, and communicate with computers” (Vie, 2008, p. 14). These approaches must come into play if students are to gain comprehensive awareness of the information literacy practices they engage with when carrying out academic (and non-academic) technologically assisted research.
In addition, when we teach students about information literacy we must also prepare them to consider how algorithms function as rhetorical actors. Haider and Sundin (2021) explain “[i]nformation literacy today inherently implies the creation of meaning from information shaped in relation to and by algorithmic systems that employ different forms of predictive analytics” (p. 131), suggesting that central to any discussion of information literacy is also attention to algorithmic awareness and the ways that algorithms are also rhetorical. As “our world (online and off!) is increasingly mediated, filtered, personalized, and predicted by algorithms” it is necessary to teach students how to “appraise, interrogate, and analyze the roles algorithms play in structuring our information seeking and use” (Gardner, 2019). Moreover, “if we define information literacy as the ability to critically and reflectively locate, evaluate, and incorporate information–something we ask students to do in nearly every writing class–then the role of algorithms in that process must not be overlooked” (Bakke, 2020, p. 2). We know that engagement with technological information is shaped by past usage and response to this usage, therefore it is important for students to develop not simply information literacy skills, but also algorithmic literacy skills–or awareness. Much of this awareness lies in metacognition, self-reflection, and meaning-making: acts that enable students to make sense of their own lived experiences with technology. Moreover, these are skills that students need in order to succeed academically, professionally, and socially.
Coursework that cultivates information literacy awareness can foreground the ways that our lives and those of our students are shaped by technological interaction, yet “to date, no systematic investigation has explored what college students and faculty think about algorithm-driven platforms and concerns they may have about their privacy and access to trustworthy news and information” (Head, Fister, & MacMillan, 2020, p. 3). As a central tenet of post-secondary education is the development of interdisciplinary and transferable critical thinking skills—including reflection, comprehension, and problem solving—it is especially problematic that core general education courses have not, for the most part, addressed this chasm in student knowledge. This article, building upon the work of others, contends that educators must “encourage a critical analysis of AI in education that also attends to its perhaps unintended and unforeseen negative consequences” (Kizilcec & Lee, 2020, p. 1). Nevertheless, as Haider and Sundin (2021) explain, the bulk of AI-related scholarship has not examined the importance of
enabling people’s understanding of why they are provided certain information in certain constellations in the first place and why other information remains obscure…attending to this question more explicitly is a prerequisite for understanding how infrastructural conditions are implicated in enabling (and disabling) information, specifically in relation to algorithmic systems. (p. 130)
In a perfect world, all university libraries would employ diverse and specialized librarians–disciplinary, archival, information literacy, etc.–and information literacy instruction would occur across multiple library sessions or, even, a semester-long course. However, not all universities have access to these resources, and in many cases the task of teaching information literacy is left to the discretion of individual instructors. FYW’s focus on research and writing, as well as its status as a mainstay across many U.S. postsecondary institutions, makes it an ideal site to engage with topics of information literacy, including algorithmic awareness. Unfortunately, FYW courses are already burdened with primary or sole responsibility for teaching writing, research (including information literacy), public speaking, critical thinking, multimodal composition, and citizenship, making this necessary task even more difficult.
In focusing this article on the ways that these ends can be achieved within our home discipline we support the work of Koenig (2020) who suggests “students in composition, technical communication, and rhetoric should be aware of how they engage with these algorithmic platforms not only with a basic awareness, but with a critical eye and a sense of their own agency” (p. 12). This approach emphasizes algorithms as rhetorical actors, pushing back against commonplace worldviews that frame technological infrastructure as neutral.
Rhetorical & Metacognitive Interventions
We now examine some of the ways that FYW courses can shift to engage with information literacy, especially algorithmic awareness; we briefly survey the work of others in our discipline and extend this work to offer pedagogical interventions seated in rhetoric–emphasizing the ways algorithms function as rhetorical actors. Pedagogical approaches that incorporate algorithmic awareness can support research-related teaching, which is already a central tenet of FYW. That said, we stress that such awareness can (and should) be adapted across the curriculum; we encourage readers to consider ways to apply the strategies we offer within their own courses, programs, and institutions.
We begin by considering the role of rhetoric and rhetorical awareness; we then discuss the importance of metacognition and reflection. This focus on FYW serves to address what has come to be understood as the ineffective integration of “technological literacy instruction into the composition classroom in meaningful ways” (Vie, 2008, p. 9) and offers readers rhetorical and metacognitive interventions that support the development of student information literacy competencies.
Rhetorical Interventions
Rhetoric is already a central component—be it implicit or explicit—within every FYW course. We regularly teach students how to be rhetorically aware, particularly in regard to unpacking rhetorical situations (e.g. the triangulation of context, purpose, and audience) across textual, oral, visual, and other modes of communication. If we extend rhetorical awareness to conceive of algorithms as rhetorical actors, we can position “rhetorical literacy as an application of the effective use of language and technology and how users act within spaces to produce change” (Koenig, 2020, p. 3). Not only does positioning algorithms as rhetorical (both subject to persuasion and persuasive themselves) offer a way to incorporate information literacy into existing FYW curricula, it also helps us avoid a grand narrative of progress in our discussions of algorithmic bias, which is necessary because algorithms are changing all the time. By the time instructors are aware of incidents like those described in our introduction, these incidents are likely to have been resolved. To conceive of algorithms as rhetorical is to reject a narrative where algorithms are flawed but becoming “better” as each incident is resolved, and instead positions them as agents whose arguments can shift over time, as the values and ethos of those arguments change.
To treat algorithms as rhetorical means, at least, to rank them among the audiences and speakers our students think critically about. When algorithms are persuasive agents, rather than neutral tools, students are invited to recognize “how the machine influences them,” which should lead them to “become further aware of how they can act to influence the machine.” In these ways, rhetorical awareness can help students become “active agents in their own algorithmic entanglements” (Koenig, 2020, p. 7). Students can be taught to write for algorithmic as well as human audiences. Gallagher (2017) outlines strategies to support such moves, such as having students create content for the web (e.g. a YouTube video), with attention both to the human audience the video is created for, and the algorithmic audience which will determine whether and how their video is found. In the latter approach, students should think of audience as encompassing “the processes and procedures by which YouTube prioritizes their videos” (Gallagher, 2017, p. 27, emphasis original). Thus, when we teach audience (as we always do) we must make room to engage with “audience theories that better account for how our students already write for algorithmic audiences, help our students develop accurate descriptions of such practices, and describe the new ethical challenges that arise from those practices” (Gallagher, 2020, p. 4).
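Instructors who want to make the “algorithmic audience” tangible might share a sketch like the one below. It does not reflect YouTube’s actual system, which is proprietary; the fields, weights, and trending terms are classroom inventions. Its purpose is to show students that metadata choices–title wording, captions, and the like–are addressed to a machine reader as well as a human one.

```python
# A hypothetical 'algorithmic audience' for a video upload. Nothing here
# reflects YouTube's real ranking; fields and weights are invented.

def algorithmic_audience(video: dict, trending_terms: set) -> float:
    """Score how findable a video is to a ranking system, not to people."""
    title_words = set(video["title"].lower().split())
    overlap = len(title_words & trending_terms)     # keyword matching
    score = 2.0 * overlap                           # titles weighted heavily
    score += 0.01 * video["watch_minutes"]          # retention signal
    score += 1.0 if video["has_captions"] else 0.0  # machine-readable text
    return score

video = {"title": "How search algorithms shape what we see",
         "watch_minutes": 340, "has_captions": True}
print(algorithmic_audience(video, trending_terms={"algorithms", "search", "ai"}))
```

Asking students to revise a title and watch the score change turns the abstract claim–that they already write for algorithmic audiences–into an experience they can reflect on.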
Similarly, students should be invited to consider the ways that they are audiences for algorithms. In part, this involves teaching them to recognize ad content in search results, calling attention to it not simply so it can be scrolled past and avoided, but to consider it as part of the argument made by their search results. As we might ask of any text, what values are revealed by what is made available? This examination can support critical examinations of the logic operating behind assumptions about “best” and “most relevant” results: Who are the results most useful or most relevant for? How do results affect different users? Students need to be given space to think hard about why the answers to such questions matter as well as how their answers can affect their future research processes and engagement with search algorithms–academically and socially. These may seem like straightforward lines of inquiry, yet it is naïve to think students will critically engage and reflect upon their research process without prompting. Thus, when incorporating algorithmic rhetoric within coursework we need students to understand how “technological systems work with humans to persuade” (Bakke, 2020, p. 3) and not “convince ourselves that we are actively making decisions about how to participate in a given system [because], in reality, we accept options made apparently available to us from a set of constrained possibilities” (Brock & Shepherd, 2016, p. 21). It is this lack of critical inquiry into the relationships we have with algorithms that needs to be interrogated across the general education curriculum; examining algorithms as agentive and rhetorical is one way to improve students’ information literacy skills.
Metacognitive Interventions
Still, simply situating algorithms as rhetorical actors is not enough; we must also emphasize that “the agential affordances of algorithms seem obvious: algorithms exercise agency as they shape the behaviors of human users” (Adams et al., 2020, p. 2). This is a perspective that students rarely encounter; they are not taught to think of algorithmic interaction as agential and predominantly engage with online resources passively. However, students must be shown that “while networks can, on their surface, seem like sites primarily created for user engagement, it is critical to consider the imbalance between user-produced and network-shaped content” (Adams et al., 2020, p. 3). In other words, FYW classrooms can invite students to reflect upon how their research processes are more often “network-shaped” than “user-produced.” Teachers can assign coursework that asks students to critically interrogate their own and their peers’ research processes instead of trusting them blindly. We identify three layers inherent to this analysis, with the first focusing on process, the second on reflection, and the third on discussion.
While we use algorithms all the time, we tend to use them indiscriminately. Yet, for instance, the process of using a search engine to find a good restaurant is not the same as searching for a good academic source–“there is a time and place for more convenient, superficial searches and there is a time and place for more thorough, scholarly searches” (Bakke, 2020, p. 12). These differences–be they seated in content or context–are rarely points of discussion in the classroom. There is a maintained assumption that students are savvy internet users; however, just because students know how to use a technology does not mean that this usage is not superficial or problematic. In unpacking the role of technology within the research process, coursework can “focus on how users arrive at sources and how users decide what to trust, often implicitly, based on factors such as search result rankings” (Bakke, 2020, p. 3, italics original). Koenig (2020) explains that “to work towards improved algorithmic literacy, specifically within the classroom, is to first define how users engage with algorithmic platforms and what they understand about their practices” (p. 3). To achieve these ends, coursework can spend time inviting students to thoughtfully engage with their research process and to think about the choices they make in regard to the results they find as iterative, recognizing that the algorithm is also making choices–restricting some content and favoring other content. For example, students need to consider the quality of sources (even those found on ‘credible’ sites like Google Scholar), the dissemination of sources, and the impact of sources on not only their own academic and personal lives, but also those of their peers. In the classroom, such work can best be achieved in conjunction with exercises, discussions, and readings (such as excerpts of those we cite here). Students need to understand the research process as rhetorical and embodied.
While critical engagement with the process is a first step, reflective writing about the process motivates metacognitive awareness and is a necessary second move. Students need to come to the realization that they trust platforms like Google too readily and too uncritically; teachers can’t just tell students this–students need to reach such conclusions themselves. Gallagher (2017) offers concrete assignment-based interventions, such as having students write algorithm narratives in which they attempt to articulate the values/ideologies of a particular algorithm. These types of assignments encourage metacognition and awareness; “by denaturalizing and interrogating the black box of algorithms as alterable human constructs, [teachers and students] can attend to algorithms as persuadable, subjective writing activities rather than objective features of internet communication” (Gallagher, 2017, p. 32). Writing about algorithmic engagement can enable students to examine “patterns of use and their own role in creating those patterns,” which strengthens students’ “critical and rhetorical understanding of their engagements” (Koenig, 2020, p. 11). This reflective, metacognitive approach is especially important because it allows students to critically examine their past engagements with algorithms, encouraging them to consider algorithms as rhetorical actors. However, written reflection, while an integral part of any FYW classroom (though often associated with an analysis of one’s own writing process and rhetorical choices), is still not enough because it is primarily personal writing–written for oneself and, maybe, shared with the teacher.
The third level of developing algorithmic metacognitive awareness and building information literacy prowess is discussion. The simplest version of this could be a think-pair-share exercise where students are asked to consider a prompt or set of questions (e.g. What did you learn about algorithms from analyzing your own research process? How did your choices differ from those of your classmates? Why do you think this happened? How will this knowledge affect your search process?), respond individually to these (or other questions) via free-writing or simple brainstorming, and then share out their observations and justifications in small groups. In these groups, students would be given time to discuss the similarities and differences across peer responses. Following this pairing, the class would come together to synthesize these smaller discussions, with the teacher serving as a moderator and/or scribe. Ideally, after this process students would be given the opportunity to revisit their original reflection in order to see if any of their reasoning has shifted and, if so, students would be given time to revise the original reflection, documenting any experiential shifts and explaining what factors influenced them (moreover, these responses could help teachers shape future iterations of this exercise).
While this three-tiered process may seem lengthy, students need this scaffolding (across multiple exercises, class periods, etc.) in order to better process not only their role as algorithmic users, but also the ways algorithms function as rhetorical actors. An important part of information literacy is the promotion of metaliteracy, “which offers a renewed vision of information literacy as an overarching set of abilities in which students are consumers and creators of information…metaliteracy demands behavioral, affective, cognitive, and metacognitive engagement with the information ecosystem” (ACRL, 2015, p. 7), a framing that situates metacognition (e.g. critical self-reflection) as central to developing information and technological literacy/literacies. Ultimately, students need to be given time to process the process, and a huge part of this is time to think about their own choices as well as to consider these choices alongside those of their peers.
Conclusion
In unpacking two potential pedagogical interventions–rhetorical and metacognitive–we have sought to illustrate ways to consolidate pedagogical conversations around information literacy and algorithmic awareness. The research process is a useful starting point because it is foundational to general education curricula and all levels of higher education; yet, there has been little scholarship that foregrounds the relationships between student research, general education, and information literacy–a troubling gap. This article encourages post-secondary institutions and educators to acknowledge and respond to this information literacy deficit. Having students critically engage with the seen and unseen impacts of algorithmic bias, discrimination, and harm must be centralized and validated across multiple learning spaces if we are to prepare students to be agentive and ethical users of technology. Ultimately, future pedagogical research in information literacy must expand beyond the walls of libraries into general education curricula, yes, but it must also infiltrate all levels of undergraduate education (much like the boundlessness of the internet has infiltrated all aspects of our daily lives). Students are already cognizant of the ways technology can be helpful, but there is also a need to educate them on the ways that technologies, particularly algorithms, can be harmful, discriminatory, and biased.
References
Adams, H. B., Applegarth, R., & Simpson, A. H. (2020). Acting with algorithms: Feminist propositions for rhetorical agency. Computers and Composition, 57, 102581.
Association of College and Research Libraries. (2015). Framework for information literacy for higher education. Association of College and Research Libraries, a division of the American Library Association.
Baker, R. S., & Hawn, A. (2021). Algorithmic bias in education. International Journal of Artificial Intelligence in Education, 1-41.
Bakke, A. (2020). Everyday googling: Results of an observational study and applications for teaching algorithmic literacy. Computers and Composition, 57, 102577.
Beck, E. N. (2015). The invisible digital identity: Assemblages in digital networks. Computers and Composition, 35, 125-140.
Beveridge, A., Figueiredo, S. C., & Holmes, S. (2020). Introduction to “Composing algorithms: Writing (with) rhetorical machines.” Computers and Composition, 57, 102594.
Brock, K., & Shepherd, D. (2016). Understanding how algorithms work persuasively through the procedural enthymeme. Computers and Composition, 42, 17-27.
Dwoskin, E. (2019, January 25). YouTube is changing its algorithms to stop recommending conspiracies. The Washington Post. Retrieved May 13, 2022, from https://www.washingtonpost.com/technology/2019/01/25/youtube-is-changing-its-algorithms-stop-recommending-conspiracies/
Edwards, D. W. (2018). Circulation gatekeepers: Unbundling the platform politics of YouTube’s content ID. Computers and Composition, 47, 61-74.
Gallagher, J. R. (2017). Writing for algorithmic audiences. Computers and Composition, 45, 25-35.
Gallagher, J. R. (2020). The ethics of writing for algorithmic audiences. Computers and Composition, 57, 102583.
Gardner, C. C. (2019). Teaching algorithmic bias in a credit-bearing course. International Information & Library Review, 51(4), 321-327.
Haider, J., & Sundin, O. (2020). Information literacy challenges in digital culture: conflicting engagements of trust and doubt. Information, Communication & Society, 1-16.
Haider, J., & Sundin, O. (2021). Information literacy as a site for anticipation: temporal tactics for infrastructural meaning-making and algo-rhythm awareness. Journal of Documentation.
Head, A. J., Fister, B., & MacMillan, M. (2020). Information literacy in the age of algorithms: Student experiences with news and information, and the need for change. Project Information Literacy.
Hooker, S. (2021). Moving beyond “algorithmic bias is a data problem”. Patterns, 2(4), 100241.
Joyce, K., Smith-Doerr, L., Alegria, S., Bell, S., Cruz, T., Hoffman, S. G., … & Shestakofsky, B. (2021). Toward a sociology of artificial intelligence: A call for research on inequalities and structural change. Socius, 7, 2378023121999581.
Kirkpatrick, K. (2016). Battling algorithmic bias: How do we ensure algorithms treat us fairly? Communications of the ACM, 59(10), 16-17.
Kizilcec, R. F., & Lee, H. (2020). Algorithmic fairness in education. arXiv preprint arXiv:2007.05443.
Kleinberg, J., Ludwig, J., Mullainathan, S., & Rambachan, A. (2018, May). Algorithmic fairness. AEA Papers and Proceedings, 108, 22-27.
Koenig, A. (2020). The algorithms know me and I know them: using student journals to uncover algorithmic literacy awareness. Computers and Composition, 58, 102611.
Loukina, A., Madnani, N., & Zechner, K. (2019, August). The many dimensions of algorithmic fairness in educational applications. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications (pp. 1-10).
Mitchell, S., Potash, E., Barocas, S., D’Amour, A., & Lum, K. (2021). Algorithmic fairness: Choices, assumptions, and definitions. Annual Review of Statistics and Its Application, 8, 141-163.
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.
O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books.
Resnick, B. (2019, January 23). Yes, artificial intelligence can be racist. Vox. Retrieved May 13, 2022, from https://www.vox.com/science-and-health/2019/1/23/18194717/alexandria-ocasio-cortez-ai-bias
Rice, J. (2020). Algorithmic outrage. Computers and Composition, 57, 102582.
Smith, H. (2020). Algorithmic bias: Should students pay the price? AI & Society, 35(4), 1077-1078.