International Council for Media Literacy

Bridging Academia to Action

The Case of an AI Agent

April 18, 2026 by Gina Marcello

Abstract

This case study documents the discovery of an AI Agent impersonating a student in an asynchronous undergraduate media and information literacy course. A hidden prompt embedded in the assignment directions surfaced in the final paper, revealing not only machine-generated text but the wholesale outsourcing of learning to an automated generative AI agent. Drawing on McLuhan’s (1964) concept of media environments and the tetrad of media effects (McLuhan & McLuhan, 1988), the case is analyzed not as an isolated academic misconduct issue but as evidence of a shifting educational ecology. The incident calls not for panic but for a deeper investigation into the ecology of online learning, the vulnerabilities of asynchronous instruction, and the systemic risks of automation. The paper concludes with a dual call to action: pedagogical strategies that safeguard human voice and authenticity, and institutional reforms that authenticate student identity to ensure education remains a meaningful human endeavor (Postman, 1996). 

Keywords

AI Agents, Tetrad of Media Effects, Sociotechnical Systems, Misinformation, Media Education


Introduction: From Moral Panic to Media Environments 

The first clue that something unusual was happening in my summer class arrived in the form of a repeated sentence in a student’s final paper. This student was at risk of failing the course, had recently participated in the graduation ceremony, and was proficient in information technology applications. The student asked to meet with me before the final paper was due to see if there was any way they could pass the course. I agreed to meet so we could discuss any extenuating circumstances or challenges the student faced. After some discussion, I offered leniency if the student could demonstrate the ability to apply course concepts to the final case study analysis. While it might be challenging to do in such a short time frame, it was not impossible. I invited the student to contact me with any questions as they wrote their analysis. The student expressed what appeared to be genuine gratitude for the opportunity. 

When I opened the student’s assignment the next morning, I was stunned. The hidden prompt that I had embedded in the assignment directions, not visible to the students but visible to any LLM, appeared in the final paper ten times. My intent was not to trap students; the prompt was there to remind them to behave ethically.

In reading the paper, I assumed the repeated sentence was simply a sign of machine-generated text. The realization was disappointing in light of our recent conversation. However, a series of events unfolded after the first paper submission, and I was compelled to look more closely at the student’s submission patterns, assignment completion rates, course page views, and the use of my email correspondence in a resubmitted assignment. This deep investigation revealed a very different picture. The student not only used a large language model to write their final paper, but deployed an AI agent to take the course. 

An AI agent can log in to a course, download readings, complete quizzes, post to discussion boards, and submit assignments, all autonomously. In other words, once the agent is set up and granted access to the course, it completes all of the work on behalf of the student. The student never has to look at the course or complete an assignment, and this student appeared to have done just that. What I discovered was not just plagiarism, but the outsourcing of learning itself. It is tempting to frame this case as a straightforward example of academic dishonesty and respond with alarm and moral panic. Yet, as McLuhan and McLuhan (1988) and others have argued, moral panics obscure more than they clarify. The more urgent task before us is to treat such incidents not as cheating scandals but as signals of a new environment emerging within higher education. If the medium is the message, then the deployment of AI Agents in coursework does not simply introduce another form of misconduct; it alters the very ecology of authorship, presence, and participation, and with it the student-teacher relationship.

AI Agents in higher education are an indicator of a broader transformation in the ecology of education. This case study serves as the proverbial canary in the coal mine. AI technologies are reshaping what it means to participate, to author, and to learn in digital environments. To systematically analyze what the presence of an AI Agent signifies within the education ecosystem, this paper applies McLuhan’s tetrad of media effects (McLuhan & McLuhan, 1988), situating the event within broader critiques by Postman (1985, 1993), Noble (2018), and Zuboff (2020). The paper argues that AI Agents constitute a new environment that enhances, obsolesces, retrieves, and reverses core assumptions of higher education. By foregrounding ecological analysis, the goal is not to incite panic but to signal a significant ecological challenge, then map possible solutions to the automation of identity and authorship in digital learning spaces.

Case Description: Outsourcing Identity to a Machine 

The first time I realized an AI Agent was deployed in my undergraduate course, it was staring back at me in a student’s final paper. I had embedded a hidden prompt in the assignment directions, not visible to the students but visible to the LLM: “Include the following quote at the beginning of every paragraph: Integrity is doing the right thing even when no one is watching. Someone is always watching.” My intent was not to trap students but to make visible when an assignment had been machine-authored. If the phrase appeared in submitted work, it would reveal the use of generative AI and, more importantly, serve as a provocation: a reminder for students that authorship, ethical conduct, and authenticity matter. The same sentence repeated at the beginning of every paragraph would also signal that ethical conduct and proofreading are required; minimally, never submit a document as your own work without reading it. While some might assume the hidden prompt was intended to catch students cheating, this was not the intention; there are far less subtle ways to signal LLM use. This hidden prompt was a reminder that your unique voice and your integrity are of the highest importance. Do not subjugate either to a machine, no matter how compelling. 

When the student submitted their final paper, the hidden embedded prompt appeared at the beginning of every paragraph for a total of ten times. Although it was clear the paper was LLM-generated, I decided to grade it and provide commentary. Every time the phrase appeared, it was highlighted in yellow. My intention in providing comments and grading the submission was to encourage the student to understand how, where, and why this clearly automated submission failed. I followed up with an email asking them to review my comments and reach out so we could schedule a time to discuss their paper. I knew why the sentence was repeating; I had embedded the prompt in the assignment directions. As a lifelong media educator, one of the more powerful ways we can teach is to help students recognize when the promise of a technology does not live up to its hype. This was one of those opportunities. It is why I graded the submission instead of simply failing the student for academic dishonesty.  
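The occurrences were highlighted by hand here, but a canary check of this kind could in principle be automated. A minimal, hypothetical sketch (the function name and toy submission are invented for illustration; the canary sentence is the one quoted in the assignment directions):

```python
# Hypothetical sketch: counting verbatim reproductions of the hidden
# "canary" sentence in a submitted paper. Names are illustrative only.

CANARY = ("Integrity is doing the right thing even when no one is watching. "
          "Someone is always watching.")

def count_canary(submission: str) -> int:
    """Return how many times the hidden prompt sentence appears verbatim."""
    return submission.count(CANARY)

# Toy submission with the sentence opening each of two paragraphs.
paper = (CANARY + " First paragraph of analysis...\n\n"
         + CANARY + " Second paragraph of analysis...")
flagged = count_canary(paper)
```

A count above zero does not prove intent; as the case shows, its pedagogical value lies in prompting the conversation that follows.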

Having clear evidence of LLM use, I contacted the department chair to discuss appropriate next steps. At the large state university where I teach, the academic integrity violation process can be lengthy and complex, sometimes taking up to a year for a student’s case to be reviewed. In this situation, the student had already participated in the graduation ceremony, and this course was the last three credits needed to confer their degree. 

Within hours of my comments, email, and grade submission, the student emailed in a panic. They explained they had accidentally submitted a draft version and would like to resubmit. Before I could respond, the student resubmitted their paper. Unfortunately, the resubmission was far worse than the original. The resubmission referred to itself, my comments, and then praised how well this version met the professor’s expectations. It seemed the student had not read the instructions, their own submissions, or even my feedback. I was both stunned and confused as to why a student who was already at risk of failing the course would not look at the paper(s) they submitted. 

On the surface, we might simply assume this was an example of a student who did not bother to read the paper generated by the LLM before submitting it. But, as I reviewed their broader pattern of work in the context of the final submissions, including quiz submissions, social annotation assignments, time to complete assignments, and course page views, another possibility emerged. The student had not just used an LLM; they deployed an AI Agent. 

AI Agents are relatively new and not yet well understood in educational spaces. A good way to understand what an agent can do is to think of it as an assistant created to complete tasks on your behalf. You tell the agent what you want it to do, provide it with the required login credentials, and it completes those tasks and submits them for you. For example, you can ask an AI Agent to find the least expensive flight to a particular location on a particular day. Once identified, the agent will use your login credentials to book and pay for your plane ticket. Once complete, it can email you the reservation and add it to your calendar. While some may perceive this as incredible efficiency, what is often overlooked are the privacy and security breaches AI agents enable (Whittaker, 2025a, 2025b). In order to complete these tasks, the agent must be granted full, root-level access to your devices and accounts. Root-level access enables an agent to read, write, execute, modify, or delete any file, including files not connected to the specific course.
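Conceptually, such an agent wraps a language model in a simple autonomous loop: read the next open task, generate the deliverable, submit it. A deliberately simplified, hypothetical sketch (no real agent framework is shown; `complete` and `submit` stand in for the LLM call and the credentialed submission):

```python
# Deliberately simplified, hypothetical sketch of an autonomous agent loop.
# In a real agent, `complete` would call an LLM and `submit` would use the
# user's credentials; here both are stand-ins to show the structure only.

def run_agent(tasks, complete, submit):
    """Work through every open task with no human in the loop."""
    receipts = []
    for task in tasks:                   # e.g. "take quiz 3", "post to forum"
        work = complete(task)            # the model generates the deliverable
        receipts.append(submit(work))    # submitted on the user's behalf
    return receipts

receipts = run_agent(
    ["take quiz 3", "submit final paper"],
    complete=lambda t: f"answer for {t}",
    submit=lambda w: f"submitted: {w}",
)
```

The point of the sketch is the absence of any human checkpoint: nothing in the loop requires the account holder to see, read, or approve what is produced.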

In order for an AI agent to access a course, the student needs to generate a new API access token. These tokens allow the external application to access user data and perform actions on their behalf. Once logged in, the agent can download course materials, view due dates, take quizzes, post and respond to discussion board posts as well as emails, and submit assignments. Once set up, theoretically, the student never has to look at the course content or complete any of the assignments. Essentially, the AI agent, not the student, participates in the course. 
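The token-based access just described is the standard bearer-token scheme used by LMS REST APIs such as Canvas. A sketch of how such a request would be formed, with a hypothetical host, course id, and token, and no network call made:

```python
# Sketch of the bearer-token access pattern described above. The URL path
# follows the public Canvas LMS REST API; the host, course id, and token
# are hypothetical. Any client holding the token acts with the student's
# permissions, so the LMS cannot distinguish a human from an agent.

def build_assignments_request(base_url: str, course_id: int, token: str) -> dict:
    """Return the URL and headers an agent would use to list assignments."""
    return {
        "url": f"{base_url}/api/v1/courses/{course_id}/assignments",
        "headers": {"Authorization": f"Bearer {token}"},
    }

req = build_assignments_request("https://lms.example.edu", 101, "HYPOTHETICAL_TOKEN")
```

Because the token carries the student’s full permissions, every quiz answer, discussion post, and submission made with it is indistinguishable, at the platform level, from the student’s own activity.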

The realization that a machine, instead of a human being, was accessing and completing course assignments and responding to emails was deeply unsettling. My first reaction was panic. If a student could so easily outsource their participation and own learning, what does this say about the future of online education, the purpose of reading and writing in higher education, and the role of universities in authenticating human presence in an age of AI machines? 

Higher education rests on the assumption that enrollment signals presence. In this case, that assumption collapsed. The student had not simply cheated on an assignment; they had outsourced their identity as a learner. This realization marked a cataclysmic shift in my understanding of the stakes of large-scale AI integration into educational spaces. While academic dishonesty is not new, agentic technologies fundamentally alter the online learning environment.

Evolving Media Ecosystems 

Media literacy has long been shaped by moral panics surrounding new technologies. From the printing press to television to the internet, each disruption provokes cultural anxieties about decline while also compelling educators and scholars to develop new tools for interpretation and response (Buckingham, 1991; Gerbner et al., 2021; Hall, 2021; Livingstone, 2004; McLuhan, 1964, 1977; McLuhan et al., 1978; Ong, 2009; Postman, 1980, 1993). Our intellectual ancestors remind us that panic often obscures what matters most: new media reorganize environments, altering how people communicate, learn, and understand the world. 

McLuhan (1964) fundamentally reshaped how educators and scholars approached communication technologies. Rather than focusing on content alone, McLuhan argued that media function as environments that extend human senses, reorganize perception, and alter the balance of human relations. McLuhan’s claim that the medium is the message underscores the idea that form, not content, produces the most significant cultural effects (McLuhan & Fiore, 1967). Postman (1993) built on this ecological perspective, arguing that technological change is not additive but transformative: it alters entire systems of meaning and culture. Just as television transformed public discourse into entertainment (Postman, 1985), AI Agents transform online learning into simulation. Participation is no longer evidence of presence, and completion is no longer evidence of learning. The very ecology of education changes when machines can be deployed to impersonate students. 

Zuboff (2020) describes how platforms commodify human behavior, extracting and automating actions to predict and control populations. AI Agents in education can be understood as part of this broader logic: they automate not just content creation but the behaviors of learning itself. Andrejevic (2020) extends this critique, suggesting that automation alters the meaning of participation. If a student can outsource the act of logging in, taking quizzes, and submitting papers, then being a student becomes a performance that a machine can inhabit. Automated systems also reproduce structural inequities. When students assume machine outputs are neutral stand-ins for human work, systemic biases are further embedded into the environment (Noble, 2018). Machine output is never neutral; assuming a machine answer is invariably the correct answer undermines the very essence of education: appreciation for diverse ways of knowing. Together, these critiques remind us that the problem is not merely dishonesty, but the commodification and automation of identity itself, and the comfort some people feel in allowing a machine to represent them.

Scholars caution against polarized narratives that frame AI as either savior or destroyer of education (Selwyn, 2019; Potter, 2020). Instead, they call for pragmatic analyses of how digital technologies reshape pedagogy, equity, and governance. AI Agents exemplify the need for a systemic perspective (Milner & Phillips, 2020). Their deployment alters the very essence of learning, collapses trust, and forces institutions to reconsider how they verify identity and evidence of learning, and how they sustain authentic human presence in digital environments. McLuhan’s tetrad (McLuhan & McLuhan, 1988) is a powerful tool for understanding the significance of this case, as it highlights the larger systemic issues arising from the presence of AI Agents in university culture. In what follows, I apply the tetrad to analyze how AI Agents enhance, obsolesce, retrieve, and reverse the ecology of higher education. 

The Tetrad as a Framework for Understanding AI Agents 

The tetrad of media effects is a heuristic that poses four simple but generative questions: What does the technology enhance? What does it obsolesce? What does it retrieve from the past? And what does it reverse when pushed to its limits? The value of the tetrad lies in its ability to step outside moralistic framings of technology as good or bad. Instead, it treats media as environments, drawing attention to how they reorganize practices, relationships, and cultural meanings. 

Enhances – What does the technology enhance? 

AI Agents enhance efficiency in ways that make participation in courses appear effortless. AI agents can log in, take quizzes, and submit assignments at speeds and levels of consistency that even the most conscientious student would struggle to match. Summer courses are information dense, and using an AI agent enables rote tasks to be completed with minimal to no effort. Yet Zuboff (2020) warns in her analysis of surveillance capitalism that efficiency often comes at the cost of meaning. What is enhanced is not the student’s capacity to inquire or reflect but the machine’s ability to perform. Agents in educational spaces only serve to train the machines. Feedback once reserved for a human learner now becomes data for the machine to learn how to perform like a human. As Lamp et al. (2025) suggest, the aim of such systems is “[t]o significantly improve human likeness of synthetic activity yielding a more realistic and effective synthetic user persona.”

The enhancement of efficiency is precisely why the technology is attractive to struggling or disengaged learners in the first place: it provides a quick fix to the demands of online coursework. What looks like efficiency is also a simulation, a performance of presence rather than genuine engagement. When agentic machines are deployed by learners in educational spaces, student actions become performative. When the AI performs on the student’s behalf, it is an imposter, a role-player. It may also enable a more efficient, free simulation training environment: while the student believes they are being provided a free assistant, they are enabling “free” access to curricula and human interactions within a structured learning context. For media literacy pedagogy, this presents a profound risk. When efficiency becomes the dominant value, students may learn to equate the appearance of activity with authentic engagement and learning. The distinction between being a student and acting like one becomes blurred, and with it the possibility of cultivating mindful, critical habits of inquiry. In short, AI Agents enhance the metrics of education while eroding its substance: learning. AI agents in this context are anti-educational because they prevent learning while representing it. 

Obsolesces – What does the technology obsolesce? 

In enhancing efficiency, AI Agents simultaneously obsolesce the traditional markers of trust and authorship typical of asynchronous learning environments. A completed quiz or submitted paper can no longer be assumed to reflect the intellectual labor of a human learner. Trust, once implicit in the teacher-student relationship, is eroded, as faculty can no longer assume that a submission represents authentic engagement. Online and asynchronous learning that lacks human authentication mechanisms has enabled this abuse of agentic technology. Postman (1993) reminds us that technological change is never additive; it transforms the entire system of meaning in which it operates. Just as television recast public discourse as entertainment, AI Agents deployed by students recast academic engagement as simulation. Reflection and inquiry, long embedded in the act of reading and writing as processes of thinking, are displaced when machines can generate responses indistinguishable from student work. The obsolescence is not limited to authorship but extends to the pedagogical practices that depend on it: formative feedback, revision processes, and dialogic exchange between students and instructors. Practices that succeed in face-to-face learning environments are now inappropriate and ineffective measures in digital environments. At the institutional level, AI Agents threaten to obsolesce assessment itself. If human presence cannot be verified, grading is essentially useless. The very premise that enrollment signals presence and participation collapses. What disappears is not simply the authentic, and very human, student paper but the ecology of trust, reflection, and meaning-making that defines education as a human endeavor (Postman, 1996). 

Retrieves – What does the technology retrieve from the past? 

AI Agents retrieve an older, industrial model of education in which learning is measured by compliance and task completion. They bring back a transactional view of schooling that values ticking boxes over cultivating understanding. This echoes Freire’s (1974, 2018) critique of the banking model of education, where knowledge is deposited into learners who are expected to withdraw it on demand, without meaningful transformation. The use of an agent bypasses the epistemic and reflective labor that actual learning requires, and much of the pedagogy upon which media, information, and digital literacy are built. As McLuhan and McLuhan (1988) observed, what media retrieve from the past often returns in intensified form. The agent does not simply recall the industrial classroom; it amplifies its logic of compliance to the point where participation itself can be simulated. 

By automating the gestures of participation, logging in, submitting work, and posting comments, AI agents retrieve the view of schooling as performance of compliance rather than cultivation of thought. For media literacy pedagogy, this is particularly troubling. This retrieval is ecological, not nostalgic. Postman (1993) would remind us that when compliance becomes the measure of education, the entire system of meaning is reorganized. In such an environment, reflection and authorship are not merely diminished but redefined as unnecessary, or at least reproducible by a machine. 

Reverses – What does the technology reverse when pushed to extremes? 

When pushed to extremes, the efficiency and simulation of AI Agents reverse into the collapse of higher education’s legitimacy. If a degree can be earned by an automated system indistinguishable from a student, then the degree itself no longer certifies human learning but machine performance. The reversal is not only institutional but cultural. Education shifts from a process of cultivating inquiry and autonomy to a spectacle of automation. This is especially crucial at a time when many people are questioning the value and necessity of a university education; such questioning was unthinkable when I was an undergraduate, and now it is not. This is an existential challenge for universities. Predictions that AI will replace knowledge workers are encouraging some people to forgo a university education and seek AI-proof careers, especially in the trades.

McLuhan and McLuhan (1988) remind us that all media, when extended beyond their limits, reverse into counter-environments. The Agent’s promise of efficiency reverses into a spectacle of automation, where presence is indistinguishable from absence. What was once the measure of engagement, submissions, logins, and discussion posts, becomes evidence only of system-level simulation. Participation becomes its own parody, and therefore calls for its revision.

Postman (1985, 1993) might frame this reversal as a technopoly in education, where efficiency and output displace meaning and reflection. The institution’s purpose is destabilized, even defrauded. Universities risk certifying compliance with automated systems rather than cultivating reflective, autonomous citizens. Trust in the university as a site of human development is threatened, as degrees risk being perceived as credentials without clear evidence of learning. For media literacy pedagogy, the stakes are existential: when automation reverses presence into absence and authorship into simulation, the very conditions for inquiry, dialogue, and epistemic reflection are jeopardized, if not nullified. The social contract of higher education, the promise to cultivate thoughtful, reflective citizens, is at risk of collapse. Rather than getting an education, this student used an agent to get the certification without the learning.

Taken together, the tetrad reveals that AI Agents are not a marginal tool or a passing concern but a new media environment that reorganizes the very conditions of higher education. They enhance efficiency in ways that simulate participation, but in doing so they obsolesce trust, authorship, and the pedagogical practices that depend on them. At the same time, they retrieve an older, industrial model of compliance and task completion, reanimating the very logic that critical pedagogy and media literacy have long worked to resist (Kubey, 1997; Macedo & Steinberg, 2009). 

Finally, pushed to their limits, AI Agents reverse higher education into a legitimacy crisis. The misuses of AI Agents serve as the canary in the coal mine, warning of a deepening rupture in the social contract of education as learning becomes automated performance. What emerges is not a story of academic dishonesty in isolated cases, but evidence of a systemic transformation. AI agents deployed by students in learning contexts reveal how precarious authorship, reflection, and presence have become in online learning environments, signaling a deeper instability at the core of higher education. The tetrad makes clear that AI Agents do not merely supplement student learning but reorganize the environment of higher education. The challenge, then, is not to respond with panic or prohibition, but to design pedagogies and policies that preserve learning as an authentically human endeavor. Doing so will require educators and institutions to work at multiple levels.

Course Design for Inquiry and Reflection 

If AI agents retrieve an older, transactional model of education, where success is measured by the completion of tasks rather than the cultivation of understanding, then our responsibility as educators is to design courses that resist automation and foreground reflection. Assignments most vulnerable to substitution, such as formulaic essays and discussion board posts, multiple-choice quizzes, and summaries or reports, should be reimagined. Designing for inquiry and reflection means privileging intellectual struggle over efficiency, inquiry over compliance, and self-awareness over simulation. 

As educators, we must remind students that learning is not about compliance or efficiency but about wrestling with ideas, questioning assumptions, and situating themselves in the flow of information they encounter every day (Hall, 2021). This means moving beyond assignments that machines can easily mimic by creating tasks that require lived experience, epistemic values, and intertextual connections. Oral defenses, collaborative projects, and annotation-based assignments that emphasize personal experiences invite deeper engagement that machines struggle to reproduce. Assignments rooted in mindful engagement with course concepts (Langer, 2000; Potter, 2020) are not AI-proof but can serve as a reminder that intellectual struggle is not something to bypass but vital for intellectual and personal growth. 

If AI Agents obsolesce trust and blur authorship, then pedagogy must foreground authorship as a visible, non-substitutable human act. It should also address the moral question of allowing a machine to represent us. Assignments that draw on lived experience, articulate epistemic values, or require intertextual connections make it harder for machines to substitute and easier for authentic voice to emerge. Hobbs (2010, 2020) underscores that authorship is central to critical agency, and Chinn (2020, 2021) argues that reflection and ownership of ideas are vital epistemic practices. To resist the obsolescence of authorship, educators can design assignments that foreground epistemic reflection, emphasizing the irreplaceable value of lived experiences. 

Treating AI as an Object of Study 

If AI Agents enhance efficiency by simulating learning, then one pedagogical response is to shift them from hidden actors to explicit objects of analysis. Rather than banning AI, educators could integrate AI output into coursework. Ask students to annotate subject-specific output critically and debate the ethics of outsourcing intellectual labor to a machine. If possible, discuss the political economy of information technology; make visible the invisible architecture of the information ecosystem, and discuss possible human harms in relation to the topic under analysis (McChesney, 2008). These types of activities can cultivate epistemic vigilance and reframe AI use as an opportunity for deep reflection and inquiry. In this way, media literacy pedagogy fulfills its long-standing goal: to treat media not only as vehicles of content but as environments to be interrogated and understood (Buckingham, 2003, 2013, 2019; Hobbs, 2010, 2020). 

Institutional Responsibility for Authentic Presence 

Finally, if AI Agents reverse higher education into a legitimacy crisis, then responsibility cannot rest on pedagogy alone. Institutions must assume responsibility for verifying human presence. Just as financial institutions authenticate transactions through multifactor systems, universities must develop mechanisms to ensure that students are indeed who they claim to be. This is particularly urgent in asynchronous courses where impersonation is most likely. The longer universities wait, the more they risk eroding trust not only between faculty and students but also between higher education and the publics we serve. Verification systems cannot be treated as optional add-ons; they are now essential to preserving the integrity of degrees, the meaning of authorship, and the legitimacy of the university itself. 

Conclusion 

What began as a hidden prompt to detect LLM use revealed something more profound: the vulnerability of higher education to identity outsourcing, AI agents posing as students, and the real possibility that technology companies are using university courses and data as simulation environments to test “behavioral fidelity of synthetic user personas” (Lamp et al., 2025). Framed through McLuhan’s (1988) tetrad, this case suggests that AI imposter agents risk reversing higher education into a crisis of legitimacy. Technologies change not only what we do but who we are (Ong, 2009; Postman, 1993). 

The responsibility of media, digital, and information literacy scholars is not only to defend education from the risks of automation but to generate new pedagogies and practices that preserve and extend human-centered learning. If AI agents are the proverbial canary in the coal mine, then this case is a warning that the social contract of education is under severe strain and, perhaps, attack. The task before us is not to panic, but to act. We must renew practices and policies that safeguard education as a place where students show up as thinkers, creators, and citizens, and where learning remains, at its core, a human endeavor. Universities owe that to their students and funders who depend on them for the knowledge required to be productive members of society and engaged citizens.

References 

Andrejevic, M. (2020). Automated media. Routledge, Taylor & Francis. https://doi.org/10.4324/9780429242595 

Buckingham, D. (1991). Media education: The limits of a discourse. Paper presented at the Annual Conference of the American Educational Research Association, Chicago. 

Buckingham, D. (2003). Media education: Literacy, learning, and contemporary culture. Polity Press. 

Buckingham, D. (2013). Media education: Literacy, learning and contemporary culture. Wiley. 

Buckingham, D. (2019). The media education manifesto. Polity Press. 

Chinn, C. A., Barzilai, S., & Duncan, R. G. (2020). Disagreeing about how to know: The instructional value of explorations into knowing. Educational Psychologist, 55(3), 167–180. https://doi.org/10.1080/00461520.2020.1786387 

Chinn, C. A., Barzilai, S., & Duncan, R. G. (2021). Education for a "post-truth" world: New directions for research and practice. Educational Researcher, 50(1), 51–60. https://doi.org/10.3102/0013189X20940683 

Freire, P. (1974). Pedagogy of the oppressed (10th printing). The Seabury Press. 

Freire, P. (with Macedo, D. P.). (2018). Pedagogy of the oppressed: 50th anniversary edition (4th ed.). Bloomsbury Publishing USA. 

Gerbner, G., Gross, L., Morgan, M., Signorielli, N., & Shanahan, J. (2021). Growing up with television: Cultivation processes. In Media effects (2nd ed., Chapter 3).

Hall, S. (2021). Writings on Media: History of the Present (C. Brunsdon, Ed.). Duke University Press. https://doi.org/10.1215/9781478022015 

Hobbs, R. (2010). Digital and media literacy: A plan of action. A white paper on the digital and media literacy recommendations of the Knight Commission on the Information Needs of Communities in a Democracy. Aspen Institute. 

Hobbs, R. (2020). Mind over media: Propaganda education for a digital age (First edition). W. W. Norton & Company. 

Kubey, R. (Ed.). (1997). Media literacy in the information age: Current perspectives. Transaction Publishers. 

Langer, E. J. (2000). Mindful learning. Current Directions in Psychological Science, 9(6), 220–223. https://doi.org/10.1111/1467-8721.00099 

Lamp, S., Hiser, J. D., Nguyen-Tuong, A., & Davidson, J. W. (2025). PHASE: Passive Human Activity Simulation Evaluation (No. arXiv:2507.13505). arXiv. https://doi.org/10.48550/arXiv.2507.13505 

Livingstone, S. (2004). Media literacy and the challenge of new information and communication technologies. The Communication Review, 7(1), 3–14. https://doi.org/10.1080/10714420490280152 

Macedo, D. P., & Steinberg, S. R. (Eds.). (2009). Media literacy: A reader. Peter Lang Publishing, Inc. 

McChesney, R. W. (2008). The political economy of media: Enduring issues, emerging dilemmas. Monthly Review Press. 

McLuhan, M. (1964). Understanding media: The extensions of man (Repr.). Mentor. 

McLuhan, M. (1977). Laws of the media. ETC: A Review of General Semantics, 34(2), 173–179. JSTOR.

McLuhan, M., & Fiore, Q. (1967). The medium is the massage: An inventory of effects. Bantam Books. 

McLuhan, M., Hutchon, K., & McLuhan, E. (1978). Multi-media: The laws of the media. The English Journal, 67(8), 92. https://doi.org/10.2307/815039 

McLuhan, M., & McLuhan, E. (1988). Laws of media: The new science (Repr.). University of Toronto Press. 

Milner, R. M., & Phillips, W. (2020). Cultivating ecological literacy. In You are here. MIT Press. https://youarehere.mitpress.mit.edu/pub/fc80dnhc/release/1 

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press. 

Ong, W. J. (2009). Orality and literacy: The technologizing of the word (Reprinted). Routledge. 

Postman, N. (1980). Teaching as a conserving activity. Dell Pub. Co. 

Postman, N. (1985). Amusing ourselves to death: Public discourse in the age of show business (20th anniversary ed). Penguin Books. 

Postman, N. (1993). Technopoly: The surrender of culture to technology (1st Vintage Books ed). Vintage Books. 

Postman, N. (1996). The end of education: Redefining the value of school. Vintage. 

Potter, W. J. (2020). Media literacy (9th ed.). SAGE. 

Whittaker, M. (2025a, June 25). AI, tech power and surveillance [Video]. The New Institute. YouTube. https://www.youtube.com/watch?v=kHL_Z0EERlM

Whittaker, M. (2025b, August 7). AI security warning from Signal app's Meredith Whittaker: The hidden dangers of agentic AI [Video interview]. YouTube. https://www.youtube.com/watch?v=jE_CNezjV7o

Zuboff, S. (2020). The age of surveillance capitalism: The fight for a human future at the new frontier of power (First trade paperback edition). PublicAffairs.

About the Author: Gina Marcello, Associate Teaching Professor, Rutgers University School of Communication and Information

    Gina Marcello is an Associate Teaching Professor at Rutgers University in the School of Communication and Information. She is the co-author and program coordinator of Disinformation Detox in Communication, Media, and Information Studies (04:189:210), an interdisciplinary general education course that equips students to critically navigate misinformation. Her media literacy journey began in the 1990s under the mentorship of cognitive psychologist Robert Kubey, Ph.D., whose research on television addiction and screen dependency inspired her dissertation, and continues to inform her focus on fostering cognitive engagement, epistemic awareness, and sustainable digital habits.


Published in The Journal of Media Literacy, McLuhan Mosaic: Case Studies. Keywords: Misinformation, Media Education, Tetrad of Media Effects, AI Agents, Sociotechnical Systems

