Abstract
In an era where media technologies, from legacy broadcast to social networks and generative AI, pervade everyday life, the once-invisible media environment demands renewed scrutiny. This article re-examines media effects and knowledge construction through a critical lens that “makes the familiar strange again.” Drawing on media ecology, constructivist epistemology, and media literacy scholarship, it argues that media should be understood not merely as conveyors of content but as environmental forces shaping how reality is perceived and knowledge is constructed. I explore the shift from content-centric analyses to viewing media as environments, the influence of attention and information flow on the formatting of knowledge, and the role of interpretation and power in meaning-making. Integrating radical constructivist insights into media studies, I also consider how realities are co-constructed and how AI-driven platforms create new epistemic media environments. The conclusion calls for an expanded notion of media literacy attuned to these complex dynamics in the age of artificial intelligence.
Keywords
Media Effects, Knowledge Construction, Media Literacy, Artificial Intelligence, Radical Constructivism

Introduction
Marshall McLuhan famously quipped, “We don’t know who discovered water, but we know it wasn’t a fish… A pervasive environment…is always beyond perception”. His metaphor highlights how people immersed in a media environment often fail to notice its influence: the medium itself becomes “as imperceptible as water to a fish”. Today’s media-saturated society exemplifies this idea: media have become so ubiquitous that we take them for granted, like air and water. In the age of artificial intelligence, this familiar backdrop of media and technology must be made strange again, rendered visible and subject to inquiry, in order to understand how it shapes knowledge and perception. This article aims to examine how media, old and new, act not just as channels of content but as environmental forces that structure attention, construct knowledge, and wield power over meaning. It connects insights from legacy media research with contemporary challenges posed by digital platforms and generative AI. Classic media theories warned that the form of media can eclipse content in impact (encapsulated in McLuhan’s dictum that “the medium is the message”) and that media environments can subtly reformat how we, as media prosumers, think and communicate. Meanwhile, constructivist epistemology reminds us that knowledge of reality is actively constructed rather than passively absorbed, an insight that has become newly salient in an era of personalized feeds and AI-generated information. By “making the familiar strange again,” the paper takes a critical stance toward media habits and infrastructures often seen as natural.
The sections that follow explore: (1) a paradigm shift from analyzing media content to examining media as environments; (2) how media shape knowledge via attention structures and flow; (3) the interplay of meaning-making and power; (4) the concept of constructed realities through a radical constructivist lens; (5) artificial intelligence as a novel epistemic media environment; and finally, (6) implications for media literacy. By bridging legacy media insights with the AI-driven present, the article also aims to illuminate continuities and changes in how media shape our construction of reality, and to show why an updated media literacy is crucial for education and civic life in the 21st century.
1. From content to environment.
Early media research often fixated on content, analyzing whether violent images increase aggression or how propaganda messages sway opinions. Such media effects studies implicitly treated media as neutral conduits for content. However, mid-20th-century thinkers like Marshall McLuhan and Neil Postman redirected attention from what media convey to how the medium itself conditions experience. McLuhan argued that a medium’s characteristics (its form, speed, and sensory biases) influence society more profoundly than any single message it carries. As he famously stated, “the medium is the message” (McLuhan, 1964), meaning that a medium’s environmental effects on habits and perception are its real message. For example, the rise of television did not just entertain the public with new content; it transformed the cultural environment by privileging visual, fast-paced communication over the typographic, linear logic of print (Postman, 1985). In Postman’s analysis, television turned public discourse into a form of show business, altering our notions of truth and knowledge by formatting serious issues as entertainment, a shift in epistemology that content alone could not explain. Media researchers thus began to view media technologies as environmental forces or ecosystems that shape how people think, learn, and socialize. The field of media ecology, introduced by Postman in the 1960s, explicitly studies “media as environments”. A classic illustration is the contrast between print culture and television culture: print (a predominantly textual, linear medium) fosters linear thinking and sustained argument, whereas television (an audiovisual, image-centric medium) favors immediacy and emotional appeal. The content might be about politics or education in both cases, but the forms mold the message’s meaning. In short, focusing solely on content is insufficient; one must consider the media environment, the socio-structural context that envelops content and audience.
In this sense, recognizing media as environments makes the familiar strange by revealing how deeply our cognitive patterns and social practices are entwined with the media that deliver our information and connect us to the world.
2. Attention, flow, and the formatting of knowledge.
Media environments fundamentally organize our attention, a scarce resource in any information-rich society. As early as 1971, economist Herbert Simon presciently noted that “a wealth of information creates a poverty of attention”. In other words, the more information bombards us, the more our attention must be divided and commodified. Different media formats capture and direct attention in particular ways, thereby shaping how knowledge is packaged and received. For instance, broadcast television introduced what Raymond Williams (1974) called “flow” as its dominant mode of communication: a seamless sequence of programs, commercials, and segments that blends into a continuous stream. Williams observed that viewers even describe the experience not by specific content but simply as “watching television,” and many find it “very difficult to switch off,” as one show flows into the next. The format of TV (its unbroken flow without clear intervals) encourages passive absorption and prolonged viewing, effectively formatting knowledge into a never-ending stream of stimuli. In this environment, knowledge often arrives as brief, emotionally engaging bits optimized to keep us watching rather than in-depth, reflective analysis. If television’s flow was “insidious” in making it hard to stop watching, today’s digital platforms have elevated attention management to a fine science. Social media feeds and streaming platforms deploy infinite scrolls, autoplay, and algorithmic personalization to create a personalized flow for each user, aiming to maximize “time spent.” This attention economy drives the formatting of knowledge into click-friendly headlines, bite-sized posts, and trending topics. Information is delivered in streams that favor speed, novelty, and visceral impact, often at the expense of depth. The result is an information environment that keeps users engaged (or “hooked”) but can also lead to fragmented understanding.
Knowledge construction in such an environment tends toward breadth rather than depth: users encounter many bits of information without a structured framework, potentially fostering what Nicholas Carr (2010) called “the shallows” of scattered attention. Moreover, algorithmic curation means individuals see unique information streams, reinforcing certain knowledge and omitting other perspectives. In all cases, the formatting of knowledge, how information is presented and sequenced, influences what comes to be known, or believed. Whether through the curated flow of a network news broadcast or the endless scroll of a TikTok feed, media environments shape our cognitive habits, guiding what the audience pays attention to and for how long. A critical awareness of these attention structures is thus essential. As McLuhan warned, the “psychic and social effects” of media operate like a narcotic, unnoticed by those under their influence. Developing media literacy today requires recognizing these formatting effects: understanding that the mode of delivery (fast versus slow, image versus text, continuous feed versus discrete chunks) can subtly determine how knowledge is processed and prioritized.
3. Meaning, interpretation, and power.
Media messages do not implant meanings directly into passive brains; rather, meaning is actively constructed in the interaction between text and audience. Stuart Hall’s (1980) influential encoding/decoding model emphasized that while media producers may encode preferred meanings into a message, audiences decode those messages according to their own frameworks of knowledge, social positions, and experiences. In Hall’s words, before a message can have an effect, it must be appropriated as meaningful discourse and decoded by the audience, and this decoding is not guaranteed to align with the intent of the sender. Thus, two people might interpret the same news report or film in starkly different ways. Meaning is negotiated or even contested, not simply delivered. This insight foregrounds the role of interpretation: media literacy involves not just access to content but the skill to critically interpret and evaluate that content. An educated audience can recognize, for example, when a photograph in a news article is being used symbolically or when a statistic is framed to support a particular narrative. They can generate alternative interpretations, a cornerstone of critical thinking. However, interpretation does not occur in a power vacuum. The process of meaning-making in media is intertwined with questions of power: Who gets to produce media narratives? Whose interpretations become dominant or “common sense”? Media institutions (legacy broadcasters, Hollywood studios, big tech companies) have historically held disproportionate power in framing issues and selecting which voices are amplified. The concept of agenda-setting in journalism research encapsulates this: the media may not tell people what to think, but through editorial choices, they are stunningly successful in telling people what to think about.
For instance, if news outlets give weeks of saturation coverage to a minor political scandal while ignoring systemic issues, the public’s perception of importance is skewed by that coverage. Likewise, social media algorithms today exercise a new form of agenda-setting power by pushing certain topics or posts to trend. The power to shape the information agenda and the framing of issues has profound effects on collective knowledge and public discourse. As Foucault (1980) argued, power and knowledge are co-constitutive: those who control the discourse (the ability to make certain statements appear true or important) wield power, and conversely, power can determine what is accepted as knowledge. Furthermore, media representations can privilege certain interpretations and marginalize others. Cultural studies research has shown how stereotypes in media (regarding race, gender, etc.) can reinforce power hierarchies by normalizing meanings, for example, portraying one group consistently as villains or victims shapes societal perceptions of that group. At the same time, audiences can resist these meanings, creating subversive or oppositional interpretations that challenge power. In the digital era, “prosumer” culture (where users produce media content themselves) adds another layer: marginalized groups may use online platforms to inject counter-narratives, yet platform algorithms and moderation policies (often opaque) may still favor established power structures and mainstream perspectives (e.g., through ad-driven incentive structures or biases in AI moderation). In summary, understanding media’s role in knowledge construction requires grappling with this dynamic of meaning and power. 
Media literacy education places emphasis on interrogating who created a message and why, recognizing that every media text comes from someone with a point of view, and that our own interpretations can either unwittingly reinforce or consciously question the power relations embedded in media content. Empowering learners to decode and question media messages is thus both an epistemic exercise and a civic one, enabling a more democratic distribution of interpretive agency in the face of concentrated media power.
4. Constructed realities and radical constructivism.
If the previous section considered how meaning is made, here I zoom out to ask: What is “reality” in a mediated world? Constructivist theories of knowledge assert that individuals do not have direct access to an objective reality; rather, we construct our own realities based on our perceptions and cognitive schemas. In the radical form of this view, radical constructivism (RC), knowledge is seen as constructed by the knower and valued for its viability (its usefulness in organizing experience), not its supposed correspondence to an objective truth. The philosopher Ernst von Glasersfeld (1917–2010), a pioneer of RC, put it succinctly: knowledge “does not depict or correspond to an observer-independent, objective reality, but instead constitutes a viable model built by an observer to make sense of experience” (von Glasersfeld, 1995, p. 18, as cited in Martinisi, 2025). From this perspective, all of us are, in effect, constructing our realities inside our heads, a process inevitably influenced by the information and experiences we encounter, including those from media. Media, then, serve as experience providers that feed into our reality-construction. A person who grows up watching a heavy dose of televised crime dramas may construct a reality in which the world appears far more dangerous than crime statistics justify, a phenomenon documented in Gerbner’s cultivation research (Gerbner et al., 1994). In the digital realm, a social media user whose feed is filled with conspiracy theories may construct a very different sense of reality than someone whose feed contains primarily science journalism. Radical constructivism invites us to see these divergent realities not simply as one informed and one deluded individual, but as each person maintaining a coherent model of the world given the inputs they have selected or been provided.
Echo chambers and filter bubbles exemplify this: online communities often self-organize such that members constantly reinforce each other’s perspectives, “stabilizing internal coherence through repeated validation of shared distinctions” (Martinisi, 2025, p. 75). In an earlier article (Martinisi, 2025), I argued that echo chambers can be understood through the lens of RC “as self-organizing systems that maintain cognitive and communicative viability rather than as pathologies of misinformation”. In other words, rather than viewing those ensconced in echo chambers simply as dupes of falsehood, RC suggests seeing these groups as viability-seeking systems: they construct a version of reality that works for them (provides explanatory power, emotional satisfaction, community belonging, etc.), even if it diverges from other communities’ realities or from empirically verified facts. This constructivist reframing has important implications. It reminds scholars and educators that all knowledge (including scientific knowledge) is constructed by human minds (albeit with rigorous methods in the scientific case). It fosters reflexivity: awareness of our own role in constructing the realities we inhabit. In that article I also emphasized the ethical and pedagogical implications of RC, notably the need for reflexivity, second-order observation, and systemic awareness in confronting digital polarization. In this case, second-order observation means observing not just the content of communications but the process, watching how we and others observe, highlighting that we are part of the system we analyze. For example, rather than merely criticizing “those people” in an echo chamber, a constructivist approach invites us to examine our own echo chambers and biases, and to consider how each social group’s reality is coherently constructed from its vantage point. As Heinz von Foerster famously noted, “the environment as we perceive it is our invention”.
Applying RC in media studies thus “makes strange” the assumption that media simply construct the reality for us. Instead, media provide inputs that observers (audiences) actively integrate into their own worldviews. A constructive step forward, suggested by RC scholars, is to design media and educational interventions that promote transparency, pluralism, and responsible meaning-construction (Martinisi, 2025). This could mean, for instance, designing social media or search interfaces that encourage users to encounter multiple perspectives (increasing the diversity of one’s experiential inputs) or teaching learners to be mindful of how they construct reality from the media they consume. By acknowledging that multiple constructed realities coexist, we also underline the importance of dialogue and critical comparison between different perspectives, not to arrive at a single “correct” reality, but to foster mutual understanding and perhaps expand one’s own construction through openness to new information. In sum, radical constructivism enriches media literacy by highlighting that reality is not simply out there to be transmitted; it is in here, actively assembled, and media are among the chief tools and raw materials in that assembly process.
5. Artificial Intelligence as an epistemic media environment.
Artificial intelligence is rapidly transforming the media landscape into something qualitatively new: an epistemic environment where AI not only mediates information but also generates and evaluates it. In the past, media scholars spoke of mass media versus personal media; now, a new generation of media scholars must consider machine-mediated media. Algorithms already decide what is seen in our news feeds, which videos are recommended, and which search results appear, thereby acting as gatekeepers of knowledge. With generative AI (e.g., large language models, deepfake generators), machines have also become content creators, producing text, images, and videos that mimic human-made media. This convergence of AI with media platforms has created a new kind of media environment, one in which artificial agents participate in shaping the information supply, often without transparent disclosure. This can be described as an AI-driven epistemic environment because it fundamentally affects how knowledge is acquired, validated, and shared in society. One key feature of this AI-mediated environment is the personalization (and potential fragmentation) of knowledge. Recommender algorithms on platforms like Facebook, YouTube, or TikTok analyze user behavior to present content tailored to each user’s preferences and biases. While this can enhance user engagement, it also narrows the range of information people receive, reinforcing pre-existing views and creating insular information loops. Studies have noted that such algorithmic filtering contributes to the formation of epistemic “bubbles” or echo chambers where contradictory information is filtered out (Pariser, 2011; Sunstein, 2017). In algorithmically curated spaces, attention and engagement often override truth as the guiding values. For instance, an AI system may learn that sensational or emotionally-charged content keeps users online longer, and thus it might preferentially amplify such content.
The epistemic consequence is that users are disproportionately exposed to extreme or polarizing information, potentially distorting their sense of reality and undermining shared factual baselines. As one recent analysis of disinformation put it, contemporary algorithms “prioritize emotionally provocative or controversial content over accurate information”, facilitating the viral spread of misinformation in pursuit of engagement metrics. In short, AI systems can shape what is accepted as knowledge by controlling information visibility and repetition, effectively exercising power over the epistemic environment in ways that users may not even realize (since the process is often invisible and individualized). Generative AI adds another layer of complexity. With AI models now capable of producing human-like text, images, and even deepfake video, the line between human-generated knowledge and AI-generated output blurs. One concern is the rise of AI-generated misinformation or “hallucinations”: confidently presented false statements that AI chatbots generate. If these fabricated outputs circulate widely (for example, a bogus “news” article entirely written by AI, or a fake quote by a public figure generated by a language model), they contribute to what some scholars call “epistemic pollution” (the contamination of the information environment with false or misleading content). Unlike traditional rumors or falsehoods, AI can produce misinformation at scale and with a veneer of credibility that makes it hard to immediately detect. The epistemic consequences are significant: when the public cannot easily distinguish authentic information from synthetic, the very notion of evidence can be eroded. The risk is entering what has already been termed a “post-truth” condition, where trust in media and even in the idea of knowable truth dwindles.
Indeed, experts warn that the unchecked flood of AI-generated content could “undermine public trust in science” and democratic discourse if left unaddressed. Yet, AI is not only a threat; it is also a tool that, if harnessed ethically, could improve the information environment. For example, AI can help identify patterns of misinformation, assist fact-checkers, or personalize learning in constructive ways. What is clear is that AI has become an integral part of the media ecosystem; we might say we now inhabit a hybrid human-machine media environment. This reality calls for an expanded definition of media literacy: AI literacy as a component of media literacy. To be truly media literate in the 2020s is to understand how algorithms curate your news feed, to recognize when an image or video might be AI-generated, and to grasp the biases embedded in AI systems. In other words, it is to become aware of the epistemic infrastructure underlying our media consumption. Media literacy scholars note that the traditional focus on evaluating content must broaden to include understanding of algorithms and data (Mihailidis & Thevenin, 2013). The epistemic environment shaped by AI can only be navigated safely if users develop a critical consciousness of how AI works, its criteria, its limitations, and its influences on what we come to accept as reality. As a practical step, educational initiatives are starting to integrate “algorithm awareness” and critical data studies into curricula, so that students learn to ask questions like: Why am I seeing this post? How did this information find me? Such reflexive questions echo the constructivist stance (second-order observation) and are vital for maintaining one’s epistemic agency in an AI-mediated world. In sum, artificial intelligence has transformed media into a dynamic, responsive environment, one that holds great promise for knowledge access, but also one that can distort and manipulate knowledge on an unprecedented scale.
The charge moving forward is to cultivate media and AI literacies that enable citizens to continue “making the familiar strange”: to see the strings of the algorithms and the constructed nature of what we often take for granted as the information environment around us.
Conclusion
In an age of smart machines and ubiquitous screens, making the familiar strange again is not just an intellectual exercise but a survival skill for informed citizenship. In this article I traced how media, from the era of broadcast television to the algorithmically tailored feeds of today, create environments that invisibly shape our perceptions, knowledge, and sense of reality. By re-examining media through multiple lenses (media ecology’s focus on environments, radical constructivism’s insight into constructed realities, and critical media literacy’s empowerment of audiences) a clearer and more critical view emerges of the water in which contemporary life swims. The familiar rhythms of media consumption (the evening news, the Facebook scroll, the YouTube autoplay) become “strange” when users pause to ask how these practices format attention, what assumptions underlie their messages, and how they intertwine with power structures. This reflective distance is precisely what is needed for genuine media literacy in the age of AI. Higher education, in particular, has a responsibility to foster this critical perspective. An accessible but academic understanding of media effects and knowledge construction equips students and scholars alike to navigate the complex information ecosystems of contemporary life. It means recognizing, for example, that a trending hashtag on X might be as much a product of algorithmic design as of public sentiment; or that one’s deeply held views may have been reinforced by the self-validating algorithmic loops of an echo chamber. It also means appreciating the positive potential of media and AI (for connecting communities, enhancing learning, and solving problems) while maintaining a healthy skepticism toward their outputs. The aim is consciousness: an aware stance that constantly interrogates how we know what we know.
As McLuhan and others advised, contemporary society must become “competent in using and understanding the dominant media of [our] culture”; today, that includes not only reading newspapers or analyzing films but also decoding algorithms and questioning AI-generated content. Ultimately, a making-the-familiar-strange-again approach restores a measure of human agency. It allows us to step outside the flood of media messages and reflect on their construction, to see the outlines of the water we swim in. Such awareness is the first step toward meaningful action: whether it’s demanding greater transparency from tech platforms, supporting quality journalism and educational media, or simply adjusting our own media diets to better serve our need for reliable knowledge. In the transformed media landscape shaped by artificial intelligence, the tools may still be new, but the imperative remains the same as it was in the era of print or television: to approach media not as passive consumers of messages, but as active, critical participants in constructing meaning. By doing so, persistently questioning and learning, media educators can ensure that media technologies, however advanced, serve to enlighten rather than obscure, and that the knowledge we build collectively is robust and inclusive.
References
Bakshy, E., Messing, S., & Adamic, L. A. (2015). Exposure to ideologically diverse news and opinion on Facebook. Science, 348(6239), 1130–1132.
Carr, N. (2010). The Shallows: What the Internet Is Doing to Our Brains. New York: W.W. Norton.
Foerster, H. von (2003). Understanding Understanding: Essays on Cybernetics and Cognition. New York: Springer.
Gerbner, G., Gross, L., Morgan, M., & Signorielli, N. (1994). Growing up with television: The cultivation perspective. In J. Bryant & D. Zillmann (Eds.), Media Effects: Advances in Theory and Research (pp. 17–41). Hillsdale, NJ: Lawrence Erlbaum.
Hall, S. (1980). Encoding/decoding. In S. Hall, D. Hobson, A. Lowe, & P. Willis (Eds.), Culture, Media, Language (pp. 128–138). London: Unwin Hyman.
Martinisi, A. (2025). Author’s response: Applying radical constructivism to the study of online echo chambers. Constructivist Foundations, 21(1), 75–79.
McLuhan, M. (1964). Understanding Media: The Extensions of Man. New York: McGraw-Hill.
McLuhan, M. (1969). Counterblast. New York: Harcourt Brace & World.
Mihailidis, P., & Thevenin, B. (2013). Media literacy as a core competency for engaged citizenship in participatory democracy. American Behavioral Scientist, 57(11), 1611–1622.
Pariser, E. (2011). The Filter Bubble: What the Internet Is Hiding from You. New York: Penguin.
Postman, N. (1985). Amusing Ourselves to Death: Public Discourse in the Age of Show Business. New York: Viking.
Simon, H. A. (1971). Designing organizations for an information-rich world. In M. Greenberger (Ed.), Computers, Communications, and the Public Interest (pp. 37–72). Baltimore: Johns Hopkins Press.
Sunstein, C. R. (2017). #Republic: Divided Democracy in the Age of Social Media. Princeton, NJ: Princeton University Press.
Williams, R. (1974). Television: Technology and Cultural Form. New York: Schocken.