Abstract
This article proposes ecomedia literacy as a critical framework for exploring the intricate relationship between artificial intelligence (AI), media, and the environment. The authors argue for a nuanced awareness of AI’s symbolic, discursive, and material impacts on the planet, advocating for responsible and ethical engagement with emerging technologies. They explore the role of media education in reorienting priorities towards the common good and cultivating a deeper connection to the environmental commons. The article critiques the optimistic discourse surrounding AI in the popular press and highlights the need for a more critical understanding of AI’s environmental footprint and its broader material conditions. The authors present ecomedia literacy as a holistic approach that extends beyond content and representation to include technological infrastructure and sustainability considerations. Through a case study of ChatGPT, they demonstrate how ecomedia literacy can complicate discourse and ideas surrounding AI within a media education context. This article offers a framework for ecomedia literacy that could serve as a foundational concept for navigating the challenges posed by AI and digital media, contributing to the preservation of our shared environmental heritage and the promotion of the common good.
Keywords
Ecomedia Literacy, Artificial Intelligence, AI Literacy, Commons, ChatGPT

Introduction
Ecomedia literacy offers a critical framework for examining the complex interplay between artificial intelligence (AI), media, and the environment. As a method of inquiry, it can yield valuable insights into the symbolic, discursive, and material impacts of AI on our planet and encourage more responsible and ethical engagement with emerging technologies. Ecomedia literacy combines various critical domains of scholarship, such as systems thinking, ecomedia studies, ecojustice, and critical media literacy (López, 2021). It recognizes the ecological implications of media technologies, emphasizing the need for awareness of the environmental footprint and sustainability considerations associated with the technological aspects of media production and consumption. Furthermore, ecomedia literacy encourages a shift toward recognizing media’s connection to environmental commons like air, water, and shared ecosystems, promoting a more holistic understanding of media’s role in our interconnected world. Through this lens, this article explores how media education can be reoriented to prioritize the common good and develop a deeper connection to the environmental commons. We invite readers to consider how a collective commitment to the commons can inform more sustainable and equitable practices in the digital age, ultimately contributing to the preservation and revitalization of our collective environmental legacy.
As it stands, popular press discourses surrounding the relationship of AI to education and the environment provide a limited interpretation of the issues at hand. In the case of AI and education, news reports indicate that school responses are diverse (e.g., Singer, 2023a, 2023b). On one hand, issues of ethics, academic integrity, and dishonesty result in bans (e.g., NBC News, 2023; Singer, 2023b) or surveillance tactics to detect unapproved use (e.g., Fowler, 2023; NBC News, 2023). On the other, possibilities point toward progress and efficiency in integrating generative AI as a learning tool (e.g., Extance, 2023; Klein, 2023; NBC News, 2023) or to support teachers’ work (e.g., Kuiper, 2023; Liu et al., 2023; Tong, 2023).
In terms of environment, stories of AI improving the agricultural industry (e.g., Gonzalez, 2023; Hollis, 2023) and commentary on AI’s impact on climate change predominate. Although some coverage takes an ambivalent tone toward the benefits of AI compared to its environmental costs (e.g., Funes, 2023; Wall Street Journal, 2023), and only a handful of stories address how AI has set back Big Tech’s sustainability goals (Rathi, 2024; Rathi & Bass, 2024), a perusal of news coverage shows largely optimistic discourse about the ability of AI to solve climate change-related problems, including energy efficiency (e.g., Marr, 2023), large-scale data analysis (e.g., Chasan, 2023), predictive modeling for disasters like wildfires (e.g., Large, 2023) and hurricanes (e.g., Barber, 2023), wildlife preservation (e.g., Jacobo, 2023), and citizen science initiatives (e.g., Horn-Muller, 2023).
Concerning both education and environment, popular press perspectives appear to uphold a generally positive take on AI. Scholar Benedetta Brevini (2021) argues that this tone reflects broader discourse: “This portrayal of AI as a benevolent deity has a crucial effect: it obfuscates the materiality of the infrastructures and devices that are central to AI’s functioning” (p. 8). Thus, ecomedia literacy can aid in understanding where and how an environment-focused approach to media education can elucidate a more critical, holistic understanding of the symbolic, discursive, and material impacts of AI.
AI for Sustainability and Sustainability of AI
Many AI technologies exist and permeate everyday human experiences. Artificial general intelligence (AGI), which would be able to perform tasks with human-level abilities and intelligences, is still a theoretical pursuit (Shevlin et al., 2019). The technologies available today are narrow AI, which perform limited tasks, solve problems, and expand human abilities (Martinez et al., 2019). Machine learning techniques in narrow AI allow for processing and sense-making of data (Mackenzie, 2015). One such technique, deep learning, uses multiple layers and hierarchies to process large amounts of data in ways that mimic human neural networks (Strauß, 2018). Natural language processing, another narrow AI technique, allows computers to understand how humans communicate through speech and writing (Brooks, n.d.). ChatGPT, the popular chatbot tool from OpenAI, uses deep learning algorithms and natural language processing to enable it to mimic human-like interactions with users.
Current literature on AI and the environment reflects two main branches: AI for sustainability and sustainability of AI (van Wynsberghe, 2021). AI for sustainability refers to the use of AI programs and platforms to further sustainability messages, goals, organizations, and work. For example, Goralski and Tan (2020) conducted a case study of AI’s use for water management, plant disease diagnosis, and water contamination detection, while Nishant et al. (2020) proposed processing massive datasets through AI without “the bias of human emotions and information asymmetry” (p. 9) to direct environmental governance, improve industry processes, and reduce harm risks in resource extraction and disposal. The idea of AI for sustainability carries such weight that after an open letter called for a pause on large-scale AI experiments until better risk and safety regulations are in place (Future of Life Institute, 2023), a rebuttal argued that an AI pause might adversely affect climate change research and progress (Larosa et al., 2023).
Sustainability of AI refers to the investigation of AI’s environmental footprint and how to address its material demands and planetary harm. Some call this Green AI, an approach to AI that strives toward a decreased environmental footprint (Schwartz et al., 2020). Within this vein, most research focuses on the energy demands of AI and its emissions, while very few studies address the “holistic impact that AI has on the natural environment” (Verdecchia et al., 2023, p. 16). Studies investigating energy demands may suggest systemic changes, like Henderson et al.’s (2020) proposal for energy efficiency in hardware, models, and testing environments; consistency and standardization in reporting carbon costs; and badging or credentialing AI based on energy usage. Or they may predict future impacts, like de Vries’s (2023) accounting of AI’s increasing energy footprint and the best—and worst—environmental outcomes. Analyzing emissions can also help quantify the carbon output of AI programs. For example, Strubell et al. (2019) found that training one type of machine learning model produced emissions on par with a trans-American flight. This strand of research has also produced AI-carbon calculators, such as carbontracker (https://github.com/lfwa/carbontracker) and ML CO2 Impact (https://mlco2.github.io/impact/).
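Tools like carbontracker make such accounting concrete for learners. The following minimal sketch follows the usage pattern documented in the carbontracker repository: the tracker wraps a training loop so that estimated energy use and CO2 emissions are logged per epoch. The training step itself is a placeholder standing in for any model, not a real workload.

```python
# Minimal sketch of carbontracker's documented usage pattern.
# The "training" step below is a placeholder, not a real model.
from carbontracker.tracker import CarbonTracker

max_epochs = 5
tracker = CarbonTracker(epochs=max_epochs)

for epoch in range(max_epochs):
    tracker.epoch_start()
    # ... one epoch of model training would run here ...
    tracker.epoch_end()

# Ensure measured consumption and the resulting CO2 estimate are reported,
# even if training stops early.
tracker.stop()
```

Even a toy loop like this surfaces an otherwise invisible fact for students: computation draws measurable power from a specific energy grid, and that draw can be logged and compared.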
The less common holistic approach engages the investigation of broader material conditions and implications (van Wynsberghe et al., 2022). In one such example, Brevini and Doctor (2023) develop a political economy of AI, calling attention to the extractive characteristics of the technology, which tear through natural resources, destroy habitats, and exploit human health and labor. In another example, Kelly’s (2022) study connects the impacts of AI data centers, including water use, energy use and fuel sources, and emissions; physical devices, including manufacturing, consumption, and e-waste; environmental justice, including resource extraction and e-waste dumping in countries of the Global South while disproportionately benefitting the Global North; and recommendations for the future, including AI governance through governmental regulation. While an important recommendation, governmental regulation is only one piece of the AI-environment puzzle. Education about AI and its relation to the planet is another essential component, especially as more people use AI tools. Media education is a space to address such issues.
Although research shows that only three of eleven global digital literacy frameworks address AI, and those in insufficient and out-of-date ways (Tiernan et al., 2023), media education is essential in the age of AI: “it is a no-brainer that critical media literacy needs to involve deep understanding of issues pertaining to data and AI” (Jandrić, 2019, p. 34). Media education about AI can support understanding the technology as a tool (Ciampa et al., 2023; Valtonen et al., 2019), addressing misinformation (Ali et al., 2021; Chiang et al., 2022a, 2022b; Masood, 2023; Sperry, n.d.), integrating ethics (Luttrell et al., 2020; Masood, 2023), and renegotiating notions of literacy (Selwyn, 2022). For example, in their meta-analysis of 30 peer-reviewed studies that defined AI literacy, Ng et al. (2021) found that knowing and understanding AI, using and applying AI, evaluating and creating with AI, and integrating AI ethics are the central themes in how AI literacy is theorized. Notably, these authors found that AI ethics is addressed through “human-centered considerations (e.g., fairness, accountability, transparency, ethics, safety)” (Ng et al., 2021, p. 4). This anthropocentric outlook on AI ethics excludes environment, planet, and broader conceptions of life and the common good from the AI conversation. In order to integrate civic responsibility and a perspective that understands the common good to extend beyond humans alone, more critical approaches to AI literacy and media education might engage posthumanist approaches (Jandrić, 2019; Leander & Burriss, 2020) and media ecology frameworks (Berger et al., 2019), as we aim to do in this article.
While there is scholarship on AI and environment, and on AI and media education, to our knowledge, academic work on media education about AI’s relation to the environment is not readily accessible or available. One of the few examples is López and Frenkel’s (2022) piece on the dual extraction of algorithmic surveillance capitalism (data harvesting and material resource extraction) that contributes to carbon capitalism. This large gap in the literature is an opportunity to expand the application of the ecomedia heuristic to how we think about, talk about, and use AI. Why does this gap matter, and why do we suggest this approach is necessary? Because technologies are not neutral. Technologies change the environment they enter: the social and cultural environment (e.g., Carey, 1967; McLuhan, 1964; Postman, 2000), the natural world (e.g., Gabrys, 2011; Jancovic & Keilbach, 2023; Maxwell & Miller, 2012), and the intersection of the two (e.g., Bast et al., 2022; Kääpä, 2018; Olesen, 2020; Veltri & Atanasova, 2015).
The Ecomedia Commons and Ecojustice
“Ecomedia” is a redefined and expanded conception of media within an ecological context that recognizes the reciprocal relationship between media and the environment (López & Ivakhiv, 2024). This holistic approach not only considers the content and representation of environmental issues but also extends to the technological infrastructure supporting media systems. Viewed as an infrastructure, media can more clearly be conceived of as an “ecomedia commons,” which connects two key ideas. First, the environmental commons— “all that we share”—exists physically as our shared planetary ecosystems (atmosphere, oceans, hydrologic cycles, etc.). Second, the media commons is akin to a digital public sphere, which enables us to stay informed and take action about environmental issues. This framing of ecomedia as a commons aligns with ecojustice principles by emphasizing the interconnectedness of media systems, social structures, and ecological well-being, challenging us to consider the broader impacts of our media practices in both human communities and the natural world.
The “5Es” (López & Ivakhiv, 2024) is a framework that allows us to probe deeper into the ecojustice dimension of the ecomedia commons. Enclosure involves scrutinizing the privatization of the commons and of ways of understanding the world. Extraction centers on the removal and use of resources, energy, data, and attention, prompting an examination of how these processes contribute to environmental degradation and social disparities. Expendability highlights ecosystems and people in sacrifice zones, emphasizing the need to address the disproportionate impact of environmental harm on marginalized communities. Exploitation underscores the use and misuse of labor and land, particularly in the production of “cheap things” (Patel & Moore, 2018), urging a reevaluation of systems that exploit both human and natural resources. Finally, externalization addresses the effects of destruction and its costs to peoples and environments. Together, these five characteristics make up what can be described as a framework for “extractive media” and serve as an analytical perspective for inquiring into AI’s impact on the environment. The ecomedia commons highlights the tension between the internet’s potential for knowledge connection and the commercialization of digital spaces. This battle for control shapes how we value and interact with the commons. An eco-conscious response involves pushing back against the enclosure of both physical and digital spaces, prioritizing shared benefit and an ethic of care towards the environment.
Ecomedia Literacy and AI: ChatGPT as a Case Study
Based on this foundational understanding and the established literature, the following research questions drive this case study. Recognizing the proliferation of AI into the average person’s daily life, where does (or should) the planet fit into the AI narrative? Recognizing that instrumentalism, technological progress, modernity, and, in some cases, techno-utopianism feed the underlying epistemology of dominant approaches to media education (Berger et al., 2019; Cappello, 2017), how can ecomedia literacy support a more complex, nuanced, and ecological narrative of AI? And lastly, recognizing the generative and consumptive potential of AI, how can ecomedia literacy aid in understanding the representational, discursive, and material implications of AI for the planet in media education? The following case study of ChatGPT shows how an ecomedia literacy approach can complicate the discourse and ideas surrounding AI, particularly within a media education context.
One method of ecomedia literacy analysis is a heuristic tool, the ecomediasphere, designed to support a holistic exploration of the symbolic, cultural, material, phenomenological, and ideological dimensions of media (López, 2021). The unit of analysis is an “ecomedia object,” which can be a media text, platform, gadget, or hyperobject (a conceptual entity that extends across time and space, involving numerous interconnected elements). Ecomedia objects are conceptualized as “boundary objects,” which possess agreed-upon characteristics but change meaning and function according to context and usage. Using the ecomediasphere heuristic, the ecomedia object is examined from the perspective of four distinct zones: ecoculture, political ecology, ecomateriality, and lifeworld. For instance, when studying climate disinformation as a hyperobject, the ecocultural perspective focuses on values shaping the climate crisis debate, political ecology delves into the systemic drivers of disinformation, ecomateriality scrutinizes medium properties and the material infrastructure driving disinformation, and lifeworld addresses cognitive dispositions and sensory experience. But because it is an iterative “circuit,” each zone engages the others, and in some cases they overlap considerably. For example, worldviews embedded in ecoculture are what drive political ecology (the design of a system). This in turn connects with decisions about how products and services impact environments or human health. Likewise, the state of an environment and one’s relationship to it impact one’s worldview, and so on.
This section models a brief ecomediasphere analysis that could be used in an educational setting to demonstrate how ecomedia literacy can be applied to analyze the environmental impacts of AI. In March 2023, OpenAI’s website was studied as a platform to identify how it represents itself through images and text, and the free version of ChatGPT 3.5 was queried with various prompts based on the four zones of inquiry.
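The queries were entered through ChatGPT’s standard web interface. For readers who want to replicate or extend this kind of zone-based questioning programmatically, a sketch along the following lines could be used. It relies on OpenAI’s official Python client (which requires an API key), and the prompts are condensed, illustrative versions of the questions discussed below rather than the authors’ exact instrument.

```python
# Illustrative sketch: posing ecomediasphere-zone questions to a chat model
# via OpenAI's Python client. Requires the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# One condensed prompt per zone of the ecomediasphere heuristic.
zone_prompts = {
    "ecoculture": "How do you impact ecoculture?",
    "political ecology": "What is your political ecology?",
    "ecomateriality": "What is the environmental impact of your output?",
    "lifeworld": "How do you shape users' sense of connection to the environment?",
}

for zone, prompt in zone_prompts.items():
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stands in for the free ChatGPT 3.5 interface
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {zone} ---")
    print(response.choices[0].message.content)
```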
Ecoculture
The term ecoculture “corresponds with meaning, values, lifestyles, identity formations, ways of knowing, and rituals and practices mediated through shared interpretations and sense-making practices” (López et al., 2024, p. 9). This entails examining how shared beliefs are expressed through symbols, discourses, and narratives in media. OpenAI’s logo is geometric and flower-like, with soft edges, interconnected chain links, and a swirl design reminiscent of symbols of eternal life and infinity. The video set-piece posted on the website displays a relaxed office environment in an urban setting with the interior design of a living room that incorporates non-threatening elements such as earthy colors, natural fibers, abundant houseplants, informal atmosphere (non-businesslike), a subtle touch of technology (without visible servers and emphasizing minimal environmental impacts), a youthful vibe, cosmopolitan aesthetics, and multicultural representation. The people in the video are primarily gendered as female.
OpenAI’s website stresses words and phrases like “participation,” “safety,” and “inclusion.” It describes GPT-4 as “creative” and “collaborative,” couching it in language that emphasizes that it is a tool that you use (and by implication, it does not use you). The language on the website aims to humanize the product and convey its ethical stance by emphasizing its utility for the world and benefit to everyone, its contribution to a better quality of life, its facilitation of “flourishing thoughts” and ideas, its acknowledged imperfections, its capacity to amplify individual abilities and enhance productivity for an improved quality of life, widespread participation, inclusive learning, and alignment with societal values. On this last point, “alignment” is a key term in AI discourses, which refers to the design and development of AI systems to ensure they behave in ways consistent with human values and objectives, minimizing the risks of unintended consequences, and ensuring ethical and responsible use (Gabriel, 2020).
When ChatGPT was queried about how it impacts ecoculture, it first stated that, based on its last “knowledge update” in 2021, it was not aware of any specific impact. It then offered a caveat: if the question concerned the intersection of ecology and culture, it plays a crucial role in raising awareness about environmental issues and promoting sustainable practices by answering questions and providing information on topics like renewable energy and climate change, encouraging environmentally conscious choices, and promoting cross-cultural collaboration on global environmental challenges through its multilingual communication capabilities. ChatGPT’s optimistic wording mirrors the positive language that OpenAI deploys on its website to describe its products and services.
AI critics note that large language models (LLMs) like ChatGPT have knowledge formations coming from opaque datasets (Edwards, 2023; Zhou et al., 2023). The concern is that they tend not to include the voices of women, older people, and marginalized groups, especially Indigenous peoples who are not incorporated into systems of institutional knowledge. Algorithmic codes “operate within powerful systems of meaning that render some things visible, others invisible and create a vast array of distortions and dangers” (Benjamin, 2019, p. 7). Ecomedia studies emphasize ecocultures in their plurality because there are diverse ways that cultures shape and are shaped by ecomedia (López et al., 2024, p. 9). This follows from efforts to “provincialize” discourses around the Anthropocene and to recognize the critique from Black and Indigenous scholars that media studies, and academia more generally, have universalized, through a process of “coloniality,” modernist knowledge constructs as contextless, timeless, placeless, universal, general, and abstract (Mignolo, 2007; Quijano, 2007; Towns, 2022).
The lack of diversity in ChatGPT’s training data likely impacts the “cultural commons,” the shared cultural resources, knowledge, and creations that are considered part of the public domain or common heritage of a society (Bowers, 2012). The cultural commons encompasses elements that are not owned or controlled by any particular individual or entity but are collectively held and accessible to the public, including various forms of intellectual and creative expressions, such as literature, art, music, folklore, traditions, and political practices. The concept of the cultural commons is often associated with the broader idea of the commons discussed above, which extends beyond culture to include shared resources like air, water, and land. In the context of culture, the cultural commons emphasizes the importance of preserving and sharing cultural heritage for the benefit of society as a whole. It encourages open access to knowledge and creativity, nurturing a sense of cultural belonging and facilitating the exchange of ideas. Efforts to protect the cultural commons often involve debates about intellectual property rights and access to cultural resources. Advocates for the cultural commons argue for policies and practices that balance the interests of creators with the broader societal interest in maintaining a rich and accessible cultural heritage and ensuring traditional ecological knowledge (TEK) remains part of our collective cultural legacy.
The scraping and ingestion of datasets could lead to the enclosure (privatization) of the cultural commons, as well as a flattening of knowledge by eviscerating cultural context. One concern is the possibility of cultural homogenization, where popular content recommended by AI algorithms overshadows diverse and lesser-known cultural expressions, leading to a loss of cultural diversity. Algorithmic bias is another issue, as AI systems may perpetuate stereotypes or discriminatory patterns, potentially resulting in unfair representation or exclusion of certain cultural groups (Cao et al., 2023; Ferrara, 2023). The use of AI in content creation raises challenges related to intellectual property, including questions about attribution, ownership, and fair compensation (Zhuk, 2023). Privacy concerns also emerge, as AI-driven systems processing user data for personalized recommendations may deter individuals from sharing their cultural expressions due to fears of data misuse. Additionally, the authenticity of cultural expressions could be eroded by AI-generated content such as deepfakes or AI-generated art (Masood et al., 2021). The digital divide may widen if access to AI technologies is not evenly distributed, exacerbating disparities in cultural participation and representation. Furthermore, an overreliance on AI algorithms for content curation may lead to filter bubbles, limiting exposure to diverse cultural perspectives. AI’s role in generating content contributes to the formation of cultural narratives around environmental issues, influencing the values embedded in these narratives and shaping users’ cultural understanding of their connection to the environment.
Political Ecology
Political ecology entails studying how economic and power structures design systems and produce impacts on the environment, including the production of ideologies and material goods. When prompted about its political ecology, ChatGPT responded that it is an AI language model designed to process natural language and generate responses without political ideologies, beliefs, or biases. It claims to operate based on statistical patterns in diverse language data, but it warns users to be aware that its responses may reflect biases present in the training data, emphasizing the importance of critical evaluation and consideration of multiple information sources. When asked to ascertain the ecological benefits of AI, ChatGPT identified advantages such as conservation, efficient resource management, precision agriculture, climate modeling, and green energy: “Overall, AI can play an important role in addressing environmental challenges by helping to improve resource management, reduce waste, and provide more accurate predictions and models for environmental conservation and protection” (ChatGPT, 2023).
The environmental assets touted by ChatGPT fall within the anthropocentric spectrum of environmental ideology, which is defined as a set of beliefs, values, attitudes, and principles that shape an individual’s or a society’s understanding of the relationship between humans and the environment (Corbett, 2006). Environmental ideology encompasses perspectives on nature, the use of natural resources, conservation, sustainability, and the overall human ecological impact. Conservationism and preservationism represent mainstream environmentalism and are anthropocentric because they reinforce a status quo beneficial to humans, whereas ecocentric environmental ideologies are transformative, radical, and even anti-capitalist. The environmental solutions promoted by ChatGPT demonstrate how it favors political and institutional prejudices about ecology that form its knowledge base. This raises the question of how it aligns with particular environmental priorities.
In any analysis of political ecology, it is crucial to investigate the funding model of the organizations responsible for producing the materials under examination. The tension between nonprofit and for-profit models has divided many in the AI development world, including the leadership of OpenAI (Radsch, 2023). As the following examples show, nonprofit and for-profit approaches to AI diverge in several fundamental aspects, including their missions, funding models, ownership structures, competitive landscapes, and ethical considerations. In terms of mission and purpose, nonprofits are typically oriented toward serving the public good or addressing specific societal issues. For instance, the United Nations supports the AI for Good initiative, which aims to use AI to solve global challenges. On the other hand, for-profit entities prioritize generating revenue and profit for stakeholders, such as Google AI (a subsidiary of Alphabet), a for-profit research organization that focuses on developing AI technologies with commercial applications. Nonprofits rely on grants, donations, and public funding to sustain operations. The nonprofit organization PathFinders, for example, develops AI-powered solutions for education and poverty alleviation, funded by government grants, corporate sponsorships, and individual donations. In contrast, for-profit organizations generate revenue through product or service sales and licensing, and may secure funding through venture capital or private investment. The AI company DeepMind (acquired by Alphabet) uses its AI technology to develop commercial applications in areas including healthcare and self-driving cars, and raised over $1 billion in venture capital from investors like Google Ventures and Fidelity (Dickson, 2021).
The competitive landscape varies, too. Nonprofits may emphasize collaboration, sharing knowledge, and addressing societal challenges collectively. The nonprofit organization Open Data Institute promotes the use of open data to solve social problems. Their collaborative initiatives involve various organizations and individuals from different sectors. For-profit entities, operating in competitive markets, strive for market share, profitability, and a competitive edge. Google Cloud AI and IBM Watson offer AI solutions for businesses to improve their operations and decision-making. Ethical considerations play a role in both approaches, but nonprofits often prioritize broader societal benefits and ethical AI practices over immediate financial gains. For instance, the nonprofit organization Partnership on AI emphasizes the responsible development and use of AI. For-profit organizations also consider ethics, but commercial pressures may sometimes lead to different trade-offs in decision-making. The AI company Palantir has been criticized for its involvement in military projects, raising concerns about the potential misuse of AI for surveillance and warfare (Musil, 2022).
The AI landscape is dynamic, with emerging hybrid models and collaborations between nonprofit and for-profit entities. OpenAI was founded as a nonprofit organization in 2015 with the stated goal of developing safe and beneficial AGI for the betterment of humanity. However, in 2019, OpenAI transitioned to a capped-profit entity. This means that while the company can attract investment from venture funds and grant employees stakes in the company, profit is capped at 100 times any investment (Coldewey, 2019). The leadership shake-up in 2023 indicates that the two approaches carried contradictory missions (Radsch, 2023). OpenAI’s largest investor, Microsoft, secured the third spot among the world’s most valuable technology brands in 2022, with a brand value estimated at $611 billion USD (Alsop, 2023). It is likely that Microsoft would not invest in OpenAI if it did not reproduce the global economic and political status quo that enriches the company. OpenAI’s infrastructure depends on an extractive economic system for energy and resources, but it also extracts knowledge by scraping datasets and copyrighted materials. And it relied on exploited and abused labor in the process of training and tagging its data (Perrigo, 2023). As a machine of ideology, it replicates taken-for-granted, mainstream, institutional assumptions about how to solve the environmental crisis, in the same way that Microsoft promotes an environmental sustainability agenda that reinforces its business model. Dependence on the status quo of the extractivist global supply chain for its technology, machines, and energy means more business as usual when it comes to externalizing the negative costs of environmental destruction and relying on the expendable ecosystems and disposable peoples required to keep the system functioning.
Ecomateriality
Ecomateriality explores the material connections between media and the environment. This includes infrastructural elements like cables, satellites, electromagnetic energy, and server farms, as well as ecological aspects such as mining, manufacturing, energy consumption, and waste production and disposal. When queried about the environmental impact of its output, ChatGPT responded by stating,
As an AI language model, I do not have any physical presence or environmental impact. I exist solely as a software program and do not consume resources or generate any waste or pollution… I do not have the ability to have an impact on the environment, and my primary function is to provide information and support to users. (ChatGPT, 2023)
It added, “If you have concerns about the environmental impact of the technology you are using, I recommend consulting with the manufacturer or service provider for more information.” When asked about the physical infrastructure that runs it, it responded: “As an AI language model, I do not have a physical body or infrastructure. I exist solely as a software program and am able to function through the use of computer hardware and software systems” (ChatGPT, 2023). It went on to state that it relies on devices with adequate processors and memory, such as computers or smartphones, to run the necessary software for processing language data and generating real-time responses. This reply minimizes the intricate network of hardware and software components supporting its functionality, including processors, memory, storage, and various software tools, with specific configurations varying across devices while maintaining its core purpose of understanding and generating human-like language for user assistance.
Finding specific information on OpenAI’s infrastructure is not easy; however, the company relies on Microsoft’s Azure AI supercomputers for processing (Langston, 2020). Studies of energy consumption estimate that training and running ChatGPT requires a massive amount of computing power housed in energy-intensive data centers. These centers generate a considerable carbon footprint, with estimates suggesting that ChatGPT emits 8.4 tons of CO2 per year, the equivalent of driving an average gasoline-powered passenger vehicle about 20,700 miles, based on EPA estimates (McLean, 2023). Cooling these data centers consumes a considerable amount of water: one study revealed that training GPT-3, a predecessor to ChatGPT, used around 700,000 liters of freshwater, equivalent to the amount required to build several hundred cars (McLean, 2023). The production and disposal of the electronic devices used to train and operate ChatGPT generate electronic waste, which can contain hazardous materials (Kelly, 2022). The only references to environmental sustainability on the OpenAI website were in the community forum, in user-generated discussions about topics related to sustainability.
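The car-mileage equivalence can be checked with simple arithmetic. Assuming the EPA’s figure of roughly 404 grams of CO2 per mile for an average gasoline-powered passenger vehicle (the approximate value behind EPA equivalency estimates), the cited 8.4 tons per year corresponds to

\[
\frac{8.4\ \mathrm{t\,CO_2} \times 10^{6}\ \mathrm{g/t}}{404\ \mathrm{g\,CO_2/mile}} \approx 20{,}800\ \mathrm{miles},
\]

which is consistent with the roughly 20,700 miles reported by McLean (2023).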
To minimize the environmental impact of AI, strategies include using energy-efficient hardware and optimizing data centers to reduce power consumption, while also employing renewable energy sources like solar and wind. Other crucial steps include designing more efficient AI models and algorithms that require fewer computational resources, reusing and fine-tuning existing models instead of training new ones from scratch, and considering the full lifecycle environmental costs of AI systems, including manufacturing and e-waste. Implementing responsible AI policies, educating employees on sustainable practices, and choosing smaller, domain-specific models over large general ones when appropriate can also help. Monitoring and measuring the carbon footprint of AI activities, pushing for transparency from AI vendors and cloud providers, and using AI only when necessary, considering simpler technologies for the same goals, are additional strategies to reduce the environmental impact of AI. By taking a holistic approach that encompasses energy use, hardware, model design, and organizational practices, companies can significantly lower the carbon footprint of their AI initiatives while still benefiting from its capabilities (Kanungo, 2023). Finally, AI companies can be better regulated by governments to mitigate their environmental impacts.
Lifeworld
Lifeworld delves into how media users and audiences sense and experience their connection to the environment by engaging with various forms of media and technology. Through instantaneous and personalized responses, ChatGPT impacts how individuals engage with and perceive environmental information. The personalized framing of content by AI algorithms influences users’ lifeworlds by presenting information in alignment with their preferences and beliefs, but the formation of filter bubbles and echo chambers poses a challenge by limiting exposure to diverse perspectives. As noted by Ridley and Pawlick-Potts (2021), “Algorithms provide a structure that frames – and constrains – how we express ourselves. They are a way of seeing and acting in the world…” (p. 2). OpenAI and its partner Microsoft are in the business of codifying the kinds of information and media content that count as knowledge.
One concern of ecomedia scholars is the impact of these technologies on our attention, especially since a sense of groundedness in space, place, and time is considered an essential precondition for driving climate action and a sense of care for the environment (Citton, 2017; Rauch, 2023). Reliance on technologies like LLMs for our knowledge can lead to inattention and a lack of awareness of the materiality and infrastructure necessary for these systems to function. The danger of over-relying on tools like ChatGPT is that they offer truth without evidence, contributing more generally to the epistemic crisis – the inability to know what is true or real – caused by algorithms and the spread of disinformation (Benkler et al., 2018).
Conclusion: Navigating the Ecological Dynamics of AI through Neil Postman’s Lens
In our examination of AI through ecomedia literacy, it’s worth revisiting Neil Postman’s five characteristics of technological change (Postman, 1998). Postman’s primary assertion that “all technological change is a tradeoff” resonates with our analysis of ChatGPT. The advantages brought forth by AI, such as efficiency, data processing capabilities, and innovative problem-solving, come at a price: an ecological footprint that runs from energy-intensive data centers to electronic waste. The uneven distribution of advantages and disadvantages, also illuminated by Postman, is demonstrated by our analysis of AI’s environmental impacts and labor exploitation. As noted by ecojustice advocates, the winners and losers in the AI landscape are not evenly distributed across populations, raising questions about who benefits and who bears the brunt of ecological repercussions.
At the heart of every technology, Postman says, lies a powerful idea. The powerful ideas embedded in AI, from algorithmic decision-making to the perpetuation of cultural and institutional biases, underscore that as AI becomes a driving force in shaping knowledge and behavior, it becomes imperative to unveil and address the prejudices ingrained in its algorithms. Postman’s ecological perspective on technological change as non-additive also aligns with our understanding of how AI’s impact is not confined to its immediate applications but ripples across the entire ecosystem of media and environment. Finally, Postman’s observation that technology tends to become mythic and naturalized into the order of things correlates with the rhetoric of AI as an inevitable and irreplaceable addition to the world.
Synthesizing Postman’s insights with our examination of AI and ecomedia literacy, the iterative exploration of ecoculture, political ecology, ecomateriality, and lifeworld in the context of AI reveals tradeoffs, biases, and ecological consequences in a way reminiscent of Postman’s cautionary approach. The precautionary principle, which advocates preventive measures in the face of uncertain environmental or public health threats, contrasts with Silicon Valley’s disruption ethos to “move fast and break things.” In the case of OpenAI as an industry leader, it remains to be seen if it retains any of its “open” aspirations, or if it will more resemble a “ClosedAI” through its process of enclosure, privatizing knowledge and constraining what can or should be known about the world for profit. To address these concerns, it is crucial to implement ethical AI practices, ensure diverse representation in AI development teams, and actively engage with communities to understand and address cultural considerations related to AI technologies (IAMCR, 2023; UNESCO, 2022). At the local level and in the context of media education, applying the ecojustice framework and ecomediasphere heuristic of ecomedia literacy allows us to complicate the good/bad binary conversation surrounding AI, to integrate opportunities to investigate the ethics of its production and consumption, to investigate representation and meaning through symbols and discourse, to unveil issues of environmental justice, and to critique its planetary extraction and consumption.
OpenAI’s discourse about its AI tools raises several questions: Who are these tools for? Who do “we” and “us” refer to in their rhetoric? Are we good enough? Do we need improvement, more efficiency, and productivity? Does this make us more productive, and for what? Do we really need this? In response, it’s worth turning to the well-known “stochastic parrots” paper, which argues that AI development should include
weighing the environmental and financial costs first, investing resources into curating and carefully documenting datasets rather than ingesting everything on the web, carrying out pre-development exercises evaluating how the planned approach fits into research and development goals and supports stakeholder values, and encouraging research directions beyond ever larger language models. (Bender et al., 2021, p. 610)
Finally, when considering the wording of “Natural Language Understanding” (NLU), which underlies LLMs like ChatGPT, it is worth exploring what is meant by “natural.” If it is true that AI has a way of flattening and homogenizing descriptions of the world, what makes it “natural”?
References
Ali, S., DiPaola, D., Lee, I., Sindato, V., Kim, G., Blumofe, R., & Breaseal, C. (2021). Children as creators, thinkers and citizens in an AI-driven future. Computers and Education: Artificial Intelligence, 2. https://doi.org/10.1016/j.caeai.2021.100040
Alsop, T. (2023). Top 20 technology brands worldwide 2022. Statista. https://www.statista.com/statistics/267966/brand-values-of-the-most-valuable-technology-brands-in-the-world/
Barber, G. (2023, September 27). AI hurricane predictions are storming the world of weather forecasting. Wired. https://www.wired.com/story/ai-hurricane-predictions-are-storming-the-world-of-weather-forecasting/
Bast, D., Carr, C., Madron, K., & Syrus, A. M. (2022). Four reasons why data centers matter, five implications of their social spatial distribution, one graphic to visualize them. Environment and Planning A: Economy and Space, 54(3), 441-445. https://doi.org/10.1177/0308518X211069139
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610-623. https://doi.org/10.1145/3442188.3445922
Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim code. Polity.
Benkler, Y., Faris, R., & Roberts, H. (2018). Network propaganda: manipulation, disinformation, and radicalization in American politics. Oxford University Press.
Berger, E., Logan, R. K., Miroschnichenko, A., & Ringel, A. (2019). MEDIACY: A new way to enrich media literacy. Journal of Media Literacy Education, 11(3), 85-90. https://doi.org/10.23860/JMLE-2019-11-3-8
Bowers, C. A. (2012). The way forward: Educational reforms that focus on the cultural commons and the linguistic roots of the ecological/cultural crises. Eco-Justice Press.
Brevini, B. (2021). Is AI good for the planet? Polity.
Brevini, B., & Doctor, D. (2023). Carbon capitalism, communication, and artificial intelligence. In A. López, A. Ivakhiv, S. Rust, M. Tola, A. Y. Chang, & K. Chu (Eds.), The Routledge handbook of ecomedia studies (pp. 171–178). Routledge. https://doi.org/10.4324/9781003176497-21
Brooks, R. (n.d.). The role of natural language processing in AI. University of York. https://online.york.ac.uk/the-role-of-natural-language-processing-in-ai/
Cao, Y., Zhou, L., Lee, S., Cabello, L., Chen, M., & Hershcovich, D. (2023). Assessing cross-cultural alignment between ChatGPT and human societies: An empirical study. https://doi.org/10.48550/ARXIV.2303.17466
Cappello, G. (2017). Literacy, media literacy and social change. Where do we go from now? Italian Journal of Sociology of Education, 9(1), 31-44. https://doi.org/10.14658/PUPJ-IJSE-2017-1-3
Carey, J. W. (1967). Harold Adams Innis and Marshall McLuhan. The Antioch Review, 27(1), 5-39. https://doi.org/10.2307/4610816
Chasan, A. (2023, August 26). Some experts see AI as a tool against climate change. Others say its own carbon footprint could be a problem. CBS News. https://www.cbsnews.com/news/artificial-intelligence-carbon-footprint-climate-change/
Chiang, T. H. C., Liao, C-S., & Wang, W-C. (2022a). Impact of artificial intelligence news source credibility identification system on effectiveness of media literacy education. Sustainability, 14(8), 1-16. https://doi.org/10.3390/su14084830
Chiang, T. H. C., Liao, C.-S., & Wang, W.-C. (2022b). Investigating the difference of fake news source credibility recognition between ANN and BERT algorithms in artificial intelligence. Applied Sciences, 12, 1-15. https://doi.org/10.3390/app12157725
Ciampa, K., Wolfe, Z. M., & Bronstein, B. (2023). ChatGPT in education: Transforming digital literacy practices. Journal of Adolescent & Adult Literacy, 67(3), 186-195. https://doi.org/10.1002/jaal.1310
Citton, Y. (2017) The ecology of attention. Polity.
Coldewey, D. (2019, March 11). OpenAI shifts from nonprofit to ‘capped-profit’ to attract capital. TechCrunch. https://techcrunch.com/2019/03/11/openai-shifts-from-nonprofit-to-capped-profit-to-attract-capital/
Corbett, J. B. (2006). Communicating nature: How we create and understand environmental messages. Island Press.
de Vries, A. (2023). The growing energy footprint of artificial intelligence. Joule, 7(10), 2191-2194. https://doi.org/10.1016/j.joule.2023.09.004
Dickson, B. (2021, October 10). AI lab DeepMind becomes profitable and bolsters relationship with Google. VentureBeat. https://venturebeat.com/ai/ai-lab-deepmind-becomes-profitable-and-bolsters-relationship-with-google/
Edwards, B. (2023, October 23). Stanford researchers challenge OpenAI, others over AI transparency in new report. Ars Technica. https://arstechnica.com/information-technology/2023/10/stanford-researchers-challenge-openai-others-on-ai-transparency-in-new-report/
Extance, A. (2023). ChatGPT has entered the classroom: How LLMs could transform education. Nature, 623, 474-477. https://doi.org/10.1038/d41586-023-03507-3
Ferrara, E. (2023). Should ChatGPT be biased? Challenges and risks of bias in large language models. First Monday, 28(11). https://doi.org/10.5210/fm.v28i11.13346
Fowler, G. A. (2023, June 2). Detecting AI may be impossible. That’s a big problem for teachers. The Washington Post. https://www.washingtonpost.com/technology/2023/06/02/turnitin-ai-cheating-detector-accuracy/
Funes, Y. (2023, November 17). The secret environmental cost hiding inside your smart home device. The Verge. https://www.theverge.com/2023/11/17/23951196/smart-home-ai-data-electricity-fossil-fuel-climate-change
Future of Life Institute. (2023, March 22). Pause giant AI experiments: An open letter. https://futureoflife.org/open-letter/pause-giant-ai-experiments/
Gabriel, I. (2020). Artificial intelligence, values, and alignment. Minds and Machines, 30(3), 411–437. https://doi.org/10.1007/s11023-020-09539-2
Gabrys, J. (2011). Digital rubbish: A natural history of electronics. University of Michigan Press.
Gonzalez, W. (2023, February 2). How AI is cropping up in the agriculture industry. Forbes. https://www.forbes.com/sites/forbesbusinesscouncil/2023/02/02/how-ai-is-cropping-up-in-the-agriculture-industry/
Goralski, M. A., & Tan, T. K. (2020). Artificial intelligence and sustainable development. The International Journal of Management Education, 18. https://doi.org/10.1016/j.ijme.2019.100330
Henderson, P., Hu, J., Romoff, J., Brunskill, E., Jurafsky, D., & Pineau, J. (2020). Toward the systematic reporting of the energy and carbon footprint of machine learning. Journal of Machine Learning Research, 21, 1-43. http://jmlr.org/papers/v21/20-312.html
Hollis, J. (2023, November 29). 3 ways AI can help farmers tackle the challenges of modern agriculture. The Conversation. https://theconversation.com/3-ways-ai-can-help-farmers-tackle-the-challenges-of-modern-agriculture-213210
Horn-Muller, A. (2023, April 17). Google wants you to save coral reefs (with AI’s help). Axios. https://www.axios.com/2023/04/18/google-ai-coral-reefs
International Association for Media and Communication Research (IAMCR). (2023). The role of information and communication sciences in the governance of artificial intelligence. IAMCR. https://iamcr.org/sites/default/files/ai%20statement.pdf
Jacobo, J. (2023, July 26). Mitigating climate change and preserving biodiversity: Several ways AI can be used to help the environment. ABC News. https://abcnews.go.com/US/mitigating-climate-change-preserving-biodiversity-ways-ai-environment/story?id=101590463
Jancovic, M., & Keilbach, J. (2023). Streaming against the environment: Digital infrastructures, video compression, and the environmental footprints of video streaming. In K. van Es & N. Verhoeff (Eds.), Situating data: Inquiries in algorithmic culture (pp. 85-102). Amsterdam University Press. https://doi.org/10.5117/9789463722971_ch04
Jandrić, P. (2019). The postdigital challenge of critical media literacy. The International Journal of Critical Media Literacy, 1(1), 26-37. https://doi.org/10.1163/25900110-00101002
Kääpä, P. (2018). Environmental management of the media: Policy, industry, practice. Routledge. https://doi.org/10.4324/9781315625690
Kanungo, A. (2023, July 18). The Real Environmental Impact of AI. Earth.Org. https://earth.org/the-green-dilemma-can-ai-fulfil-its-potential-without-harming-the-environment/
Kelly, B. (2022). Ethical AI and the environment. The iJournal: Student Journal of the Faculty of Information, 7(2). https://doi.org/10.33137/ijournal.v7i2.38608
Klein, A. (2023, November 14). Schools ‘can’t sit out’ AI, top U.S. Education Department official argues. Education Week. https://www.edweek.org/technology/schools-cant-sit-out-ai-top-u-s-education-department-official-argues/2023/11
Kuiper, M. (2023, November 14). Cedar Valley educators using AI to reduce workload, improve classroom teaching. The Waterloo-Cedar Falls Courier. https://wcfcourier.com/news/local/education/artificial-intelligence-education-high-school-technology/article_2ac655cc-7f37-11ee-8a7c-b7d57bcd447a.html
Langston, J. (2020, May 19). Microsoft announces new supercomputer, lays out vision for future AI work. Microsoft. https://news.microsoft.com/source/features/ai/openai-azure-supercomputer/
Large, S. (2023, July 4). Cal Fire now using artificial intelligence to fight wildfires. CBS News. https://www.cbsnews.com/sacramento/news/cal-fire-now-using-artificial-intelligence-to-fight-wildfires/
Larosa, F., Hoyas, S., García-Martínez, J., Conejero, J. A., Fuso Nerini, F., & Vinuesa, R. (2023). Halting generative AI advancements may slow down progress in climate research. Nature Climate Change, 13, 497-499. https://doi.org/10.1038/s41558-023-01686-5
Leander, K. M., & Burriss, S. K. (2020). Critical literacy for a posthuman world: When people read, and become, with machines. British Journal of Educational Technology, 51(4), 1262–1276. https://doi.org/10.1111/bjet.12924
Liu, J., Demszky, D., & Hill, H. C. (2023, August 14). AI can make education more personal, yes, really. Education Week. https://www.edweek.org/leadership/opinion-ai-can-make-education-more-personal-yes-really/2023/08
López, A. (2021). Ecomedia literacy: Integrating ecology into media education. Routledge.
López, A., & Frenkel, O. (2022). Algorithms and climate: An ecomedia literacy perspective. The Journal of Media Literacy. https://ic4ml.org/journal-article/are-big-tech-algorithms-good-for-the-planet-an-ecomedia-literacy-perspective/
López, A., & Ivakhiv, A. (2024). When do media become ecomedia? In A. López, A. Ivakhiv, S. Rust, M. Tola, A. Y. Chang, & K. Chu (Eds.), The Routledge handbook of ecomedia studies (pp. 19–34). Routledge.
López, A., Ivakhiv, A., Rust, S., Tola, M., Chang, A. Y., & Chu, K. (2024). Introduction. In A. López, A. Ivakhiv, S. Rust, M. Tola, A. Y. Chang, & K. Chu (Eds.), The Routledge handbook of ecomedia studies (pp. 1–16). Routledge.
Luttrell, R., Wallace, A., McCullough, C., & Lee, J. (2020). The digital divide: Addressing artificial intelligence in communication education. Journalism & Mass Communication Educator, 74(4), 470-482. https://doi.org/10.1177/1077695820925286
Mackenzie, A. (2015). The production of prediction: What does machine learning want? European Journal of Cultural Studies, 18(4-5). https://doi.org/10.1177/1367549415577384
Marr, B. (2023, September 1). How can we use AI to address global challenges like climate change? Forbes. https://www.forbes.com/sites/bernardmarr/2023/09/01/how-can-we-use-ai-to-address-global-challenges-like-climate-change/
Martinez, D. R., Malyska, N., Streilein, W. W., Caceres, R. S., Campbell, W. M., Dagli, C. K., Gadepally, V. N., Greenfield, K. B., Hall, R. G., King, A. J., Lippmann, R. P., Miller, B. A., Reynolds, D. A., Richardson, F. S., Sahin, C. S., Tran, A. Q., Trepagnier, P. C., Zipkin, J. R., Roeser, C. A. D.,… Thornton, J. R. (2019). Artificial intelligence: Short history, present developments, and future outlook, final report. Massachusetts Institute of Technology. https://www.ll.mit.edu/r-d/publications/artificial-intelligence-short-history-present-developments-and-future-outlook
Masood, M., Nawaz, M., Malik, K. M., Javed, A., & Irtaza, A. (2021). Deepfakes generation and detection: State-of-the-art, open challenges, countermeasures, and way forward. https://doi.org/10.48550/ARXIV.2103.00484
Masood, R. (2023, May 15). Helping kids navigate the world of artificial intelligence. Common Sense Media. https://www.commonsensemedia.org/articles/helping-kids-navigate-the-world-of-artificial-intelligence
Maxwell, R., & Miller, T. (2012). Greening the media. Oxford University Press.
McLean, S. (2023, April 28). The environmental impact of ChatGPT: A call for sustainable practices In AI development. Earth.Org. https://earth.org/environmental-impact-chatgpt/
McLuhan, M. (1964). Understanding media: The extensions of man. McGraw-Hill.
Mignolo, W.D. (2007). Delinking: The rhetoric of modernity, the logic of coloniality and the grammar of de-coloniality. Cultural Studies, 21(2–3), 449–514. https://doi.org/10.1080/09502380601162647
Musil, S. (2022). Palantir extends US defense contract that prompted protest at Google. CNET. https://www.cnet.com/tech/palantir-extends-us-defense-contract-that-prompted-protest-at-google/
NBC News. (2023, February 6). Some U.S. schools banning AI technology while others embrace it [Video]. YouTube. https://www.youtube.com/watch?v=U7FVMbmjLeQ
Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2, 1-11. https://doi.org/10.1016/j.caeai.2021.100041
Nishant, R., Kennedy, M., & Corbett, J. (2020). Artificial intelligence for sustainability: Challenges, opportunities, and a research agenda. International Journal of Information Management, 53, 1-13. https://doi.org/10.1016/j.ijinfomgt.2020.102104
Olesen, T. (2020). Greta Thunberg’s iconicity: Performance and co-performance in the social media ecology. New Media & Society, 24(6), 1325-1342. https://doi.org/10.1177/1461444820975416
Patel, R. & Moore, J.W. (2018). History of the world in seven cheap things: A guide to capitalism, nature, and the future of the planet. University of California Press.
Perrigo, B. (2023, January 18). Exclusive: The $2 Per Hour Workers Who Made ChatGPT Safer. Time. https://time.com/6247678/openai-chatgpt-kenya-workers/
Postman, N. (1998). Five things we need to know about technological change. https://web.cs.ucdavis.edu/~rogaway/classes/188/materials/postman.pdf
Postman, N. (2000). The humanism of media ecology. Proceedings of the Media Ecology Association, 1, 10-16. https://media-ecology.net/publications/MEA_proceedings/v1/postman01.pdf
Quijano, A. (2007). Coloniality and modernity/rationality. Cultural Studies, 21(2–3), 168–178. https://doi.org/10.1080/09502380601164353
Radsch, C. (2023, November 27). The real story of the OpenAI debacle is the tyranny of big tech. The Guardian. https://www.theguardian.com/commentisfree/2023/nov/27/openai-microsoft-big-tech-monopoly
Rauch, J. (2023). Slow media, eco-mindfulness, and the lifeworld. In A. López, A. Ivakhiv, S. Rust, M. Tola, A. Y. Chang, & K. Chu (Eds.), The Routledge handbook of ecomedia studies (pp. 337–344). Routledge. https://doi.org/10.4324/9781003176497-42
Rathi, A. (2024, July 8). Google is no longer claiming to be carbon neutral. Bloomberg. https://www.bloomberg.com/news/articles/2024-07-08/google-is-no-longer-claiming-to-be-carbon-neutral
Rathi, A., & Bass, D. (2024, May 24). Microsoft’s AI push imperils climate goal as carbon emissions jump 30%. Bloomberg. https://www.bloomberg.com/news/articles/2024-05-15/microsoft-s-ai-investment-imperils-climate-goal-as-emissions-jump-30
Ridley, M., & Pawlick-Potts, D. (2021). Algorithmic literacy and the role for libraries. Information Technology and Libraries, 40(2). https://doi.org/10.6017/ital.v40i2.12963
Schwartz, R., Dodge, J., Smith, N. A., & Etzioni, O. (2020). Green AI. Communications of the ACM, 63(12), 54-63. https://doi.org/10.1145/3381831
Selwyn, N. (2022, April 1). What should ‘digital literacy’ look like in an age of algorithms and AI? DigiGen. https://www.digigen.eu/digigenblog/what-should-digital-literacy-look-like-in-an-age-of-algorithms-and-ai-neil-selwyn/
Shevlin, H., Vold, K., Crosby, M., & Halina, M. (2019). The limits of machine intelligence: Despite progress in machine intelligence, artificial general intelligence is still a major challenge. EMBO Reports, 20(10). https://doi.org/10.15252/embr.201949177
Singer, N. (2023a, June 8). Hey, Alexa, What should students learn about A.I.? The New York Times. https://www.nytimes.com/2023/06/08/business/ai-literacy-schools-amazon-alexa.html
Singer, N. (2023b, August 24). Despite cheating fears, schools repeal ChatGPT bans. The New York Times. https://www.nytimes.com/2023/08/24/business/schools-chatgpt-chatbot-bans.html
Sperry, S. (n.d.). A.I.–pros, cons, credibility and bias. Project Look Sharp. https://www.projectlooksharp.org/front_end_resource.php?resource_id=896
Strauß, S. (2018). From big data to deep learning: A leap towards strong AI or ‘intelligentia obscura’? Big Data and Cognitive Computing, 2(3). https://doi.org/10.3390/bdcc2030016
Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and policy considerations for deep learning in NLP. https://doi.org/10.48550/arXiv.1906.02243
Tiernan, P., Costello, E., Donlon, E., Parysz, M., & Scriney, M. (2023). Information and media literacy in the age of AI: Options for the future. Education Sciences, 13. https://doi.org/10.3390/educsci13090906
Tong, A. (2023, November 16). Exclusive: OpenAI wants ChatGPT in classrooms. Reuters. https://www.reuters.com/technology/openai-explores-how-get-chatgpt-into-classrooms-2023-11-16/
Towns, A. R. (2022). On black media philosophy. University of California Press.
Valtonen, T., Tedre, M., Mäkitalo, K., & Vartiainen, H. (2019). Media literacy education in the age of machine learning. Journal of Media Literacy Education, 11(2), 20-36. https://doi.org/10.23860/JMLE-2019-11-2-2
van Wynsberghe, A. (2021). Sustainable AI: AI for sustainability and the sustainability of AI. AI and Ethics, 1, 213-218. https://doi.org/10.1007/s43681-021-00043-6
van Wynsberghe, A., Vandemeulebroucke, T., Bolte, L., & Nachid, J. (2022). Special issue “Towards the sustainability of AI; Multi-disciplinary approaches to investigate the hidden costs of AI”. Sustainability, 14(24). https://doi.org/10.3390/su142416352
Veltri, G. A., & Atanasova, D. (2015). Climate change on Twitter: Content, media ecology and information sharing behaviour. Public Understanding of Science, 26(6), 721-737. https://doi.org/10.1177/0963662515613702
Verdecchia, R., Sallou, J., & Cruz, L. (2023). A systematic review of green AI. WIREs Data Mining and Knowledge Discovery, 13(4), 1-26. https://doi.org/10.1002/widm.1507
Wall Street Journal. (2023, October 2). AI can help fight climate change, but it also adds to it [Video]. The Wall Street Journal. https://www.wsj.com/video/series/tech-news-briefing/ai-can-help-fight-climate-change-but-it-also-adds-to-it/780D83F0-A919-4141-9BF1-1E3C666656D1
United Nations Educational, Scientific and Cultural Organization (UNESCO). (2022). Recommendation on the ethics of artificial intelligence. UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000381137
Zhou, J., Müller, H., Holzinger, A., & Chen, F. (2023). Ethical ChatGPT: Concerns, challenges, and commandments. https://doi.org/10.48550/ARXIV.2305.10646
Zhuk, A. (2023). Navigating the legal landscape of AI copyright: A comparative analysis of EU, US, and Chinese approaches. AI and Ethics. https://doi.org/10.1007/s43681-023-00299-0