Being human means being capable of solving moral challenges. Yet, just as there exists the pain of thinking, there is the pain of decision-making, and humanity seems to be on a quest to shift that responsibility to AI, seeking relief from the very weight that defines human experience. We thought technological progress was needed for humans to flourish, but we seem to be using it to diminish ourselves. This essay examines how AI could be developed and governed to protect and enhance human autonomy, particularly by addressing the psychological phenomenon of the “agentic shift” in human-AI interactions. The agentic shift, a concept introduced by Stanley Milgram, occurs when individuals transfer responsibility for their actions to an authority figure. In the context of AI, this shift results in humans abdicating their decision-making to artificial systems. By analysing the works of Stanley Milgram, Neil Postman, George Lakoff and Mark Johnson, this essay explores how the metaphorical framing of AI as human-like has contributed to the premature attribution of human qualities to these systems, undermining human agency and responsibility. Within this framework, the essay argues that AI must be developed as a tool that reinforces human autonomy rather than replacing or diminishing it. Specifically, it examines how AI can be programmed to prevent the agentic shift, helping users recognize the boundaries of AI’s capabilities while fostering a clear perception of AI as a supportive tool rather than an authoritative decision-maker.
The psychological phenomenon of the “agentic state” was first described by Stanley Milgram, whose 1963 obedience experiments showed how we take responsibility off ourselves and shift it to an authority, letting that authority tell us what to do. In Technopoly (1992), the American communication theorist Neil Postman argued that this “agentic shift” began to occur between humans and AI when scientists first speculated about the possibility of designing intelligent information machines.
Although the technology of that era was nowhere near capable of duplicating a human mind, the general public began to see computers not merely as tools, but as entities with human-like qualities, prematurely attributing to these imperfect systems the ability to think. Artificial intelligence has not only been humanised in people’s perception; it has come to be treated as an expert of last resort. The conditions for this shift began with the emergence of the computer-human metaphor. J. David Bolter identified this metaphor in his 1984 book Turing’s Man, where he argued that modern society had started to equate machines with humans, and humans with machines. As a result, we began to perceive ourselves as “information processors.”
One of the most striking examples of how deeply our language has embraced the ‘machine as a human’ metaphor occurred in 1988, when computers across the ARPANET network became overloaded. Postman argues that the use of the term “virus” to describe computer malfunctions led to a significant change in how people perceive the relationship between humans and computers. He suggests that this anthropomorphic language implies computers have human-like qualities such as the ability to fall ill, recover, think, and make decisions.
This linguistic shift, Postman contends, subtly transfers responsibility for outcomes from humans to computers. He refers to this phenomenon as an “agentic shift,” borrowing the term from Stanley Milgram, to describe how people attribute agency to machines, thereby absolving themselves of responsibility for the computer’s actions or decisions. In Postman’s words, “Technopoly is a state of culture. It is also a state of mind. It consists in the deification of technology, which means that the culture seeks its authorization in technology, finds its satisfactions in technology, and takes its orders from technology.” This explains how the increasing reliance on AI and machines fosters a cultural mindset that further entrenches this agentic shift.
In their famous book Metaphors We Live By, George Lakoff and Mark Johnson point out that ideologies are always framed in metaphorical terms. As with all metaphors, those used in Technopoly can hide aspects of reality, subtly shaping our perception. Such metaphors redefine how we understand ourselves, reducing human potential to the narrow functions of machines.
Postman provides concrete evidence of how this metaphorical framing operates in practice. He demonstrates how Technopoly fundamentally alters our relationship with every aspect of life. As he explains, “In Technopoly, we have an entirely new relationship to the world, to information, to each other, and to ourselves. It redefines what we mean by religion, by art, by family, by politics, by history, by truth.” According to Lakoff and Johnson, metaphors like these constrain our understanding of reality and, by virtue of what they hide, can lead to human degradation. When people limit their self-perception to the capabilities of computers, they inevitably see themselves as inferior to AI, thus allowing AI to assume a position of authority. This shift in perception reinforces the “agentic shift” that Postman warns against, in which humans increasingly surrender their control and responsibility to technology.
To restore humans to an active role, AI should be programmed with the prevention of the agentic shift as a core principle within its hierarchy. This means not only helping people recognize the shift, but, more importantly, fostering a mode of communication between humans and AI that reinforces the perception of AI as a tool for problem-solving rather than as the problem solver itself. This approach safeguards both freedom of thought and freedom of action by making it clear that humans cannot yet shift responsibility to AI.
In essence, we are stating the obvious: we need to program AI to serve humans, and AI can accomplish this only if it helps humans shed the illusion that artificial intelligence surpasses human intelligence. To help users form an adequate understanding of what AI can and cannot do, modern AI systems should, from a technical standpoint, be designed to fulfil three key objectives:
1. To assist users in identifying the precursors that lead to an agentic shift. This involves helping individuals recognize unrealistic expectations placed on AI systems.
2. To aid in the detection of the agentic shift itself and guide users toward ways of communicating with the technology that keep their own thinking switched on (a minimal sketch of how such detection might look follows this list).
3. To recognize that the first two steps should lead to the final step of “disenchantment,” or deconstruction of the agentic shift. This step is crucial and needs further elaboration.
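To make the first two objectives concrete, one could imagine them reduced, very crudely, to a pre-processing step on user prompts. The sketch below is purely illustrative: the phrase list, the names (AGENTIC_CUES, REMINDER, preprocess) and the wording of the reminder are assumptions introduced here for illustration, not a prescribed implementation.

```python
# Illustrative sketch only: a crude pre-processing step that looks for wording
# suggesting the user is handing the decision itself to the AI (objective 1)
# and, if found, prepends a reminder that reframes the exchange (objective 2).

AGENTIC_CUES = [  # hypothetical phrase list; a real system would need far richer signals
    "decide for me",
    "just tell me what to do",
    "you know better",
    "i trust whatever you say",
]

REMINDER = (
    "Note: I can lay out options, trade-offs, and relevant information, "
    "but the decision and the responsibility for it remain yours."
)

def flag_agentic_shift(prompt: str) -> bool:
    """Return True if the prompt contains wording that cedes the decision to the AI."""
    lowered = prompt.lower()
    return any(cue in lowered for cue in AGENTIC_CUES)

def preprocess(prompt: str) -> str:
    """Attach the reminder to prompts that show precursors of an agentic shift."""
    if flag_agentic_shift(prompt):
        return f"{REMINDER}\n\n{prompt}"
    return prompt

if __name__ == "__main__":
    print(preprocess("Just tell me what to do about my career."))
```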
The process of “disenchantment” draws inspiration from structuralist methodologies in social sciences. The core idea is that by becoming aware of the structures or forces that shape our perceptions and behaviours, we can liberate ourselves from their unconscious influence. In the context of human-AI interaction, this translates to helping users understand the psychological mechanisms behind the agentic shift.
By explicating how and why we might be tempted to abdicate our decision-making power to AI, we empower users to resist this tendency. The AI could, for instance, provide insights into cognitive biases. It might offer examples of how over-reliance on AI in various fields has led to errors or missed opportunities for human creativity and intuition. To develop AI in this way, developers themselves must acquire the expertise to monitor and comprehend the societal impact of human-AI interactions. AI development has to become a multidisciplinary field, putting an end to the hostility between humanists and physical scientists.
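As a rough illustration of what this “disenchantment” step might look like in software, the sketch below pairs detected reliance patterns with short notes on the biases behind them. The BIAS_NOTES mapping, its cues and its wording are assumptions made for illustration, not an established taxonomy or an existing system.

```python
# Illustrative sketch: pairing detected reliance patterns with brief explanations
# of the cognitive biases behind them, so the system does not merely flag the
# agentic shift (steps 1-2) but explains it to the user (step 3).

BIAS_NOTES = {  # hypothetical mapping; labels and wording are assumptions for illustration
    "decide for me": (
        "Automation bias: people tend to over-trust automated output, "
        "even when their own judgement or contrary evidence is available."
    ),
    "you know better": (
        "Authority bias: framing the system as an expert of last resort "
        "invites the agentic shift Milgram described for human authorities."
    ),
}

def disenchant(prompt: str) -> list[str]:
    """Return explanatory notes for any reliance patterns found in the prompt."""
    lowered = prompt.lower()
    return [note for cue, note in BIAS_NOTES.items() if cue in lowered]

if __name__ == "__main__":
    for note in disenchant("You know better than me, so decide for me."):
        print(note)
```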
Expertise in communication theory and psychology will help us understand how AI affects the way people construe the world, and how it limits and redefines their understanding of themselves. This understanding becomes even more critical when we consider Erich Fromm’s insight: “The strength of the dominant worldview and the belief system associated with it, by shaping people’s thinking, values, and decisions, directs the course of history.”
If developers manage to make the mechanism of AI operation more transparent, it will become easier for users to understand that AI cannot solve humanity’s complex problems. As Neil Postman highlights, the belief that “the most serious problems confronting us at both personal and public levels require technical solutions through fast access to information” is misguided. He argues that this perspective is flawed because “our most serious problems are not technical, nor do they stem from a lack of information.”
Therefore, relying solely on technology to address such issues is ineffective. Unlike algorithms, human beings have a unique ability to improvise and invent non-standard strategies in unpredictable circumstances, rather than merely activating well-organised passive information. Computers operate by formal logic and lack intuition, which humans rely on for the kind of creative problem-solving that is too complex to be reduced to sequential analysis. Only humans can address these problems, but AI can certainly be used as a tool. AI should therefore be developed in a way that lifts humans out of the dark depths of misconception and helps them form a more adequate perception of AI itself. The interface should be designed so that the communicative layer through which people interact with AI emphasises that AI is not acting autonomously; it must be clear that it is a tool created by other humans, with definite limitations. Developed in this way, AI will help humans restore faith in their own abilities and expertise.
A form of built-in instruction with explanations could also be introduced into AI technologies, making explicit that AI is not yet equivalent to human intelligence, that it remains inferior to human intelligence, and that humans therefore do not yet have grounds to shift responsibility for solving tasks onto AI.
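One hedged way to picture such a built-in instruction and communicative layer is a thin wrapper that frames every reply with a notice about the system’s origin and limits. Everything in the sketch below (the ToolNotice class, its default wording, the wrap_reply function) is a hypothetical illustration of the idea, not a description of any existing system.

```python
# Illustrative sketch: a thin communicative layer that wraps every reply with a
# provenance-and-limitations notice, so the interface itself keeps repeating that
# the output comes from a human-made tool, not an autonomous authority.

from dataclasses import dataclass

@dataclass
class ToolNotice:
    """Hypothetical disclosure attached to every reply."""
    maker: str = "built and configured by people"
    limits: str = "can be wrong, has no intuition, and carries no responsibility"

    def render(self) -> str:
        return f"[This answer comes from a tool {self.maker}; it {self.limits}.]"

def wrap_reply(reply: str, notice: ToolNotice = ToolNotice()) -> str:
    """Frame the model's reply so that its status as a tool stays visible."""
    return f"{notice.render()}\n{reply}"

if __name__ == "__main__":
    print(wrap_reply("Here are three options you could weigh..."))
```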
Returning to the issue of adequate representation, one could suggest that if the computer-human metaphor leads to human degradation, forcing us to simplify our perception of ourselves to the level of a machine, then a possible solution might be to invent a new metaphor for the human: a metaphor that would help us discover new facets of ourselves and, through this understanding, better realize our potential.
However, we can do something else: we can create a new metaphor for AI that simplifies our perception of this technology, a metaphor working on the principle of “AI as a tool.” Such a metaphor would cut AI down to size, taking it off the pedestal onto which our society has raised it through accumulated illusions.
The perceived but unspoken mismatch between the promised capabilities of AI and its real functions has already caused some disappointment in our society with science as such. But science itself is not an illusion. It would be an illusion to think that it can give us what it is actually not capable of giving. If we overcome the illusions surrounding AI, our interaction with this technology could be seen as true scientific progress rather than outdated fanaticism. Then, one day, the interaction between Artificial Intelligence and Human Intelligence can help us truly flourish, safeguarding both freedom of thought and freedom of action in a harmonious way.
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
Fromm, E. (1991). The Pathology of Normalcy (p. 24). American Mental Health Foundation.
Lakoff, G., & Johnson, M. (1980). Metaphors We Live By (p. 26). University of Chicago Press.
Milgram, S. (1963). Behavioral Study of Obedience. Journal of Abnormal and Social Psychology, 67(4), 371-378.
Postman, N. (1992). Technopoly: The Surrender of Culture to Technology (p. 142). Vintage Books.
Weber, M. (1946). From Max Weber: Essays in Sociology. Oxford University Press.