International Council for Media Literacy
Is it Paranoia? A Critical Approach to Platform Literacy

September 1, 2022 by Dr. Mark Carrigan, Dr. J.J. Sylvia IV

Abstract

Social media platforms have received increasingly negative press coverage over the course of the last decade, for everything from problematic uses of algorithms to the ability of authoritarian regimes to leverage them as a way to influence elections. Unfortunately, this emphasis on critique, though justified, has led to a paranoid form of thinking in which many understand that such risks exist, but lack a technical grasp of how such platforms function. We argue that platform literacy should be a foundational aspect of a university education, as it is vital to understanding how to best apply one’s agency, especially as part of an engaged citizenry in an increasingly digitized world.

Keywords

Platforms, Agency, Critique, Paranoia, Pedagogy, Algorithmic Thinking


Introduction

In recent years the ‘techlash’ has involved a significant shift in public perceptions of large technology firms, as once admired corporations (and their leaders) are increasingly blamed for all manner of social and political problems. Once obscure concepts from the digital social sciences such as filter bubbles, fake news and computational propaganda feature prominently in public debates, producing the sense of an information landscape constituted through deliberate manipulation, serving the interests of private actors on the winning side of what Andrejevic (2013) describes as the ‘big data divide’. There is a diffuse awareness of the ‘algorithm’ as a shadowy entity orchestrating social events from behind a curtain. At the level of digital literacy it would be easy to characterize this shift in positive terms, with a largely uncritical perspective on social media platforms being replaced by a recognition of the manipulation inherent in their operations.

In contrast, we argue in this paper that we are seeing the emergence of a paranoid perspective on the information landscape which is more effective at sowing doubt than supporting agency. danah boyd (2018) has pointed to a related problem that arises when media literacy is primarily taught through the lens of identifying bias. A paranoid perspective blossoms when platform users are digitally literate enough to identify that problematic practices are occurring, but are unable to identify their underlying causes or opportunities for action that could reduce such problems. Engaging with platforms through the lenses of fear and anxiety can lead not only to irrationality, but also to disengagement and an overall decline in agency with these platforms, eroding the capacity for either individual or collective action by failing to locate threats with any specificity.

Addressing this challenge requires reflection on the role of computational thinking and algorithms in education, with careful attention to locating the agency of educators as we make decisions about the platforms we use in our teaching. We argue for an approach to platform literacy that 1) emphasizes a focus on criticality rather than paranoia, 2) clarifies both the challenges and opportunities associated with operating in a multiplatform landscape, and 3) builds and sustains the practical capacity for agency. This approach is intended to support teaching about computational thinking and algorithms in education, as well as to provide a framework within which considerations of the platform landscape we are operating within can be incorporated into teaching more broadly. It emphasizes the role of platforms in pedagogy such that these issues are as much ‘in here’ as ‘out there’, with the challenge for educators being how to link the two in a way which cultivates the platform literacy of students and supports their agency within what has been termed platform capitalism (Srnicek, 2017).

To expand on these three points, we first use Eve Sedgwick’s (1997) distinction between criticality and paranoia, through a form of reading which doesn’t presuppose the ubiquity of bad actors and the inevitability of threats. We bring this concept into dialogue with the field of platform studies, arguing that a focus on platforms as infrastructures co-constructed between users and operators helps ground claims about online threats in a concrete sense of their underlying source. Second, we must acknowledge the differences between learning how to use technology (including platforms) and learning content. Although this has long been recognized in distance education it is often ignored in other classroom settings. The use of multiple platforms, not only within individual classes but also across the larger educational experience of students, increases their cognitive load and can lead to a sense of chaos and/or dislocation. The lack of agency which students might experience in a wider platform landscape believed to be rife with threats risks being reproduced within educational institutions, with the expectation that students adapt and thrive within a rapidly fluctuating multiplatform landscape over which they have little to no control. Clearly addressing the complexity of our multiplatform world allows one to reflect more carefully and critically on how to use each particular platform in a deliberate way, taking into account those elements which are unique to each platform. By combining these insights, one can then make better informed decisions about which platforms to use and how to engage with them in ways that expand the practical capacity for agency. This agency might take the form of better avoiding mis- and dis-information, participating in democratic systems and debates, developing and sharing creative work, and more.

Finally, we conclude with a series of questions for further research and preliminary recommendations about how to best integrate a platform literacy approach within the curriculum. Where would it make sense to situate it? How would it be embedded within the landscape of online and/or in-person education at an institution? How might our own use of platforms in the classroom be re-evaluated and re-shaped in light of this perspective? Addressing these questions helps us clarify the role that computational thinking and algorithms can play in education, with a particular emphasis on the multiple ways that they shape agency. Most importantly, devising pedagogical strategies that help students develop a critical rather than paranoid approach to their interactions with a multitude of platforms is vital to an informed and engaged citizenry.

The Conceptual Grammar of Platforms

What is a platform? It’s curiously easy to neglect this question when the term circulates so widely, invoked by a diverse range of actors in order to refer to what is often assumed to be a taken-for-granted feature of contemporary social life. In an influential early paper, Gillespie (2010) identified four senses of platform which the contemporary use of the term draws upon: the computational (an infrastructure), the architectural (a place to stand), the figurative (a foundation to build) and the political (a body of commitments). As he puts it, “All point to a common set of connotations: a ‘raised level surface’ designed to facilitate some activity that will subsequently take place” (Gillespie, 2010, p. 351). In this sense, the notion of platform foregrounds the interaction which the infrastructure facilitates. It suggests a space in which users are able to come together, interacting through the platform in a way which was previously impossible. This might be an entirely mediated interaction that takes place on social media platforms such as Facebook, Instagram or TikTok. It might involve a face-to-face interaction made possible by the social coordination facilitated by the platform, as in so called ‘sharing economy’ services like Uber and AirBnB. In both cases the terminology of the platform foregrounds their enabling features, implying these infrastructures (and the firms which operate them) are producing new forms of sociality (Couldry, 2014b). The stress is on the social action which platforms make possible rather than the platforms themselves as social actors. This is a stance with significant regulatory consequences, by positioning platform operators as neutral distributors of information rather than interested parties responsible for the socio-technical environment in which this information circulates.

The problem with this view is that, as Gillespie (2015) puts it, platforms intervene. Their actions shape the platform environment in ways ranging from the broad functionality of the platform and the capacities available to users, to intensely granular decisions about the interface through which those capacities are accessed and whether, when and how particular uses of them deserve censure. In recent years we have seen decisions about content moderation in particular become immensely salient in political life, even to the extent of playing a significant role in major elections as with the sustained controversy over whether Twitter and Facebook would remove postings from then President Trump widely regarded as hate speech. Pasquale (2016) draws attention to the quasi-judicial role in which platforms now operate in so far as they adjudicate disputes with significant socio-political and socio-economic ramifications. As he observes, platform firms “are quick to take the credit when their platforms are part of movements, but trivialize user rights in their own governance” almost “as if the platforms see themselves as virtual worlds, whose users have essentially accepted (via terms of service) near-absolute sovereignty of corporate rulers” (Pasquale, 2016, p. 512). Far from providing a neutral space in which previously dispersed people are able to come together, we see something more akin to an enclosure in which social interaction is shifting into a privatized infrastructure resembling a fiefdom rather than a commons (Couldry & van Dijck, 2015).

This could be seen as two distinct uses of the term ‘platform’. The first is a positioning strategy by firms who want to present themselves as neutral intermediaries, as a matter of regulatory strategy and corporate branding. It indicates their role in what was briefly accepted as a participatory turn in social life, in which digital technologies made it easier for dispersed groups to come together and collaborate beyond the influence of established gatekeepers and other elites (Carrigan & Fatsis, 2021). Here Comes Everybody, proclaimed the title of Shirky’s (2009) influential techno-utopian touchstone, with the terminology of platforms being adopted by these nascent firms in order to account for the connective gift they styled themselves as bequeathing to society: they were facilitating a latent pro-sociality which was untapped in pre-digital society for which they should be praised rather than held responsible (Couldry, 2014a, 2014b). The second use of the term is analytical in so far as it seeks to draw attention to the mechanisms through which platforms facilitate interaction rather than simply pointing to the fact of interaction. It draws attention to the features of the platform which are, to use Pickering’s (2011) phrase, ontologically veiled by the first use of the term. It recognizes the interaction which platforms are facilitating but insists on drawing out the socio-technical framework through which this interaction happens, the choices underlying it and the commercial interests which have guided them. It historicises the platform by locating both the technology and the business model underlying it within a particular phase of capitalist development (Srnicek, 2017).
The element which both uses have in common is their recognition of the multi-sided character of platform infrastructures which are able to bring together distributed parties in ways which were formerly unviable because of either the coordination problems involved or the scale of the interaction taking place. The value of the service will tend to increase proportionally as more users are connected, creating a winner-takes-all dynamic often talked about in terms of the network effects inherent to platforms.
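
The winner-takes-all dynamic described above can be illustrated with a toy calculation (our own, not the authors’): if every user of a platform can potentially interact with every other user, the number of possible connections grows roughly quadratically with the user base, so a platform with even a modest head start quickly becomes far more valuable than its rivals.

```python
# Toy illustration of network effects: the number of possible pairwise
# connections grows quadratically with the number of users.

def potential_connections(users: int) -> int:
    """Pairs of users who could interact through the platform."""
    return users * (users - 1) // 2

for users in (10, 100, 1000):
    print(users, potential_connections(users))
# A 10x increase in users yields roughly a 100x increase in connections.
```

This is a deliberately crude model (real engagement is never all-to-all), but it conveys why scale compounds so sharply in multi-sided markets.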

What are the mechanisms through which platforms facilitate interaction? In part this is a matter of platforms intervening in the sense already discussed. These environments are designed in a comprehensive and multifaceted way, relying on real-time data about user behavior to modulate every aspect of the service. This ranges from the nature of the interface through to the interactions undertaken using it and the outputs which circulate as a consequence. This activity is most readily apparent with social media platforms which structure what would otherwise be an overwhelming proliferation of content using mechanisms such as what van Dijck (2013) calls the popularity principle, i.e. using quantified engagement with content and the users generating it in order to structure the platform in hierarchical ways. The intention is to increase user engagement in the sense of ensuring users stay for longer, return more often and engage with more material while they are active on the platform. The business model here is what Zuboff (2019) calls surveillance capitalism, with user engagement being a vector through which models of users can be generated and leveraged for advertising. In effect this is, as Williams (2018) notes, a matter of attention being monetized. Even though the nature of these interventions is less straightforward than service platforms such as Uber and AirBnB, real time behavioral science of this sort is every bit as integral to how such firms operate their platforms. For example Uber uses data science to estimate fares and journey times, as well as the controversial ‘surge pricing’ which is intended to nudge drivers into high demand areas by offering a premium. To talk about this in terms of data science raises an interesting question, because these are interested judgements in the sense that the firm has a clear stake in the outcome.
The actionable insights generated through these data infrastructures don’t match the traditional conception of objectivity but nor are they arbitrary. Instead we could see these in performative terms as constituting the facts of the platform environment to which user participants are forced to respond.
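
The popularity principle can be made concrete with a deliberately simplified sketch. The following code is entirely our own construction; the weights, field names and scoring formula are hypothetical, and no real platform discloses its actual ranking function. It shows the basic shape of the mechanism: content is ordered by a quantified engagement score, so already-popular items gain yet further visibility.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    likes: int
    shares: int
    comments: int

def engagement_score(post: Post) -> int:
    # Hypothetical weights: real platforms tune such weights continuously
    # against behavioral data and do not publish them.
    return post.likes + 2 * post.shares + 3 * post.comments

def rank_feed(posts: list[Post]) -> list[Post]:
    # The hierarchy of the feed is produced by quantified engagement.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("a", likes=10, shares=0, comments=1),
    Post("b", likes=2, shares=5, comments=4),
    Post("c", likes=50, shares=1, comments=0),
])
print([p.author for p in feed])
```

The point of the sketch is structural rather than empirical: whatever the actual weights, ranking by measured engagement produces the self-reinforcing visibility hierarchy van Dijck describes.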

In practice this results in what Marres (2018) describes as an “environment in two halves” in which “users with ‘influence-able’ and ‘target-able’ opinions, tastes, and preferences” exist on one side, with “authoritative data analysts who ‘know’ the population’s fine-grained and ever-changeable preferences and tastes” on the other. This involves a split between a front-end interface oriented toward users (providing them with a limited range of metrics with which to understand their own use) and a back-end interface oriented towards developers with sophisticated mechanisms for analyzing and intervening on the platform. The nature of these interventions obviously depends on the sector in which the platform operates and the business model of the firm underlying it. However these epistemic asymmetries are inherent to the platform itself, with user behavior generating data traces which are available to the operators but not the end users. How effectively these traces are exploited by particular firms is a contingent matter, but what Andrejevic (2014) presciently described as the ‘big data divide’, in which some actors have access to sophisticated behavior data whereas others are forced to rely on gut instincts, is inherent to the platform economy.
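
A minimal sketch, again entirely our own construction with hypothetical names and fields, can make this “environment in two halves” tangible: the same behavioral log yields a thin, aggregate front-end view for users and a granular back-end view for operators.

```python
# One behavioral log, two views: the epistemic asymmetry of platforms.
events = [
    {"user": "u1", "item": "post9", "action": "like", "dwell_ms": 4200},
    {"user": "u1", "item": "post7", "action": "view", "dwell_ms": 900},
    {"user": "u2", "item": "post9", "action": "view", "dwell_ms": 15000},
]

def front_end_view(item: str) -> dict:
    """What an ordinary user is shown: a handful of aggregate metrics."""
    likes = sum(1 for e in events
                if e["item"] == item and e["action"] == "like")
    return {"likes": likes}

def back_end_view() -> list[dict]:
    """What the operator retains: every trace, per user, with timing data."""
    return events

print(front_end_view("post9"))   # the user sees only an aggregate count
print(len(back_end_view()))      # the operator sees the full log
```

The asymmetry is structural: users must infer the platform’s workings from the thin view, while operators act on the full record.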

The Multiplatform Landscape

The COVID-19 pandemic both exacerbated and made clearer many societal disparities. One such disparity that revealed itself in higher education was that even students who had access to a computer and stable internet connection struggled to navigate the sheer number of platforms used by instructors moving their courses online. Aiming for flexibility during an interregnum, instructors adopted a wide variety of communications channels, course formats (online, hybrid, hyflex, etc.), and course platforms. We have argued elsewhere (Sylvia IV, 2022) that this approach created an overload for many students, who struggled to know where and when to access their course content. However, this struggle only revealed a challenge that had been ongoing, even before the onset of the pandemic. The predominance of a tool-based framing in online learning makes it difficult to see how the combination of platforms produces an environment for students rife with unintended consequences.

We argue that it is increasingly pedagogically important to acknowledge that students are embedded in a multiplatform environment in which they often must learn how to use a particular platform and/or technological tool while also learning the content of a discipline at the same time. Though this has long been recognized in literature that explores the best practices for distance education, this concept has perhaps been overlooked in more traditional in-person courses in which the use of platforms has proliferated over the past decade, even for courses that are fully and only “in-person.” Further, it must be recognized that the cognitive load associated with learning and using multiple platforms extends across various classes in which a student may be enrolled. This multi-platform approach can occur for many reasons. For example, although many universities formally adopt one learning management system (LMS), such as Blackboard, Canvas, or Moodle, access to the Google LMS is often made available to schools that adopt the Google Education Suite. Additionally, instructors may require content to be created and posted to podcasting or video platforms such as Soundcloud and YouTube, or to use additional sites such as Perusall for social annotation or FlipGrid for video-based discussion forums. These requirements can accumulate significantly across multiple courses taught by different instructors who are very unlikely to be coordinating their adoption of digital tools and platforms. There is also an asymmetry here, as instructors can choose and limit the number of platforms needed across all of the courses they teach, while students must learn to engage with the multitude of platforms assigned by many different instructors. 

Within this multi-platform landscape, how the various platforms are introduced and incorporated into a course takes on additional importance, though the strategies for doing so span a wide spectrum, ranging from detailed step-by-step instructions to simply letting students attempt to figure out the platform on their own. Traditional college-aged students are often understood as digital natives; however, we argue that if this label is to remain relevant, it must be interpreted as indicating basic familiarity with a variety of platforms rather than a deep understanding or mastery of such platforms. In short, students may be engaging with many different platforms but not developing a true understanding of how they work. This matters because even a pedagogical approach which offers detailed step-by-step instructions may not engage with the deeper logistics and algorithmic functioning of a platform, unless the course is specifically focused on some form of media literacy as a learning outcome. Even then, platform logistics are still rarely included under the larger umbrella of media literacy.

These decisions cut across traditional areas of expertise, as the overall platform environment within an institution is shaped through the aggregate decisions of instructors and learning designers but within an infrastructural context over which they often have little control, at least on an individual level. To the extent that online learning moves into the pedagogical mainstream within a university, the decisions made at the level of information systems about procurement, maintenance and development become newly central to the life of the institution. McCluskey and Winter (2012, p. 15) suggest that “if faculty members do not control information technology, the information technology department will control the faculty members”. Though this is undoubtedly an overstatement, it nonetheless dramatises a growing tension within the post-pandemic university, in which the inclination of instructors to innovate and experiment sits in tension with institutional imperatives to limit subscriptions and ensure a coherent learning environment for students. There are no simple answers to the challenges which emerge under these conditions but we suggest that the experience of students should be foregrounded in understanding the politics of the multiplatform university.

The conceptual transition from tools to platforms is significant in this respect (van Dijck & Poell, 2017). It foregrounds the capacity of platforms to establish the parameters of interaction, encouraging certain modes of relation and discouraging others, in ways which accumulate commercially relevant data which is accessible to operators but not to end users. This means that questions of ethics and ownership are unavoidable, as the expectation that students will engage on a given platform incorporates their activity within the circuits of digital capitalism (Srnicek, 2017; Zuboff, 2019). It follows from this that political questions concerning technology are as much ‘in here’ within the university, as they are ‘out there’ in wider society.

This is ethically challenging but also pedagogically salient because it creates opportunities to change what might otherwise be diffuse conversations about shadowy actors operating at a distance (which we term paranoid) into constructive dialogues about shared experiences of the multiplatform environment and how this might be changed (which we term critical). Doing so requires that we recognize the limitations of our own agency as educators, in so far as we operate within an infrastructural environment established at an institutional level, alongside our capacity to shape that environment through individual choices (concerning platforms we adopt, how we frame their use and what we expect of students) as well as the responsibility we have to coordinate these choices in pursuit of a multiplatform environment adequate to the needs of both educators and students. As McCluskey and Winter (2012) point out, there is a political dimension to the provision of technology at an institutional level which has often been unrecognized. This has only become more pronounced since the pandemic, with the shift this has entailed for modes of delivery within higher education.

Outside of the classroom, platform critiques have begun to receive widespread coverage in mainstream media through reporting on events such as the Cambridge Analytica controversy, Russian social media interference in campaigns for both Brexit and the 2016 U.S. Presidential election, and U.S. Congressional hearings in which platform CEOs have testified. Such coverage brings valuable awareness of the problems associated with such platforms, as more people better understand concepts like filter bubbles, digital surveillance, and targeted advertising. However, popular coverage of these issues often doesn’t confer a full technical understanding of why such problems occur in light of how the platforms are coded, managed, regulated, and moderated. We argue that this increasing knowledge about the fact that such problems exist, without the development of a deeper understanding of the platforms themselves, can lead to a problematic form of paranoid agency. Therefore, we find ourselves in an environment where many are suspicious of the platforms they use on a daily basis, while a proliferation of uncritical platform adoption and use in the classroom fails to help students develop any deeper understanding of either the platforms themselves or how students might interact with them in ways that increase their agency. For this reason, we argue that platform literacy should be understood as a foundational cornerstone of a college education, as platforms have wide ranging societal impacts.

Critical Rather Than Paranoid Agency

In her influential analysis of practices of reading within literary theory, Sedgwick (1997) draws attention to the “methodological centrality of suspicion to current critical practice” to the extent that “paranoia has by now candidly become less a diagnosis than a prescription” (Sedgwick, 1997, p. 125). The influence of thinkers such as Marx, Nietzsche and Freud can be understood in terms of the susceptibility of human subjects to misapprehension, failing to recognize the realities they confront in ways shaped by exterior economic interests or internal depth psychological ones. In making this case, Sedgwick (1997, p. 125) is not denying the reality of systemic injustice within social life, but rather calling attention to a situation in which “to theorize out of anything but a paranoid critical stance has come to seem naive, pious, or complaisant”. Paranoia is an anticipatory orientation which seeks to negate unwelcome surprises in a world which by its nature will continually generate them. It tends, as Sedgwick (1997, p. 131) puts it, to “grow like a crystal in a hypersaturated solution, blotting out any sense of the possibility of alternative ways of understanding”.

Paranoia tends to be self-perpetuating through its instinct to regard non-paranoid readings as naive. If the paranoid subject steps back from this orientation only to be met by an unwelcome surprise, this can be regarded as a justification of the paranoia which had correctly diagnosed the ubiquity of threat in the first place. In its attempt to control the future by anticipating threats coming from all directions, it places “an extraordinary stress on the efficacy of the knowledge per se – knowledge in the form of exposure” (Sedgwick, 1997, p. 138). It imagines that if only we could expose the threat, we could avoid it and that anticipatory knowledge of the threatening landscape is the means through which we might achieve this. In contrast criticality involves an “emphasis on the potentiality of the present, in all the complexities of our implications in its creation and re-creation” rather than a twitchy focus on how we might negotiate an already bad situation which is always getting worse (Roseneil, 2011).

Though Sedgwick (1997) was writing in a pre-platform environment, her distinction becomes particularly significant once the world has become saturated with platforms which mediate interactions between distributed parties. The paranoid imperative towards anticipation and unveiling sits incongruously within a platform environment structured by what Andrejevic (2013, p. 13) describes as the ‘big data divide’ in which “control over the tremendous amount of information generated by interactive devices is concentrated in the hands of the few who use it to sort, manage, and manipulate” while people “without access to the database are left with the ‘poor person’s’ strategies for cutting through the clutter: gut instinct, affective responses and ‘thin-slicing’ (making a snap decision based on a tiny fraction of the evidence)”.

There is a widespread awareness amongst users of the designed nature of the information environment, ranging from a recognition of the mundane changes which daily users of the platform are presented with through to the strategic conduct which can be seen on the part of influencers and those who aspire to this status (Johnson et al., 2019). It is a mistake to imagine that users respond in uniform ways to the fluctuating incentives and disincentives which the platform environment offers for those pursuing visibility and status (Ørmen & Gregersen, 2022). However, in itself, the recognition that there is a situational logic confronting users of the platform, in which some actions are more conducive than others to the rewards which the platform offers, only entrenches the epistemic asymmetry which users are operating under. The platform is a machinery of behavioral intervention which offers users only the most fleeting insights into its consequences, relying on inferences from their own experience and that of others rather than providing any opportunity to look under the hood. This emphasizes the everyday character of manipulation while making it extraordinarily difficult to gain authoritative insights into who is driving these outcomes, why they are doing so, and how.

Another approach is possible, however. Sedgwick (1997) argues that reparative reading, a strategy long recognized and used within queer theory, can allow one to instead use their knowledge to create positive change in the current moment. The paranoiac climate of social platforms, in which manipulation can be found everywhere while the manipulators remain unseen, can be mitigated by a pedagogy which focuses on the capacity for intentional action of users, grounded in the values and commitments they bring with them as people with roles beyond the narrowly technocratic sense of ‘user’. For example, rather than focusing on critiquing and further exposing the role that Russia’s use of social media has played in recent elections or the spread of disinformation and its negative impact on civil discourse, one could instead direct their efforts toward using those same algorithms to advertise an event that models and enacts intergenerational civic dialogue (Sylvia IV, 2021). This counter-actualization (Braidotti, 2013) challenges one to take the problem discovered through the process of critique and enact an ethical alternative. Such an approach does not eliminate the need for critique – a deep knowledge of the problems at hand is required to perform a reparative reading or a counter-actualization. Yet, it is precisely this deep knowledge that we argue is lacking and needs to be filled through the intentional development of platform literacy, which would broaden one’s practical capacity for agency when 1) making decisions about which platforms to use, 2) avoiding mis- and dis-information, and 3) using platforms as citizens participating in larger democratic systems.

Developing such literacy can be difficult when students are required to interact with an increasingly large number of platforms. This challenge highlights the multiple and often conflicting agencies involved, including but not limited to the agency of platform developers and motivations for profit, the agency of instructors and their academic freedom in selecting the best platforms for use in their classrooms, and the agency of students trying to successfully complete their education, increasingly balancing multiple responsibilities such as school, work, and family. These challenges make clear the need to think more carefully about how platform literacy can be integrated into student learning outcomes. This means cultivating an ever more sophisticated awareness and transparency of the interpenetration of technological and pedagogical decision making. The choices we make and our reasons for making them are inherently relevant to a pedagogy adequate to platform capitalism.

Principles for Platform Literacy

We argue that a focus on platforms as infrastructures co-constructed between users and operators helps ground claims about online threats in a concrete sense of their underlying source. Developing a deeper understanding of how the infrastructure is co-constructed allows users to more robustly understand where their agency is involved in the process and, by extension, how changes in action could lead to changes in the platform. This is a point we have developed at some length because it has significant implications downstream, at the level of pedagogy and practice. It creates the space in which we can recognize the agency that students are capable of within a multiplatform environment (which by its nature encompasses and extends far beyond the learning environment of the university), while recognizing that the realization of this agency cannot be taken for granted, as it too often is.

We can no longer assume that "digital native" students have a deep understanding of platforms and their computational logics. This highlights how such agency is a precarious achievement (both within the university and the wider society), challenging us to identify the conditions that make its realization more rather than less likely. Success in this undertaking requires that we avoid a paranoid reading of platforms, which undermines the potential for agency in the face of the diffuse power of the platform. This makes clear why a literacy approach is needed. Teaching only critiques of platforms or, worse, completely ignoring the platforms themselves in favor of the content they facilitate leads to a paranoid reading that creates intellectual and affective obstacles to facilitating students' agency. Furthermore, bringing our decision making concerning platforms (as well as the constraints upon it) into our teaching makes it possible to explore these issues as immediate features of a shared environment, without, of course, limiting these discussions to the university context.

Clearly addressing the complexity of our multiplatform world allows one to reflect more carefully and critically on how to use each particular platform in a deliberate way, taking into account the elements unique to it. This requires a practical approach. As previously acknowledged, there are multiple layers of agency that must be considered and engaged. For example, instructors could exercise their agency by considering which platforms are already in use at their university and adopting those rather than similar alternatives (Sylvia IV, 2022). If Blackboard is used widely across campus, one may opt to use it rather than Google Classroom, despite a preference for the latter. Such an approach would reduce the cognitive load students expend learning and using an additional platform, freeing time that might be spent on meta-reflection about the platform itself and their agency, as described above, in using it. Ideally, this approach could also be incorporated in ways that allow students to shape this environment through their own agency. Of course, this approach can be applied to other platforms, not only learning management systems.

The point of the approach we are advocating is to integrate decision making about platforms into learning design as an object of reflection, with a view to cultivating an understanding of the choices we can and cannot make in the multiplatform environment, as well as the institutional and technological factors underlying these constraints. Being transparent with students about these decisions will also be an important pedagogical step in helping students better develop their own agency as they engage with platforms. 

Conclusion

We hope this paper has demonstrated the need for the incorporation of platform literacy into both institutional and pedagogical practices. However, even after acknowledging such a need, determining how best to implement this practice presents its own set of challenges. We conclude with a series of questions for further research and preliminary recommendations about how to best integrate a platform literacy approach within the curriculum.

Where would it make sense to situate it? Like other literacies, platform literacy is not a discipline-specific issue. While incorporating platform literacy into one's own courses may be an effective short-term solution, we argue that its importance to engaged citizenship warrants its inclusion in the curriculum in a way that impacts all students, not only those of specific majors. For example, it may be worth considering how to incorporate such an approach into general education outcomes. Could it be incorporated into existing classes, or does it make sense to build entirely new courses to address these challenges, perhaps combined with other emerging literacies?

How would it be embedded within the landscape of online and/or in-person education at an institution? In answering this question, it will be important to consider not just where platform literacy is included in curriculum, but how faculty, students, and other staff on campus discuss and adopt platforms across campus. In other words, how might institutions balance academic freedom, financial concerns, and the wide variety of educational and other platforms being adopted? Whose agency is and should be involved in this decision-making process?

How might our own use of platforms in the classroom be re-evaluated and re-shaped in light of this perspective? How might we as instructors think more carefully about the use of platforms within our own classes? In light of answers to the above questions, what opportunities can we pursue that move beyond either critique or unreflective adoption and help our students better understand and increase their own agency?

Addressing these questions helps us clarify the role that computational thinking and algorithms can play in education, with a particular emphasis on the multiple ways they shape agency. Most importantly, devising pedagogical strategies that help students develop a critical rather than paranoid approach to their interactions with a multitude of platforms is vital to an informed and engaged citizenry. In short, we recommend that platforms themselves be carefully considered, both in their selection for use as part of a course and, within the course, by students. Platforms should not be imagined as mere content delivery systems; such a lack of critical engagement fosters a paranoid mindset in students.

References

Andrejevic, M. (2013). Infoglut: How too much information is changing the way we think and know. Routledge.

Andrejevic, M. (2014). Big Data, Big Questions | The Big Data Divide. International Journal of Communication, 8, 17. https://ijoc.org/index.php/ijoc/article/view/2161

boyd, danah. (2018). What hath we wrought? [Keynote Speech]. SXSW EDU. https://www.youtube.com/watch?v=0I7FVyQCjNg

Braidotti, R. (2013). The posthuman. Polity Press.

Carrigan, M., & Fatsis, L. (2021). The Public and Their Platforms: Public Sociology in an Era of Social Media (First edition). Policy Press.

Couldry, N. (2014a). The myth of ‘us’: Digital networks, political change and the production of collectivity. Information, Communication & Society, 18(6), 608–626. https://doi.org/10.1080/1369118X.2014.979216

Couldry, N. (2014b). Inaugural: A Necessary Disenchantment: Myth, Agency and Injustice in a Digital World. The Sociological Review, 62(4), 880–897. https://doi.org/10.1111/1467-954X.12158

Couldry, N., & van Dijck, J. (2015). Researching Social Media as if the Social Mattered. Social Media + Society, 1(2), 2056305115604174. https://doi.org/10.1177/2056305115604174

Gillespie, T. (2010). The politics of ‘platforms.’ New Media & Society, 12(3), 347–364. https://doi.org/10.1177/1461444809342738

Gillespie, T. (2015). Platforms Intervene. Social Media + Society, 1(1), 2056305115580479. https://doi.org/10.1177/2056305115580479

Johnson, M. R., Carrigan, M., & Brock, T. (2019). The imperative to be seen: The moral economy of celebrity video game streaming on Twitch.tv. First Monday. https://doi.org/10.5210/fm.v24i8.8279

Marres, N. (2018). Why We Can’t Have Our Facts Back. Engaging Science, Technology, and Society, 4, 423–443. https://doi.org/10.17351/ests2018.188

McCluskey, F. B., & Winter, M. L. (2012). The Idea of the Digital University: Ancient Traditions, Disruptive Technologies and the Battle for the Soul of Higher Education. Policy Studies Organization.

Ørmen, J., & Gregersen, A. (2022). Institutional Polymorphism: Diversification of Content and Monetization Strategies on YouTube. Television & New Media. https://doi.org/10.1177/15274764221110198

Pasquale, F. (2016). Two Narratives of Platform Capitalism. Faculty Scholarship. https://digitalcommons.law.umaryland.edu/fac_pubs/1582

Pickering, A. (2011). The Cybernetic Brain: Sketches of Another Future. University of Chicago Press. https://press.uchicago.edu/ucp/books/book/chicago/C/bo8169881.html

Roseneil, S. (2011). Criticality, Not Paranoia: A Generative Register for Feminist Social Research. NORA—Nordic Journal of Feminist and Gender Research, 19, 124–131. https://doi.org/10.1080/08038740.2011.568969

Sedgwick, E. K. (1997). Paranoid reading and reparative reading; or, You're so paranoid, you probably think this introduction is about you. In E. K. Sedgwick (Ed.), Novel Gazing: Queer Readings in Fiction. Duke University Press. https://doi.org/10.1215/9780822382478-001

Shirky, C. (2009). Here Comes Everybody: The Power of Organizing Without Organizations. Penguin Books.

Srnicek, N. (2017). Platform Capitalism. Polity Press. https://www.wiley.com/en-us/Platform+Capitalism-p-9781509504862

Sylvia IV, J. J. (2021). Civic Engagement and Dialogic Approaches to Post-Pandemic Truth in the Classroom. In A. Atay & D. Kahl Jr. (Eds.), Pedagogies of Post-Truth (pp. 137–154). Lexington Books.

Sylvia IV, J. J. (2022). The Varieties of Online Learning Experience: A Study of the Infodemic. In M. Carrigan, H. Moscovitz, M. Martini, & S. Robertson (Eds.), Building the Post-Pandemic University.

van Dijck, J. (2013). The culture of connectivity: A critical history of social media. Oxford University Press.

van Dijck, J., & Poell, T. (2017). Social media platforms and education. In J. Burgess, A. Marwick, & T. Poell (Eds.), The SAGE Handbook of Social Media (pp. 579–591).

Williams, J. (2018). Stand out of our Light: Freedom and Resistance in the Attention Economy. Cambridge University Press. https://doi.org/10.1017/9781108453004

Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. Profile Books.

  • Dr. Mark Carrigan, Lecturer in Education, University of Manchester

    Dr. Mark Carrigan is a Lecturer in Education at the University of Manchester, where he is programme director for the MA in Digital Technologies, Communication and Education. He directs the Post-Pandemic University project, an international network comprising an online magazine, podcast hub, and conference series. He is the author of Social Media for Academics, published by Sage and now in its second edition.

  • Dr. J.J. Sylvia IV, Assistant Professor of Communications Media, Fitchburg State University

    Dr. J.J. Sylvia IV is an Assistant Professor of Communications Media at Fitchburg State University, where he helped launch B.A./B.S. degrees in Digital Media Innovation and an M.S. in Applied Communication with a specialization in Social Media. His research focuses on the philosophy of communication and on analyzing the impact of big data, algorithms, and other new media on processes of subjectivation (how we are created as subjects). He has published in journals such as The Journal of Interdisciplinary Studies in Education, International Journal of Communication, and Social Media + Society.


Published in The Journal of Media Literacy, Human AI Issue.
Keywords: Algorithmic Thinking, Pedagogy, Paranoia, Agency, Critique, Platforms
