Critical AI Literacy: Beyond hegemonic perspectives on sustainability
How can universities resist being co-opted and corrupted by the AI industries’ agendas?
Marcela Suarez1, Barbara C. N. Müller2, Olivia Guest1, and Iris van Rooij1
1 Department of Cognitive Science and Artificial Intelligence, Radboud University
2 Communication Science Group, Behavioural Science Institute, Radboud University
“With a devastating ecological footprint and the exploitation of hidden labour, hyped-AI serves to amplify discrimination and other social, economic and environmental injustices.” — van Rooij & Guest (2025)
Radboud University envisions a greater role for ‘AI’ (short for Artificial Intelligence) in its new mission.1 This is cause for reflection, to say the least, since Radboud University is committed to contributing to a sustainable and equitable world through cultivating critical thinking. The currently popular, widespread, and hyped AI technologies stand in great tension with these stated values, and so a focus on AI seems to imply giving up on them. In this short blog post, we illustrate just how deep these problems go.
Hegemonic AI is unsustainable
The AI hype promoted by tech corporations2 has resulted in the global expansion of hyper-scale data centers. These facilities’ extensive computational infrastructure requires intensive use of resources such as electricity, water, labor, and land to develop generative image models, which produce synthetic images, and large language models (LLMs), which do the same for text. Since tech companies do not report their energy and water consumption, experts now warn of the increasing environmental harms caused by AI, with one critical study estimating that the AI industry could consume as much energy as the Netherlands by 2027.3 Meanwhile, and alarmingly, companies are reportedly investing in reopening nuclear power facilities to cover the vast amounts of energy that AI technologies will require in the coming years.4 Some companies are even reportedly powering their facilities with ‘illegal’ methane gas generators.5
Sadly, energy is not the only public good whose sustainability is compromised by AI hype: water is indispensable for data centers, too. Millions of liters of water are needed to cool the extensive computational infrastructures on which LLMs are developed and used. One study notes that writing a 100-word email with AI technologies consumes half a liter of water and enough energy to fully charge seven mobile phones.6 Another study estimates that, by 2027, the AI industry will use half as much water as the United Kingdom does annually.7 Moreover, there are further hidden environmental costs of water, energy, and other minerals and metals along the AI supply chain, not to mention the electronic waste that ends up in countries of the Global South.8
These social and environmental harms of AI technologies have very material consequences in the territories that sustain the data centers supporting systems like LLMs.9 Across the globe, from Mexico to Taiwan, communities living next to data centers report electricity outages, light pollution, and water scarcity. This is particularly worrying in the Global South, which is rich in resources and so attracts predation by AI companies.10 These corporations are intensifying the extractivist business model, which runs along extant colonial lines, to secure AI computational power. That is to say, natural and human resources are stolen from local territories, leading to scarcity, displacement, and violence.

Hegemonic AI pervades public spheres
In spite of their enormous resource costs, AI technologies are being pushed on society without critical reflection: in education, government, banking, science, journalism, and police and military forces, and across a wide range of applications, such as browsers, video calls, and other software. These applications supposedly help us in some way by making us work more efficiently and effectively.11 In reality, however, AI technologies reinforce social inequalities and harms.12 These technologies are increasingly implicated, for example, in allegations of racism, sexism, plagiarism, and violations of copyright and of professional integrity,13 just to mention a few.
Although it is claimed otherwise, AI technologies are never gratis: they come at a high cost for our planet and society, and even more so for communities that have been historically oppressed by colonial power structures.14 Despite gradually increasing critique in the media of the sustainability issues surrounding AI, much public confusion and misinformation persists. This is caused in part by the AI industry, which, on the one hand, aggressively lobbies against regulation and, on the other hand, describes itself as “green” and as working towards sustainability, a maneuver known as greenwashing. Through this contradiction, these companies obscure the energy and water consumption of their data centers as well as their societal and political harms.15 AI technologies are thus presented as indispensable to society, suggesting that only with their help can we fight, for example, the climate crisis, a stance known as technosolutionism. This even goes as far as suggestions that we all need to sacrifice our planet for the unrealistic promises of the AI hype.16 Obviously, there is no planet B,17 and the Earth is more important than AI technologies with high environmental and social costs.
Sustainable AI: Towards Critical AI Literacy
Radboud University can no longer avoid critically reflecting on AI. We need to raise awareness of how these technologies harm people and ruin our planet.18 When we critique AI, we should do so with intellectual honesty and in a principled way. Utmost care is needed to avoid ethics washing, greenwashing, and, more generally, what we dub critical washing.19 After all, critical reflection cannot be reduced to a mere ‘check box’ to tick before unleashing AI technologies on our daily lives (as has been done in other areas, such as vaping and combustion engines). Researching and reflecting on the harms of AI is not itself harm reduction. It may even contribute to rationalizing, normalizing, and enabling harm. Critical reflection without appropriate action is thus quintessentially critical washing.
Universities are not spokespersons for the AI industry. On the contrary, we need to resist being co-opted and corrupted by the industries’ agendas. This requires deep commitment to the mission of universities, our academic values, our scientific integrity, and our critical scholarship. Whether one is a working academic or a member of the administrative and support staff, we all have a part to play in protecting and upholding these values and standards. Universities have a responsibility not to blindly adopt AI in research, in education for staff and students, and in administration under the guise of helping society move forward and prosper, or of preparing students and staff for an imagined future. We do not have to limit ourselves to the AI industry’s or the UN’s and EU’s legal frameworks for a shared vision of our ethical, ecological, and social responsibilities: these have failed us before. Instead, universities need to take on the mantle of a critical voice and of knowledge producers. We need to increase Critical AI Literacy in students, in scientists, and in other professionals.20 This is the role that universities are duty-bound to play in times of planetary crises.
Notes
1. https://www.radboudrecharge.nl/en/article/president-of-the-executive-board-alexandra-van-huffelen-there-is-no-room-for-complacency-when-it-comes-to-the-budget-cuts
2. Iris van Rooij and Olivia Guest, “Don’t Believe the Hype: AGI Is Far from Inevitable,” Radboud University, September 29, 2024, https://www.ru.nl/en/research/research-news/dont-believe-the-hype-agi-is-far-from-inevitable.
3. Zoe Kleinman and Chris Vallance, “Warning AI Industry Could Use as Much Energy as the Netherlands,” BBC, October 10, 2023, https://www.bbc.com/news/technology-67053139.
4. Ivan Penn and Karen Weise, “Hungry for Energy, Amazon, Google and Microsoft Turn to Nuclear Power,” The New York Times, October 16, 2024, sec. Business, https://www.nytimes.com/2024/10/16/business/energy-environment/amazon-google-microsoft-nuclear-energy.html.
5. Dara Kerr, “Elon Musk’s xAI Powering Its Facility in Memphis with ‘Illegal’ Generators,” The Guardian, April 10, 2025, sec. US news, https://www.theguardian.com/us-news/2025/apr/09/elon-musk-xai-memphis.
6. Mark Sellman and Adam Vaughan, “‘Thirsty’ ChatGPT Uses Four Times More Water than Previously Thought,” The Times, October 4, 2024, https://www.thetimes.com/uk/technology-uk/article/thirsty-chatgpt-uses-four-times-more-water-than-previously-thought-bc0pqswdr.
7. Pengfei Li et al., “Making AI Less ‘Thirsty’: Uncovering and Addressing the Secret Water Footprint of AI Models” (arXiv, 2023), https://doi.org/10.48550/ARXIV.2304.03271.
8. Ana Valdivia, “The Supply Chain Capitalism of AI: A Call to (Re)Think Algorithmic Harms and Resistance through Environmental Lens,” Information, Communication & Society, 2024, 1–17, https://doi.org/10.1080/1369118X.2024.2420021; Gerrit De Vynck, “The AI Boom May Unleash a Global Surge in Electronic Waste,” The Washington Post, October 29, 2024, https://www.washingtonpost.com/technology/2024/10/29/ai-electronic-waste-recycling/.
9. Valdivia, “The Supply Chain Capitalism of AI.”
10. Antonio A. Casilli et al., “Global Inequalities in the Production of Artificial Intelligence: A Four-Country Study on Data Work” (arXiv, October 18, 2024), https://doi.org/10.48550/arXiv.2410.14230.
11. Lisanne Bainbridge, “Ironies of Automation,” Automatica 19, no. 6 (November 1983): 775–79, https://doi.org/10.1016/0005-1098(83)90046-8.
12. Grant Fergusson et al., “Generating Harms: Generative AI’s Impact & Paths Forward” (Electronic Privacy Information Center (EPIC), 2023), https://epic.org/documents/generating-harms-generative-ais-impact-paths-forward/; Casilli et al., “Global Inequalities in the Production of Artificial Intelligence.”
13. Mark Dingemanse, “Generative AI and Research Integrity” (OSF, May 14, 2024), https://doi.org/10.31219/osf.io/2c48n.
14. Paola Ricaurte, “Ethics for the Majority World: AI and the Question of Violence at Scale,” Media, Culture & Society 44, no. 4 (May 2022): 726–45, https://doi.org/10.1177/01634437221099612; Abeba Birhane, “Algorithmic Colonization of Africa,” SCRIPT-Ed 17, no. 2 (August 6, 2020): 389–409, https://doi.org/10.2966/scrip.170220.389.
15. Isabel O’Brien, “Data Center Emissions Probably 662% Higher than Big Tech Claims. Can It Keep up the Ruse?,” The Guardian, September 15, 2024, sec. Technology, https://www.theguardian.com/technology/2024/sep/15/data-center-gas-emissions-tech.
16. See for example: Reuters, “OpenAI CEO Sam Altman Says at Davos Future AI Depends on Energy Breakthrough,” The Economic Times, January 16, 2024, https://economictimes.indiatimes.com/tech/technology/openai-ceo-sam-altman-says-at-davos-future-ai-depends-on-energy-breakthrough/articleshow/106906470.cms?from=mdr.
17. Mike Berners-Lee, There Is No Planet B: A Handbook for the Make or Break Years (Cambridge: Cambridge University Press, 2019).
18. Adamantia Rachovitsa and Niclas Johann, “The Human Rights Implications of the Use of AI in the Digital Welfare State: Lessons Learned from the Dutch SyRI Case,” Human Rights Law Review 22, no. 2 (June 1, 2022): ngac010, https://doi.org/10.1093/hrlr/ngac010; Melissa Heikkilä, “Dutch Scandal Serves as a Warning for Europe over Risks of Using Algorithms,” Politico, March 29, 2022, https://www.politico.eu/article/dutch-scandal-serves-as-a-warning-for-europe-over-risks-of-using-algorithms/; Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (New York: Crown, 2017).
19. Ricaurte, “Ethics for the Majority World”; RCSC author collective, “A Sustainability Manifesto for Higher Education,” 2023, https://www.earthsystemgovernance.org/2023radboud/, https://repository.ubn.ru.nl/handle/2066/301240.
20. Samuel H. Forbes and Olivia Guest, “To Improve Literacy, Improve Equality in Education, Not Large Language Models,” Cognitive Science 49, no. 4 (April 2025): e70058, https://doi.org/10.1111/cogs.70058; Abeba Birhane and Olivia Guest, “Towards Decolonising Computational Sciences,” Kvinder, Køn & Forskning, no. 2 (February 8, 2021): 60–73, https://doi.org/10.7146/kkf.v29i2.124899.