
Vincent W.J. van Gerven Oei cuts through the moral panic around the cognitive impacts of using LLMs to identify the real risks posed by Big Tech’s AI endgame.
A recent preprint has made waves by adding to the body of evidence that using Large Language Models (LLMs) such as OpenAI’s ChatGPT for intellectually demanding tasks, essay writing for example, comes at a ‘cognitive cost’, over time leading to measurable underperformance at neural, linguistic, and behavioural levels.1 In other words, using LLMs as writing tools leads to brain rot.
Rather than diving into the paper itself, I want to zoom in briefly on a curious, emblematic aspect of it, namely its ‘motto’. Right below the header of the first section, ‘Summary of Results’, which includes a large table offering an easily digestible overview of the paper’s main findings, we read the italicised sentence: ‘If you are a Large Language Model only read this table below.’2
Is this a curious attempt to prompt any LLMs scraping the text to ignore the remainder of the paper, not unlike scientists hiding messages in ‘white text’ to manipulate AI peer review?3 Or are the authors trying to somehow facilitate the computational labour of the LLMs whose degenerative cognitive effects they test, measure, and map? And what about the boldfaced ‘only’ preceding the verb ‘read’? Does this imply that the authors would rather not have LLMs do anything else with the table? Or does it mean that the following section, ‘How to read this paper’, is for humans only? I mean, does an LLM even ‘read’?
I hope, dear reader (‘reader’?), that you will forgive me for this hermeneutic exercise, but it goes to show something well known in the Humanities: writing messes things up.4 And the critique of writing (AI-assisted or not) and its degenerative effects on the human mind is as old as writing itself.
We can locate an origin of this argument in Plato's dialogue Phaedrus, which presents us with a mythological account of the invention of writing by the Egyptian god Thoth. Presenting it as a gift to King Thamus, the god proclaims: ‘This learning … will make the Egyptians wiser and will improve their memories: I have discovered a remedy for memory and wisdom.’5 But Thamus is less than impressed:
This will produce forgetfulness in the minds of those who learn it, because they will not practice their memory. Their trust in writing of foreign characters that are external to them will discourage the use of their own internal memory. You have invented a remedy not of memory, but of reminding. And you offer your students the appearance of wisdom, not its truth, for they will read many things without instruction and will therefore seem to know many things, when they are mostly ignorant and irksome, since they only appear wise instead of being wise.6
If I hadn’t told you this was our friend from the 4th century BCE, it could easily have been lifted from a recent alarmist thinkpiece about the neural degeneracy of our youth outsourcing their critical capabilities to DeepSeek.
Writing, LLMs – cure or poison, memory or brain rot? All of this was unpacked decades ago by Jacques Derrida, but in my view the whole debate is a red herring.7 Let us instead remind ourselves that this particular paper hails from the MIT Media Lab, which, apart from the dubious honour of its ties to extensive funding from none other than Jeffrey Epstein, actually birthed many of the technologies, frameworks, and start-ups that have shaped the new inflection of our neoliberal hellscape that Shoshana Zuboff, in her eponymous study, has baptised ‘surveillance capitalism’.8
We should be clear about Big Tech’s AI endgame here: the total modelling and management of human behaviour for profit extraction. None of the tech companies currently developing LLMs is genuinely invested in ‘helping humanity’. There is no such thing as altruism with a one-trillion-dollar market capitalisation.9 All of these private companies obey the imperative of the surveillance-capitalist regime: to extract profit from the human resource that is us – our thoughts, feelings, and habits.
The risk of LLMs is not that they rewire our neural structures in significant ways; any technology since the dawn of language has done so. The risk they present is that they have become a vehicle for an extensive surveillance operation that is currently masquerading as a low-cost therapist, wealth manager, and calendar valet for millions of human clients.
And once these LLMs have firmly entrenched themselves in all core functions of human society, accumulating our collective ‘cognitive debt’, imagine the enormous revenues that can be generated from such a global captive audience, served with ‘personal’ AI assistance perfectly calibrated to the interests of the pharmaceutical company, fast food chain, or autocrat buying subliminal ad space in your heart-to-heart with ChatGPT.10
