
Writing Gaza
Omar Berrada & Shivangi Mariam Raj
As the object of observation and scrutiny rather than the subject of communication, the so-called ‘Middle East’ has long served as a locus for applying, refining, and expanding colonial technologies of mapping.1 Established through cartographic, cadastral, photographic, and photogrammetric methods, these technologies mined information and projected meaning onto individuals, populations, and places. Powered by artificial intelligence (AI), the legacy of these technologies is readily identifiable in the neocolonial ambition to extract (mine) and extrapolate (project) data in the name of predefining, if not predetermining, future movements, incidents, risks, and dangers.2
To fully understand the implications of this, we need to observe two distinct but closely related points. To begin with, AI is a predictive mechanism devised to forecast future patterns. It employs inferential methods to ‘reason’, or project, from premises (data) to conclusions (projections). Over time, these projections frequently act as definitive revelations rather than, as they in fact are, inductions – that is, probabilistic forecasts of potential future events or patterns that have yet to happen.3
Secondly, algorithmic projections, despite their propensity towards probabilistic inductions and so-called hallucinations (or, more accurately, errors), have acquired an insidious degree of diagnostic purposiveness. Imbued with an authority associated with a deceptive gloss of scientific objectivity, the operative logic of AI works towards generating solutions, or verdicts in the form of diagnoses. These prognostic and diagnostic methods are not abstractions insofar as they have a considerable impact on people and communities, nowhere more so than in zones of conflict. The impact generated by probabilistic inference (the event of targeting, for example) and diagnostic action (a missile strike) is all too visible in present-day cycles of data-centric warfare where information – extracted from and applied to specific topographies and communities – is used to eliminate perceived risks and threats.4
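The slippage from probabilistic induction to diagnostic verdict described in these two points can be made concrete with a deliberately minimal sketch. The code below is a hypothetical illustration in Python, not a description of any deployed targeting system: the function names, weights, and threshold are invented solely to show how a probability, once passed through an arbitrary cut-off, returns as a binary 'diagnosis' in which the underlying uncertainty is no longer visible.

```python
# Hypothetical sketch: a probabilistic forecast collapsed into a verdict.
# All names, weights, and thresholds are invented for illustration only.

def predicted_risk(features: dict) -> float:
    """Stand-in for an opaque model: returns a probability, not a fact."""
    # A toy weighted sum; a real model's internals would be far less legible.
    weights = {"proximity_to_event": 0.6, "pattern_match": 0.4}
    score = sum(weights[k] * features.get(k, 0.0) for k in weights)
    return min(max(score, 0.0), 1.0)

def verdict(features: dict, threshold: float = 0.7) -> str:
    """The diagnostic step: a probability is flattened into a binary decision."""
    probability = predicted_risk(features)
    # The induction (here, 0.72) disappears; only the 'diagnosis' is acted upon.
    return "flagged" if probability >= threshold else "not flagged"

print(verdict({"proximity_to_event": 0.8, "pattern_match": 0.6}))  # prints: flagged
```

The point of the sketch is not the arithmetic but the form of the output: whatever hedging the probability carried, what is acted upon is only the verdict.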
To the extent that the predictive and diagnostic capacity of AI today underwrites the deployment of unmanned aerial vehicles (UAVs), the use of Automated Target Recognition (ATR), and a pervasive reliance on Lethal Autonomous Weapons Systems (LAWS), we need to also observe the following: as a model of ‘algorithmic governance’ designed to manage and control populations and events, AI is a disciplinary technology.5 As evidenced in its cybernetic origins, its modus operandi has been historically geared towards both automation and ever more autonomous systems of decision-making designed to ‘govern’ future activity.6 When applied to military targeting, the regulative function of AI, a machinic contrivance geared towards prediction, seeks to prefigure future patterns of behaviour and activity in order to both control (govern) events and, in turn, eradicate perceived threats.7
As a calculated approach to governing, or managing, people, communities, and topographies, the extraction and application of data is historically embedded in the instrumentalist archetypes of computational projection that we find in the expansionist ambitions of colonialism. Expressing a place, object, or individual numerically, through cartographic coordinates, not only bestowed a privileged command on the colonial I/eye of the coloniser but also ensured that information could be operationalised and applied, however dubiously, to discipline subjects and dominate space. As Anne Godlewska points out, ‘[t]he emphasis on number and the instrumentality of knowledge has a strong association with cartography as mapping assigns a position to all places and objects. That position can be expressed numerically.’8
The martial instrumentalisation of numerical fixing, through cartographic means, in the context of data-centric warfare is demonstrably present in the AI-powered technologies we today associate with Wide Area Airborne Surveillance (WAAS) and Wide Area Persistent Surveillance Systems (WAPSS). Through defining subjects as computationally compliant with an imperialist calculus of life, the techno-scientific resonance of numeric fixing (measuring) not only announces and sustains the authority of the colonial gaze but also strives to fix those subjects as passive data rather than active agents.9 When we consider these practices in light of the extractive logic of AI today, it is evident that – through the relentless mining of data – individuals can become co-opted into a technological apparatus that is essentially a means to ‘datafy’ and thereafter objectify them.
The datafication of individuals – a method to transform a living phenomenon into quantum data – submits the subject as a passive data point, registered in an opaque, recursive system of unaccountable algorithmic computation. The ascendancy of algorithmic governance thereafter heralds a vision of the subject (‘individuals’) as disaggregated data points (‘dividuals’) to be exploited for the purpose of computational profiling: ‘The algorithmic government knows (in)dividuals not as persons endowed with (real or supposed) capabilities of will and understanding, undergoing pain and pleasure but only as temporary aggregates of infra-personal data exploitable at an industrial scale.’10
The procedures associated with datafication – or dividuation – can be understood in terms of ‘data colonialism’, the latter involving a process whereby human activities are co-opted as data in the pursuit of governing them.11 ‘Algorithmic colonialism, driven by profit maximization at any cost, assumes that the human soul, behaviour, and action is raw material free for the taking. Knowledge, authority, and power to sort, categorize, and order human activity rests with the technologist, for which we are merely data producing “human natural resources”.’12 In time, data, or digital artefacts, can be incrementally instrumentalised in the pursuit of ever more effective paradigms of manipulation and control: ‘As dynamic and interactive human activities and processes are automated, they are inherently simplified to the engineers’ and tech corporations’ subjective notions of what they mean. The reduction of complex social problems to a matter that can be “solved” by technology also treats people as passive objects for manipulation.’13
As was the case in colonialism, the extraction of wealth – land, labour, and natural resources – segues, without any lessening of its overarching extractive command, into a neocolonial era focused on the mining of the quantifiable traces of people. How, one might therefore ask, has the ideological impetus of colonialism – a profoundly predatory and exploitative historical process – become indelibly hardwired into an inscrutable mechanical apparatus (AI) that underwrites the foundations of social, cultural, economic, and political life in an electronically connected world?
Writing in his seminal volume Orientalism, Edward Said presciently observed that the function of the colonial gaze – and colonisation more broadly – was to ‘divide, deploy, schematize, tabulate, index, and record everything in sight (and out of sight)’.14 This is an apt premonition of the algorithmic ‘vision’ we have come to associate with a neocolonial world order. Perpetuated by AI, this order strives to quarter, appropriate, realign, predict, and record everything in sight – and, critically, everything out of sight. In noting as much, I want to highlight here Said’s injunction against historical complacencies, in particular when he asserted ‘that the Orientalist reality is both anti-human and persistent. Its scope, as much as its institutions and all-pervasive influence, lasts up to the present’.15
When considered alongside the reduction of future possibilities to the demands of neocolonialism, Orientalism’s legacy, the biased, anti-human and discursive production of the subaltern other, not only persists in our time but has mutated into an apparatus of conspicuous predation: through the unrelenting extraction of data and its dubious application, AI exploits ostensible patterns in data to predict and thereafter pre-empt (govern) future outcomes in the name of risk management. The presence of an anti-human sentiment is, likewise, exposed in the datafication of individuals, so much so that we now live in an era of ‘digital dehumanisation’.16
The logistics of extraction and the datafication of people – not to mention the violence perpetuated through such activities – were all too amply captured in Aimé Césaire’s succinct phrase: ‘colonisation = “thingification”’.17 Through this resonant formulation, Césaire highlighted both the inherent processes of dehumanisation practised by colonial powers and how this sought to produce the docile and productive – that is, silenced, pacified, and commodified – body of the colonised. As befitted his time, Césaire understood the depredations of colonisation primarily in terms of wealth extraction (raw materials), the occupation of land, and the exploitation of physical, indentured labour. However, his thesis remains key to any understanding of how colonisation endeavours to project unmitigated control over the future, if only to pre-empt and extinguish elements that do not accord with the avowed aims and priorities of imperialism: ‘I am talking about societies drained of their essence, cultures trampled underfoot, institutions undermined, lands confiscated, religions smashed, magnificent artistic creations destroyed, extraordinary possibilities wiped out.’18 The exploitation of raw materials, labour, and people, realised through the violent projections of western knowledge and power, employed a process of dehumanisation that deferred, if not truncated, the unlimited possibilities of future realities.
In the contemporary context of the ‘Middle East’, the imperative of threat prediction and the management of risk – the disavowal of future possibility in the name of pre-emptive war – is profoundly reliant on the deployment of AI. This was already discernible in 2003 when, in the lead-up to the invasion of Iraq, George W. Bush announced: ‘If we wait for threats to fully materialize, we will have waited too long.’19 Implied in Bush’s statement, whether he intended it or not, was the unspoken assumption that counter-terrorism and the prosecution of future wars would necessarily be aided by semi-autonomous, if not fully autonomous, weapons systems capable of maintaining and supporting the military strategy of anticipatory and preventative self-defence. To predict threat, this logic goes, you have to see further than the human eye and act quicker (anticipate) than the human brain; to pre-empt threat, to use Donald Rumsfeld’s now infamous phrase, you have to be ready to determine and exclude (eradicate) the ‘unknown unknowns’, or that which remains ‘out of sight’.20
Although it is admittedly a historical mainstay of military tactics, the contemporary use of pre-emptive, or anticipatory, self-defence – the so-called Bush doctrine – is now understood to be one of the more questionable legacies of the attacks on the US on 11 September 2001. The invasion of Iraq in 2003, without evidence of Iraqi involvement in the events of 9/11, was a pre-emptive war waged by the US and its erstwhile allies to militate against such attacks in the future and mitigate the domain of ‘unknown unknowns’.21 In essence, it was a neocolonial war that, to recall Césaire’s earlier admonition of colonisation, ensured that societies, cultures, institutions, lands, religions, artistic creations were destroyed and the extraordinary fact of future possibility precipitously erased.22
In keeping with the ambition to predict ‘unknown unknowns’ in the name of national security, Alex Karp, the CEO of Palantir, wrote an opinion piece for the New York Times in July 2023.23 Karp’s summary of the continued dilemmas in the application of AI systems in warfare, including the peril of machines that run amok, needs to be taken seriously insofar as he is one of the few with an insider’s insight into the future deployment of AI in data-centric warfare.24 Admitting that the most recent versions of AI, including the so-called Large Language Models (LLMs) popularised by ChatGPT and other products, are impossible for user and programmer alike to understand, Karp freely accepted that what ‘has emerged from that trillion-dimensional space is opaque and mysterious’.25 It would appear, however, that the ‘known unknowns’ of AI, the recursive opacities of its iterative logic (not to mention its demonstrable inclination towards erroneous predictions or outright hallucinations), can nevertheless be counted on to predict the ‘unknown unknowns’ associated with the forecasting of threat – at least, that is, in the realm of the predictive simulations championed by Palantir and others in the ever-expanding sphere of data-centric warfare.26
Widely seen as the leading proponent of AI and predictive analytics, Palantir seldom hesitates when it comes to advocating the expansion of these technologies in contemporary wars, policing, information management, and data analytics more broadly. In tune with its avowed ambition to see AI more fully incorporated into conflicts, its website is forthright on this matter. We learn, in relation to both AI and machine learning (ML) in general, that ‘new aviation modernization efforts extend the reach of Army intelligence, manpower and equipment to dynamically deter the threat at extended range. At Palantir, we deploy AI/ML-enabled solutions onto airborne platforms so that users can see farther, generate insights faster and react at the speed of relevance.’27 We can only surmise what reacting ‘at the speed of relevance’ means here, but presumably it has to do with the pre-emptive martial logic of autonomously anticipating and eradicating, through the affordances of AI, future threats – real or imagined – before they become manifest.28 To do this, the technology needs to predict, if not exhaust, the future; it needs to manage the extra-ordinary possibility of events that have yet to happen.
Published 20 years after the invasion of Iraq, Karp’s article proposed that current threats to US security called for pre-emptive warfare. Incorporating as it does the prophetic, if not oracle-like, capacities of predictive systems, the use of AI in warfare would, he further argued, ‘shape the international politics of this century in the way nuclear arms shaped the last one’.29 It is not only the very ideal of an American future that is presently at stake, it would seem, but also the question of who controls the future as a quantifiable, predictable space that remains beholden to western interests.
Insofar as the predictive and diagnostic dimension of AI remains profoundly reliant on inequitable structures of suppression, silencing, and eradication, its disciplinary apparatus discloses an ‘epistemic violence’: the event of datafication not only seeks to predefine realities but silence voices and the agency of others according to the self-interested goals of neocolonialism.30 For all the professed viability, if not validity, of the AI apparatuses deployed in wide-area persistent surveillance systems (WAPSS) and autonomous weapons systems, we need to consistently foreground the degree to which ‘algorithms are political in the sense that they help to make the world appear in certain ways rather than others. Speaking of algorithmic politics in this sense, then, refers to the idea that realities are never given but brought into being and actualized in and through algorithmic systems.’31
The technological affordances of AI reveal, furthermore, the paradoxical capacity of predictive algorithms to provoke the very event they purport to augur. To this end, a ‘paradox arises from, on the one hand, the capacity of algorithms to make happen what they predict, and, on the other, that attempting to predict the future threatens to close its open horizon … a prediction that was intended to cope with the uncertainty of the future can quickly transform into a certainty that may turn out to be illusory’.32 Through unfettered access to data, systemic calculations, systematic training, algorithmic rationalisations, expanded infrastructures, and interconnected networks, AI has the capacity to not only predefine events that have yet to happen but summon forth a spectre of threat and danger, based on the political calculus of imminent risk, that can be then extinguished. In this mise en abyme, certain classes of data will be significantly overrepresented or underrepresented compared to others, thus ensuring that any bias in the data-labelling or training stage will be subjected to ‘algorithmic amplification’ in the output stage of prediction.33 Any prediction based on input data – for example, labelled images extracted from conflict zones – arguably stimulates computational exemplars of paranoiac projection in the pursuit of extra-terrestrial dominion and terrestrial dominance.
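The ‘algorithmic amplification’ referred to above can likewise be illustrated with a toy example. The following sketch is hypothetical: the labels, the 70/30 skew, and the trivial ‘majority’ model are invented solely to show how an overrepresentation introduced at the data-labelling or training stage can reappear, magnified, at the output stage of prediction.

```python
# Hypothetical illustration of algorithmic amplification: with no informative
# features, a label that dominates the training data dominates predictions even more.

# Toy training labels with an invented 70/30 skew.
train_labels = ["threat"] * 70 + ["no threat"] * 30

def fit_majority_classifier(labels):
    """A minimal 'model': it learns nothing but the most frequent label."""
    majority = max(set(labels), key=labels.count)
    return lambda _features: majority

model = fit_majority_classifier(train_labels)

# At prediction time, every input receives the majority label.
predictions = [model(None) for _ in range(100)]
print("share labelled 'threat' in training:", train_labels.count("threat") / 100)  # 0.7
print("share labelled 'threat' in output:", predictions.count("threat") / 100)     # 1.0
```

A 70 per cent skew in the labelled data becomes a 100 per cent skew in the predictions: the bias introduced at the labelling stage is not merely preserved but amplified.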
In light of these points, the abiding, if not urgent, concern revolves around the extent to which a perceived threat, based on a recursive calculus performed by a machinic apparatus – that is, an AI – is invariably responded to as if it were an actual threat: ‘When self-fulfilling prophecies begin to proliferate, we risk returning to a deterministic worldview in which the future appears as predetermined and hence closed. The space vital to imagining what could be otherwise begins to shrink.’34 When we consider that the algorithmically perceived event of ‘threat’ can be calculated, or summoned forth, then the nominal notion of ‘epistemic violence’ – often invoked to describe the silencing or discounting of subaltern voices and agencies – finds an all too amenable counterpart in the extirpation, rather than just silencing, of the other.
The devolution of decision-making processes – relating to questions of life and death – to autonomous, algorithmically augmented systems divulges a fatal link between colonial technologies and the opaque realm of unaccountable, AI-powered apparatuses. It also raises the question as to who, or what, is making decisions in data-centric warfare and the degree to which AI has become an alibi for mass annihilation.35 To accept, even in part, that the ambition to ‘see farther’ is contingent upon unfathomable apparatuses is to raise the question: what will be the future of death in an algorithmic age and who – or, more precisely, what – will get to decide its biopolitical and legal definitions?
Palantir’s stated objective to produce predictive structures and AI solutions that enable military planners to ‘see farther’ is not only ample corroboration of its reliance on the inductive, or predictive, qualities of AI but, given its ascendant position in relation to the US government and the Pentagon, a clear indication of how such technologies will determine the prosecution and outcomes of future wars.36 This will no doubt further bolster a regime of algorithmic ‘truth’, an imaginative geography, or command, that will encode AI as a disciplinary technology capable of governing populations and occupying realities without having to actually occupy a territory with so-called ‘boots on the ground’.
The ambition to see farther supports, finally, the neocolonial ambition to see that which cannot be seen – or, indeed, to see that which can only be seen through the calculus of the algorithmic gaze and its rationalisation of future realities. It is from within this phantasmal space of eternal menace – where the computation of a seemingly unsurpassable existential risk precipitates its summary eradication – that we can begin to see how the widespread use of algorithmically induced targets is designed to project threat, ad infinitum, into the (un)foreseeable future. Subject to martial, ideological, and algorithmic extrapolations, such futures would appear, for now at least, to be pre-occupied by technologies over which we have less and less control. These apprehensions, in our era of supposedly unending emergency (a convenient clarion call for unremitting forms of aerial surveillance), are far from regional. The subject of contemporary and future wars will be formulated in the shadow of these systems, and we will need to identify and distinguish, as a matter of due legal process, the ongoing impact of AI on the material realities of everyday life and, indeed, death.