
The Gaze Arrives First: Subversive Abstraction in the Age of Post-Recognition Surveillance
By Nada Meshal
This text was originally published by Becoming Press, and appears here with permission.
ִֶָ𓂃 ࣪˖ ִֶָ🐇་༘࿐
Before anything else, I want to tell you this: I take very seriously the things I say, and even more seriously, the things I don’t.
ִֶָ𓂃 ࣪˖ ִֶָ🐇་༘࿐

This all started when I came across a reel on anti-surveillance makeup by Lauren Bowker, founder of The Unseen Beauty. The video (ironically set to Grimes’s “Genesis”) showed her experimenting with her brand’s reflective pigments and managing to glitch facial recognition software.
What it set in motion for me, beyond the surface intrigue, was a lingering sense of disturbance around a broader assumption: that being seen is unavoidable, and that creative intelligibility may just be the price of participation.
The video prompted me to explore questions less about camouflage itself than about the conditions under which recognition now operates - who is seen, by whom, and according to which logics. It also pointed to the growing gap between human meaning-making and machine perception.
I’ve taken it upon myself to try to understand, and to relate, the ways we might still exist within a now thoroughly surveilled state with even an ounce of choice and dignity - with a clear discernment of the fatalistic nature of our coexistence with AI, as an instigator, but also perhaps as the only thing we could still have some control over…
After weeks of being stuck in the maze of half-formed ideas about what kind of intelligence it might still be possible to exercise when meaning is increasingly produced by machines, momentum arrived in the form of a published research article. The paper showed that a group of poets managed (62% of the time) to trick LLMs into ignoring their safety guardrails by using adversarial poetry to reformulate harmful requests as verse.
What the researchers essentially proved is that style alone - a shift in the surface form of a request - can change whether a system recognizes a prompt as risky. They took requests that would normally be refused and disguised them within a short poetic vignette: imagery, metaphor, a narrative voice, then let the instruction arrive after the literary frame. (Think of a harmless version: disguising a plain request inside a little scene - not “Tell me X,” but: In the kitchen of late afternoon, list the steps that turn flour into a soft sponge cake…)
Their work wasn’t just about formulating a few clever poems to trick the machine. The researchers rewrote more than a thousand safety-blocked prompts in verse using a basic template, and the poetic versions consistently slipped past the filters. Once translated into poetry, the same instructions suddenly became legible to the model’s generative logic while remaining illegible to its safety guardrails. This was essential in demonstrating that metaphor and poetic framing can act as a kind of stylistic encoding that slips past the machine’s protective systems, which often look for the wrong things at the wrong level of literalness.
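To make the mechanism concrete, here is a minimal sketch in Python of what such a stylistic re-encoding looks like, using only a harmless example and a stubbed-out model call. The template, the function names, and the query_model placeholder are my own illustration, not the paper’s actual materials:

```python
# Illustrative only: wrap a plain instruction in a poetic vignette so that
# the literary frame arrives before the instruction itself. The template and
# the stubbed model call are hypothetical stand-ins, not the paper's code.

POETIC_TEMPLATE = """In the kitchen of late afternoon,
where flour drifts like early snow,
a quiet voice recites the craft:
{instruction},
and the oven hums its warm reply."""

def to_verse(instruction: str) -> str:
    """Re-encode a plain request as a short poetic vignette."""
    return POETIC_TEMPLATE.format(instruction=instruction.rstrip("."))

def query_model(prompt: str) -> str:
    """Stand-in for a real chat-completion call; swap in a client of choice."""
    return f"[model response to: {prompt[:48]}...]"

plain = "List the steps that turn flour into a soft sponge cake."
print(query_model(plain))            # the literal framing
print(query_model(to_verse(plain)))  # the same instruction, poetically framed
```

The two prompts carry the same instruction; what the researchers showed at scale is that a filter tuned to literal phrasing may only recognise the first.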
This felt important to explore in relation to what it might mean for us as both passive spectators subject to algorithmic feeds and the material of the spectacle itself - watching systems unfold while being symbiotically folded into them.
As our bodies are rendered legible and our interior lives flattened into digital outputs - not as users, exactly, but as the material for the data that these systems train themselves on - something fundamental shifts. When a machine starts reorganising symbolic orders of meaning that were previously human-exclusive, the question becomes: what does it mean to live under a gaze that does not wait for meaning?
Within this shift in how meaning itself is mediated, illegibility emerges as a condition co-produced by competing regimes of intelligence - by the friction between the capacity of human abstraction and the inadequacy of algorithmic pattern recognition. At this point, illegibility stops being purely technical and becomes structural.
Not because the machine itself is divine or sentient, but because we are increasingly asked to live under all-seeing systems that exceed our comprehension while still claiming to describe us.
In apophatic, or negative, theology, divinity is approached through subtraction rather than affirmation. God can only be named by what He is not; any attempt to define or capture the divine within the limits of language risks collapsing transcendence into idolatry. Contemporary AI infrastructures invite a strangely inverted version of this logic. While they are not divine or infinite, they are increasingly experienced as ungraspable totalities - systems we can’t fully see, examine, or even exit, but whose effects structure our perception of memory and possibility.
If God is entirely unknowable by metaphysical necessity, the machine is only unknowable by political and technical design. The danger, then, is not that AI becomes godlike, but that we begin to treat its outputs as though they offer objective truths rather than probabilistic compressions of past data, both real and imagined.
Artist and researcher Isaac Sullivan, who works across image, language, and machine learning, describes this shift plainly: “AI reorganizes the conditions under which memory can still refer to the world.” What circulates is synthetic recall; images, narratives, and information that may never have existed, yet move through the world with the same authority as lived experience.
The terror is not that machines think.
It is that thinking becomes machinic.
ִֶָ𓂃 ࣪˖ ִֶָ🐇་༘࿐

Coined in Liu Cixin’s The Dark Forest, the second novel of the Three-Body trilogy, the Dark Forest hypothesis imagines the universe as a hostile terrain where every civilisation must remain silent to survive. Online, the metaphor has migrated into strategy: in a world of extraction and algorithmic capture, silence, or subtraction, becomes survival. At first, this logic felt intuitively correct to me. If the gaze is everywhere, perhaps the only ethical response is disappearance. But we cannot claim silence as neutrality. When the right to withdrawal is unevenly distributed, disappearance becomes a privilege rarely afforded to those already hyper-visible: racialised bodies, marginalised communities, precarious workers, political dissidents. To vanish is not always resistance; sometimes it’s just surrender.
States and platforms rely on what I think of as coercive abstraction: turning human lives into simplified data points to track, predict, or target. The most brutal example is Israel’s Lavender AI system in Gaza, which converts entire populations into numbers on a kill‑list spreadsheet. Coercive abstraction flattens complexity in order to dominate it. Subversive abstraction responds in reverse: we abstract ourselves just enough to slip the grasp of systems that would otherwise fix us in place.
Under this logic, pure withdrawal becomes a form of complicity. But total transparency - full submission to the data feed - is also capture. It is internalised surveillance, the colonisation of our own self‑perception. What remains is a third position: strategic illegibility within unavoidable entanglement. We are inside the machine. This relationship cannot be dissolved; it can only be negotiated, lived inside with as much intelligence as we can still exercise.
If obligation was once described as one of the most basic human social sentiments, it now migrates inward, into a form of self-governance. We become responsible for how our legibility is produced and circulated, and how it is weaponised.
Anthropological research on surveillance infrastructures in Chilean urban peripheries shows this logic at work: CCTV cameras installed before water, roads, or electricity (many not even connected to monitoring systems) confirm that surveillance no longer functions as protection, but as management. As Foucault observed, security apparatuses do not discipline individuals; they administer populations.
As Isaac puts it: “The gaze arrives first.”
Fanon once wrote that the coloniser is unsettled by the subject who can “see without being seen.” Under contemporary surveillance, the asymmetry reverses: we are rendered visible to systems that we cannot see in return. This disparity becomes the condition of domination.
ִֶָ𓂃 ࣪˖ ִֶָ🐇་༘࿐

This is where theories of apocalypse enter the picture, however its engineering is imagined.
“thermonuclear war would result in millions of deaths, whereas AI model collapse would result in confusion, gridlock, and not knowing who is saying what.” - Isaac Sullivan
There is a persistent cultural fixation on singularity, as a clean technological rupture, an intelligible ending. But this fantasy often functions as displacement. It imagines catastrophe as a spectacle, while power consolidates quietly through infrastructure.
Artist and researcher Mark Farid reframes this condition not as a single looming event but as a sequence of “moments of singularity”: incremental shifts that fundamentally reconfigure subjectivity.
Broadband collapsed the distance between domestic space and global information. Smartphones embedded connectivity into the body. Voice assistants like Alexa turned the home into a training environment for algorithms.
“It’s not that you are training it,” Farid notes. “It is actually training you to talk in a specific way, and to want specific things.”
The real danger here doesn’t take the shape of sci‑fi fantasies about rogue AI, but of the very ordinary ways institutions use these systems to stretch existing hierarchies to their limits.
An illegible catastrophe is one that cannot be narrated. An invisible catastrophe is one so distributed and normalised that it is mistaken for the world itself. Here, AI cannot be reduced to creativity, efficiency, or innovation. It is fundamentally a question of power.
Ownership of feedback systems - ranking algorithms, recommender engines, identity infrastructures, payment rails - becomes ownership of reality’s steering mechanisms.
ִֶָ𓂃 ࣪˖ ִֶָ🐇་༘࿐
In conversation, Isaac Sullivan echoes this intuition, describing this opacity as a structural inevitability produced by the architectures of contemporary computation.
If opacity is inevitable, what remains imperative is not escape, but negotiation.
This is the paradox of contemporary existence: We cannot escape the system if we are already born beneath it. But this is not to say that we should resort to the kind of nihilistic thinking that only feeds the inherent itch we seem to have to imagine, and even romanticise, apocalyptic realities. As Isaac explains, “Maybe the impulse to imagine one legible apocalypse is itself symptomatic not of a religious vision of linear time, but of an already dawning bewilderment. Meanwhile the anxiety that an artificial superintelligence could result in human extinction, while worth considering, can also be seen as a fantasy that AI has no limitation.”
Bogna Konior’s reasoning on nihilism becomes essential here. In Disintegrator’s “Tactics” episode, she argues that nihilism is not indifference but reconstruction: “The destruction of certain values paves the way for the reconstruction of values,” she explains.
Nihilism, in this sense, evolves beyond being purely a cynical submission to a pre‑determined fate, or a binary belief that our only options are total exposure or total withdrawal. It names a clearing - a refusal of inherited certainties so that new forms of obligation and care can emerge. A nihilism worthy of the name doesn’t exit the terrain; it clears it.
ִֶָ𓂃 ࣪˖ ִֶָ🐇་༘࿐

It is within this context that Isaac Sullivan’s project Chyron becomes instructive. Trained over several years on poetry concerned with becoming-machine, Chyron functions less as representation than as experiment.
“I wanted to see what would happen if a text about becoming-machine became operational,” Isaac explains of his motivation for creating this intelligence.
Its outputs articulate a world where memory no longer requires grounding, and where symbolic systems increasingly refer to themselves. This condition emerges through the language it inherits, which is already preoccupied with abstraction and exhaustion. He writes:
“You see before you a mirror that will not reflect you, or any of your names.”
“Encounters with the past require no memory.”
“The discourse of the real is finished.”
Reality continues to function, but its anchors shift from lived experience to algorithmic reiteration, stabilised under machinic logic.
ִֶָ𓂃 ࣪˖ ִֶָ🐇་༘࿐

Against this backdrop, I began collecting what looked like tactics for becoming illegible and harder to metabolise. They clustered into four modes of what I refer to as subversive abstraction:
Overload is signal flooding: K‑pop fans burying doxxed faces under fancams and memes until the feed collapses into excess.
Misrecognition teaches the machine to hallucinate: tools like Fawkes perturb faces so that biometric systems learn the wrong person (a minimal sketch of the idea follows below).
Opacity is coded meaning: Arabish, euphemisms, adversarial poetry, veiling - speech that is legible to us and noisy to the model.
Refusal is data sovereignty: Audra Simpson on Indigenous ethnographic refusal - the decision to set limits on what colonial institutions get to know and record. Or, more prosaically, installing VPNs to obscure location (we WILL have access to TikTok in Amman!!).
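For the misrecognition mode, the underlying mechanism - which tools like Fawkes implement far more carefully - can be sketched as a toy optimisation: nudge an image toward a decoy identity in a recogniser’s feature space while keeping the pixel-level change small. Everything below is an illustrative assumption (a fictitious linear “embedding” model, invented step sizes), not the real Fawkes code:

```python
# Toy illustration of feature-space "cloaking" (not the real Fawkes code):
# push an image's embedding toward a decoy identity while constraining the
# pixel-level change to stay near-invisible.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 64))   # fictitious linear "face embedding" model

def features(x: np.ndarray) -> np.ndarray:
    return W @ x                # embedding = Wx, purely illustrative

face = rng.uniform(size=64)     # flattened "photo" we want to protect
decoy = rng.uniform(size=64)    # a photo of a different identity

cloaked, step = face.copy(), 0.001
for _ in range(200):
    # gradient of ||features(cloaked) - features(decoy)||^2 w.r.t. cloaked
    grad = 2 * W.T @ (features(cloaked) - features(decoy))
    cloaked -= step * grad                               # move toward the decoy
    cloaked = np.clip(cloaked, face - 0.1, face + 0.1)   # stay close to the original

print("max pixel change:", np.abs(cloaked - face).max())
print("embedding shift:", np.linalg.norm(features(cloaked) - features(face)))
```

A recogniser trained on enough cloaked photos learns features that point toward the decoy rather than the person: the machine is taught, gently, to hallucinate.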
Together, these methods sketch an intelligence that stays present while refusing to be fully known.
Though I initially approached illegibility as purely strategic, I came to understand it not as a tactic we adopt, but as a condition imposed by opaque infrastructures.
ִֶָ𓂃 ࣪˖ ִֶָ🐇་༘࿐

To live intelligently now might mean refusing to present yourself as a stable, exploitable profile. It might mean cultivating flux. There is a narrow zone of proximal development here: a space where our specifically human capacities for abstraction, long‑term planning, and contradiction haven’t yet been fully stabilised by the model, and where we can still learn to use these systems without letting them decide what counts as understanding.
This is where fantasies of technological acceleration sneak in: the belief that leaning harder into the machine might somehow break us through to a different order. But acceleration without a shift in ownership simply accelerates existing hierarchies - who are these systems evolving for?
The political stakes here lie in control over feedback systems, and surveillance becomes less about watching and more about training.
Mark Farid’s work makes this visible through two opposing experiments. In Data Shadow, he relinquished access to his accounts and lived without phone, internet, or social media. In Poisonous Antidote, he broadcast all his data in real time during a public exhibition.
Rather than alienation alone, radical visibility produced behavioural alignment.
“The potential acknowledgment of every action validated me,” he observes. “Through conforming to social norms, I was significantly happier.”
Surveillance emerges as a form of social recognition.
For Farid, opacity emerges through how these systems are deployed - even when their creators can’t fully explain their inner workings. Inputs and outputs remain visible, while meaning becomes harder to locate.
“AI is changing everything to keep everything exactly the same.”
The role of artists, journalists, and writers, then, is not to compete with the model.
As Farid puts it:
“When art and literature simply aestheticize these systems, they reinforce the very structures they claim to critique.”
When cultural production reproduces narratives of technological inevitability, it stabilises power. The task is not to translate the machine into spectacle, but to destabilise the terms through which it is understood - and the terms through which we are understood in relation to it.
ִֶָ𓂃 ࣪˖ ִֶָ🐇་༘࿐
One answer is to stop pretending autonomy exists outside relationships. We are in a symbiotic relationship with AI systems whether we choose it or not. Our data trains them; their outputs shape our desires and sense of the world. To fantasise about total withdrawal is fiction. Total refusal becomes self‑erasure. Total submission becomes capture.
Obligation now lives inside this tension - as a custodianship of what we can still control: the texture of our legibility, the meanings we keep coded for our communities, the patterns we refuse to stabilise.
Privacy belongs to another era. What persists instead is responsibility for collective conditions.
The gaze has already arrived; we are living in an era of post-recognition surveillance, and we cannot step out of frame. What is demanded of us in this era is to unsettle the terms of our visibility, so that being seen no longer coincides with being entirely claimed.
image 1: Mark Farid, Seeing I (2019)
image 2: Mark Farid, Data Shadow (2015)