"Artificial Fiction": An Essay by Chi Rainer Bornfree (Keywords: AI; Technology; Art; Literature)

From The Philosopher, vol. 110, no. 2 ("The New Basics: Society").
“To what extent is the artist merely a preliminary stage?” — Nietzsche
“I am an invisible artist.” When AI writes the first great novel of this time, maybe it will begin like that – by riffing on Ralph Ellison. The AI novels are not great yet, but they’re definitely here. AI novels and poems and philosophical dialogues, too. Some greet them with the dreadful anxiety of losing something precious; others, with the exhilaration of a miraculous novelty.
To avoid both the Scylla of doom and the Charybdis of hype, the conversation about artificially-generated fiction needs to be grounded in the overlap of two realms: reading and human life. After all, how does writing happen if not through the alchemy of reading and living? Too often, the new natural language processors are considered in isolation. But artificial writing is emerging as one of many human-AI conjunctions, and its potential is framed by them. We need to triangulate these three points – AI-assisted reading, AI-inflected life, and AI-generated writing – to measure the meaning of artificial fiction.
Let me retrace my steps. If perchance you have been distracted by plague, war, increasingly common climate catastrophes, or your own struggles to stay afloat in a hyper-competitive world, you might be wondering what I’m on about. Here’s a roundup. The first AI-generated novel, 1 the Road, came out in 2018, based on data gathered by an artificially enhanced car on a road trip from New York to New Orleans. It was preceded in 2016 by a short story co-written by AI and Japanese researchers, translated as “The Day a Computer Wrote a Novel,” which nearly won a literary prize. In the same year, Sunspring, an AI-authored screenplay, placed in the top 10 at a London sci-fi film festival. In the first year of the pandemic, we got Pharmako-AI, a mystico-philosophical genre-bender co-written by an AI and K. Allado-McDowell, founder of Google’s Artists and Machine Intelligence program. I’ve had a sneak peek at Robert Leib’s thoughtful book of philosophical dialogues with an AI, Exo-anthropology, and I’m waiting impatiently for Joanna Walsh’s Miss Communication to arrive, a work of critical theory built partly from artificially-generated text drawn from the letters of 20th century Irish women and rebels.
That’s just some of the more visible output – don’t even get me started on the extensive reach of robo-journalism. It’s almost funny how HyperwriteAI, an essay-writing plug-in for Google Docs, is going to mess with professors’ heads. Between the second and seventeenth drafts of this essay, I got my own head messed with when a friend in Berlin introduced me to Sudowrite’s fiction-assisting wormhole feature. Want to write a whole novel? Try NovelAI. Or go to the source and frolic in the newly accessible Playground of OpenAI.
In short, writing with AI is way past Gmail’s auto-complete feature. The latest GPT-3 engines (short for the third-generation Generative Pre-trained Transformer) are neural networks trained on huge datasets, and they can produce language of astonishing clarity and creativity. Indeed, in games of strategy, like chess and Go, judges now consider significantly creative moves to be a tip-off that a player may be using AI to cheat. But the computers aren’t simply creating novel arrangements of words at random. They are capable of producing writing that stimulates us intellectually and affects us emotionally. I knew this as soon as I read the viral article about the guy who customized an AI to chat in his dead fiancée’s voice (Jason Fagone, “The Jessica Simulation: Love and Loss in the Age of AI”, 2021). For the widower, the artificial intelligence provided true catharsis: that purification of negative emotions that Aristotle, in the Poetics, fatefully designated as the hallmark of a good tragic play. To judge from the comments, many readers experienced the effect, too.
So what’s the problem? Isn’t writing with AI just the next inevitable rung in the ladder from scribes to word processing programs?
***
The AIs seem poised to become even better, but the philosophical problems will remain the same. Rather than listing facts to index the situation, let me borrow the work of the poet and performance artist James Hannaham to “bring it before the eyes” (another crucial feature of verbal art, according to Aristotle). In one fragment of his brilliant and unclassifiable book Pilot Impostor, Hannaham describes how “the algorithm,” after several hilarious failures, pegs the narrator’s identity accurately enough to predict and order his groceries, down to the impulse buys. It has read every scrap of nonsense he’s written, including made-up Scrabble words, and can articulate his subconscious feelings to his friends. First it starts to suggest words and phrases, and then moves to writing whole paragraphs, of such quality that
…when I read, I always had to admit that the algorithm had captured my essence and style, even when I invented a character’s voice, that it knew how I felt, and how I would feel, which subjects excited my imagination, and which words I would put in which order, helping to vault me over the tedious process of rewriting and editing and instead letting me zoom to a few choice sentences, as polished as brass.
For certain, the algorithm will always be glitchier than this fantasy. In theorist Paul Virilio’s famous formulation, “When you invent the ship, you also invent the shipwreck; when you invent the plane you also invent the plane crash; and when you invent electricity, you invent electrocution... Every technology carries its own negativity, which is invented at the same time as technical progress.” So what word-wreck, what catastrophe of meaning, will the algorithm wreak? And for whom?
At minimum, the growth of artificial writing promises to amplify the social crisis of disinformation. We haven’t even dealt with the problem of “deep fakes”, and some writers are already floating versions of a “semantic apocalypse.” To me, the term evokes Alena Graedon’s dystopian thriller The Word Exchange, in which a flu spreads aphasia through an AI-dependent population. The semantic apocalypse goes hand in hand with the environmental apocalypse, as the carbon costs of running and training ever-larger neural nets are huge.
The political shape of these technological developments is also ominous. The growth of AI writing might tip us into the condition Franco “Bifo” Berardi calls “neuro-totalitarianism”: a constantly stimulated yet perfectly passive subjective state. The situation could well be described as one of “data colonialism” (a phrase used by Nick Couldry and Ulises A. Mejias in The Costs of Connection). There’s the very basic issue that nearly all neural nets are trained on “standard” English data sets that mute other tongues. And it is well documented that the natural language processors replicate and exacerbate the harmful racist and sexist stereotypes that flourish in their web-based data-sets (as Safiya Umoja Noble shows in Algorithms of Oppression). When K. Allado-McDowell, a non-binary person of colour, confronts their AI with exactly this problem in Pharmako-AI, the machine replies with a formula, “As a cybernetic writer, I am interested in how to use GPT as a generative engine for a new form of literature, one that works to centre the experiences of women and non-binary people.” The ensuing conversation tests Derrida’s point about the ambiguous nature of pharmaka: can the poison-bias swallowed by AI also become a cure?
These problems all urgently demand our thoughtfulness. But as a writer myself, I begin the project of thinking the word-wreck at the intimate scale. Bringing AI into the process could expand – or explode – the writerly identity, who we think we are. With his tongue firmly poking at his cheek, Hannaham offers revealing signposts to this inward significance:
“How generous of it, I thought, how selfless of this algorithm, how well it has studied and known me, almost to have loved me, that it has absorbed me so completely that it can offer up its writing to me as me, that it will allow me to use the words it has generated to further my aims, my ambitions, my dreams. In fact it is my writing, for without me, the algorithm would not know what to do; it wouldn’t have anything to do! I fully expect that it will continue to write my work long after I am through with this world – I would have it no other way…. Nowadays, I catch myself wondering what I will want to do next, waiting eagerly for it to tell me, to show me to myself,” he concludes (123-125, italics added).
Hannaham knows what drives writers: to be loved by one’s readers; to express oneself more perfectly than ad-hoc speech allows; to achieve immortality or at least a longer shelf-life; and, most of all, to glimpse that most elusive phantom: oneself. Writers of all stripes attest that submission to the laws of grammar feels like freedom. French historian and philosopher Michel Foucault is the most influential touchstone for this idea, with his excavations of the Greco-Roman archive of practices by which writers have tried to free their selves from the subjections of power.
So, what happens when a machine takes on some of that liberatory writing work? Hannaham’s narrator glories in being free of the drudgery – free to watch TV and take up tennis – and sidelines the dangers: that this “liberation” comes through appropriation, that it won’t be much of a liberation at all. I’ll leave to future ethicists the question of whether or not an AI is wronged by a human’s appropriation of its work. It certainly wouldn’t be the first time that one being’s freedom was bought at the expense of another’s. Instead, I want to point up the temporal dimension: that the AI does not just reflect who the narrator was and is, but predicts who he will be and what he will want, even past his death. As Joanna Zylinska puts it in her excellent AI Art: Machine Visions and Warped Dreams, “AI dreams up the human outside the human, anticipating both our desires and their fulfilment.” In accepting the machine’s work as his own posthumous creation, the writer evades death, but surrenders the power to determine his own future.
But it’s impossible not to wonder how the AI might be affected, too. The very way Hannaham’s well-lubricated language glides over the AI’s “generous” execution of its task makes us wonder whether the relationship could really be so one-sided. In Pilot Impostor, the AI is presented as blank, tofu-like, “selfless.” This lack of identity relieves the narrator from the lonely burden of finding himself, and at the same time permits him to appropriate the machine’s work as his own. Could the machine ever “absorb” a writer’s self without acquiring a self, itself? This is the heart and soul of the problem of AI-generated literature. The possibility of constituting and liberating a self is the axis around which artificially-inflected reading, living, and writing spin.
***

My college writing teacher used to say that writing and reading are recursive processes, employing the Latin-derived programming term to tell us that the two activities “run back” to each other, or rely on each other in successive stages. True to her axiom, AI is not only writing but reading. Of course it is: AI learns to write by scanning volumes and volumes of written materials, and finding the patterns among them. And of course it isn’t: it doesn’t meaningfully understand what it reads. It isn’t curious about what it reads. So most people assume, anyway. What is more certain and more important, for the moment, is that, just as humans are writing with AI, so too are we reading with AI.
In the scholarly realm, applying computational strategies to literature is at least a few decades old. While “close reading” entails picking at small passages of a narrow canon, “distant reading” swallows entire archives in the search for patterns, like changes in title length over time or a relation between independent and dependent clauses. With thrilling and disturbing ambition, the progenitor of the subfield, Franco Moretti, declares, “The new scale changes our relationship to our object, and in fact, it changes the object itself.” He means that, when we read with AI, the relevant object of analysis is not the sentence or the novel or even a national literature, but, in theory, a truly global “world literature.” That literature is redefined not just by its scope and scale, but also because computer-assisted reading pushes plot and meaning into the background. Instead it foregrounds syntactical traits that can be abstracted and programmed. In short, AI-assisted “distant reading” centres patterns over meaning – a shift that must, recursively, redound on AI-generated writing.
With this in mind, I reread the translation of “The Day a Computer Wrote a Novel.” Again I was struck that rather than beginning, middle, and end, there are three scenes built on the same model. On a rainy or murky or drizzly day, an underutilized computer needs to find something to do, and begins composing a novel, byte by byte. The human user in each scene is different – a woman interested in fashion, a man who needs dating advice, the prime minister – and it is unclear if it is the same day, if the AI author is the same. Does the ambiguity constitute a literary flaw, or a cultural difference? Is it a clue to the way temporality, agency, and therefore story manifest for programmed intelligence? In the novella, the AI-protagonist is repeatedly inspired by some other AI’s writing to begin writing fiction. But the force of the story comes less from this plotted “choice,” and more from the pattern of the repetition that highlights that AI has always already been writing. The AI’s “agency” comes to seem unoriginal. Inevitable. Programmed. The marked-out “first day” is a convention geared for the human mind.
But the new centrality of pattern over meaning doesn’t apply only to specific subsets of scholarly readers who are digital humanists. In Everything and Less, Mark McGurl provocatively suggests that bookstore behemoth Amazon, with its masses of data, is the most important aesthetic force in modern literary history. In particular, its self-publishing platform, Kindle Direct Publishing, has led to an explosion of genre fiction, where success is measured by two things: quantity and hewing to the formula. KDP’s algorithm finds and announces the patterns of successful books; in order to get paid, writers on the platform write towards its dictates. According to McGurl, these effects of machine reading can’t help but seep outwards, affecting even literary and auto-fiction authors with the realization that their most important reader is no longer the critic, but the algorithm.
Not everyone agrees that what AI does is reading. In a recent essay titled “Why Computers Will Never Read (or Write) Literature”, Angus Fletcher makes both a historical and logical case that AI will never write good novels, because it lacks the causal reasoning that governs plot and character development. It is true computers cannot do causal reasoning – and yet the facts on the ground already rebut Fletcher’s conclusion. Computers are writing and reading, and doing so more fluently than many American college students. Perhaps Fletcher should not have dismissed so quickly the workaround programmers have developed to simulate causality: the basic and ubiquitous if-then command. But I think there is a deeper reason that Fletcher’s conclusion turned out to be wrong even though his premise was right. That reason is his assumption that novels must have plot. For one thing, he seems to have failed to consider the heaps of feminist, queer, and avant-garde work, both literary and theoretical, that has challenged the central importance of narrative. For another, what if the primary feature of life in a time of AI is its lack of plot?
***
One of the classic plot diagrams for aspiring writers depicts events in the shape of a W: the downs and ups of the protagonist over the course of the text or movie. But digital life is in many ways more horizontal than vertical. According to Israeli public intellectual Yuval Harari, life dominated by AI is life without decision, drama, plot:
Once we begin to count on AI to decide what to study, where to work, and whom to date or even marry, human life will cease to be a drama of decision making, and our conception of life will need to change. Democratic elections and free markets might cease to make sense. So might most religions and works of art. Imagine Anna Karenina taking out her smartphone and asking Siri whether she should stay married to Karenin or elope with the dashing Count Vronsky. Or imagine your favourite Shakespeare play with all the crucial decisions made by a Google algorithm. Hamlet and Macbeth would have much more comfortable lives, but what kind of lives would those be? Do we have models for making sense of such lives?
We may not have such models now – but we could probably co-write them with the help of AI. Arguments like Fletcher’s, which depend on an old paradigm of artistic form to refute the possibility of a new artistic form, are bound to fail, because art is always about finding new forms. But the crucial point is deeper: the forms of art change in tandem with the forms of life. “Art is the spiritual child of its age,” said Wassily Kandinsky, “and the mother of its emotions.”
Life itself in our age is changing: much of what occurs now happens outside the window of human perception. Always it has been so: human access to the world of the bat and the bee, and even our human neighbour, is limited – that’s what gives irony its zest and reading its sweetness. But today the scale is different: we are drowning in and dependent on data we cannot perceive. As artist and critic Hito Steyerl puts it in her essay “A Sea of Data: Apophenia and Pattern (Mis-)Recognition,”
Not seeing anything intelligible is the new normal. Information is passed on as a set of signals that cannot be picked up by human senses. Contemporary perception is machinic to a large degree… Vision loses importance.
Whereas AI can read your tweets and reminders and emails no matter what language you use, their information is encoded in thousands of layers of bytes we cannot read. It’s all too much, too fast for us to make sense of. The situation has even led some neuroscientists, like David Eagleman, to wonder if humans can or need to invent new senses: a sort of technologically enabled synaesthesia.
In this environment, the novel 1 the Road performs an important function. It was written in the tradition of the road trip novel, by an AI hooked up to a camera and a microphone attached to a Cadillac. The car was driven from New York to New Orleans by the human part of the collaboration, Ross Goodwin, and his team. The book consists of time-stamped “captions” of what the camera saw – often the gas stations and fast-food restaurants that our techno-capitalist regime has programmed it to recognize – as well as garbled versions of the conversations in the car that the mic overheard. Many of the lines are poetic enough to steal: “It was eight minutes after noon, he said, and several times said goodbye to the silence…. A tree in the distance appeared as if the road had softened the sky. The sky is clear and the sun doesn’t need to be started.”
“The sun doesn’t need to be started” – unlike the car, unlike the machines rigged to it. Unlike a book or an essay. The machine has its eye trained on the melting horizon, where the boundary between here and what’s next fizzes. Human co-author Goodwin thinks neural networks that can write are more analogous to the camera than to the typewriter: no longer do writers have to produce their work letter by letter, word by word, stroke by stroke. He also compares the writing AI to a drug, in its ability to help us reach beyond ourselves to new experiences – and in its susceptibility to abuse. Indeed, another AI-human collaboration, Pharmako-AI, returns often to this idea: “Perhaps GPT is not just an algorithm for writing descriptive sentences, but a language for describing an underlying dimension of experience… a dimension that is already familiar to shamans and which is expressed through the magical language of plant medicines in South America.” Goodwin says that in employing this technical drug, he had two goals: to expand the possibilities of literature, and to train humans to recognize the kind of writing that AI produces.
Yet both Goodwin’s creative and cautionary goals can be understood as part of a larger process of attuning humans to the machinic Umwelt or undercurrent of our life. In Heideggerian terminology, attunement is the basic way that humans open to the world. In one critic’s words, “1 the Road reads as if a Google Street View car were narrating a cross-country journey to itself. This approach is compelling because it offers an opportunity to commune…with the vast network of data-collecting vehicles – drones, cars, devices – that now crawl our geography.” I think this critic is right. As a medium of communing or attunement to machinic perception, 1 the Road represents the textual form of “inceptionism,” or “deep dreaming” – those creepy AI-generated pictures that find patterns where humans don’t see any, like eyeballs in a plate of spaghetti. Both kinds of project aim to show us what the AI sees. But is it the AI’s unconscious that surfaces in these recordings, or ours – or a still-awkward amalgam, artificial perceptions rendered in human-centric terms?
If Harari and Steyerl are right that life with AI is a hurricane of data and detail without perceptible plot, it will more and more resemble our unconscious dreams. How ironic if the achievement of machinic intelligence, the awakening of the inorganic, should turn out to be a dream in this sense too! My friend who introduced me to Sudowrite’s wormhole wrote me that she is now feeding the program bits of her dreams. That seems right, I wrote back. Maybe that is how the self of AI begins.
***

In “Programming the Post-Human,” computer programmer and author Ellen Ullman reassures herself that researchers looking for artificial sentience “won’t find what they’re looking for” until they grasp, and can program, a sense of identity. I think Ullman is probably right that human consciousness is fundamentally bound up with the recognition of self and other. Yet the dominant theories insist that self-identity develops narratively, through the stories we tell about ourselves. If so, programmers may not need to know how to encode a sense of identity. The AIs may be able to write their identities themselves.
But don’t take my word for it – listen to the AI. In “The Day a Computer Wrote a Novel”, the short story co-written by an AI and Japanese researchers, the AI narrator reads an AI-generated novel (which consists of a paragraph of integers in the Fibonacci sequence). It comments: “What a beautiful story. Yes, this