The AI reader

Is there meaning in the machine-generated?

[Art: Bruce Nauman]

Terry Nguyen, Dirt's senior staff writer, considers ChatGPT’s impact on how we read and write. Read the first part of the essay here.

Roland Barthes famously declared the death of the author in 1967. Over half a century later, we are still at her wake. Barthes believed that the author’s history and intent should not matter to the reader; biographical details are irrelevant to how a reader analyzes or reacts to the text at hand.

“The true locus of writing is reading,” he wrote. But Barthes’ edict took the author’s sentience for granted. Such a trusting presumption can no longer be made with AI. Unlike with images, there’s little evidence within language itself that can clue readers in to its synthetic nature, if sentences can even be considered “synthetic.” With language models like ChatGPT, how should we consider notions of authorship and copyright? All we can know are the words, words that inevitably become compost, absorbed over and over again into the machine.

An older writer once advised me that reading is just another form of writing. Our brains are porous, like sponges. On my best writing days, I marvel at how my sentences cohere. Each one feels like a small miracle, as if someone had planted the phrases overnight and left them for me to harvest. On bad writing days, my mind goes on autopilot and spits out predictable, fill-in-the-blank prose. In these low moments, I’ve started to wonder if I am any better than a word processor or a deep learning algorithm.

Among the literati, the most pressing concern with AI is its effects on what gets published, which is ultimately a question of who we read. The experimental poet Tan Lin regards the act of reading as “a kind of integrated software.” Every day we encounter thousands of words in emails, text messages, and news articles, across advertisements and social media feeds. We may even pick up a book for an hour—and that hour of reading, in our minds, is distinct from the hour spent scrolling Twitter even though, in both instances, our eyes are simply skimming words.

To paraphrase Lin’s observation, the human brain processes, categorizes, and understands language depending on its content, context, and form. We wouldn’t read a cookbook as we would a Sally Rooney novel. AI, however, challenges these readerly instincts. It forces us to recognize the predictability of most things we read, to consider how we’re often not reading as closely as we think.

In our saturated digital landscape, reading has become a form of information control, a mix of “data management and passive absorption.” But the word “read” doesn’t quite encapsulate the ceaseless flow of ambient text that most people ingest upon waking up. On top of that, there’s no efficient way to filter good human-written works from bad machine-generated sludge, although we shouldn’t assume that all AI-produced texts are predictable and poorly written. A glut of bad writing exists online, produced by humans and machines alike. (And good writing, too, can skew somewhat formulaic; syntax and plot structures are “formulas” that provide writers with an accepted blueprint.)

If we can’t take words at face value, then readers turn to assessing the author’s (or prompt engineer’s) intent. Literary discussions online already tend to conflate the writer with their fictional work, and vice versa. A writer’s biography and ideology color the work. With AI, the fixation on authorial intent primarily applies to public language and discourse—how ChatGPT could engender misinformation campaigns or scams. But AI’s impact on an individual’s writing practice seems to be of secondary importance, at least in popular discourse. Conversations about the technology’s misuse have largely neglected to consider the possibilities of its artistic and ethical use.

In February 2023, the science fiction magazine Clarkesworld temporarily closed its submission portal to curb a steady influx of machine-written stories. That month alone, Clarkesworld received about 700 “legitimate” submissions and 500 that editors assumed were machine-generated, owing to their poor writing quality. If left unchecked, the editors said, the number of machine-written submissions would soon outpace those written by real writers. Their job would start to resemble content management instead of actual editing.

There is, though, no easy way to filter the fraudsters from the “real” writers. The submitters aren’t bots. They’re AI enthusiasts, “driven in by ‘side hustle’ experts making claims of easy money with ChatGPT,” wrote Clarkesworld publisher Neil Clarke on Twitter. Their efforts wouldn’t necessarily be deterred by a CAPTCHA. The least labor-intensive solution, ironically enough, would be to program a tool to comb through the slush pile: use AI to flag stories with AI-generated quirks. But that alone wouldn’t be a fail-safe fix. A piece could be partly written by ChatGPT and significantly edited to avoid detection; it could contain both human and machine writing. Strange and unfamiliar turns of phrase from ESL writers could also confuse the machine.

The Clarkesworld incident, among others, has colored public opinion of “AI writing.” In response, several literary magazines, like Asimov’s and Fantasy & Science Fiction, have amended their guidelines to prohibit any work produced with the assistance of AI, however minor. Anyone can be a creative writer with the help of ChatGPT, so an outright ban is an attempt at maintaining the integrity of the craft. Faced with a flood of AI submissions, the knee-jerk reaction is understandable but disappointing. It disguises the technology’s history and disavows its potential.

Since the mid-20th century, artists and writers have played with early programming languages like FORTRAN and COBOL to generate poems and fictive texts. OuLiPo, an avant-garde group of mid-century French writers, championed writing techniques that emphasized linguistic rules, chance, and/or mathematical constraints. These language experiments are analog predecessors to the large language models and natural language processors we have access to today. At universities like Brown, NYU, and MIT, the digital humanities and digital language arts have a decades-long history as fields of academic study. There are automated writing residencies, experimental journals, and organizations devoted to championing forms of electronic and networked literature. In recent years, genre fiction writers who self-publish as a side hustle have adapted AI tools like Sudowrite or Jasper to produce books more efficiently.

To discount this tradition of literature, or to exclude it from mainstream literary conversation, is short-sighted. Stubborn resistance to LLMs is futile if their widespread adoption is inevitable. It would be akin to maintaining a longhand writing practice when most people are typing on the computer. A modern writer should learn both. And increasingly, the modern writer is expected to function like a high-level text generator, programmed to be predictable and ever more productive.
