On pattern-matching (and breaking)
The CEO of OpenAI, Sam Altman, revealed this month that the company had trained a new artificial intelligence model that is "good at creative writing," and, predictably, after he posted the model's handiwork, everybody made fun of it.
we trained a new model that is good at creative writing (not sure yet how/when it will get released). this is the first time i have been really struck by something written by AI; it got the vibe of metafiction so right.
— Sam Altman (@sama) March 11, 2025
PROMPT:
Please write a metafictional literary short story…
While I understand the temptation to dunk on AI (and, more alluringly, to dunk on Sam Altman), I think that evaluating any AI's creative output is mostly a dead-end—knowledge of the artwork's origin colors our criticism, and, more disappointingly, blind reading studies reveal that people have genuinely bad taste (but you don't need a scientific model to prove that).
That said, I am only human, so I gazed deep into the future's eyes and found it not bad, not good, but so clumsily borrowed. When the OpenAI model described "Thursday" as "that liminal day that tastes of almost-Friday," what stood out most was a recency bias—the liminal space aesthetic became popular in 2020 during the initial COVID-19 lockdowns and, five years, too many Twitter accounts, and infinite Instagram posts later, "liminal" as a feeling—and certainly a word—is trite.
In practice, this is a big ol' "no shit" moment because every single piece of AI writing is, in a sense, borrowed. The large language models that power these platforms have "learned" to predict endless chains of words by digesting enormous volumes of data during training and finding the patterns that emerge. And since all of that data is just human writing, some mild form of plagiarism is inherent.
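To make "finding the patterns that emerge" concrete, here is a toy sketch of next-word prediction by counting which word tends to follow which in a tiny made-up corpus. This is only an analogy, and a loose one: real LLMs learn statistical patterns with neural networks over billions of documents, not raw bigram counts, and the corpus below is invented for illustration.

```python
import random
from collections import Counter, defaultdict

# An invented, war-themed "training corpus" for illustration only.
corpus = (
    "the war touches everyone the war consumes everyone "
    "the war is plural"
).split()

# Count, for each word, which words follow it and how often.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # "war" follows "the" three times, so it wins
```

A model like this can only ever re-emit arrangements it has seen, which is the sense in which "borrowed" applies; the open question the rest of this piece chases is whether a vastly scaled-up version of the same idea escapes that limit.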
But human writing isn't dissimilar, theoretically. We just learn by reading what we (as a species) have written and then we piece things together in our own ways. So, what separates a large language model's pattern-matching from our own?
One answer, I think, can be found in "Gravity's Rainbow."
There's a remarkable scene between two characters in the novel, Jessica and Roger, who are trying to carve out a "normal" life for themselves away from World War II, but who keep being reminded of the war's presence by local rocketfire. Tucked away in a little house in a depopulated neighborhood, Jessica's thoughts are described:
"Until something falls here, close enough to matter, they do have their safety: their thickets of silverblue stalks reaching after dark to touch or sweep clouds, the green-brown masses in uniform, at the ends of afternoons, stone, eyes on the distances, bound in convoy for fronts, for high destinies that have, strangely, so little to do with the two of them here . . . don't you know there's a war on, moron? yes but—here's Jessica in her sister's hand-me-down pajamas, and Roger asleep in nothing at all, but where is the war?
Until it touch them. Until something falls."
I genuinely believe that there's some form of divinity happening in the words "Until it touch them." The four words insinuate that "the war" is, well, plural: grammatically, a singular subject "touches" you, but plural subjects "touch." This feels like the kind of work that can't be predicted or generated by a machine because it breaks the standard rules of writing. And I'm not equating rule-breaking with "good" writing, but, for a novel that is obsessed with the impact of war, these four words masterfully expand our basic understanding of the concept. War isn't a thing or an object or a period in time. It is many and it consumes.