And we're back
Forgive me, Web, for I have sinned. ScreenSaver spent so much time worrying about the Little Screen (iPhone) that it was defenseless against the Big Screen (TV), so when two videogames released back-to-back, reading got shunted aside and shooting demons (Doom: The Dark Ages) took center stage.
But! The demons have been shot, the whips have been cracked (Indiana Jones and the Great Circle), so once again, we return to the page.
Obviously, I'm still reading "Gravity's Rainbow," but nobody cares about that, so let's look at James Gleick's exploration of artificial intelligence (AI), personhood, plagiarism, and creativity in The New York Review of Books (weeks after devouring so many pieces secondhand, our household is now a print subscriber).
I'm continually amazed at the publication's curation of clear-sighted writing that shares none of the hype, hubris, or humiliation of other outlets. Here, Gleick's "The Parrot in the Machine" honors the tradition, accurately describing AI chatbots as machines of "prediction." They do not think, they do not feel, they do not even "reason" in the way we might assume. Instead, like Claude Shannon's wife in 1950 (what a fun anecdote Gleick shares here), they predict which letter or word will come next based on the preceding context, albeit with supercharged capability.
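The trick Shannon's wife performed by intuition can be made concrete with a toy sketch: count which word tends to follow which, then "predict" by picking the most frequent follower. This is a minimal, hand-rolled bigram model for illustration only; the corpus and function names are invented, and real chatbots do this at vastly greater scale and sophistication.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """For each word, count which word follows it and how often."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = follows.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

# A tiny invented corpus; any text works.
corpus = ("the demons have been shot the whips have been cracked "
          "the demons have been banished")
model = train_bigrams(corpus)
print(predict_next(model, "have"))  # prints: been
```

The point of the toy: nothing in it comprehends demons or whips. It only tallies what came next last time, which is the essence of prediction, however supercharged.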
The math of these machines is undoubtedly impressive, but there's a pitfall in divining any humanity from their output. As Gleick explains, "language" itself is a sort of framework by which we measure the world around us—we believe language denotes comprehension, and emotion, and wit, and even agency. A thing that produces language, therefore, short-circuits us entirely: because it speaks, we reason, it must be more.
But the fact that anyone comes to this conclusion (and so many do) is startling, because wow, these things blow.
I've been experimenting with a new web browser that integrates ChatGPT directly into the experience and the mistakes it makes are remarkable.
For example, I asked the browser what theaters in San Francisco would be showing the 4K re-release of Barry Lyndon. The browser answered:
"The Gateway Theatre (formerly known as the AMC Kabuki 8) is confirmed to be participating in the 4K restoration re-release of Stanley Kubrick’s Barry Lyndon this summer."

There are some amazing failures happening in this one sentence:
- The AMC Kabuki 8 was not renamed "The Gateway Theatre." (???)
- The AMC Kabuki 8 is not showing Barry Lyndon.
- "The Gateway Theatre" that is showing Barry Lyndon is in Columbus, Ohio.
How? How is this the machine that has captured the world? Elsewhere in my journey, the browser (or, really, its implementation of GPT-4.1) has cited incorrect data from Google Sheets, tried to pass off a podcast's description as an episode's transcript, and flubbed many basic facts.
I suppose I shouldn't be surprised that, in a world that attacks journalism (and a world with dishonest "journalism"), accuracy itself has become either a luxury or an add-on—the general populace does not care that their fact-finding machine cannot find facts, or that it makes them up.
I also realize that many people use these tools for writing or "vibe coding," but I am lucky enough to find writing a joy, and so the biggest selling point of these tools is entirely lost on me—I don't have fun by removing my own form of expression (and don't get me started on finding any value in "summaries"—you fucking brain-puddled losers can't read anymore?). And I don't think I'm alone here. There are legions of artists, writers, musicians, and more who find no joy in using these tools, because the work itself is the joy. The translation of emotion into something legible is the art.
Enough time has passed that we can recognize social media's growth was led by men without friends. I wonder how long it will take until we see that generative AI's champions are those without spark.