I’m done giving AI the benefit of the doubt
“Should we ask ChatGPT to tell us a story?”
The boy couldn’t have been older than four, with big eyes that sparkled as his father suggested this absurd variation on story time from the table next to mine. I thought I’d misheard. I glanced over the top of my laptop and my half-written essay mapping the degeneration of cultural creativity against the rise of generative AI; I’d lost the thread of the thesis, and the cursor had been blinking at me from the same spot for several minutes.
The father began speaking into his phone. “Tell me a light-hearted children’s story about a blue fish who’s going on a journey. He also has a friend who is a red crab.”
The boy leaned in earnestly as the loading icon pinwheeled on the screen.
It would be serendipitous—a real-life example of generative AI being used in lieu of creative labor—if it weren’t so damn jarring.
The essay I was working on was inspired by N+1’s excellent polemical editorial “Large Language Muddle.” Part cultural commentary and part call to arms, the editorial rails against the creep of generative AI in society and posits an alternative approach straight out of the Luddites’ playbook. It was empowering. I was so ready to write my own screed against generative AI; I’d argue that it was the natural progression of the focus-group-forged, hyper-managed style of communication so many of us find ourselves confined to upon entering the professional sphere. I’d deride the hellscape that is LinkedIn and the strange pseudo-human, algorithm-serving language it requires us to use.
But back to the father and son. The dad proceeded to read aloud a sentimental story about some fish and his crabby pal; the son leaned over to admire the illustrations (yes, this was one of those subscription ChatGPTs with generative-graphic options).
My eavesdropping skills being lacking, I couldn’t analyze the merits of the story itself, but as soon as I was certain that this man was in fact outsourcing storytelling (or a trip to the library three blocks away or the cost of a children’s book written and illustrated by actual human beings) to an LLM, my brain started making concessions.
My smoldering disdain for AI was ignited before the subway stations became a breeding ground for those uncanny Skechers ads, before Facebook feeds filled with AI-generated videos depicting Holocaust revisionism and shrimp Jesuses, before the damn Gemini star popped up in the corner of my Google Drive. It started as anxiety in response to rumblings from smug technophiles professing that generative AI would transform writing as we know it (disconcerting news for the person who’s wanted to be a professional writer since childhood), and friends suggesting I “try ChatGPT” instead of whiling away hours drafting cover letters for jobs I had no chance of getting (please, don’t take this as a condonation of cover letters or a condemnation of my friends).
Fast forward to now, and the anxiety has flipped to fury. New articles detailing the risks posed by AI chatbots moonlighting as friends, romantic partners, and therapists appear daily, as workers are laid off in droves by vainglorious bosses. Utility bills balloon as electricity demand is driven up by AI-focused hyperscale data centers, each of which annually consumes about as much electricity as 100,000 households. I’ve listened to the Department of Energy use this as a rationale for increased fossil-fuel production and drilling. I’ve watched family and friends fight to appease absurd and arbitrary conceptions of workplace efficiency, their creative muscles atrophying as they outsource to LLMs the few parts of their jobs that once gave them joy.
So, yeah, AI’s the enemy, and yet I kept giving it the benefit of the doubt.
I pored over articles detailing the progress of generative AI. I watched Particle6’s “AI Commissioner” parody video on repeat, asking myself whether my reaction of utter disgust would be the same if I were unaware that the entire video and its “breakout star,” Tilly Norwood, were AI-generated. I read the metafiction flash piece produced by OpenAI’s “new model that is good at creative writing,” which Sam Altman claimed “really struck” him, and was willing to admit it had a couple of decent, if meandering, lines tucked between the myriad clichés and straight-up plagiarism. Shout-out to The Drift’s Max Norman for his thorough and scathing close reading of the piece.
Much of the N+1 article is spent analyzing the glut of first-person essays in which (human) writers chronicle the influence of AI-generated writing on the literary world at large. Dubbing them “AI-and-I essays,” the editorial notes the strange consistencies among these pieces, from their “searching, plaintive web headline[s]” to the third-quarter shift when the writer begrudgingly interacts with the LLM. Inevitably, the essayists are impressed:
“All these points in AI’s favor prompt some nervous reappraisals. Essays that began in bafflement or dismay wind up convinced that the technology marks an epochal shift in reading and writing.”
I should assure you now—the next part of this essay will not be a chronology of my deteriorating mental health as I type prompts into Claude. I’ve done enough ERP therapy in my life to know when to say “no” to the intrusive thoughts.
There is something unnerving about these “AI-and-I essays” and the fealty their authors pay to their subject matter. For all the hay made over the obsequious nature of AI chatbots, maybe we should start investigating our own inclination toward sycophancy. The authors of these essays rarely discuss the actual technological viability of LLMs; their understanding remains that of the bemused consumer. I’m not proposing that my understanding is any better. If I have any knowledge of the technological capabilities and deficiencies of LLMs, it’s mostly a regurgitation of points made by my partner, a self-admitted AI-skeptic data scientist with a background in linear algebra.
This is all to say, the premature resignation among “AI-and-I” authors isn’t based on any technological wisdom. Could it be coming from a nagging sense of loyalty to the journalistic code of ethics? The rise of Fox News and its co-option of the phrase “fair and balanced” sent a lot of journalists and wannabe journalists into a cluster-fuck from which we have yet to recover. I certainly blame those years of high school and college journalism for the anxiety that arises whenever I stake a claim . . . Have I given enough energy to the consideration of alternative perspectives? How have my personal biases contributed to this conclusion? Am I offering sufficient brain tissue to counter-narratives? I used to consider my compulsion to “hear the other side” a testament to my open-mindedness.
Nowadays, I’m not so sure. Whose narrative are we platforming when we suggest that “like, maybe we shouldn’t be so precious when AI pillages our intellectual and artistic property,” or that generative AI is actually a “democratizer of knowledge”?
“Marketing is not destiny,” write the N+1 editors. But it sure tries hard to present itself as such. As I urged myself not to be so quick to disparage the dad who enlists ChatGPT to make up stories to entertain his son, I found myself parroting the very people peddling this technology. Few seem all that afraid of wrongfully over-hyping AI—which makes sense. Billions of dollars have been fed to the AI beast. OpenAI is scooping up defense contracts. Lehman Brothers might not be around in name anymore, but the “Too Big To Fail” mentality is alive and well (we all watched as a deeply undeserving Wall Street was bailed out on the American taxpayer’s dollar in 2008). Bubble or no bubble, betting on AI is safe. Siding with Capital is safe. Siding against our environment and our social wellbeing is safe . . . until it’s not.
“Deny the machine,” the N+1 editors urge.
There’s still plenty of time to stick spokes in the wheels of generative AI. Does this require haranguing a young father for exposing his child to joy-killing, energy-sucking AI slop? Probably not. But I sure as hell am going to write about him. I don’t want to live in a future where children read machine-made approximations of stolen stories against the drip, drip of melting glaciers. I refuse to accept this. I refuse to give AI the benefit of the doubt any longer.

