As TV Writers Strike, US Media Uncritically Echoes Film Studio Execs’ Bogus “AI Writer” Hype
No one knows what Generative AI will look like in 10 years, but for now, the idea of “AI” replacing high-level screenwriting or journalism is total vaporware.
One of the most difficult tasks for any media critic is to discern and assess the difference between the reality of a threat and the inevitable overhyping of said threat by powerful actors who benefit from getting other people to believe the hype. From crime to terrorism to disinformation, the threats posed by such malignant forces—forces that are very much real and a fact of contemporary life—are often inflated, misdirected, or exaggerated for cynical ends by those in power.
Such is the case with the recent discourse about generative AI and its impact on journalism and creative industry jobs like TV and film writing. Echoing the threats made by Hollywood studios to striking screenwriters with the Writers Guild of America, write-ups in The Washington Post, The New York Times, AP, Rolling Stone, CNN, Wired, and dozens of outlets all frame the threat of “AI scabs” as a real and present danger to creatives in these fields.
But, as I will argue in this column, I’m not convinced labor reporters and activists should be so quick to accept this premise. While the upside of integrating ChatGPT and other large language models (LLMs) into everything from software engineering to military applications is, in many ways, here and now, many of the technology’s boosters have gotten way ahead of the evidence when making bold claims about what it is currently—and will soon be—able to do in the field of creative writing.
Let’s begin by examining a typical claim thrown out in these articles, as exemplified by this quote from “AI consultant” Dylan Budnick in Rolling Stone regarding the topic of AI and the ongoing WGA strike negotiations:
“Studios can save a buck, wrangle creative control away from the writers to please advertisers-funders, and focus on editing a prewritten script instead of dealing with a range of voices and takes from a writers room.” Most elements of a screenplay “can all be easily spit out by the models used by OpenAI,” Budnick claims. “The job then becomes reading and editing, which is easily done by whoever has creative control.” Given a prompt such as: “Write me a movie about Spider-Man meeting Batman, include stage directions, suggest actors, soundtrack, etc. Write it in the style of a detective noir film,” the model can generate a roughly 50-page script.
Except—and this is important—this isn’t at all true. I emailed Budnick asking him to clarify what he meant when he made that statement. (Which elements of a screenplay “can all be easily spit out” by AI? Which elements can’t? What are the standards for evaluating the passability of an AI-generated script?) I also asked him to share with me a 50-page, AI-generated script that he thinks could, by implication, work to replace writing labor. He didn’t do so, instead telling me “it's a matter of opinion on whether or not the script would be ‘good enough’ on first pass.”
But it’s not a matter of opinion, really. And it’s okay to say it’s not. Using current tech, there is no case in which ChatGPT could produce anything remotely close to a usable script. Most “AI” pufferists will concede this point upon pushback. But they always follow up by insisting AI can still reduce net writing labor hours by creating content a human writer could then “modify,” the general idea being that labor has been saved. “You can modify the initial prompt to be whatever you'd like,” Budnick insists, citing a recent example of a ChatGPT-assisted South Park episode that aired in March.
But this is also, crucially, not true, and yet it’s becoming the go-to compromise line: Clearly writers are still important and human input will still be needed in the future, but AI will streamline the process, they say.
Many art shows throughout the decades have displayed paintings by chimpanzees—indeed, their paintings have sold for many thousands of dollars—but no one has ever seriously argued that chimpanzee painting is anything more than a fun gimmick, or that chimps will one day replace human painters. The South Park episode in question was a one-off gimmick (and the ChatGPT co-writing credit may have been a joke). It riffed on the 6th-grade-level generic speak ChatGPT produces, but there’s been no public evidence that using this technology reduced the net labor that went into producing the episode.
The belief that having humans simply punch up or “rewrite” ChatGPT outputs will reduce net writing labor displays a fundamental ignorance about the process of high-level writing. Anyone who’s been an editor for a day will tell you that rewriting bad writing takes longer than simply having a competent writer write it themselves in the first place. Feeding writers “AI”-generated scripts that are filled with bloodless, generic cliches, that can’t use metaphor, that lack scene context, nuance, or fidelity to structure, that have no originality or humor or spark, and asking them to rewrite said scripts simply adds superfluous steps to the process.
There’s another word for rewriting: it’s called “writing.”
Thus, in the end, no net writing labor is saved by incorporating “AI” into the process. Indeed, as many writers on the picket line have made clear, net labor is oftentimes very much added. Certainly, when it comes to formulaic writing genres—pro forma emails, marketing copy, etc.—the tech can work just fine. That’s because these forms of writing are not creative, but technical and generic by definition, and this type of labor could very well be replaced by “AI.” But this has little-to-no bearing on the WGA strikes, screenwriting, or writing in adjacent fields like journalism, novel writing, etc.
As Lauren M. E. Goodlad & Samuel Baker wrote in their excellent essay, “How Humanities Can Disrupt AI”: “Today’s machine ‘intelligence’ bears little resemblance to the human thought processes to which it is incessantly compared, both by those who gush over ‘AI,’ and by those who fear it.”
But one day soon it will, based on current growth metrics, we are told. Maybe? Maybe in 10 years? I’m not arrogant enough to speculate on the trajectory of technological development years from now, but we have to engage with the capacities of the tech in question as it currently exists, and we have to be a lot less credulous, taking into consideration the obvious incentives for capital to push out bullshit claims about what it can and cannot do.
When one gets to the specifics of what WGA negotiators are demanding, contract provisions concerning AI are largely a CYA move, because who knows what the tech could bring in five to 10 years. The union has no reason not to put language in the contract covering this, because even if AI has a 1% chance of actually accomplishing the magical outcomes ChatGPT Millerites claim, it’s still worth having this protection. That WGA is putting “AI” protections forward in contract negotiations is not, per se, evidence that the threat, as it currently stands, is as “real” as the studio bosses say it is. It’s simply evidence that a meaningful percentage of writers are concerned about how “AI” can or will be deployed in their industry if these protections are not put in place, or that they see little reason not to hedge the risk, however remote, and set the conditions now for how “AI” could be integrated into the creative process.
Hollywood bosses have the opposite incentive, of course. They have every reason to hype this trend to intimidate labor, to make workers seem replaceable, or to justify cutbacks that were planned for long before the latest version of ChatGPT dropped.
The mass psychology phenomenon of ignoring how far away we, in the present, are from a future in which this tech will be actually writing creative TV content is understandable. No one wants to be David Letterman in that infamous 1995 clip where he’s mocking the idea of the internet to Bill Gates.
At the same time, one also risks being the guy in 1966 saying we’ll be playing golf on Mars by 1990. Many writers see this gap between what’s being advertised and what’s actually coming out of the latest ChatGPT, and will openly say it here and there. But being skeptical about the actual risks of AI scabs isn’t a very popular position because most creative types are, understandably, hesitant to comment too much on complex technical issues they’re not familiar with. But I’ve also noticed—and I think this is the more compelling social dynamic at work—that writers don’t want to appear overly precious or self-indulgent about the sanctity of their own line of work and the value of the labor they perform. But they, and other high-level writers, really should be, as gauche as it may seem, because they and their work are more valuable than their bosses want them to believe.
Doing what they do is incredibly difficult and involves a galactic level of nuance and imagination that LLMs are simply not equipped to match.
Substance aside, there’s the meta question of: Even if “AI” can’t replace writers, will bonehead studio execs, high on the smell of their own bullshit, try it anyway? One WGA negotiator, Adam Conover, told Deadline last week, “I personally think the technology is completely overhyped and oversold. I don’t think you’ll ever truly be able to replace the work of a writer but I don’t put it past these companies to try and cook up some cockamamie scheme where they have an output text and hire writers to rewrite it or something like that. I think the public will hate it. I think it’d be a financial failure, but I think they might try and they could hurt a lot of writers by doing so.”
This strikes me as a far more reasonable concern. If history has taught us one thing, it’s that those in the C-suite, the suits without an original or creative bone in their body, may very well ignore reason and lay off a bunch of people based on hyped-up trends that will ultimately not go anywhere in terms of quality and substance. From the brains that brought us vertical screen streaming start-up Quibi, anything is possible. But this seems like even more reason for those in the media to push back on the premise that this tech will, in any way, actually write anything that’s any good, or that it will be integrated into the process to reduce net writing labor. We have absolutely no objective reason to believe that will happen, beyond vague claims and providential religious-like reasoning.
To be fair, other WGA reps, such as writer Ben Schwartz, are more nervous about the tech, and their concerns are worth hearing out. The writers on strike are far from having a unified position on the issue. But the substance of the tech in the present and short term is something I do think we ought to be sober about, if only because The Current AI Narrative does serve a broader political project of disciplining creative labor in a host of verticals. Take, for example, a recent memo sent out by Nich Carlson, global editor-in-chief of Insider, that reads like a bizarre form of passive-aggressive critique. The memo is ostensibly about being more efficient, but casually concedes that using “AI” for editing and writing will just create a bunch of extra work, fact-checking, liability exposure, and rewriting that will take more time than just having humans do the writing in the first place.
The goal seems to be more about sending a message to recently laid off and soon-to-be laid off journalists, letting them know they're replaceable pieces of shit who shouldn’t bother unionizing or asking for a better contract. Even if the tech doesn't produce the promised results—which I clearly don’t think it will in the short term—the specter of it serves the function of psychologically taking high-level writing labor down a peg.
Silicon Valley mystics insist massive breakthroughs are just around the corner. They keep talking about how AI has passed the bar exam or passed a GRE test. Yes, if I could Google all the answers and store them in an endless memory bank, I too could pass these examinations. The point of these exams is to verify that humans can retain and recall rarefied knowledge, not that they can search large data sets and mimic the correct response. It’s an impressive feat of human engineering, but not the creation of any intelligence, much less an intelligence that can apply the knowledge superficially displayed on the test to any real-world problem. This hype cycle is fueled by many such ontological tricks, yet no one is ever really clear about how the thing they claim will soon disrupt every market on earth will actually do it. It’s just a vibe.
But vibes can have real-world negative impact when they’re used to belittle human labor, to paint it as formulaic and replaceable when it is anything but. LLMs are an impressive technology that has already reduced or eliminated a lot of white-collar labor (which can, in other industries, potentially unleash real harms to the working class). But there’s no rational reason to assume they will magically surpass their inherent epistemological limits and start pumping out scenes from The Last of Us. Our media should be less quick to accept this premise without complicating it, asking for more evidence, or pointing out why studio execs would have motive to push this line in order to diminish the value of WGA labor.
What they currently call AI is closer to a parlor trick like three-card monte than to HAL 9000.