Jason Boog | June 15, 2021

The GPT-3 Edition

On writing machines, creativity, and on-demand AI

Recommended Products

The Deep End: The Literary Scene in the Great Depression and Today

A book by Jason that explores the literary scene during the Great Depression and compares it to today's landscape.

The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future

Kevin Kelly's 2016 book predicting an AI distribution model that resembles OpenAI’s API access.

Jason leads editorial at Fable, a social reading platform for book clubs. He is also the author of The Deep End: The Literary Scene in the Great Depression and Today and wrote The New Masses Edition and The Roblox Edition for WITI.

Jason here. Way back in 1953, Roald Dahl, the author who chronicled the disruptive career of a candy baron and predicted the devastating consequences of genetically modified fruit, imagined a punch-card computer that could write a novel in 15 minutes flat. In that story, “The Great Automatic Grammatizator,” an aspiring novelist wrestles with buttons, stoppers, switches, and knobs on a massive writing machine. “By so doing, he was able continually to modulate or merge fifty different and variable qualities such as tension, surprise, humor, pathos, and mystery,” wrote Dahl, as the machine speed-typed publishable pages every few seconds. Of course, computers couldn't write novels in the 1950s. But Dahl wasn't mocking computers. He was mocking the dwindling integrity of the publishing industry, whose output increasingly felt to him like the product of a soulless factory.

But what Dahl satirized seventy years ago is almost a reality now. I can recreate many of the Grammatizator’s functions using the API for OpenAI’s language model, GPT-3. Language models are artificial neural networks that use probability, machine learning, and a huge body of training texts to master language prediction and produce increasingly human-like responses to text-based prompts. Released a year ago, the company’s beta API program lets civilians access GPT-3 remotely. Instead of punch cards, this language model was trained on a massive selection of text datasets, including the Common Crawl corpus (around one trillion words gathered through eight years of web crawls), two “internet-based books corpora,” and the English-language version of Wikipedia. Through carefully structured prompts, I can enter all the “variable qualities” I want into OpenAI’s Playground interface, adjusting fields like “Presence Penalty” to keep GPT-3 focused on certain themes or “Temperature” to control the randomness of the model’s sentences. It would cost about six dollars (and many, many hours of clicking) to generate a 75,000-word novel through OpenAI’s beta API.
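
That six-dollar figure is easy to check with back-of-the-envelope math. A minimal sketch, assuming the Davinci-tier rate of roughly $0.06 per 1,000 tokens and the common rule of thumb that one token covers about 0.75 English words (both figures are my assumptions, not stated in this piece):

```python
# Rough cost estimate for generating text with GPT-3's Davinci model.
# Assumed: ~$0.06 per 1,000 tokens, ~0.75 English words per token.

WORDS_PER_TOKEN = 0.75        # rule-of-thumb ratio for English text
PRICE_PER_1K_TOKENS = 0.06    # assumed Davinci rate, in dollars

def estimate_cost(words: int) -> float:
    """Approximate dollar cost to generate `words` of English text."""
    tokens = words / WORDS_PER_TOKEN
    return tokens / 1000 * PRICE_PER_1K_TOKENS

print(f"${estimate_cost(75_000):.2f}")  # a 75,000-word novel → $6.00
```

A 75,000-word novel works out to about 100,000 tokens, which lands right on the "about six dollars" quoted above.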

A screenshot of the playground interface that gives users access to OpenAI's API. Here, GPT-3 re-completes the first paragraph of this WITI newsletter (human writing in bold).

Dahl's story ends with the Great Automatic Grammatizator replacing about 70 percent of published authors. That’s a bit of a stretch. I subscribe to Brier's Law from WITI - The Machine Learning Edition: "The bar to get these things ok is low and great is high." And even when language models like GPT-3 can generate a coherent novel, nobody will be clamoring for AI-written books. We already have millions of human authors struggling to find readers! These language models won’t replace writers, but they will substantially change the way we write. 

Why is that interesting?

OpenAI’s beta API distributes GPT-3 to a rapidly expanding ecosystem of more than 300 apps, powering writing tools that reproduce the functions of “The Great Automatic Grammatizator.” In his 2016 book, The Inevitable, Kevin Kelly predicted an AI distribution model that resembles OpenAI’s API access:

The AI on the horizon looks more like Amazon Web Services—cheap, reliable, industrial-grade digital smartness running behind everything, and almost invisible except when it blinks off. This common utility will serve you as much IQ as you want but no more than you need. You’ll simply plug into the grid and get AI as if it was electricity.

Using the Playground requires no coding skills, so an amateur like me can learn to craft creative GPT-3 prompts for pennies. That freedom reminds me of my very first Internet experiences, back when my dial-up modem started singing and opened all these new possibilities. If Kelly’s prediction continues to hold, musicians, artists, filmmakers, and game designers will all plug into AI electricity as GPT-4 and other hypothetical models expand their capabilities. And someday, my grandkids will wail and weep when the AI crashes and they lose all their progress in a Holodeck simulation.

This API model channels the powers of GPT-3 in specific ways. With AI Dungeon ($9.99 a month), you can generate text adventures, choosing from a whole genre menu that includes fantasy, mystery, apocalyptic, cyberpunk, or custom, setting up story flavors as Dahl imagined. With Shortly AI ($79 a month), you can choose the “I’m writing a story” option and give GPT-3 a precise description of the theme and setting you desire. With a few taps, your AI writing partner will start a narrative that follows your parameters, without Dahl’s cumbersome knobs and switches. Finally, the invite-only service Sudowrite seems adept at mimicking human writing styles. Novelist Stephen Marche test-drove the app for The New Yorker and imagined writing a new Jane Austen or P. G. Wodehouse novel with his AI writing partner: “I could do either in a weekend,” he wrote.

API access to GPT-3 has some literal limits. Like other language models, GPT-3 breaks words down into “tokens,” or word segments that are easier for the model to process. When an OpenAI customer uses the Playground to generate text, the language model will not let the combined prompt text and GPT-3-generated response exceed 2,048 tokens, or about 1,500 words. While I could generate 1,500 words over and over again inside the Playground fairly cheaply, those current limits make it virtually impossible to produce a coherent novel-length narrative. They also make it hard to fine-tune the language model with longer texts.
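
The budget arithmetic behind that ceiling is simple: the prompt and the completion share one window, so every token you spend on the prompt comes out of the response. A sketch using only the 2,048-token window and the ~0.75 words-per-token ratio mentioned above:

```python
# How GPT-3's 2,048-token window constrains a Playground session:
# the prompt and the generated completion share the same budget.

CONTEXT_WINDOW = 2048       # combined prompt + completion limit
WORDS_PER_TOKEN = 0.75      # rough ratio for English text

def max_completion_tokens(prompt_tokens: int) -> int:
    """Tokens left for GPT-3's response after the prompt is counted."""
    return max(CONTEXT_WINDOW - prompt_tokens, 0)

# A 500-token prompt leaves 1,548 tokens — about 1,161 words of response.
remaining = max_completion_tokens(500)
print(remaining, round(remaining * WORDS_PER_TOKEN))
```

Feed the model a long excerpt to continue and you shrink its room to answer, which is why novel-length generation means stitching together many short runs.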

But even once the token limits grow expansive enough to hold a novel, these tools won't replace novelists any more than the synthesizer replaced musicians. As philosophers consider the implications of GPT-3, the language model's failure at “understanding” is a common theme: language models cannot replicate the very human ability to connect and make sense of the bewildering array of people, places, and things we encounter on a daily basis. Philosopher of technology Shannon Vallor wrote an essay about how “the labor of understanding” will continue to confound AI research.

When GPT-3 is unable to preserve the order of causes and effects in telling a story about a broken window, when it produces laughable contradictions within its own professions of sincere and studied belief in an essay on consciousness, when it is unable to distinguish between reliable scholarship and racist fantasies — GPT-3 is not exposing a limit in its labor of understanding. It is exposing its inability to take part in that labor altogether.

Understanding is a key to unlocking the mysteries of consciousness, but it’s not necessary for a tool that helps humans be creative, like a junior copywriter using the GPT-3-powered app HyperWrite (starts at $10 a month) to brainstorm copy. Someday very soon, you’ll be able to modulate the tone, style, genre, and narrative direction of your writing from your laptop. Microsoft “exclusively licensed GPT-3” last September, so it could start with a GPT-3-powered version of Clippy lurking around your Word documents.

Whatever it looks like, language models like GPT-3 will evolve into ever more brilliant parrots, making our writing stranger, more varied, and deeper. I would never draft an article on a typewriter now, and future generations will rely on these AI-augmented tools as much as I depend on spell-check. But we will always need humans to do the understanding part. The rest is code. (JB)

Vintage Computer of the Day:

The IBM 701, released the same year as Dahl’s “The Great Automatic Grammatizator,” was an enormous computer the size of three refrigerators. This high-speed calculator held 72 Williams tubes, each storing 1,024 bits, and could process 16,000 addition or subtraction operations per second. The operator console on the machine (pictured here) resembled the instrument panel of a vintage fighter jet. (JB)

Quick Links:

  • IEEE Spectrum senior editor Eliza Strickland wrote about the ongoing struggle to filter the output for language models like GPT-3. “It’s not clear how OpenAI will get the risk of toxic language down to a manageable level—and it’s not clear what manageable means in this context. Commercial users will have to weigh GPT-3’s benefits against these risks.” (JB)

  • I listened to this Why Theory podcast episode dedicated to Tenet five months ago, and I haven’t stopped talking about it. Todd McGowan and Ryan Engley introduced me to Freud’s concept of “Afterwardsness,” an interesting way of thinking about Christopher Nolan’s film and the long-term effects of the pandemic. (JB)

  • Congressman Ted Lieu introduced a “21st Century Federal Writers’ Project” bill last month, looking out for writers rocked by industry shifts since the pandemic. “The project would entail $60 million administered by the Department of Labor to nonprofits, libraries, news outlets and communications unions,” writes the LA Times. (JB)

Thanks for reading,

Noah (NRB) & Colin (CJN) & Jason (JB)

Why is this interesting? is a daily email from Noah Brier & Colin Nagy (and friends!) about interesting things. If you’ve enjoyed this edition, please consider forwarding it to a friend. If you’re reading it for the first time, consider subscribing (it’s free!).

© WITI Industries, LLC.