ChatGPT writes terrible dialogue and it’s very funny
“My qualms are myriad,” I opined
I recently had the chance to read a hundred short stories by aspiring authors. These writers were not meant to use AI, but some of them did. It was incredibly, surprisingly obvious.
I’m sure ChatGPT will, at some point, figure out how to write better prose. At present, however, it’s delightfully bad.
As a test, I asked ChatGPT to improve this scene from The Sea Knows My Name, in which Thea and Wes debate whether mint will protect their crops from aphids:
Here’s what ChatGPT suggested:
Oho! “The mint’s verdant embrace!” How did such a pleasurable turn of phrase elude me? Forsooth!
I then asked ChatGPT to make the scene more subtle and emotional. She’s beautiful:
Burdened by the weight of a world tethered to the relentless grip of entropy
A few thoughts:
These scenes are hard to read. They’re dense and stilted. They read like they’ve been partially digested by a thesaurus. Or run through ten languages in Google Translate before being returned, despondently, to English.
ChatGPT transposed my present tense into past tense.
The dialogue tags are the worst.
A quick primer:
Dialogue: The stuff that is said (in quotation marks, unless you are Sally Rooney or similar)
Dialogue tag: The stuff that explains who spoke and how they spoke it
Ideally, dialogue tags help the reader make sense of the conversation without distracting from what’s being said. Three examples:
1:
“My god,” Jane said.
“What is it?” John said.
“My ferret has escaped,” Jane said.
“That’s not good,” John said.
This is monotonous. After the first two dialogue tags, you don’t need the others.
2:
“My god!” Jane exclaimed shrilly.
“What is it?” John gasped, grabbing her shoulders.
“My ferret has escaped,” Jane cried, dropping to her knees.
“That’s not good,” John deadpanned wryly.
This is distracting. The emotions are ping-ponging around the page. I’m getting confused about logistical matters—is John still grabbing Jane’s shoulders after she drops to her knees?
3:
“My god,” Jane said.
John lowered his newspaper. “What is it?”
“My ferret has escaped.”
“That’s not good.”
This is probably how I’d rewrite the scene. The first tag is “said”—boring but invisible. To establish John’s presence, I’ve given him a freestanding sentence rather than trying to append movement to his manner of speaking. Then I’ve cut the tags entirely from sentences three and four. Conventional wisdom recommends that writers limit their dialogue tags and use mostly “said” and “asked.”
But ChatGPT does not seem to be au fait with this wisdom.
ChatGPT was trained on a lot of books. It was trained on public domain books (Pride and Prejudice, Moby-Dick), and it was trained on copyrighted, pirated books (Game of Thrones, Harry Potter).
When I look at ChatGPT’s creative writing, I wonder if it’s able to differentiate between schools, eras, and genres. Two hundred years ago, it was much more common for characters to rapturously cry and boisterously exclaim (both from Pride and Prejudice). While I’m happy to read this in Austen, it’s weird in a modern novel.
All this to say, ChatGPT will surely get better at creative writing. It’s going to catch up with the books in its dataset and figure out that no character need ever “rejoin with a twinkle in his eye.” What I’m curious about is what comes next.
“I have a theory of the future,” she postulated postulatingly
If ChatGPT makes it easy to write good books as they exist now, authors will race to write what ChatGPT, as yet, cannot. Authors will push the boundaries in weird and new ways; “humanness” will equal greatness. I envision book covers with blurbs that say, “This book is deeply human.”
I’m reminded of Kate Folk’s excellent short story Out There, in which a San Francisco woman is approached by handsome automaton men (“blots”) programmed to woo her, date her, and steal her identity. When she meets a man on a dating app, she finds herself watching him warily. Is he human? Is he a blot? When he seems particularly charming or kind, she worries he’s a blot. When he’s inattentive, she’s reassured that he’s probably human.
As I read my hundred short stories, I found myself skeptical of perfection. Surely, I thought, someone who knew every rule of grammar also knew when the rules should be bent. Where were the sentence fragments? Where was the surprise, the unexpected weirdness? And then I would stumble upon an error.
“My ferret has escaped,” Jane said, her limbs going raggedy.
Hang on, I would say. Raggedy? As in scruffy? That doesn’t make sense. But wait—raggedy as in Raggedy Ann? Her limbs are going doll-like, floppy?
I’d stop here; reread. Was it creative, or was it just wrong? Either way, I would feel a new warmth for the piece and its writer, drawn in by the appealing badness of being human.
Currently reading: Wordslut by Amanda Montell. I read Cultish last year and reference it all the time. My only regret is that it did not occur to me to become a pop linguist.
Non-urgent thought of the week: Apparently Australian dairy percentages are different? Australian light yogurt is American whole milk yogurt; Australian normal yogurt is American whole milk yogurt + cream. Spent about two years thinking dairy in Australia just tasted better. Oat milk would never trick me like this.
If you liked this, consider sharing it with your favorite person. If you hated it, consider sharing it with your least favorite person.