Should We Be Worried about ChatGPT?

Last winter the chattering classes were all chattering about this: the online language model that can churn out coherent essays in a matter of seconds. In November ChatGPT was made available for free on the OpenAI website, so anyone could try it out for themselves. I did so, and wrote about the experience for WORLD magazine (PDF here). The results were impressive, but since then some flaws of this brainy tool have come to light—more about that below.

Like it or not, artificially intelligent writing will be part of our future, and will undoubtedly replace entire swaths of the writing profession, such as journalism. Why pay for a reporter when you can feed the facts into a machine and watch it churn out a readable story for free? Especially when few reporters these days seem to wear out any shoe leather tracking down a story, depending instead on stringers in the field to get it for them. Opinion writers, likewise, echo each other so frequently and tiresomely that one wonders: why not just program a progressive language bot and let it stir up words to fit the issue?

Especially since ChatGPT is programmed progressive already. A reporter for WORLD asked it to write an essay in praise of President Biden and it obliged at once. An essay in praise of President Trump was beyond its pay grade, however. That, I would say from a neutral stance, is hogwash. And a flaw. The model has all the information on the web at its disposal and could easily find more right-wing opinions to draw from. Could ChatGPT be just another tool of the left to shut down alternate views?

But its handling of obvious facts isn’t completely reliable either. After my column appeared in WORLD, a reader wrote in with a correction. The first assignment I gave the machine was, “Explain the controversy between Athanasius and Pelagius.” ChatGPT dived into early church history with aplomb and spun out a grammatical and historically accurate reply—except for one thing. The debate was between Augustine and Pelagius. Athanasius is best known for an earlier controversy with Arius over the divinity of Christ. I was chagrined by the reader’s comment; I knew that. I was just being sloppy. But so was the machine. I looked at the original interaction and yes, ChatGPT had assumed my mistake and run with it. It gets “smarter” in time as factual errors are corrected, but I’m puzzled how it could explain the controversy accurately without gently reminding me that Augustine, not Athanasius, was one half of it.

The program is entirely imitative; it learned to “talk” the way any child learns, by listening and responding. In time, little as I’d like to admit it, it will probably be able to imitate human genius. Its current attempts at poetry and drama are lame, but who’s to say it can’t learn from the best? Will enough Emily Dickinson input produce Dickinsonian output? Or a musical comparable to The Phantom of the Opera that won’t require royalties paid to Andrew Lloyd Webber? (The bottom line is the bottom line.)

Some doomsayers see all the professions taken over by machines, including the trades and crafts. Could happen, but only if we let it. My main problem with ChatGPT is that it short-circuits thinking skills. Though many high schools and colleges have already outlawed it, clever nerds will continue to find their way around such restrictions. And why not? Why not outsource hours of brain work (if one follows the essay-writing protocols in Wordsmith Craftsman) to a machine that can churn out a B+ essay in minutes? That’s efficiency!

But it isn’t thinking, and there are no shortcuts to thinking. A machine “thinks” in linear fashion; a human in ever-widening circles that lead to surprising juxtapositions and unforeseen conclusions. Besides straightforward academic questions, I asked ChatGPT to compose a college entrance essay, a marriage-proposal letter, an approach to my parents about my transitioning from female to male, and advice on how to talk to a spouse with dementia. All the answers were acceptable and reasonable—and spooky.

Literate societies create; nonliterate ones sustain; trivial ones fritter away hard-won gains until they’re sitting on the packed-earth floor. Let’s not outsource thinking. Let’s just not.