
ChatGPT as the Original AI Error

The human fascination with conversation has led AI astray

What if he's right? What . . . if . . . he . . . is . . . right? W-h-a-t i-f h-e i-s r-i-g-h-t?
—Tom Wolfe, "The New Life Out There," New York Herald Tribune, 1965

There is a growing sense that chat—as in ChatGPT—was an error. For two recent examples, see here and here. Neither of these is from someone whose advice I would take on anything consequential, so I am, if anything, biased to think they're wrong. But, to quote writer Tom Wolfe on Marshall McLuhan, "What if he's right?"

Let's start with something that too often goes unsaid, but that helped drive ChatGPT's unprecedented adoption (which surprised OpenAI as much as anyone). And it is this: Humans are hairless, gregarious apes. Humans, and their fascinations, cannot be understood without their tireless and sometimes lethal compulsion to talk.

Why lethal? Because of how we evolved to privilege speech over breathing. In most mammals, the larynx is high in the throat, keeping the airway and esophagus reasonably separate. In humans, however, the larynx descended to enlarge the vocal tract. This made for a more resonant cavity, enabling singing and complex speech, but it also created a shared pipe via which even small amounts of wrongly routed food could cause us to choke. But the evolutionary advantage of advanced communication outweighed the survival risk of occasional choking.

Humans must be understood through this lens: they are so obsessed with talking that they are willing to die to get a few words out. And, as a result, they judge intelligence based on how human-like things sound.

We have a long history of being blinded by things that act intelligent in a human-ish way but aren't. Among the earlier examples was the so-called Mechanical Turk (18th–19th century). Played up as a chess-playing automaton, it was actually a skilled human chess player hidden in a box. But because it played like a human, it was treated as a wonder, as if it were human, perhaps eventually even capable of speech!

More recently, and famously, we saw this with ELIZA. Developed by AI researcher Joseph Weizenbaum at MIT in the mid-1960s, it was an early psychiatry chatbot. ELIZA essentially mimicked what people said, responding with an anodyne question triggered by a keyword. This worked well enough that it was briefly a sensation, even if users rapidly wanted to throttle it.

You can see what I mean in the kind of exchange an ELIZA variant produces.
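As a rough sketch of the trick, here is the keyword-and-template pattern ELIZA relied on, in a few lines of Python. The keywords and canned questions below are my own illustrative stand-ins, not Weizenbaum's original DOCTOR script:

```python
import re

# Illustrative keyword -> canned-question rules in the ELIZA style.
# These rules are hypothetical stand-ins, not Weizenbaum's original script.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {}?"),
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {}?"),
    (re.compile(r"\bmother\b|\bfather\b", re.I), "Tell me more about your family."),
    (re.compile(r"\bcomputer", re.I), "Do computers worry you?"),
]

def respond(utterance: str) -> str:
    """Return a canned question triggered by the first matching keyword."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            fragment = (match.group(1) if match.groups() else "").rstrip(".!?")
            return template.format(fragment)
    return "Please go on."  # anodyne fallback when no keyword matches

for line in ["I am unhappy.", "My mother takes care of me.", "It's a nice day."]:
    print(">", line)
    print(respond(line))
# > I am unhappy.
# How long have you been unhappy?
# > My mother takes care of me.
# Tell me more about your family.
# > It's a nice day.
# Please go on.
```

Note how human-ish the exchange feels even though the "intelligence" here is four regular expressions and a fallback.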

While ELIZA itself didn't go much past cheap mimicry, you can trace a chat lineage from it to modern AI chatbots like ChatGPT. And even the original Turing test of AI intelligence is chat-based: Alan Turing proposed that if a knowledgeable judge, typing at a terminal, couldn't tell a chatting computer from a person, then the computer was intelligent. Again, human-like conversation is intelligence to gregarious apes.

And that brings us to today, and the AI world that ChatGPT has made. It is, as must be obvious, only the most recent in a long line of "chat" services, dating back to Turing's original test. What they all have in common is the human fascination with services that sound like us: sometimes slavishly, as with ELIZA, and other times stochastically, as with ChatGPT. The resulting rapid ascent (ChatGPT is the fastest-growing consumer technology product in history) has it climbing straight up a wall of awareness and adoption.

But it is a chatbot, even if a remarkably fluid, fast, and persuasive one. And by viewing large language models—LLMs, the technology underlying most recent AI developments—through the lens of chat, which is understandable given recent chat history and our longer chattering evolution, we have become fixated. Adding AI to a product or a service has increasingly meant, post-ChatGPT, adding chat to the product or service.

That, however, is often an error. People no more want to chat with every device in their life than they want to have dinner with their KitchenAid dishwasher. They just want those things to do what they were bought to do, and chat, too often, gets in the way. Consumers are increasingly wary of chat interfaces, wondering why they are appearing everywhere.

The chat compulsion is even more misdirected in the workplace. Adding chat functionality to sales automation doesn't do much for most salespeople; adding chat to factory-floor CNC routers will irritate most shop workers. I spoke to a salesperson at a large, publicly traded company recently who explained that management, after noisily bragging on earnings calls about adding chat to various products and services, was now ... making little mention of it. There had been minimal customer interest, so out chat (quietly) went.

Does this mean current AI is a bubble or an error? No, but it does mean that conversation-fixated humans have latched onto the most conventional and conversational aspect of language models, and thrown that at every application in sight. While it sometimes works, it often does not. To this way of thinking, ChatGPT was the original error: a seductive service that appealed to our biases, to a fault.

If we break that fixation, what works? LLMs are grammar engines predicting likely tokens, so one plausible future for current AI is the messy business of machines talking to machines. There are thousands of machine languages, billions of speakers, endless grammars, and machine-to-machine chatter is already nonstop. The frontier isn't another chat pane bolted onto a product and then promoted to irritated customers via an earnings call, but machine-to-machine dialogue: procurement systems negotiating supplier formats, logistics tools aligning on addresses, financial systems validating identifiers. Boring as fk, yes. But these are all languages with strict grammars, where speed and accuracy matter more than fluency, charm, or faking out a college instructor or a hiring manager.
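To make "strict grammar" concrete, take one of those identifier checks. An ISIN, the standard identifier financial systems exchange for securities, is valid only if it matches a fixed shape and passes a Luhn check digit. A minimal sketch of that grammar in Python (standard ISIN rules; "US0378331005" is a known-valid identifier):

```python
import re

# The grammar of an ISIN: 2-letter country code, 9 alphanumerics, 1 check digit.
ISIN_SHAPE = re.compile(r"^[A-Z]{2}[A-Z0-9]{9}[0-9]$")

def is_valid_isin(isin: str) -> bool:
    """Validate an ISIN: shape first, then the Luhn check digit."""
    if not ISIN_SHAPE.match(isin):
        return False
    # Expand letters to two digits (A=10 ... Z=35); digits pass through.
    expanded = "".join(str(int(ch, 36)) for ch in isin)
    # Luhn: from the right, double every second digit and sum digit-wise.
    total = 0
    for i, ch in enumerate(reversed(expanded)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
        total += d // 10 + d % 10
    return total % 10 == 0

assert is_valid_isin("US0378331005")      # well-formed: passes
assert not is_valid_isin("US0378331006")  # one digit off: fails instantly
```

No fluency, no charm: a candidate string either parses and checks out or it doesn't, and a machine counterpart can verify that in microseconds.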

Compared to that, human banter—our compulsive, choke-inducing chatter—looks like evolutionary baggage that keeps distracting us. The true impact of LLMs may resemble electricity's shift from stage-lighting novelties to silent, ubiquitous wiring: boring, invisible, and indispensable. Humans need to stop thinking about AI as chat, an artifact of the original chatbot error still leading things astray.