As recent advances in generative AI captured the world’s attention earlier this year, it was common for impressed observers to say: “And this is the worst AI will ever be.” What AI could do with only a few keystrokes, whether conjuring screenshots of fictional movies or writing entire marketing plans, was astonishing, and the technology’s abilities would only improve.
Even AI’s detractors, who raised fears about bias, job displacement, fake news and even existential risk to humanity, conceded its promise. A coalition of computer scientists and tech leaders called for a pause on the most powerful AI experiments, likening the risks to a nuclear arms race.
But everyone agreed that AI had finally arrived, and would soon upend everything.
Only a few months later, the AI revolution is running into some trouble. While AI technologies will almost certainly become a mainstay of postmodern life, both individual users and the enterprises seeking to exploit them are already finding that the transformative potential isn’t quite here yet. And even once that future arrives, AI may prove less liberating than sneakily dissatisfying.
Early adopters of generative AI quickly became acquainted with its limitations, such as a lack of nuance and an inability to explain its decisions. Most perplexing is its tendency to “hallucinate” data, a fancy way of saying it just makes things up. That realization came too late for two lawyers at Levidow, Levidow & Oberman, who in June were fined $5,000 after submitting a ChatGPT-written brief that cited nonexistent cases.
Acknowledgment of these flaws soon produced new advice about proper use: “it’s great for generating ideas,” or “it’s a good way to get a first draft.” As with Wikipedia in its early days, the conventional wisdom on AI soon became that it might be useful, but it certainly can’t be trusted.
What’s more, after record-setting early growth, the public’s interest may already be waning. Google Trends data from early August show that searches for ChatGPT have fallen by half since their April peak. An accelerated adoption curve, it seems, brings on the hype cycle’s “trough of disillusionment” just as quickly.
But what may threaten AI’s revolutionary status in the long run isn’t a civilization-ending threat or even the bursting of a market bubble. What might really sully its reputation is users simply becoming overfamiliar with, and underwhelmed by, its signature style.
This is already easiest to see with text-to-image programs such as Midjourney and DALL-E, which arrived on the market before chat-based large language model (LLM) software. Early versions of these programs often bore telltale signs of their AI origin, including difficulty drawing hands and a total inability to render legible text. Those flaws have improved, but the more AI-generated artwork you see, the more it tends to look alike. Nearly all of it has a soft-focus, airless, fantastical quality that makes it easily identifiable.
The same will happen with LLMs as their use in business and media proliferates. AI can be useful for brainstorming a list of ideas, or writing the first draft of a press release. But the general sameness will increasingly betray its origins, even if only used as a starting point.
What is this style, exactly? Considering that ChatGPT is by far the most popular such service, it stands to reason that future business communications will trend toward the cheery, noncommittal, aim-to-please averageness that characterizes its “voice.” Not that work emails have ever been an art form, but you can instantly tell an actual message from a mail merge.
One field where AI’s implications are causing sleepless nights is the entertainment industry, where the current Hollywood strike is driven in part by AI-related concerns. No one expects AI to write an Oscar-winning screenplay, but no one will be too surprised if it’s used to outline lowbrow sitcoms and reality series. Whatever its drawbacks, AI excels at mediocrity.
These stylistic tendencies owe something to the technology’s underlying architecture. LLMs are not “intelligent” in the way of the fictional computers in Stanley Kubrick’s “2001: A Space Odyssey” or Spike Jonze’s “Her.” Rather, they predict which word is statistically most likely to come next in a given response. Put another way: AI just tells you what it thinks you want to hear.
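For readers curious what “predicting the next word” looks like in practice, here is a deliberately toy sketch in Python. The word table and probabilities are invented for illustration; a real LLM scores tens of thousands of possible tokens using billions of learned parameters rather than a hand-written dictionary, but the basic move, pick the likeliest continuation, is the same.

```python
# Toy illustration only: a hand-built table of "which word tends to follow
# which" stands in for the statistical patterns a real LLM learns from text.
# All words and probabilities below are invented for the example.
next_word_probs = {
    "the":  {"best": 0.40, "team": 0.35, "product": 0.25},
    "best": {"solution": 0.50, "practices": 0.30, "regards": 0.20},
}

def predict_next(word, table):
    """Return the most probable continuation: literally the word the model
    'thinks you want to hear' next."""
    candidates = table.get(word)
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

phrase = ["the"]
while True:
    nxt = predict_next(phrase[-1], next_word_probs)
    if nxt is None:
        break
    phrase.append(nxt)

print(" ".join(phrase))  # prints: the best solution
```

Always choosing the most probable continuation is exactly how you end up with prose that is fluent, agreeable and utterly average.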
None of this is to suggest AI isn’t the next big thing. Especially in applications where originality and specificity are less important, such as reducing word count in a document, or filling in missing information in a photograph, AI can work like magic.
But as we become ever more familiar with AI-generated information, from sports recaps to customer service bots, research summaries to recommendation systems, we will increasingly notice how lifeless it feels. The question for managers and other decision-makers will become: Does the “artificial” get in the way of the “intelligence”? And for consumers: Will we follow the example of the lazy lawyers, or will we choose human-originated content first?
Hollywood itself suggests an ambiguous future. As this summer’s topsy-turvy box office indicates, audiences still show up for CGI-laden spectacles, but not with as much enthusiasm as they used to. Meanwhile, inventive storytelling and real-life stunt work can still capture imaginations. Like today’s moviegoers, tomorrow’s workers and consumers will still long for the real thing.
William Beutler is the founder of Beutler Ink, a digital creative agency specializing in content creation and Wikipedia consulting. He was previously a writer with National Journal Group.