Recent headlines have made one thing clear: If AI is doing an impressively good job at a human task, there’s a good chance the task is actually being done by a human.
When George Carlin’s estate sued the creators of a podcast who said they had used AI to create a standup routine in the late comedian’s style, the podcasters countered that the script had actually been written by a human named Chad. (The two sides recently settled the suit.)
A company making AI-powered voice interfaces for fast-food drive-thrus can complete only 30% of orders without a human reviewing its work.
Amazon is dropping its automated “Just Walk Out” checkout system from new stores – a system that relied on far more human verification than the company had hoped.
We’ve seen this before – though it may already be lost to Silicon Valley’s pathologically short memory.
Back in 2015, AI chatbots were the hot thing. Tech giants and startups alike pitched them as always-available, always-chipper, always-reliable assistants. One startup, x.ai, advertised an AI assistant who could read your emails and schedule your meetings. Another, GoButler, offered to book your flights or order your fries through a delivery app. Facebook also tested a do-anything concierge service called M, which could answer seemingly any question, do almost any task, and draw you pictures on demand.
But for all of those services, the “AI assistant” was often just a person. Back in 2016, I wrote a story about this and interviewed workers whose job it was to be the human hiding behind the bot, making sure the bot never made a mistake or spoke nonsense.
To power Facebook’s M, every single message was reviewed by contractors who worked out of the company’s Menlo Park headquarters. The workers at GoButler and x.ai told me they worked long hours or overnight shifts to mimic the bots’ perma-online presence. Customers constantly asked them whether they were bot or human. Some customers sent them crude sexual messages, assuming no human would read them.
(In another indicator of Silicon Valley’s rapid evolutionary cycle, x.ai closed in 2021, and the name is now attached to xAI, Elon Musk’s new AI venture.)
People build AI to mimic human intelligence and capabilities. But when the AI can’t quite deliver on the promise, we end up with humans pretending to be chatbots pretending to be humans.
It’s the latest iteration of a trick that stretches back at least as far as 1770, when the original Mechanical Turk machine appeared to play chess automatically — but actually concealed a human chessmaster inside its apparatus.
Of course, some things are different this time around. Chatbots are more fluent in their conversation and mimicry. But today’s bots, just like 2016’s, struggle with reliability. We still trust a human more when something has to be done right.
Investor frenzy often drives this hype. In 2016, one founder told me that human-assisted AI was “the hottest space to be in right now.” Another said she was frustrated by the way investors would “congratulate us on having a fully automated bot” when the technology still relied heavily on humans.
As long as there’s the incentive to overhype AI’s abilities, there will be gaps between what AI promises and what it can reliably do. To fill that gap, you can always hire a person. —Ellen Huet