I have come to believe that AI usually isn’t magic, it’s obfuscation.
My hypothesis is that the best applications of AI will be those that are hyper-contextual and invisible.
Few products will succeed by virtue of being “AI-first,” “AI-driven,” “AI-${adjective}.” Instead, AI should be used like seasoning that brings out the flavours: to accentuate an underlying, tangible, and sensible system.
Several folks have discussed how chatbots are the wrong interaction paradigm for almost every product, for a variety of good reasons. I think the conversational analogy was wildly successful for OpenAI and others, and ultimately contagious, because of two factors. First, the chatbot pattern is self-contained and has an immediate feedback loop. Second, it showcased the “magic” to the world beyond tech folks through a very familiar conceptual model that made it easy to play: to use AI for the sake of using AI.
Many products in this initial rush to experiment with AI responded by creating their own chatbot equivalents (or by invisibly replacing humans in existing chat surfaces, which is its own topic). But the goal of using these products is not to use AI for the sake of using AI.
Squishing in a chatbot or other feature that positions its identity around its inherent AI-ness doesn’t make for a great product or tool. It’s a layer of obfuscation in front of your tool. And, particularly if your product promises any kind of productivity outcome, it explicitly de-tool-ifies your tool.
Then it isn’t a tool—it’s a novelty.
Instead, we should think of AI as an underlying technology that makes existing or new products better at what they already do.
Users should not have to care that the product uses AI. They should care that, in the context of whatever they’re trying to do, they can now do it faster and better, and get value out of it that they couldn’t before.
The real magic is found in hyper-contextual, invisible applications of AI that enhance great tools.