I've been looking at a lot of AI-driven interfaces lately and I'm having a bit of a crisis about it.
On one hand, the automation is great. But on the other, I feel like we're trading "User Control" for "User as Auditor." Instead of just doing a task, I now have to spend mental energy double-checking if the AI did it right.
Also, what happens to our mental maps when the UI starts "adapting" and moving buttons based on what it thinks I want? Is that actually helpful, or just disorienting?
Are you guys actually seeing AI solve real pain points in your products, or are we just making things "feel smart" while making them more unpredictable? Would love to hear if anyone has a framework for when to pull the plug on an AI feature that's over-complicating things.
This really resonates. In many cases, AI isn’t removing friction—it’s just shifting it from execution to verification. Once users have to constantly double-check outputs, the cognitive load doesn’t disappear, it just changes form. The real win seems to come when AI works in the background and supports existing workflows rather than replacing them entirely.
Over-relying on AI is a bad thing, whether it's in UX design or coding. You still need human expertise to validate outputs and ensure the work actually aligns with real user needs and business context.
It's difficult to take a stance. If you have a clear idea of what you need from the UI and can describe it in a prompt, I think AI is able to build it fairly well. But putting a design into words is not easy.
Feels like we didn’t remove friction, we just moved it to “verification.” If users have to double-check every output, it’s not solving UX. For me, if it breaks predictability, it’s not worth shipping.
Nice insights here. I actually faced this exact issue recently and found a workaround that saved a lot of time. It’s always good to see different perspectives on this.
Mostly shifting, in my experience. AI UIs tend to replace a known interface ("click this button") with an unknown one ("describe what you want"), which sounds simpler but taxes the user's working memory more. The best AI UX I've shipped hides the AI entirely — user clicks a familiar button, AI does the heavy lift invisibly, user sees a deterministic result they can edit. Chat as UI is the wrong default for most workflows.
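A minimal sketch of that "invisible AI" pattern, assuming a hypothetical `ai_clean_title` helper standing in for a model call: the familiar button triggers the AI, but the user only ever sees an editable draft, with a plain deterministic fallback if the model fails.

```python
def ai_clean_title(raw: str) -> str:
    """Stand-in for a model call; hypothetical helper for this sketch."""
    return raw.strip().title()

def on_format_button_click(raw_title: str) -> dict:
    """Familiar button, AI behind it: always return an editable draft,
    never an irreversible action. Fall back to the raw input on failure."""
    try:
        draft = ai_clean_title(raw_title)
    except Exception:
        draft = raw_title  # deterministic fallback keeps the UI predictable
    return {"draft": draft, "editable": True, "source": raw_title}

result = on_format_button_click("  my quarterly report  ")
```

The point of the shape: the user edits a concrete result instead of auditing a black box, and the original input is always preserved for undo.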
In many cases, AI isn’t removing friction—it’s shifting it.
We’ve moved from “writing complex code” to “writing precise prompts and validating outputs.”
The effort hasn’t disappeared; it’s just moved into:
- Prompt design
- Output validation
- Fixing edge cases the AI didn't anticipate

So while developers may write less code, they often spend more time guiding and verifying the system.
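The validation step, at least, can be made mechanical rather than a manual read-through. A sketch, assuming the model is asked to return JSON with an agreed shape (the keys here are invented for illustration):

```python
import json

REQUIRED_KEYS = {"title", "summary"}  # assumed output contract for this sketch

def validate_ai_output(raw: str):
    """Accept the AI's output only if it parses as JSON and matches the
    expected shape; otherwise return None so a retry (or a human) handles it."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(data, dict) or not REQUIRED_KEYS <= data.keys():
        return None
    return data

good = validate_ai_output('{"title": "Q3", "summary": "Revenue up."}')
bad = validate_ai_output('{"title": "Q3"}')  # missing key, rejected
```

Gating outputs like this doesn't eliminate verification, but it turns "read everything carefully" into "review only what passed the gate."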
The real UX question is: who owns the complexity?
From my experience with Candor Data Platform, the difference comes when systems go beyond generation and handle more of the workflow—like maintaining context, structuring pipelines, and supporting validation. That reduces the back-and-forth and the number of “almost correct” results.
The shift that matters isn’t just AI that helps you work faster—it’s AI that takes on more responsibility for delivering reliable outcomes.
Until then, many “AI UX improvements” are less about removing complexity and more about relocating it behind a cleaner interface.
One way to think about it: use AI where the cost of being wrong is low and reversibility is high. Avoid it where errors are expensive or hard to detect.
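That rule can be written down as a checklist. A sketch with invented 1-5 scores and thresholds, purely for illustration:

```python
def should_automate(error_cost: int, reversibility: int, detectability: int) -> bool:
    """Rough heuristic on 1-5 scales (illustrative, not a standard):
    automate only when mistakes are cheap, easy to undo, and easy to spot."""
    return error_cost <= 2 and reversibility >= 4 and detectability >= 3

# e.g. drafting email subject lines: cheap, undoable, obvious when wrong
should_automate(error_cost=1, reversibility=5, detectability=4)
# e.g. auto-sending payments: expensive, irreversible, errors surface late
should_automate(error_cost=5, reversibility=1, detectability=2)
```

Scoring a feature against even a crude rubric like this is also a reasonable trigger for "pull the plug": if reversibility or detectability keeps coming up low, the AI belongs behind a review step, not in the critical path.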