The blank canvas evolved. The trust problem didn't.
Scaffolded prompts, voice input, and suggestion chips have largely solved the blank canvas problem — the barrier to entry is lower than ever. But users who became fluent at prompting stopped questioning the answers they got back. Prompt literacy and output literacy don't develop together.
The result is a new class of user: highly capable at generating responses, poorly calibrated at evaluating them. They've learned to prompt. They haven't learned to doubt. That gap is the real design problem — and it gets worse the more confident the user becomes.
Five principles of AI Assistant UX
What the prototype demonstrates
The prototype shows two states of the same interface: a Home View that scaffolds the entry point with suggestion chips, categorized examples, and a voice input option — and a Chat View that surfaces trust controls inline without burying them in settings.
In the Chat View, four toggles sit near the input field: Memory (shows what the system is drawing on), Confidence (adds a score and source bar to every response), Reasoning (reveals the step-by-step logic), and Uncertainty (flags when the model is near the edge of its knowledge). Each toggle is a conversation tool, not a settings preference.
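One way to read "conversation tool, not a settings preference" is that the four toggles are state on the conversation itself, flippable mid-exchange, rather than a global profile setting. A minimal sketch of that idea — all names here (`TrustToggles`, `toggle`) are illustrative assumptions, not the prototype's actual code:

```typescript
// Hypothetical model of the four inline trust controls.
// Each flag changes how the next response is rendered.
type TrustToggles = {
  memory: boolean;      // show what context the system is drawing on
  confidence: boolean;  // attach a score and source bar to each response
  reasoning: boolean;   // reveal the step-by-step logic
  uncertainty: boolean; // flag answers near the edge of the model's knowledge
};

// The toggles live on the conversation, not in global settings.
type Conversation = { id: string; toggles: TrustToggles };

const defaultToggles: TrustToggles = {
  memory: false,
  confidence: false,
  reasoning: false,
  uncertainty: false,
};

// Flip one toggle immutably, mid-conversation.
function toggle(c: Conversation, key: keyof TrustToggles): Conversation {
  return { ...c, toggles: { ...c.toggles, [key]: !c.toggles[key] } };
}

const convo: Conversation = { id: "c1", toggles: defaultToggles };
const withConfidence = toggle(convo, "confidence");
console.log(withConfidence.toggles.confidence); // true
```

Keeping the flags per-conversation is what lets a user turn Reasoning on for one tricky answer and off again without leaving the chat.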
Every response includes persistent feedback controls — thumbs up/down, a regenerate button, and a "Report an Issue" dialog that prompts for a specific correction rather than a generic flag. The correction flow creates a deliberate moment, making the user an active participant in improving the output rather than a passive consumer of it.
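The distinction between a generic flag and a specific correction could be enforced in the data model itself: the report variant carries a required correction, and anything without one is rejected. A sketch under that assumption — the type and function names are hypothetical, not taken from the prototype:

```typescript
// Hypothetical feedback model: thumbs and regenerate are lightweight,
// but "Report an Issue" must carry a specific correction.
type Feedback =
  | { kind: "thumbs"; up: boolean }
  | { kind: "regenerate" }
  | { kind: "report"; issue: string; correction: string };

// Reject reports that don't include an actual correction,
// keeping the user an active participant in improving the output.
function isActionable(f: Feedback): boolean {
  if (f.kind !== "report") return true;
  return f.issue.trim().length > 0 && f.correction.trim().length > 0;
}

console.log(
  isActionable({
    kind: "report",
    issue: "Wrong launch year",
    correction: "It launched in 2019, not 2020",
  })
); // true
console.log(isActionable({ kind: "report", issue: "", correction: "" })); // false
```

Making the correction a required field is the "deliberate moment" in code form: the dialog cannot be completed as a drive-by complaint.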
"The UX improved. The trust problem got worse. Users learned to prompt — and in doing so, learned to stop questioning the answer."