[Prototype screenshot: Home View. Sidebar with recent chats; "What can I help with?" entry point; eight suggestion cards under "Popular uses" (Write, Image, Explain, Analyze, Research, Code, Brainstorm, Summarize). Each card carries a "1. Scaffolding Ambiguity" annotation: templated prompts offer starting points users can click to auto-fill the input and then customize, easing the blank-canvas problem without constraining what's possible.]
[Prototype screenshot: Chat View. Same sidebar; sample exchange ("Explain quantum computing in simple terms") with the assistant's response, an "AI can make mistakes. Check important info" disclaimer, a "Report an Issue" dialog asking "What seems wrong with this response?", and an "Action completed" confirmation toast.]
AI Assistant — TL;DR — Ken Hung
Prototype 01: Designing for Different AI Archetypes
AI Assistant
Conversational Partner

The blank canvas got easier. The trust problem got worse.

1

The blank canvas evolved. The trust problem didn't.

Scaffolded prompts, voice input, and suggestion chips have largely solved the blank canvas problem — the floor is lower than ever. But users who became fluent at prompting stopped questioning the answers they got back. Prompt literacy and output literacy don't develop together.

The result is a new class of user: highly capable at generating responses, poorly calibrated at evaluating them. They've learned to prompt. They haven't learned to doubt. That gap is the real design problem — and it gets worse the more confident the user becomes.


2

Five principles of AI Assistant UX

01
Scaffolding Ambiguity
The blank canvas is intimidating by default. Suggestion chips, templated prompts, and contextual examples reduce friction at entry without constraining what's possible. Users learn the system's range by doing, not by reading documentation. The goal is to lower the cost of the first good prompt.
02
Memory Transparency
When the system personalizes a response, users should understand why. Visible memory context transforms "how did it know that?" from a creepy moment into a trust-building one. Without transparency, personalization feels like surveillance. With it, it feels like a colleague who pays attention.
03
Trust-by-Design
Confidence scores give users calibration signals without requiring them to verify every claim. But the signal only works if it's honest — a permanently high confidence score teaches users to ignore it. The interface must make uncertainty visible, not just confidence.
04
Conversational Explainability
Users who understand how a response was constructed become better at prompting over time. Step-by-step reasoning panels demystify the black box — not for every response, but on demand. "Why did you say that?" is one of the most important questions an AI interface can answer.
05
Error Transparency & Repair
Uncertainty banners, thumbs-up/down feedback, message editing, and a structured correction dialog all serve one goal: make it easy to catch and fix errors before they compound. False confidence is the fastest way to permanently lose a user's trust. The interface should make correction feel natural, not like an admission of failure.
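The "honest signal" requirement in principle 03 can be made concrete as a tiered mapping from a raw model score to what the interface renders. The sketch below is illustrative TypeScript: the names (`ConfidenceSignal`, `toConfidenceSignal`) and the thresholds (0.8, 0.5) are placeholder assumptions, not values from the prototype. The design point it encodes is that mid and low scores pair the label with a visible uncertainty cue instead of defaulting to reassurance.

```typescript
type ConfidenceBand = "high" | "moderate" | "low";

interface ConfidenceSignal {
  band: ConfidenceBand;
  label: string;
  // When true, the UI pairs the score with an uncertainty banner so the
  // signal stays honest rather than reading as permanent reassurance.
  showUncertaintyBanner: boolean;
}

// Map a raw score in [0, 1] to a calibration signal the Chat View can render.
// Thresholds are placeholders; a real system would calibrate them.
function toConfidenceSignal(score: number): ConfidenceSignal {
  if (score < 0 || score > 1) throw new RangeError("score must be in [0, 1]");
  if (score >= 0.8) {
    return { band: "high", label: "High confidence", showUncertaintyBanner: false };
  }
  if (score >= 0.5) {
    return { band: "moderate", label: "Moderate confidence: verify key claims", showUncertaintyBanner: true };
  }
  return { band: "low", label: "Low confidence: treat as a starting point", showUncertaintyBanner: true };
}
```

A permanently green badge would teach users to ignore it; tying the banner to the band keeps the signal informative.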

3

What the prototype demonstrates

The prototype shows two states of the same interface: a Home View that scaffolds the entry point with suggestion chips, categorized examples, and a voice input option — and a Chat View that surfaces trust controls inline without burying them in settings.

In the Chat View, four toggles sit near the input field: Memory (shows what the system is drawing on), Confidence (adds a score and source bar to every response), Reasoning (reveals the step-by-step logic), and Uncertainty (flags when the model is near the edge of its knowledge). Each toggle is a conversation tool, not a settings preference.
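The four toggles can be sketched as a small view-model: each one gates an inline panel on a per-response basis rather than flipping a buried preference. The TypeScript below is an assumption-laden sketch, not the prototype's actual code; `TrustToggles`, `ResponseMeta`, and `panelsFor` are hypothetical names.

```typescript
interface TrustToggles {
  memory: boolean;      // show what stored context the response drew on
  confidence: boolean;  // attach a score and source bar to each response
  reasoning: boolean;   // reveal the step-by-step logic on demand
  uncertainty: boolean; // flag answers near the edge of the model's knowledge
}

interface ResponseMeta {
  memoryContext: string[];
  confidenceScore: number; // 0..1
  reasoningSteps: string[];
  nearKnowledgeEdge: boolean;
}

// Derive which inline panels the Chat View renders for one response.
// A toggle only produces a panel when the response has something to show,
// so empty panels never compete for attention.
function panelsFor(toggles: TrustToggles, meta: ResponseMeta): string[] {
  const panels: string[] = [];
  if (toggles.memory && meta.memoryContext.length > 0) panels.push("memory");
  if (toggles.confidence) panels.push("confidence");
  if (toggles.reasoning && meta.reasoningSteps.length > 0) panels.push("reasoning");
  if (toggles.uncertainty && meta.nearKnowledgeEdge) panels.push("uncertainty");
  return panels;
}
```

Deriving panels per response, rather than globally, is what makes each toggle "a conversation tool, not a settings preference."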

Every response includes persistent feedback controls — thumbs up/down, a regenerate button, and a "Report an Issue" dialog that prompts for a specific correction rather than a generic flag. The correction flow introduces a deliberate moment of friction, making the user an active participant in improving the output rather than a passive consumer of it.
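The difference between a structured correction and a generic flag can be sketched as a small payload plus a validation step. The field names and issue categories below are assumptions for illustration; the essential move is that the dialog refuses to submit until the user has pointed at a specific passage and said what they expected instead.

```typescript
// Hypothetical issue taxonomy for the "Report an Issue" dialog.
type IssueKind = "factual_error" | "outdated" | "incomplete" | "off_topic";

interface CorrectionReport {
  messageId: string;
  kind: IssueKind;
  quotedSpan: string; // the specific passage the user is disputing
  suggestion: string; // what the user believes the answer should say
}

// Return the prompts still blocking submission; an empty array means the
// report is specific enough to send. This is what turns a passive flag
// into an active correction.
function validateReport(r: CorrectionReport): string[] {
  const problems: string[] = [];
  if (r.quotedSpan.trim() === "") problems.push("Select the passage that seems wrong.");
  if (r.suggestion.trim() === "") problems.push("Describe the correction you expected.");
  return problems;
}
```

Requiring both fields is the "deliberate moment": a one-click flag costs nothing and teaches the system nothing, while a validated report carries enough signal to act on.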


4

"The UX improved. The trust problem got worse. Users learned to prompt — and in doing so, learned to stop questioning the answer."