AI Action Request

Schedule team meeting

MEDIUM RISK
Principle 1: Verification before action
This badge shows SPECIFIC risk factors (8 attendees > 5 threshold) instead of generic "medium risk". The hover tooltip breaks down exactly why this needs approval: attendee count, duration impact, and room booking. This specificity helps users understand the verification requirement is justified, not arbitrary bureaucracy.
85% confident · Only 1 of 3 options works
Principle 5: Explainability
This indicator surfaces the KEY CONSTRAINT immediately: "only 1 of 3 options works". Instead of burying this critical context in a dropdown or tooltip, it's visible at first glance. The green styling signals high confidence, but the constraint info helps users assess whether AI truly understood their scheduling needs or just picked the only technically possible slot.
Why this action
Principle 5: Explainability
This section traces the decision back to its ORIGINAL TRIGGER: "You requested Q4 planning in Slack" → AI's recommendation. This explicit connection proves AI understood your intent correctly. If AI had scheduled the wrong type of meeting, you'd catch it here immediately. This early error detection prevents wasted approvals on fundamentally wrong actions.
You requested Q4 planning in Slack. AI found one time slot where all 8 teammates are available: Thu Dec 5, 2-3pm in Conference B.
AI recommendation ready for review
Countdown: 10:00
Review window: 10 minutes to approve, modify, or cancel
If time expires: cancel (safe default) or auto-schedule
Principle 1: Verification before action
The countdown timer serves as URGENCY AWARENESS (time matters) without creating PRESSURE (rushed decisions). Clear messaging: "10 minutes to review" states the constraint honestly. User controls what happens when time expires: cancel (safe default) or auto-schedule (convenience). This respects both urgency and agency - users aren't tricked into hasty approvals.
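As a concrete sketch of this pattern: the countdown could carry its expiry outcome as user-settable state, defaulting to the safe option. The names below are illustrative, not the prototype's implementation.

```typescript
// Hypothetical sketch: a review countdown whose expiry behavior the user picks.
type ExpiryAction = "cancel" | "auto_schedule";

class ReviewTimer {
  private remainingMs: number;
  private handle?: ReturnType<typeof setInterval>;

  constructor(
    windowMs: number,
    private onTick: (remainingMs: number) => void,
    private onExpire: (action: ExpiryAction) => void,
    // Safe default: expiry cancels rather than silently executing.
    private expiryAction: ExpiryAction = "cancel",
  ) {
    this.remainingMs = windowMs;
  }

  start(): void {
    this.handle = setInterval(() => {
      this.remainingMs -= 1000;
      this.onTick(this.remainingMs);
      if (this.remainingMs <= 0) {
        this.stop();
        this.onExpire(this.expiryAction);
      }
    }, 1000);
  }

  // The reviewer can flip the expiry outcome at any point during the window.
  setExpiryAction(action: ExpiryAction): void {
    this.expiryAction = action;
  }

  stop(): void {
    if (this.handle !== undefined) clearInterval(this.handle);
  }
}

// Usage: a 10-minute window that cancels on expiry unless the user opts in.
// new ReviewTimer(10 * 60 * 1000, renderCountdown, applyExpiry).start();
```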
Principle 4: Authority levels
Authority levels implemented through RISK-BASED WORKFLOWS: low-risk auto-approves (under 5 people), medium-risk requires review (this meeting), high-risk might need multiple approvers (board-level decisions). The timer + expiration options show the medium-risk path - visible review step with reasonable time window, but not indefinite delay. The system scales oversight to actual impact.
Meeting Details
Title: Q4 Planning Review
Time: Thu, Dec 5, 2-3pm
Attendees: 8 team members
Location: Conference B
Your Decision
Principle 1: Verification before action
Three EXPLICIT ACTION BUTTONS force an intentional choice. No default selection, no "continue" button that could be auto-clicked. Each action has clear consequences (Approve schedules, Modify opens editor, Cancel stops everything). This makes "doing nothing" as valid as approving - preventing accidental confirmation through UI dark patterns where clicking anywhere advances the flow.
Changes can be undone within 24 hours
Principle 2: Undo mechanisms
Undo window shown BEFORE COMMITMENT reduces decision anxiety. Users approve confidently knowing mistakes aren't permanent. The info icon makes this scannable so users spot it even when rushing. This pre-commitment disclosure prevents the panic of "I can't undo this!" after clicking, which often causes users to avoid AI systems entirely.
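A sketch of the machinery behind that promise, assuming a hypothetical action record that carries its own reversal plan, which is what lets the UI state the undo window before approval:

```typescript
// Hypothetical sketch: every executed action carries its own reversal plan,
// so the undo window can be disclosed before the user commits.
interface ReversalStep {
  description: string; // e.g. "Retract 8 calendar invites"
  execute: () => Promise<void>;
}

interface ExecutedAction {
  id: string;
  executedAt: Date;
  undoWindowMs: number; // 24 hours in the prototype
  reversalSteps: ReversalStep[];
}

function canUndo(action: ExecutedAction, now: Date = new Date()): boolean {
  return now.getTime() - action.executedAt.getTime() < action.undoWindowMs;
}

async function undo(action: ExecutedAction): Promise<void> {
  if (!canUndo(action)) throw new Error("Undo window expired");
  // Reverse in the opposite order of execution: Slack post, agenda,
  // room booking, then invites.
  for (const step of [...action.reversalSteps].reverse()) {
    await step.execute();
  }
}
```

Listing `reversalSteps` by description is also what lets an Undo button spell out exactly which invites, bookings, and notifications reversal touches.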

Meeting Scheduled Successfully

  1. Calendar invites sent to 8 team members
  2. Conference B locked for Dec 5, 2-3pm
  3. Agenda template created
  4. Slack #planning notified
Modify Meeting Details
AI Decision Process
Principle 5: Explainability
COMPREHENSIVE REASONING CHAIN from interpretation → analysis → decision → trigger. Each step shows AI's logic with data sources, prompt engineering visibility, and audit trail. Makes AI reasoning AUDITABLE - you can point to the exact step where logic failed if the recommendation is wrong. This turns a black box into a debuggable process with full transparency across all decision factors.
Step 1: You said
"Get everyone together for Q4 planning"
Source: user message in Slack #planning
Step 2: I interpreted
Schedule meeting with 8 team members from Slack
Data sources:
  • Slack (team roster)
  • 8 Calendars (availability)
  • Rooms (booking system)
Step 3: I analyzed
Scanned 8 calendars and found only 1 time slot with full availability
Triggered by: "Prefer earliest available slot"
Step 4: I decided
Recommend Dec 5, 2-3pm in Conference B
Based on: "Check all integrated calendars"
Step 5: I triggered
Approval required: 5+ attendees rule
From prompt: "If 5+ attendees → flag for approval"
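A hypothetical sketch of the data such a panel might render: each step pairs the AI's claim with the source or prompt rule it rests on, which is what makes the chain auditable (field names are illustrative):

```typescript
// Hypothetical sketch: each reasoning step records what the AI concluded
// and which data source or prompt rule that conclusion rests on.
interface ReasoningStep {
  step: number;
  label: "said" | "interpreted" | "analyzed" | "decided" | "triggered";
  claim: string;
  source: string; // the evidence the claim points back to
}

const chain: ReasoningStep[] = [
  { step: 1, label: "said", claim: "Get everyone together for Q4 planning",
    source: "User message in Slack #planning" },
  { step: 2, label: "interpreted", claim: "Schedule meeting with 8 team members",
    source: "Slack roster, 8 calendars, room booking system" },
  { step: 3, label: "analyzed", claim: "Only 1 time slot with full availability",
    source: 'Prompt rule: "Prefer earliest available slot"' },
  { step: 4, label: "decided", claim: "Recommend Dec 5, 2-3pm in Conference B",
    source: 'Prompt rule: "Check all integrated calendars"' },
  { step: 5, label: "triggered", claim: "Approval required: 5+ attendees rule",
    source: 'Prompt rule: "If 5+ attendees, flag for approval"' },
];
```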
System Audit Trail
AI Recommendation
Analyzed 8 calendar schedules and Slack #planning
Nov 12, 2025 at 2:15 PM
Availability Check
Found 3 candidate time slots; only 1 with full availability
Nov 12, 2025 at 2:14 PM
Approval Required
5+ attendees — requires approval per policy
Nov 12, 2025 at 2:15 PM
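One plausible shape for the trail is an append-only, timestamped event log; the sketch below uses illustrative names rather than the prototype's actual schema:

```typescript
// Hypothetical sketch: the audit trail as an append-only, timestamped log.
type AuditEventKind = "availability_check" | "recommendation" | "approval_required";

interface AuditEvent {
  kind: AuditEventKind;
  summary: string;
  at: Date;
}

const trail: AuditEvent[] = [
  { kind: "availability_check",
    summary: "Found 3 candidate time slots; only 1 with full availability",
    at: new Date("2025-11-12T14:14:00") },
  { kind: "recommendation",
    summary: "Analyzed 8 calendar schedules and Slack #planning",
    at: new Date("2025-11-12T14:15:00") },
  { kind: "approval_required",
    summary: "5+ attendees requires approval per policy",
    at: new Date("2025-11-12T14:15:00") },
];
```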
Permission Rules
  • Auto-schedule meetings under 30 minutes (auto-execute)
  • Auto-book conference rooms (auto-execute)
  • Require approval for 5+ attendees (approval gate)
  • Never schedule outside 9am-5pm (hard constraint)
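These rules read naturally as user-editable, declarative config the agent evaluates before acting. A minimal sketch, assuming hypothetical names and a precedence where hard constraints outrank approval gates:

```typescript
// Hypothetical sketch: the permission rules panel as declarative config.
interface MeetingFacts {
  attendees: number;
  durationMin: number;
  startHour: number; // 24h clock
}

type RuleEffect = "auto_execute" | "require_approval" | "block";

interface PermissionRule {
  description: string;
  effect: RuleEffect;
  applies: (m: MeetingFacts) => boolean;
}

const rules: PermissionRule[] = [
  { description: "Auto-schedule meetings under 30 minutes",
    effect: "auto_execute", applies: m => m.durationMin < 30 },
  { description: "Require approval for 5+ attendees",
    effect: "require_approval", applies: m => m.attendees >= 5 },
  { description: "Never schedule outside 9am-5pm",
    effect: "block", applies: m => m.startHour < 9 || m.startHour >= 17 },
];

// Hard constraints outrank approval gates, which outrank auto-execution.
function evaluate(m: MeetingFacts): RuleEffect {
  for (const effect of ["block", "require_approval"] as const) {
    if (rules.some(r => r.effect === effect && r.applies(m))) return effect;
  }
  return "auto_execute";
}

// This meeting: 8 attendees, 60 min, 2pm start -> "require_approval".
```

Crucially, the panel also highlights which rule fired for the current action (the 5+ attendee gate here), so users see the config and its consequence together.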
You're in control — AI assists, you decide
Principle 3: Blame attribution
LANGUAGE CHOICE MATTERS: "You're in control" (empowering) vs "You're accountable" (defensive). This framing positions AI as assistant, not adversary. The person icon reinforces human agency. Prevents blame-shifting, where users deflect their own decisions onto AI ("AI scheduled it!"), while maintaining clear responsibility. This positive framing increases willingness to engage with AI systems.
Principle 3: Blame attribution
Provides channel for REPORTING AI ERRORS without undermining user responsibility. Tooltip clarifies three-part model: YOU approved (accountability), AI suggested (input quality), report helps improve (contribution to system). This maintains human-in-the-loop while enabling AI improvement through feedback. Users can flag bad recommendations without deflecting their approval responsibility.
How this works:
  • You approved the final decision
  • AI suggested based on data
  • Report helps improve AI
Autonomous AI Agent — TL;DR — Ken Hung
Prototype 03 · Agentic AI · Autonomous Action · Designing for Different AI Archetypes
Autonomous AI Agent
Autonomous Agent · Orchestrated Workflow

Almost every agent product ships an approval step. Almost none have designed it.

1

Users are approving things they aren't actually reviewing.

When AI stops responding and starts executing, the stakes change entirely. A mistaken chat response costs seconds to correct. A mistaken agentic action can send 200 calendar invites, charge a corporate card, or cancel a vendor contract. The approval step exists to prevent that — but it's caught in a paradox.

Make the approval step too prominent and users tune it out, rubber-stamping every action without reading it. That's worse than no oversight at all — it creates the illusion of control without the reality of it. Make it too light and agents act without real sanction. The interface must make oversight feel like genuine control, not bureaucratic friction.


2

Five principles of Autonomous AI Agent UX

01
Verification Before Action
The AI must show its intent before executing. A review banner, risk badge, and explicit action buttons force a genuine choice — not a default click-through. The countdown timer creates urgency awareness without manufactured pressure, and users control what happens when time runs out. No default selection, no auto-advance.
02
Undo Mechanisms
The undo window should be disclosed before commitment, not after. Users who know they can reverse a decision approve more confidently and more quickly. The undo button must be prominent and explain exactly what reversal means: which files, which invites, which charges. Specificity about consequences builds trust in the safety net.
03
Blame Attribution
Language matters more than most designers realize. "You're in control — AI assists, you decide" is empowering. "You're accountable" is defensive. The interface must maintain a clear three-part model: human approved, AI suggested, reporting improves the system. This prevents both blame-shifting and learned helplessness.
04
Authority Levels
Not every action needs the same oversight. Auto-execute, review required, and escalate tiers create a graduated system where oversight scales proportionally with risk. The tiers should be user-defined and visible — not opaque defaults set by the vendor. When the AI explains why it can't auto-execute, that explanation builds trust in the whole system.
05
Explainability
The user needs to understand why this action, based on what data, with what confidence. A reasoning panel that traces the recommendation back to its trigger and documents alternatives considered transforms approval from a reflex into an informed decision. "85% confident · only 1 of 3 options works" says more than "85%" alone.

3

Not every action needs your approval — but some do

The prototype uses a calendar scheduling context to demonstrate a three-tier authority model. The key UX insight is that the tier is determined by rule, not case-by-case judgment — users define the thresholds once, the system applies them consistently.

Agent Authority Spectrum
This action sits at medium — agent explains its recommendation, surfaces the constraint, and asks for a decision before acting.
Auto-execute · Review required · Multi-approver
Agency Level | Trigger | UX Pattern
Silent auto | Under 5 attendees, known context, established preference | Agent acts and confirms in thread. No interruption to flow.
Conversational check-in | 5+ attendees, new context, constrained options | Review banner with countdown timer, three explicit action buttons. Chat carries reasoning; buttons carry the decision.
Explicit escalation | Sensitive stakeholders, conflicting signals, irreversible actions | Agent explains full reasoning, names gaps in its knowledge, requests deliberate go-ahead with documented justification.
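Because the tier is computed by rule, the routing itself can be a small pure function applied identically to every action; a sketch under that assumption, with illustrative field names:

```typescript
// Hypothetical sketch: authority tier determined by rule, not case-by-case.
type Tier = "silent_auto" | "conversational_checkin" | "explicit_escalation";

interface ActionContext {
  attendees: number;
  knownContext: boolean; // established preference, familiar setting
  irreversible: boolean;
  sensitiveStakeholders: boolean;
  conflictingSignals: boolean;
}

function authorityTier(ctx: ActionContext): Tier {
  // Check the highest tier first: anything irreversible, sensitive,
  // or contradictory escalates regardless of size.
  if (ctx.irreversible || ctx.sensitiveStakeholders || ctx.conflictingSignals) {
    return "explicit_escalation";
  }
  // User-defined threshold, applied consistently (5+ attendees here).
  if (ctx.attendees >= 5 || !ctx.knownContext) {
    return "conversational_checkin";
  }
  return "silent_auto";
}
```

The Q4 planning meeting (8 attendees, reversible, no sensitive stakeholders) lands in the middle tier, which is exactly the review banner the prototype shows.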

The prototype also includes a live countdown timer with user-controlled expiry behavior, a prominent Undo button that lists exactly what reversal entails, a reasoning chain panel with full source attribution, a configurable permission rules section that shows which rule triggered this review, and an AI chat sidebar that demonstrates all five principles in a conversational register simultaneously.


4

"The interface is no longer a window — it's a co-pilot with its hand on the controls. The designer's job is to make that terrifying capability feel like a trusted colleague, not a runaway process."