OptiOps AI Platform - Auto-Remediation Demo

IT Operations Auto-Remediation

✓ Action Completed Successfully
• Background job service restarted
• CPU normalized to 23% (was 94%)
• Server DB-07 healthy
• Completed at 2:47 PM by Ken Hung
Workflow nodes:
• Monitor: check 47 servers (0.3s)
• Detect: pattern matching (1.0s)
• Analyze: root cause analysis (1.2s)
• Approval Gate: human review
• Remediate: execute solution (2.1s)
• Notify: send alerts (0.4s)


Approval Policy Configuration

Auto-Execute (no approval needed)
✓ Single server, confidence >85%
✓ Non-production environments
✓ Low-risk actions (restart monitoring)
Require Approval (you must review)
⚠️ Multiple servers affected
⚠️ Confidence <85%
⚠️ Production database servers
Always Escalate (senior engineer required)
🚨 Actions affecting >10 servers
🚨 Downtime >5 minutes
🚨 Customer-facing services

Progressive Autonomy Control
Slider: Manual ↔ Fully Autonomous
Current position preview:
• 73% of actions would auto-execute
• 22% would require your approval
• 5% would escalate to a senior engineer

[Per-node configuration panels load on node selection. Monitor Configuration: monitored resources, check interval, alert thresholds. Detection Configuration: detection patterns, machine learning models, confidence scoring. Analysis Configuration: analysis method, knowledge sources, impact assessment. Action Configuration: available actions, safety measures, rollback policy. Notification Configuration: notification channels, alert priorities, message template.]
The 6-step workflow orchestration (Monitor → Detect → Analyze → Approve → Act → Notify) stays consistent across all 4 use cases.

Use the dropdown to explore how the same AI governance principles apply universally to IT Operations, Customer Support, Marketing Budget, and Content Moderation.

Click any node to see its configuration.
Orchestrated AI Agent — TL;DR — Ken Hung
Prototype 04 · Enterprise Workflows · Multi-Domain · Designing for Different AI Archetypes
Orchestrated AI Agent
Autonomous Agent · Orchestrated Workflow

The governance features got built. The operator experience didn't.

1

Operators are responsible for systems they can't see or control.

Enterprise AI agents run continuous workflows across entire organizations — monitoring systems, routing tickets, reallocating budgets, moderating content — often executing dozens of actions per hour without direct human involvement. Every major platform now ships governance capabilities: dashboards, audit trails, compliance consoles.

But governance tooling is not the same as governance experience. A non-technical operator responsible for a system taking hundreds of autonomous actions on their behalf doesn't need more dashboards — they need an interface that makes them feel genuinely in control. The features got built. The operator experience of actually feeling in control didn't.


2

Five principles of Orchestrated AI Agent UX

01
Pipeline Visibility
When a system is executing dozens of actions across multiple domains, operators need to understand its state without reading a log. A visual workflow canvas communicates system status spatially — green nodes completed, amber nodes waiting, the Approval Gate pulsing when human input is required. An operator who hasn't checked in for an hour should understand the full system state in under three seconds.
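The "under three seconds" claim implies the canvas must roll individual node states up into a single glanceable read. A minimal sketch of that roll-up, with illustrative state names and a hypothetical `canvas_summary` helper (none of these names come from the prototype):

```python
from enum import Enum

class NodeState(Enum):
    COMPLETED = "green"       # step finished successfully
    WAITING = "amber"         # step queued or awaiting an upstream result
    NEEDS_HUMAN = "pulsing"   # the Approval Gate is requesting operator input

def canvas_summary(nodes: dict) -> str:
    """One-glance state: surface anything needing human attention first."""
    pending = [name for name, s in nodes.items() if s is NodeState.NEEDS_HUMAN]
    if pending:
        return "ACTION NEEDED: " + ", ".join(pending)
    done = sum(1 for s in nodes.values() if s is NodeState.COMPLETED)
    return f"{done}/{len(nodes)} steps complete"
```

A run paused at the gate would summarize as `ACTION NEEDED: Approval Gate`; a run with no pending human input collapses to a simple completion count.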
02
Human Placement
In most enterprise AI systems, the approval step is an interruption — a notification or modal that appears outside the workflow. The better model embeds the human decision point structurally inside the workflow as a literal node in the sequence. Human judgment isn't an override of the process — it's a step that was always part of it. That distinction changes the operator's mental model entirely.
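The node-not-interruption distinction can be made concrete: the gate is an ordinary step in the sequence, so the workflow simply cannot advance past it without a recorded decision. A minimal sketch assuming a simple step-chain runner; `Step`, `approval_gate`, and `run_workflow` are illustrative names, not the prototype's implementation:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    run: Callable[[dict], dict]

def approval_gate(ctx: dict) -> dict:
    # Human judgment is a structural step: execution pauses here,
    # it is not overridden from outside the sequence.
    if not ctx.get("approved"):
        raise RuntimeError("paused at Approval Gate: awaiting operator decision")
    return ctx

workflow = [
    Step("Analyze", lambda ctx: {**ctx, "diagnosis": "runaway background job"}),
    Step("Approval Gate", approval_gate),
    Step("Remediate", lambda ctx: {**ctx, "action": "restart service"}),
]

def run_workflow(steps, ctx):
    for step in steps:
        ctx = step.run(ctx)   # a pause or rejection stops the chain here
    return ctx
```

Because Remediate sits after the gate in the same list, no code path reaches it without the gate running first, which is exactly the mental-model shift the principle describes.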
03
Configurable Autonomy
A binary on/off switch for AI autonomy is not governance — it's avoidance. A three-tier approval policy (Auto-Execute, Require Approval, Always Escalate) gives operators a graduated spectrum, with every threshold defined by them and visible in the interface. A progressive autonomy slider makes calibration tangible: operators see the projected operational impact before committing.
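The three-tier policy and the slider preview can be sketched together: classify each action into a tier, then replay recent history against the policy to project the split before committing. Thresholds mirror the policy described in the demo (auto-execute above 85% confidence on a single non-production server; escalate above 10 servers or for customer-facing services); the action-record field names are hypothetical:

```python
def tier(action: dict) -> str:
    """Classify one action under the three-tier approval policy."""
    if action["servers"] > 10 or action["customer_facing"]:
        return "escalate"                     # senior engineer required
    if (action["confidence"] > 0.85
            and action["servers"] == 1
            and not action["production_db"]):
        return "auto"                         # executes without approval
    return "approve"                          # operator must review

def projected_split(history: list) -> dict:
    """Replay recent actions to preview the slider's operational impact."""
    counts = {"auto": 0, "approve": 0, "escalate": 0}
    for action in history:
        counts[tier(action)] += 1
    return {k: round(100 * v / len(history)) for k, v in counts.items()}
```

Replaying a recent action log through `projected_split` is what would let the interface show a preview like "73% auto-execute / 22% approval / 5% escalate" before any policy change takes effect.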
04
Audit Transparency
Transparency at enterprise scale must serve two audiences: non-technical operators who need ambient state awareness, and engineers who need step-by-step log detail. A visual canvas serves the first. A live execution trace with timestamps and source attribution serves the second. The approval modal adds a third layer — a confidence formula broken down line by line, with a full reasoning chain available on demand.
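A line-by-line confidence breakdown might render like the following sketch: each factor's score, weight, and contribution on its own line, summed into the overall number. The factor names and weights here are assumptions for illustration, not the prototype's actual scoring model:

```python
# Hypothetical factors and weights; weights sum to 1.0.
FACTORS = [
    ("pattern match vs. known incidents", 0.40),
    ("root-cause certainty",              0.35),
    ("historical fix success rate",       0.25),
]

def confidence_breakdown(scores: dict) -> str:
    """Render the confidence score as an itemized, auditable breakdown."""
    lines, total = [], 0.0
    for name, weight in FACTORS:
        part = weight * scores[name]
        total += part
        lines.append(f"  {name}: {scores[name]:.2f} x {weight:.2f} = {part:.3f}")
    lines.append(f"  overall confidence = {total:.2f}")
    return "\n".join(lines)
```

The point of the itemized form is that an operator can see which factor dragged the score below a gate threshold, rather than confronting a single opaque percentage.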
05
Framework Portability
Enterprise organizations run dozens of AI workflows across IT, support, marketing, and compliance — each with different risk profiles and approval triggers. The governance UX must be consistent enough for operators to transfer skills across domains, but flexible enough to reflect each domain's actual risk profile. When the structural UX stays constant and only the content layer changes, operators who learn one domain can immediately operate the next.

3

One framework, four organizational contexts

The prototype ships with four ready-made use cases. The same six-step workflow — Monitor → Detect → Analyze → Gate → Act → Notify — handles every domain. The governance UX stays constant. Only the content layer changes per domain.

IT Operations Auto-Remediation
Monitors 47 servers. Triggers gate when: confidence <85% or production DB affected.
Medium Risk · 5 min review window

Customer Support Ticket Routing
Reads email, chat, and in-app tickets. Triggers gate when: legal language or enterprise account ($50K+).
High Risk · Immediate review

Marketing Budget Auto-Allocation
Tracks campaigns across platforms. Triggers gate when: budget move >$10K or confidence <80%.
Medium Risk · 24h review window

Content Moderation Auto-Review
Scans posts and flagged messages. Triggers gate when: confidence <70% or clean user history.
Medium Risk · 4h review window
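The "content layer" swap reduces to a table of per-domain gate predicates over a constant six-step structure. A minimal sketch using the thresholds from the four use cases above; the action-record field names are illustrative assumptions:

```python
# One gate node, four swappable trigger predicates. Thresholds come from
# the use-case table; field names are hypothetical.
GATE_TRIGGERS = {
    "it_ops":     lambda a: a["confidence"] < 0.85 or a["production_db"],
    "support":    lambda a: a["legal_language"] or a["account_value"] >= 50_000,
    "marketing":  lambda a: a["budget_move"] > 10_000 or a["confidence"] < 0.80,
    "moderation": lambda a: a["confidence"] < 0.70 or a["clean_history"],
}

def needs_approval(domain: str, action: dict) -> bool:
    """Same gate node in every workflow; only the predicate is swapped."""
    return bool(GATE_TRIGGERS[domain](action))
```

Because only the predicate table changes between domains, an operator who has learned to read the gate in IT Operations reads it the same way in Customer Support, which is the portability claim in concrete form.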

The prototype also includes a visual workflow canvas with drag-and-drop nodes, animated execution state, and color-coded node status; a live execution trace panel running in parallel with the canvas; an approval modal with a line-by-line confidence formula and expandable reasoning chain; and a three-tier approval policy settings panel with a progressive autonomy slider showing projected split percentages before any policy is committed.


4

"At enterprise scale, agentic AI governance isn't a modal dialog — it's an operating model. The UX must make policy visible, adjustable, and auditable by the people responsible for the outcomes."