AI Behavior Design

For decades, UX and Product Design have been about interfaces. We designed screens, flows, buttons, hierarchies, and novel interactive experiences—helping people navigate software to reach a goal already formed in their heads.

AI changes the premise entirely.

Today, users don't navigate systems. They state intent. "Make me a video." "Help me think through this idea." "Turn this into something better." And the system responds—probabilistically, creatively, sometimes unpredictably.

The new design surface isn't UI. It's behavior.

With AI systems, the "interface" is often just language. What actually matters is how the system interprets intent, handles ambiguity, balances creativity against reliability, maintains tone and continuity, fails and recovers, and builds trust over time rather than in a single interaction.

This is AI Behavior Design: the practice of intentionally shaping how an AI system behaves in relationship to humans. Not what it looks like—but how it responds, adapts, refuses, assists, surprises, and evolves.

Many traditional UX principles still apply, but transformed: human-in-the-loop becomes a question of when to surface control; progressive disclosure becomes teaching users what the system can do; error handling becomes trust repair.

Here's what that looks like in practice: A user asks for a video and the result misses the mark. The design question isn't "what button do they click next?" It's "how does the system know when to ask clarifying questions versus just iterate? How does it signal confidence? How does it learn from this interaction without requiring the user to teach it explicitly?" These aren't interface problems. They're behavioral ones.

In this world, designers aren't crafting screens. They're designing interpretive layers, defining behavioral constraints, teaching systems what "good" looks like, and calibrating expectations between humans and machines. The output isn't a mockup. It's trust, quality, and coherence over time.

This is why traditional roles are breaking down.

AI can already generate visuals, write code, prototype products, and produce endless variations. What it can't do on its own: decide what behavior feels right, understand cultural and emotional nuance without guidance, maintain a coherent identity across interactions, or know when not to act.

The role of designers isn't disappearing—it's being repositioned. From makers of artifacts to shapers of intelligent systems that make artifacts.

But I want to be clear about scope: this applies to designers already working at the systems level. For artists whose creativity lives in the craft itself—animators, illustrators, 3D artists, motion designers, concept artists, brand designers, and countless other creative roles—the disruption is more direct and the path forward is less clear. Not everyone gets repositioned. Some roles simply compress.

The future designer is part director, part editor, part systems thinker.

AI Behavior Design sits at the intersection of product design, creative direction, psychology, systems thinking, AI literacy—and critically, ethics. It's less about pixel perfection and more about intent translation and outcome quality.

Traditional product design balanced user value against business outcomes. AI Behavior Design adds a third axis: responsible behavior. This means asking harder questions: What should the system refuse to do, even when users want it and the business would benefit? How do we ensure fairness across cultures, contexts, and populations? Where's the line between personalization and manipulation? How do we design for transparency when the system itself can't fully explain its outputs? And when things go wrong at scale—who's accountable?

Naming the discipline matters.

Every new field starts fuzzy. Naming it makes the work legible—for hiring, collaboration, accountability, and ultimately, better human–AI relationships.

Interfaces taught people how to use software. Behavior will determine whether we trust intelligence.

And that's a design problem.

Amor Fati: Stress-Testing an AI Video System by Breaking It on Purpose

A surreal short generated entirely through Google Flow, using one photo—my LinkedIn profile picture—as the only identity reference across wildly different aesthetics (cyberpunk, classical meets sci-fi, horror, video game).

The project probes system limits: identity drift, anatomical inconsistency, interpolation breakdowns, export artifacts, and UX friction at scale. Rather than optimizing for output, the work surfaces design insights for professional AI video workflows.

The result is both a personal narrative and a product-level case study in how AI video evolves from clip generator to creative infrastructure.