Showing confidence badly makes outcomes worse, not better.
Experts are being misled by the interfaces meant to help them.
Domain experts, including clinicians, lawyers, and financial analysts, use AI to augment decisions that carry real consequences. The instinct is to show a confidence score and call it transparency. Research says otherwise: displaying confidence badly causes expert accuracy to drop, not rise. A radiology study found accuracy fell from 82% to 46% when the AI's confidence was displayed in a misleading way.
The problem isn't the AI model — it's the interface. Experts who trust a high-confidence wrong answer more than a low-confidence right one aren't making bad decisions. They're responding rationally to a badly designed signal. In specialized AI, UX is not decoration — it is a safety layer.
Five principles of Domain Expert AI UX
Confidence shapes the entire interface — not just one element
The prototype uses a medical diagnosis context to show how the same interface must respond differently at three confidence levels. The UX adapts: evidence framing, alert severity, action emphasis, and the reasoning chain all shift with certainty.
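To make the pattern concrete, here is a minimal TypeScript sketch of tiered adaptation. The 0.85 and 0.60 thresholds, the tier names, and the `uiConfigFor` helper are assumptions for illustration, not values or components taken from the prototype.

```typescript
// Illustrative confidence tiers. The 0.85 / 0.60 cutoffs are assumptions
// for this sketch, not values from the prototype.
type ConfidenceTier = "high" | "moderate" | "low";

interface UiConfig {
  alertSeverity: "info" | "caution" | "warning"; // how loudly the interface flags uncertainty
  evidenceFraming: string;                       // copy that frames the evidence list
  primaryAction: string;                         // which action receives visual emphasis
  reasoningExpanded: boolean;                    // whether the reasoning chain opens by default
}

function tierFor(confidence: number): ConfidenceTier {
  if (confidence >= 0.85) return "high";
  if (confidence >= 0.6) return "moderate";
  return "low";
}

// One confidence score drives every adaptive element at once, so no single
// widget (such as a lone percentage badge) carries the whole signal.
const TIER_CONFIG: Record<ConfidenceTier, UiConfig> = {
  high: {
    alertSeverity: "info",
    evidenceFraming: "Findings strongly consistent with:",
    primaryAction: "Accept and document",
    reasoningExpanded: false,
  },
  moderate: {
    alertSeverity: "caution",
    evidenceFraming: "Findings partially consistent with:",
    primaryAction: "Review evidence",
    reasoningExpanded: true,
  },
  low: {
    alertSeverity: "warning",
    evidenceFraming: "Evidence does not clearly distinguish between:",
    primaryAction: "Order additional tests",
    reasoningExpanded: true,
  },
};

function uiConfigFor(confidence: number): UiConfig {
  return TIER_CONFIG[tierFor(confidence)];
}
```

The design point is that a single confidence value fans out into every adaptive element, rather than being rendered once as a percentage and left for the expert to interpret.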
Beyond the three scenarios, the prototype includes a sticky patient panel that keeps context always visible, an expandable reasoning chain that shows each diagnostic step with its evidence weight, a data lineage section with model version and audit ID, and a Review & Modify panel that requires documented reasoning before an override is accepted. The result is an audit trail that doesn't slow the expert down.
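Below is a sketch of the data contract such an override flow might enforce. `OverrideRecord`, `submitOverride`, and the minimum-rationale rule are hypothetical names and policies, not the prototype's API; the audit ID and model version are imagined as flowing in from the data lineage section described above.

```typescript
// Hypothetical audit record for an expert override; all field names are illustrative.
interface OverrideRecord {
  auditId: string;       // surfaced in the data lineage section
  modelVersion: string;  // likewise
  aiSuggestion: string;
  expertDecision: string;
  rationale: string;     // the documented reasoning the Review & Modify panel requires
  timestamp: string;     // ISO 8601
}

// Refuse overrides that lack substantive reasoning; accept everything else
// immediately, so the audit trail never becomes a speed bump.
function submitOverride(draft: Omit<OverrideRecord, "timestamp">): OverrideRecord {
  if (draft.rationale.trim().length < 20) {
    throw new Error("An override requires documented reasoning before it is accepted.");
  }
  return { ...draft, timestamp: new Date().toISOString() };
}

// Example: an expert disagrees with the AI and records why.
const record = submitOverride({
  auditId: "audit-0042",
  modelVersion: "dx-model-3.1",
  aiSuggestion: "Community-acquired pneumonia",
  expertDecision: "Pulmonary embolism, pending CT angiogram",
  rationale: "Pleuritic chest pain and elevated D-dimer outweigh the imaging pattern.",
});
```

The single validation gate keeps the accept path instant while still guaranteeing that every disagreement with the model is captured with its reasoning attached.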
"In specialized AI, UX is not decoration — it is a safety layer."