Design an AI product disclosure pattern for users
AI & Automation
Updated 4/17/2026
Description
Your AI feature generates output, and users aren't sure whether to trust it. This prompt designs a disclosure pattern: a signal that AI is involved, confidence indicators, source links, and override paths, so users calibrate trust appropriately and neither over- nor under-rely on the output.
Example Usage
You are designing AI disclosure for {{ai_feature}}.
## Step 1 — Disclosure mechanisms
- Label output as AI-generated (visual + text)
- Confidence indicator (explicit high/med/low or numeric)
- Source citation (where the output came from)
- Refusal surface (what the AI won't do and why)
- Override affordance (let the user edit or reject)
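The five mechanisms above can be sketched as a single typed spec. This is a minimal illustration, not a real library API; every name here (`DisclosurePattern`, `ConfidenceLevel`, the field names) is a hypothetical choice for this sketch.

```typescript
// Hypothetical spec for the five disclosure mechanisms.
// All type and field names are illustrative assumptions.
type ConfidenceLevel = "high" | "medium" | "low";

interface DisclosurePattern {
  aiLabel: { text: string; icon: boolean };             // visual + text AI label
  confidence?: ConfidenceLevel | number;                // explicit level or numeric score
  citations: { claim: string; sourceUrl: string }[];    // where the output came from
  refusal?: { reason: string };                         // what the AI won't do and why
  override: { editable: boolean; rejectable: boolean }; // let the user edit or reject
}

// Example instance for a summarization feature:
const example: DisclosurePattern = {
  aiLabel: { text: "AI-generated", icon: true },
  confidence: "medium",
  citations: [{ claim: "Revenue grew 12%", sourceUrl: "https://example.com/report" }],
  override: { editable: true, rejectable: true },
};
```

Making `confidence` and `refusal` optional mirrors the placement rules below: the label and override are always present, while the other mechanisms appear conditionally.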
## Step 2 — Placement
| Mechanism | Where | When |
|-----------|-------|------|
| AI label | Every AI output | Always |
| Confidence | Before user acts on output | When model is uncertain |
| Citation | Inline in output | Whenever claims are made |
| Refusal | When user asks out-of-scope | On attempt |
| Override | On every output | Always available |
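The placement table can be expressed as a pure function that decides which mechanisms to render for a given output. This is a sketch under assumed context fields (`modelUncertain`, `containsClaims`, `outOfScope`); real triggers would come from your model and product telemetry.

```typescript
// Assumed context flags driving the conditional rows of the table.
interface OutputContext {
  modelUncertain: boolean; // triggers the confidence indicator
  containsClaims: boolean; // triggers inline citations
  outOfScope: boolean;     // triggers the refusal surface
}

function mechanismsToShow(ctx: OutputContext): string[] {
  const shown = ["ai-label", "override"]; // always shown, per the table
  if (ctx.modelUncertain) shown.push("confidence");
  if (ctx.containsClaims) shown.push("citation");
  if (ctx.outOfScope) shown.push("refusal");
  return shown;
}
```

Keeping this logic in one function makes the "Always" rows impossible to drop by accident and gives the audit in Step 3 a single place to check.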
## Step 3 — Anti-patterns to avoid
- Over-confident certainty language ("definitely," "always")
- Fake humility (confidence shown low when AI is actually reliable)
- Hidden AI (disguising AI output as human)
- Disclosure fatigue (so many warnings users ignore them)
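The first anti-pattern can be partially automated: a copy audit that flags certainty language in AI-generated text. A minimal sketch; the term list here is an assumption to extend with your own style guide, and a real audit would also cover the other three anti-patterns by review.

```typescript
// Assumed starter list of over-confident certainty terms to flag.
const CERTAINTY_TERMS = ["definitely", "always", "guaranteed"];

// Returns the certainty terms found in a piece of AI-generated copy.
function flagOverconfidence(text: string): string[] {
  const lower = text.toLowerCase();
  return CERTAINTY_TERMS.filter((term) => lower.includes(term));
}
```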
## Step 4 — Testing
Run 10 users through the flow:
- Did they notice the AI disclosure?
- Did they calibrate trust appropriately?
- Did they know how to override?
- Did the confidence signal match their experience?
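The four questions above map to a per-session yes/no record, which makes the test results easy to tally. A sketch with assumed field names; thresholds for "pass" are yours to set.

```typescript
// One record per test session, answering the four questions above.
interface SessionResult {
  noticedDisclosure: boolean;
  calibratedTrust: boolean;
  foundOverride: boolean;
  confidenceMatched: boolean;
}

// Fraction of sessions that passed a given question.
function passRate(results: SessionResult[], key: keyof SessionResult): number {
  const passes = results.filter((r) => r[key]).length;
  return passes / results.length;
}
```

With 10 sessions, a pass rate below, say, 0.7 on any question signals a disclosure mechanism worth redesigning before launch.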
## Output
1. Disclosure mechanism spec
2. Placement rules
3. 3 anti-patterns we'd specifically audit for
4. User testing plan