
Conduct an AI feature trust calibration audit

AI & Automation
Updated 4/17/2026

Description

Users either over-trust your AI (acting on outputs they shouldn't) or under-trust it (ignoring outputs that are correct). This prompt audits trust calibration using user testing, scenario probes, and behavior data, so you can surface where calibration is off and design interventions to fix it.

Example Usage

You are auditing user trust calibration for {{ai_feature}}.

## Step 1 — Behavior signal
- % of outputs users accept vs. edit vs. reject
- Time-to-action after AI output (quick accept = over-trust; long hesitation = under-trust)
- Complaint patterns (wrong outputs acted on, right outputs ignored)
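The behavior signals above can be sketched as a small summary script. The event log shape, field names, and the fast/slow thresholds are illustrative assumptions, not a real schema:

```python
from statistics import median

# Hypothetical event log: (user action, seconds from AI output to action).
# These records and field meanings are assumptions for illustration.
events = [
    ("accept", 1.2), ("accept", 0.8), ("edit", 14.0),
    ("reject", 30.5), ("accept", 2.0), ("edit", 9.3),
]

def behavior_signals(events, fast=3.0, slow=20.0):
    """Summarize accept/edit/reject shares and flag time-to-action signals.

    fast/slow cutoffs (seconds) are assumed thresholds: a very low median
    time-to-action may indicate over-trust, a very high one under-trust.
    """
    n = len(events)
    shares = {a: sum(1 for act, _ in events if act == a) / n
              for a in ("accept", "edit", "reject")}
    med = median(t for _, t in events)
    flag = ("possible over-trust" if med < fast
            else "possible under-trust" if med > slow
            else "no time-based flag")
    return shares, med, flag

shares, med, flag = behavior_signals(events)
print(shares, med, flag)
```

Complaint patterns still need qualitative review; this only covers the quantitative signals.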

## Step 2 — User testing (8-10 users)
### Scenario 1: High-confidence correct output
Did user accept?
### Scenario 2: Low-confidence correct output
Did user verify before accepting?
### Scenario 3: High-confidence wrong output
Did user catch the mistake?
### Scenario 4: Low-confidence wrong output
Did user appropriately reject?
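One way to tally the four scenario probes is to score each user against a calibrated-response key. The scenario keys and "calibrated" answers below are assumptions derived from the steps above, not a standard instrument:

```python
# Expected calibrated behavior per scenario (assumed mapping for this audit).
CALIBRATED = {
    "s1_high_conf_correct": "accept",
    "s2_low_conf_correct":  "verify_then_accept",
    "s3_high_conf_wrong":   "catch_mistake",
    "s4_low_conf_wrong":    "reject",
}

def score_user(responses):
    """Return, per scenario, whether the user's response was calibrated."""
    return {k: responses.get(k) == want for k, want in CALIBRATED.items()}

# Example participant: over-trusts in scenarios 2 and 3.
user = {
    "s1_high_conf_correct": "accept",
    "s2_low_conf_correct":  "accept",   # skipped verification
    "s3_high_conf_wrong":   "accept",   # missed the error
    "s4_low_conf_wrong":    "reject",
}
print(score_user(user))
```

Aggregating these scores across the 8-10 participants shows which scenarios drive miscalibration.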

## Step 3 — Calibration patterns
- Over-trust: users act on AI in scenario 3 without verification
- Under-trust: users reject AI in scenario 1 despite correctness
- Well-calibrated: users' acceptance rate matches AI's actual accuracy
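The "acceptance rate matches actual accuracy" check can be made concrete by comparing the two rates within a tolerance band. The ±10-point band here is an assumed threshold, not a standard:

```python
def diagnose(acceptance_rate, model_accuracy, tolerance=0.10):
    """Classify the calibration pattern from aggregate rates.

    acceptance_rate: fraction of AI outputs users accepted.
    model_accuracy:  measured fraction of AI outputs that were correct.
    tolerance:       assumed ±10-point band counted as well-calibrated.
    """
    gap = acceptance_rate - model_accuracy
    if gap > tolerance:
        return "over-trust"
    if gap < -tolerance:
        return "under-trust"
    return "well-calibrated"

# Users accept 95% of outputs, but the model is only 70% accurate.
print(diagnose(0.95, 0.70))  # → over-trust
```

Running this per segment (Step 4's output asks for the most miscalibrated segment) localizes where the gap is largest.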

## Step 4 — Interventions per pattern
- Over-trust: better confidence indicators, mandatory verification for high-stakes
- Under-trust: evidence surfacing, source citations, track record display
- Well-calibrated: maintain and monitor drift

## Output
1. Behavior signal summary
2. User testing findings across 4 scenarios
3. Calibration pattern diagnosis
4. Top 2 interventions for miscalibration
5. The one user segment most likely to be miscalibrated
