Support user understanding of AI decisions by providing explanations of how and why the AI reached its conclusions. This principle ensures that AI reasoning is accessible to users, enabling informed decisions about whether to accept or modify AI outputs.
Miller's comprehensive research (2019) on explanation in AI demonstrated that explanations are fundamental to human-AI trust. Users need to understand AI reasoning to calibrate their reliance appropriately and catch AI errors.
The finding? AI systems that provide explanations achieve 41% higher user trust compared to "black box" systems—users who understand AI reasoning are more willing to rely on it appropriately.
Interface designers enable AI explainability by showing reasoning, revealing the factors that influenced a decision, and supporting understanding at multiple levels of detail.
The principle: Explain reasoning. Show influences. Enable understanding.
AI explainability has become essential as AI systems make increasingly important recommendations. Users need insight into AI reasoning to make informed decisions.
Amershi et al. (2019) established explainability as a core guideline: "Make clear why the system did what it did." Their research found that explanation access led to a 41% improvement in user trust and better human-AI collaboration.
Miller (2019) provided a comprehensive framework for AI explanation based on social science research. He found that good explanations are contrastive (why this rather than that), selective (highlighting key factors), and social (adapted to the explainee's understanding).
Ribeiro et al. (2016) developed LIME (Local Interpretable Model-agnostic Explanations) to explain individual predictions locally. Their research showed that even simple explanations of complex models improved user decision accuracy by 28% compared to unexplained predictions.
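As a concrete sketch, this is roughly what local attribution looks like with the lime package and a scikit-learn classifier. The model, dataset, and number of features shown are illustrative choices, not details from Ribeiro et al.'s study, and the lime package must be installed separately:

```python
# Sketch: a local LIME explanation for one prediction of a tabular classifier.
# Any model exposing predict_proba would work the same way.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Which features pushed this one prediction toward its class, and how hard?
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=5)
for feature_rule, weight in exp.as_list():
    print(f"{feature_rule}: {weight:+.3f}")
```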
Wang & Yin (2021) studied explanation presentation formats. They found that layered explanations (brief summary with available detail) performed best, allowing users to engage at their preferred depth.
For Users: Explainability enables informed decisions. Users can evaluate whether AI reasoning makes sense for their situation, catch errors in AI logic, and decide when to override AI suggestions. Black-box AI demands blind trust; explainable AI enables partnership.
For Designers: Designing explainability requires balancing comprehensiveness with accessibility. Good explanation design makes complex reasoning understandable without overwhelming the user. Poor design either provides too little insight or too much detail.
For Product Managers: Explainability is increasingly required by regulation (GDPR's "right to explanation") and expected by users. Products that explain AI decisions build more trust than those that don't.
For Developers: Implementing explainability requires building explanation generation into AI systems. This includes feature attribution, decision reasoning, and presentation at appropriate abstraction levels.
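One way to carry that information through the stack is to return an explanation payload alongside every decision. A minimal sketch, with field names that are assumptions rather than any standard schema:

```python
# Sketch: an explanation payload returned with each AI decision.
# Field names are illustrative, not a standard schema.
from dataclasses import dataclass, field

@dataclass
class Factor:
    name: str       # human-readable input, e.g. "credit history"
    impact: float   # signed attribution score from the model
    level: str      # coarse user-facing label: "high", "medium", "low"

@dataclass
class Explanation:
    summary: str                          # one-line, user-facing reasoning
    confidence: float                     # calibrated probability, 0..1
    factors: list[Factor] = field(default_factory=list)
    detail: str = ""                      # longer reasoning, shown on demand

@dataclass
class Decision:
    output: str       # the recommendation or classification itself
    explanation: Explanation
```

Keeping the summary, factors, and detail as separate fields is what lets the interface present each abstraction level independently.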
Factor attribution shows what influenced AI decisions. "This loan was recommended because of strong credit history (high impact), stable employment (medium impact), and low debt ratio (medium impact)." Users see which inputs mattered.
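A sketch of how raw attribution scores might be rendered into that kind of sentence; the impact thresholds here are arbitrary assumptions for illustration:

```python
# Sketch: rendering raw attribution scores as the loan-style sentence above.
# The impact thresholds are arbitrary assumptions.
def impact_label(score: float) -> str:
    magnitude = abs(score)
    return "high" if magnitude >= 0.5 else "medium" if magnitude >= 0.2 else "low"

def render_attribution(decision: str, factors: dict[str, float]) -> str:
    # Present the strongest factors first, each with a coarse impact label.
    ranked = sorted(factors.items(), key=lambda item: -abs(item[1]))
    parts = [f"{name} ({impact_label(score)} impact)" for name, score in ranked]
    return f"{decision} because of " + ", ".join(parts) + "."

print(render_attribution(
    "This loan was recommended",
    {"strong credit history": 0.61,
     "stable employment": 0.34,
     "low debt ratio": 0.27},
))
```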
Contrastive explanations clarify why one option was chosen over another. "Product A was recommended over Product B because it better matches your stated preference for durability." Comparison helps users understand relative rankings.
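One simple way to generate such a contrast is to compare per-factor scores for the chosen option and the runner-up, then name the factor where the winner leads by the widest margin. A sketch with made-up scores:

```python
# Sketch: a contrastive explanation built by comparing per-factor scores of
# the chosen option against the runner-up. All scores are made up.
def contrastive(chosen: str, other: str,
                scores: dict[str, dict[str, float]]) -> str:
    # Name the factor where the chosen option leads by the widest margin.
    gaps = {f: scores[chosen][f] - scores[other][f] for f in scores[chosen]}
    factor = max(gaps, key=gaps.get)
    return (f"{chosen} was recommended over {other} because it better "
            f"matches your stated preference for {factor}.")

scores = {
    "Product A": {"durability": 0.9, "price": 0.5},
    "Product B": {"durability": 0.6, "price": 0.7},
}
print(contrastive("Product A", "Product B", scores))
```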
Confidence-qualified explanations acknowledge uncertainty. "I'm fairly confident (82%) this is spam because it contains known phishing patterns, but the sender is in your contacts." Users know when to apply more scrutiny.
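A sketch of pairing a verbal hedge with the numeric confidence and surfacing evidence on both sides; the confidence bands are assumptions, and a real system would use calibrated probabilities:

```python
# Sketch: qualifying an explanation with a verbal hedge tied to the model's
# confidence score. The confidence bands below are assumptions.
def hedge(confidence: float) -> str:
    if confidence >= 0.95:
        return "very confident"
    if confidence >= 0.75:
        return "fairly confident"
    return "not certain"

def qualified(verdict: str, confidence: float,
              evidence_for: str, evidence_against: str) -> str:
    return (f"I'm {hedge(confidence)} ({confidence:.0%}) {verdict} "
            f"because {evidence_for}, but {evidence_against}.")

print(qualified("this is spam", 0.82,
                "it contains known phishing patterns",
                "the sender is in your contacts"))
```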
Progressive disclosure offers explanation layers. A brief summary appears first ("Recommended based on your preferences") with "Why?" link revealing detailed factors. Users access the depth they need.
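In code, progressive disclosure can be as simple as indexing into explanation layers by how many times the user has asked "Why?". A sketch with illustrative layer content:

```python
# Sketch: a layered explanation where each "Why?" request reveals one more
# level of depth. Layer content is illustrative.
LAYERS = [
    "Recommended based on your preferences.",  # shown by default
    "Your stated preference for durability matched this product's reviews.",
    "Durability weight 0.9 x average review score 4.7/5 ranked it 1st of 12.",
]

def explanation_at(depth: int) -> str:
    # Clamp so requests past the deepest layer stay at full detail.
    return LAYERS[min(depth, len(LAYERS) - 1)]

for clicks in range(3):
    print(f'after {clicks} "Why?" clicks: {explanation_at(clicks)}')
```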
Example-based explanations use analogies. "Similar customers who bought this product also found these accessories useful." Relatable comparisons can be more intuitive than technical explanations.
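Behind such a message there is often a nearest-neighbour lookup. A minimal sketch over made-up purchase vectors:

```python
# Sketch: example-based explanation via nearest neighbours over purchase
# histories. Customers and vectors are made up for illustration.
import numpy as np

PURCHASES = {
    "you":   np.array([1, 0, 1, 0]),  # 1 = bought the item, 0 = did not
    "alice": np.array([1, 0, 1, 1]),
    "bob":   np.array([0, 1, 0, 1]),
}

def most_similar(target: str) -> str:
    # Cosine similarity against every other customer; highest wins.
    t = PURCHASES[target]
    def cosine(v: np.ndarray) -> float:
        return float(v @ t) / (np.linalg.norm(v) * np.linalg.norm(t))
    return max((c for c in PURCHASES if c != target),
               key=lambda c: cosine(PURCHASES[c]))

print(f"Customers similar to you (like {most_similar('you')}) "
      "also found these accessories useful.")
```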