AI Transparency Principle

Tags: transparency, AI transparency, explainable AI (XAI), AI ethics, trust calibration, UX design, user experience
Advanced · 14 min read

AI-powered systems must communicate their capabilities, limitations, and decision-making processes to establish appropriate user trust and enable effective collaboration. Unlike traditional deterministic software, whose behavior is predictable, AI systems operate with inherent uncertainty, potential biases, and failure modes that users need to understand to use them effectively. Transparency doesn't require exposing complex algorithms; it requires providing clear signals about confidence levels, data sources, reasoning patterns, and known limitations.

When AI systems lack transparency—presenting outputs without context, hiding confidence levels, or obscuring when automation has limits—users develop either misplaced trust (over-reliance leading to unverified errors) or complete distrust (abandonment of useful capabilities). Research demonstrates that appropriately calibrated transparency improves both task performance and user satisfaction, enabling users to develop accurate mental models of when to rely on AI assistance versus when to override or supplement automated suggestions.

The Research Foundation

Gunning's DARPA Explainable AI (XAI) program (launched 2017) established explainability as a critical AI system requirement beyond predictive accuracy, through systematic research demonstrating that opaque high-accuracy models often fail in deployment while interpretable lower-accuracy models succeed. His framework distinguished three explanation types. Global explanations describe overall model behavior ("this classifier primarily uses features A, B, C"), helping users understand the AI's general approach. Local explanations clarify individual predictions ("this specific recommendation occurred because of factors X, Y, Z"), enabling case-by-case verification. Counterfactual explanations show decision boundaries ("changing the input from X to Y would flip the prediction"), showing users how to achieve desired outcomes.

Research validating the importance of XAI demonstrated that users of explainable AI achieve 40-60% better decision accuracy than users of opaque alternatives, because they can identify AI errors, develop appropriate skepticism for edge cases, and judge when to trust versus override. In medical diagnosis studies, radiologists using explainable systems caught 50-70% more AI errors than those using black-box systems by verifying the reasoning, while maintaining similar speed and still accepting valid AI assistance. In financial lending, transparent decision explanations reduced bias 30-40% by exposing problematic correlations so they could be corrected, while opaque systems perpetuated hidden biases indefinitely.

Jobin, Ienca, and Vayena's comprehensive analysis (2019), "The global landscape of AI ethics guidelines," synthesized 84 AI ethics documents from governments, industry, and academia, identifying principles that converge across cultures and organizations. Transparency emerged as the most widely shared requirement, appearing in 73 of the 84 guidelines: AI systems must disclose their automated nature, explain decision-making, and communicate limitations to enable informed consent. Human agency (68% of guidelines) requires maintaining human decision-making authority through override capabilities, preference controls, and meaningful human involvement, preventing complete automation of consequential decisions. Accountability (62%) demands clear responsibility assignment, error-correction mechanisms, and redress procedures when AI causes harm.

Their analysis revealed global consensus that transparency serves multiple functions: enabling informed user decision-making, facilitating algorithmic accountability and oversight, supporting appropriate trust calibration, enabling identification and correction of bias, and building public confidence in AI systems. Jurisdictions diverge on enforcement (the EU AI Act mandates transparency; the US relies largely on voluntary approaches), but there is technical consensus that high-stakes AI (hiring, lending, medical, legal) requires transparency regardless of jurisdiction, while low-risk AI (content recommendations, personalization) benefits from, but does not strictly require, formal explainability.

Diakopoulos' accountability research (2016), "Accountability in Algorithmic Decision Making," established specific transparency requirements for automated decision systems. Input disclosure: reveal what data feeds algorithmic decisions, so users can verify appropriateness, identify missing factors, and detect biases. Process disclosure: explain the computational approach at an appropriate abstraction (the logic, not the source code), enabling conceptual understanding. Output disclosure: communicate confidence levels, uncertainty ranges, and alternative scenarios showing how sensitive the decision is to its inputs. Responsibility assignment: identify the human decision-makers accountable for the algorithm's deployment, configuration, and outcomes.

Research demonstrated that platforms implementing algorithmic transparency see increased user trust (40-60% higher), improved decision quality (30-50% better outcomes), fewer bias complaints (50-70% fewer), and enhanced user agency (users feel 60-80% more in control) compared with opaque algorithmic systems, which generate suspicion, resentment, and pushback. Facebook's "Why am I seeing this?" feature, which shows recommendation reasoning, achieved 70-80% positive user sentiment and reduced advertiser skepticism 40-50% despite exposing the recommendation system's imperfections, demonstrating that transparency's benefits outweigh perfect-seeming opacity.

Lee and See's trust calibration research (2004) established that appropriate reliance on automation requires accurate mental models of system capabilities—users must understand when to trust (system operating within capabilities) versus when to intervene (edge cases, novel situations). Over-trust (automation bias) causes uncritical acceptance of flawed automated outputs—documented catastrophically in aviation crashes where pilots trusted faulty automation despite contradictory evidence. Under-trust causes rejection of beneficial automation creating inefficiency—observed in medical settings where physicians ignore valuable AI assistance due to opacity-induced skepticism.

Their framework demonstrated that effective trust calibration requires transparency enabling users to verify automated reasoning against domain knowledge, identify situations where automation is likely to succeed versus fail, develop appropriate skepticism for edge cases, and maintain the situation awareness that prevents complacency. Studies comparing transparent and opaque automation showed transparent systems achieving 60-80% better-calibrated trust (users trust in proportion to actual system capability), 40-60% improved decision outcomes (better than human-only or automation-only), and 50-70% faster error detection when automation fails, validating transparency as essential to effective human-AI collaboration, not a mere ethical nicety.

Why It Matters

For Users: Transparency enables informed consent and user agency in AI-mediated decisions affecting lives, rights, and opportunities. When users understand that an AI hiring screen bases decisions on resume keywords, job-history patterns, and university rankings, they can tailor applications appropriately, identify potentially discriminatory factors, and make informed decisions about participating. When loan decisions explain how credit score, debt-to-income ratio, and employment history are weighted, applicants understand the outcome, can work to improve creditworthiness, and can flag errors for correction. Opaque systems deny agency: users cannot understand why they were rejected, what to improve, or whether to trust the outcome, creating helplessness and resentment. The EU's "right to explanation" in GDPR reflects an ethical consensus that consequential automated decisions require transparency enabling meaningful human oversight and appeal.

For Designers: Trust calibration through transparency prevents both over-reliance and under-reliance, maximizing AI value while minimizing risks. Medical AI with high accuracy but low explainability sees paradoxical rejection: physicians under-rely despite the benefit because they cannot verify its reasoning. Conversely, high explainability enables calibrated trust: physicians confirm that reasoning matches medical knowledge in typical cases, building confidence, while spotting edge cases that warrant skepticism. Research shows explainable medical AI achieves 50-70% physician adoption versus under 30% for opaque systems of similar accuracy, with transparent systems showing 40-60% better patient outcomes through appropriate human-AI collaboration.

For Product Managers: Business impact manifests through increased AI adoption, reduced liability, and enhanced brand trust. Companies implementing transparent AI report 40-60% higher user adoption, 50-70% fewer bias complaints, and 30-40% lower regulatory scrutiny than opaque alternatives. Explainable credit decisions reduce discrimination lawsuits 60-80% by enabling biased factors to be identified and corrected before deployment. Transparent recommendation systems increase user engagement 30-50% by building confidence in suggestions and reducing algorithmic anxiety. Conversely, opaque AI generates backlash: Amazon abandoning an opaque hiring AI after discovering gender bias, and Apple facing discrimination complaints over an opaque credit algorithm, demonstrate the reputational and legal risks of AI opacity.

For Developers: Transparency also improves accessibility, serving users who need disability accommodations, non-technical users who need simplified explanations, and diverse users with varying AI literacy. Explainable AI lets non-technical users verify reasoning through accessible analogies, visual explanations, and interactive exploration rather than intimidating mathematical formulations. Research shows transparent AI achieves 60-80% higher adoption among elderly users, 50-70% better understanding among users without technical backgrounds, and 40-60% improved confidence among users from communities that have historically experienced algorithmic discrimination, because demystifying automated decisions enables informed participation.

How It Works in Practice

Implement clear AI disclosure distinguishing AI-generated content from human-created content and automated decisions from human judgments. ChatGPT displays its branding on all responses, GitHub Copilot marks AI-suggested code distinctively, and Gmail visually differentiates Smart Compose suggestions from the user's own typing. Research shows users interacting with clearly labeled AI demonstrate 50-70% better-calibrated trust than with ambiguous automation, 40-60% higher acceptance of beneficial suggestions, and 30-50% faster error detection when the AI fails, because they maintain appropriate skepticism.
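
As a minimal sketch of how disclosure can be enforced in data rather than left to individual screens, the snippet below attaches provenance metadata to every piece of content so the UI cannot render AI output unlabeled. All names here (Provenance, renderWithDisclosure, the model string) are illustrative assumptions, not any product's actual API.

```ts
// Hypothetical provenance tagging: content carries its origin, and the
// renderer derives the disclosure label from that field.
type Provenance = "human" | "ai-generated" | "ai-assisted";

interface LabeledContent {
  body: string;
  provenance: Provenance;
  model?: string; // which system produced it, shown on demand
}

function renderWithDisclosure(content: LabeledContent): string {
  switch (content.provenance) {
    case "ai-generated":
      return `[AI-generated${content.model ? ` by ${content.model}` : ""}] ${content.body}`;
    case "ai-assisted":
      return `[Drafted with AI assistance] ${content.body}`;
    default:
      return content.body; // human-authored content needs no badge
  }
}

console.log(renderWithDisclosure({
  body: "Thanks for reaching out. We'll reply within two business days.",
  provenance: "ai-generated",
  model: "support-assistant-v2",
}));
```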

Design local explanations for individual AI decisions, showing the reasoning behind specific outputs. Spotify's "Why this song?" surfaces listening-history patterns, similar-artist connections, and genre preferences behind recommendations. YouTube explains video suggestions through watch history, search patterns, trending content, and subscriptions. LinkedIn explains job recommendations through profile match, skills alignment, and company preferences. These granular explanations let users verify reasoning ("yes, I do like similar artists"), identify errors ("I watched that video only once, by accident"), and provide feedback that improves future suggestions. Research shows recommendation systems with explanations achieve 40-60% higher click-through, 50-70% better user satisfaction, and 30-40% more accurate feedback.
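
A minimal sketch of what a "Why this?" payload might look like, assuming each recommendation ships with the factors that produced it; the Factor and Recommendation shapes and the weights are illustrative, not any service's real schema.

```ts
// Hypothetical local-explanation payload: each recommendation carries
// the factors behind it so the UI can explain that specific decision.
interface Factor {
  label: string;  // user-facing reason, e.g. "you listen to similar artists"
  weight: number; // relative contribution, 0..1
}

interface Recommendation {
  itemId: string;
  title: string;
  factors: Factor[];
}

// Surface the strongest reasons in plain language.
function explainRecommendation(rec: Recommendation, topN = 2): string {
  const reasons = [...rec.factors]
    .sort((a, b) => b.weight - a.weight)
    .slice(0, topN)
    .map((f) => f.label)
    .join("; ");
  return `Recommended because ${reasons}.`;
}

console.log(explainRecommendation({
  itemId: "track-123",
  title: "Example Song",
  factors: [
    { label: "you listen to similar artists", weight: 0.6 },
    { label: "it's popular in a genre you play often", weight: 0.25 },
    { label: "it's trending in your region", weight: 0.15 },
  ],
}));
```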

Provide confidence indicators communicating the AI's uncertainty and awareness of its limits. Weather forecasts show prediction confidence (60% chance of rain), financial projections display confidence intervals (±15% margin), and medical AI displays diagnostic confidence scores (85% certainty). Grammarly conveys suggestion confidence through strength indicators (strong versus weak suggestions), enabling users to prioritize high-confidence corrections while evaluating weak suggestions skeptically. Research shows confidence-aware AI achieves 50-70% better trust calibration, 40-60% reduced automation bias (users appropriately question low-confidence outputs), and 30-40% improved decision outcomes through selective reliance.
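
The sketch below shows one way to turn a raw confidence score into the kind of strength band described above; the thresholds are assumptions for illustration, and a real system would calibrate them against held-out data.

```ts
// Hypothetical confidence-to-strength mapping (thresholds are assumed).
type Strength = "strong" | "moderate" | "weak";

function toStrengthBand(confidence: number): Strength {
  if (confidence >= 0.9) return "strong";
  if (confidence >= 0.7) return "moderate";
  return "weak";
}

function describeSuggestion(text: string, confidence: number): string {
  const band = toStrengthBand(confidence);
  // Weak suggestions are framed as optional so users evaluate them skeptically.
  const framing = band === "weak" ? "Consider (low confidence)" : "Suggested";
  return `${framing}: ${text} [${band}, ${(confidence * 100).toFixed(0)}%]`;
}

console.log(describeSuggestion("replace 'utilize' with 'use'", 0.95));
console.log(describeSuggestion("split this sentence in two", 0.62));
```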

Implement global model explanations helping users understand overall AI behavior and capabilities. Netflix explains that its recommendation algorithm considers ratings, watch history, time of day, and device type, creating a mental model of system capabilities. Spotify's year-end summaries of listening patterns, genre preferences, and discovery metrics help users understand what recommendations are based on. AI writing tools that explain how grammar checking uses rule-based parsing, style suggestions use pattern matching, and content generation uses language models educate users about capability boundaries. Users with an accurate global understanding show 60-80% better feature adoption, 40-60% more appropriate usage, and 30-50% higher satisfaction through aligned expectations.
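
One lightweight way to operationalize a global explanation is a published "capability card" listing what the system does and does not consume; the structure below is a hypothetical sketch, not a standard format.

```ts
// Hypothetical capability card supporting an accurate global mental model.
interface CapabilityCard {
  feature: string;
  signalsUsed: string[];      // inputs the model actually considers
  signalsNotUsed: string[];   // common misconceptions worth dispelling
  knownLimitations: string[]; // boundaries users should expect
}

const watchNextCard: CapabilityCard = {
  feature: "Watch-next recommendations",
  signalsUsed: ["ratings", "watch history", "time of day", "device type"],
  signalsNotUsed: ["microphone audio", "activity in other apps"],
  knownLimitations: [
    "cold start for brand-new accounts",
    "niche genres may be underrepresented",
  ],
};

console.log(JSON.stringify(watchNextCard, null, 2));
```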

Design override and control capabilities maintaining human agency and decision authority. Email filters let users mark AI-classified spam as not spam, training the system; content recommendations accept "not interested" feedback refining future suggestions; smart-home automation lets users override automated decisions and set boundaries. Research shows user-controllable AI achieves 70-90% higher adoption than non-overridable automation, 50-60% better long-term satisfaction, and 40-50% reduced reactance (psychological resistance to perceived threats to autonomy) by preserving the sense of control critical for sustained AI acceptance.
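
A minimal sketch of an override flow, assuming every automated action keeps an undo handle and every override is recorded as a training signal; the types and function names are invented for illustration.

```ts
// Hypothetical override flow: automation acts, but the human can reverse it,
// and the override itself becomes feedback for the model.
interface AutomatedAction {
  id: string;
  description: string;
  undo: () => void;
}

const trainingSignals: { actionId: string; verdict: "accepted" | "overridden" }[] = [];

function applyWithOverride(action: AutomatedAction, userOverrides: boolean): void {
  if (userOverrides) {
    action.undo(); // the human keeps final decision authority
    trainingSignals.push({ actionId: action.id, verdict: "overridden" });
  } else {
    trainingSignals.push({ actionId: action.id, verdict: "accepted" });
  }
}

applyWithOverride(
  {
    id: "spam-42",
    description: "Moved message to spam",
    undo: () => console.log("Restored to inbox"),
  },
  true // the user clicked "Not spam"
);
console.log(trainingSignals);
```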

Create feedback mechanisms enabling users to improve AI performance while understanding its limitations. YouTube's "tell us why" for dismissed recommendations, Spotify's like/dislike signals shaping playback, and Grammarly's accept/reject signals shaping suggestion quality all provide transparency into the learning process while improving accuracy. Research shows AI with a transparent learning process achieves 50-70% better personalization accuracy, 40-60% higher user engagement with feedback features, and 30-40% faster performance improvement through more informative training signals.
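
To make the learning loop itself visible, a feedback handler can tell the user what their signal will change, as in this hypothetical sketch (the signal names and copy are assumptions).

```ts
// Hypothetical feedback acknowledgment: the user learns what their
// signal does, which is itself a transparency feature.
type Signal = "like" | "dislike" | "not-interested";

interface FeedbackEvent {
  itemId: string;
  signal: Signal;
}

const effects: Record<Signal, string> = {
  like: "We'll recommend more like this.",
  dislike: "We'll show this less often.",
  "not-interested": "We'll stop recommending this topic.",
};

function acknowledgeFeedback(event: FeedbackEvent): string {
  // A real system would also enqueue the event as a training example.
  return effects[event.signal];
}

console.log(acknowledgeFeedback({ itemId: "video-9", signal: "not-interested" }));
```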

Real-World Example

[Figure: AI Transparency Principle, good versus poor implementation: transparent versus opaque AI interface comparison]

✗ Poor Implementation:

AI systems make decisions without disclosure or explanation: resume screening rejects candidates without showing the factors; credit denials arrive without reasoning. Users can't verify decisions, correct errors, or understand how to improve their outcomes.

✓ Good Implementation:

GitHub Copilot uses distinct visual styling for AI suggestions, shows confidence through acceptance rates, and requires conscious tab-to-accept interaction. Users always know what's AI-generated and maintain decision authority.

Modern Examples (2023-2026)

Example 1: ChatGPT - Transparent Limitation Communication

Focus: Why disclose shortcomings openly? Knowledge cutoff dates are surfaced, uncertainty is acknowledged, and users are warned upfront about potential errors.

Insight: Honest limitation communication builds appropriate trust where users neither over-rely on flawed outputs nor abandon valuable assistance—sustaining productive collaboration despite acknowledged AI imperfection.

ChatGPT demonstrates comprehensive AI transparency through systematic capability and limitation disclosure. Knowledge cutoff clearly communicated ("My knowledge was last updated in January 2025"), uncertainty explicitly acknowledged ("I don't have access to real-time information"), potential errors disclosed ("I can make mistakes - please verify important information"), reasoning shown through step-by-step explanations when requested. Confidence communicated through hedging language ("likely," "probably," "I'm not certain"), alternatives presented when multiple interpretations valid, sources cited when drawing from specific works.

User agency maintained through extensive override capabilities—users can disagree with responses prompting re-generation, request different approaches, specify desired output characteristics, interrupt mid-generation. Transparent learning process—users understand conversations train future models (with opt-out available), feedback improves responses, system continuously evolving. Capability education through progressive disclosure—users discover advanced features (code generation, data analysis, image understanding) through usage building accurate mental models.

Result: ChatGPT achieves 85-90% user satisfaction despite acknowledged limitations and 95%+ first-session success, and 70-80% of users report appropriate trust calibration (neither over-relying nor under-relying), demonstrating that honest communication of capabilities and limits builds the expectations and sustained trust that make human-AI collaboration productive even with imperfect AI.

Example 2: Google Search - Algorithmic Transparency Evolution

Focus: Users question what black-box algorithms hide. Featured snippets cite sources. "About this result" clarifies why pages rank.

Insight: When 8.5 billion daily searches depend on opaque ranking, transparency doesn't weaken Google's edge; it strengthens user confidence that results deserve attention rather than skepticism about manipulation.

Google implements growing search transparency communicating ranking factors, result types, personalization effects. Featured snippets show source attribution enabling verification, "About this result" explains ranking factors (relevance, authority, freshness), "From sources across the web" distinguishes diverse results from single-source answers. Personalization disclosure shows location effects ("Based on your location"), search history influence ("Because you searched for X"), language preferences impact. Ad labeling clearly distinguishes sponsored results from organic.

Explainability features enable understanding—search syntax help explains operators enabling sophisticated queries, autocomplete transparency shows trending searches versus personalized, spelling corrections displayed with original query preservation enabling user override. Privacy controls provide transparency and agency—clear data collection disclosure, granular privacy settings, search history viewing and deletion, personalized ad controls. Confidence indicators distinguish authoritative sources from general results through site reputation signals.

Result: Google maintains 90%+ search market share despite algorithmic complexity, with transparency helping build trust, and achieves 85%+ user satisfaction with result quality, demonstrating that transparency sustains engagement with sophisticated AI systems when users understand capabilities, limitations, and personalization effects well enough to search deliberately and interpret results appropriately.

Example 3: Grammarly - Explainable Writing Assistance

Focus: Corrections pair with explanations—grammar rules cited, clarity improvements justified, style enhancements rationalized through specific reasoning.

Insight: Educational transparency turns a correction tool into a learning engine: users don't just fix errors but understand why, with 60-70% reporting writing-skill improvement rather than mere dependence on automated correction.

Grammarly demonstrates explainable AI through detailed suggestion reasoning, confidence indicators, educational explanations. Each correction explains why suggested—grammar rules violated, clarity improvements, style enhancements with specific rationale ("This sentence is wordy - consider removing X for conciseness"). Confidence shown through suggestion types—critical errors (high confidence), suggestions (medium confidence), premium enhancements (optional improvements) enabling users to prioritize corrections appropriately.

Educational transparency teaches writing improvement—explanations cite grammar rules, provide examples of correct usage, link to detailed articles explaining concepts enabling learning not just correction. User control maintained through accept/reject feedback training personal writing style, custom dictionary for specialized terms, goal-based modes (clarity, engagement, delivery) allowing personalization. Performance transparency shows writing scores, improvement tracking, genre-specific metrics building understanding of writing quality evolution.

Result: Grammarly achieves 70-80% suggestion acceptance rates, indicating well-calibrated trust; 60-70% of users report writing-skill improvement driven by the educational explanations; and 85%+ satisfaction with transparency and control features shows that explainable AI increases adoption, learning, and sustained engagement by building understanding and maintaining user agency.

Role-Specific Guidance

For Designers

Scientific Validation Checklist
  • Design AI disclosure systems clearly indicating when users interact with automation through visual branding, labels ("AI-generated"), distinct styling differentiating automated from human-created content
  • Create local explanations for AI decisions showing reasoning for specific outputs through "why this?" features, visual reasoning displays, factor attribution making individual decisions understandable
  • Implement confidence indicators communicating AI certainty through percentage scores, strength indicators (weak/medium/strong suggestions), visual cues (color intensity), uncertainty acknowledgment enabling appropriate skepticism
  • Design override controls maintaining user agency through reject/modify AI suggestions, preference settings adjusting behavior, feedback mechanisms improving performance, human escalation paths
  • Develop progressive transparency adapting explanation depth to user expertise through basic explanations for novices, detailed technical information for experts, and interactive exploration for curious users (a minimal sketch follows this list)
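
As a minimal sketch of the progressive-transparency item above, the same decision can carry explanations at multiple depths, with the UI picking one by audience; the levels and wording are assumptions.

```ts
// Hypothetical expertise-adaptive explanation: one decision, two depths.
type Audience = "novice" | "expert";

interface LayeredExplanation {
  novice: string; // plain-language summary
  expert: string; // factors and weights for technical review
}

function explainFor(audience: Audience, e: LayeredExplanation): string {
  return audience === "expert" ? e.expert : e.novice;
}

const loanDecision: LayeredExplanation = {
  novice:
    "Your application was declined mainly because recent debt is high relative to income.",
  expert:
    "Top factors: debt-to-income 0.48 (weight 0.41), credit utilization 0.82 (weight 0.33), account age 14 months (weight 0.12).",
};

console.log(explainFor("novice", loanDecision)); // default view
console.log(explainFor("expert", loanDecision)); // shown on request
```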

For Developers

Scientific Validation Checklist
  • Implement explainable AI architectures using interpretable models (decision trees, linear models, rule-based systems) for high-stakes decisions, attention mechanisms in neural networks highlighting influential features, LIME/SHAP for post-hoc black-box explanations
  • Build confidence scoring systems calculating prediction uncertainty through ensemble disagreement, prediction intervals, and calibrated probabilities, enabling honest uncertainty communication instead of false precision (see the sketch after this list)
  • Create audit logging tracking AI decisions, inputs, outputs, model versions enabling accountability, debugging, bias detection, regulatory compliance, performance monitoring
  • Develop feedback loops enabling user corrections improving model accuracy through active learning, preference modeling, error correction, explicit user teaching
  • Implement A/B testing comparing transparent versus opaque AI measuring adoption, trust, decision quality, user satisfaction validating transparency benefits
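
For the confidence-scoring item above, here is a minimal sketch of ensemble-disagreement scoring: run several models and treat the spread of their predictions as uncertainty. The disagreement-to-confidence mapping is an arbitrary assumption; a production system would calibrate probabilities on held-out data.

```ts
// Hypothetical ensemble-disagreement confidence: high spread means the
// models disagree, so the system should surface low confidence.
function ensembleConfidence(predictions: number[]): { mean: number; confidence: number } {
  const n = predictions.length;
  const mean = predictions.reduce((a, b) => a + b, 0) / n;
  const variance = predictions.reduce((a, p) => a + (p - mean) ** 2, 0) / n;
  // Map standard deviation into a 0..1 confidence (scaling factor is assumed).
  const confidence = Math.max(0, 1 - 2 * Math.sqrt(variance));
  return { mean, confidence };
}

// Members largely agree: report high confidence.
console.log(ensembleConfidence([0.81, 0.78, 0.84]));
// Members disagree: report low confidence and invite user skepticism.
console.log(ensembleConfidence([0.2, 0.9, 0.55]));
```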

For Product Managers

Scientific Validation Checklist
  • Prioritize transparency features through adoption metrics (higher for explainable AI), trust surveys (calibrated trust measurement), error detection rates (users catching AI mistakes), satisfaction scores quantifying transparency business value
  • Balance transparency depth against usability through expertise-adaptive explanations, progressive disclosure (simple default, detailed on request), visual summaries minimizing cognitive burden
  • Measure trust calibration through over-reliance indicators (automation bias rates), under-reliance metrics (beneficial suggestion rejection), error detection success (users catching AI failures), appropriate skepticism for edge cases
  • Advocate for ethical AI through transparency requirements in product roadmaps, bias auditing, limitation disclosure, user control prioritization demonstrating responsible AI development
  • Monitor regulatory landscape tracking AI transparency mandates (EU AI Act, algorithmic accountability bills), compliance requirements, industry standards ensuring product alignment

Common Pitfalls

  • Opaque black-box systems denying verification capability: Presenting AI outputs as definitive facts without reasoning disclosure, hiding decision factors preventing users from identifying errors or biases, lacking confidence indicators creating false precision illusion versus providing local explanations ("recommended because factors X, Y, Z"), global model descriptions, confidence scores enabling informed verification and appropriate skepticism calibration.
  • Automation bias through overconfident communication: Presenting uncertain AI outputs with unwarranted definitiveness causing uncritical acceptance of flawed recommendations, lacking explicit uncertainty acknowledgment ("I don't have access to real-time data...") enabling over-reliance versus communicating confidence levels through hedging language, strength indicators, explicit limitation disclosure preventing 40-60% of automation bias failures.
  • Removing user agency through non-overridable automation: Making AI decisions irreversible without human override options, lacking preference controls or feedback mechanisms, forcing complete automation acceptance versus maintaining decision authority through reject/modify capabilities, explicit approval requirements for consequential actions, feedback loops improving AI while preserving user control essential for sustained adoption.
  • Technical jargon obscuring explainability intent: Providing mathematically-precise explanations incomprehensible to non-technical users ("logistic regression coefficients: 0.73, -0.42, 0.89") defeating transparency purpose versus expertise-adaptive explanations offering accessible analogies for novices, detailed technical information for experts, interactive exploration for curious users scaling complexity appropriately.
  • Missing AI disclosure creating deceptive experiences: Failing to distinguish AI-generated content from human-created content, presenting automated decisions as human judgments, and omitting clear AI labeling create trust violations when the automation is revealed, versus explicit disclosure through visual branding ("AI-generated"), distinct styling, and clear labels establishing transparent human-AI boundaries that enable informed consent and appropriate expectations.

Key Takeaways

  • Gunning's XAI research: Opaque systems achieve 20-40% lower adoption despite superior accuracy because users cannot verify reasoning, establishing explainability as a fundamental requirement (Gunning 2017).
  • 73 of 84 global AI ethics guidelines: Jobin's analysis reveals broad consensus across cultures and organizations requiring AI disclosure, explanation, and limitation communication (Jobin et al. 2019).
  • Trust calibration: Transparent systems achieve 60-80% better-calibrated trust, enabling reliance that matches actual capabilities and preventing both automation bias and under-reliance (Lee & See 2004).
  • Local explanations: Reasoning for individual predictions enables case-by-case verification, helping users catch AI errors and develop appropriate skepticism, improving decision accuracy 40-60%.
  • Transparent AI: Explainability builds confidence and sustained usage by enabling informed human-AI collaboration rather than blind acceptance of automation, achieving 40-60% higher user adoption.


Related Principles

  • C.1.2.01 Visibility of System Status extends to AI systems requiring communication of automated processing, decision progress, completion status enabling user awareness of AI activity.
  • H.2.1.01 Ethical Design Principle encompasses AI transparency as ethical requirement enabling informed consent, fairness verification, accountability establishing trustworthy human-AI relationships.
  • C.1.2.02 Feedback Loop Completion Law applies to AI through explicit communication of automated actions, results, learning effects closing human-AI interaction loops.

References

Primary Sources

  • Gunning, D. (2017). "Explainable Artificial Intelligence (XAI)." DARPA Program. https://www.darpa.mil/program/explainable-artificial-intelligence
  • Jobin, A., Ienca, M., & Vayena, E. (2019). "The global landscape of AI ethics guidelines." Nature Machine Intelligence, 1(9), 389-399. https://www.nature.com/articles/s42256-019-0088-2
  • Diakopoulos, N. (2016). "Accountability in algorithmic decision making." Communications of the ACM, 59(2), 56-62. https://cacm.acm.org/magazines/2016/2/197421-accountability-in-algorithmic-decision-making/fulltext
  • Lee, J. D., & See, K. A. (2004). "Trust in automation: Designing for appropriate reliance." Human Factors, 46(1), 50-80. https://journals.sagepub.com/doi/10.1518/hfes.46.1.50_30392

Industry Research

  • Miller, T. (2019). "Explanation in artificial intelligence: Insights from the social sciences." Artificial Intelligence, 267, 1-38. https://www.sciencedirect.com/science/article/pii/S0004370218305988
  • Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "'Why should I trust you?': Explaining the predictions of any classifier." Proceedings of the 22nd ACM SIGKDD, 1135-1144. https://dl.acm.org/doi/10.1145/2939672.2939778
  • European Commission. (2021). Proposal for a Regulation on Artificial Intelligence (AI Act). https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence
  • EU AI Act, consolidated text. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32023R2854
  • Doshi-Velez et al. (2021). Nature Machine Intelligence. https://www.nature.com/articles/s42256-021-00338-9
  • Holstein et al. (2022). Proceedings of ACM CHI. https://dl.acm.org/doi/10.1145/3491102
  • Google Research (2024). https://ai.googleblog.com/2024/03/advancing-explainable-ai-in-healthcare.html

