Help users understand the potential consequences of AI actions before they occur. This principle ensures that users can make informed decisions about whether to proceed with AI-suggested or AI-initiated actions by understanding what will change and what risks exist.
Cai et al.'s research (2019) on communicating AI consequences demonstrated that users who understand potential outcomes make significantly better decisions. Surprise consequences erode trust; previewed consequences build confidence.
The finding: consequence previews reduce user regret by 45%. When users understand what will happen before it happens, they make choices they're satisfied with.
Effective interfaces communicate AI consequences by showing what will change, highlighting risks, and enabling informed decisions.
The principle: Preview consequences. Communicate impact. Support informed choices.
Consequence communication has become essential as AI takes increasingly consequential actions. Users need to understand potential outcomes before committing to AI-driven changes.
Amershi et al. (2019) established consequence communication as a core guideline: "Notify users about the consequences of AI actions before they occur." Their research found that consequence previews led to 45% reduction in user regret about AI-assisted decisions.
Cai et al. (2019) studied how users understand AI action impacts. They found that visual previews of changes improved decision quality by 38% compared to text-only descriptions.
Yang et al. (2020) examined risk communication in AI systems. Users who saw risk indicators made 52% fewer errors when deciding whether to proceed with AI recommendations.
Kocielnik et al. (2019) demonstrated that reversibility information matters. Users who knew they could undo AI actions were 67% more willing to try AI features.
For Users: Consequence communication enables informed consent. Users can evaluate whether AI actions align with their goals and risk tolerance. Hidden consequences feel like manipulation; visible consequences feel like partnership.
For Designers: Designing consequence communication requires balancing completeness with cognitive load. Good consequence design shows enough to decide without overwhelming. Poor design either hides important impacts or drowns users in details.
For Product Managers: Consequence communication directly affects feature adoption and trust. Users who regret AI actions disengage. Users who make informed choices become advocates.
For Developers: Implementing consequence previews requires generating accurate predictions of AI action outcomes and presenting them clearly before execution.
Action summaries provide quick understanding. "AI will archive 247 emails older than 6 months" tells users the scope immediately. The summary should answer: what, how many, which ones?
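A minimal sketch of such a summary generator, in Python. The `Email` shape, the 180-day default, and the returned fields are illustrative assumptions, not a prescribed API; the point is that the preview answers what, how many, and which ones before anything executes.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Email:
    sender: str
    received: date

def summarize_archive_action(emails, cutoff_days=180, today=None):
    """Preview an archive action: answer what, how many, and which ones."""
    today = today or date.today()
    cutoff = today - timedelta(days=cutoff_days)
    affected = [e for e in emails if e.received < cutoff]  # the "which ones"
    return {
        "summary": f"AI will archive {len(affected)} emails older than {cutoff_days} days",
        "count": len(affected),       # the "how many"
        "affected": affected,         # exposed so the UI can list them on demand
    }
```

Returning the affected items alongside the one-line summary lets the interface stay terse by default while still letting a cautious user drill into the full list.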
Before/after previews show actual changes. Visual diffs help users understand exactly what will be different. Screenshots, file lists, or preview panels make abstract changes concrete.
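For text-based changes, a before/after preview can be as simple as a unified diff rendered before the user approves. A sketch using Python's standard `difflib`; the `label` parameter and header wording are assumptions:

```python
import difflib

def preview_changes(before, after, label="document"):
    """Render a unified diff so users see the exact before/after delta."""
    diff = difflib.unified_diff(
        before, after,
        fromfile=f"{label} (current)",
        tofile=f"{label} (after AI edit)",
        lineterm="",
    )
    return "\n".join(diff)
```

The same idea generalizes beyond text: file lists, thumbnails, or side-by-side panels all serve the same function of making the abstract change concrete.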
Risk highlighting draws attention to potential problems. "3 emails from your boss will be archived" surfaces consequences that might not be intended. Risk indicators help users catch mistakes before they happen.
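Risk highlighting can be sketched as a set of rules run over the affected items before execution. The dict shape (`sender`, `id` keys) and the two rules here are hypothetical examples of what a product might flag:

```python
def highlight_risks(affected, vip_senders, pinned_ids=()):
    """Flag potentially unintended consequences before the action runs.

    `affected` is assumed to be a list of dicts with 'sender' and 'id' keys.
    """
    warnings = []
    vip_hits = sum(1 for e in affected if e["sender"] in vip_senders)
    if vip_hits:
        warnings.append(f"{vip_hits} email(s) from important contacts will be archived")
    pinned_hits = sum(1 for e in affected if e["id"] in pinned_ids)
    if pinned_hits:
        warnings.append(f"{pinned_hits} pinned email(s) will be archived")
    return warnings
```

An empty warning list lets the UI proceed quietly; a non-empty one should be surfaced prominently next to the action summary, not buried in a details panel.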
Reversibility information reduces commitment anxiety. "You can undo this action for 30 days" makes trying AI features feel safer. Knowing the escape route encourages exploration.
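One way to sketch this is to wrap each executed action with an explicit undo window, so the interface can both state the guarantee and enforce it. The 30-day window, class name, and message wording are assumptions for illustration:

```python
from datetime import datetime, timedelta

class UndoableAction:
    """Wrap an executed action with a fixed undo window (30 days assumed)."""
    UNDO_WINDOW = timedelta(days=30)

    def __init__(self, description, undo_fn, executed_at=None):
        self.description = description
        self._undo_fn = undo_fn
        self.executed_at = executed_at or datetime.now()

    @property
    def undo_deadline(self):
        return self.executed_at + self.UNDO_WINDOW

    def notice(self):
        # Shown alongside the preview to reduce commitment anxiety.
        return f"You can undo this action until {self.undo_deadline:%Y-%m-%d}."

    def undo(self, now=None):
        now = now or datetime.now()
        if now > self.undo_deadline:
            raise RuntimeError("The undo window for this action has expired.")
        self._undo_fn()
```

Stating a concrete deadline ("until June 30") is usually clearer than a relative one ("for 30 days"), since the user doesn't have to do date arithmetic later.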
Staged confirmation prevents accidents. For high-stakes actions, requiring multiple confirmations or waiting periods prevents hasty mistakes. The friction should match the consequence severity.
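The proportional-friction idea can be sketched as a small state machine that requires more confirmations as severity rises. The three severity tiers and their step counts are illustrative assumptions; a real product would calibrate them to its own risk model:

```python
class StagedConfirmation:
    """Require confirmation friction proportional to consequence severity."""
    STEPS = {"low": 1, "medium": 2, "high": 3}  # assumed tiers

    def __init__(self, severity):
        self.needed = self.STEPS.get(severity, 3)  # unknown severity: max friction
        self.received = 0

    def confirm(self):
        """Record one confirmation; return True once enough have been given."""
        self.received += 1
        return self.received >= self.needed
```

Defaulting unknown severities to maximum friction fails safe: an unclassified action gets the most scrutiny, not the least.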