Help users understand AI capabilities and limitations before they interact with the system. This principle addresses the critical need to set accurate expectations at the start of any AI interaction.
Amershi et al.'s (2019) research on human-AI interaction established that users need to understand what an AI system can and cannot do before they begin interacting with it. Without this clarity, users may form unrealistic expectations that lead to frustration, or they may underutilize the system because they never learn its full potential.
The finding? Users who receive clear capability disclosure upfront show 23% higher trust in AI systems and report significantly less frustration when the AI encounters its limitations.
Interface designers communicate AI capabilities immediately. Before the first interaction. Through clear, structured disclosure. With honest acknowledgment of limitations.
The principle: Be upfront. Be specific. Be honest about what AI can and cannot do.
Capability disclosure has become essential as AI systems handle increasingly complex tasks. Research demonstrates that upfront communication of AI abilities directly impacts user satisfaction and appropriate use.
Amershi et al. (2019) conducted extensive research across Microsoft products, analyzing thousands of AI interactions. Their Guidelines for Human-AI Interaction identified "Make clear what the system can do" as the first and foundational guideline. Studies showed that users who understood AI capabilities before interaction had 23% higher trust ratings and were more forgiving when the AI made errors.
Kocielnik et al. (2019) investigated expectation setting in AI assistants. In a study with 300 participants, those who received detailed capability disclosure before using an AI scheduling assistant reported 35% higher satisfaction compared to those who discovered capabilities through trial and error. The disclosure group also had 28% fewer failed interactions.
Yang et al. (2020) examined how capability framing affects user behavior. When AI limitations were disclosed alongside capabilities, users made 40% fewer requests outside the AI's scope, reducing frustration and improving overall experience quality.
Bansal et al. (2021) demonstrated that capability disclosure affects trust calibration. Users who knew AI limitations upfront showed more appropriate reliance patterns—neither over-trusting nor under-trusting the system—leading to better human-AI team performance.
For Users: Understanding AI capabilities enables informed decisions about when and how to use AI assistance. Users can set realistic expectations and avoid frustration from attempting unsupported tasks. Clear disclosure empowers users to leverage AI effectively within its actual scope.
For Designers: Designing capability disclosure creates opportunities for user education and trust-building. Thoughtful disclosure patterns reduce the need for error handling and recovery flows. Honest communication establishes a foundation of trust that benefits the entire product experience.
For Product Managers: Clear capability disclosure reduces support burden from confused users attempting unsupported tasks. Setting accurate expectations upfront increases user satisfaction and reduces churn from disappointment. Products with honest capability framing build stronger long-term user relationships.
For Developers: Well-defined capability boundaries simplify both development and testing. Clear scope communication enables better API design and error messaging. Capability disclosure requirements drive thoughtful feature scoping from the start.
Onboarding capability cards present structured information about what the AI can and cannot do during first-time user experiences. ChatGPT's initial screen shows "ChatGPT can make mistakes. Check important info." alongside examples of supported tasks. The disclosure appears before any interaction occurs, setting expectations from the start.
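The structure of such a card can be sketched as a simple data type rendered before the first interaction. This is a minimal illustration only; the field names and example entries are hypothetical, not ChatGPT's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class CapabilityCard:
    """Structured upfront disclosure shown during first-time onboarding."""
    can_do: list[str]      # supported tasks, with examples
    cannot_do: list[str]   # known limitations, stated honestly
    caveat: str            # standing disclaimer shown on every card

    def render(self) -> str:
        # Present capabilities and limitations side by side, caveat last
        lines = ["What I can help with:"]
        lines += [f"  + {item}" for item in self.can_do]
        lines.append("What I can't do:")
        lines += [f"  - {item}" for item in self.cannot_do]
        lines.append(f"Note: {self.caveat}")
        return "\n".join(lines)

# Hypothetical card contents for illustration
card = CapabilityCard(
    can_do=["draft and edit text", "answer general questions"],
    cannot_do=["access your private files", "browse live data"],
    caveat="Responses may contain mistakes; check important info.",
)
print(card.render())
```

Keeping the disclosure as structured data, rather than free-form copy, makes it easy to reuse the same capability list in onboarding, help pages, and error messages.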
Inline capability hints surface contextually relevant capability information during use. GitHub Copilot shows "Copilot can help with: code completion, documentation, tests" when the extension activates. Hints appear at moments of potential confusion or new feature discovery.
Capability documentation provides detailed reference material accessible from any AI touchpoint. Microsoft Copilot's "What can Copilot do?" link leads to comprehensive documentation organized by task type. Documentation is searchable and includes examples for each capability.
Scope indicators communicate boundaries when users approach AI limitations. Google Assistant displays "I can help with these topics" suggestions when it doesn't understand a query. Indicators redirect users toward supported capabilities rather than simply failing.
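The redirect behavior can be sketched as a fallback that lists supported topics instead of returning a bare failure. The topic set and response wording here are hypothetical, not Google Assistant's actual logic.

```python
# Hypothetical set of topics the assistant supports
SUPPORTED_TOPICS = {"weather", "timers", "reminders", "music"}

def respond(query_topic: str) -> str:
    """Handle in-scope requests; redirect out-of-scope ones toward
    supported capabilities rather than simply failing."""
    if query_topic in SUPPORTED_TOPICS:
        return f"Handling your {query_topic} request..."
    topics = ", ".join(sorted(SUPPORTED_TOPICS))
    return f"I can't help with that yet, but I can help with these topics: {topics}."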
Progressive capability reveal introduces AI features gradually as users demonstrate readiness. Grammarly unlocks advanced AI suggestions as users engage with basic features. Reveal is tied to user behavior, not arbitrary timing.
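A behavior-gated reveal can be sketched as a counter that unlocks advanced features once engagement crosses a threshold. The threshold value and feature names are illustrative assumptions, not Grammarly's actual mechanism.

```python
from dataclasses import dataclass

@dataclass
class ProgressiveReveal:
    """Gate advanced capabilities on demonstrated engagement, not timing."""
    unlock_threshold: int = 5  # basic uses required before advanced features appear
    basic_uses: int = 0

    def record_use(self) -> None:
        # Called each time the user engages with a basic feature
        self.basic_uses += 1

    def available_features(self) -> list[str]:
        features = ["basic suggestions"]
        if self.basic_uses >= self.unlock_threshold:
            features.append("advanced AI suggestions")
        return features
```

Tying the unlock to a usage counter rather than a timer ensures features appear when the user has demonstrated readiness, which is the distinction the pattern hinges on.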