Ensure users maintain meaningful control over AI behavior and can override AI decisions when needed. The goal is for AI to augment human capability rather than replace human judgment, keeping humans in the loop for consequential decisions.
The Shape of AI framework (Campbell, 2024) identifies Governors as critical patterns for maintaining human oversight. AI should empower users, not automate them out of the process.
The finding? User control over AI increases trust by 62%—users who can override AI feel partnership rather than subjugation.
Interface designers enable effective AI control by providing override mechanisms, supporting human judgment, and preventing automation complacency.
The principle: Enable control. Support override. Maintain human agency.
AI user control has become essential as AI systems make increasingly consequential recommendations. Unchecked AI can lead to automation bias, in which users defer to AI even when it is wrong.
Campbell's Shape of AI framework (2024) established Governors as essential: "Users must be able to override, correct, and direct AI behavior. Without control, there is no partnership."
Shneiderman (2020) advocated for "human-centered AI" where users maintain meaningful control: "High human control with high computer automation produces the most reliable, safe, and trustworthy systems." His research showed 62% higher trust with user control.
Parasuraman & Riley (1997) documented automation bias—users' tendency to over-rely on automated systems. User control mechanisms reduced automation errors by 45%.
Amershi et al. (2019) emphasized that users need "ability to edit, modify, or override AI behavior" as a core guideline. Control isn't just preference—it's safety.
For Users: Control means AI works for them, not the other way around. Users can leverage AI assistance while applying their own judgment to final decisions. Controlled AI is a tool; uncontrolled AI is a liability.
For Designers: Designing control requires balancing AI efficiency with human oversight. Good control design makes override natural and non-punitive. Poor control design either hides override or makes it burdensome.
For Product Managers: Control directly affects trust and adoption. Users who feel they've lost control abandon AI features. Users who feel in control become power users.
For Developers: Implementing control requires building override mechanisms, logging user corrections, and using feedback to improve AI while maintaining user agency.
Approval workflows keep humans in the loop. "AI suggests archiving these emails. Approve?" presents the AI's recommendation for a human decision rather than acting automatically. Approval ensures human judgment on consequential actions.
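A minimal sketch of this workflow in TypeScript, under assumed names (`suggestEmailsToArchive`, `archiveEmails`, and the stubbed confirmation dialog are placeholders, not a real API):

```typescript
interface Email {
  id: string;
  subject: string;
}

// Stand-in for a confirmation dialog. A real UI would render the prompt
// and resolve with the user's choice; here it defaults to declining,
// which keeps the safe option the default.
async function askUser(prompt: string): Promise<boolean> {
  console.log(prompt);
  return false;
}

// The AI only proposes; the action runs only after explicit approval.
async function proposeArchive(
  suggestEmailsToArchive: () => Promise<Email[]>,
  archiveEmails: (ids: string[]) => Promise<void>,
): Promise<void> {
  const suggested = await suggestEmailsToArchive();
  if (suggested.length === 0) return;

  const approved = await askUser(
    `AI suggests archiving ${suggested.length} emails. Approve?`,
  );
  if (approved) {
    await archiveEmails(suggested.map((e) => e.id));
  }
}
```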
Override is always available. Even for approved automations, users can intervene. "Stop" or "Cancel" buttons on AI actions in progress ensure users can halt AI when needed.
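One way to keep a Stop button honest is to thread a cancellation signal through the AI task. The sketch below uses the standard AbortController; the step list and logging are illustrative only:

```typescript
// The task checks the signal between steps, so a user's Stop press
// takes effect at the next step boundary rather than being ignored.
async function runAITask(
  steps: Array<() => Promise<void>>,
  signal: AbortSignal,
): Promise<void> {
  for (const step of steps) {
    if (signal.aborted) {
      console.log("AI task stopped by user.");
      return;
    }
    await step();
  }
}

const controller = new AbortController();

// Wire the UI's "Stop" or "Cancel" button to this handler.
const onStopPressed = () => controller.abort();

void runAITask(
  [async () => console.log("step 1"), async () => console.log("step 2")],
  controller.signal,
);
onStopPressed(); // simulates the user pressing Stop mid-run; step 2 never executes
```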
Edit capability modifies AI output. Rather than an accept-or-reject binary, users can adjust AI suggestions. "Almost right—let me tweak this" editing respects AI work while enabling human refinement.
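A sketch of an accept/edit/reject result type, assuming a hypothetical AI-drafted reply; keeping the original alongside the edit preserves the signal for later improvement:

```typescript
type ReviewOutcome =
  | { kind: "accepted"; text: string }
  | { kind: "edited"; text: string; original: string }
  | { kind: "rejected"; original: string };

// No edit means the draft was accepted as-is; an empty edit is treated
// as a rejection; anything else is a human refinement of AI work.
function reviewDraft(aiDraft: string, userEdit?: string): ReviewOutcome {
  if (userEdit === undefined) return { kind: "accepted", text: aiDraft };
  if (userEdit.trim() === "") return { kind: "rejected", original: aiDraft };
  return { kind: "edited", text: userEdit, original: aiDraft };
}

const outcome = reviewDraft(
  "Thanks, I will get back to you tomorrow.",
  "Thanks! I'll get back to you by Friday.",
);
console.log(outcome.kind); // "edited"
```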
Automation levels let users choose involvement. Options such as "Ask me always," "Ask for important decisions," and "Auto-approve routine actions" let users decide how much oversight they want. Different users and contexts need different control levels.
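As a sketch, the chosen level can gate a single approval check; the level names and the `important` flag below are assumptions, not established terminology:

```typescript
type AutomationLevel = "ask-always" | "ask-important" | "auto-routine";

interface ProposedAction {
  description: string;
  important: boolean; // e.g. deletes data, contacts people, spends money
}

// Every proposed action passes through the same gate; only the user's
// chosen level changes how often a confirmation is shown.
function needsApproval(level: AutomationLevel, action: ProposedAction): boolean {
  if (level === "ask-always") return true;
  // The other levels auto-approve routine actions but still surface
  // important decisions for explicit approval.
  return action.important;
}

console.log(needsApproval("ask-always", { description: "Archive 12 emails", important: false }));   // true
console.log(needsApproval("auto-routine", { description: "Archive 12 emails", important: false })); // false
```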
Reject with feedback improves AI. When users override, capturing why helps AI learn. Asking "Why didn't you approve this?" improves future suggestions while validating user judgment.
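A small sketch of capturing that feedback (the field names and in-memory log are assumptions; feedback stays optional so override never becomes a chore):

```typescript
interface RejectionFeedback {
  suggestionId: string;
  reason?: string; // free-text answer to "Why didn't you approve this?"
  timestamp: number;
}

const feedbackLog: RejectionFeedback[] = [];

// The reason is optional: requiring one would make overriding punitive,
// but when given it becomes training and evaluation signal.
function rejectSuggestion(suggestionId: string, reason?: string): void {
  feedbackLog.push({ suggestionId, reason, timestamp: Date.now() });
}

rejectSuggestion("sug-42", "These are from my manager; never archive them.");
```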