Make the AI’s role explicit

Clearly show when AI is involved and what it does within the interaction flow.


Users should never have to guess whether they are interacting with an automated system, what that system is responsible for, or how much control it has over outcomes. Making the AI’s role explicit reduces cognitive load, prevents over-trust or under-trust, and establishes clear accountability boundaries between the system and the user.

This principle is not about legal disclaimers or marketing labels. It is about operational clarity: what the AI does, when it acts, and where its responsibility ends.

A system that hides or blurs the AI’s role may feel “magical” at first, but it quickly becomes unpredictable, hard to reason about, and difficult to trust.

When I design AI-powered experiences, I make the AI's role explicit from the start.

Users should immediately understand whether they're interacting with:

  • a conversational agent,
  • a decision support tool,
  • a creative assistant,
  • or another type of AI system.

Ambiguity about the AI's role leads to:

  • mismatched expectations,
  • confusion about capabilities and limitations,
  • and reduced trust in the system.

By clearly stating the AI's role, I set the right context for the interaction and help users form accurate mental models of what the system can and cannot do.

Explicit AI Presence and Identity

Users should immediately recognize when they're interacting with an AI system, not a human agent.

Do:

  • Use clear visual indicators (icons, badges) to distinguish AI from human agents.
  • State the AI's identity in the first interaction.

Don't:

  • Hide or obscure the AI's identity to make it seem more human.
  • Use ambiguous language that could apply to either AI or humans.
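
To make the pattern concrete, here is a minimal TypeScript sketch, assuming a chat-style interface; every name in it is illustrative rather than taken from a real framework. The message model carries an explicit author field, so the label is derived from data and an AI reply can never be rendered as if it came from a person.

```typescript
// Minimal sketch: every message carries an explicit author type, so the
// UI derives the "AI" badge from data rather than remembering to add it.
type Author = "human_agent" | "ai_assistant";

interface Message {
  author: Author;
  text: string;
}

// The label is computed from the author field, never hard-coded per call site.
function renderMessage(msg: Message): string {
  const badge = msg.author === "ai_assistant" ? "[AI Assistant]" : "[Agent]";
  return `${badge} ${msg.text}`;
}

// The first interaction states the AI's identity and its scope explicitly.
const greeting: Message = {
  author: "ai_assistant",
  text: "Hi, I'm an automated assistant. I can help with billing questions; a human agent can take over at any time.",
};

console.log(renderMessage(greeting));
```

Putting the author on the data model rather than in the view makes it structurally hard to "forget" the badge on a new screen.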

Clear Scope of AI Capabilities

Users should understand what the AI can and cannot do, its limitations, and when it might need human assistance.

Do:

  • List specific capabilities and limitations upfront.
  • Provide clear escalation paths when AI reaches its limits.

Don't:

  • Make users discover limitations through trial and error.
  • Let the AI attempt tasks it cannot complete.
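
A sketch of the same idea in code, assuming intent-based routing; the intent names and the escalateToHuman handler are hypothetical. The point is that the set of supported capabilities is declared in one place, and anything outside it goes to a human instead of being attempted.

```typescript
// Illustrative sketch: capabilities are declared upfront, and requests
// outside that set are escalated instead of attempted.
const SUPPORTED_INTENTS = new Set(["track_order", "change_address", "faq"]);

interface Reply {
  handledBy: "ai" | "human";
  text: string;
}

function handleRequest(
  intent: string,
  escalateToHuman: (intent: string) => Reply,
): Reply {
  if (!SUPPORTED_INTENTS.has(intent)) {
    // Explicit escalation path: the AI states its limit rather than guessing.
    return escalateToHuman(intent);
  }
  return { handledBy: "ai", text: `Handling "${intent}" automatically.` };
}

const reply = handleRequest("cancel_contract", (intent) => ({
  handledBy: "human",
  text: `I can't help with "${intent}" yet. Connecting you to a human agent.`,
}));

console.log(reply.text);
```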

Clear Responsibility and Decision Ownership

Users should know who is responsible for decisions: the AI makes suggestions, but the user retains final control.

Do:

  • Clearly distinguish between AI suggestions and automated actions.
  • Provide clear opt-out mechanisms for automated behaviors.

Don't:

  • Let the AI make irreversible decisions without explicit user consent.
  • Hide or obscure who is responsible for outcomes.
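
The boundary between a suggestion and an action can be encoded directly, as in this hypothetical sketch: the AI produces a Suggestion, and anything irreversible is gated behind an explicit user confirmation.

```typescript
// Sketch of the suggestion/action boundary: the AI proposes, the user decides.
// Anything irreversible requires explicit consent before execution.
interface Suggestion {
  action: string;
  irreversible: boolean;
}

async function execute(
  s: Suggestion,
  confirm: (prompt: string) => Promise<boolean>,
): Promise<string> {
  if (s.irreversible) {
    const ok = await confirm(
      `The AI suggests "${s.action}". This cannot be undone. Proceed?`,
    );
    if (!ok) return "Cancelled by user. Nothing was changed.";
  }
  return `Executed: ${s.action}`;
}

// Usage: a stub confirm handler standing in for a real UI dialog.
execute(
  { action: "delete 42 duplicate contacts", irreversible: true },
  async (prompt) => {
    console.log(prompt);
    return false; // the user declines; the action never runs
  },
).then(console.log);
```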

Explainability at the Right Level

Provide explanations that match the user's needs: high-level reasoning for most users, technical details for experts.

Do:

  • Provide simple, user-friendly explanations by default.
  • Offer progressive disclosure for users who want more detail.

Don't:

  • Overwhelm users with technical details they don't need.
  • Hide explanations entirely or make them hard to find.
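
Progressive disclosure can be as simple as a two-layer explanation object, as in this sketch; the Explanation shape and the example values are assumptions for illustration, not a standard API.

```typescript
// Sketch of layered explanations: a plain-language summary by default,
// technical detail only when the user asks for it.
interface Explanation {
  summary: string; // shown to everyone
  detail?: string; // shown only on request
}

function explain(e: Explanation, expanded: boolean): string {
  return expanded && e.detail ? `${e.summary}\n\nDetails: ${e.detail}` : e.summary;
}

const why: Explanation = {
  summary: "Flagged because the amount is unusually high for this account.",
  detail:
    "Anomaly score 0.91 (threshold 0.80), driven mostly by transaction amount and merchant category.",
};

console.log(explain(why, false)); // default: simple and user-friendly
console.log(explain(why, true));  // expert view: the underlying signals
```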

Explicit Handling of Uncertainty and Errors

When the AI is uncertain or makes mistakes, it should communicate this clearly and provide recovery options.

Do:

  • Acknowledge uncertainty and provide confidence indicators.
  • Take responsibility for errors and provide clear recovery paths.

Don't:

  • Present uncertain answers with the same confidence as certain ones.
  • Blame users or external factors for AI mistakes.
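
One way to operationalize this is to attach a confidence score to every answer and map it to explicit wording, as in this sketch; the thresholds and the Answer shape are illustrative assumptions.

```typescript
// Sketch of surfacing uncertainty: answers carry a confidence score that
// the UI turns into explicit hedging instead of uniform certainty.
interface Answer {
  text: string;
  confidence: number; // 0..1, e.g. from the model or a calibration layer
}

function present(a: Answer): string {
  if (a.confidence >= 0.9) return a.text;
  if (a.confidence >= 0.6) {
    return `${a.text} (I'm fairly confident, but please verify.)`;
  }
  return `I'm not sure. My best guess: ${a.text} Consider asking a specialist.`;
}

console.log(present({ text: "Your plan renews on March 1.", confidence: 0.95 }));
console.log(present({ text: "The fee is likely waived for students.", confidence: 0.5 }));
```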

Why this principle matters

Making the AI's role explicit is fundamental to building trust and setting appropriate expectations.

When users understand the AI's role:

  • they can interact with it more effectively,
  • they have realistic expectations about its capabilities,
  • and they can make informed decisions about when to rely on it.

Without clarity about the AI's role, users may:

  • overestimate or underestimate the system's capabilities,
  • use it in ways it wasn't designed for,
  • or lose trust when the system doesn't meet unstated expectations.