Principles & Ethics

Scope

The OneDesign AI Guidelines apply to all AI-driven experiences across our products, including:

  • Conversational UX: Chatbots, assistants, and dialogue-based interfaces.
  • Recommendations & Personalization: Suggesting content, actions, or configurations.
  • Summarization & Explanations: Condensing information and providing reasoning.
  • Agentive Actions: AI performing tasks proactively or semi-autonomously.

These guidelines ensure responsible, transparent, and user-centric AI design.

Core principles

Transparency
Control
Privacy
Fairness & inclusivity
Explainability
Reliability & safety
Accessibility

01. Transparency

AI should never feel hidden or deceptive. Users must know:

  • When content or decisions are AI-generated.
  • Why a recommendation or action was made.
  • Sources or confidence levels behind outputs.

Example: Show an "AI-generated" badge and provide a "View sources" link for summaries.
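One way to support this pattern is to attach provenance metadata to every AI output so the UI can render the badge and sources link consistently. The following sketch is illustrative; the type and function names (`AiOutput`, `formatBadge`) are assumptions, not part of OneDesign.

```typescript
interface Source {
  title: string;
  url: string;
}

// Every AI output carries the metadata the UI needs for transparency.
interface AiOutput {
  text: string;
  aiGenerated: boolean;
  sources: Source[];
  confidence?: number; // optional model confidence, 0-1
}

// Builds the badge label rendered next to AI content.
function formatBadge(output: AiOutput): string {
  if (!output.aiGenerated) return ""; // human-authored content gets no badge
  const sourceNote =
    output.sources.length > 0
      ? ` · View sources (${output.sources.length})`
      : "";
  return `AI-generated${sourceNote}`;
}
```

Keeping provenance on the data model, rather than in individual components, makes it hard for any surface to render AI content without its badge.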

02. Control

Users remain in charge. Always provide:

  • Options to override, edit, or decline AI suggestions.
  • Clear manual alternatives for critical tasks.
  • Ability to pause or disable AI features.

Example: If AI suggests a document rewrite, include "Accept," "Edit," and "Discard" buttons.
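The accept/edit/discard flow can be modeled so that a suggestion never touches the document until the user decides. A minimal sketch, with illustrative names (`Suggestion`, `resolveSuggestion`):

```typescript
type Decision = "accept" | "edit" | "discard";

interface Suggestion {
  original: string; // text currently in the document
  proposed: string; // AI's proposed rewrite
}

// Returns the text that ends up in the document for each decision.
// The AI proposal is only applied on an explicit user choice.
function resolveSuggestion(
  s: Suggestion,
  decision: Decision,
  edited?: string
): string {
  switch (decision) {
    case "accept":
      return s.proposed;
    case "edit":
      // User modified the proposal before applying it.
      return edited ?? s.proposed;
    case "discard":
      return s.original; // suggestion has no effect
  }
}
```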

03. Privacy

Respect user data and choices:

  • Explain what data is used and why.
  • Ask for explicit consent before sensitive actions (e.g., sharing data externally).
  • Provide easy opt-out and deletion options.

Example: Before enabling personalized recommendations, show a consent dialog explaining data usage.
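A consent gate can enforce this in code: personalization only runs after explicit opt-in, and opting out also deletes the consent metadata. This is a sketch under assumed names (`ConsentState`, `canPersonalize`):

```typescript
interface ConsentState {
  personalization: boolean;
  grantedAt?: Date; // recorded so consent is auditable
}

// Called only after the user confirms the consent dialog.
function grantConsent(): ConsentState {
  return { personalization: true, grantedAt: new Date() };
}

// Opt-out drops the timestamp too, modeling deletion of consent metadata.
function revokeConsent(): ConsentState {
  return { personalization: false };
}

// Every personalization feature checks this gate before using user data.
function canPersonalize(state: ConsentState): boolean {
  return state.personalization;
}
```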

04. Fairness & inclusivity

AI must serve all users equitably:

  • Avoid biased outputs by testing across diverse scenarios.
  • Use culturally aware language.
  • Support localization and accessibility.

Example: Ensure AI-generated text avoids stereotypes and works for multiple languages.

05. Explainability

Users should understand AI reasoning:

  • Offer summaries with sources.
  • Show confidence indicators for uncertain outputs.
  • Include "Why am I seeing this?" links for recommendations.

Example: "Suggested because you viewed similar reports yesterday."
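"Why am I seeing this?" strings are easiest to keep honest when they are generated from the same reason codes the recommender actually used. The reason codes below are hypothetical examples:

```typescript
// Illustrative reason codes a recommender might emit alongside each item.
type ReasonCode = "viewed_similar" | "team_activity" | "trending";

// Maps a machine-readable reason to the user-facing explanation.
function explainRecommendation(code: ReasonCode, detail: string): string {
  switch (code) {
    case "viewed_similar":
      return `Suggested because you viewed ${detail}.`;
    case "team_activity":
      return `Suggested because your team recently worked on ${detail}.`;
    case "trending":
      return `Suggested because ${detail} is trending in your workspace.`;
  }
}
```

Deriving the string from the recommender's own signal avoids explanations that sound plausible but do not match the real reason.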

06. Reliability & safety

AI should fail gracefully:

  • Avoid definitive claims when uncertain.
  • Provide clear error messages and recovery steps.
  • Calibrate outputs to prevent harmful or misleading content.

Example: If AI cannot answer, say: "I'm not confident about this. Would you like to search manually?"
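Graceful failure can be implemented as a confidence threshold: below it, the assistant declines and offers a recovery path instead of guessing. The threshold value here is an assumption for illustration, not a OneDesign requirement:

```typescript
// Assumed cutoff; tune per product based on observed answer quality.
const CONFIDENCE_THRESHOLD = 0.7;

interface ModelAnswer {
  text: string;
  confidence: number; // 0-1
}

// Low-confidence answers are replaced with a decline + recovery step.
function renderAnswer(answer: ModelAnswer): string {
  if (answer.confidence < CONFIDENCE_THRESHOLD) {
    return "I'm not confident about this. Would you like to search manually?";
  }
  return answer.text;
}
```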

07. Accessibility

AI features must be usable by everyone:

  • Support keyboard navigation and screen readers.
  • Maintain WCAG-compliant contrast ratios.
  • Ensure responsive layouts for all devices.

Example: AI chat should announce new messages via ARIA live regions for screen readers.
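The attribute values follow WAI-ARIA conventions; a small helper can keep them consistent across chat surfaces. The helper itself is illustrative:

```typescript
// Attributes for the chat message container so screen readers announce
// newly appended messages without stealing focus.
function liveRegionAttributes(interrupting = false): Record<string, string> {
  return {
    role: "log", // chat history is a log of messages
    // "polite" waits for the screen reader to finish; "assertive" interrupts,
    // so reserve it for urgent messages only.
    "aria-live": interrupting ? "assertive" : "polite",
    "aria-relevant": "additions", // announce appended messages, not edits
  };
}
```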

Agent vs assistant

Assistant (reactive)

  • Responds to user prompts.
  • Limited autonomy.
  • Typically powered by LLMs + RAG (Retrieval-Augmented Generation).
  • Waits for user input before acting.

Example: An AI assistant that summarizes a document when asked.

Agent (proactive)

  • Can plan tasks, call tools, and act autonomously.
  • Requires auditability and reversible changes.
  • Must use stricter safeguards (confirmation dialogs, undo options).

Example: An AI agent that schedules meetings automatically based on email context.
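The auditability and reversibility requirements above can be sketched as a pattern where every autonomous action records an inverse operation and an audit entry. All names here are illustrative:

```typescript
// Every agent action pairs its effect with an inverse for undo.
interface AgentAction {
  description: string;
  apply: () => void;
  undo: () => void;
}

const auditLog: string[] = []; // human-readable trail of what the agent did
const undoStack: AgentAction[] = [];

// Applies an action, logs it, and keeps it reversible.
function perform(action: AgentAction): void {
  action.apply();
  auditLog.push(`applied: ${action.description}`);
  undoStack.push(action);
}

// Reverts the most recent agent action and records the reversal.
function undoLast(): void {
  const action = undoStack.pop();
  if (!action) return;
  action.undo();
  auditLog.push(`undone: ${action.description}`);
}
```

Refusing to register an action without an `undo` implementation is one way to make reversibility a structural requirement rather than a convention.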