Validating AI Outputs

Set up automated checks that catch design system violations in AI-generated work before it reaches production. Validation acts as a safety net, ensuring outputs meet your standards even when human reviewers miss issues.

How to

  1. Identify validation points

    Decide where to validate: pre-commit hooks, CI/CD pipeline, design tool plugins, documentation builds.
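A pre-commit hook is often the earliest of these points. The sketch below, with assumed file suffixes and a placeholder for the real checks, shows the shape of one: collect staged files, keep the ones the validators understand, and run checks on those.

```python
import subprocess

# Hypothetical pre-commit hook sketch. The suffix list and helper
# names are assumptions, not a real tool's API.
CHECKED_SUFFIXES = (".ts", ".tsx", ".css")

def relevant(paths):
    """Filter staged paths down to files worth validating."""
    return [p for p in paths if p.endswith(CHECKED_SUFFIXES)]

def staged_paths():
    """Ask git for staged files; only meaningful inside a repository."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True,
    ).stdout
    return out.splitlines()

# In a real hook you would pass staged_paths() here and exit non-zero
# when any check fails; a sample list keeps this sketch self-contained.
print(relevant(["src/Card.tsx", "README.md", "styles/app.css"]))
# → ['src/Card.tsx', 'styles/app.css']
```

The same `relevant` filter can be reused in the CI/CD pipeline, so local and remote validation stay in sync.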

  2. Build automated checks

    Create validators for: token usage correctness, component API compliance, naming convention adherence, accessibility requirements, code quality standards.

    • Example checks: scan for hardcoded colours (flag hex values such as #FFFFFF or rgb() calls), verify all components import from the design system package, check that prop names follow naming conventions, and validate that required ARIA attributes are present.
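The first of those checks can be a short regex scan. This is a minimal sketch, not a production linter; the pattern and function name are assumptions:

```python
import re

# Flag hardcoded colour values that should come from design tokens.
# Covers hex literals (#FFF, #FFFFFF, #FFFFFF00) and rgb()/rgba() calls.
HARDCODED_COLOR = re.compile(r"#[0-9a-fA-F]{3,8}\b|rgba?\(")

def find_hardcoded_colors(source: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs containing hardcoded colours."""
    return [
        (n, line.strip())
        for n, line in enumerate(source.splitlines(), start=1)
        if HARDCODED_COLOR.search(line)
    ]

css = """.card { background: #FFFFFF; }
.ok   { background: var(--color-surface-primary); }
.warn { border-color: rgb(255, 0, 0); }"""
print(find_hardcoded_colors(css))
# → [(1, '.card { background: #FFFFFF; }'), (3, '.warn { border-color: rgb(255, 0, 0); }')]
```

Note the token-based line passes: `var(--color-surface-primary)` contains neither a hex literal nor an rgb() call, which is exactly the behaviour you want to enforce.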
  3. Integrate with workflow

    Add validation to existing processes. Flag issues in pull requests, design tool layers, or documentation builds with clear explanations.
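For pull requests, that usually means turning validator findings into a readable comment. A sketch, with an assumed `Violation` shape; actually posting the comment through your CI provider's API is left out:

```python
from dataclasses import dataclass

@dataclass
class Violation:
    file: str
    line: int
    rule: str
    message: str

def format_pr_comment(violations: list[Violation]) -> str:
    """Render findings as a pull-request comment body."""
    if not violations:
        return "Design-system validation passed."
    lines = ["Design-system validation found issues:", ""]
    for v in violations:
        lines.append(f"- {v.file}:{v.line} [{v.rule}] {v.message}")
    return "\n".join(lines)

report = format_pr_comment([
    Violation("src/Card.tsx", 12, "no-hardcoded-color",
              "Use color.surface.primary instead of #FFFFFF"),
])
print(report)
```

Keeping the rule name in each line gives reviewers the "clear explanation" the step calls for, and makes the comment greppable.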

  4. Provide fix suggestions

    When validation fails, suggest corrections: "Use color.surface.primary instead of #FFFFFF" or "Add aria-label for accessibility".
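Suggestions like these can come from a simple lookup table mapping known hardcoded values to their tokens. The table below is a hypothetical example; in practice it would be generated from your design-token source of truth:

```python
# Assumed hex-to-token mapping; generate this from your token files.
HEX_TO_TOKEN = {
    "#FFFFFF": "color.surface.primary",
    "#1A1A1A": "color.text.primary",
}

def suggest_fix(hex_value: str) -> str:
    """Turn a flagged hex value into an actionable correction."""
    token = HEX_TO_TOKEN.get(hex_value.upper())
    if token:
        return f"Use {token} instead of {hex_value}"
    return f"Replace {hex_value} with the nearest design token"

print(suggest_fix("#ffffff"))
# → Use color.surface.primary instead of #ffffff
```

Falling back to a generic message keeps the validator useful even for values the table doesn't know yet.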

  5. Track violation patterns

    Monitor which rules AI violates most often. Use this data to improve your Rules for AI Generation and Prompt Libraries for AI.
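Tallying a validation log is enough to surface the noisiest rules. A minimal sketch, with a made-up log format:

```python
from collections import Counter

# Hypothetical validation log entries; in practice these would be
# collected from CI runs.
log = [
    {"rule": "no-hardcoded-color"},
    {"rule": "missing-aria-label"},
    {"rule": "no-hardcoded-color"},
]

# Count failures per rule, most frequent first.
counts = Counter(entry["rule"] for entry in log)
for rule, n in counts.most_common():
    print(f"{rule}: {n}")
```

Rules at the top of this list are the first candidates for clearer wording in your generation rules or better examples in your prompt library.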

  6. Balance strictness

    Don't block everything; some violations may be intentional. Allow overrides with justification, and track those overrides to inform future rules.
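One way to implement justified overrides is an inline suppression comment that the validator only honours when a reason is supplied. The `ds-allow(...)` syntax below is invented for illustration:

```python
import re

# Hypothetical override syntax: ds-allow(rule-name: justification).
OVERRIDE = re.compile(r"ds-allow\((?P<rule>[\w-]+):\s*(?P<reason>.+)\)")

def check_override(line: str, rule: str):
    """Return the justification if `line` overrides `rule`, else None."""
    m = OVERRIDE.search(line)
    if m and m.group("rule") == rule and m.group("reason").strip():
        return m.group("reason").strip()
    return None

line = "background: #FFFFFF; /* ds-allow(no-hardcoded-color: print stylesheet) */"
print(check_override(line, "no-hardcoded-color"))
# → print stylesheet
```

Because a bare `ds-allow(rule:)` with no reason returns None, silent suppressions still fail validation, and the captured justifications can be logged alongside the violation counts from the previous step.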