Validating AI Outputs

Set up automated checks that catch system violations in AI-generated work before it reaches production. Validation acts as a safety net, ensuring outputs meet your standards even when humans miss issues in review.

How to
- Identify validation points: Decide where to validate: pre-commit hooks, CI/CD pipeline, design tool plugins, documentation builds.
- Build automated checks: Create validators for token usage correctness, component API compliance, naming convention adherence, accessibility requirements, and code quality standards.
- Example checks: Scan for hardcoded colours (flag raw # hex or rgb values), verify all components import from the design system package, check prop names follow conventions, validate ARIA attributes are present.
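As an illustrative sketch of two such checks, the validator below flags hardcoded colours and missing design-system imports. The regexes, the package name @acme/design-system, and the component name are hypothetical; adapt them to your own system.

```python
import re

# Hypothetical patterns; adjust to your design system's conventions.
HARDCODED_COLOUR = re.compile(r"#[0-9a-fA-F]{3,8}\b|rgba?\(")
DESIGN_SYSTEM_IMPORT = re.compile(r"from ['\"]@acme/design-system['\"]")

def validate_source(source: str) -> list[str]:
    """Return human-readable violations found in a source file's text."""
    violations = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if HARDCODED_COLOUR.search(line):
            violations.append(
                f"line {lineno}: hardcoded colour; use a design token instead"
            )
    # Assumed convention: any file using a component must import the package.
    if "Button" in source and not DESIGN_SYSTEM_IMPORT.search(source):
        violations.append("components must import from the design system package")
    return violations

print(validate_source('const c = "#FFFFFF";'))
```

Returning plain strings keeps the validator independent of any one reporting surface, so the same function can feed a PR comment, a CLI, or a documentation build.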
- Integrate with workflow: Add validation to existing processes. Flag issues in pull requests, design tool layers, or documentation builds with clear explanations.
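One way to wire a check into an existing process is a small script run from a pre-commit hook or CI step that prints clear explanations and fails the build when violations are found. This is a sketch: the colour regex is illustrative, and the file list would come from something like git diff --name-only in practice.

```python
import re
import sys

HARDCODED_COLOUR = re.compile(r"#[0-9a-fA-F]{3,8}\b")

def check_files(paths: list[str]) -> list[str]:
    """Validate each file; return messages formatted for a PR comment or log."""
    messages = []
    for path in paths:
        with open(path, encoding="utf-8") as f:
            for lineno, line in enumerate(f, start=1):
                if HARDCODED_COLOUR.search(line):
                    messages.append(
                        f"{path}:{lineno}: hardcoded colour; use a design token"
                    )
    return messages

if __name__ == "__main__" and sys.argv[1:]:
    problems = check_files(sys.argv[1:])
    for msg in problems:
        print(msg)
    sys.exit(1 if problems else 0)  # non-zero exit blocks the commit or CI job
```

A pre-commit hook would invoke this with the staged file names; in CI, the same script can annotate the pull request instead of only failing it.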
- Provide fix suggestions: When validation fails, suggest corrections: "Use color.surface.primary instead of #FFFFFF" or "Add aria-label for accessibility".
- Track violation patterns: Monitor which rules AI violates most often. Use this data to improve your Rules for AI Generation and Prompt Libraries for AI.
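Tracking violation patterns can be as simple as logging a rule ID each time a check fails and aggregating the counts. A minimal sketch, with invented rule names:

```python
from collections import Counter

violation_log: Counter = Counter()

def record_violation(rule_id: str) -> None:
    """Count each failed rule so recurring offenders surface over time."""
    violation_log[rule_id] += 1

# Simulated findings from several AI-generated pull requests.
for rule in ["hardcoded-colour", "missing-aria-label",
             "hardcoded-colour", "bad-prop-name", "hardcoded-colour"]:
    record_violation(rule)

# The most frequent violation points at the rule or prompt worth improving first.
print(violation_log.most_common(1))  # → [('hardcoded-colour', 3)]
```

In a real pipeline the log would persist across runs (a file, a database, or CI artifacts) so trends are visible over weeks, not single builds.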
- Balance strictness: Don't block everything; some violations might be intentional. Allow overrides with justification, and track these to inform future rules.