Define clear boundaries for what AI can and can't generate, and what quality standards outputs must meet. Without rules, AI-generated work drifts from your system, creating inconsistency and maintenance burden.

Rules for AI Generation
How to
- Define the allowed scope: specify what AI can generate (documentation drafts, code snippets, token variations, test cases) and what requires human creation (design rationale, strategic decisions, brand expression).
  - Example rules: AI can draft component specs and generate token variations. AI cannot make decisions about component API design, create new semantic tokens, or write accessibility guidance without review.
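The example rules above can be encoded as data so tooling can enforce them. This is a minimal sketch; the category names and the helper are illustrative assumptions, not part of any standard:

```python
# Hypothetical scope rules, encoded so a pipeline can check them.
# Category names are illustrative -- adapt them to your own system.
AI_ALLOWED = {
    "documentation_draft",
    "code_snippet",
    "token_variation",
    "test_case",
    "component_spec_draft",
}

HUMAN_REQUIRED = {
    "design_rationale",
    "strategic_decision",
    "brand_expression",
    "component_api_design",
    "new_semantic_token",
}

def is_ai_allowed(task_category: str) -> bool:
    """Return True only if the category is explicitly allowed for AI."""
    if task_category in HUMAN_REQUIRED:
        return False
    return task_category in AI_ALLOWED
```

Treating unknown categories as disallowed keeps the default conservative: anything not explicitly permitted falls back to human creation.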
- Set quality criteria: establish standards AI outputs must meet, such as accessibility compliance, correct token usage, naming conventions, code structure, and documentation completeness.
  - Example criteria: all AI-generated code must use system tokens (no hardcoded values), include prop types, follow naming conventions, and pass automated accessibility checks.
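Some criteria are automatable. As a sketch of one such check, a simple scan can flag hardcoded colour literals that should be system tokens (the regex and function name here are assumptions for illustration):

```python
import re

# Flags hardcoded hex colour values in generated code, which the example
# criteria above disallow in favour of system tokens.
HARDCODED_COLOR = re.compile(r"#[0-9a-fA-F]{3,8}\b")

def find_hardcoded_colors(source: str) -> list[str]:
    """Return every hardcoded hex colour literal found in the source."""
    return HARDCODED_COLOR.findall(source)
```

A check like this catches only one failure mode; it complements, rather than replaces, human review against the full criteria list.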
- Create validation checklists: build checklists for reviewing AI-generated work, including system-specific checks and common AI failure modes. You can feed these into your Minimum Viable Checklist.
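A checklist works best as shared data, so every review walks the same items. A minimal sketch, with illustrative item wording drawn from the example criteria above:

```python
# Hypothetical review checklist for AI-generated work.
CHECKLIST = [
    "Uses system tokens only (no hardcoded values)",
    "Follows naming conventions",
    "Includes prop types",
    "Passes automated accessibility checks",
    "Accessibility guidance reviewed by a human",
]

def unresolved_items(results: dict[str, bool]) -> list[str]:
    """Return checklist items that failed or were never evaluated."""
    return [item for item in CHECKLIST if not results.get(item, False)]
```

Treating unevaluated items as failures means nothing ships on a half-completed review.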
- Document common failures: track where AI typically goes wrong (missing accessibility features, incorrect token usage, overly generic solutions) and share these patterns with your team.
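Even a lightweight tally makes failure patterns visible. A sketch, assuming a simple in-memory counter (a shared spreadsheet or issue label works just as well):

```python
from collections import Counter

# Hypothetical tally of observed AI failure modes; the most frequent
# ones feed back into checklists and generation rules.
failure_log: Counter[str] = Counter()

def record_failure(mode: str) -> None:
    """Log one observed failure of the given mode."""
    failure_log[mode] += 1

def top_failures(n: int = 3) -> list[tuple[str, int]]:
    """Return the n most common failure modes and their counts."""
    return failure_log.most_common(n)
```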
- Define modification rules: clarify when AI outputs can be used as-is, when they need review, and when they are only starting points for human work.
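These three handling levels can be written down as an explicit mapping. A sketch, with assumed category names and a conservative default for anything unlisted:

```python
# Hypothetical mapping from output category to required handling.
# Levels: "use_as_is", "needs_review", "starting_point".
USAGE_RULES = {
    "token_variation": "use_as_is",
    "test_case": "needs_review",
    "documentation_draft": "needs_review",
    "component_spec_draft": "starting_point",
}

def handling_for(category: str) -> str:
    """Unknown categories default to the strictest level."""
    return USAGE_RULES.get(category, "starting_point")
```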
- Update based on learnings: refine rules as you learn more about AI capabilities and limitations, and share successful patterns and new failure modes.