Auditing AI Usage

Regularly audit how AI is being used across your organisation and where it's creating drift from your system. Audits reveal patterns in AI usage, highlight training needs, and show where your system structure needs improvement.

How to
- Set audit cadence: Run lightweight checks monthly and deeper audits quarterly. Align with your existing system health check rhythm.
- Scan for AI-generated content: Check codebases for telltale AI patterns such as repetitive structures, tool-specific signatures, or common AI mistakes. Review design files for AI-generated components or content.
  - Look for: repetitive comment patterns, overly verbose variable names, generic prop names like `data` or `config`, missing edge case handling, or multiple similar components with slight variations.
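A scan like this can be partially automated. Below is a minimal sketch of a codebase scanner that flags lines matching a few of the patterns above; the file extensions and the regexes in `SUSPECT_PATTERNS` are illustrative assumptions you would tune for your own stack, not a definitive rule set.

```python
import re
from pathlib import Path

# Illustrative patterns that often flag AI-generated code; tune for your stack.
SUSPECT_PATTERNS = {
    "generic prop name": re.compile(r"\b(data|config|item|value)\s*[:=]"),
    "boilerplate comment": re.compile(r"#\s*(TODO: implement|Helper function)", re.IGNORECASE),
}

def scan_file(path: Path) -> list[tuple[int, str]]:
    """Return (line_number, pattern_name) hits for one source file."""
    hits = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
        for name, pattern in SUSPECT_PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, name))
    return hits

def scan_codebase(root: str, extensions=(".py", ".ts", ".tsx")) -> dict[str, list]:
    """Scan every matching file under root and collect flagged lines."""
    return {
        str(p): hits
        for p in Path(root).rglob("*")
        if p.is_file() and p.suffix in extensions and (hits := scan_file(p))
    }
```

The output maps each flagged file to its suspect lines, which gives auditors a starting list to review by hand rather than a verdict: every hit still needs human judgement.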
- Interview teams: Ask how they're using AI with the system: which tasks, which tools, what works well, where they struggle. Identify workarounds and pain points.
  - Ask: Which tasks do you use AI for? Which prompts work well? Where does AI output need heavy editing? What system violations does AI commonly make?
- Compare against standards: Measure AI-generated work against your Rules for AI Generation. Track compliance rates and common deviations.
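Compliance tracking can be as simple as tallying rule violations per reviewed piece of AI-generated work. The sketch below assumes a hypothetical record shape (an `id` plus a list of violated rule names); the violation labels are invented examples, not rules from any real system.

```python
from collections import Counter

def compliance_report(audit_records):
    """Summarise audited AI-generated work.

    audit_records: list of dicts like
      {"id": "PR-101", "violations": ["raw-hex-color", "missing-alt-text"]}
    (hypothetical shape). A record with no violations counts as compliant.
    Returns the overall compliance rate and the most common deviations.
    """
    total = len(audit_records)
    compliant = sum(1 for r in audit_records if not r["violations"])
    deviations = Counter(v for r in audit_records for v in r["violations"])
    return {
        "compliance_rate": compliant / total if total else 0.0,
        "top_deviations": deviations.most_common(3),
    }
```

Tracking the same numbers audit over audit shows whether your Rules for AI Generation are actually improving output quality or need rewording.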
- Identify drift patterns: Look for systematic drift, such as teams consistently bypassing certain components, token misuse patterns, or repeated accessibility issues.
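One way to separate systematic drift from one-off mistakes is to count how many audits an issue recurs in. This is a minimal sketch under assumed inputs: each audit is a hypothetical dict mapping a team name to the set of issues observed, and the issue labels are invented examples.

```python
from collections import Counter

def systematic_drift(audits, threshold=3):
    """Flag (team, issue) pairs that recur across audits.

    audits: audit results in chronological order, each a dict mapping
      team -> set of observed issues (hypothetical shape), e.g.
      {"web": {"bypassed-Button"}, "mobile": {"token-misuse"}}.
    A pair seen in at least `threshold` audits is treated as systematic
    drift rather than a one-off slip.
    """
    counts = Counter()
    for audit in audits:
        for team, issues in audit.items():
            for issue in issues:
                counts[(team, issue)] += 1
    return {pair for pair, n in counts.items() if n >= threshold}
```

Recurring pairs are the ones worth a structural fix (better component docs, clearer rules) rather than a one-time correction.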
- Share findings: Report audit results to stakeholders. Highlight successful AI usage and areas needing attention. Update guidance and rules based on learnings.
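A lightweight way to keep stakeholder reporting consistent between audits is to generate the summary from the audit data itself. The sketch below assumes a hypothetical `findings` shape (period label, compliance rate, lists of wins and attention items); the field names and the Markdown layout are assumptions, not a prescribed template.

```python
def format_audit_summary(findings):
    """Render audit findings as a short Markdown summary for stakeholders.

    findings: dict with keys "period", "compliance_rate", "wins",
    and "attention" (hypothetical shape).
    """
    lines = [
        f"## AI Usage Audit: {findings['period']}",
        f"Compliance rate: {findings['compliance_rate']:.0%}",
        "",
        "**What worked well**",
    ]
    lines += [f"- {win}" for win in findings["wins"]]
    lines += ["", "**Needs attention**"]
    lines += [f"- {item}" for item in findings["attention"]]
    return "\n".join(lines)
```

Generating the report from the same data every quarter makes trends across audits easy to compare.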