Auditing AI Usage

Regularly audit how AI is used across your organisation and where it creates drift from your system. Audits reveal patterns in AI usage, highlight training needs, and show where your system's structure needs improvement.

How to

  1. Set audit cadence

    Run lightweight checks monthly and deeper audits quarterly. Align with your existing system health check rhythm.

  2. Scan for AI-generated content

    Check codebases for AI-generated patterns: repetitive structures, tool-specific signatures, or common AI mistakes. Review design files for AI-generated components or content.

    • Look for: Repetitive comment patterns, overly verbose variable names, generic prop names like data or config, missing edge case handling, or multiple similar components with slight variations.

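A scan like this can be partly automated. The sketch below, in Python, flags files matching a few of the heuristics listed above; the regexes and file extensions are illustrative assumptions to be tuned to your own codebase and rules, not a definitive detector.

```python
import re
from pathlib import Path

# Hypothetical heuristics for AI-generated patterns; adjust the
# regexes to match the signatures you actually see in audits.
HEURISTICS = {
    "generic prop name": re.compile(r"\b(?:data|config)\s*[:=]"),
    "repetitive comment": re.compile(r"^\s*//\s*(?:TODO: implement|Handle the)", re.MULTILINE),
}

def scan(root, extensions=(".ts", ".tsx", ".js", ".jsx")):
    """Return {file path: [matched heuristic names]} for files under root."""
    findings = {}
    for path in Path(root).rglob("*"):
        if path.suffix not in extensions:
            continue
        text = path.read_text(errors="ignore")
        hits = [name for name, rx in HEURISTICS.items() if rx.search(text)]
        if hits:
            findings[str(path)] = hits
    return findings
```

Treat the output as a list of files to review by hand, not as a verdict: these heuristics also match human-written code.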
  3. Interview teams

    Ask how they're using AI with the system: which tasks, which tools, what works well, where they struggle. Identify workarounds and pain points.

    • Ask: Which tasks do you use AI for? Which prompts work well? Where does AI output need heavy editing? What system violations does AI commonly make?

  4. Compare against standards

    Measure AI-generated work against your Rules for AI Generation. Track compliance rates and common deviations.
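Compliance rates can be computed from simple pass/fail review records. A minimal sketch, assuming each sampled piece of AI-generated work is reviewed against named rules (the rule names below are placeholders for your own Rules for AI Generation):

```python
from collections import Counter

def compliance_report(reviews):
    """reviews: iterable of (rule_name, passed) pairs from sampled work.
    Returns {rule_name: compliance_rate} for tracking over time."""
    totals, passes = Counter(), Counter()
    for rule, passed in reviews:
        totals[rule] += 1
        if passed:
            passes[rule] += 1
    return {rule: passes[rule] / totals[rule] for rule in totals}

# Example audit sample with hypothetical rule names
sample = [
    ("use-design-tokens", True), ("use-design-tokens", False),
    ("no-inline-styles", True), ("no-inline-styles", True),
]
print(compliance_report(sample))
# {'use-design-tokens': 0.5, 'no-inline-styles': 1.0}
```

Recording results per rule, rather than as a single overall score, makes common deviations visible between audits.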

  5. Identify drift patterns

    Look for systematic drift: teams consistently bypassing certain components, token misuse patterns, or repeated accessibility issues.
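Component bypasses are one drift signal that is easy to tally. A rough illustration: count raw HTML elements that should have been system components. The element-to-component mapping here is an assumption; substitute your own component library's names.

```python
from collections import Counter

# Assumed mapping of raw elements to the system components that
# should replace them.
BYPASSES = {"<button": "Button", "<input": "TextField", "<select": "Select"}

def drift_counts(sources):
    """Tally raw-element usages across an iterable of file contents."""
    counts = Counter()
    for text in sources:
        for raw, component in BYPASSES.items():
            n = text.count(raw)
            if n:
                counts[f"{raw}> instead of {component}"] += n
    return counts
```

Run across audits, rising counts for the same element point to a systematic pattern, such as a team consistently bypassing one component, rather than a one-off mistake.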

  6. Share findings

    Report audit results to stakeholders. Highlight successful AI usage and areas needing attention. Update guidance and rules based on learnings.