System Health Check

A System Health Check combines feedback with evidence to show how well the system is working in practice. It highlights what teams value, where they struggle, and what is missing. Use this to guide improvements, prioritise roadmap items, and keep adoption efforts grounded in real usage. Pair with Success Metrics to measure both lived experience and quantifiable outcomes.

How to

  1. Set the scope

    Decide whether to look at the whole system, a single domain, or a recent release. Keep it clear and achievable.

  2. Gather input

    Collect insights through different formats. Use small group sessions to dig deep with users, run async surveys to reach more people quickly, or try Journey Mapping to capture frustrations and handoffs across disciplines.

    • Scan codebases for component usage patterns and token adoption rates
    • Feed feedback from multiple channels (Slack, surveys, interviews) to AI for theme clustering

  3. Audit usage

    Review design files, code repos, or analytics to see how tokens, components, and patterns are actually being used.

  4. Look for themes

    Cluster findings into strengths, gaps, and opportunities. Note adoption blockers or recurring pain points.

  5. Check against metrics

    Compare findings with your Success Metrics to see if outcomes on paper match lived experience.

  6. Share outcomes

    Summarise findings in a clear way and make them accessible to the wider org. Highlight priority improvements and next steps.

  7. Repeat regularly

    Set a predictable, maintainable cadence. Run larger health checks less often, such as quarterly or yearly. Use smaller checks a few months after a release to see how changes are landing.
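The usage audit in step 3 can be partly automated. Below is a minimal sketch that tallies design-system component imports across a JS/TS codebase; the package name `@acme/design-system` is a hypothetical placeholder, and real audits would also need to handle default imports, re-exports, and design-file analytics.

```python
import re
from collections import Counter
from pathlib import Path

# Hypothetical package name; replace with your design system's actual package.
DS_PACKAGE = "@acme/design-system"

# Matches named imports, e.g.: import { Button, Card } from "@acme/design-system"
IMPORT_RE = re.compile(
    r'import\s*\{([^}]+)\}\s*from\s*["\']' + re.escape(DS_PACKAGE) + r'["\']'
)

def count_component_usage(root: str) -> Counter:
    """Tally design-system component imports across .js/.jsx/.ts/.tsx files."""
    counts = Counter()
    for path in Path(root).rglob("*.[jt]s*"):
        for match in IMPORT_RE.finditer(path.read_text(errors="ignore")):
            for name in match.group(1).split(","):
                if name.strip():
                    counts[name.strip()] += 1
    return counts
```

Sorting the resulting counter surfaces both the workhorse components and the ones nobody adopts, which feeds directly into the strengths/gaps clustering in step 4.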

Synthesise at scale: AI can synthesise feedback across channels and identify usage patterns from code. It can even generate health scorecards, but you still need to interpret the results and decide what action to take.
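To make the theme-clustering idea concrete, here is a deliberately simple, deterministic stand-in: keyword-based bucketing of raw feedback strings. The theme names and keywords are illustrative assumptions; in practice you would tune them to your own feedback vocabulary, or hand the raw items to an LLM and review its clusters.

```python
from collections import defaultdict

# Hypothetical theme keywords; tune these to the vocabulary of your feedback.
THEMES = {
    "documentation": ["docs", "documentation", "example", "guide"],
    "tokens": ["token", "colour", "spacing", "typography"],
    "adoption": ["migrate", "legacy", "adopt", "upgrade"],
}

def cluster_feedback(items: list[str]) -> dict[str, list[str]]:
    """Bucket raw feedback strings into themes by keyword match."""
    clusters = defaultdict(list)
    for item in items:
        lowered = item.lower()
        matched = False
        for theme, keywords in THEMES.items():
            if any(k in lowered for k in keywords):
                clusters[theme].append(item)
                matched = True
        if not matched:
            clusters["unclustered"].append(item)
    return dict(clusters)
```

Even a crude pass like this makes the "unclustered" bucket visible, which is often where the surprises live; a reviewer (human or AI) then decides which themes become roadmap items.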