Use this when: your executive team has quietly stopped trusting the workforce dashboard, but the organisation still depends on it for hiring, promotion, succession, and learning investment decisions. Your move to skills-based workforce planning is exposing that self-reported skill data cannot carry that weight. AI adoption is generating new questions ("can this team govern AI well?") that your existing data was never built to answer.
This is not a dashboard refresh or a visualisation project. It is a fundamental shift from measuring what people have accessed to measuring what your organisation can actually do, right now, and being able to show how that is changing.
Six patterns we keep finding inside workforce dashboards.
These are not data quality problems. They are baked into how the data was designed. Naming them honestly is the first step in the rebuild, and the part most analytics vendors will not help with.
01 · Completion does not equal capability
High completion rates that do not correlate with performance. The organisation is measuring access to learning, not development. Reported as ROI; functions as overhead.
02 · Self-rating gets the answer wrong
Early-career people rate themselves at or above experienced practitioners. Skills inventories built on self-assessment cannot be trusted for any high-stakes workforce decision.
03 · Annual snapshots, presented as live
Skills profiles that update once a year (or never) are reported as a real-time picture. The data cannot show capability degrading, growing through a project, or moving in any meaningful way.
04 · The most important learning is invisible
Platforms see who clicked what, each person on their own. They cannot see who teams go to when stuck, what knowledge moves between people, or where shared sensemaking is happening. The most consequential capability is invisible to the system.
05 · False precision, sophisticated-looking
Five-point ratings averaged to two decimals. Satisfaction scores treated as quality indicators. Survey results presented as evidence. Sophisticated-looking analysis built on data that cannot bear the weight.
06 · Local progress, collective decline
Individual learning metrics improve while collective capability erodes, because the conditions that produce shared understanding are not measured. The dashboard says progress; the system is hollowing out.
A small set of signals that actually answer the leadership question.
We replace the existing flood of indicators with three to five capability signals chosen because they predict performance, not because they are easy to collect. Each one comes with the rules for when it can be acted on, and what it cannot tell you.
Signals about how people learn at work
Where people get stuck, what they reach for when they do, which problems are recurring across teams, and how quickly the organisation is learning from its own mistakes.
Signals about how knowledge moves
Who gets consulted, what gets documented, where collaboration happens, where it has been quietly optimised away, and where the organisation's expertise actually lives.
Signals about what people can actually do
Performance on real work, quality of deliverables, speed of recovery, judgement under pressure. The only signals that close the loop, because this is the only place capability is actually demonstrated.
Audit. Design. Pilot.
A realistic shape. The audit is the work, not a precursor to it. The design phase is short because we have spent years on the underlying frame. The pilot is where the real risk sits, so we run it carefully and stay close.
Audit and honest read-out
A structured look at what is being collected, what is being used to make decisions, and what is being quietly ignored. We then map which of the six patterns above are active in your data, with named evidence and severity.
- 5-7 hours of conversations with the CHRO, Head of L&D, Head of Talent or Workforce Planning, People Analytics lead, and 2-3 business unit leaders.
- Review of the systems that hold workforce data: LMS, skills platform, HRIS, performance management, and project or delivery systems where they exist.
- Quantitative checks against your real data: does completion correlate with performance, do self-ratings match observed work, how fresh is the skills data really (a sketch of these checks follows this list).
- A list of the actual decisions leadership needs the data to support, mapped against whether the current data can answer them.
- A leadership read-out that holds the discomfort of the findings without retreating to defensiveness.
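To make the quantitative checks concrete, here is a minimal sketch of what they can look like, assuming the LMS, skills, and performance records have already been joined into a single table. The column names (completion_rate, performance_score, self_rating, manager_rating, skills_last_updated) are illustrative placeholders, not a description of your systems.

```python
# Minimal sketch of the audit's quantitative checks, assuming a pandas
# DataFrame joined from the LMS, skills platform, and performance system.
import pandas as pd

def audit_checks(df: pd.DataFrame, as_of: pd.Timestamp) -> dict:
    # Pattern 01: does completion track performance at all?
    # Spearman is used because neither measure is reliably interval-scaled.
    completion_vs_performance = df["completion_rate"].corr(
        df["performance_score"], method="spearman"
    )

    # Pattern 02: how far do self-ratings drift from observed work?
    # A positive gap means people rate themselves above their managers do.
    rating_gap = (df["self_rating"] - df["manager_rating"]).mean()

    # Pattern 03: how stale is the skills data, really?
    staleness_days = (as_of - pd.to_datetime(df["skills_last_updated"])).dt.days

    return {
        "completion_vs_performance_spearman": round(float(completion_vs_performance), 2),
        "mean_self_vs_manager_gap": round(float(rating_gap), 2),
        "median_skills_staleness_days": int(staleness_days.median()),
        "share_older_than_a_year": round(float((staleness_days > 365).mean()), 2),
    }
```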
Signal design and architecture
A working session, then iteration. We define three to five capability signals tied to real performance, design how the data moves from source systems to decisions, and pick the pilot cohort. We resist the wish-list. Three signals implemented well beat ten implemented badly.
- A short brief for each signal: what it measures, where it comes from, how often, how to read it, and what it cannot tell you.
- Cross-check rules: which decisions need self-assessment, manager observation, and performance evidence to agree before action; where divergence between sources triggers a human conversation rather than an algorithmic average (sketched after this list).
- A simple data flow: how data moves from source to decision, with a named owner for each step.
- Movement methodology: how to tell meaningful change from noise, and how to present that to non-technical stakeholders without false precision.
- Pilot scope: a low-risk team whose leader is curious, whose work generates real performance data, and whose failure would not be catastrophic.
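As an illustration of the cross-check rules, a minimal sketch of how a high-stakes decision gate might behave once the three evidence sources have been normalised to a common 0-1 scale. The Evidence fields, the bar, and the tolerance are placeholders to be set in the design session, not fixed values.

```python
# Minimal sketch of a cross-check rule for high-stakes decisions,
# assuming each evidence source has been normalised to a 0-1 scale.
from dataclasses import dataclass

@dataclass
class Evidence:
    self_assessment: float       # normalised 0-1
    manager_observation: float   # normalised 0-1
    performance_evidence: float  # normalised 0-1, from real deliverables

def cross_check(e: Evidence, bar: float = 0.7, tolerance: float = 0.15) -> str:
    sources = [e.self_assessment, e.manager_observation, e.performance_evidence]

    # Divergence between sources triggers a human conversation,
    # never an algorithmic average.
    if max(sources) - min(sources) > tolerance:
        return "hold: sources disagree, schedule a conversation"

    # Action only when every source independently clears the bar.
    if all(s >= bar for s in sources):
        return "proceed: all sources agree above the bar"

    return "hold: sources agree, but below the bar for this decision"
```

The design choice the sketch is making: when sources diverge, the system never resolves the disagreement itself; it routes the case to a person.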
Pilot and measurement
We instrument the systems, capture baseline, collect data on the agreed cadence, and stay close as your team and the analytics function learn the new approach. Reads at 30 and 60 days surface what is working and what needs adjustment before any wider rollout decision.
- Instrumentation in your existing systems wherever possible; minimal manual burden where automation is impossible.
- Baseline measurement on the pilot cohort, with named owners and weekly check-ins for data quality.
- 30-day and 60-day interim reports: which signals are behaving, which are noisy, where the design needs adjustment (the movement-versus-noise read is sketched after this list).
- Final read at the end of the pilot window, plus a clear recommendation: scale, refine, or stop.
- A short methodology document so the approach can be maintained without us on-site.
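For the interim reads, a minimal sketch of how "meaningful change versus noise" can be operationalised on a weekly series per signal. The two-part test, a practical minimum change plus two standard errors of the baseline, is one illustrative convention rather than the methodology itself.

```python
# Minimal sketch of the movement-versus-noise read used at 30 and 60 days,
# assuming a weekly series of observations per signal.
from statistics import mean, stdev
from math import sqrt

def read_movement(baseline: list[float], recent: list[float],
                  practical_min_change: float) -> str:
    shift = mean(recent) - mean(baseline)
    # Standard error of the baseline mean: a rough yardstick for noise.
    noise = stdev(baseline) / sqrt(len(baseline)) if len(baseline) > 1 else float("inf")

    if abs(shift) < practical_min_change:
        return "no meaningful movement yet"
    if abs(shift) < 2 * noise:
        return "movement seen, but not yet distinguishable from noise"
    direction = "improving" if shift > 0 else "declining"
    return f"meaningful movement: {direction} by {shift:.2f} against baseline"
```

In use, practical_min_change is set per signal during the design phase, so that shifts which are statistically detectable but operationally trivial are not reported as progress.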
Six commitments under everything we propose.
- Show movement, not snapshots. We never report a capability number without showing how it has moved and what is behind that movement. A single data point is almost always misleading.
- Cross-check every high-stakes decision. For hiring, promotion, succession, or major project assignment, at least two evidence sources need to tell the same story before action.
- Design backward from the decision. Every signal must answer a real question someone needs answered. If a metric is interesting but not actionable, it is overhead.
- Numbers and stories together. Numbers without context mislead. Stories without numbers do not scale. Capability claims above a threshold come with a few sentences of context, enough to reveal sensemaking without becoming a survey.
- Treat the work record and the learning record as one thing. When someone solves a problem using institutional knowledge, that is both a performance event and a learning event. We design for unified records wherever it is technically possible.
- Embed in real workflow. If the system depends on heroic data entry, it will fail. We build collection into work that is already happening, or we do not collect it.
Where this engagement is the wrong choice.
Not a dashboard refresh
If the brief is "make our dashboard prettier" or "add visualisations", we are the wrong practice. The data underneath is the problem.
Not predictive scoring on shaky data
We will not build "likelihood this person will succeed in role X" models on self-reported and completion data. That introduces false confidence at scale.
Not a maturity verdict for the board
We do not deliver maturity ratings the executive team can wave around. The point of the engagement is the rebuild, not the rating.
Not standalone if the strategy is unsettled
If your organisation is mid-argument about what good looks like, the data design will keep moving. Pause and run the capability diagnostic first.
A measurement system you can run, not a slide deck.
- An honest audit of where your current workforce data misleads you, with examples and severity, not abstract critique.
- A list of the leadership decisions the data needs to support, and which the new design will and will not be able to answer.
- Three to five capability signals, designed for your context, with cross-check rules and a named decision owner for each.
- A simple data flow diagram and ownership model: who owns each layer, how often it gets reviewed, where data quality issues escalate.
- A pilot in flight, instrumented, with baseline data and a 30/60/90-day plan running.
- A short methodology document your in-house People Analytics team can maintain without us.
- A clear recommendation at the end of the pilot: scale, refine, or stop. We will not pretend the system is working if it is not.