Use this when:
- Programmes have been running for years and capability is not moving the way it should.
- AI tools are being adopted faster than the organisation can describe what good looks like.
- Skills-based workforce planning is exposing that self-reported data cannot support hiring, promotion, or succession decisions.
- Leaders know "more programmes" is not the answer but lack the framework, the evidence, or the language to make the systemic case to the board.
This is the engagement to choose when the question is no longer "which course or platform" but "what kind of organisation are we becoming, and what conditions will teach us to be it well."
It is not a course refresh, a platform migration, or a strategy deck. It is the work of designing how learning happens inside your real work, and standing up the data, tools, and operating model that let leadership see capability change as it happens.
What every part of this engagement is built on.
Capability is not a database entry
A skill in a spreadsheet is not the same as someone who can do the work under pressure. We design measurement that shows what your people can actually do, not just what they say they know.
Most learning still happens between people
The best learning is in conversation, peer review, real handoffs, and shared reflection. AI tools do not replace any of that. But they can quietly route around it. We design so they do not.
AI works when it connects, not when it replaces
AI is at its best when it puts the right knowledge in front of the right person at the right moment. The judgement about what to do with it stays human. Designing for that line is where most AI rollouts fall short.
Four layers, designed and built as one system.
Capability does not sit inside individuals like a portable asset. It emerges from how your organisation works. We design and build across the four layers that determine whether capability can develop at scale, not as separate workstreams but as one connected system.
Environment
The structures, incentives, and rhythms of work that decide which behaviour gets rewarded and which is quietly punished. Without this, no programme survives contact with how your organisation actually runs.
Social
How expertise becomes visible, how knowledge moves between teams, how reflection gets built into work, and where shared sensemaking happens (or has been quietly optimised away).
Technical
The tools, data, and AI that mediate how knowledge reaches the moment of decision. Including, where it earns its place, the connective layer that brings your trusted knowledge into the AI tools your people are already using.
Measurement
The small set of capability signals that replace completion-and-satisfaction reporting with intelligence the organisation can actually act on, and that the executive team will accept as evidence.
Diagnose. Design. Build and hand over.
There is no plausible version of this work in six weeks. The diagnostic alone takes six to eight weeks if it is honest. Strategy and design takes another two to three months. The first build runs alongside the design work and continues beyond it. After that, you own the system and we step back.
Diagnostic
A full read of what your organisation is currently teaching its people. We work across leaders, managers, and practitioners, and we look at the artefacts that show how work actually happens, not just what people say. Returns a plain-language report and a clear answer on what to change first.
- 15 to 25 conversations across levels and functions.
- Observation of real meetings, decisions, and handoffs (with consent).
- Review of onboarding, retrospectives, performance documents, and AI usage patterns.
- Cross-checking against your existing learning, performance, and engagement data.
- A working session with leadership that holds the discomfort of the findings.
Strategy and design
The redesign work, with you at the table. We design the social, environmental, and technical changes that move capability, plus the measurement system that lets you see whether they are working. AI tooling, communities of practice, knowledge engineering, and incentive redesign all show up here only if your strategy needs them.
- A capability strategy the executive team can defend: what your organisation wants to teach itself, what stays private, who owns what.
- A social and environmental design: communities of practice, expertise networks, work-embedded reflection, onboarding redesign, and the incentive shifts that make the rest stick.
- A technical design: knowledge engineering, source structure, governance, and the AI connective layer where it earns its place (sketched after this list).
- A workforce data design: the small set of capability signals that replace dashboards leadership has stopped trusting.
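To give a feel for what "connective layer" means in practice, here is a minimal, illustrative sketch in Python. It assumes a simple keyword-matched retrieval step over approved sources; every name in it (`GovernedSource`, `retrieve_context`) is invented for this example and does not describe any specific product or the build we would actually ship.

```python
from dataclasses import dataclass


@dataclass
class GovernedSource:
    """One approved knowledge item: what it says, where it lives, who owns it."""
    title: str
    body: str
    owner: str       # named owner, per the design above
    approved: bool   # only approved sources ever reach the AI tool


def retrieve_context(question: str, sources: list[GovernedSource], limit: int = 3) -> list[dict]:
    """Crude keyword scoring stands in for real retrieval.

    Returns attributed excerpts an assistant can ground its answer in;
    the person asking still owns the decision about what to do with it.
    """
    terms = {w.lower().strip("?.,") for w in question.split()}
    scored = []
    for source in sources:
        if not source.approved:
            continue  # governance boundary: unapproved knowledge never leaves
        hits = sum(1 for t in terms if t and t in source.body.lower())
        if hits:
            scored.append((hits, source))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [
        {"excerpt": s.body, "source": s.title, "owner": s.owner}
        for _, s in scored[:limit]
    ]


if __name__ == "__main__":
    sources = [
        GovernedSource("Pricing playbook", "How we price enterprise renewals.", "Head of Sales Ops", True),
        GovernedSource("Old pricing wiki", "Superseded pricing guidance.", "Unknown", False),
    ]
    print(retrieve_context("How should we price a renewal?", sources))
```

The point of the sketch is the boundary it draws, not the retrieval mechanics: unapproved knowledge never reaches the tool, every excerpt carries its source and owner, and the judgement about the answer stays with the person.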
Build and hand over
First builds in production, instrumentation in place, and the handover that decides whether the system holds. A pilot team runs through real measurement intervals. Owners are named, the review cadence is agreed, and the organisation moves into ownership. From here, the work continues as a quarterly review retainer, not a project. Or it stops.
- Pilot team chosen for credibility, not just convenience.
- Baseline plus regular measurement of the agreed signals, typically at 30, 60, and 90 days (see the sketch after this list).
- If AI tooling is in scope: a focused, governed build for the highest-value workflow, with audit and oversight from day one.
- Named owners for each layer; a quarterly review cadence the organisation runs itself.
- Optional ongoing retainer for quarterly review and continued evolution of the system.
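As a purely illustrative sketch of the measurement cadence above (a baseline, then readings at 30, 60, and 90 days), the snippet below tracks two invented signals against their baselines. The signal names, values, and structure are assumptions made for the example, not the signals any given diagnostic would produce.

```python
from dataclasses import dataclass, field


@dataclass
class CapabilitySignal:
    """One agreed signal, tracked from baseline through the 30/60/90-day reviews."""
    name: str
    baseline: float
    readings: dict[int, float] = field(default_factory=dict)  # review day -> value

    def record(self, day: int, value: float) -> None:
        self.readings[day] = value

    def movement(self, day: int) -> float:
        """Change against baseline at a given review point."""
        return self.readings[day] - self.baseline


# Invented signals for illustration; the real set comes out of the diagnostic.
signals = [
    CapabilitySignal("days to first unaided delivery", baseline=42.0),
    CapabilitySignal("peer-review participation rate", baseline=0.35),
]

# Readings would come from the instrumentation put in place during the build.
signals[0].record(30, 38.0)
signals[1].record(30, 0.45)

day_30_report = {s.name: round(s.movement(30), 2) for s in signals if 30 in s.readings}
print(day_30_report)
# {'days to first unaided delivery': -4.0, 'peer-review participation rate': 0.1}
```

What matters is the shape, not the code: a small number of named signals, a baseline taken before the pilot starts, and movement against that baseline reported at each agreed review point.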
A working system, not a strategy deck.
- A diagnostic report grounded in your real artefacts, with the gap between stated values and revealed priorities named explicitly.
- A capability strategy the executive team can defend: what your knowledge is for, what your organisation wants to teach itself, and what stays private.
- A connected design across environment, social, technical, and measurement, with named owners and a stop list of what to retire.
- A workforce data design with three to five capability signals that leadership accepts as evidence.
- A first build in production, instrumented, with baseline data and a 30/60/90-day measurement plan running.
- If AI tooling is in scope: a focused, governed connection between your trusted knowledge and the tools your people use, plus a governance approach the technical team can maintain.
- An in-house operating model the organisation can run itself, and an exit point that does not depend on us being there.
What we will not do.
Not a programme refresh
We are not making your existing programmes more engaging. Programmes are interventions inside an environment; this engagement works on the environment itself.
Not an LMS or platform migration
We will not recommend a content library expansion as a capability solution. Platforms enable the system; they do not replace it.
Not a maturity scorecard
We do not produce maturity-model verdicts or capability scores you can wave at the board. The point is the system, not the rating.
Not a plug-and-play AI project
If the brief is "build us a Confluence chatbot", a specialist integrator will be faster and cheaper. Choose this work when the strategy and ownership questions are unsettled.