phase 02 · services
currently booking Q3 2026 onwards · discovery & sparring open sooner · last update 27.04
Midnight Labs
specialist engagement · Senior L&D + CTO/CIO sponsorship

Free your L&D team from admin work, without losing the human judgement that matters.

A focused engagement for L&D functions whose people spend most of their time on coordination, triage, and admin, and almost none on the strategic work the organisation actually needs from them. We design and build two to four well-scoped AI tools that take the routine work off your team safely, with clear oversight points, an off-switch, and governance your IT function will actually accept.

shape · ~10-12 weeks plus measurement window
sponsors · Senior L&D leader + CTO/CIO
output · 2-4 AI tools in production, governed and instrumented
your time · L&D leader ~3 hrs/wk; IT counterpart ~1-2 hrs/wk

Use this when: your L&D team is operationally underwater; the ambition is for them to be more strategic, but the calendar is full of intake, routing, and "could you just" requests. Leadership has signalled appetite for AI inside L&D operations, but no enthusiasm for "another platform". The constraint is not technology access. It is design discipline, governance, and knowing exactly where AI helps and where human judgement has to stay.

This is the engagement to choose when the question is "how do we free the L&D function's strategic capacity safely", not "which copilot should we buy".

the principle under everything we build

AI handles the routine. People keep the judgement.

This is the line that decides which work AI is appropriate for, and which work has to stay with humans. We hold it in the design before we hold it in the technology. Most L&D AI rollouts go wrong because they never draw it.

What AI is genuinely good at

Routine, high-volume, low-judgement work: coordination, intake, routing, summarisation, status updates, structured retrieval. Hours recovered, with no loss of the work people actually learn from.

What stays with humans

Career conversations, capability interpretation, leader development, conflict resolution, ethical judgement, and any decision that shapes someone's trajectory in the organisation. AI never decides any of these.

the kinds of tools we build

Three patterns we build inside L&D operations.

Most engagements deliver two to four tools across these patterns. Each one earns its place by giving the L&D team back hours that were going into admin, and freeing them for work the organisation actually needs.

pattern 01

Research and intelligence

A tool that does the literature scan, competitive read, or environmental research the L&D team currently does manually. Useful for board updates, strategy preparation, market signals, and vendor due diligence. Output is a draft for a human to interrogate, never a final answer.

pattern 02

Intake and coordination

A tool that handles the long tail of "could you just" requests (client onboarding into a programme, capability request triage, compliance routing, intake-to-design pipelines), with a human-review checkpoint at every point where a real decision happens.
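
To make the checkpoint pattern concrete, here is a minimal sketch in Python. The routes, the confidence threshold, and the keyword classifier standing in for a model are all illustrative assumptions, not a production design.

```python
# A minimal sketch of the human-review checkpoint pattern, not a real build.
from dataclasses import dataclass, field
from enum import Enum


class Route(Enum):
    COMPLIANCE = "compliance"
    ONBOARDING = "onboarding"
    UNKNOWN = "unknown"


@dataclass
class IntakeRequest:
    text: str
    requester: str
    audit_log: list[str] = field(default_factory=list)


def classify(text: str) -> tuple[Route, float]:
    """Stand-in for a model call; keyword rules keep the sketch self-contained."""
    lowered = text.lower()
    if "compliance" in lowered:
        return Route.COMPLIANCE, 0.90
    if "onboard" in lowered:
        return Route.ONBOARDING, 0.70
    return Route.UNKNOWN, 0.30


def escalate_to_human(request: IntakeRequest, route: Route) -> None:
    """In production this would enqueue the item for L&D review."""
    request.audit_log.append(f"checkpoint: {route.value} sent to human review")


def triage(request: IntakeRequest) -> None:
    """Suggest a route, but never act autonomously on a real decision."""
    route, confidence = classify(request.text)
    request.audit_log.append(f"triage: suggested {route.value} ({confidence:.2f})")
    # The checkpoint: low confidence or a sensitive topic always goes to a person.
    if confidence < 0.85 or route is Route.COMPLIANCE:
        escalate_to_human(request, route)
```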

pattern 03

Knowledge capture

A tool that turns meeting transcripts, decision records, and post-incident notes into searchable institutional memory the rest of the organisation can find. Useful for capturing what programmes actually taught, and what is being learned at work that nobody is currently writing down.
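
As a rough illustration of what the capture side can look like, the sketch below keeps only lines explicitly marked as decisions or lessons and makes them searchable. Every name and convention in it is an assumption for this page; a real build would sit on the organisation's own storage, with its access controls.

```python
# Illustrative sketch of the capture side. The record shape and the
# "decision:" / "lesson:" marker convention are assumptions for this example.
from dataclasses import dataclass


@dataclass
class MemoryRecord:
    source: str  # e.g. "2026-03 programme retrospective"
    body: str    # the decision or lesson captured
    kind: str    # "decision" or "lesson"


def capture(source: str, notes: list[str]) -> list[MemoryRecord]:
    """Keep only lines explicitly marked as decisions or lessons."""
    records = []
    for line in notes:
        kind, sep, body = line.partition(":")
        if sep and kind.strip().lower() in ("decision", "lesson"):
            records.append(MemoryRecord(source, body.strip(), kind.strip().lower()))
    return records


def search(records: list[MemoryRecord], term: str) -> list[MemoryRecord]:
    """Naive keyword retrieval; a real build would use proper indexing."""
    return [r for r in records if term.lower() in r.body.lower()]
```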

three phases · ~10-12 weeks

Map the work. Build with governance from day one. Measure and hand over.

phase 1 · 2-3 weeks

Map the L&D team's work

We map where your team's time is actually going, identify the routine high-volume tasks where AI earns its place, and explicitly rule out the work that needs human judgement. We name the human review points before any technical design starts.

  • An honest read of where the L&D function's hours are actually going.
  • A short inventory of workflows, each classified as a candidate for AI, hybrid, or hands-off.
  • Governance design: where the human review points are, when escalation triggers, what gets logged for audit.
  • Selection of the first 2-4 tools to build, with named owners on the L&D and IT sides.
phase 2 · 5-6 weeks

Build with governance from day one

We design and build the tools with auditable rules, explicit access controls, and human checkpoints baked in. Nothing goes into production without governance your IT and security functions will actually accept.

  • A short specification for each tool: scope, inputs, what it is allowed to do, when it must escalate, what gets logged, and what shuts it down (a sketch of one follows this list).
  • Rules captured in version-controlled code that IT security and compliance can review the same way they would any other production system.
  • Where it earns its place: the connective layer that lets the tools reach your trusted internal knowledge, with enterprise-grade authentication and access controls.
  • Integration with the systems already in use (LMS, intake forms, ticketing, calendar) using the lightest viable touch.
  • Pilot rollout to a single team or function before any wider release.
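
For a sense of what such a specification can look like when the rules live in version-controlled code, here is a hedged sketch. The schema and the example values are assumptions, not our standard template; the point is that the rules are explicit, diffable, and reviewable like any other production change.

```python
# Illustrative sketch of a reviewable tool specification. The schema and the
# example values are assumptions made for this page, not a fixed template.
from dataclasses import dataclass


@dataclass(frozen=True)
class ToolSpec:
    name: str
    scope: str                            # what the tool is for, in one line
    inputs: tuple[str, ...]               # the only data it may read
    allowed_actions: tuple[str, ...]      # everything else is forbidden
    escalation_triggers: tuple[str, ...]  # when a human must take over
    logged_fields: tuple[str, ...]        # what the audit trail records
    kill_switch_owner: str                # who may shut it down unilaterally


INTAKE_TRIAGE = ToolSpec(
    name="intake-triage",
    scope="Suggest a route for incoming L&D requests; never act on them.",
    inputs=("request text", "requester role"),
    allowed_actions=("suggest a route", "draft an acknowledgement"),
    escalation_triggers=("low classification confidence", "compliance topic"),
    logged_fields=("timestamp", "suggestion", "confidence", "reviewer"),
    kill_switch_owner="Head of L&D Operations",
)
```
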
phase 3 · 4 weeks plus measurement window

Measure, hand over, exit cleanly

We instrument the tools for the only measurements that matter: how much L&D time was actually recovered, where escalations happened, what the tools got wrong, and what the L&D function did with the time. We hand over day-to-day ownership, name the review cadence, and step back.

  • Operational metrics: time recovered, escalation rate, error rate, audit-flag rate, user-trust signals.
  • The bigger question: what strategic work the L&D function reinvested the recovered capacity into.
  • Owner roles named on the L&D and IT sides; a quarterly review the organisation runs itself.
  • An off-switch built in by design, plus a "stop using this tool" decision protocol (one possible shape is sketched after this list).
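
One possible shape for that decision protocol, sketched below with placeholder thresholds. The point is that retirement becomes a mechanical outcome of the quarterly numbers rather than a debate nobody wants to start.

```python
# Illustrative sketch of a retirement protocol. Thresholds are placeholders,
# not recommendations; each organisation sets its own in phase 1.
from dataclasses import dataclass


@dataclass
class QuarterlyReview:
    hours_recovered: float  # L&D time actually freed this quarter
    escalation_rate: float  # share of items sent to human review
    error_rate: float       # share of outputs a human had to correct
    audit_flags: int        # governance issues raised this quarter


def retirement_decision(review: QuarterlyReview) -> str:
    """Return 'keep', 'fix', or 'retire'; never silently 'keep forever'."""
    if review.audit_flags > 0:
        return "retire"  # governance failures end the debate
    if review.error_rate > 0.05:
        return "fix"     # the tool stays out of service until corrected
    if review.escalation_rate > 0.50:
        return "fix"     # escalating half of everything means the scope is wrong
    if review.hours_recovered < 10:
        return "retire"  # not earning its place
    return "keep"
```
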
how we approach the work

Six commitments under every tool we build.

  • AI surfaces context. Humans hold judgement. Career, capability, conflict, and ethical decisions stay human, by design, not by aspiration.
  • Governance from day one, not bolted on. Audit trails, access controls, escalation rules, and human review points are built in before any tool reaches production. They are the precondition, not paperwork to add later.
  • Fix the workflow first, then add a tool. If a workflow can be improved without AI, we improve it first. AI only earns a place where the human cost of the routine work is real.
  • Build the off-switch with the on-switch. Every tool has a defined termination condition, a named owner, and an explicit retirement protocol. Tools no one is allowed to retire become permanent liabilities.
  • Measure what L&D did with the time. Hours recovered are a means, not an end. If the time is not being reinvested in the strategic work the function is meant to do, the tools are not earning their place.
  • Be honest about what AI removes. Not every efficiency gain is a strategic gain. If automation removes the work people actually learn the job from, we ask whether the time saved is worth what the team will quietly lose.
what we will not do

Where this engagement is the wrong choice.

Not generic productivity tools

We are not building meeting summarisers, calendar assistants, or generic copilots. Specialist vendors do those better and cheaper.

Not employee-facing chatbots

If the brief is "an L&D chatbot for staff to ask anything", we will name what that pattern would quietly take away from the team and recommend a different shape.

Not anything that decides about a person

No automated capability ratings. No autonomous succession recommendations. No tool that decides whether someone is ready for promotion or development.

Not a platform replacement project

We integrate with what is already there. If the LMS, intake, or ticketing systems themselves are the constraint, that is a different engagement.

what you walk away with

Two to four working tools, plus the ownership model to run them.

  • A clear map of where the L&D function's time is actually going, with explicit human-review boundaries for every workflow.
  • Two to four AI tools in production, each with built-in governance, audit trails, escalation rules, and a defined off-switch.
  • Where it earns its place: a single, well-governed connection between the tools and your trusted internal knowledge, designed for enterprise use.
  • A simple operational view: time recovered, escalation rate, error rate, and what the L&D function actually did with the recovered capacity.
  • A quarterly review the L&D and IT functions can run themselves, with named owners on each side.
  • A clean exit: the tools either earn their permanent place or are retired with a documented post-mortem.