The six-week signal rule.
Why we make a hard promise: six weeks to first observable behaviour change, or the intervention is wrong and we rebuild it. A note on how we arrived at six, and what we do in weeks five and seven.
There is a standard failure mode in capability work that everyone has seen and nobody wants to name. You design an intervention. You roll it out. People like it in the room. Three months later the team's Slack feels the same, the meetings feel the same, and when you re-read the artefacts the work produces, nothing has shifted. Someone suggests running another program.
We used to live in this failure mode. We don't any more, because every Midnight Labs engagement carries a single hard promise: six weeks to first signal. If we cannot see behaviour moving in the work within six weeks of an intervention going live, the intervention is wrong, and we rebuild it.
This note is about how we arrived at six, what we measure, and what we do when the week-five re-read reveals that nothing has moved.
What "signal" means, precisely
Signal is a behaviour in the actual work that has moved in a direction we can name, on a timeline we can see. It is not a satisfaction survey, a self-report on a 1–5 scale, or a quarterly KPI. Those are either measures of our own popularity (surveys) or measures that lag too far behind the cause to help us learn (KPIs).
In discovery we pick two or three behaviours per engagement. They tend to look like:
- decision velocity on revenue-affecting calls (median hours from problem-posed to call-made)
- recovery cadence after a quality failure (hours to first written post-mortem)
- stakeholder drift (proportion of decisions in which stated intent diverges from actual call)
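Read literally, the first and third of these are small computations over a decision log. Here is a minimal sketch of that reading, assuming a hypothetical log schema with posed/decided timestamps and intent fields; in practice the artefacts are prose documents and the read is done by a person, not a script.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import median

@dataclass
class Decision:
    posed_at: datetime      # when the problem was first written down
    decided_at: datetime    # when the call was made in writing
    revenue_affecting: bool
    stated_intent: str      # what the decider said they would do
    actual_call: str        # what the log records they actually did

def decision_velocity_hours(log: list[Decision]) -> float:
    """Median hours from problem-posed to call-made, revenue-affecting calls only."""
    gaps = [
        (d.decided_at - d.posed_at).total_seconds() / 3600
        for d in log
        if d.revenue_affecting
    ]
    return median(gaps)

def stakeholder_drift(log: list[Decision]) -> float:
    """Proportion of decisions in which stated intent diverges from the actual call."""
    return sum(d.stated_intent != d.actual_call for d in log) / len(log)
```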
We track those behaviours in the work itself, inside decision logs, brief templates, and written post-mortems, not in a survey six months later. The evidence comes from artefacts the team already produces, or that we have asked them to produce once. Either way, we re-read on a fixed cadence.
Why six
Six is not magic. It is the shortest window we have consistently been able to see real behaviour move inside, across five years of engagements. Shorter than six and what you're usually measuring is novelty: people doing the new thing because the new thing is there. Longer than six and you're usually measuring drift; the intervention is fading and you've lost your window to correct.
Six weeks is also long enough for two full cycles of a weekly ritual, which is the minimum you need to distinguish "they did it twice" from "they are doing it".
"The week-five reading is the most important document in the engagement. Everything before it is setup; everything after it is either confirmation or rebuild. Almost nothing in that document is about the intervention. Most of it is about whether people changed." internal method note, 2024
What happens in weeks five and seven
Week five is the first serious re-read. A principal sits with ninety days of pre-intervention artefacts and five weeks of post-intervention artefacts, and writes a short document (two pages) that answers one question: has the behaviour moved?
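For behaviours that live in timestamps, the arithmetic under that question is a simple pre/post comparison, sketched below. The window lengths come from the text above; the 20 per cent threshold is ours for illustration, not a Midnight Labs number, and the real answer is written by a principal, not computed.

```python
from statistics import median

def behaviour_moved(pre_hours: list[float], post_hours: list[float],
                    min_shift: float = 0.20) -> bool:
    """True if the median gap shrank by at least min_shift (20% by default).

    pre_hours:  per-decision gaps from the ninety-day pre-intervention window
    post_hours: per-decision gaps from the five-week post-intervention window
    """
    pre, post = median(pre_hours), median(post_hours)
    return (pre - post) / pre >= min_shift
```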
If yes, the document names the move, estimates its size, and notes the two or three things in the intervention that appear to have done the work. We will keep rebuilding the weaker loops in the engagement, but the core stays in place.
If no, week seven is a reset. We take the intervention off the work. Not pause: take off. We sit with the original discovery read, the week-five reading, and whatever else we have learned, and rebuild. This has happened in roughly one in four engagements we have run. It is never comfortable. It is always the right move.
The failure mode the rule protects against is the one where an intervention is kept in place for a full year because it was expensive, because the sponsor is publicly committed, because nobody wants to admit it isn't working, and at the end the re-read says no change. The cost of a rebuild at week seven is a fortnight of principal time. The cost of a rebuild at month twelve is a year.
Three lies we used to tell ourselves
1. "It's too early to tell."
In the first few years of the practice we repeated this as often as any other sentence. It is almost always false. If a good intervention has been running for five weeks in the right conditions and you cannot see any shift in the artefacts, the shift is not on its way. The statement is a way of buying time for something that isn't coming.
2. "The signal we want is behavioural, so of course it lags."
Lagging outcomes lag. Behaviours move quickly. Whether people are writing a post-mortem, updating the decision log, or holding the ritual is observable in days. Whether the business outcome has moved is a question for a year from now. Don't confuse the two.
3. "We can't measure the subtle things."
This one is half-true. We can't measure everything subtle. We can measure enough subtle things, honestly, to know whether the practice is alive. Most of the value of the six-week rule is that it forces us to decide in advance what those things are, and commit to reading them, instead of deciding after the fact what counts.
One thing we're still wrong about
We hold a looser version of this rule for Consilium, our point-of-view instrument for senior leaders: ninety days to the first readable shift in stance, not six weeks. Framing and narrative move on a longer clock than a team's weekly ritual. We don't have enough data yet to be sure ninety is the right number rather than sixty or one-twenty.
If you run Consilium and have thoughts on the window: write to us. The peer read gets there on its own, but the cadence question is one we're actively re-reading.
Filed under method. Related: what a good post-mortem actually does (2025.10.02), decision velocity is not decisiveness (2025.11.17).