Why most retrospectives fail

The wrong framing produces the wrong outputs.

Teaching team meetings labelled as retrospectives but run as performance debriefs produce three predictable outcomes: defensiveness from teachers whose lessons are discussed, general complaints that don't connect to specific improvements, and action points that nobody follows up because nobody owns them.

The difference between a performance debrief and a design session is not just tone — it's the structural question that organises the meeting. A debrief asks: “what went wrong and who is responsible?” A design session asks: “what did we learn about our students this half-term, and what should we build or change as a result?” The second question produces forward-facing, specific, testable outputs. The first produces backward-facing, general, often unactionable ones.

The purpose of a teaching retrospective is not to determine who failed. It is to determine what the team will try differently next time. These are different meetings with different outputs, and conflating them is why most retrospectives are dreaded.
The format

45 minutes. Four phases. One shared experiment as output.

Phase 0 · 0–5 min: Ground the data — iteration log review
What patterns appeared this half-term?

The facilitator has reviewed the iteration log before the session and identified three to four patterns: which lessons generated the most adaptations, which misconceptions appeared most frequently, which year groups required the most responsiveness. This 5-minute summary anchors the session in evidence rather than impression.

What to prepare
A one-page summary of the log's patterns. Not the full log — the patterns. Three to four observations maximum. Anything more exceeds what the team can act on in the remaining 40 minutes.
Phase 1 · 5–20 min: What worked — keep and share
One successful iteration from each teacher

Each teacher shares one adaptation that resolved a persistent gap — the signal they identified, the change they made, and the evidence that the change worked. The facilitator documents these and identifies which successful adaptations should be formally incorporated into the shared lesson plans.

Time allocation
2–3 minutes per teacher. A team of six takes 15 minutes. Do not allow discussion to extend — the sharing is what matters here, not the debate. This phase should feel good: it is a celebration of professional problem-solving, not an assessment of it.
Phase 2 · 20–35 min: What kept failing — design for change
Patterns that individual iteration couldn't fix

The facilitator presents patterns from the iteration log where the same gap appeared in multiple teachers' logs without a clear resolution. These are the curriculum-level problems — explanations that consistently produce the same misconception regardless of which teacher delivers them. The team makes one curriculum-level decision about each pattern.

The output of this phase
A decision — not a discussion. "We're going to redesign the osmosis lesson to use the particle-density approach from [teacher]'s iteration entry. Who will update the shared plan by [date]?" Named owner, named date. No ambiguity.
Phase 3 · 35–45 min: One shared experiment — what we'll all try
The next half-term's collective learning question

The retrospective ends with a single shared experiment: one specific practice that all teachers will try during the next half-term, with an agreed-upon way to evaluate whether it worked. Specific, testable, owned by the whole team.

Why one experiment and not five
Five experiments produce diffuse effort and no clear learning. One experiment produces concentrated effort and a clear answer. The single-experiment constraint is the discipline that makes retrospectives cumulative rather than repetitive. For example: "All of us will use a 3-question check at the end of every Thursday lesson for the next six weeks."
The facilitator's role

Who runs it and how they keep it on track.

The most important role in a teaching retrospective is the facilitator — the person who manages the transition between phases, prevents the “what went wrong and who is responsible” framing from creeping in, and drives toward the concrete output. This is typically the head of department, but can be any experienced teacher willing to hold the structure.

1 · Behaviours that work
Keeping the session on track toward output

The facilitator starts each phase with the specific question for that phase. Redirects personal stories ("this happened in my class") to patterns ("does this appear in others' classes too?"). Names the output explicitly at the end of each phase ("so we've agreed to..."). Ends each meeting by reading back the shared experiment aloud. Sends a one-paragraph summary to the team within 24 hours.

The most important facilitator move
Ending the meeting by reading the shared experiment aloud and getting verbal confirmation from everyone in the room. "We've agreed to [experiment]. Everyone doing this?" This closes the session with a concrete commitment, not a vague intention.
2 · Behaviours that undermine it
What turns a design session back into a debrief

Allowing the debrief framing ("so what went wrong with Topic 4?"). Letting discussion of one lesson consume the whole session. Concluding with "let's all keep trying to be more responsive." Not following up on the shared experiment before the next retrospective. Using log data to surface individual teacher weaknesses.

The framing to watch for
"What went wrong?" is always one question away from "who is responsible?" Once that framing enters the room, the session is over as a design session. The facilitator's job is to prevent this question from being asked — by keeping every phase anchored in patterns, not individuals.