The wrong framing produces the wrong outputs.
Teaching team meetings labelled as retrospectives but run as performance debriefs produce three predictable outcomes: defensiveness from teachers whose lessons are discussed, general complaints that don't connect to specific improvements, and action points that nobody follows up because nobody owns them.
The difference between a performance debrief and a design session is not just tone — it's the structural question that organises the meeting. A debrief asks: “what went wrong and who is responsible?” A design session asks: “what did we learn about our students this half-term, and what should we build or change as a result?” The second question produces forward-facing, specific, testable outputs. The first produces backward-facing, general, often unactionable ones.
45 minutes. Four phases. One shared experiment as output.
The session opens with a five-minute evidence summary. Before the meeting, the facilitator has reviewed the iteration log and identified three to four patterns: which lessons generated the most adaptations, which misconceptions appeared most frequently, which year groups required the most responsiveness. Presenting this summary first anchors the session in evidence rather than impression.
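If the team keeps the iteration log as a shared spreadsheet, the pre-session review lends itself to a quick tally. The sketch below is illustrative only: it assumes a CSV export named iteration_log.csv with hypothetical columns lesson, year_group, misconception and adaptation, and simply counts the three patterns described above.

```python
# Sketch: tally an iteration-log CSV into the three patterns the facilitator
# brings to the retrospective. File name and column names are assumptions,
# not a prescribed format.
import csv
from collections import Counter

lessons, misconceptions, year_groups = Counter(), Counter(), Counter()

with open("iteration_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row.get("adaptation"):                 # rows where a change was actually made
            lessons[row["lesson"]] += 1           # lessons generating the most adaptations
            year_groups[row["year_group"]] += 1   # year groups requiring the most responsiveness
        if row.get("misconception"):
            misconceptions[row["misconception"]] += 1  # most frequent misconceptions

for label, counter in [("Lessons generating the most adaptations", lessons),
                       ("Most frequent misconceptions", misconceptions),
                       ("Year groups requiring the most responsiveness", year_groups)]:
    print(label)
    for item, count in counter.most_common(4):
        print(f"  {item}: {count}")
```

The output is a starting point for the five-minute summary, not a substitute for the facilitator's judgement about which patterns matter.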
In the second phase, each teacher shares one adaptation that resolved a persistent gap — the signal they identified, the change they made, and the evidence that the change worked. The facilitator documents these and identifies which successful adaptations should be formally incorporated into the shared lesson plans.
In the third phase, the facilitator presents patterns from the iteration log where the same gap appeared in multiple teachers' logs without a clear resolution. These are the curriculum-level problems — explanations that consistently produce the same misconception regardless of which teacher delivers them. The team makes one curriculum-level decision about each pattern.
The retrospective ends with a single shared experiment: one specific practice that all teachers will try during the next half-term, with an agreed-upon way to evaluate whether it worked. Specific, testable, owned by the whole team.
Who runs it and how they keep it on track.
The most important role in a teaching retrospective is the facilitator — the person who manages the transition between phases, prevents the “what went wrong and who is responsible” framing from creeping in, and drives toward the concrete output. This is typically the head of department, but can be any experienced teacher willing to hold the structure.
A good facilitator starts each phase with the specific question for that phase; redirects personal stories ('this happened in my class') to patterns ('does this appear in others' classes too?'); names the output explicitly at the end of each phase ('so we've agreed to...'); ends each meeting by reading back the shared experiment aloud; and sends a one-paragraph summary to the team within 24 hours.
The common failure modes are allowing the debrief framing ('so what went wrong with Topic 4?'), letting discussion of one lesson consume the whole session, concluding with 'let's all keep trying to be more responsive', failing to follow up on the shared experiment before the next retrospective, and using log data to surface individual teachers' weaknesses.
The format works when the culture supports it.
The 45-minute format depends on one condition: teachers trusting that honest participation in the retrospective will not be held against them. A3 covers the leadership behaviours that create that condition — specifically, what leaders need to change about how they observe, evaluate, and talk about teaching.