Why the log exists

Individual iteration is invisible.
The log makes it institutional.

When a teacher makes a good iteration decision — replacing an explanation that produced a systematic misconception with one that works — that decision lives in their lesson notes. If they leave the school, the decision goes with them. If a colleague teaches the same lesson next term, they encounter the same misconception and spend the same 20 minutes fixing it from scratch.

The shared iteration log is the institutional memory that prevents this. Every time a teacher makes a significant adaptation decision, they record three things: what they observed, what they changed, and what they'll check to know whether the change worked. One line. Two minutes. Over a term, the log becomes an accurate, teacher-generated record of which lessons are fragile, which concepts produce persistent misconceptions, and which year groups need the most adaptation.
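As a sketch, one log entry maps to a simple record. The field and class names here are illustrative, not a required schema; the example values echo the osmosis misconception used later in this guide:

```python
from dataclasses import dataclass

@dataclass
class IterationEntry:
    date: str     # e.g. "2024-03-12" (illustrative)
    teacher: str  # initials are enough
    lesson: str   # from the department's lesson list
    signal: str   # what the data showed, one sentence
    change: str   # what was done differently, one sentence
    check: str    # how you'll know it worked, one sentence

# A hypothetical two-minute entry, written at the end of class:
entry = IterationEntry(
    date="2024-03-12",
    teacher="JB",
    lesson="Osmosis",
    signal="Q3 application correct for only 8/25; transfer failure",
    change="Replaced directed-arrow diagram with particle-density demo",
    check="Same Q3 on tomorrow's exit ticket; target fewer than 4 failures",
)
```

The whole entry is one row of short strings, which is the point: anything richer than this raises the cost of recording above the two-minute threshold.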

💡The log's primary value
The log's primary value is not individual accountability — it is shared learning. A department that reads each other's iteration notes identifies patterns that no individual teacher would see. The same misconception appearing in three different teachers' logs on the same topic is a curriculum design signal. The log makes it visible.
The three fields

What to record.
What not to.

The log has three fields and three fields only. Adding more lowers completion rates and degrades the quality of what does get recorded. Keep it minimal.

💡One line per lesson — not one log per lesson
The log is a spreadsheet row, not a document. The three fields take 2 minutes to complete. A teacher who tries to maintain a full iteration document for every lesson will abandon the practice within a week. A teacher who fills in three spreadsheet cells at the end of class will maintain it indefinitely — because the friction is below the threshold where habits break.
1. Signal — what did the data show?
One sentence. Specific and quantitative where possible.

Not 'students didn't fully understand' — 'Q2 correct for 19/25 but Q3 application correct for only 8/25. Students can identify examples but cannot transfer to novel contexts.' The signal entry names the gap precisely, gives a number where possible, and is tied to a specific question or observation rather than a general impression.

What makes a good signal entry
Specific (names the gap precisely), quantitative where possible (gives a count, not a vague impression), and tied to a specific question or observation. "14/25 students described the concentrated solution as pulling water — directional agency misconception confirmed."
2. Change — what did you do differently?
One sentence. Specific enough to replicate.

Not 'I improved the explanation' — 'Replaced the directed-arrow diagram with a particle-density ratio demonstration. Also added an explicit language-replacement exercise (students rewrite their wrong answer using correct framing).' The change entry should be specific enough that a colleague could implement the same adaptation without asking you to explain it.

The specificity test
Could a colleague read this entry and implement the same change without asking you about it? If no, it's not specific enough. "Used a different explanation" fails. "Replaced example A with example B because B avoids triggering misconception X" passes.
3. Check — how will you know it worked?
One sentence. Names a specific validation signal.

One sentence describing the validation signal: 'Same Q3 on tomorrow's exit ticket. Target: fewer than 4 students using transfer-failure pattern.' Without this check, the iteration is untestable. With it, the log automatically captures whether the adaptation resolved the gap or needs a second cycle.

Why the check field is the most important
The check field is what converts the log from a record of activity into a record of learning. Entries without a check tell you what changed. Entries with a check tell you whether the change worked — which is the only thing that matters for the next cohort.
Setting it up

From blank spreadsheet to
running log in under an hour.

1. Create a shared spreadsheet with six columns
Date / Teacher / Lesson / Signal / Change / Check

Nothing more. Use Google Sheets or Microsoft 365 so all teachers can access it simultaneously. Set the 'Lesson' column to a dropdown of your department's lessons — this allows later filtering by topic.

The minimum viable setup
Six columns, shared access, dropdown for lesson names. That's it. Anything more adds friction that reduces completion rates. Start with the minimum and add structure only if the data demands it.
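For a department that prefers a plain shared file to Google Sheets or Microsoft 365, the same six-column structure can be sketched as a CSV. The filename and example row are illustrative only:

```python
import csv

COLUMNS = ["Date", "Teacher", "Lesson", "Signal", "Change", "Check"]

def create_log(path):
    """Create an empty iteration log containing only the header row."""
    with open(path, "w", newline="") as f:
        csv.writer(f).writerow(COLUMNS)

def append_entry(path, entry):
    """Append one entry; `entry` is a dict keyed by column name."""
    with open(path, "a", newline="") as f:
        csv.DictWriter(f, fieldnames=COLUMNS).writerow(entry)

# Hypothetical usage: one row per lesson, nothing more.
create_log("iteration_log.csv")
append_entry("iteration_log.csv", {
    "Date": "2024-03-12", "Teacher": "JB", "Lesson": "Osmosis",
    "Signal": "Q3 correct for only 8/25 (transfer failure)",
    "Change": "Swapped directed-arrow diagram for particle-density demo",
    "Check": "Same Q3 tomorrow; target fewer than 4 failures",
})
```

A real shared spreadsheet adds simultaneous access and a dropdown for the Lesson column, but the data model is exactly this: a header and one short row per entry.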
2. Brief the team in 10 minutes — not a training session
Show one completed row from a real lesson you taught

Walk through each field. Emphasise: (1) one line per lesson; (2) two minutes to complete; (3) no requirement for complete sentences; (4) it is not used for lesson observation or appraisal.

The most important thing to say
"This is for us, not for management. Its only purpose is to help each of us teach this content better." Say this explicitly. Without it, teachers assume the log will be used for performance management and will stop making honest entries.
3. Run for 4 weeks before reviewing
The data needs volume to be useful

After 4 weeks, filter the log by lesson and look for clustering. Which lessons have the most entries? Which signals appear repeatedly? These are your first curriculum improvement signals. Review this in the first A2 retrospective.

What 4 weeks of data looks like
A department of 5 teachers across 20 lessons = up to 100 entries in 4 weeks. 10 entries on one lesson and 1 on another is already a signal. The pattern is visible before you read a single entry in detail.
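The four-week clustering pass is just a frequency count over the Lesson column. A minimal sketch, assuming the log has been exported as a list of dicts (the lesson names are placeholders):

```python
from collections import Counter

def entries_per_lesson(rows):
    """Count log entries per lesson; the most-adapted lessons come first."""
    return Counter(row["Lesson"] for row in rows).most_common()

# Hypothetical four-week export: three adaptations of one lesson, one of another.
rows = [
    {"Lesson": "Osmosis"}, {"Lesson": "Osmosis"},
    {"Lesson": "Osmosis"}, {"Lesson": "Diffusion"},
]
print(entries_per_lesson(rows))  # → [('Osmosis', 3), ('Diffusion', 1)]
```

In a real spreadsheet the equivalent is a pivot table or a COUNTIF per lesson; either way, the skew between lessons is the first signal, before any entry is read in detail.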
4. Review and prune quarterly
Remove entries where the check confirmed the change worked

Entries where the check column shows the adaptation resolved the gap are archive material — they've generated their value. Prune them to a separate archive sheet. What remains in the active log are ongoing open questions: adaptations not yet validated, or patterns appearing three or more times without a resolution.

Archive vs active
Resolved entries go to the archive sheet. Unresolved patterns stay in the active log. After 2 terms, the active log is a precise map of your department's unsolved curriculum problems — the right input for a curriculum review.
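The quarterly prune is a partition on whether the check confirmed the change worked. This sketch assumes the department marks that outcome in a "Resolved" column; that marker is an assumption for illustration, not part of the three-field spec:

```python
def prune(rows):
    """Split entries into (archive, active) on a hypothetical resolved flag."""
    archive = [r for r in rows if r.get("Resolved") == "yes"]
    active = [r for r in rows if r.get("Resolved") != "yes"]
    return archive, active

# Hypothetical quarterly review: one validated adaptation, one open question.
rows = [
    {"Lesson": "Osmosis", "Resolved": "yes"},
    {"Lesson": "Diffusion", "Resolved": "no"},
]
archive, active = prune(rows)
```

After the split, `archive` holds the validated adaptations and `active` holds the open questions — the map of unsolved curriculum problems described above.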
What you're looking for

The patterns that justify
curriculum-level changes.

Individual iteration log entries are tactical — they address one teacher's specific lesson on one day. Their value compounds when patterns become visible across teachers and topics. These are the patterns worth looking for in a quarterly review:

Pattern: Same misconception appearing for multiple teachers on the same lesson
What it means: The original lesson's design creates the misconception, not individual teaching variation
Response: Redesign the explanation at curriculum level. Affects all future cohorts.

Pattern: Same lesson consistently requiring the most adaptation
What it means: The lesson's scope is too large or its prerequisite assumptions are wrong
Response: Break the lesson into two, or add a prerequisite activity before it.

Pattern: Adaptations that consistently don't resolve the gap
What it means: The misconception is deeper than a lesson-level fix can address
Response: Investigate whether a prerequisite concept from an earlier unit is missing.

Pattern: One teacher's adaptations consistently work for the whole department
What it means: That teacher has found the best approach to a persistent problem
Response: Formally update the shared lesson plan with their approach.