The same decisions. A fraction of the writing time.
The 20-minute workflow in A1 and the repair-or-replace decision in A2 do not change when you introduce AI. Steps 1, 2, and 4 — observing, diagnosing, and documenting — remain entirely yours. These steps require judgment about your class, your subject, and the specific data you collected. AI cannot replicate that judgment.
Step 3 — the writing — is different. Writing a new explanation from scratch takes 8–12 minutes of focused work. Describing what you need and having AI produce a draft takes 90 seconds. Reviewing and adjusting that draft takes another 60–90 seconds. The total is around 3 minutes, compared with 8–12 in the manual workflow.
The quality gate moves: instead of spending time generating the explanation, you spend time evaluating and refining a draft. The output is usually better than a first draft written at 9pm, because the AI doesn't have teacher fatigue.
The AI output is only as good as the notes you wrote in class.
The most common reason AI lesson regeneration produces unhelpful output is vague observation notes. “The lesson didn't work well” tells the AI almost nothing. “18/25 students produced the specific wrong answer ‘the concentrated side pulls water across’, which reveals they are attributing directional agency to the solution rather than understanding diffusion as a passive process driven by concentration gradient” gives the AI exactly what it needs to produce a targeted replacement.
Good observation notes have four components. You don't need to write them in full sentences or in any particular order — but all four need to be present for the AI output to be specific enough to use.
1. The evidence. Write the exact phrasing students used, or the most common error pattern. Not 'they got the application question wrong' but 'they predicted X when the correct prediction was Y, because they applied [wrong model] instead of [correct model].'
2. The diagnosis. This is the most important component. What mental model are students working from? What did the lesson structure inadvertently teach them? This component most directly determines what the AI generates — because the replacement needs to address the mechanism, not just the symptom.
3. The instruction. Tell the AI whether you are repairing or replacing, and which element. 'Repair the worked example in the third activity by adding a dark-room scenario as a counter-example' is different from 'Replace the energy-in-cells explanation with one that distinguishes respiration from photosynthesis from the first sentence.'
4. The constraints. Based on your diagnosis, specify what the replacement cannot do. This prevents the AI from producing a different version of the same structurally broken approach.
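Put together, a complete note for the osmosis lesson from earlier might read like this (the instruction and constraints here are illustrative additions):
Evidence: 18/25 wrote 'the concentrated side pulls water across' on the exit question.
Diagnosis: students are giving the solution directional agency; diffusion as a passive, gradient-driven process never landed.
Instruction: replace the core osmosis explanation; keep the practical and the exit quiz as they are.
Constraints: no agency language ('pulls', 'draws', 'wants'); build the explanation from random particle movement, not from the end state.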
The prompt structure that produces targeted output, not generic rewriting.
The following prompt structures are designed to produce a modified lesson element rather than a rewritten lesson. They separate what stays from what changes, name the failure mode explicitly, and constrain the AI's approach.
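In outline, with the bracketed slots standing in for your own material, a repair prompt and a replace prompt might read:
Repair: 'Here is my lesson plan: [paste lesson]. Here are my observation notes: [paste notes]. Repair the [named element] by [the specific change from your instruction]. The repair must not [your constraints]. Keep all other elements of the lesson the same.'
Replace: 'Here is my lesson plan: [paste lesson]. Here are my observation notes: [paste notes]. Replace the [named element] with one that addresses the diagnosis in my notes from the first sentence. The replacement must not [your constraints]. Keep all other elements of the lesson the same.'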
The key phrase in both prompts is “keep all other elements of the lesson the same.” Without this, AI tends to rewrite the whole lesson rather than the targeted element. The constraint forces specificity.
The 5-point checklist before using the AI output.
AI-generated lesson elements need a review pass before use. This is not about distrust — it is about context. The AI doesn't know your class's specific prior knowledge gaps, the idioms your department uses, the specific textbook your students have been taught from, or the three things you said in last week's lesson that will make this new explanation confusing. The editing pass injects that context.
1. Check the subject accuracy. Read the explanation or activity with your subject knowledge engaged. AI occasionally produces explanations that are directionally correct but technically imprecise. In science and maths especially, imprecision at the foundational level creates new misconceptions.
2. Check the vocabulary level. AI tends to write at a slightly higher vocabulary level than the target year group. Replace technical terms your class hasn't encountered with ones they have. Keep the precision — just lower the register.
3. Check the constraint held. AI sometimes reintroduces the problematic framing it was told to avoid — especially when the constraint was implicit rather than explicit. Re-read specifically for the thing you asked it to remove.
4. Add one class-specific anchor. Insert one reference to something your class already knows — a previous lesson's content, an example you used earlier in the term, a shared experience. This single addition changes the explanation's reception: it signals to students that this was written for them.
5. Cut the length. AI-generated explanations are typically 30% longer than necessary. Read and cut. Every sentence that doesn't add new understanding or address the specific misconception can go. Shorter explanations with better targeting outperform longer ones with more coverage.
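To make the last check concrete, here is an illustrative cut on the osmosis explanation:
Before: 'Osmosis is a very important process that occurs in many different biological systems. It involves the movement of water molecules, which are constantly moving, across a partially permeable membrane from a dilute solution to a concentrated one.'
After: 'Water molecules move at random on both sides of a partially permeable membrane. More cross from the dilute side simply because more water is there; the net movement is the gradient at work, not the concentrated side pulling.'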
The full loop in one platform.
SprintUp Education's lesson iteration tool is built around the workflow in C4. The exit quiz tool generates the 3-question check (C3). When you record the response pattern, the iteration tool generates a suggested tomorrow-opener (C3/A3). When you want to iterate the lesson itself, you paste the original lesson and your observation notes — and the tool applies the prompt structure above automatically, using the exit quiz data already in the session to inform the regeneration.
Where the iteration loop goes next.
C4 has covered individual lesson iteration — one teacher, one lesson, one improvement cycle. C6 (School-wide culture) scales this to the department: shared iteration logs, team retrospectives, and the leadership behaviours that turn individual practice into institutional improvement. C8 (AI as the agile tool) extends the AI toolkit beyond lesson iteration to the full agile cycle.