Facilitated learning generates reasoning work.
Reasoning work is hard to give feedback on at scale.
Facilitated learning generates a different kind of student work — reasoning, argument, explanation — that is harder to give feedback on at scale than recall-based work. A teacher can scan 25 multiple-choice sheets in 3 minutes. Reading and providing developmental feedback on 25 written arguments takes an hour. This time cost is real, and it is one of the genuine structural barriers to facilitation in schools where teachers are stretched.
AI-assisted feedback doesn't solve this completely, but it changes the teacher's role from generating feedback to reviewing and personalising it — which is significantly faster. The teacher designs the reasoning rubric, feeds student responses to the AI with the rubric as context, reviews the AI's feedback suggestions, adjusts them for accuracy and class-specific context, and delivers the personalised feedback. The full cycle takes roughly 20 minutes for a class of 25, compared to 60–90 minutes for fully manual feedback.
Rubric. Batch. Review. Adjust.
Four steps for a class of 25.
Use the rubric from C5/A3 or generate a new one: 'Generate a 4-point rubric for evaluating student arguments about [learning objective]. Include criteria for: claim specificity, evidence integration, counterargument engagement, and reasoning visibility.' The rubric is the anchor that makes AI feedback consistent and the teacher's review pass efficient.
Paste 5–6 student responses at a time with the rubric as system context: 'Use the following rubric [paste rubric] to evaluate these student responses. For each response: (1) give a score for each rubric dimension; (2) identify the strongest element of their reasoning; (3) write one specific, developmental suggestion for improving their weakest dimension. Keep feedback to 3 sentences per student.'
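The batching step can be sketched in code. This is a minimal illustration, not a provider-specific implementation: the helper names `build_batch_prompt` and `batches` are hypothetical, the rubric text is a placeholder for the C5/A3 rubric, and the actual model call is omitted because the API differs between providers.

```python
# Hypothetical sketch of the batching step. RUBRIC is a placeholder for
# the real C5/A3 rubric; helper names are illustrative, not a real API.

RUBRIC = "4-point rubric: claim specificity, evidence integration, ..."

INSTRUCTIONS = (
    "Use the following rubric to evaluate these student responses. "
    "For each response: (1) give a score for each rubric dimension; "
    "(2) identify the strongest element of their reasoning; "
    "(3) write one specific, developmental suggestion for improving "
    "their weakest dimension. Keep feedback to 3 sentences per student."
)

def build_batch_prompt(rubric: str, responses: list[str]) -> str:
    """Assemble one prompt covering a batch of 5-6 student responses."""
    numbered = "\n\n".join(
        f"Student {i + 1}:\n{text}" for i, text in enumerate(responses)
    )
    return f"{INSTRUCTIONS}\n\nRubric:\n{rubric}\n\nResponses:\n{numbered}"

def batches(responses: list[str], size: int = 5):
    """Yield successive batches of the given size."""
    for start in range(0, len(responses), size):
        yield responses[start:start + size]
```

With a batch size of 5, a class of 25 becomes five prompts, each of which would then be sent to the model with the rubric held constant.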
Read the AI's feedback against the student response. The most common errors: (1) the AI misidentifies the claim (it takes the first sentence as the claim rather than the actual argumentative position); (2) the AI is overly positive about evidence when the student hasn't actually cited specific evidence; (3) the AI's developmental suggestion is vague ('add more evidence') rather than specific.
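The third error, vague suggestions, is the easiest to pre-screen mechanically before the human review pass. The sketch below is an illustrative heuristic only, not a substitute for reading the feedback; the phrase list is an assumption to be extended from a teacher's own review notes.

```python
# Illustrative pre-screen for the review step: flag AI suggestions that
# match known vague patterns. The phrase list is an assumed starting
# point, not an exhaustive or validated set.

VAGUE_SUGGESTIONS = (
    "add more evidence",
    "be more specific",
    "explain further",
    "improve your argument",
)

def flag_vague_feedback(feedback: str) -> bool:
    """Return True if the suggestion matches a known vague pattern."""
    lowered = feedback.lower()
    return any(phrase in lowered for phrase in VAGUE_SUGGESTIONS)
```

Flagged items go to the top of the review pile; everything else still gets read, just faster.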
For each student, add one personalised line based on what you know about their reasoning development over the term: 'This is a significant improvement from last week' or 'Your evidence integration has been your consistent strength — the area to focus on is counterargument, which you haven't yet fully engaged with in three consecutive pieces.'
Developmental reasoning feedback in 20 minutes
for a class of 25.
The full workflow — rubric setup (2 min), batch generation (12 min), review (5 min), personalisation (3 min) — takes 22 minutes for a class of 25, compared to 60–90 minutes for fully manual feedback. The quality differs from manual feedback in one important respect: the AI applies the rubric consistently across all students, which manual feedback does not — a tired teacher marking the 20th script at 11pm is less consistent than on the first.
Over a term, this workflow produces a dataset: each student's rubric scores across multiple pieces of reasoning work. This is the longitudinal reasoning development data from C5/A3 — the evidence that supports both student self-assessment and the school's case for facilitation as a measurable educational approach.
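The shape of that longitudinal dataset can be sketched as follows. This is a minimal sketch assuming one rubric-score record per student per piece of work; the function names and the dimension keys are illustrative, with real dimension names coming from the C5/A3 rubric.

```python
# Minimal sketch of the term-long dataset: for each student, a list of
# rubric-score records (dimension -> score 1..4), one per piece of work.
# Function and dimension names are illustrative assumptions.
from collections import defaultdict
from statistics import mean

scores: dict[str, list[dict[str, int]]] = defaultdict(list)

def record(student: str, piece: dict[str, int]) -> None:
    """Append one piece's rubric scores for a student."""
    scores[student].append(piece)

def trend(student: str, dimension: str) -> int:
    """Change in one dimension from the first to the latest piece."""
    history = [piece[dimension] for piece in scores[student]]
    return history[-1] - history[0]

def term_average(student: str, dimension: str) -> float:
    """Mean score for one dimension across the term so far."""
    return mean(piece[dimension] for piece in scores[student])
```

A per-student trend supports the personalised line in step four ('your evidence integration has been your consistent strength'), while the term averages aggregate into the school-level evidence base.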
The complete toolkit for
the teacher as facilitator.
P7 has covered the full facilitation toolkit: the theory that justifies it (C1), the lesson design that makes it measurable (C2), the questioning skills that drive it (C3), the group dynamics that enable it (C4), the assessment that evaluates it (C5), the school leadership that sustains it (C6), the subject-specific adaptations it requires (C7), and the AI tools that make it efficient (C8).
The practices in P7 compound with those in P6 (agile teaching) — a teacher who facilitates effectively and iterates based on formative data is operating both pillars simultaneously, producing better reasoning in students whose progress is continuously tracked and improved.