The feedback scale problem

Facilitated learning generates reasoning work.
Reasoning work is hard to give feedback on at scale.

Facilitated learning generates a different kind of student work — reasoning, argument, explanation — that is harder to give feedback on at scale than recall-based work. A teacher can scan 25 multiple-choice sheets in 3 minutes. Reading and providing developmental feedback on 25 written arguments takes an hour. This time cost is real and is one of the genuine structural barriers to facilitation in schools where teachers are stretched.

AI-assisted feedback doesn't solve this completely, but it changes the teacher's role from generating feedback to reviewing and personalising it — which is significantly faster. The teacher designs the reasoning rubric, feeds student responses to the AI with the rubric as context, reviews the AI's feedback suggestions, adjusts them for accuracy and class-specific context, and delivers the personalised feedback. The full cycle takes roughly 20 minutes for a class of 25, compared to 60–90 minutes for fully manual feedback.

The AI feedback workflow

Rubric. Batch. Review. Adjust.
Four steps for a class of 25.

1
Start with a rubric calibrated to the learning objective
Not generic marking criteria — objective-specific reasoning criteria

Use the rubric from C5/A3 or generate a new one: 'Generate a 4-point rubric for evaluating student arguments about [learning objective]. Include criteria for: claim specificity, evidence integration, counterargument engagement, and reasoning visibility.' The rubric is the anchor that makes AI feedback consistent and the teacher's review pass efficient.

Time: 2 minutes
The rubric generation step is the most important — it determines the quality of every subsequent step. A well-specified rubric produces consistent AI evaluation. A vague rubric ('good/average/poor') produces feedback that cannot distinguish between types of reasoning gap.
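For teachers comfortable with a little scripting, the rubric can be held as structured data so that every batch prompt reuses identical criteria. This is a minimal sketch, not SprintUp's actual format; the dimension names follow the prompt above, and the one-line criteria are illustrative placeholders, not a validated rubric.

```python
# Illustrative only: an objective-specific rubric as structured data,
# using the four dimensions named in the step-1 prompt. The criterion
# wording is a placeholder, not a validated rubric.
RUBRIC = {
    "claim_specificity": "Is the claim specific and tied to the learning objective?",
    "evidence_integration": "Is named, specific evidence woven into the argument?",
    "counterargument_engagement": "Is an opposing view stated and answered?",
    "reasoning_visibility": "Are the steps from evidence to claim shown?",
}

def rubric_as_context(rubric: dict) -> str:
    """Render the rubric as the fixed context block pasted into each batch prompt."""
    lines = ["Score each dimension 1-4:"]
    for dimension, criterion in rubric.items():
        lines.append(f"- {dimension}: {criterion}")
    return "\n".join(lines)
```

Because the same rendered block is pasted into every batch, the AI scores all 25 students against identical wording, which is what makes the review pass in step 3 fast.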
2
Batch student responses with the rubric as context
Paste multiple responses in one prompt

Paste 5–6 student responses at a time with the rubric as system context: 'Use the following rubric [paste rubric] to evaluate these student responses. For each response: (1) give a score for each rubric dimension; (2) identify the strongest element of their reasoning; (3) write one specific, developmental suggestion for improving their weakest dimension. Keep feedback to 3 sentences per student.'

Time: 2–3 minutes per batch · 10–12 minutes for 25 students
Batching 5–6 responses per prompt is more reliable than batching all 25. AI produces less consistent output for large batches because the context window becomes crowded. 5–6 gives sufficient context for comparison while maintaining output quality.
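The batching logic above can be sketched as a small helper. This is a hedged illustration, not a SprintUp tool: the prompt wording follows step 2, the batch size follows the note above, and the function names are invented for this example.

```python
def batch(responses: list[str], size: int = 6) -> list[list[str]]:
    """Split responses into batches of at most `size`; small batches keep AI output consistent."""
    return [responses[i:i + size] for i in range(0, len(responses), size)]

def build_prompt(rubric_text: str, responses: list[str]) -> str:
    """Assemble one batch prompt: rubric as context, then numbered responses."""
    numbered = "\n\n".join(
        f"Response {i + 1}:\n{text}" for i, text in enumerate(responses)
    )
    return (
        "Use the following rubric to evaluate these student responses.\n"
        f"{rubric_text}\n\n"
        "For each response: (1) give a score for each rubric dimension; "
        "(2) identify the strongest element of their reasoning; "
        "(3) write one specific, developmental suggestion for improving "
        "their weakest dimension. Keep feedback to 3 sentences per student.\n\n"
        f"{numbered}"
    )
```

With 25 responses and a batch size of 6, this yields five prompts: four of six students and one of one, matching the 10–12 minute estimate at 2–3 minutes per batch.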
3
Review the AI output for accuracy
The quality control pass

Read the AI's feedback against the student response. The most common errors: (1) the AI misidentifies the claim (it takes the first sentence as the claim rather than the actual argumentative position); (2) the AI is overly positive about evidence when the student hasn't actually cited specific evidence; (3) the AI's developmental suggestion is vague ('add more evidence') rather than specific.

Time: 5 minutes for 25 responses
Flag vague suggestions for replacement: 'add more evidence' → 'in your second paragraph, name the specific treaty that supports your claim.' The specificity is what makes feedback developmental rather than generic — and it only takes a few seconds per response to add once the AI has provided the scaffold.
4
Personalise one element per student
What only you know about this student

For each student, add one personalised line based on what you know about their reasoning development over the term: 'This is a significant improvement from last week' or 'Your evidence integration has been your consistent strength — the area to focus on is counterargument, which you haven't yet fully engaged with in three consecutive pieces.'

Time: 3 minutes for 25 students
The personalisation is the feedback's most valuable element — it connects the student's current work to their trajectory. It is also the element that only the teacher can provide. This is where AI assistance and teacher knowledge combine: AI handles the consistent rubric application; the teacher adds the longitudinal context.

What this produces

Developmental reasoning feedback in 20 minutes
for a class of 25.

The full workflow — rubric setup (2 min), batch generation (12 min), review (5 min), personalisation (3 min) — takes 22 minutes for a class of 25, compared with 60–90 minutes for fully manual feedback. The quality differs from manual feedback in one important respect: the AI applies the rubric consistently across all students, which manual feedback does not — a tired teacher marking the 20th script at 11pm gives less consistent feedback than on the first.

Over a term, this workflow produces a dataset: each student's rubric scores across multiple pieces of reasoning work. This is the longitudinal reasoning development data from C5/A3 — the evidence that supports both student self-assessment and the school's case for facilitation as a measurable educational approach.
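One possible shape for that term-long log, sketched in Python under assumptions: the data layout and function name here are hypothetical, and the scores are invented for illustration. Each student maps to a list of per-piece rubric scores (1–4), and averaging each dimension across the term surfaces the weakest one, which feeds the personalised line in step 4.

```python
from statistics import mean

# Hypothetical layout for the longitudinal log: per student, one dict
# of rubric scores (1-4) per piece of reasoning work. Scores invented.
term_log = {
    "Student A": [
        {"claim_specificity": 2, "evidence_integration": 3,
         "counterargument_engagement": 1, "reasoning_visibility": 2},
        {"claim_specificity": 3, "evidence_integration": 3,
         "counterargument_engagement": 1, "reasoning_visibility": 3},
    ],
}

def focus_dimension(history: list[dict]) -> str:
    """Average each rubric dimension across the term; the lowest average is the next focus."""
    averages = {
        dim: mean(piece[dim] for piece in history)
        for dim in history[0]
    }
    return min(averages, key=averages.get)
```

For the invented scores above, counterargument engagement averages lowest, which is exactly the kind of observation the step-4 example ("the area to focus on is counterargument") draws on.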

🤖 SprintUp Education's AI feedback tool
SprintUp Education's AI assessment tools support the four-step workflow: generate the rubric from your learning objective, batch student responses with the rubric as context, review and adjust the output, and add personalisation. The tool logs rubric scores automatically — building the longitudinal dataset as you work. Free on every school account.

You've finished P7

The complete toolkit for
the teacher as facilitator.

P7 has covered the full facilitation toolkit: the theory that justifies it (C1), the lesson design that makes it measurable (C2), the questioning skills that drive it (C3), the group dynamics that enable it (C4), the assessment that evaluates it (C5), the school leadership that sustains it (C6), the subject-specific adaptations it requires (C7), and the AI tools that make it efficient (C8).

The practices in P7 compound with those in P6 (agile teaching) — a teacher who facilitates effectively and iterates based on formative data is operating both pillars simultaneously, producing better reasoning in students whose progress is continuously tracked and improved.
