Wellbeing data vs instructional data:
both matter, but only one feeds the loop.
Student voice collects two fundamentally different things depending on what you ask. “Did you enjoy today's lesson?” produces wellbeing and satisfaction data. “What part of today's topic are you most uncertain about?” produces instructional data. The first tells you how students feel. The second tells you what to teach differently tomorrow.
Both are legitimate and valuable. A school that ignores how students feel about learning produces technically informed but humanly disconnected instruction. But in the context of agile teaching, student voice's most underused function is as a precision data source that complements exit tickets — giving you qualitative information about what students think is happening in their learning, which exit tickets alone cannot capture.
Five student voice prompts that generate
agile data, not satisfaction scores.
"Identify the one step in today's process where you stopped feeling confident. Describe what you understood up to that point, and write the specific question you'd need answered to feel sure." This format forces specificity — students must name the exact step, not give a general impression. "12 students named Step 3" tells you exactly where to open tomorrow's lesson.
"Would you have understood this better with more worked examples, a diagram, a different explanation, or more time to practice?" Giving students a structured set of options produces more reliable data than open-ended "how should I teach this better?" — students can identify which option they needed even when they couldn't articulate a better approach from scratch.
"Rate your readiness to work on the next topic (not your enjoyment of this one): (a) ready — I understand the core concept; (b) almost — one more practice; (c) not yet — I still have a significant gap." Used before introducing a dependent concept. Students are often accurate predictors of their own readiness when the question is specific and the stakes are low.
"Where have you encountered this concept or something like it — inside or outside school?" This surfaces prior knowledge the lesson didn't activate, real-world connections that strengthen retention, and analogies that might be more accessible than the ones you used. Student responses to this prompt regularly produce better analogies than the teacher's prepared examples.
"In two sentences, explain today's central concept to a classmate who missed the lesson. Use the specific vocabulary we covered. Don't just define the term — explain how it works." Students who understood can do this. Students who didn't will avoid the causal language and describe peripheral details.
The timing that makes voice data
actionable rather than archival.
Student voice is most actionable when it is collected at the moment the learning is fresh — end of lesson, end of week, or at the natural boundary between topics. Voice collected in an end-of-term survey is too delayed to be actionable: by the time you read it, the class has moved on. Voice collected at the end of a lesson can inform tomorrow's teaching directly.
Once you have voice data —
how do you build on it?
A1 covered how to collect instructional voice data. A2 covers how to structure the learning so that data stays meaningful: flexible learning paths that give students multiple routes to the same objective, preserving the agile data loop while increasing student ownership of how they get there.