The problem with most exit tickets

You're collecting data you can't do anything with.

The exit ticket has become one of the most widely used formative assessment practices in schools. It is also one of the most widely misused. Walk into a staff room after school on any given day and you'll find teachers holding a stack of Post-it notes or index cards covered in student responses — and no clear idea what to do with them.

The problem isn't the practice. Exit tickets are a genuinely powerful data collection tool when designed correctly. The problem is what most teachers are asking students to produce. A useful exit ticket generates a specific, actionable signal about a specific gap in student understanding. Most exit tickets generate something else entirely: a measure of how confident students feel, or a test of whether they can remember what was just said.

Neither of those is entirely useless. But neither tells you what to change about tomorrow's lesson. And that's the only question an agile teacher needs an exit ticket to answer.

📊 What the research says about exit ticket quality
Dylan Wiliam's work on formative assessment consistently shows that the quality of instructional response depends almost entirely on the quality of the data collected. High-inference data — "how well did you understand?" — supports only blunt responses: "reteach the whole topic." Low-inference data — "which of these two explanations is better supported by the evidence, and why?" — supports precise responses: "the misconception is specific; I can address it in the opening five minutes of tomorrow's lesson." The feedback loop is only as useful as the signal that feeds it.
Wiliam, D. — Embedded Formative Assessment, 2011 · Black, P. & Wiliam, D. — Inside the Black Box, 1998
The core design principle

The difference between vague data and actionable data.

There is one design principle that separates exit tickets that produce usable data from those that don't: the question must require students to use the concept, not just recall it.

Recall questions test whether students can reproduce something from short-term memory. A student who was present and paying attention can usually answer a recall question correctly regardless of whether they understood the lesson. The question “name three causes of the French Revolution” can be answered by a student who has no idea how the causes relate to each other.

Application questions require students to do something with the concept — explain it in a new context, use it to predict an outcome, identify which example fits the principle and which doesn't. These questions cannot be answered correctly from memory alone. The pattern of wrong answers in an application exit ticket tells you precisely what students misunderstood and what you need to address tomorrow.

💡 The self-test for any exit ticket question
Ask yourself: could a student who was present but confused still answer this question correctly? If yes, you've designed a recall check. If no, you've designed a formative check.
Format · Example prompt · What it actually measures
Confidence rating · "Rate your understanding from 1–5" · Student anxiety and self-perception. High-confidence wrong answers are invisible.
Content restate · "Write the key term from today's lesson" · Short-term memory. Doesn't predict whether the lesson needs changing.
Muddiest point · "What's one thing you're still unsure about?" · Student-identified gaps. Useful when responses are specific enough to act on.
Application prompt · "Apply today's concept to this new scenario" · Genuine understanding. Wrong answers reveal exact misconceptions.
Predict and explain · "What do you think will happen if X? Why?" · Causal reasoning. Reveals whether students understand relationships, not just facts.
The five formats

Five exit ticket formats that consistently produce actionable data.

These five formats cover the majority of what teachers need from end-of-lesson data. Each is designed to take under 4 minutes for students to complete and under 3 minutes for a teacher to scan across a class of 25–30 students. The format you choose depends on what question you most need answered about tomorrow's lesson.

1. The novel application prompt
Best for: checking genuine understanding

Present a new scenario that wasn't covered in the lesson but requires the same concept to navigate. Students who understood the lesson can transfer the principle. Students who memorised examples cannot. The quality of reasoning in wrong answers tells you exactly which aspect of the concept wasn't understood.

Example — Year 10 History
"We discussed how nationalism contributed to WWI. A historian argues that nationalism was a minor factor compared to imperial rivalry. Based on today's lesson, do you agree or disagree? Give one piece of evidence for your position." What to look for: students who can identify relevant evidence vs students who can only restate your examples.
2. The specific muddiest point
Best for: surfacing unknown gaps

A plain "muddiest point" prompt produces vague responses. The specific version adds a constraint: the student must identify the exact moment in the lesson when they lost the thread, or the exact step in the process they don't understand. Specificity in the prompt produces specificity in the response.

The specific version
"Identify one specific step in today's process that you're not fully confident about. Describe what you do understand, and write the precise question you'd need answered to feel sure." What to look for: clusters of responses pointing to the same step. If 12 students identify Step 3 as the problem, Step 3 is tomorrow's opening.
3. The predict and explain
Best for: causal reasoning, science, economics, history

Ask students to predict an outcome and explain the causal chain. This format reveals whether students understand relationships between concepts — which is almost always what you actually taught — rather than just the concepts themselves.

Example — Year 9 Science
"A plant is moved from a sunny windowsill to a dark room. Predict what happens to its rate of photosynthesis over the next 24 hours. Explain the mechanism in 2–3 sentences." What to look for: the explanation, not the prediction. Most students can predict correctly by guessing. The explanation reveals whether they understand the biochemical mechanism.
4. The misconception check
Best for: topics with known predictable errors

In every subject there are predictable misconceptions — errors that appear reliably across cohorts because they reflect plausible but incorrect mental models. Present the misconception as a plausible-sounding statement and ask students to agree, disagree, or modify.

Example — Year 10 Maths
"A classmate says: 'When you divide a number, it always gets smaller.' Is this always true, sometimes true, or never true? Explain your answer with an example." What to look for: the explanation. Students who say "sometimes true" with a correct counterexample (for instance, 10 ÷ ½ = 20) understand what happens when the divisor is less than one. Those who say "always true" still hold the default misconception.
5. The two-sentence teach-back
Best for: complex processes, definitions, models

Ask students to explain the lesson's core concept as if teaching it to someone who wasn't there. The constraint of two sentences forces compression — students can't hide a fuzzy understanding behind a long response. Students who understood use precise vocabulary in context.

Any subject
"In exactly two sentences, explain [today's central concept] to a classmate who missed the lesson. Use the specific terms we covered. Don't just give a definition — explain how it works." What to look for: do students use the causal or mechanical language of the topic, or do they describe it in their own words that avoid the hard part?
The 3-minute scan

How to read 30 responses in 3 minutes flat.

The most common objection to exit tickets is time: “I can't mark 30 responses every night.” The objection mistakes marking for scanning. Exit tickets are not homework — they are not graded, not returned with individual feedback, and not read word by word. They are scanned for patterns.

A scan takes 3 minutes for a class of 30. The protocol is specific: you are not reading for correctness. You are reading for the gap that appears most frequently. Once you've identified that gap, you have your opening 5 minutes of tomorrow's lesson. Put the responses down.

1. Sort into three piles as you read
Understood / Partial / Missing

Don't read each response fully on the first pass. Skim for the key indicator — the part of the response that reveals whether the central concept landed. Sort into three piles: understood, partial, missing. This takes 90 seconds for a class of 30 with practice.

What to look for
Not whether the student is right — whether the core mechanism or relationship is present in their response. Are they using the subject vocabulary correctly and in context?
2. Read the "partial" pile carefully
This is where tomorrow's lesson lives

The "understood" pile tells you who is ready to move on. The "missing" pile tells you who needs a fundamentally different approach. But the "partial" pile — students who almost got it — tells you what to address tomorrow. Read these responses looking for the specific step or relationship that broke down.

The question to ask
"What do these students understand, and what specific thing are they getting wrong?" That one sentence is tomorrow's lesson adjustment.
3. Write one sentence and put them down
The purpose is a decision, not a full analysis

Write one sentence: "Tomorrow: open with [specific intervention] because [specific gap]." That sentence is the output of the scan. You don't need to analyse further. The exit ticket's job is done when that sentence exists.

Example output sentences
"Tomorrow: open with a worked example of Step 3 — 18/25 students applied Step 2 correctly but stopped there." · "Tomorrow: challenge the misconception that division always reduces — 14 students stated this as true."
AI makes this faster

Generate the exit ticket before you write the lesson plan.

One of the most effective uses of AI for formative assessment is exit ticket generation. If you write the exit ticket before the lesson — not as an afterthought at the end — it forces you to be precise about what you're actually trying to teach. The exit ticket becomes the success criterion, and the lesson is designed to produce that outcome.

🤖 How SprintUp Education generates exit tickets
SprintUp Education's AI exit quiz tool takes your learning objective as the input and generates a three-question formative check in under 30 seconds. The tool uses the application-prompt format as default — every question requires students to use the concept, not just recall it. The output includes the questions, the expected correct responses, common wrong-answer patterns to watch for, and a one-line prompt for tomorrow's lesson if the most common wrong answer appears. Free on every school account.

The prompt is simple: “My learning objective for this lesson is [objective]. Generate a 3-question exit ticket using application prompts. Include what correct answers look like and the most common misconceptions to watch for.”