Why standard observation frameworks fail

They were designed to assess direct instruction,
not facilitation.

Most school observation frameworks reward clear explanations, visible learning objectives written on the board, efficient use of time, on-task student behaviour, and smooth transitions. These are valid indicators of quality direct instruction. They are poor indicators of quality facilitation — and in some cases, actively penalise it.

A facilitating teacher who waits 8 seconds after a question appears to be losing control of the pace. A teacher who allows extended silence while students struggle productively appears to be failing to support students. A teacher who changes direction mid-lesson because the exit ticket revealed a gap appears to be poorly planned. Under a direct instruction observation framework, every one of these professional behaviours registers as a weakness.

⚠️ The framework determines the behaviour
The observation framework signals to teachers what quality looks like. If the framework rewards compliance behaviours (objective on the board, all students on-task), teachers will optimise for compliance. If it rewards responsiveness (evidence of formative data use, quality of facilitation moves, reasoning depth in student responses), teachers will optimise for those. Updating the framework is one of the highest-leverage leadership actions in facilitation implementation.
What to observe in facilitated classrooms

Four observation foci that reveal
facilitation quality.

1. Teacher questioning quality
Are questions generating reasoning, or just recall?

Observe the questions the teacher asks during the lesson. Count the proportion that require recall vs the proportion that require reasoning (application, analysis, evaluation). Note whether the teacher uses wait time (3+ seconds after questions requiring reasoning). Note whether the teacher probes student responses or accepts first answers.

What good looks like
High proportion of reasoning questions. Wait time visible and maintained. Teacher probes ('Why do you think that?' 'What would change if...?') rather than accepting or correcting. Teacher responds to wrong answers by exploring the reasoning, not correcting the conclusion.
2. Student response quality
Are students reasoning, or recalling?

Observe student verbal contributions. Are they single words or full sentences? Do they explain their reasoning or just state conclusions? Do they build on each other's contributions, or only respond to teacher questions? Are students who typically don't contribute participating?

What good looks like
Multi-sentence responses with reasoning. Student-to-student interaction without teacher mediation. Participation distributed across the class, not concentrated in the same 5 students. Students challenging each other's positions without teacher prompting.
3. Teacher response to difficulty
Does difficulty produce support or rescue?

Observe how the teacher responds when students are stuck. Does the teacher provide the answer (rescue), ask a question that activates the relevant knowledge (support), or wait longer for the student to work through it (tolerate productive difficulty)? The latter two indicate facilitation skill.

What good looks like
Teacher pauses visibly before responding to student difficulty. When responding, asks a question rather than providing content. Circulates during group work and asks probing questions rather than correcting or completing work for students.
4. Evidence of formative data use
Does the lesson respond to what the teacher knows about students?

Ask the teacher before the observation: 'What did yesterday's exit data show? How has that influenced today's lesson?' The quality of the answer reveals whether the teacher is operating the agile loop. During the lesson, observe whether there are explicit moments where the teacher collects formative data and whether the lesson visibly responds to it.

What good looks like
Teacher can name the specific gap yesterday's data showed and explain exactly how today's opening addresses it. During the lesson, teacher collects mid-lesson data (a quick show of hands, a written check) and explicitly adjusts based on it.
The post-observation conversation

What you ask determines
what the teacher learns.

The post-observation conversation is as important as the observation itself. The questions the observer asks signal what they valued in what they saw — which in turn signals what the teacher should optimise for next time. For facilitation observation, the conversation should centre on questions that invite the teacher to analyse their own practice:

💬 The four post-observation questions
(1) “What did you notice about student reasoning quality in today's lesson?”

(2) “What formative data were you collecting, and how were you using them?”

(3) “When was the most productive period of the lesson — and what created it?”

(4) “What would you change about the facilitation, and why?”

These questions treat the teacher as a professional analysing their own practice, not as a performer whose lesson was being evaluated.