Why AI produces direct instruction by default

AI generates what you ask for.
Most prompts ask for a lecture.

AI lesson planning tools produce very different outputs depending on how they're prompted. A prompt for a generic lesson plan produces a direct instruction sequence: introduction, teaching points, worked example, practice task. This is the statistical average of lesson plans in training data — and the statistical average of classroom lessons is a lecture.

A facilitating teacher who uses AI to generate lesson plans and then teaches them as generated will deliver direct instruction lessons that they didn't design themselves. The result combines the disadvantages of both approaches: the teacher hasn't thought through the content (because the AI did that), and the pedagogy isn't facilitative (because the AI defaulted to instruction). The solution is to specify facilitation explicitly in the prompt.

The facilitation lesson planning prompt

Five specifications that transform
the AI's output.

1. Specify the understanding goal, not the content
Not "a lesson on photosynthesis" but "a lesson where students can explain..."

This single change produces a backwards-designed lesson rather than a coverage-based one. The AI generates an exit question first, then an activity that leads to it, rather than deciding what to cover and then asking what students should know about it.

What this change produces
'A lesson on photosynthesis' → AI generates: introduction, key facts, worked examples, practice quiz.
'A lesson where students can explain the relationship between light, chlorophyll and glucose production in a novel plant scenario' → AI generates: a hook problem, a brief instruction phase, a structured discussion with Socratic questions, and a transfer-question exit ticket.
2. Specify the lesson structure explicitly
"Use a hook-teach-explore structure: start with a problem that requires the concept before teaching it"

Without this specification, the AI produces introduction-explanation-practice. With it, it produces a problem that creates the need for the concept (hook), a brief direct instruction phase (teach), and a facilitated exploration phase (explore). The structure changes the entire learning experience.

Why you must name the structure
AI training data is dominated by conventional lesson plans. The hook-teach-explore structure is less common in training data than introduction-instruction-practice. Without an explicit name, AI defaults to the statistical majority.
3. Request a formative check calibrated to the objective
"Include a 3-question formative check — recall, understanding, and application"

AI-generated formative checks default to recall questions unless told otherwise. This specification produces a backwards-designed assessment instrument alongside the lesson — with the application question using a novel context not covered in the lesson.

Add this to your prompt
'Include a 3-question formative check at the end — recall, understanding, and application. The application question should use a novel context not covered in the lesson.' This produces a diagnostic instrument, not just a comprehension quiz.
4. Request facilitation questions for the discussion phase
"Generate 5 Socratic questions — include at least one that probes assumptions and one that explores implications"

AI generates good Socratic questions when asked for them specifically. Without this request, the AI produces comprehension questions that could equally serve a direct instruction lesson. Specifying the Socratic question types (assumption probes, implication explorers) is critical.

What you get with this specification
Instead of 'What is photosynthesis?' (recall), AI generates 'You argued that plants always need sunlight to produce energy — what assumption is that based on? Can you think of a plant or scenario that might challenge that assumption?' (assumption probe).
5. Specify the differentiation structure
"Include three levels of the application task — consolidation, core, and extension — with the same exit question for all three"

AI can generate differentiated activities efficiently when the differentiation framework is specified. The three-level structure from P6/C8/A3 applies here: all three levels lead to the same exit question, maintaining the agile data loop.

Why the same exit question matters
If the three levels have different exit questions, the formative data is incomparable. The teacher cannot tell whether a student who got the consolidation exit question right would also get the core version right. Common exit question = comparable data = functional agile loop.
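The five specifications above can also be checked mechanically before a prompt is sent. The following is an illustrative sketch, not part of the text's method: a rough keyword heuristic in Python that flags which specifications a draft prompt is missing. The keyword lists are assumptions chosen to match the phrasing used in this section; adapt them to your own wording.

```python
# Hypothetical checker: flag which of the five facilitation
# specifications a draft lesson-planning prompt fails to mention.
# Keyword lists are illustrative assumptions, not a standard.
SPEC_KEYWORDS = {
    "understanding goal": ["students can"],
    "lesson structure": ["hook"],
    "formative check": ["recall", "understanding", "application"],
    "Socratic questions": ["socratic"],
    "differentiation": ["consolidation", "core", "extension", "exit question"],
}

def missing_specifications(prompt: str) -> list[str]:
    """Return the names of specifications the prompt does not mention."""
    lowered = prompt.lower()
    return [
        name
        for name, keywords in SPEC_KEYWORDS.items()
        if not all(keyword in lowered for keyword in keywords)
    ]

draft = "Design a lesson plan for Year 9 on photosynthesis."
print(missing_specifications(draft))  # a bare prompt misses all five specs
```

A draft that fails any check is likely to pull the AI back toward the introduction-instruction-practice default described above.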
🤖 The complete facilitation lesson prompt: copy and fill
“Design a facilitated lesson for [year group] on [topic]. Learning goal: students can [understanding objective — what they'll be able to do, not what you'll cover]. Structure: (1) Hook (8 min) — a problem or question that requires today's concept before teaching it. (2) Instruction (15 min) — brief direct instruction of the concept, motivated by the hook. (3) Facilitated discussion (20 min) — structured activity with 5 Socratic questions that probe assumptions and implications. (4) Formative check (5 min) — 3 questions: recall, understanding, application to a novel context. Include: a differentiated application task with consolidation/core/extension levels, all leading to the same exit question.”
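If you reuse this prompt across topics and classes, it can help to keep it as a template and fill in the three variable parts programmatically. A minimal Python sketch, assuming nothing beyond standard string formatting (the field names `year_group`, `topic`, and `objective` are illustrative choices, not any tool's API):

```python
# Sketch: fill the facilitation lesson prompt template with
# class-specific values before pasting it into an AI tool.
PROMPT_TEMPLATE = (
    "Design a facilitated lesson for {year_group} on {topic}. "
    "Learning goal: students can {objective}. "
    "Structure: (1) Hook (8 min): a problem or question that requires "
    "today's concept before teaching it. "
    "(2) Instruction (15 min): brief direct instruction of the concept, "
    "motivated by the hook. "
    "(3) Facilitated discussion (20 min): structured activity with 5 "
    "Socratic questions that probe assumptions and implications. "
    "(4) Formative check (5 min): 3 questions: recall, understanding, "
    "application to a novel context. "
    "Include: a differentiated application task with "
    "consolidation/core/extension levels, all leading to the same exit "
    "question."
)

def build_prompt(year_group: str, topic: str, objective: str) -> str:
    """Return the filled-in facilitation prompt, ready to paste."""
    return PROMPT_TEMPLATE.format(
        year_group=year_group, topic=topic, objective=objective
    )

prompt = build_prompt(
    year_group="Year 9",
    topic="photosynthesis",
    objective=(
        "explain the relationship between light, chlorophyll and "
        "glucose production in a novel plant scenario"
    ),
)
print(prompt)
```

Keeping the structure fixed and varying only the three fields preserves the specifications that steer the AI away from its direct instruction default.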
What to edit in the AI output

AI generates the structure.
You adjust for your class.

AI-generated facilitated lesson plans are typically accurate in structure but generic in content. The three things most worth editing: (1) the hook problem — ensure it genuinely creates cognitive conflict rather than just introducing the topic; (2) the Socratic questions — check that they probe the specific misconceptions your class is likely to hold; (3) the application task's novel context — ensure it is genuinely novel relative to your class's prior experience.

The editing pass takes 5–10 minutes; the generation pass takes 30 seconds. Roughly ten minutes of combined work produces a lesson plan that would have taken 40–60 minutes to develop manually, and one that is often better structured than a manually developed plan, because the AI enforces the backwards-design sequence.