AI generates what you ask for.
Most prompts ask for a lecture.
AI lesson planning tools produce very different outputs depending on how they're prompted. A prompt for a generic lesson plan produces a direct instruction sequence: introduction, teaching points, worked example, practice task. This is the statistical average of lesson plans in training data — and the statistical average of classroom lessons is a lecture.
A facilitating teacher who uses AI to generate lesson plans and then teaches them as generated will deliver direct instruction lessons that they didn't design themselves. The result combines the disadvantages of both approaches: the teacher hasn't thought through the content (because the AI did that), and the pedagogy isn't facilitative (because the AI defaulted to instruction). The solution is to specify facilitation explicitly in the prompt.
Five specifications that transform
the AI's output.
The first specification: state the exit question before anything else. This single change produces a backwards-designed lesson rather than a coverage-based one. The AI generates an exit question first, then an activity that leads to it, rather than deciding what to cover and then asking what students should know about it.
The second specification: require a hook-teach-explore structure. Without it, the AI produces introduction-explanation-practice. With it, it produces a problem that creates the need for the concept (hook), a brief direct instruction phase (teach), and a facilitated exploration phase (explore). The structure changes the entire learning experience.
The third specification: request a formative check with both recall and application questions. AI-generated formative checks default to recall questions unless told otherwise. This specification produces a backwards-designed assessment instrument alongside the lesson, with the application question using a novel context not covered in the lesson.
The fourth specification: ask for Socratic questions by name. AI generates good Socratic questions when asked for them specifically; without this request, it produces comprehension questions that could be used in direct instruction. Naming the question type is critical.
The fifth specification: name the differentiation framework. AI can generate differentiated activities efficiently when the framework is specified. The three-level structure from P6/C8/A3 applies here: all three levels lead to the same exit question, maintaining the agile data loop.
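The five specifications above can be assembled into a single reusable prompt. The sketch below is illustrative only: the wording of each specification, the `build_prompt` helper, and the topic and grade parameters are assumptions, not a canonical template.

```python
# A minimal sketch of a facilitation-first lesson prompt builder.
# All specification wording is illustrative; adapt it to your own phrasing.

SPECS = [
    "State the exit question first, then design the lesson backwards from it.",
    "Use a hook-teach-explore structure: a hook problem that creates the need "
    "for the concept, a brief direct instruction phase, then facilitated exploration.",
    "Include a formative check with one recall question and one application "
    "question set in a novel context not covered in the lesson.",
    "Write Socratic questions for the exploration phase, not comprehension questions.",
    "Differentiate the exploration activity into three levels that all lead "
    "to the same exit question.",
]


def build_prompt(topic: str, grade: str) -> str:
    """Assemble a facilitated-lesson prompt from the explicit specifications."""
    header = f"Plan a facilitated lesson on {topic} for {grade}."
    numbered = "\n".join(f"{i}. {spec}" for i, spec in enumerate(SPECS, start=1))
    return f"{header}\nFollow these specifications:\n{numbered}"


print(build_prompt("fractions as division", "grade 5"))
```

Keeping the specifications in one place means every generated plan carries all five; omitting any one of them lets the AI fall back to its introduction-explanation-practice default.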
AI generates the structure.
You adjust for your class.
AI-generated facilitated lesson plans are typically accurate in structure but generic in content. The three things most worth editing: (1) the hook problem — ensure it genuinely creates cognitive conflict rather than just introducing the topic; (2) the Socratic questions — check that they probe the specific misconceptions your class is likely to hold; (3) the application task's novel context — ensure it is genuinely novel relative to your class's prior experience.
The editing pass takes 5–10 minutes; the generation pass takes 30 seconds. That combined investment of roughly ten minutes produces a lesson plan that would have taken 40–60 minutes to develop manually, and that is often better structured than a manually developed plan because the AI enforces the backwards design sequence.
Lesson plans need questions.
A2 generates them in 30 seconds.
A2 covers AI question generation — the Socratic sequence, discussion starters, and misconception challenges that the facilitated lesson plan requires. Three question types, the prompt template for each, and the editing pass that makes the output class-specific.