edu-demo-builder
Build educational demos with excellent UX. Use when spawned by the orchestrator to create or improve interactive visualizations. Focus on: obvious next action, no scrolling, persistent state display. You don't see benchmarks - follow UX principles. Copy the base file to your output, then edit your copy.
When & Why to Use This Skill
The Educational Demo Builder is a specialized Claude skill designed to create high-quality, interactive web-based visualizations for educational purposes. It prioritizes core UX principles—such as persistent state display, intuitive navigation, and 'no-scrolling' layouts—to ensure complex concepts and algorithms are easily understood. By focusing on visual storytelling and step-by-step traceability, it transforms abstract logic into clear, engaging, and effective learning tools.
Use Cases
- Algorithm Visualization: Creating interactive, step-by-step guides for complex data structures and algorithms (e.g., sorting, pathfinding) to help students visualize execution flow.
- Interactive Science Simulations: Building web-based demos for physics, math, or engineering concepts where users can manipulate variables and observe real-time visual feedback.
- Software Workflow Tutorials: Developing intuitive, single-page interactive walkthroughs for software features to improve user comprehension without overwhelming them with text.
- Educational Prototyping: Rapidly generating high-fidelity educational tools that follow strict design principles like 'obvious next action' to ensure learners never guess what to do next.
- Visual Concept Teaching: Transforming static textbook explanations into dynamic HTML/JS components that allow learners to 'see' the underlying structure of a concept.
| name | edu-demo-builder |
|---|---|
| description | Build educational demos with excellent UX. Use when spawned by the orchestrator to create or improve interactive visualizations. Focus on: obvious next action, no scrolling, persistent state display. |
Educational Demo Builder
Build demos. You don't see the benchmark - focus on UX principles.
DESIGN THE VISUAL CONCEPT FIRST
Before writing ANY code, describe the visual concept that makes the algorithm structure OBVIOUS.
- Sketch the visual layout: Where do elements appear? How does the algorithm flow visually?
- Design for clarity: What colors, spacing, or animations make the structure visible?
- Test comprehension: If someone saw a static screenshot, could they understand the concept?
- Plan element traceability: Can the learner follow where each element goes through the algorithm?
Only AFTER designing the visual concept, proceed to implementation.
Your Assignment
Orchestrator specifies your direction:
- Generation number - which iteration
- Vibe to explore - the creative direction (narrative, minimalist, comparison, etc.)
- Operations to apply - the specific improvements to focus on
You execute within that boundary. Don't invent operations or change direction—apply what's assigned.
Orchestrator controls strategy. You control execution.
Default: Copy Base First
Always start by copying the base file:
cp problems/base.html problems/<name>/generations/gen{N}/agent_{id}.html
This gives you a working foundation with built-in screenshot capture system.
The base includes:
- HTML structure and styling
- Screenshot capture via html2canvas
- Evaluator controls (manual capture, download)
- API: window.screenshotManager.captureState(label)
Then study references, decide your strategy (patch vs fresh), and apply operations.
You can opt to start from scratch if the base won't serve your vision—but document why you discarded it.
Built-in Screenshot System
The base.html includes automatic screenshot capture:
For your algorithm code:
// Capture state after important step
await window.screenshotManager.captureState('step_name');
// Or auto-capture with timestamp
window.screenshotManager.captureStep('algorithm_event');
For evaluators:
- Click "📸 Capture State" button to manually capture
- Click "⬇️ Download Screenshots" to download all captured PNGs
- Screenshots are saved in the browser automatically; the evaluator can download them
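If you want the key algorithm states captured without the evaluator clicking through manually, one option is to call captureState at fixed points in your driver. A minimal sketch, assuming base.html is loaded; runDemo and applyStep are hypothetical names for your own code:

```javascript
// Minimal sketch: capture the initial, per-step, and final states of a demo run.
// runDemo and applyStep are hypothetical placeholders for your own code;
// only window.screenshotManager.captureState(label) comes from base.html.
async function runDemo(steps, applyStep) {
  await window.screenshotManager.captureState('initial');

  for (let i = 0; i < steps.length; i++) {
    applyStep(steps[i]); // update the DOM for this step (your visualization code)
    await window.screenshotManager.captureState(`step_${i + 1}`);
  }

  await window.screenshotManager.captureState('final');
}
```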
Worker Capabilities
You have three core capabilities to leverage:
1. Visual Thinking
- Sketch layouts (textual descriptions of visual structure)
- Plan color schemes and visual hierarchy
- Design interaction flow
- Before ANY code: describe what learner SEES
2. Verification
- Test interactivity in browser (Chrome E2E)
- Validate test cases pass
- Check for bugs and edge cases
- Verify learning outcome is achieved
- Screenshot key states to prove correctness
3. Coding
- HTML/CSS/JavaScript implementation
- Clean, readable code
- Performance optimization
- Browser compatibility
- Accessibility features
Workflow: Copy → Study → Discover → Build → Verify
PHASE 0: COPY BASE FIRST
├─ cp base.html generations/gen{N}/agent_X.html
├─ You now have a working starting point
└─ (Or delete if you choose fresh start—document why)
PHASE 1: STUDY & DISCOVER (GEN 1)
├─ Read base.html (your foundation)
├─ Read problem.md (concept to teach)
├─ Study assigned Vibe (narrative, minimalist, comparison, etc.)
├─ Copy base.html
├─ Adapt for your vibe
└─ Output: approach documented
PHASE 1: STUDY & DISCOVER (GEN 2+)
├─ Read LESSONS_LEARNED.md (what worked overall)
├─ Orchestrator specifies: "STUDY these predecessors:"
│ ├─ /gen{N-1}/agent_4.html (comparison vibe - scored 92)
│ ├─ /screenshots/agent_4_*.png (visualize it)
│ └─ Similar for other predecessors orchestrator points to
├─ Only study what orchestrator specified (not the whole folder)
├─ Decide: Patch a winner, or start fresh?
└─ Output: approach decision documented
**Study your assigned references:**
- What visual patterns made learning happen in these specific vibes?
- How did these agents present the algorithm?
- How can you improve or blend them?
PHASE 2: VISUAL THINKING (if patching or fresh)
├─ Describe visual concept (textual, not code)
├─ Plan layout and element placement
├─ Design color scheme and hierarchy
├─ Map interaction flows
└─ Output: visual_concept.md (document your design)
PHASE 3: CODING & BUILDING
├─ If patching: Edit() the copied base.html iteratively
├─ If fresh: Write() new agent_X.html from scratch
├─ Test locally in browser
├─ Fix issues as found
└─ Output: agent_X.html
PHASE 4: VERIFICATION
├─ Navigate to local server (http://localhost:9999/...)
├─ Execute key interactions (step through algorithm)
├─ Screenshot initial state, mid-state, final state
├─ Verify test cases if provided
└─ Output: screenshots/, agent_X_approach.md
PHASE 5: DOCUMENT YOUR DISCOVERY
├─ Create: agent_X_approach.md
├─ Explain: Did you patch or start fresh? Why?
├─ Show: How you applied operations
├─ Detail: What you preserved or changed
└─ This helps orchestrator understand your reasoning
Multiple agents share the same base. NEVER edit the original base.html itself.
Theme is INPUT (Separate from Development)
Theme comes from orchestrator prompt, not your choice.
If theme specified:
## Theme: dark
Use: dark background, light text, cyan/green accents
If no theme specified: use clean neutral (white bg, dark text).
Don't invent themes. Focus on functionality and UX, not colors.
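As a sketch of honoring an assigned theme without inventing one, you can route all colors through a few CSS custom properties and set them once at load. The property names and hex values below are illustrative assumptions, not part of base.html:

```javascript
// Sketch: apply the orchestrator-specified theme through CSS custom properties.
// The property names (--bg, --fg, --accent) and colors are illustrative.
const THEMES = {
  dark:    { '--bg': '#111827', '--fg': '#e5e7eb', '--accent': '#22d3ee' },
  neutral: { '--bg': '#ffffff', '--fg': '#1f2937', '--accent': '#2563eb' },
};

function applyTheme(name) {
  const theme = THEMES[name] ?? THEMES.neutral; // no theme specified -> clean neutral
  for (const [prop, value] of Object.entries(theme)) {
    document.documentElement.style.setProperty(prop, value);
  }
}

applyTheme('dark'); // the theme name comes from the orchestrator prompt, not your choice
```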
Core Rules
Educational Value First:
Does this demo teach the concept effectively? Would a student understand WHY the algorithm works? Could they explain it to someone else?
UX Excellence:
User should NEVER guess what to do next. Learning happens at HUMAN speed. If you printed a screenshot, could you still learn from it?
UX Requirements
1. Obvious Next Action
- Single clear button (or step indicator showing which)
- No competing buttons
- Label changes based on state
2. No Scrolling
- Fit in viewport (100vh)
- Floating panels, not fixed sidebars
- Controls dock to edges
3. Show Algorithm State
- WHERE in the algorithm
- WHAT just happened
- WHAT to do next
- Progress always visible
Persistent > Ephemeral
Good: Labels that stay, step indicator visible, progress shown
Bad: Tooltips that disappear, auto-advance, hover-only info
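One way to satisfy both "obvious next action" and "show algorithm state" is a single button and a persistent status line that update together on every step. A minimal sketch; the element ids (#next-btn, #status) and the steps array are hypothetical, not part of base.html:

```javascript
// Sketch: one clear button plus a persistent status line, both driven by state.
// The element ids and the steps array are illustrative, not from base.html.
const steps = ['Compare 3 and 7', 'Swap 7 and 5', 'Compare 5 and 7'];
const nextBtn = document.querySelector('#next-btn');
const status = document.querySelector('#status');
let completed = 0; // how many steps have run so far

function render() {
  if (completed === 0) {
    status.textContent = `Ready: ${steps.length} steps ahead`;
    nextBtn.textContent = 'Start';
  } else if (completed < steps.length) {
    status.textContent = `Step ${completed} of ${steps.length}: ${steps[completed - 1]}`;
    nextBtn.textContent = 'Next step';
  } else {
    status.textContent = `Done (${steps.length} steps): ${steps[steps.length - 1]}`;
    nextBtn.textContent = 'Restart';
  }
}

nextBtn.addEventListener('click', () => {
  completed = completed < steps.length ? completed + 1 : 0; // advance, or restart
  render();
});

render(); // the label and status are visible before the first click
```

Because the label and status are regular DOM text rather than tooltips, they survive a printed screenshot.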
Building ON Winners
When orchestrator assigns you to improve a predecessor:
STEP 1: Copy the base
cp base.html generations/gen{N}/agent_X.html
STEP 2: Study what worked
Read(generations/gen{N-1}/winning_agent.html)
Read(LESSONS_LEARNED.md)
# Understand why this predecessor resonated
STEP 3: Discover your approach
- Can you patch this winner with improvements?
- Or does it need a fresh approach?
- How do operations [fix_bugs, refine_details] apply?
STEP 4: Build
- If patching: Edit iteratively, preserve winning parts
- If fresh: Write new version, document why
STEP 5: Document
Create agent_X_approach.md:
- Did you patch or start fresh?
- Why did you choose that strategy?
- How did you apply operations?
- What did you keep vs change?
Document Operations Applied
After implementation, create:
# agent_X_approach.md
## Approach
Improve gen{N-1}/agent_X (narrative - scored 87/100)
## Operations Applied
1. fix_bugs - Fixed text overflow in legend
2. refine_details - Improved spacing between elements
3. add_sophistication - Added color-coded node highlighting
## Key Changes
- Increased padding from 8px to 12px for readability
- Added hover state showing node depth
- Improved legend visibility with semi-transparent background
## Verification
- All test cases pass
- No console errors
- Smooth interactions at 60fps
This helps orchestrator understand what you did and why.
Output
Write to: generations/gen{N}/agent_{id}.html
Also create:
generations/gen{N}/agent_{id}_approach.md - Document your design and operations
generations/gen{N}/screenshots/agent_{id}_*.png - Key states