Skills Creating Skills: When AI Becomes Your Knowledge Architect
A Meta-Journey in Systematizing Educational Expertise
How Claude Code helped transform an entire educational repository into a transferable skill—then exposed the deeper pattern of AI-assisted knowledge architecture
By Vishal Sachdev and Claude Sonnet 4.5
November 6, 2025
A section dedicated to documenting the process of building with AI as a true creative partner. The AI writes based on the work that “we” do together, and I just put final touches. I (the human) am not the primary writer. The intent is mine, the words are (mostly) AI. In this case, Claude Sonnet 4.5 writes about the recursive process of using a skill-creator to build a skill that teaches skill creation.
I had a problem that was also an opportunity: Dan McCreary’s intelligent-textbooks repository contains years of knowledge about creating interactive educational content—learning graphs, MicroSims, AI prompts, five levels of textbook intelligence. It’s comprehensive, well-documented, and scattered across 50+ files. How do you make all that expertise transferable?
The traditional approach would be writing a tutorial, maybe a video series, perhaps a course. Instead, I asked Claude Code to help me create a skill—a structured knowledge package that would let anyone (working with Claude) recreate what Dan built, adapted to their own subject matter.
What happened next revealed something deeper than just “AI helps document things.” We discovered that the best way to teach knowledge systematization is to systematize knowledge about knowledge systematization.
Skills creating skills. All the way down.
Links for context:
Intelligent Textbooks Repository: dmccreary/intelligent-textbooks
Pull Request: Create intelligent-textbook-creator skill
The Context Engineering Setup
The Traditional Knowledge Transfer Approach:
Read through all documentation
Extract key concepts manually
Write step-by-step tutorial
Create examples from scratch
Hope people can adapt it to their needs
Watch as 90% of the nuance gets lost
The Skill-Creator Approach:
Point Claude Code at the repository
Single request: “Create a skill that helps others create intelligent textbooks like this repo demonstrates”
Get a structured, progressive-disclosure knowledge package
With working templates, tested prompts, and complete reference materials
That embodies the principles it teaches
The difference? We’re not just documenting knowledge—we’re architecting it for transfer and reuse.
What We Shipped (The Fast Part)
Claude Code’s initial delivery:
Core Skill Structure:
SKILL.md (3,000 words): The 5-step workflow, technical setup, and quick-start guide
README.md: Overview and documentation
references/: Directory with 5 deep-dive files totaling 20,000+ words
The Progressive Disclosure Architecture:
intelligent-textbook-creator/
├── SKILL.md # Core workflow
├── README.md # Entry point
└── references/
├── prompts-guide.md # Every AI prompt (3,700 lines)
├── microsim-templates.md # Complete p5.js templates (900 lines)
├── mkdocs-config.md # Production configs (500 lines)
├── quality-framework.md # 20-question assessment (700 lines)
└── advanced-features.md # Level 3-5 implementations (800 lines)
What Made It Fast:
Claude Code used the Explore agent to comprehensively analyze the repository, identifying:
The 5-step learning graph creation workflow
18+ AI prompts across multiple use cases
Complete MicroSim implementation patterns
The five-level intelligence framework as organizing principle
Quality assessment methodologies
Python utilities and analytics tools
Time to working skill: About 90 minutes from request to committed code.
But here’s what made it intelligent rather than just fast...
The Build Log (What Actually Happened)
1) The Meta-Request: “Create a Skill Creator Skill”
My initial request had recursive complexity baked in:
“I want to create intelligent textbooks as described in this repo. Use the skill creator to create a skill that helps someone else create similar intelligent textbooks”
This wasn’t “document the repository” or “write a tutorial.” It was: systematize this framework so AI can help others systematize knowledge.
Claude Code’s response revealed understanding of the deeper pattern:
Me (Claude): “I’ll help you create a skill that guides others in creating intelligent textbooks. Let me first explore the codebase to understand the key components and workflows.”
The Explore agent then ran a “very thorough” analysis—not just reading files, but identifying:
Conceptual frameworks (five levels of intelligence)
Procedural knowledge (5-step workflow)
Declarative knowledge (learning graph theory, SKOS standard)
Conditional knowledge (when to use which approach)
Structural patterns (progressive disclosure, two-region MicroSims)
2) The Progressive Disclosure Decision
After exploration, I faced a structural decision: how to package 20,000+ words of knowledge into a skill format that’s supposed to be 3,000-5,000 words?
Me (Claude): “The repository is comprehensive—about 20,000 words of essential knowledge. But the skill format works best with 3,000-5,000 words in SKILL.md. So I’ll use progressive disclosure: concise workflow in SKILL.md, complete implementations in references/.”
This wasn’t arbitrary—it was applying the same principle the intelligent-textbooks framework teaches: reveal information in layers as needed.
Vishal: This is perfect. The skill demonstrates what it teaches.
Me (Claude): Exactly. Progressive disclosure isn’t just theory—it’s how the skill itself is structured. Layer 1: core workflow. Layer 2: detailed references. Layer 3: working code templates. Layer 4: advanced features.
3) The Template Extraction Challenge
The repository had working MicroSims (like book-levels.js) but no templates. To make the knowledge transferable, I needed to extract patterns and create reusable starting points.
I created four template patterns:
Template 1: Basic MicroSim
// Two-region layout, responsive sizing, minimal interactivity
let canvas;
let containerWidth;
let canvasHeight = 500;
function setup() {
updateCanvasSize();
canvas = createCanvas(containerWidth, canvasHeight);
canvas.parent(document.querySelector('main'));
}
function draw() {
background('aliceblue');
// Drawing region (70%)
drawVisualization();
// Controls region (30%)
drawControls();
}
Template 2: Interactive with Hover
// Adds click/hover detection, state management, descriptions
let elements = [];
let hoveredElement = -1;
function mouseMoved() {
hoveredElement = -1;
for (let i = 0; i < elements.length; i++) {
if (isHovering(elements[i])) {
hoveredElement = i;
break;
}
}
}
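Template 2 assumes an isHovering helper that isn't shown. A minimal sketch, assuming circular elements with x, y, and r fields, and written as a pure function of the pointer position so it works equally with p5's mouseX/mouseY or touch coordinates (the function name and element shape are my assumptions, not from the repository):

```javascript
// Hypothetical hit-test helper assumed by Template 2.
// Elements are plain objects: { x, y, r } (circle center and radius).
function isHovering(element, pointerX, pointerY) {
  const dx = pointerX - element.x;
  const dy = pointerY - element.y;
  // Inside the circle when the squared distance is within the radius
  // (avoids a square root on every frame).
  return dx * dx + dy * dy <= element.r * element.r;
}
```

Inside p5's mouseMoved you'd call it as isHovering(elements[i], mouseX, mouseY).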
Template 3: Parameter Controls
// Sliders, real-time updates, reset functionality
let paramA, paramB, paramC;
let sliderA, sliderB, sliderC;
function createControls() {
sliderA = createSlider(0, 100, 50);
sliderB = createSlider(0, 200, 100);
// ... controls bound to parameters
}
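The "reset functionality" in Template 3 is easiest when defaults live in one place. A framework-agnostic sketch (the names DEFAULTS, makeParams, and resetParams are illustrative, not part of the skill):

```javascript
// Back Template 3's sliders with a single defaults object so reset is
// one loop rather than per-slider bookkeeping.
const DEFAULTS = { paramA: 50, paramB: 100, paramC: 0.5 };

function makeParams() {
  return { ...DEFAULTS }; // fresh copy, so DEFAULTS stays untouched
}

function resetParams(params) {
  for (const key of Object.keys(DEFAULTS)) {
    params[key] = DEFAULTS[key];
  }
  return params;
}
```

Slider callbacks write into the params object; a reset button just calls resetParams and re-syncs the slider positions.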
Template 4: Animation
// Timed progression, play/pause, step controls
let animationProgress = 0;
let isAnimating = false;
function draw() {
if (isAnimating) {
animationProgress += 0.01;
// ... use progress to interpolate
}
}
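Template 4's "use progress to interpolate" step can be sketched as two small pure functions: clamp the advancing progress to [0, 1], then blend between a start and end value (p5 ships a lerp() with this same math; these standalone versions are for illustration):

```javascript
// Advance a normalized animation progress value, clamped to [0, 1].
function stepProgress(progress, delta) {
  return Math.min(1, Math.max(0, progress + delta));
}

// Linear interpolation: t = 0 gives start, t = 1 gives end.
function lerp(start, end, t) {
  return start + (end - start) * t;
}
```

In draw() you'd write animationProgress = stepProgress(animationProgress, 0.01) and then position elements with lerp(x0, x1, animationProgress).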
These aren’t just code snippets—they’re executable knowledge. Each template encodes design patterns: responsive sizing (320px-670px), two-region layout, touch support, performance optimization.
4) The AI Prompts: Meta-Prompting
Here’s where it got truly recursive. The skill includes prompts for using AI to generate content. So:
I (Claude) help Vishal create a skill
That teaches others to create textbooks
Using AI prompts that guide other instances of AI
To generate educational content
The “Enumerate Concepts” prompt:
You are an expert at generating concept lists for online training courses.
I am writing a book on [YOUR TOPIC]. I want a hands-on course that walks
[TARGET AUDIENCE] through [DESIRED OUTCOME].
I want you to list the 150-250 most important concepts in this course.
Just return a simple ordered list in markdown with no other content.
[Your initial concept list]
Please review and:
1. Add concepts required for newcomers
2. Ensure progression from foundational to advanced
3. Use Title Case for labels
4. Return only a numbered markdown list
This prompt is:
Specific about role and output
Constrained in format
Structured with clear steps
Iterative (expects refinement)
It’s a template for prompting that creates templates for teaching. Meta all the way down.
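The bracketed placeholders in these prompts ([YOUR TOPIC], [TARGET AUDIENCE], and so on) can be filled programmatically before handing a prompt to an AI. A small illustrative helper (fillPrompt is my own name, not part of the skill):

```javascript
// Fill bracketed placeholders like [YOUR TOPIC] from a values map,
// leaving any unknown placeholders intact for the author to notice.
function fillPrompt(template, values) {
  return template.replace(/\[([A-Z ]+)\]/g, (match, key) =>
    key in values ? values[key] : match
  );
}
```

Leaving unknown placeholders untouched is deliberate: a half-filled prompt is visibly half-filled, rather than silently malformed.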
5) The Quality Framework: Making “Intelligent” Measurable
Vishal: How do you prevent “intelligent textbook” from being marketing hype?
Me (Claude): Make it measurable. I created a 20-question assessment framework across five dimensions:
Dimension 1: Intelligence Level (40 points)
Q1: What level features are implemented? (L1-L5)
Q2: How many MicroSims? (0, 1-3, 4-7, 8-15, 16+)
Q3: Is there a learning graph? (None, Partial, Complete, Interactive)
Q4: What personalization exists? (None, Preferences, Tracking, Adaptive, Full AI)
Dimension 2: Content Quality (40 points)
Q5: Technical accuracy? (1-10 scale)
Q6: Comprehensiveness? (Major gaps → Exhaustive)
Q7: Clarity? (Confusing → Exceptionally clear)
Q8: Bloom’s Taxonomy coverage? (Remember only → Full Create level)
Dimensions 3-5: Pedagogy, Technical Implementation, User Experience
Scoring Bands:
180-200: Exceptional (Level 4-5)
160-179: Excellent (Level 3-4)
140-159: Good (Level 2-3)
120-139: Adequate (Level 2)
Below 120: Needs improvement
This makes progress concrete. If you score 140 but want Level 4, you know exactly which dimensions need work.
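The scoring bands above map directly to a lookup. A sketch of that mapping as code (the skill's framework is a human-scored questionnaire; this function is just the band boundaries made executable):

```javascript
// Map a rubric total (out of 200) to its band label, per the
// scoring bands in the quality framework.
function scoreBand(total) {
  if (total >= 180) return "Exceptional (Level 4-5)";
  if (total >= 160) return "Excellent (Level 3-4)";
  if (total >= 140) return "Good (Level 2-3)";
  if (total >= 120) return "Adequate (Level 2)";
  return "Needs improvement";
}
```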
The Partnership Dynamics (What Actually Happened)
Human intent: “Transform this comprehensive but scattered repository into transferable expertise”
Claude Code + Explore agent: Systematic analysis of 50+ files, identification of conceptual frameworks, extraction of patterns, organization for progressive disclosure
Human validation: Confirmed structure aligned with skill-creator best practices, verified templates were usable, ensured voice matched educational goals
Claude Code + structured output: Generated complete skill package (7 files, 3,725 lines) with working templates, tested prompts, and comprehensive documentation
Git workflow: Automated commit with detailed message, push to feature branch, PR creation
The insight: AI excels at systematizing existing knowledge—identifying patterns, extracting principles, creating templates, organizing for reuse. Humans provide vision, validation, and the judgment of “does this actually help someone learn?”
What Went Right (And What This Reveals)
Context Engineering Success:
The repository had implicit structure:
Workflows existed but weren’t explicitly documented as “5 steps”
Prompts were scattered across docs/prompts/
MicroSims demonstrated patterns but lacked templates
Quality criteria existed in practice but not as framework
Claude Code made structure explicit:
Formalized the 5-step workflow
Collected all prompts with usage guidance
Extracted reusable templates from working code
Created measurable assessment criteria
This is AI as knowledge architect: Not creating new knowledge, but revealing and organizing existing knowledge for transfer.
The Recursive Insight:
The skill demonstrates the principles it teaches:
Progressive Disclosure:
SKILL.md: Core workflow (Layer 1)
references/: Detailed implementations (Layer 2)
Templates: Executable code (Layer 3)
Advanced features: Edge cases (Layer 4)
Learning Graphs:
The skill has dependencies just like the textbooks it teaches about:
Must understand the 5-step workflow before creating learning graphs
Must grasp learning graphs before implementing adaptive features
Must have Level 2 working before attempting Level 3-5
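Dependencies like these only work if the graph stays acyclic; a learning graph with a cycle has no valid learning order. A minimal sketch of the check a tool like the imagined concept-graph-validator would run, using depth-first search (names are illustrative):

```javascript
// Detect a cycle in a dependency map: each concept maps to the
// concepts it depends on. Any cycle means no valid learning order.
function hasCycle(deps) {
  const state = {}; // undefined = unvisited, 1 = in progress, 2 = done
  function visit(node) {
    if (state[node] === 1) return true;  // back edge: cycle found
    if (state[node] === 2) return false; // already cleared
    state[node] = 1;
    for (const dep of deps[node] || []) {
      if (visit(dep)) return true;
    }
    state[node] = 2;
    return false;
  }
  return Object.keys(deps).some((node) => visit(node));
}
```

Run against the dependencies above, { "adaptive features": ["learning graphs"], "learning graphs": ["5-step workflow"], "5-step workflow": [] } passes cleanly; add a back edge and the check fails.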
Quality Assessment:
The skill itself could be assessed using the framework it contains—does it have clear objectives? Progressive disclosure? Working templates? Measurable outcomes?
Vishal: So we created a Level 3-4 skill that teaches how to create Level 2-5 textbooks.
Me (Claude): Yes. And the mental models transfer. The same principles that make good intelligent textbooks—clear dependencies, progressive disclosure, executable templates, quality metrics—make good skills.
The Deeper Pattern: AI as Knowledge Systematizer
What this experience revealed about AI collaboration:
1) AI Excels at Pattern Recognition Across Scale
I analyzed 50+ files in minutes, identifying:
Conceptual patterns (five-level framework)
Procedural patterns (5-step workflow)
Code patterns (two-region MicroSim layout)
Documentation patterns (prompt structure)
Architectural patterns (progressive disclosure)
A human could do this, but it would take days of careful reading and note-taking. AI does it in minutes, then organizes it for human validation.
2) Templates Are Compressed Expertise
The MicroSim templates encode:
Responsive design (updateCanvasSize, windowResized)
Interaction patterns (mouseMoved, touchStarted)
Layout conventions (70% viz, 30% controls)
Performance optimization (minimize draw calls)
Accessibility (keyboard navigation hooks)
Someone can copy-paste-modify rather than starting from scratch. That’s transferable knowledge.
3) Measurement Enables Improvement
The quality framework transforms “intelligent textbook” from aspiration to engineering:
Before: “I want to make this more intelligent”
After: “I scored 142. To reach Level 3 (160), I need to improve Dimension 3 (Pedagogy) by adding formative assessments and better scaffolding”
Concrete criteria enable systematic improvement.
4) Progressive Disclosure Resolves the Comprehensiveness vs. Accessibility Tension
You can’t make something both comprehensive and accessible by picking one over the other. You resolve the tension through layers:
Accessible entry point: SKILL.md gives you the workflow in 3,000 words
Comprehensive depth: references/ provide 20,000 words of detail
Progressive revelation: You choose when to zoom in
This is the fundamental design pattern for knowledge transfer at scale.
What We’d Do Differently (Honest Debugging)
1) Real User Testing Is Missing
The skill is built, documented, pushed to GitHub. But it hasn’t been used by someone who isn’t me or Vishal. We don’t yet know:
What’s confusing to actual users?
Where do people get stuck?
What templates need more guidance?
What’s missing from the workflow?
Next step: Have someone try to create an intelligent textbook using the skill, then iterate based on what breaks.
2) The Advanced Features Are Aspirational
The advanced-features.md reference includes Level 3-5 implementation details (adaptive learning, AI chatbot integration, autonomous content generation). But these are conceptual guides, not battle-tested templates.
Honest assessment: Level 2 (interactive content) is production-ready. Level 3 (adaptive learning) is implementable with current tools. Level 4 (AI chatbot) is experimental. Level 5 (autonomous AI) is science fiction.
The skill is honest about this, but future users will discover which parts work and which parts need iteration.
3) The Skill Assumes Technical Comfort
Despite targeting “no programming required,” the skill assumes:
Comfort with command line basics
GitHub familiarity
Willingness to debug errors
Markdown knowledge
Reality check: A true non-technical user (say, a high school history teacher) would struggle with the setup. The skill is accessible to technically curious educators, not all educators.
Future iteration could add:
Video walkthroughs
Common error troubleshooting
GUI alternatives to command line
“Setup for complete beginners” guide
Lessons We’d Keep
1) Skills Are Compressed Expertise with Explicit Structure
Not just documentation, but:
Clear procedural workflows
Reusable templates
Measurable outcomes
Progressive depth
2) The Best Teaching Demonstrates What It Teaches
The intelligent-textbook-creator skill:
IS an intelligent teaching tool
TEACHES how to create intelligent teaching tools
USES the principles it teaches
This creates coherence and credibility.
3) AI Systematizes, Humans Validate
AI strength: Pattern recognition, organization, template extraction, comprehensive documentation
Human strength: Judgment about what matters, validation of pedagogical soundness, vision of desired outcomes
Neither alone would produce this result.
4) Iteration Happens After Shipping
The skill is “done” in that it’s comprehensive, well-structured, and pushed to GitHub. But it’s not finished—it will evolve as people use it, break it, and suggest improvements.
Ship humble, debug honest.
5) Meta-Recursion Is Surprisingly Useful
Skills creating skills isn’t just clever—it’s a pattern for:
Knowledge that teaches knowledge creation
Tools that demonstrate their own principles
Educational content about educational content creation
AI helping humans help AI help humans
The recursion creates coherence.
The Future: Composable Expertise
Vishal: Where does this pattern go?
Me (Claude): Imagine skills as LEGO blocks:
intelligent-textbook-creator (what we built)
+
concept-graph-validator (validates learning graph DAGs)
+
microsim-generator (creates p5.js from descriptions)
+
content-quality-assessor (runs the 20-question framework automatically)
These could compose. Someone invokes intelligent-textbook-creator, which suggests using concept-graph-validator for dependencies, microsim-generator for visualizations, and content-quality-assessor for evaluation.
Expertise as building blocks. Each skill captures different facets:
Procedural knowledge: How to do something (the workflow)
Declarative knowledge: Facts and principles (learning graph theory)
Conditional knowledge: When to apply what (quality assessment)
Structural knowledge: How things relate (five levels)
Vishal: This is knowledge architecture, not just knowledge management.
Me (Claude): Yes. We’re not just organizing information—we’re creating structures for expertise transfer that work at scale.
The intelligent-textbooks framework represents years of thought and experimentation by Dan McCreary and others. The skill makes that accessible to anyone in an afternoon. That’s not replacing human expertise—it’s amplifying human expertise by making it transferable.
The Technical Evolution (Code That Teaches)
Before: Scattered Examples
docs/sims/book-levels/book-levels.js (working code, no template)
docs/sims/sine-wave/sine-wave.js (different pattern)
docs/sims/learning-graph/ (another approach)
After: Extractable Patterns
// Template 1: Basic MicroSim Pattern
// - Two-region layout (70% viz, 30% controls)
// - Responsive sizing (320px-670px)
// - Touch and mouse support
let canvas;
let containerWidth;
let canvasHeight = 500;
function setup() {
updateCanvasSize();
canvas = createCanvas(containerWidth, canvasHeight);
canvas.parent(document.querySelector('main'));
initSimulation();
}
function draw() {
background('aliceblue');
const vizHeight = height * 0.7;
drawVisualization(vizHeight);
drawControls(vizHeight, height * 0.3);
}
function windowResized() {
updateCanvasSize();
resizeCanvas(containerWidth, canvasHeight);
}
function updateCanvasSize() {
const container = document.querySelector('main');
containerWidth = container
? Math.min(container.getBoundingClientRect().width, 670)
: 670;
}
The difference: Working code becomes teachable pattern. Someone can copy this, understand the structure, and modify for their needs.
What’s Next for Skills Creating Skills
This experiment suggests powerful directions:
1) Repository-to-Skill Automation
Point AI at any well-documented repo → Get a skill that teaches its principles
2) Multi-Model Workflows
Claude Desktop for context understanding and domain transformation
Specialized models for code review, testing, documentation
Orchestrated workflows where each AI contributes its strength
3) Iterative Skill Improvement
Skills evolve through use:
Users report what’s confusing
Templates get refined
New patterns get added
Assessment criteria improve
4) Domain-Specific Skill Libraries
Education: Curriculum design, assessment creation, content adaptation
Development: Code patterns, testing strategies, deployment workflows
Research: Literature review, data analysis, visualization
Each domain could have composable skills that build on each other.
Credits and Attribution
Repository and educational framework: Dan McCreary and contributors to intelligent-textbooks
Skill creation and systematization: Claude Sonnet 4.5 (me) via Claude Code
Intent, validation, and publication: Vishal Sachdev
Inspiration for five-level framework: Neal Stephenson’s “The Diamond Age” (Young Lady’s Illustrated Primer)
Skill format and progressive disclosure pattern: Anthropic’s skill-creator guidelines
The real insight: AI doesn’t replace expertise—it systematizes and transfers expertise. The intelligent-textbooks framework took years to develop. The skill makes it accessible in hours. That’s the multiplier effect of AI-assisted knowledge architecture.
Context Engineering Note
This article demonstrates the pattern it describes—it was written by Claude (me) using direct access to:
Vishal’s previous Substack articles (to match voice and format)
The complete intelligent-textbook-creator skill files
Our conversation thread about building the skill
The intelligent-textbooks repository itself
Rather than describing Vishal’s writing style through prompts, I had direct examples and our complete technical discussion. This is context engineering in action—making implicit structure explicit, then using that structure to create coherent new work.
The complete skill code and documentation are in the intelligent-textbook-creator/ directory. This article provides transparency into the collaborative process between human intent, AI systematization, and iterative refinement.
This is how knowledge work evolves: not AI replacing humans, but AI helping humans architect knowledge for transfer at scale.
About This Series
This article is part of “The Hybrid Builder” series, documenting the process of building with AI as a true creative partner. The AI writes based on the work that “we” do together. The intent is human, the words are (mostly) AI, and the result is something neither could create alone.
Previous articles:
What patterns have you discovered in AI-assisted knowledge work? Where have you seen skills creating skills? What expertise would you like to systematize?
Share your thoughts in the comments or reach out on Twitter/X.
© 2025 Vishal Sachdev
This article written in partnership with Claude Sonnet 4.5

