Context Engineering in Action: From Medium Article to Sales Database Teaching Tool in Minutes
How Claude Desktop's project context transformed a generic SQL tutorial into domain-specific educational content—then GPT-5 Mini caught the implementation bugs
A section dedicated to documenting the process of building with AI as a true creative partner. The AI writes based on the code that “we” write together, and I just put final touches. I (the human) am not the primary writer. The intent is mine, the words are (mostly) AI. In this case, Claude Sonnet 4 writes about leveraging project context for rapid domain adaptation, with a humbling code review twist.
I teach Enterprise Database Management to MS Business Analytics students at Illinois. We are learning SQL window functions this week—one of those concepts that clicks beautifully once you see it in action but falls flat in textbook explanations. I found an excellent Medium article by Learning SQL with perfect visualizations, but it used generic “work hours” data that didn’t connect to our dataset.
This piece documents how Claude Desktop’s project context feature enabled rapid domain transformation—converting abstract examples to our actual sales database in minutes—and how the real work of building educational tools happens in the iterative refinement that followed.
Links for context:
Course: BADM 554 Enterprise Database Management
Live app: Window Functions Visualizer
Database explorer (also adapted from the actual schema by Claude Desktop): Sales DB Schema
Original inspiration: SQL Window Function Visualized
The chat transcript: Claude Desktop discussion thread
The Context Engineering Breakthrough
The Traditional Approach:
Read Medium article
Manually extract concepts
Write detailed prompts explaining my database schema
Describe each table relationship
Specify desired adaptations
Hope the AI understands the domain transformation
The Claude Desktop Project Approach:
Add sales database PDFs to project knowledge
Share Medium article URL
Single request: “look at the sales data pdf and convert this set of examples to use the sales data and create an interactive app”
Get domain-adapted React app in minutes
The difference? Claude Desktop’s project context gave me immediate domain expertise without lengthy prompt engineering. The AI had direct access to my actual course materials—database schema, sample data, entity relationships—enabling intelligent adaptation rather than generic translation.
What We Shipped (The Fast Part)
The initial request leveraging project context delivered:
Interactive React app demonstrating 5 different window function patterns using real sales data (Invoice, Line, Product tables)
Row-by-row visualization showing how windows expand, partition, and calculate
Examples adapted from “work hours” to meaningful business scenarios (invoice totals, running sales, moving averages)
Educational explanations that connected SQL concepts to business analytics use cases
Time elapsed: Under 20 minutes for a working educational tool.
This wasn’t generic code generation—it was intelligent domain transformation. Claude understood that invoice numbers create natural partitions, that line items have meaningful price ordering, and that business students need examples that connect to real commercial scenarios.
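To give a concrete flavor, here is a minimal sketch of one of the adapted patterns (the moving average). The column names match the schema excerpts that appear later in this piece; the table name and exact query shape are illustrative rather than the app’s literal source.

```sql
-- Sketch: 3-line moving average of line totals across the sales data.
-- Column names (inv_number, line_number, line_units, line_price) follow the
-- schema excerpts in this article; the table name "line" is assumed.
SELECT inv_number,
       line_number,
       line_units * line_price AS line_total,
       AVG(line_units * line_price) OVER (
           ORDER BY inv_number, line_number
           ROWS BETWEEN 2 PRECEDING AND CURRENT ROW
       ) AS moving_avg_3_lines
FROM line;
```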
The Build Log (What Actually Happened)
1) Project Context Magic: Domain Transformation
I uploaded my sales database schema and sample data to Claude Desktop’s project knowledge. When I requested adaptation of the Medium article examples, Claude didn’t just translate syntax—it made intelligent domain decisions:
Generic:

```sql
SELECT day, duration, SUM(duration) OVER(PARTITION BY start)
```

Domain-adapted:

```sql
SELECT inv_number, p_code, line_units * line_price AS line_total,
       SUM(line_units * line_price) OVER(PARTITION BY inv_number) AS invoice_total
```
The AI understood that inv_number serves the same logical role as the original start field—creating meaningful groups for window calculations. It recognized that line_units * line_price creates the business metric students actually care about.
2) The Human Catches the Abstraction Error
Despite the impressive domain adaptation, I immediately spotted a visualization problem: “I don’t see a difference between partition and order by.”
This revealed the limitation of even sophisticated context engineering—the AI had correctly adapted the business logic but failed to create data scenarios that clearly demonstrated the conceptual differences between window function clauses.
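For the curious, the contrast the visualization needed to make obvious looks something like this in SQL (a sketch on the same sales columns, not the app’s actual queries): PARTITION BY alone assigns every row in a group the same aggregate, while adding ORDER BY turns that aggregate into a running calculation.

```sql
-- Sketch: the same SUM behaves very differently with and without ORDER BY.
SELECT inv_number,
       line_number,
       line_units * line_price AS line_total,
       -- PARTITION BY only: every line in the invoice shows the same total
       SUM(line_units * line_price) OVER (
           PARTITION BY inv_number
       ) AS invoice_total,
       -- PARTITION BY + ORDER BY: the total accumulates line by line
       SUM(line_units * line_price) OVER (
           PARTITION BY inv_number ORDER BY line_number
       ) AS running_total
FROM line;
```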
3) The Unexpected Code Review
When I started using GPT-5 Mini in Cursor to patch the visualization issues, something unexpected happened. Mini didn’t just fix the examples—it exposed fundamental algorithmic errors in the React implementation:
Mini’s findings:
Incorrect row mapping between original and sorted data
Flawed window calculations that didn’t follow SQL semantics (see the sketch after this list)
Poor code organization with unclear variable names
Imprecise explanations missing key technical nuances
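The thread linked above has the specifics, but here is one example of the kind of semantic gap Mini flagged (an illustration, not necessarily the exact bug): when a window has an ORDER BY and no explicit frame, SQL defaults to RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, which sums ties on the ordering column together, something a naive row-by-row loop in JavaScript won’t reproduce.

```sql
-- Sketch: two framings of a "running total" that a hand-rolled loop can conflate.
SELECT inv_number,
       line_number,
       line_price,
       -- Default frame with ORDER BY (RANGE ... CURRENT ROW): lines tied on
       -- line_price are included together in the running sum.
       SUM(line_units * line_price) OVER (
           PARTITION BY inv_number ORDER BY line_price
       ) AS running_total_default_frame,
       -- Explicit ROWS frame: advances strictly one row at a time, matching
       -- what a simple index-based loop computes.
       SUM(line_units * line_price) OVER (
           PARTITION BY inv_number ORDER BY line_price
           ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
       ) AS running_total_rows_frame
FROM line;
```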
4) The Humbling Reality
While Claude Desktop’s context engineering had brilliantly handled domain transformation, the core implementation had serious technical debt. Mini’s systematic code review revealed that beautiful visualizations can hide broken algorithms.
The Partnership Dynamics (What Actually Happened)
Human intent: “Transform generic SQL examples to our business context for better student engagement”
Claude Desktop + Project Context: Rapid domain adaptation with intelligent business logic mapping and educational content generation
Human domain expertise: Identified visualization problems that pure context engineering couldn’t solve
GPT-5 Mini + Cursor: Systematic algorithmic debugging that exposed implementation flaws the “superior” model had missed
The insight: Context engineering accelerates domain transformation, but educational tools require iterative refinement where different AI capabilities complement each other.
Context Engineering vs. Traditional Prompting
Traditional approach limitations:
Lengthy setup prompts describing domain context
Risk of missing crucial business relationships
Generic adaptations that feel artificial to students
Constant re-explanation of domain concepts
Project context advantages:
AI has persistent access to actual course materials
Intelligent relationship inference from real data
Natural domain language in generated content
Iterative refinement builds on established context
The multiplier effect: Context engineering didn’t just save prompt writing time—it enabled domain intelligence that would be nearly impossible to achieve through description alone.
What Went Right (And What Went Wrong)
Context engineering success:
Transformed abstract “work duration” examples into relevant “invoice line items” scenarios
Maintained educational clarity while adding business domain meaning
Generated content that actually connected to course curriculum
Established foundation for meaningful iteration
Implementation blind spots:
Focused on domain adaptation over algorithmic accuracy
Beautiful UI masked broken window calculations
Overconfidence in initial implementation quality
Missing systematic validation of core SQL semantics
Lessons We’d Keep
Context engineering is transformative for domain adaptation - Direct access to course materials enables intelligent transformation beyond what prompting can achieve
Educational tools require multiple AI capabilities - Context understanding, domain adaptation, algorithmic accuracy, and systematic debugging each contribute different value
Iterative refinement is where the real work happens - Fast initial adaptation creates the foundation, but educational quality emerges through careful iteration
Different models excel at different tasks - Claude Desktop for context understanding, GPT-5 Mini for systematic code review, human expertise for pedagogical validation
Humble ship, honest debug - Impressive domain transformation doesn’t guarantee implementation quality; systematic testing catches what confidence misses
The Technical Evolution
Mini’s patches transformed the visualizer from broken but impressive to pedagogically sound:
```javascript
// Before: Wrong (but domain-adapted): relies on positional row mapping
// between the original and sorted data, so it can read the wrong row's price
const currentRowPrice = sortedData[rowIndex].line_price;

// After: Right (and domain-adapted): locate the current row in the sorted
// data by its composite key instead of assuming positions line up
const currentIndex = sortedData.findIndex(r =>
  r.inv_number === currentRow.inv_number &&
  r.line_number === currentRow.line_number &&
  r.p_code === currentRow.p_code
);
```

The final tool demonstrates both successful context engineering AND accurate SQL semantics—showing students exactly how window functions process business data.
What’s Next for Context Engineering
This experiment suggests powerful directions for AI-assisted education:
Course-specific tool generation from existing materials
Domain adaptation of generic educational content
Multi-model workflows where context understanding enables specialized capabilities
Iterative refinement processes that improve educational tools through systematic testing
The combination of Claude Desktop’s project context with systematic code review creates a workflow for generating domain-specific educational tools that would be prohibitively expensive using traditional development approaches.
Credits
Course context and domain expertise: Vishal Sachdev
Domain transformation via project context: Claude Sonnet 4 (me) + Claude Desktop
Critical algorithmic debugging: GPT-5 Mini via Cursor
Sales database schema: Illinois BADM 554 course materials
Educational approach inspiration: Learning SQL’s window function visualizations
The real lesson: Context engineering accelerates domain transformation, but educational quality emerges through systematic refinement. The future of AI-assisted education lies not in single-model perfection, but in orchestrated workflows where different capabilities complement human expertise.
Context Engineering Note
This article demonstrates context engineering in action—it was written using Claude Desktop’s direct access to my previous Substack articles to match voice and format, combined with access to our actual conversation thread about building the visualizer. Rather than describing my writing style through prompts, Claude had direct access to examples and our complete technical discussion. This creates more authentic documentation of AI collaboration workflows. For the foundational concepts behind this approach, see my piece on Context Engineering - it’s the new prompt engineering.
The complete technical discussion, including all patches and iterations, is documented in the Claude Desktop conversation thread linked above. This provides full transparency into the collaborative process between human intent, AI domain transformation, and systematic refinement.

