Context Engineering is the new Prompt Engineering
Over the past few months, a new term has started to gain traction in the world of AI: context engineering. Champions like Shopify CEO Tobi Lütke and AI researcher Andrej Karpathy have taken to X (formerly Twitter) to argue that, while prompt engineering helped us get started with large language models, it’s context engineering that truly unlocks their potential. The difference? Rather than just crafting clever instructions, context engineering is about providing AI with the full background, resources, and situational detail it needs to be genuinely useful.
Tobi Lütke summed it up in a recent X post:
“I really like the term ‘context engineering’ over prompt engineering.
It describes the core skill better: the art of providing all the context for the task to be plausibly solvable by the LLM.”
Andrej Karpathy echoed this sentiment:
“+1 for ‘context engineering’ over ‘prompt engineering’.”
“In every industrial-strength LLM app, context engineering is the delicate art and science of filling the context window with just the right information for the next step.”
In fact, an even earlier articulation of context engineering comes from a February 2025 Substack post by Casper. The author points out that while AI has excelled at reasoning within well-defined, low-context problems, it struggles with the messy, ambiguous, and highly contextual tasks that dominate real-world work. To bridge this gap, Casper defines “context engineering” as the discipline of writing clear task specifications and supplying all the relevant information an AI needs to perform well—much like onboarding a new colleague.
He outlines three key mental models for context engineering:
Empathy for AI—Imagine yourself in the AI’s position, limited to only what’s in the prompt, to spot missing or unclear context, the way an experienced prompt writer does instinctively.
Capture Hidden Context—Recognize and explicitly provide the “hidden” or subconscious context that humans naturally use, such as recent actions, project goals, and team dynamics.
Observe Human Problem-Solving—Break down tasks into clear, logical steps, making implicit knowledge explicit as you would when teaching or onboarding someone new.
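To make these mental models a little more concrete, here is a minimal sketch in Python of the difference between a bare instruction and a context-engineered prompt that captures the “hidden” context a new colleague would need. All field names and details are invented for illustration, not taken from Casper’s post:

```python
from dataclasses import dataclass, field


@dataclass
class TaskContext:
    """Everything a new colleague (or an LLM) would need to do the task well."""
    task: str                                                  # clear task specification
    project_goal: str                                          # why the task matters
    recent_actions: list[str] = field(default_factory=list)    # what just happened
    constraints: list[str] = field(default_factory=list)       # style, scope, limits
    reference_material: list[str] = field(default_factory=list)  # docs, snippets, links


def build_prompt(ctx: TaskContext) -> str:
    """Render the task plus its surrounding context into a single prompt."""
    sections = [
        f"## Task\n{ctx.task}",
        f"## Project goal\n{ctx.project_goal}",
        "## Recent actions\n" + "\n".join(f"- {a}" for a in ctx.recent_actions),
        "## Constraints\n" + "\n".join(f"- {c}" for c in ctx.constraints),
        "## Reference material\n" + "\n".join(f"- {r}" for r in ctx.reference_material),
    ]
    return "\n\n".join(sections)


# A bare instruction leaves the model guessing about everything a teammate would already know...
bare_prompt = "Write the release notes."

# ...while the context-engineered version makes that hidden context explicit.
ctx = TaskContext(
    task="Write the release notes for version 1.4 of the billing service.",
    project_goal="Customers should understand the new invoicing flow without reading the docs.",
    recent_actions=["Merged PR adding per-seat billing", "Fixed rounding bug in tax calculation"],
    constraints=["Keep it under 200 words", "Plain language, no internal ticket numbers"],
    reference_material=["CHANGELOG.md entries since v1.3"],
)

print(build_prompt(ctx))
```

The specifics here don’t matter; the point is the structural move of writing down the goal, recent history, and constraints instead of leaving them in your head.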
Casper argues that even as AI becomes better at seeking out its own context, humans will still need to guide and define what matters. Writing clear, concise instructions and providing the right context will remain essential for making AI a true collaborator in complex, real-world workflows. For a more technical treatment, take a look at the explanation by Harrison Chase of Langchain fame.
This shift is no longer just theoretical. I started using the new Dia browser, for example, and it is changing how I interact with AI—not by asking me to fit my workflow around the limitations of the technology, but by bringing AI directly into the flow of my work. Dia gathers and organizes context from my digital environment—my open tabs, browsing history (seven days, opt-in), and even uploaded files—making it effortless to tap into AI’s power exactly when and where I need it.
Another example of context engineering in my own workflow is the “hidden context” generated while I build vibe-coded apps: I feed the Claude Code chat history and the GitHub commit history back into the model to generate write-ups and how-tos for each project, which I now document in another section of the newsletter, The Hybrid Builder.
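As a rough sketch of that idea (the repository path, word limits, and prompt wording below are placeholders, not my actual setup), the commit history can be pulled with git and packaged into a context-rich prompt for whichever model drafts the write-up:

```python
import subprocess
from pathlib import Path

REPO = Path(".")  # placeholder: path to the vibe-coded project's repository


def recent_commits(repo: Path, n: int = 30) -> str:
    """Return the last n commit hashes, dates, subjects, and bodies as plain text."""
    result = subprocess.run(
        ["git", "-C", str(repo), "log", f"-{n}",
         "--pretty=format:%h %ad %s%n%b", "--date=short"],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout


def writeup_prompt(repo: Path) -> str:
    """Assemble a prompt for a project write-up, with the commit history as context."""
    history = recent_commits(repo)
    return (
        "You are helping me write a short how-to post about this project.\n\n"
        "## Commit history (most recent first)\n"
        f"{history}\n\n"
        "## Instructions\n"
        "- Summarize what was built and in what order.\n"
        "- Call out decisions a reader trying to reproduce this would need to know.\n"
        "- Keep it under 600 words."
    )


if __name__ == "__main__":
    # Print the assembled prompt; in practice this text goes to the model
    # alongside the Claude Code chat history.
    print(writeup_prompt(REPO))
```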
I assume you are now wondering whether you should stop learning (or teaching) prompt engineering and move to context engineering. Join the club! Or perhaps we should just wait for the next hyped-up term to come along :)

