Taste Is the New Code
Four writers, one conclusion: AI made execution cheap, so human judgment is the last bottleneck standing
Four articles landed in my reading list this week that, on the surface, seem to be about different things. Birgitta Böckeler writes about harness engineering. Wes McKinney revisits The Mythical Man-Month through the lens of AI agents. Mitchell Hashimoto shares his personal AI adoption journey. Garry Tan argues that AI killed your excuses, not your creativity.
But read them together and a single thesis emerges: AI has made execution cheap, so human judgment is now the bottleneck. They just approach this truth from four different altitudes.
The Why: Your Taste Is Your Last Competitive Edge
Garry Tan’s piece is the most accessible entry point. His argument is deceptively simple: the gap between your taste and your ability is where creative dreams die, and AI just removed every excuse for not closing that gap. You can no longer hide behind “I can’t code” or “I’m not technical.” The tools are here. What remains is whether you’re willing to ship imperfect work, do the reps, and refine relentlessly.
This resonates deeply with something I’ve been exploring in my own work on the Grit-CART framework: in a world where AI commoditizes intelligence, the scarce resources are human ones — sustained curiosity, the will to act, the courage to experiment, and the wisdom to judge well. Tan is essentially making the case for taste as a form of agency. Your taste is your evals. Your desire to win is your agency.
The Diagnosis: Technical Debt at Machine Speed
Wes McKinney takes this observation and runs it through the lens of Fred Brooks’s classic The Mythical Man-Month. His central insight is the distinction between essential complexity (the fundamental difficulty of what you’re building) and accidental complexity (the friction imposed by tools and processes). AI agents are extraordinary at eliminating accidental complexity — refactoring, writing tests, cleaning up messes. But here’s the paradox: they’re so good at attacking accidental complexity that they generate new accidental complexity, burying the essential structure you’re trying to build.
McKinney describes a “brownfield barrier” — codebases balloon past 100K lines and agents start choking on the bloat they themselves created. Technical debt accrued at machine speed is still technical debt, and it still requires human judgment to manage. The coding is easier now, he says, but figuring out what to build was always the hard part, and a flawless coding agent doesn’t change that.
The How: A Practitioner’s Playbook for Staying in Control
Mitchell Hashimoto’s piece is the practitioner’s answer to both Tan and McKinney. Where Tan says “just ship it” and McKinney warns “but watch the debt,” Hashimoto offers a concrete, six-step adoption journey that threads the needle. He started as a skeptic, forced himself to reproduce his own manual work with agents, and slowly built competence through deliberate practice — discovering what agents are good at, what they’re bad at, and crucially, when not to reach for one at all.
His journey culminates in what he calls “harness engineering” — the idea that anytime an agent makes a mistake, you invest in engineering a solution so it never makes that mistake again. This comes in two forms: better implicit prompting through AGENTS.md files, and actual programmed tools like custom scripts for screenshots, filtered tests, and verification. The key insight is that agents are dramatically more efficient when they produce the right result the first time, and you get there by constraining the solution space, not by hoping models get smarter.
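To make the "programmed tools" half of harness engineering concrete, here is a minimal sketch of one such verification gate: a deterministic post-edit check the agent must pass before proceeding. All names and the size limit are hypothetical illustrations, not Hashimoto's actual scripts.

```python
# Hypothetical harness verification step: after an agent edits files, run a
# deterministic check and report pass/fail so the agent can self-correct
# before a human ever reviews. Names and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class CheckResult:
    name: str
    passed: bool
    detail: str

def check_file_size(path_lines: dict[str, int], limit: int = 400) -> list[CheckResult]:
    """Flag files that ballooned past a line limit -- a cheap guard against
    the agent-generated bloat McKinney warns about."""
    results = []
    for path, n in sorted(path_lines.items()):
        results.append(CheckResult(
            name=f"size:{path}",
            passed=n <= limit,
            detail=f"{n} lines (limit {limit})",
        ))
    return results

def gate(results: list[CheckResult]) -> bool:
    """The harness 'gate': the agent only proceeds if every check passes."""
    for r in results:
        status = "PASS" if r.passed else "FAIL"
        print(f"[{status}] {r.name}: {r.detail}")
    return all(r.passed for r in results)

# Simulated post-edit state of a repo (file -> line count).
repo = {"core/parser.py": 350, "core/generated_utils.py": 1200}
ok = gate(check_file_size(repo))
```

The point is not the specific check but the pattern: every mistake becomes a mechanical rule, so the same mistake never needs a human catch twice.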
The What’s Next: Harnesses as Infrastructure
Birgitta Böckeler picks up exactly where Hashimoto leaves off and zooms out to the organizational level. Examining OpenAI’s own write-up on maintaining a million-line codebase with agents, she identifies three pillars of their harness: context engineering (knowledge bases and dynamic context like observability data), architectural constraints (enforced by both AI agents and deterministic custom linters), and “garbage collection” agents that periodically sweep for inconsistencies and entropy.
Her most provocative question: could standardized harnesses become the new service templates? Most organizations have just two or three core tech stacks. If harnesses work, teams might pick from a set of pre-built ones for common application topologies — complete with custom linters, structural tests, and knowledge documentation — and shape them over time. This would push the industry toward fewer, more constrained stacks optimized not for developer ergonomics but for AI maintainability. The codebase design pattern, not natural language, becomes the new abstraction layer.
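A deterministic architectural-constraint linter of the kind Böckeler describes can be sketched in a few lines. This is an illustrative example, not OpenAI's actual tooling; the layer names and rule are hypothetical.

```python
# Illustrative sketch of a deterministic "architectural constraint" linter:
# a rule the harness enforces mechanically, e.g. "domain code must not
# import infrastructure modules directly." Layer names are hypothetical.
import ast

# Map each layer to the layers it is forbidden to import from.
FORBIDDEN_PREFIXES = {"domain": ("infra",)}

def violations(layer: str, source: str) -> list[str]:
    """Return the import statements in `source` that break the layer rule."""
    banned = FORBIDDEN_PREFIXES.get(layer, ())
    found = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module]
        else:
            continue
        for name in names:
            if any(name == b or name.startswith(b + ".") for b in banned):
                found.append(f"line {node.lineno}: import of '{name}'")
    return found

bad = "import infra.db\nfrom domain.models import User\n"
print(violations("domain", bad))  # flags only the infra import
```

Because the check is pure AST inspection with no model in the loop, it behaves the same on every run, which is exactly what makes it harness infrastructure rather than another prompt.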
The Tension Nobody Has Resolved
There’s a productive tension running through all four pieces. McKinney worries about agentic scope creep and codebases that bloat beyond a brownfield barrier. Böckeler wonders whether harnesses can even be retrofitted onto legacy code, or whether we’re heading for a permanent split between pre-AI and post-AI applications. Hashimoto’s pragmatic answer is to invest in the harness now, one mistake at a time. And Tan’s piece serves as the implicit rebuttal to doomerism — the landscape is shifting, but the people who thrive will be those who push through discomfort and refine relentlessly.
What This Means for Hybrid Builders
If you step back, these four authors are describing the same emerging reality from different vantage points. Tan establishes the why — AI exposed who has taste and who doesn’t. McKinney provides the diagnosis — agents generate code at machine speed but accrue debt at machine speed too. Hashimoto offers the how — a step-by-step workflow for staying in control. And Böckeler extrapolates the what’s next — harnesses as shared infrastructure that reshapes how entire organizations build software.
This maps directly onto the CART traits in the Grit-CART framework. Curiosity drives you to explore these new tools honestly rather than dismissing them. Agency means taking action — like Hashimoto forcing himself through the discomfort of learning. Resourceful Experimentation is exactly what harness engineering amounts to: structured, bounded experiments that constrain the downside. And Thoughtful Judgment — evaluating results honestly, knowing when to persist and when to stop — is exactly what McKinney means when he says taste and design are our last foothold.
Taste is the new code. Discipline is the new debugging. And harnesses are the new frameworks.
Read the Sources
Harness Engineering — Birgitta Böckeler (Martin Fowler)
The Mythical Agent-Month — Wes McKinney
My AI Adoption Journey — Mitchell Hashimoto
AI Didn’t Kill Creativity. It Killed Your Excuses. — Garry Tan
Grit-CART Framework — The dispositional toolkit for AI-era builders