Thursday, February 26, 2026
Adaptive Perspectives, 7-day Insights
AI

From Vibe Coding to Agentic Engineering

A year after coining 'vibe coding,' Andrej Karpathy says the real discipline is 'agentic engineering': orchestrating AI agents under structured human oversight. Here's why the distinction matters.

Image generated by ChatGPT

A year ago, Andrej Karpathy, co-founder of OpenAI and former head of AI at Tesla, posted a throwaway thought on X that became one of the most viral tech tweets of 2025:

There’s a new kind of coding I call “vibe coding,” where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.

The term took off. Collins English Dictionary named it Word of the Year for 2025. It described something millions of people were discovering: you could talk to an AI, accept whatever code it produced, paste error messages back in when things broke, and eventually arrive at something that worked. No need to read the diffs. No need to understand the code. Just vibes.

Earlier this month, almost exactly one year later, Karpathy followed up with the next evolution. On February 4, 2026, he wrote:

Many people have tried to come up with a better name for this to differentiate it from vibe coding, personally, my current favorite is “agentic engineering.”

“Agentic” because the new default is that you are not writing the code directly 99% of the time, you are orchestrating agents who do and acting as oversight.

“Engineering” to emphasize that there is an art & science and expertise to it. It’s something you can learn and become better at, with its own depth of a different kind.

He added, with some self-awareness: “Vibe coding is now mentioned on my Wikipedia as a major memetic ‘contribution,’ and even its article is longer. lol.”

What Changed in a Year

The tools matured. When Karpathy coined vibe coding in February 2025, AI code generation was mostly a single-turn affair: you prompted, the model responded, you iterated. The human was always in the driver’s seat, even if they weren’t reading the output carefully.

By early 2026, tools like Claude Code, Cursor, and others had evolved into something different. AI agents can now plan multi-step tasks, create and run tests, navigate browsers, read documentation, and self-correct across extended sessions, often with minimal human involvement. The human’s role has shifted from writing prompts and accepting output to defining goals, setting constraints, and reviewing results.

That shift is what Karpathy is naming. Vibe coding was the prototype. Agentic engineering is the production version.

The Distinction

The difference isn’t subtle. Here’s how it breaks down:

| | Vibe Coding | Agentic Engineering |
|---|---|---|
| Human role | Prompt writer | Architect and reviewer |
| Code review | Skipped | Required |
| Testing | Minimal | Critical |
| Planning | Intuition-driven | Spec-driven |
| AI relationship | Assistant that generates | Agent that executes autonomously |
| Best suited for | Prototypes, experiments | Professional and enterprise systems |

Addy Osmani, an engineering lead on Chrome at Google, has written extensively about this transition. His framing is practical: treat AI agents like unreliable junior developers. Give them clear specs. Require rigorous review. Demand comprehensive tests. Maintain full ownership of the codebase.

His key insight resonates: AI-assisted development actually rewards good engineering practices more than traditional coding does. Better specs yield better AI output. Comprehensive test suites give agents a reliable iteration loop. Architecture decisions matter more, not less, when the code is being generated rather than typed.
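That checklist can be made concrete as a pre-merge gate for agent-generated changes. A minimal sketch in that spirit; the `Patch` record, the path allowlist, and the 400-line budget are illustrative assumptions of mine, not anything from Osmani's actual tooling.

```python
from dataclasses import dataclass

@dataclass
class Patch:
    files_changed: list[str]   # paths touched by the agent's change
    lines_added: int           # total added lines across the diff
    has_new_tests: bool        # did the change ship with tests?

def review_gate(
    patch: Patch,
    allowed_paths: tuple[str, ...] = ("src/", "tests/"),  # assumed repo layout
    max_lines: int = 400,                                  # illustrative review budget
) -> list[str]:
    """Return blocking findings; an empty list means the patch may proceed to human review."""
    findings = []
    for path in patch.files_changed:
        if not path.startswith(allowed_paths):
            findings.append(f"touches a file outside the allowed scope: {path}")
    if patch.lines_added > max_lines:
        findings.append(f"patch too large to review meaningfully ({patch.lines_added} lines)")
    if not patch.has_new_tests and any(p.startswith("src/") for p in patch.files_changed):
        findings.append("source changes arrived without accompanying tests")
    return findings
```

Passing the gate is deliberately not the same as approval: it only decides whether the change is small and scoped enough for a human to review with full ownership.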

The 80% Problem

Osmani also identifies what he calls the “80% problem.” AI agents can get you 80% of the way to a solution quickly. The remaining 20% (edge cases, security implications, architectural coherence, production reliability) requires the kind of judgment that comes from experience.

This is where the distinction between vibe coding and agentic engineering becomes consequential. Vibe coding is fine for the 80%. It gets you a prototype, a proof of concept, a weekend project that works well enough. But if you ship the 80% to production without the 20%, you’re accumulating technical debt at machine speed.

Agentic engineering is about closing that gap. Not by writing the code yourself, but by bringing the discipline (the design thinking, the testing strategy, the architectural oversight) that turns 80% into something you’d stake your reputation on.

Why IT Leaders Should Pay Attention

I’ve been using Claude Code daily since last summer. I’ve watched my own workflow evolve from something that looked a lot like vibe coding to something closer to what Karpathy is now calling agentic engineering, not because I read a framework, but because the tools demanded it.

When you’re building a production website with real users, real security requirements, and real uptime expectations, you can’t just accept the vibes. You need architecture. You need testing. You need to understand what the agent built, even if you didn’t write it line by line.

For IT and business leaders, the implications go beyond individual developer productivity:

The skill profile is changing. Agentic engineering disproportionately benefits people with deep fundamentals: those who understand systems design, security, and architecture. They can review AI output effectively because they know what good looks like. Junior developers face a risk Osmani calls “skill atrophy”: they can prompt effectively but may struggle to debug or reason about the generated code. Hiring, training, and team development need to account for this.

Governance matters more. When AI agents are writing the code, the question of accountability gets complicated. Who reviews it? Who signs off? How do you audit it? Only 21% of organizations currently have a mature governance model for autonomous agents, according to Deloitte. That gap is a risk.

The ROI conversation is maturing. The era of AI investment justified by potential alone is ending. Every agentic AI initiative needs clear KPIs and a defensible return model. The question isn’t “should we use AI?” anymore. It’s “how do we use it responsibly, measurably, and at scale?”

Organizational structure will shift. Research from MIT Sloan Management Review suggests that organizations with extensive agentic AI adoption are seeing flatter hierarchies and wider spans of control, with managerial roles increasingly focused on orchestrating hybrid human-AI teams.

From Shower Thought to Discipline

Karpathy called his original vibe coding post “a shower of thoughts throwaway tweet.” A year later, it’s a Collins Dictionary entry with its own Wikipedia article. The fact that it needed a successor term, one that emphasizes engineering discipline over vibes, says something about how fast this field is moving.

I still vibe code. Saturday mornings, early ideas, things I’m testing for feasibility. But everything I build, I try to make production-grade โ€” A+ front-end security, multi-factor authentication where warranted, architecture that could withstand a viral moment, Web Application Firewalls and DDoS protection baked in from the start. That’s not vibes. That’s engineering. The difference now is that the agents do most of the building, and I do most of the thinking. Karpathy just gave it a name.