Software engineering was a walled garden. AI just copied the key. The data is messy: 19% slower in trials, 30% more warnings, 322% more vulnerabilities. But the baseline wasn't pristine either. What's left isn't coding: it's judgment, taste, and knowing which room to build.
For decades, software engineering has been a walled garden. You needed years of training just to enter. Once inside, you organized the furniture however you liked: object-oriented hierarchies, functional pipelines, services, microservices, monoliths. Nobody outside could meaningfully challenge your choices. The room was comfortable. The rent was high, and only we had the key.
Well, the key just got copied.
What's actually happening
I co-author commits with Claude daily. Looking at my own Git history on Mergify's codebase, the pattern is unambiguous: database migrations, API endpoints, bug fixes across async Python, SQL triggers, test scaffolding. All produced in conversation with an AI that doesn't know what a pointer is but writes correct code involving one.
And I'm not an outlier. Boris Cherny, creator of Claude Code, reported his "first month as an engineer when I didn't open an IDE at all." More than 250 PRs, 497 commits, 40,000 lines added. Every single line written by Claude. Andrej Karpathy went from calling AI coding tools "slop" to admitting, three months later: "The profession is being dramatically refactored as the bits contributed by the programmer are increasingly sparse."
This isn't prediction. It's the present tense.
The black box we're pretending doesn't exist
Here's what bothers me. We're using AI to write our abstractions: classes, interfaces, structures, variables, modules, design patterns. But these are human organizational tools. Scaffolding for brains that can hold roughly seven things in working memory. AI has no such constraint. It doesn't need a class hierarchy to understand a system. It doesn't need variable names to be descriptive. It doesn't even need files.
We're asking a fundamentally alien intelligence to work within our cognitive limitations, and then judging it by how well it mimics us. The real question isn't "can AI write good OOP?" It's: what would software look like if it weren't designed to be read by humans at all?
We might be entering an era where the artifact isn't source code. It's a behavior specification. You describe what the system should do; the AI produces something that does it. Not code you read, not architecture you diagram. A black box that passes your tests.
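A minimal sketch of what that shift might look like, in Python. The `slugify` function is a hypothetical example, not from the article: the assertions are the specification you author and review, while the function body stands in for an AI-generated implementation you might never read.

```python
import re

def slugify(title: str) -> str:
    # Placeholder body. In the scenario described above, this is the
    # black box: generated to satisfy the specification, not to be read.
    s = title.strip().lower()
    s = re.sub(r"[^a-z0-9]+", "-", s)
    return s.strip("-")

# The behavior specification is the artifact: it pins down what the
# system does without saying anything about how.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  spaced  out  ") == "spaced-out"
assert slugify("ALL CAPS") == "all-caps"
```

The inversion is the point: today the tests guard the code; in this framing, the tests are the deliverable and the code is interchangeable.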
This is terrifying to engineers, and it probably should be.
The data says it's complicated
Before we conclude that the revolution is here, consider the numbers. They tell a more humbling story.
A randomized trial by METR gave experienced open-source developers full access to Cursor Pro with Claude. They were 19% slower with AI than without, while believing they were 20% faster. The productivity gain was a placebo.
Carnegie Mellon analyzed more than 800 repositories that adopted Cursor versus 1,380 control repos. Static analysis warnings rose by about 30%. Code complexity increased by over 40%. An initial productivity spike in months one and two reverted to baseline by month three. But the quality degradation persisted. As the author put it: "The tools are contributing to the problem, not merely reflecting user behaviour."
And on security, Apiiro found AI-generated code introduced 322% more privilege escalation paths and 153% more design flaws compared to human-written code. AI-assisted commits are merged 4x faster, bypassing normal review cycles.
So the situation is: faster to write, harder to maintain, more vulnerable. Speed without judgment.
The 70% problem
This has a name now. The 70% problem: AI gets you to 70% fast. The remaining 30% is edge cases, error handling, integration, and architecture, and fixing it often takes longer than writing the whole thing from scratch would have. Marcus Hutchins nailed it: "LLMs give the same feeling of achievement one would get from doing the work themselves, but without any of the heavy lifting."
The feeling of productivity is not productivity. This is the trap.
But here's the nuance most critics miss: this argument assumes humans were writing well-structured, maintainable code before AI showed up. We weren't. Most production codebases are a mess of accidental complexity, copy-pasted Stack Overflow answers, and undocumented tribal knowledge. The baseline we're defending isn't as pristine as we pretend.
What's actually being lost
The SF Standard reported that, since 2019, new-graduate hiring at the 15 largest US tech companies has fallen by 55%. CS enrollment declined for the first time since the dot-com bust. One anonymous junior engineer described himself as "basically a proxy to Claude Code."
This is the part that keeps me up. Not because coding jobs are disappearing. They are, at the entry level. But because the apprenticeship pipeline is breaking. Without juniors struggling through bad code and production incidents, where do future seniors come from? We're optimizing for today's throughput at the cost of tomorrow's expertise.
Gergely Orosz lists what's being commoditized: language specialization, frontend/backend boundaries, prototyping, implementing well-defined tickets. These were comfortable, well-paying activities. They were also, if we're honest, the bulk of what most engineers do day-to-day.
The room is still there, but the walls moved
I keep coming back to this metaphor. Engineering was a room where only technical people could enter and organize things. Now we have automatic vacuum cleaners, robotic painters, and self-adjusting air conditioning. The room isn't gone. But the premium on arranging furniture has collapsed.
What's left is knowing which room to build.
Product sense. System-level judgment. Understanding why a user abandons a flow at step three, and whether the fix is technical, design, or neither. Knowing when to distrust the AI. Addy Osmani calls this the defining skill: "The best software engineers won't be the fastest coders, but those who know when to distrust AI."
Nicholas Zakas argues we're becoming orchestrators of "Minimum Viable Engineering Teams": one person directing multiple agents to produce the output of a traditional team. The skills that matter are organizational, communicative, and strategic. Not syntactic.
And beyond the room
Here's the part nobody wants to say out loud.
If AI can read user behavior at scale (session replays, click patterns, navigation flows, aggregated across millions of users), it will be better than any human at identifying pain points. It doesn't get bored. It doesn't have confirmation bias. It doesn't fall in love with its own solution. Imagine Hotjar, but for everything, continuously, with the analytical capacity of a thousand product managers.
At that point, the question isn't "what's left for engineers?" It's "what's left for humans in the product development loop?"
I think the answer is taste. Not in the Steve Jobs sense of aesthetic preference, but in the deeper sense of deciding what matters. AI can tell you what users do. It can't tell you what users should be able to do. It can optimize a funnel. It can't decide whether the funnel should exist.
So what do we do?
I think this is the best time to invest in quality. In humans, in the craft of deciding what to build and why. Not because AI is bad at code (it's getting better every quarter), but because the scarce resource is shifting. It used to be implementation capacity. Soon it'll be judgment.
The comfortable room era is ending. What replaces it isn't chaos. It's a different kind of discipline. Less about writing correct code, more about specifying correct behavior. Less about technical gatekeeping, more about understanding the problem deeply enough to know whether the AI's solution actually solves it.
Easier said than done. But probably worth starting now, while we still remember what the furniture looked like.
Further reading:
When AI Writes Almost All Code, What Happens to Software Engineering? (Gergely Orosz)
AI Is Still Making Code Worse: A New CMU Study Confirms (Rob Bowley)
The Next Two Years of Software Engineering (Addy Osmani)
AI Writes the Code Now. What's Left for Software Engineers? (SF Standard)
From Coder to Orchestrator (Nicholas Zakas)