An AI researcher told me something that won't leave my head:
"If a human cannot outperform or meaningfully guide a frontier model
on a task, the...
This is a brilliant mapping of the 'Post-AI' reality.
The distinction between Cheap vs. Expensive Verification is the real signal. We are entering a 'Velocity Trap' where juniors look like seniors because they can clear syntax hurdles at 10x speed, but they haven't spent time in the trenches where you live with the consequences of a bad architectural pivot for three years.
As you noted, the skill has shifted from generation to curation. In the old world, the 'cost' of code was the effort to write it. In the new world, the cost is the cognitive load required to justify keeping what the AI spit out.
The 'Last Generation Problem' is the real existential threat. If we stop learning through the 'friction of search' and the 'pain of refactoring,' we risk becoming pilots who only know how to fly in clear weather.
As we’ve discussed before, the real concern isn’t just immediate performance - it’s that vibe-coding junior developers may lack the skills needed when they eventually have to untangle the spaghetti code that often emerges from AI-first development years later.
"vibe-coding junior developers untangling spaghetti years later".
this is the v2+ problem exactly. tiago forte: AI makes v1 easy, v2+ harder.
juniors building impressive portfolios with AI but never learning:
they're learning generation, not maintenance. the market will pay for maintenance in 2027.
"years later" is the key timeline. the damage isn't immediate, it's deferred. by the time they need those skills, the mentors who could teach them are gone.
Spot on, Daniel. This is one of those issues AI companies knowingly create - and they’d rather we stayed silent about it.
"velocity trap" is perfect framing.
you've nailed the illusion: juniors clearing syntax at 10x speed LOOK like seniors but haven't lived with architectural consequences.
and your cost shift: "effort to write → cognitive load to justify keeping" captures the economics exactly.
doogal called this "discipline to edit" - abundance requires a different skill than scarcity.
"pilots who only fly in clear weather" - this is the metaphor i've been looking for. AI training without friction = clear weather only.
when turbulence hits (production bugs, scale issues, technical debt), they have no instrument training.
anthropic just proved this: juniors using AI finished faster but scored 17% lower on mastery. velocity without understanding.
appreciate you synthesizing this so clearly - "velocity trap" goes in the framework collection.
Great article, good points - this one made me chuckle (but it's true):
"Human Capability (Above) ... Knows what to delete"
AI tends to over-engineer; one of my top "AI concerns" is that we end up with 3 times the number of lines and a codebase full of "excessiveness" - so, deleting lines would be one of my favorite "above" skills :-)
P.S. well, AGI - I'm not an "AI expert" by any stretch, but after reading Wikipedia's article on AGI it sounds a bit like "pie in the sky"; it doesn't even seem to have a really clear definition ...
But, if it means that AI could completely replace humans, then the whole discussion would obviously become irrelevant - and, I can't really imagine that any responsible person or company would seriously want to develop this kind of stuff and unleash it on the world!
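A made-up, miniature example of the kind of excessiveness I mean, and of what "knowing what to delete" leaves behind (names are invented, not from any real codebase):

```python
# AI-style "complete" version: config object, factory, extra indirection,
# none of which the caller actually needs yet.
class GreeterConfig:
    def __init__(self, greeting: str = "Hello", punctuation: str = "!"):
        self.greeting = greeting
        self.punctuation = punctuation

class Greeter:
    def __init__(self, config: GreeterConfig):
        self.config = config

    def greet(self, name: str) -> str:
        return f"{self.config.greeting}, {name}{self.config.punctuation}"

class GreeterFactory:
    @staticmethod
    def create(config: GreeterConfig | None = None) -> Greeter:
        return Greeter(config or GreeterConfig())

# What the feature actually needed:
def greet(name: str) -> str:
    return f"Hello, {name}!"
```

Every extra class above is individually "reasonable", which is exactly why nobody deletes it.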
"deleting lines as favorite Above skill". yes
doogal in the comments called this "discipline to edit". AI generates abundance, you curate with judgment.
your "3x lines of excessiveness" fear is real. tiago forte observed: AI makes greenfield easy but consistency across files hard.
each file "complete" in isolation = massive duplication nobody notices until maintenance.
on AGI: uncle bob's point applies. even if AGI arrives, someone decides WHAT to build. business logic, tradeoffs, human consequences.
unless AGI also replaces customers and stakeholders, humans stay in the loop.
but agree. hoping for AGI to solve knowledge collapse feels like betting on a deus ex machina.
Yeah good points - AI tools (in coding or elsewhere) can absolutely be useful, but should be used where they're most effective - and should be guided with care and judgement, or else we're just creating a mess ...
AGI, I think it's mainly a marketing term used by AI companies to keep their investors "hyped" - nobody really has a good definition of it; maybe at some point it's "there" and we don't even notice it ;-)
"should be guided with care and judgment or else creating a mess". exactly the line
AGI as a marketing term. fair. it also becomes a moving target: every time AI achieves X, the goalpost moves to "but that's not REAL intelligence".
Thank you very much for this article and for explaining, in simple words, complex thoughts about AI and its effects on the dev community, not just on the coding process.
appreciate you reading. the goal was making these abstract patterns concrete and relatable.
Cheap vs expensive verification is the frame I keep coming back to. I work on policy enforcement for MCP tool calls (keypost.ai) and it's exactly this. Checking if an agent's API call returned 200 is trivial. Figuring out whether that agent should have been allowed to make that call in the first place? Nobody catches that until prod breaks.
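Roughly, the difference looks like this (a toy sketch with made-up names, not keypost.ai's actual API):

```python
# Cheap verification: did the call mechanically succeed?
# Anyone (or any script) can check this in one line.
def cheap_check(response) -> bool:
    return response.status_code == 200

# Expensive verification: should this agent have been allowed to make
# the call at all? Needs policy context the response alone can't give you.
# (Hypothetical policy shape, purely illustrative.)
def expensive_check(agent_id: str, tool_call: dict, policy: dict) -> bool:
    rules = policy.get(agent_id, {})
    return (
        tool_call["tool"] in rules.get("allowed_tools", set())
        and tool_call.get("scope") in rules.get("allowed_scopes", set())
    )
```

The first check fails loudly the moment you run it; the second fails silently until production.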
I also wonder if AI-generated v1 code makes the v2 problem worse than we think. When I write code myself I leave accidental extension points everywhere. AI tends to produce something "complete" that's genuinely harder to crack open later.
"AI produces complete code thats harder to crack open later". this is critical insight.
hand-written code leaves breadcrumbs of mental model. accidental extension points reveal how developer was thinking.
AI code = sealed box. looks complete but brittle when reality changes
working on MCP policy enforcement youre seeing expensive verification in production. "nobody catches it until prod breaks" is exactly the deferred cost.
curious: at keypost, how do you teach verification to clients? or do they only learn after breaking prod?
AI can generate code, but developers still define the problem, constraints, tradeoffs, and quality bar. The real value sits in architecture, judgment, and responsible decisions. Code is output. Thinking is the leverage.
"code is output, thinking is leverage"
perfect distillation.
this is the shift. old economy: writing code was expensive. new economy: knowing what NOT to write is expensive.
problem definition, constraints, tradeoffs, quality bar. all Above the API.
AI generates options. you choose based on context it can't see.
appreciate the clarity
AI makes starting cheap.
It makes ownership expensive.
v1 is fast and flashy.
v2 is where systems either mature—or rot.
Seniors aren’t slower than AI.
They’re preventing tomorrow’s disaster today.
The market won’t pay for who can ship fastest.
It will pay for who can keep things standing.
this should be pinned.
"AI makes starting cheap. It makes ownership expensive." perfect economic framing.
and "preventing tomorrow's disaster today" is exactly what uncle bob
meant by "AI can't hold the big picture"
the shift. market paying for velocity → market paying for sustainability
appreciate the clarity
This hits an important point.
Writing code is becoming less of the differentiator — system thinking and problem framing matter more.
Feels like developers are moving “up the stack” conceptually.
Great piece, very sharp and timely!!
You explain the “above vs below the API” idea in a way that actually sticks...
One practical tip I'd add is to always rewrite AI-generated code in your own words before committing it; that's where real understanding shows up!
It’s a small habit, but it keeps you in the judge’s seat instead of on autopilot.
"rewrite AI code in your own words before committing" . this is gold.
perfect example of staying Above the API. youre using AI output as draft, not final.
it forces you to understand what the code does, and why, before you commit it.
this is doogal's "discipline to edit" in practice. AI generates abundance, you curate with judgment.
also aligns with ujja's approach: treat AI like a "confident junior" - helpful but needs review.
small habit, huge impact. turning generation into understanding.
👍
Yeah, exactly. The rewrite is the point. If you can’t restate the code in your own words, you don’t really own it yet. AI’s fine as a draft, but the edit is where understanding and responsibility kick in...
exactly. "if you cant restate it, you dont own it".
this is the litmus test. can you explain to someone else WHY this approach vs alternatives? if not, youre below the API.
also: "edit is where responsibility kicks in" - perfect. AI doesnt take responsibility for production failures. you do.
making this practice default = building Above the API muscle.
Interesting approach and ideas. Thanks for sharing. But I have been wondering for quite some time whether this human input is still needed at large scale. I think there are safe bets like critical thinking. But let's take "pattern mismatch" across a huge codebase. Is this an actual issue when AI is the only one maintaining it? We invented DRY because it is easy to overlook stuff and it is easier to have a central place to control things. This is still true when AI is working on it. But AI is much better at detecting similar code structures and updating them when needed. I am not saying that this is good. But I do think that we will have a new AI-first codebase paradigm, where some of our patterns and approaches are not needed anymore or, even worse, are anti-patterns.
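A toy sketch of the DRY point, with made-up numbers, just to make "a central place to control things" concrete:

```python
# Duplicated rule: whoever maintains this (human or AI) has to find and
# update every copy when the discount cap changes.
def checkout_total(price: float) -> float:
    discount = min(price * 0.10, 50.0)  # copy #1 of the rule
    return price - discount

def invoice_total(price: float) -> float:
    discount = min(price * 0.10, 50.0)  # copy #2, easy to overlook
    return price - discount

# DRY version: one central place to control the rule.
def apply_discount(price: float) -> float:
    return price - min(price * 0.10, 50.0)
```

Whether that central place still pays for itself once an AI can reliably find and update every copy is exactly the open question here.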
fascinating question. you're pushing on real assumptions.
you're right, AI might handle pattern consistency better across huge codebases.
DRY violations, mechanical refactoring: AI excels.
but AI maintaining AI code assumes the original architecture is sound.
uncle bob: "AI piles code making mess." if the initial structure is flawed, consistency amplifies the flaw faster.
"AI only maintaining" assumes no human consequences. but someone decides WHAT to build, verifies it solves the right problem, handles unforeseen edges.
new AI-first patterns coming, yes. question: do those require human judgment to establish, or can AI bootstrap good architecture?
anthropic study: AI creates velocity without understanding. if no human understands the system, who catches systemic issues?
This post hits something I’ve been circling for a while but didn’t quite have language for: verification is becoming the real skill, not generation.
What resonated most for me is that the divide isn’t junior vs senior — it’s delegation vs judgment. I’ve seen people ship insanely fast with AI and still feel… hollow about what they built. Not because it didn’t work, but because they couldn’t explain why it worked, or what would break next.
That “Above the API” framing maps almost perfectly to how I think about systems. AI is incredible at filling space. Humans still own shape, limits, and coherence over time. Especially over time. That’s the part most people underestimate.
The v1/v2 point is painfully accurate. AI makes v1 cheap, almost trivial. But v2 is where reality shows up: business constraints, weird edge cases, legacy decisions, human expectations. That’s where judgment compounds — and where you immediately spot who actually understood what they were building versus who just accepted output that looked right.
What worries me isn’t that juniors are using AI — that’s inevitable. It’s that many are skipping the feedback loops that used to teach skepticism: public code review, painful refactors, maintaining someone else’s mess, being forced to justify decisions. AI smooths over friction so well that people don’t realize what they’re not learning until months later.
And I think you’re right to call out the quiet danger here: the loss of the transmission layer. Judgment doesn’t come from prompts. It comes from exposure to consequences and from watching people who already earned that judgment think out loud. If that mentoring layer thins out, we don’t just lose knowledge — we lose how to doubt.
For me, staying “above the API” looks less like fighting AI and more like refusing to surrender authorship. I use AI constantly — but as a generator, not an authority. The moment I can’t explain a decision without pointing at the model, I know I’ve crossed the line.
So yeah, I don’t think humans are becoming obsolete. I think we’re becoming editors, architects, and long-horizon thinkers by necessity. The scary part is that those skills don’t emerge automatically — they have to be practiced, taught, and protected.
This piece doesn’t feel alarmist to me. It feels like a warning label.
And those are usually written by people who’ve already seen something break.
"refusing to surrender authorship". this is the principle.
you've nailed it twice now. knowledge collapse: "parallelizing knowledge." above the API: "humans own shape, limits, coherence over time".
your framing: "delegation vs judgment" (not junior vs senior) cuts clearer than mine.
"warning label written by people whove seen something break".yes. ive maintained AI-generated code. watched juniors ship fast then cant debug. seen the brittleness.
"editor not generator, architect implementer" . this is the shift
appreciate you synthesizing so clearly across both articles. youre building the frameworks with us.
curious: you're clearly thinking deeply about this. writing anywhere? or just high-quality comments?