An AI researcher told me something that won't leave my head: "If a human cannot outperform or meaningfully guide a frontier model on a task, the human's marginal value is effectively zero."
Now we have data. Anthropic's study with junior engineers shows: using AI without understanding leads to 17% lower mastery (two letter grades).
But some AI users scored high. The difference? They used AI to learn, not to delegate.
The question isn't "Can I use AI?" anymore. It's "Am I using AI to understand, or to avoid understanding?"
The New Divide
There's a line forming in software development. Not senior vs junior. Not experienced vs beginner.
It's deeper than that.
Below the API:
- Can execute tasks AI handles autonomously
- Follows patterns without deep understanding
- Accepts AI output without verification
- Builds features fast but can't foresee disasters
Above the API:
- Guides systems with judgment
- Knows when AI is wrong
- Produces outcomes AI can't generate
- Exercises architectural thinking
The question: which side of that line are you on?
The Divide: What AI Does vs What Humans Still Own
| Domain | AI Capability (Below) | Human Capability (Above) | Why It Matters |
|---|---|---|---|
| Code Generation | Fast, comprehensive output | Knows what to delete | AI over-engineers by default |
| Debugging | Pattern matching from training data | System-level architectural thinking | AI misses root causes across components |
| Architecture | Local optimization within context | Big picture coherence | AI can't foresee cascading disasters |
| Refactoring | Mechanical transformation of code | Judgment on when/why/if to refactor | AI doesn't understand technical debt tradeoffs |
| Learning | Instant recall from training | Hard-won skepticism through pain | AI hasn't been burned by its own mistakes |
| Verification | Cheap domains (does it compile?) | Expensive domains (is this the right approach?) | AI can't judge "good" vs "working" |
| Consistency | Struggles across multiple files | Maintains patterns across codebase | AI loses context, creates inconsistent implementations |
| Simplification | Adds features comprehensively | Disciplines to reject complexity | AI defaults to kitchen-sink solutions |
Below the API: Can execute what AI suggests
Above the API: Can judge whether AI's suggestion is actually good
The line isn't about what you can build. It's about what you can verify, simplify, and maintain.
Why AI Makes Juniors Fast But Seniors Irreplaceable
Tiago Forte observed something crucial about AI-assisted development:
"Claude Code makes it easier to build something from scratch than to modify what exists. The value of building v1s will plummet, but the value of maintaining v2s will skyrocket."
The v1/v2 reality:
A junior developer uses Claude to build an authentication system. 200 lines of code, 20 minutes, tests pass, ships to production. Their portfolio looks impressive.
Six months later: the business needs SSO integration. Now they're debugging auth logic they didn't write, following patterns AI chose for reasons they don't understand, with zero architectural context. What should take 4 hours takes 3 days—because they never learned to structure v1 with v2 in mind.
This is the v1/v2 trap in action.
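What does "structuring v1 with v2 in mind" actually look like? Here's a minimal sketch (TypeScript, hypothetical names, details omitted): instead of wiring password checks straight into the route handlers, the v1 puts an interface between the app and the auth mechanism, so SSO becomes a second implementation rather than a rewrite.

```typescript
// Hypothetical sketch: the v1 auth sits behind an interface, so v2 (SSO)
// is an additional implementation instead of a rewrite of every call site.

type Credentials = { email?: string; password?: string; ssoToken?: string };
type AuthResult = { ok: boolean; userId?: string; reason?: string };

interface AuthProvider {
  authenticate(credentials: Credentials): Promise<AuthResult>;
}

// v1: the only provider the AI generated, local email + password.
class PasswordAuthProvider implements AuthProvider {
  async authenticate({ email, password }: Credentials): Promise<AuthResult> {
    // ...verify against the user store (details omitted)...
    const ok = Boolean(email && password); // placeholder check
    return ok ? { ok, userId: email } : { ok: false, reason: "invalid credentials" };
  }
}

// v2, six months later: SSO slots in without touching existing call sites.
class SsoAuthProvider implements AuthProvider {
  async authenticate({ ssoToken }: Credentials): Promise<AuthResult> {
    // ...validate the token with the identity provider (details omitted)...
    return ssoToken
      ? { ok: true, userId: "sso:" + ssoToken }
      : { ok: false, reason: "missing token" };
  }
}

// Call sites depend on the interface, not on any one mechanism.
async function login(provider: AuthProvider, creds: Credentials): Promise<AuthResult> {
  return provider.authenticate(creds);
}
```

The abstraction itself isn't the point. The point is that someone had to anticipate the second implementation, and that anticipation is exactly what gets skipped when v1 ships in 20 minutes.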
Skills AI Commoditizes (v1 territory):
- Building greenfield projects
- Generating boilerplate
- Following templates
- Speed and feature velocity
Skills AI Can't Replace (v2+ territory):
- Debugging existing systems
- Understanding technical debt
- Knowing when to refactor vs rebuild
- Maintaining architectural coherence
Here's the trap: Junior developers are using AI to build impressive v1 projects for their portfolios. But they're never learning the v2+ maintenance skills that actually command premium rates.
As Ben Podraza noted in response to Tiago: "Works great until you ask it to create two webpages with the same formatting. Then you iterate for hours burning thousands of tokens."
Consistency is hard. Context is hard. Legacy understanding is hard.
Those are exactly the skills you learn from working in mature codebases, reading other people's code, struggling through refactoring decisions.
The knowledge commons taught v2+ skills. AI teaches v1 skills.
Guess which one the market will pay for in 2027?
The Architecture Gap
Uncle Bob Martin (author of Clean Code) has been coding with Claude. His observation cuts to what humans still contribute:
"Claude codes faster than I do by a significant factor. It can hold more details in its 'mind' than I can. But Claude cannot hold the big picture. It doesn't understand architecture. And although it appreciates refactoring, it shows no inclination to acquire that value for itself. It does not foresee the disaster it is creating."
The danger: AI makes adding features so easy that you skip the "slow down and think" step.
"Left unchecked, AI will pile code on top of code making a mess. By the same token, it's so easy for humans to use AI to add features that they pile feature upon feature making a mess of things."
When someone asked "How much does code quality matter when we stop interacting directly with code?", Uncle Bob's response was stark:
"I'm starting to think code quality matters even more."
Why? Because someone still has to maintain architectural coherence across the mess AI generates. That someone needs to understand both what the code does AND why it was structured that way.
The Claude Code Reality Check
Since Claude Code and Anthropic's Model Context Protocol (MCP) launched, developers have been experimenting with AI-first workflows. The results mirror Uncle Bob's observation exactly: AI is incredibly fast at implementation but blind to architectural consequences.
What Claude Code excels at:
- Generating boilerplate quickly
- Following explicit patterns within single files
- Maintaining local context for focused tasks
- Implementing well-defined specifications
Where it fails (by design):
- Understanding project-wide architecture
- Maintaining consistency across multiple files
- Knowing when to slow down and reconsider the approach
- Foreseeing how today's "quick fix" becomes tomorrow's technical debt
- Asking "should we even build this?" instead of "how do we build this?"
The tool is powerful. I use it daily. But treating it as autopilot instead of compass leads to the "code pile" Uncle Bob warned about.
This isn't Claude's limitation—it's a fundamental constraint of current AI architecture. As Peter Truchly explained in the comments: "LLMs are not built to seek the truth. They're trained for output coherency (a.k.a. helpfulness)."
An LLM will confidently generate code that compiles and runs. Whether it's the right code—architecturally sound, maintainable, simple—requires human judgment in what Ben Santora calls "expensive verification domains."
That judgment is what keeps you Above the API.
The Skills That Actually Matter
From the discussions in my knowledge collapse article, here's what keeps you Above the API:
1. Architectural Thinking (Uncle Bob's "Big Picture")
- Knowing when to slow down
- Seeing consequences AI can't predict
- Making refactoring decisions with context
- Balancing technical debt vs new features
2. V2+ Mastery (Tiago's Maintenance Skills)
- Debugging complex existing systems
- Understanding why code was written certain ways
- Maintaining consistency across iterations
- Choosing between rebuild vs refactor
3. Verification Capability (Ben Santora's "Judge" Layer)
- Knowing when AI is confidently wrong
- Distinguishing cheap vs expensive verification domains
- Building skepticism without becoming paralyzed
- Testing assumptions, not just accepting outputs
As Ben Santora explained in his work on AI reasoning limits:
"Knowledge collapse happens when solver output is recycled without a strong, independent judging layer to validate it. The risk is not in AI writing content; it comes from AI becoming its own authority."
Cheap verification domains:
- Code compiles or doesn't
- Tests pass or fail
- API returns correct response
Expensive verification domains:
- Is this architecture sound?
- Will this scale?
- Is this maintainable?
- Is this the right approach?
AI sounds equally confident in both domains. But in expensive verification domains, you won't know you're wrong until months later when the system falls over in production.
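A minimal sketch of the asymmetry (hypothetical function and checks, TypeScript with node:assert): the cheap verification fits in a test file; the expensive verification only exists as questions a human has to ask.

```typescript
// Hypothetical example: cheap verification is automatable; expensive verification is not.
import { strict as assert } from "node:assert";

// AI-generated helper (illustrative): applies a percentage discount to an order total.
function applyDiscount(total: number, percent: number): number {
  return total - total * (percent / 100);
}

// Cheap verification: it compiles, it runs, the numbers come out right.
assert.strictEqual(applyDiscount(100, 10), 90);
assert.strictEqual(applyDiscount(0, 50), 0);

// Expensive verification: none of this can be asserted in a test.
// - Should discounts be computed here at all, or in the pricing service?
// - What happens when two discounts stack? Who decided that policy?
// - Is floating-point money acceptable, or should this use integer cents?
// A green test suite answers none of these, and AI sounds equally confident either way.
```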
4. Discipline to Simplify (Doogal Simpson's "Editing")
In the comments, Doogal Simpson reframed the shift from scarcity to abundance:
"We are trading the friction of search for the discipline of editing. The challenge now isn't generating the code, but having the guts to reject the 'Kitchen Sink' solutions the AI offers."
Old economy: Scarcity forced simplicity (finding answers was expensive)
New economy: Abundance requires discipline (AI generates everything, you must delete)
The skill shifts from ADDING to DELETING. From generating to curating. From solving to judging.
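Here's a small, hypothetical illustration of what that deleting looks like in code: the kitchen-sink helper an assistant will happily generate, next to the version a human keeps after asking what the feature actually needs (assumes a global fetch, Node 18+ or browser).

```typescript
// Kitchen-sink version: configurable retries, backoff, hooks, and logging,
// none of which the feature asked for.
interface FetchUserOptions {
  retries?: number;
  backoff?: "linear" | "exponential";
  backoffBaseMs?: number;
  onRetry?: (attempt: number) => void;
  logger?: (msg: string) => void;
}

async function fetchUserKitchenSink(id: string, opts: FetchUserOptions = {}): Promise<unknown> {
  const { retries = 3, backoff = "exponential", backoffBaseMs = 200, onRetry, logger } = opts;
  for (let attempt = 0; attempt <= retries; attempt++) {
    logger?.(`fetching user ${id}, attempt ${attempt}`);
    const res = await fetch(`/api/users/${id}`);
    if (res.ok) return res.json();
    onRetry?.(attempt);
    if (attempt < retries) {
      const delayMs =
        backoff === "exponential" ? backoffBaseMs * 2 ** attempt : backoffBaseMs * (attempt + 1);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw new Error(`failed to fetch user ${id}`);
}

// Edited version: the caller needed exactly one thing. Fetch a user, fail loudly.
async function fetchUser(id: string): Promise<unknown> {
  const res = await fetch(`/api/users/${id}`);
  if (!res.ok) throw new Error(`failed to fetch user ${id}: ${res.status}`);
  return res.json();
}
```

Both versions pass the same happy-path tests. Only one of them is a maintenance liability a year from now.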
5. Domain Expertise (John H's Context)
In the comments, John H explained how he uses AI effectively as a one-man dev shop:
"I can concentrate on being the knowledge worker, ensuring the business rules are met and that the product meets the customer usability requirements."
What John brings:
- 3 years with his application
- Deep customer knowledge
- Business rules understanding
- Can verify if AI output actually solves the right problem
John isn't using AI as autopilot. He's using it as a force multiplier while staying as the judge.
The pattern: Experienced developers with deep context use AI effectively. They can verify output, catch errors, know when to override suggestions.
The problem: Can juniors learn this approach without first building the hard-won experience that makes verification possible?
The Anthropic Study: Using AI vs Learning With AI
While writing this piece, Anthropic published experimental data that validates the Above/Below divide.
In a randomized controlled trial with junior engineers:
- AI-assistance group finished ~2 minutes faster
- But scored 17% lower on mastery quiz (two letter grades)
- "Significant decrease in mastery"
However: Some in the AI group scored highly while using AI.
The difference? They asked "conceptual and clarifying questions to understand the code they were working with—rather than delegating or relying on AI."
This is the divide:
Below the API (delegating):
"AI, write this function for me" → Fast → No understanding → Failed quiz
Above the API (learning with AI):
"AI, explain why this approach works" → Slower but understands → Scored high
Speed without understanding = Below the API.
Understanding while using AI = Above the API.
The tool is the same. Your approach determines which side you're on.
[Source: Anthropic's study, January 2026]
The Last Generation Problem
In the comments, Maame Afua revealed something crucial: she's a junior developer, but she's using AI effectively because she had mentors.
"I got loads of advice from really good developers who have been through the old school system (without AI). I have been following their advice."
The transmission mechanism: Pre-AI developers teaching verification skills to AI-era juniors.
Maame can verify AI output not because she's experienced, but because experienced devs taught her to be skeptical. Her learning path:
- Build foundation first (books, docs, accredited resources)
- Use AI as assistant, not primary learning tool
- Verify against authoritative sources
- Never implement what she can't explain
But here's the cliff we're approaching:
Right now, there are enough pre-AI developers to mentor. In 5-10 years, most seniors will have learned primarily WITH AI.
Who teaches the next generation to doubt? Who transfers verification habits when nobody has them?
We're one generation away from losing the transmission mechanism entirely.
Maame is lucky. She found good mentors before the window closed. The juniors starting in 2030 won't have that option.
How People Learn Verification
Two developers in the comments showed different paths to building verification skills:
The Hard Way (ujja)
ujja learned "zero-trust reasoning" through painful experience:
"Trusted AI a bit too much, moved fast, and only realized days later that a core assumption was wrong. By then it was already baked into the design and logic, so I had to scrap big chunks and start over."
His mental model shifted:
- Before: "Does this sound right?"
- After: "What would make this wrong?"
He now treats AI like "a very confident junior dev - super helpful, but needs review."
His insight: "I do not think pain is required, but without some kind of feedback loop like wasted time or broken builds, it is hard to internalize. AI removes friction, so people skip verification until the cost shows up later."
The Deliberate Way (Fernando)
Fernando Fornieles recognized the problem months ago and took action without waiting to get burned:
- Closed private social media accounts
- Migrated to fediverse
- Built home cloud server (Nextcloud on Raspberry Pi)
- Actively avoiding platform "enshittification"
He's not learning through pain. He's acting on principles.
The question: Can we teach ujja's learned skepticism without the pain? Can we scale Fernando's deliberate action?
Or does every junior need to scrap a week's work before they learn to verify AI output?
What the Knowledge Commons Taught
Stack Overflow debates taught architecture. Someone would propose a solution, others would tear it apart, consensus would emerge through friction. That friction built judgment.
Code review culture taught "slow down and think." You couldn't just ship it - someone would ask "why this approach?" and you'd have to justify architectural decisions.
Painful bugs taught foreseeing disaster. You'd implement something that seemed fine, it would blow up in production, you'd learn to see those patterns early.
Legacy codebases taught refactoring judgment. You'd maintain someone else's decisions, understand their constraints, learn when to preserve vs rebuild.
All of this happened in public. On Stack Overflow. In code review comments. In GitHub issues. In conference talks.
AI assistance happens in private. Individual optimization. No public friction. No collective refinement.
The skills that keep you Above the API were taught by the knowledge commons we're killing.
Practical Actions
If You're Junior/Early Career:
Seek pre-AI mentors actively
- Find developers who learned before ChatGPT
- Ask them to review your AI-generated code
- Learn their skepticism patterns
Work in mature codebases
- Don't just build greenfield projects
- Contribute to established open source
- Learn from technical debt decisions
Document your reasoning publicly
- Write about WHY you chose approaches
- Publish debugging journeys, not just solutions
- Contribute to the commons you're consuming
Build verification habits explicitly
- Always check AI output against docs
- Test assumptions, don't just ship
- Learn to recognize "confident wrongness"
Treat AI like ujja does
- "Very confident junior dev"
- Super helpful, but needs review
- Ask "what would make this wrong?" not "does this sound right?"
If You're Senior/Experienced:
Mentor explicitly
- Teach verification, not just syntax
- Share your skepticism patterns
- Explain architectural thinking out loud
Preserve architectural knowledge
- Document WHY decisions were made
- Publish architecture decision records
- Write about the disasters you foresaw
Contribute to commons deliberately
- Answer questions on Stack Overflow
- Write detailed technical blog posts
- Open source your reasoning, not just code
Make "slow down and think" visible
- Show juniors when you pause to consider
- Explain the questions you ask AI
- Demonstrate the editing/simplification process
The Uncomfortable Questions
The AGI Wild Card
In the comments on my knowledge collapse article, Leob raised the ultimate question: what if AI achieves true invention?
"Next breakthrough for AI would be if it can 'invent' something by itself, pose new questions, autonomously create content, instead of only regurgitating what's been fed to it."
If that happens, "Above the API" might become irrelevant.
But as Uncle Bob observed: "AI cannot hold the big picture. It doesn't understand architecture."
Peter Truchly added technical depth to this limitation:
"LLMs are not built to seek the truth. Gödel/Turing limitations do apply but LLM does not even care. The LLMs are just trained for output coherency (a.k.a. helpfulness)."
Two possible futures:
Scenario 1: AI remains sophisticated recombinator
Knowledge collapse poisons training data. Model quality degrades. The Above/Below divide matters enormously. Your architectural thinking and verification skills remain valuable for decades.
Scenario 2: AI achieves AGI and true invention
Knowledge collapse doesn't matter because AI generates novel knowledge. But then... what do humans contribute?
Betting everything on "AGI will save us from knowledge collapse" feels risky when we're already seeing the collapse happen.
Maybe we should fix the problem we KNOW exists rather than hoping for a breakthrough that might make everything worse.
Does Software Even Need Humans?
Mike Talbot pushed back on my entire premise:
"Why do humans need to build a knowledge base? So that they and others can make things work? Who cares about the knowledge base if the software works?"
His argument: Knowledge bases exist to help HUMANS build software. If AI can build software without human knowledge bases, who cares if Stack Overflow dies?
He used a personal example:
"I wrote my first computer game. I clearly remember working on a Disney project in the 90s and coming up with compiled sprites. All of that knowledge, all of that documentation, wiped out by graphics cards. Nobody cared about my compiled sprites; they cared about working software."
His point: Every paradigm shift makes previous knowledge obsolete. Maybe AI is just the next shift.
My response: Graphics cards didn't train on his compiled sprite documentation. They were a fundamentally different approach. AI is training on Stack Overflow, Wikipedia, GitHub. If those die and AI trains on AI output, we get model collapse, not a paradigm shift.
Mike's challenge matters because it forces clarity: Are we preserving human knowledge because it's inherently valuable? Or because it's necessary for AI to keep improving?
If AGI emerges, his question becomes more urgent. If it doesn't, preserving human knowledge becomes more critical.
What You Actually Contribute
Back to the original question: "What do you contribute that AI cannot?"
You contribute verification. AI solves problems. You judge if the solution is actually good.
You contribute architecture. AI writes code. You see the big picture it can't hold.
You contribute foresight. AI optimizes locally. You prevent disasters it doesn't see coming.
You contribute context. AI has patterns. You have domain expertise, customer knowledge, historical understanding.
You contribute judgment in expensive verification domains. AI excels where verification is cheap (does it compile?). You excel where verification is expensive (will this scale? is this maintainable? is this the right approach?).
You contribute simplification. AI generates comprehensive solutions. You have the discipline to delete complexity.
You contribute continuity. AI is stateless. You maintain coherence across systems, teams, and time.
But here's the uncomfortable truth: none of these skills are guaranteed.
They're learned. Through friction. Through pain. Through public struggle. Through mentorship from people who learned the hard way.
If we kill the knowledge commons, we kill the training grounds for Above-the-API skills.
If we stop mentoring explicitly, we lose the transmission mechanism in one generation.
If we optimize purely for velocity, we lose the "slow down and think" muscle.
Staying Above the API isn't automatic. It's a choice you make every day.
Choose to verify, not just accept.
Choose to simplify, not just generate.
Choose to foresee, not just react.
Choose to mentor, not just build.
Choose to publish, not just consume.
The API line is real. Which side will you be on?
This piece was built from discussions with developers working through these questions publicly. Special thanks to Uncle Bob Martin, Tiago Forte, Maame Afua, ujja, Fernando Fornieles, Doogal Simpson, Ben Santora, John H, Mike Talbot, Leob, Peter Truchly, and everyone else thinking through this transition.
What skills do you think will keep developers Above the API? What am I missing? Let's figure this out together.
Part of a series:
- My Chrome Tabs Tell a Story We Haven't Processed Yet
- We're Creating a Knowledge Collapse and No One's Talking About It
- Above the API: What Developers Contribute When AI Can Code (you are here)
- Building the Foundation (coming soon)
Top comments (27)
This is a brilliant mapping of the 'Post-AI' reality.
The distinction between Cheap vs. Expensive Verification is the real signal. We are entering a 'Velocity Trap' where juniors look like seniors because they can clear syntax hurdles at 10x speed, but they haven't spent time in the trenches where you live with the consequences of a bad architectural pivot for three years.
As you noted, the skill has shifted from generation to curation. In the old world, the 'cost' of code was the effort to write it. In the new world, the cost is the cognitive load required to justify keeping what the AI spit out.
The 'Last Generation Problem' is the real existential threat. If we stop learning through the 'friction of search' and the 'pain of refactoring,' we risk becoming pilots who only know how to fly in clear weather.
As we’ve discussed before, the real concern isn’t just immediate performance - it’s that vibe-coding junior developers may lack the skills needed when they eventually have to untangle the spaghetti code that often emerges from AI-first development years later.
"vibe-coding junior developers untangling spaghetti years later".
this is the v2+ problem exactly. tiago forte: AI makes v1 easy, v2+ harder.
juniors building impressive portfolios with AI but never learning the other half: theyre learning generation, not maintenance. market will pay for maintenance in 2027.
"years later" is the key timeline. damage isnt immediate, its deferred. by the time they need those skills, mentors who could teach them are gone.
Spot on, Daniel. This is one of those issues AI companies knowingly create - and they’d rather we stayed silent about it.
"velocity trap" is perfect framing.
youve nailed the illusion: juniors clearing syntax at 10x speed LOOK like seniors but havent lived with architectural consequences.
and your cost shift: "effort to write → cognitive load to justify keeping" captures the economics exactly.
doogal called this "discipline to edit" - abundance requires different skill than scarcity.
"pilots who only fly in clear weather" - this is the metaphor ive been looking for. ai training without friction = clear weather only.
when turbulence hits (production bugs, scale issues, technical debt), they have no instrument training.
anthropic just proved this: juniors using AI finished faster but scored 17% lower on mastery. velocity without understanding
appreciate you synthesizing this so clearly - "velocity trap" goes in the framework collection.
Great article, good points - this one made me chuckle (but it's true):
"Human Capability (Above) ... Knows what to delete"
AI tends to over-engineer, one of my top "AI concerns" is that we end up with 3 times the number of lines and a codebase full of "excessiveness" - so, deleting lines would be one of my favorite "above" skills :-)
P.S. well, AGI - I'm not an "AI expert" by a far stretch, but after reading Wikipedia's article on AGI it sounds a bit like "pie in the sky", it doesn't even seem to have a really clear definition ...
But, if it means that AI could completely replace humans, then the whole discussion would obviously become irrelevant - and, I can't really imagine that any responsible person or company would seriously want to develop this kind of stuff, and unleash it on the world!
"deleting lines as favorite Above skill". yes
doogal in comments called this "discipline to edit". AI generates abundance, you curate with judgment.
your "3x lines of excessiveness" fear is real. tiago forte observed: AI makes greenfield easy but consistency across files hard
each file "complete" in isolation = massive duplication nobody notices until m maintenance.
on AGI: uncle bob's point applies. even if AGI arrives, someone decides WHAT to build. business logic, tradeoffs, human consequences.
unless AGI also replaces customers and stakeholders, humans stay in loop
but agree. hoping for AGI to solve knowledge collapse feels like betting on deus ex machina
Yeah good points - AI tools (in coding or elsewhere) can absolutely be useful, but should be used where they're most effective - and should be guided with care and judgement, or else we're just creating a mess ...
AGI, I think it's mainly a marketing term used by AI companies to keep their investors "hyped" - nobody really has a good definition of it, maybe at some point "it's there" and we don't even notice it ;-)
"should be guided with care and judgment or else creating a mess". exactly the line
AGI as marketing term. fair. also becomes a moving target. every time AI achieves X, the goalpost moves to "but thats not REAL intelligence"
Thank you very much for this article and explaining in simple words complex thoughts about AI and its effects on the dev community and not just only on the coding process.
appreciate you reading. the goal was making these abstract patterns concrete and relatable.
Cheap vs expensive verification is the frame I keep coming back to. I work on policy enforcement for MCP tool calls (keypost.ai) and it's exactly this. Checking if an agent's API call returned 200 is trivial. Figuring out whether that agent should have been allowed to make that call in the first place? Nobody catches that until prod breaks.
I also wonder if AI-generated v1 code makes the v2 problem worse than we think. When I write code myself I leave accidental extension points everywhere. AI tends to produce something "complete" that's genuinely harder to crack open later.
"AI produces complete code thats harder to crack open later". this is critical insight.
hand-written code leaves breadcrumbs of mental model. accidental extension points reveal how developer was thinking.
AI code = sealed box. looks complete but brittle when reality changes
working on MCP policy enforcement, youre seeing expensive verification in production. "nobody catches it until prod breaks" is exactly the deferred cost.
curious: at keypost, how do you teach verification to clients? or do they only learn after breaking prod?
AI can generate code, but developers still define the problem, constraints, tradeoffs, and quality bar. The real value sits in architecture, judgment, and responsible decisions. Code is output. Thinking is the leverage
"code is output, thinking is leverage"
perfect distillation.
this is the shift. old economy: writing code was expensive. new economy: knowing what NOT to write is expensive
problem definition, constraints, tradeoffs, quality bar. all Above the API
AI generates options. you choose based on context it cant see
appreciate the clarity
AI makes starting cheap.
It makes ownership expensive.
v1 is fast and flashy.
v2 is where systems either mature—or rot.
Seniors aren’t slower than AI.
They’re preventing tomorrow’s disaster today.
The market won’t pay for who can ship fastest.
It will pay for who can keep things standing.
this should be pinned.
"AI makes starting cheap. It makes ownership expensive." perfect economic framing.
and "preventing tomorrow's disaster today" is exactly what uncle bob
meant by "AI can't hold the big picture"
the shift. market paying for velocity → market paying for sustainability
appreciate the clarity
This hits an important point.
Writing code is becoming less of the differentiator — system thinking and problem framing matter more.
Feels like developers are moving “up the stack” conceptually.
Great piece, very sharp and timely!!
You explain the “above vs below the API” idea in a way that actually sticks...
One practical tip I’d add is to always rewrite AI-generated code in your own words before committing it, that’s where real understanding shows up!
It’s a small habit, but it keeps you in the judge’s seat instead of on autopilot.
"rewrite AI code in your own words before committing" . this is gold.
perfect example of staying Above the API. youre using AI output as draft, not final.
forces you to actually understand each decision before you commit it.
this is doogal's "discipline to edit" in practice. AI generates abundance, you curate with judgment
also aligns with ujja's approach: treat AI like a "confident junior". helpful, but needs review
small habit, huge impact. turning generation into understanding
👍
Yeah, exactly. The rewrite is the point. If you can’t restate the code in your own words, you don’t really own it yet. AI’s fine as a draft, but the edit is where understanding and responsibility kick in...
exactly. "if you cant restate it, you dont own it".
this is the litmus test. can you explain to someone else WHY this approach vs alternatives? if not, youre below the API.
also: "edit is where responsibility kicks in" - perfect. AI doesnt take responsibility for production failures. you do.
making this practice default = building Above the API muscle.
Interesting approach and ideas. Thanks for sharing. But I was wondering for quite some time, whether or not this human input is still needed at large scale. I think there are safe bets like critical thinking. But lets take "pattern mismatch" across a huge codebase. Is this an actual issue when AI is the only one maintaining it? We invented DRY, because it is easy to overlook stuff and it is easier to have a central place to control things. This is still true, when AI is working on it. But AI is much better at detecting similar code structures and update them, when needed. I am not saying that this is good. But I do think that we will have a new AI first code base paradigm, where some of our patterns and approaches are not needed anymore or even worse are anti patterns.
fascinating question. pushing on real assumptions.
youre right AI might handle pattern consistency better across huge codebases.
DRY violations, mechanical refactoring. AI excels.
but AI maintaining AI code assumes the original architecture is sound.
uncle bob: "AI piles code making mess." if the initial structure is flawed, consistency amplifies the flaw faster.
"AI only maintaining" assumes no human consequences. but someone decides
WHAT to build, verifies it solves right problem, handles unforeseen edges
new AI-first patterns coming, yes. question: do those require human judgment to establish, or can AI bootstrap good architecture?
anthropic study: AI creates velocity without understanding. if no human understands the system, who catches systemic issues?
This post hits something I’ve been circling for a while but didn’t quite have language for: verification is becoming the real skill, not generation.
What resonated most for me is that the divide isn’t junior vs senior — it’s delegation vs judgment. I’ve seen people ship insanely fast with AI and still feel… hollow about what they built. Not because it didn’t work, but because they couldn’t explain why it worked, or what would break next.
That “Above the API” framing maps almost perfectly to how I think about systems. AI is incredible at filling space. Humans still own shape, limits, and coherence over time. Especially over time. That’s the part most people underestimate.
The v1/v2 point is painfully accurate. AI makes v1 cheap, almost trivial. But v2 is where reality shows up: business constraints, weird edge cases, legacy decisions, human expectations. That’s where judgment compounds — and where you immediately spot who actually understood what they were building versus who just accepted output that looked right.
What worries me isn’t that juniors are using AI — that’s inevitable. It’s that many are skipping the feedback loops that used to teach skepticism: public code review, painful refactors, maintaining someone else’s mess, being forced to justify decisions. AI smooths over friction so well that people don’t realize what they’re not learning until months later.
And I think you’re right to call out the quiet danger here: the loss of the transmission layer. Judgment doesn’t come from prompts. It comes from exposure to consequences and from watching people who already earned that judgment think out loud. If that mentoring layer thins out, we don’t just lose knowledge — we lose how to doubt.
For me, staying “above the API” looks less like fighting AI and more like refusing to surrender authorship. I use AI constantly — but as a generator, not an authority. The moment I can’t explain a decision without pointing at the model, I know I’ve crossed the line.
So yeah, I don’t think humans are becoming obsolete. I think we’re becoming editors, architects, and long-horizon thinkers by necessity. The scary part is that those skills don’t emerge automatically — they have to be practiced, taught, and protected.
This piece doesn’t feel alarmist to me. It feels like a warning label.
And those are usually written by people who’ve already seen something break.
"refusing to surrender authorship". this is the principle.
youve nailed it twice now. knowledge collapse: "parallelizing knowledge." above the API: "humans own shape, limits, coherence over time"
your framing: "delegation vs judgment" (not junior vs senior) cuts clearer than mine.
"warning label written by people whove seen something break".yes. ive maintained AI-generated code. watched juniors ship fast then cant debug. seen the brittleness.
"editor not generator, architect implementer" . this is the shift
appreciate you synthesizing so clearly across both articles. youre building the frameworks with us.
curious: youre clearly thinking deeply about this. writing anywhere? or just high-quality comments?