Why Firing Developers Because of AI Will Be the Dumbest Thing You Do This Decade

Photo by Luna Wang / Unsplash - the robots coming for us.

Everyone's using AI. The winners will be the ones with more people who can use it well.


The CEO announced it proudly: "We're ahead of the curve. AI-first."
Then they cut 30% of engineering.

But the company that didn't cut their engineers? They used AI to produce more — with fewer errors.
They hired people who know how to work with AI. And in the emerging symbiosis of human judgment and machine speed, they outshipped everyone.

The "cost-saving" company fired its leverage.
They cut the very potential AI was giving them.

Welcome to the Future - Which You Are Messing Up Badly

Congratulations. You've discovered AI. And like every tech-curious manager freshly armed with GPT-4 (with GPT-5 only days away) and a copy of Harvard Business Review, your first thought is: "Whom can we fire?"

Bold move. Visionary, even. After all, why keep paying people when a machine can autocomplete Python functions and pretend to understand product specs?

Except here’s the punchline:

AI increases productivity — and that means you need more people, not fewer.

More output means more oversight, more integration, more judgment, more alignment, and — this is the hard part — more humans who actually care about the outcome.

But sure. Cut staff. Streamline! Flatten! Optimize! Let’s see where that goes.


AI Makes Everyone Faster — So Everyone Has to Go Faster

You gave your dev team a jet engine and thought, “Perfect, now we can take off the wings.”

AI does make individuals faster. In fact, a single developer today can produce what a small team did five years ago. Docs, tests, endpoints, UIs, even marketing copy — it’s all spewing out at firehose velocity. Great!

And now the reality: so can your competitors.

If they're not stupid, they're not downsizing. They're scaling up. They're building more. Shipping faster. Taking bigger swings because they know they can iterate in days, not months.

This isn’t automation in the factory sense. It’s augmentation on a battlefield. The side with the better soldiers still wins — they’re just wearing exoskeletons now.

If you’re cutting headcount while your rivals are hiring prompt-savvy engineers and shipping ten times faster, you're not reducing costs — you’re volunteering for irrelevance.

But don’t worry. You’ll get a great deal on office space.

“You fired the glue. You didn’t save money — you set yourself up for your company's downfall in three acts.”

A Böses Erwachen Awaits

Let’s borrow a fine German term for what’s coming next: ein böses Erwachen. A rude awakening. The kind that hits around Q3, when your product roadmap mysteriously stalls despite all the AI hype.

You fired the glue. The humans who made judgment calls. The ones who said, “This output looks plausible but is absolute garbage.”

You didn’t save money. You installed a chorus of eloquent parrots, trained on internet nonsense (and good stuff, too), and called it engineering. Then you fired the humans who could tell squawking from signal — and are now shocked your product roadmap sounds like fan fiction.


You Can't Automate Away Accountability

Your AI assistant just helped generate a feature that accidentally stores user health data in plain text.

Now what?

  • Call your lawyer? Great, they’ll want names.
  • Point at the LLM? Good luck.

Your CEO won’t accept “It was the AI.” Nor will the regulators.

AI doesn’t own outcomes. Humans do. And the more you generate, the more chances you have to screw up — publicly, irreversibly, and at scale.
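
If that sounds abstract, here is the flavor of code that causes it - a purely hypothetical sketch (invented function and field names, no real incident): syntactically clean, test-passing, and quietly writing health data to disk as plain text.

```python
# Hypothetical sketch of plausible-looking, AI-generated persistence code.
# It runs fine. That's exactly the problem.
import json
from pathlib import Path

def save_patient_record(record: dict, storage_dir: str = "data") -> Path:
    """Persist a patient record so other services can read it later."""
    Path(storage_dir).mkdir(exist_ok=True)
    out_file = Path(storage_dir) / f"{record['patient_id']}.json"
    # Health data written as plain-text JSON: no encryption, no access
    # control, no audit trail. The model won't flag it. A reviewer would.
    out_file.write_text(json.dumps(record))
    return out_file

save_patient_record({"patient_id": "p-001", "diagnosis": "hypertension"})
```

The demo works, the tests are green, and the compliance problem only surfaces when a human who knows what "health data" implies actually reads the diff.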

“AI boosts output. But only humans can decide what it means — what should ship, and what should burn.”

The Glue AI Will Never Understand

Let’s say it plainly: AI doesn’t know when Carl in Accounting is being passive-aggressive.

AI doesn’t see when your lead dev goes silent for three weeks — whether he’s burned out or just waiting out his notice period after accepting a better offer far, far away from your clown show.

It can’t tell that the smiling PM just threw another team under the bus during sprint planning.

But you can. Humans can. And these details — these political, emotional, irrational, beautiful messes — are the actual terrain of modern companies.

The work isn’t just writing code. It’s:

  • Filtering nonsense before it hits production
  • Picking the right implementation from five mediocre AI drafts
  • Seeing through fake consensus
  • Navigating office intrigue
  • Catching the telltale tone when your designer says, “I guess we could ship that…”

AI doesn’t read subtext. It doesn’t see patterns in human motive. It doesn’t preempt drama. It just outputs more stuff. Stuff you now have to deal with.


Yes, AI Makes Mistakes. Ridiculous Ones.

AI is astounding. Let’s not pretend otherwise. The models can translate, summarize, design, generate — it feels like wizardry.

And yet...

  • It can still insist that NaN > 1000 is true, just because it saw something like it in training - when in fact every comparison with NaN is false. (Who catches that? Right, the developer you fired. See the snippet after this list.)
  • It invents scientific citations.
  • It suggests function names that look real but don’t exist.
  • It confidently explains why 2 + 2 = 5 when prompted the wrong way.
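
A minimal sketch of that first bullet (the function name is hypothetical, the floating-point semantics are not): in IEEE 754, every comparison involving NaN evaluates to false, so a naive threshold check silently waves NaN through.

```python
import math

def reading_is_too_high(value: float, limit: float = 1000.0) -> bool:
    # Plausible AI-suggested check: "NaN is huge, so this catches it too."
    # It doesn't - every comparison involving NaN evaluates to False.
    return value > limit

print(reading_is_too_high(float("nan")))  # False: NaN slips straight through
print(math.isnan(float("nan")))           # True: the check a human adds
```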

You want to build your company’s future on this? Without humans in the loop?

That's quite a bold move. And good luck with that!


The Hard Limit: Training Data Is Already Exhausted

Let’s get technical for a second.

We are, in all likelihood, nearing the limit of high-quality training data for large language models. Some argue we've already hit it.

Most of what could be scraped has been scraped. Wikipedia? Done. StackOverflow? Rate-limited. GitHub? Mined. Reddit? Fighting back.

What now?

We have to generate new data. Augment it. Simulate it. Fine-tune it with human preference. You know what that requires?

Humans.

Skilled ones. Ones who can distinguish good from bad, subtle from obvious, ethical from reckless.

That’s not a luxury but the new bottleneck.
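
To make that bottleneck concrete, here is roughly what one unit of human-preference data looks like - a minimal, hypothetical sketch (field names invented, no particular vendor's format): two model answers to the same prompt, plus the one field no model can fill in for itself.

```python
from dataclasses import dataclass

@dataclass
class PreferencePair:
    """One training example for preference-based fine-tuning (RLHF-style)."""
    prompt: str
    answer_a: str
    answer_b: str
    preferred: str | None = None  # "a" or "b" - only a human can decide this
    rationale: str = ""           # why: correctness, tone, safety, ...

pair = PreferencePair(
    prompt="Explain why last night's deploy failed.",
    answer_a="The pipeline timed out; a retry usually fixes it.",
    answer_b="Config drift in staging; here is the diff that caused it ...",
)
# Every single row like this needs a person with judgment to label it.
pair.preferred, pair.rationale = "b", "Names the root cause, not a symptom."
```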


You Can't Automate Accountability

That’s the line. Read it again.

You can automate deployment. You can automate testing. You can automate spam detection and log analysis and the entire cookie consent banner.

But you can’t automate trust. Or judgment. Or ownership.

And when it all fails, someone has to raise their hand and say: "This one’s on me."

That person won’t be an LLM.


Fire the Wrong People, and You Fire Your Future

So yes - by all means - fire your engineers.

Trust a black box to handle your systems. Pretend AI understands legal risk. Ignore the people who’ve been holding your technical debt together with duct tape and sheer spite.

But when the quarterly roadmap derails, the culture implodes, and your AI-generated release notes promise features that don’t exist?

Just don’t say you weren’t warned.

Because in the age of AI, the companies that win won’t be the ones who cut talent.

They’ll be the ones who amplify it.

After all, someone has to make sure the robots aren’t drunk.


Do you like this kind of thinking?

- Follow me on Medium: @gwangjinkim — deep dives on Python, Lisp, system design, developer thinking, and much more.

- Subscribe on Substack: gwangjinkim.substack.com — coming soon with early essays, experiments & newsletters (just getting started).

- Visit my Ghost blog: everyhub.org — hands-on tech tutorials and tools (most of them I cross-post here on Medium, but not all).

Follow anywhere that fits your style — or all three if you want front-row seats to what’s coming next.

