
The Apocalypse Nobody Trained For

November 21, 2025
personal

Everyone is freaking out about AGI — killer robots, singularities, paperclip apocalypses, whatever the sci-fi doomer flavor of the week is.

But here’s the uncomfortable truth:

The real existential threat isn’t that AGI takes over.

It’s that AGI never shows up — and we built the entire modern world assuming it would.

We didn’t architect our economy for what exists.

We architected it for the thing we hope exists later.

And if that thing never arrives?

There isn’t a Plan B.

The uncomfortable math of belief

There’s a quiet doctrine running the global economy right now.

It isn’t printed on a banner anywhere, but every chart screams it:

Superintelligence is coming, and it will make these valuations make sense.

Nvidia explodes.

Microsoft spends like a drunk god.

Google torches the ad empire to bet on a black box.

Startups burn billions like they’re speedrunning bankruptcy.

Why?

Because everyone is convinced that once we hit AGI, all this red ink magically becomes proof of faith rewarded.

Profitability stops mattering because intelligence becomes infinite.

Capital is reborn as compute.

Human labor becomes optional.

But what if that day never comes?

What if transformers don’t scale to godhood?

What if RLHF hits a ceiling?

What if the laws of physics or intelligence bottleneck earlier than the hype cycle budgeted for?

Then the bill arrives — and there’s no AGI to swipe the card for us.

The circular economy of belief

Let’s just say it:

A terrifying amount of “AI revenue” isn’t market demand.

It’s a self-licking ice cream cone.

  • Chip companies invest in AI startups
  • AI startups use the money to buy chips
  • Chip companies book it as “revenue”
  • Stock prices moon
  • Investors pump more money
  • Rinse → repeat → hallucinate growth forever
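The loop above can be sketched as a toy simulation (all figures hypothetical, chosen only to show the accounting effect): each round, the chip maker invests in the startup, the startup spends the money back on chips, and the chip maker books that as revenue. Cumulative “revenue” compounds while the external cash that actually entered the system never changes.

```python
# Toy model of the circular vendor-financing loop described above.
# All numbers are hypothetical; this only illustrates the accounting effect.

def run_loop(seed_investment: float, rounds: int, reinvest_rate: float = 1.0):
    """Each round: the chip maker invests in the startup, the startup
    spends it all on chips, the chip maker books that as revenue and
    recycles a fraction of it into the next round's 'investment'."""
    booked_revenue = 0.0
    external_cash = seed_investment   # the only real money that ever entered
    investment = seed_investment
    for _ in range(rounds):
        chip_sales = investment               # startup buys chips
        booked_revenue += chip_sales          # chip maker reports this as growth
        investment = chip_sales * reinvest_rate  # recycled into the next round
    return booked_revenue, external_cash

revenue, cash_in = run_loop(seed_investment=1.0, rounds=10)
print(f"Revenue booked: {revenue:.1f}x the seed")          # 10.0x
print(f"External cash that entered: {cash_in:.1f}x")       # 1.0x
```

With full recycling (`reinvest_rate=1.0`), ten rounds book ten times the seed as revenue while net new money is still just the seed; any `reinvest_rate` below 1.0 makes the loop decay instead, which is the point at which the “growth” stops.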

It’s Icelandic banks all over again — except this time the collateral isn’t fishing boats.

It’s the entire future of human civilization.

Because as long as that circular loop spins, markets look alive.

Retirement accounts look funded.

Tax revenues look stable.

Governments look competent.

Break the loop → everything else breaks with it.

Techno-faith as economic scaffolding

We’ve quietly replaced religion, capitalism, long-term planning, and realism with a single collective prayer:

“AGI will fix it.”
  • Productivity crisis? → AGI will fix it
  • Demographic collapse? → AGI will fix it
  • Healthcare and aging? → AGI will fix it
  • Climate adaptation? → AGI will fix it
  • Pension insolvency? → AGI will fix it
  • Espionage, cyber, logistics, war? → AGI will fix it

We’re not betting the future on technology anymore.

We’re betting the future on a messiah.

And that’s the blind spot.

Because if we were actually afraid of AGI,

policy would be cautious.

Instead, every government on the planet is encouraging its acceleration —

because they need the productivity miracle before their society collapses under its own weight.

This is not a safety race.

This is a desperation race.

What happens if AGI doesn’t land?

This is the part no one wants to say out loud.

If AGI fails to deliver exponential gains, then:

  • The AI economy collapses
  • The market collapses with it
  • Pension funds and sovereign wealth funds implode
  • Governments lose legitimacy
  • Globalization breaks under resource nationalism
  • People stop trusting centralized power
  • Regions go survival-mode

We don’t get Skynet.

We get Yugoslavia with TikTok.

Different geographies, ideologies, and power blocs start defining what “survival” looks like.

Some chase comfort at any cost.

Some chase control at any cost.

Some choose freedom even if it kills them.

That’s the real three-way fork in the road.

The paradox of fear

People fear superintelligence because it means we become obsolete.

But the nightmare is the inverse:

What if we were supposed to be obsolete — and we never get the upgrade?

What if:

  • We pushed global systems past the breaking point
  • We assumed superintelligence would arrive in time to save us
  • And it just… doesn’t

Then we’re left flying a civilization-sized plane

with no automation, no landing gear, and no runway.

The danger isn’t AGI rising.

The danger is AGI not rising fast enough to rescue the world we already committed to.

Everybody prepared for the robot uprising

Nobody prepared for the human one.

Nobody prepared for what happens when:

  • The markets realize the exponential curve isn’t coming back
  • The government tries to centralize control to prevent collapse
  • People choose identity over nation
  • Belief becomes more powerful than policy

That’s the timeline we’re tiptoeing toward.

And it’s not about machines.

It’s about what humans do when the miracle we were promised doesn’t arrive.

The future isn’t binary — it’s tribal

If AGI works: society reorganizes around abundance.

If AGI kills us: the story ends.

But if AGI stalls out…

if we hit the wall at 90% of godhood…

then the next century belongs to whoever can answer this question:

“What does humanity become when progress stops scaling?”

And that answer won’t be universal.

Different regions, cultures, and belief systems will design different futures,

and they will defend those futures.

The real Singularity isn’t intelligence merging.

It’s civilization splitting.

Bottom line

The scariest outcome isn’t that AGI takes control of humanity.

It’s that it never takes control — and humanity is left holding the bag.

We built this world on the assumption that intelligence keeps scaling forever.

Now we find out what happens

if it doesn’t.