We're in an AI bubble. A big one. But not the one everyone is yapping about.
I don't care that much about the financial AI bubble. Some people think it's about to implode. Others... think the risk is overblown.
That conversation is for Wall Street and the people lucky enough to hold roles inside the frontier AI companies that are worried about their IPOs.
The bubble I'm talking about is much more all-encompassing.
And we have to break out of it if we're going to save the world.
The bubble I'm talking about is AI twitter and those of us who yammer back and forth about the next big AI feature all day long.
Those of us who actually understand what's happening in the AI space and specifically...
How AI takeoff is happening now and could leave a LOT of people behind.
Why This Matters Right Now...
I saw this from OpenAI/Sora researcher Gabriel Peterson over the weekend and it made my skin crawl.
Not just in the "ugh, people who work at these AI labs need to be more aware of how they sound" way but in the "oh boy, he might be right" way.
Gabriel did walk this sentiment back but it's coming from a place of honesty.
If you're not steeped in years of AI lingo, you might just quickly scroll by that, not giving it a second glance.
But in the inner circles of AI twitter (and, clearly, amongst the leading AI labs themselves) the conversation about the AI takeoff is very, very loud.
And that's exactly the problem.
It's loud in here. It's silent out there.
So let's fix that. Below, I'm going to explain very directly what AI takeoff is and why it matters.
Not for you but for the people who need to be aware. Friends. Family. Whoever.
Send this to them. Spread the word. It's important we break this out of the AI bubble and into more places to prepare people for what's coming.
So...What *Exactly* Is AI Takeoff?
The simplest definition:
AI takeoff is the moment (or period of time) where AI models begin to improve much faster than ever before, mostly because they can work on themselves.
That might sound somewhat charming...
"Oh, they're working on themselves! How fun!"
But what actually matters here is recursive self-improvement and exponential growth.
Once an AI gets better at making itself better, that improvement compounds back upon itself again and again until it's improving MUCH faster than before. And then faster than that. And then faster than that.
It's a weird concept for us humans to grasp. After all, we kind of have an upper limit to our ability to learn and grow.
We plateau. These systems don't.
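A toy way to see the difference between plateauing human learning and compounding self-improvement. All the numbers here (a fixed yearly human gain, a 5% AI improvement rate that itself grows 10% per cycle) are invented purely for illustration, not real measurements of anything:

```python
# Toy comparison: linear, plateauing human learning vs. compounding
# AI self-improvement. Every constant here is made up for illustration.

def human_skill(years: int, gain_per_year: float = 1.0) -> float:
    """Humans add roughly a fixed amount of skill per year, then plateau."""
    return 100 + gain_per_year * min(years, 40)  # flat after ~40 years

def ai_capability(cycles: int, improvement_rate: float = 0.05) -> float:
    """Each cycle the AI improves itself -- and gets better at improving."""
    capability = 100.0
    rate = improvement_rate
    for _ in range(cycles):
        capability *= (1 + rate)  # apply this cycle's improvement
        rate *= 1.1               # the improvement rate itself compounds
    return capability

for t in [10, 20, 30]:
    print(t, round(human_skill(t), 1), round(ai_capability(t), 1))
```

Run it and the human number creeps up and flattens, while the AI number grows faster in each successive decade. That accelerating ratio, not any particular value, is the whole point of the takeoff idea.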
One of the great tech explainers of our time, Tim Urban of Wait But Why, wrote a blog post TEN YEARS AGO with a simple illustration that does more to explain this than reading 100 Wikipedia pages.
I highly suggest you read both parts of Tim's AI post.
In Tim's illustration (especially the second part), you see how the moment before massive change can feel completely normal.
We're sitting right on the edge of it happening. Things feel like they always felt because we can't see the changes coming our way.
Most people are standing on that flat part of the curve, looking around, thinking everything is fine.
Meanwhile, those of us inside the AI world can see the curve starting to bend.
The AI takeoff is the moment where this improvement begins and for most people, it would be nearly impossible to see.
But people inside the AI labs are saying the quiet part out loud now.
For example: Anthropic just dropped a 2.5x faster version of Opus 4.6, days after announcing the new model.
The takeoff is happening.
So now what?
Why This Matters And How The World Actually Changes
The thing about the Claude Code and OpenClaw/Moltbook moments is that both of them showed a much wider slice of the population what these AI tools are capable of right now.
And then, in last week's release of OpenAI's GPT-5.3 Codex model, we got the first official confirmation that one of these models actually worked on itself.
It's not hard to see where we're headed. Yes, these AIs are soon going to be much more capable than ever before and we'll be turning more of our work over to them.
But AI takeoff is actually a bigger idea than that.
And it's kind of scary.
If AI takeoff happens, it's not just about being aware of these tools and using them.
It's about preparing yourself for an entirely different world.
Here's another illustration from Tim Urban that keeps me up at night:
It's not just about the idea of catching up.
It's the idea that we might never catch up.
There will be a massive gap (starting now) between the capabilities of humans and these AI systems, and that gap will get wider and wider with each passing day.
Ok, So… Now What?
If someone sent you this, it's because they care about you.
The stuff above? The recursive self-improvement, the exponential curve?
Most people aren't talking about it yet.
Not on the news. Not at work. Not at dinner.
But it's happening, and the people who are paying attention are starting to get a little anxious about the gap between what they're seeing and what everyone else is seeing.
So if you're reading this and thinking "Ok, but what am I supposed to actually do with this?"...
There are things that you, the normal human, can do right now to start preparing for a world that looks like this.
You don't have to become an AI expert. You really don't.
But you should know this is happening and start thinking about what makes you valuable in a world where machines can do a LOT of the work we currently do.
Three big things to think about:
Lean into your creativity.
Whatever it is... writing, cooking, building things, solving problems at work in ways nobody else would think of.
That kind of original, human creative thinking is going to matter more, not less, as AI gets better. It's the thing these systems are worst at faking.
Invest in your people.
Your relationships, your network, your community. The friend or family member who sent you this.
AI can do a lot of things but it can't be a real person who shows up for another real person. That's going to be worth more than ever.
Make something.
Start a project. Build a thing. Launch a side hustle. Even a small one.
In a world where AI can copy and scale almost anything, the person who starts something, the person who has the original idea and puts it into motion, has a real advantage.
I know this is a lot.
And I know it might sound like the kind of breathless tech hype you've been trained to tune out.
That's fair. I work in this space every day and even I have moments where I think "is this real or are we all just in an echo chamber?"
But then I see the models working on themselves.
And I see the curve starting to bend.
And I think: I'd rather know about it now than find out later.
If you want to keep up with this stuff in a way that doesn't require a computer science degree, that's kind of what I do and why we made AI For Humans.
AI For Humans is a weekly podcast and newsletter where we try to make all of this accessible and, honestly, a little fun. You're welcome to stick around.
And if you have thoughts...
If this freaked you out, reassured you, confused you, whatever, I'd genuinely love to hear from you.
Shoot me an email at gavin AT gavinpurcell dot com. I might collect some of the best responses for next week's newsletter.
And thank your friend for sending you this.
Or at least don't immediately call them crazy... again.