Okay, so – I lied when I said I would publish my next newsletter on Thursday, December 4, 2025 😅 I still plan to take a hiatus in November while I’m traveling, but lately I’ve had a number of thoughts about the AI bubble on my mind.

Last Friday those thoughts boiled over, and I felt compelled to write them down in order to form a coherent picture of where I believe things stand at the moment. By the time I finished writing, this post was born – and so I decided to share it with all of you!

But first, some disclaimers:

  • I am not a financial advisor. None of what I’ve written here is personalized financial advice or should be construed as financial advice. I make no guarantee of future financial results.

  • This newsletter adheres to Daniel Miessler’s AI Influence Level (AIL) Zero.

The financial state of the AI bubble

The AI bubble is reported to be 17 times the size of the dot-com bubble circa 2001, and four times the size of the subprime mortgage bubble circa 2008 [1]. In the last few months, both Sam Altman and Jeff Bezos (among others) have stated that we’re in an AI bubble [2] [3]. Hell, even Hank Green – one of my favorite YouTube personalities – recently shared why the state of the AI industry is freaking him out [4].

In the linked video above, Hank walks through an infographic from Bloomberg that traces the vendor financing ouroboros currently happening in Silicon Valley [5]. Likewise, it has recently been reported that U.S. GDP growth in the first half of 2025 was just 0.1% when excluding data center investments [6]. This ought to give everyone pause.

When the dot-com bubble burst, the Nasdaq Composite lost nearly 80% of its value, and the S&P 500 lost nearly 50% of its value [7]. When the subprime mortgage bubble burst, the Nasdaq Composite lost nearly 40% of its value, and the S&P 500 lost nearly 50% of its value [8].

What do you think will happen to markets when the AI bubble bursts?

The technological state of the AI bubble

The thing to understand about financial bubbles and technological bubbles is that one can pop without taking the other down with it. When the dot-com bubble burst, the World Wide Web didn’t disappear. Many sites continued to function, and much of the networking gear was still in use. Whereas when the NFT bubble burst, the technology became a worthless meme [9] – which some of us were saying all along, but I digress.

Anyway, what people need to understand is that the AI bubble we’re in looks a lot more like the dot-com bubble than the NFT bubble. The likely outcome is that the technology will not disappear when the corresponding financial bubble pops. Sure, some startups will get acquired by FAANG+ companies for pennies on the VC-invested dollar, and many more companies will go bankrupt, but the technology itself will likely continue to exist in some fashion.

Moreover, today’s version of the technology is likely the worst it will be for the foreseeable future. Daniel Stenberg, founder of the cURL project and an outspoken critic of AI’s ability to find bugs in software, recently published a blog post in which he cited a number of high-quality findings discovered by researchers with the assistance of AI [10]. This ought to make anyone working in application security or vulnerability research sit up and start paying close attention.

Where I think the technology goes from here

I don’t believe we’ll reach Artificial General Intelligence (AGI) – or even Artificial Super Intelligence (ASI) – before the financial bubble pops. As a result, I think the technological mechanisms for achieving either AGI or ASI are likely a lot further away than what many pundits are currently forecasting. Even so, that doesn’t mean Language Models will cease to exist after the financial bubble bursts.

We’re already seeing hints of how this technology is likely to evolve – namely, that there are real benefits to Language Models becoming smaller [11]. Nvidia itself published research this year making the case that small language models are the future of agentic AI [12]; I generally agree with them. Not only are small language models far less expensive to train than generalized large language models – they can also be built in ways that ensure they never hallucinate [13].

What do you think will happen as Language Models become smaller, more fit-for-purpose, and never hallucinate?

How I’m investing my money right now

At the moment, I continue to invest regularly in an ETF tracking the S&P 500 through my employer’s 401(k) program. I think of these investments as a “dollar cost averaging” strategy, and with the employer match they come to about 10% of my annual salary. Dollar cost averaging – a phrase coined by Benjamin Graham (Warren Buffett’s mentor!) – continues to be lauded as the simplest and most successful investment strategy for the average investor [14].
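The mechanics of dollar cost averaging can be sketched in a few lines. The prices and contribution amount below are hypothetical, not real market data – the point is just that fixed-dollar purchases buy more shares when prices dip, which pulls your average cost per share below the simple average of the prices you paid.

```python
def dca_average_cost(prices, amount_per_period=500.0):
    """Invest a fixed dollar amount at each price in `prices`.

    Returns (total shares purchased, average cost per share).
    """
    total_shares = sum(amount_per_period / p for p in prices)
    total_invested = amount_per_period * len(prices)
    return total_shares, total_invested / total_shares


# Hypothetical monthly closing prices for an S&P 500 ETF.
prices = [100.0, 80.0, 125.0, 100.0]
shares, avg_cost = dca_average_cost(prices)

# Fixed dollars buy more shares at $80 than at $125, so the average
# cost per share (~98.77) lands below the mean price of 101.25.
print(round(avg_cost, 2))  # → 98.77
```

This is why the strategy is attractive for the average investor: it requires no timing decisions, yet it mechanically tilts purchases toward cheaper prices.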

As for my personal and other retirement investments, I’ve mostly pulled out of the market. Instead, I’ve purchased options that fit my worldview based on research I’ve performed, and the information I have available. The GLD ETF $298 call options I purchased on April 14 (expiring January 16, 2026) have thus far produced a 300%+ ROI. Similarly, the GLD ETF $365 call options I purchased on October 6 (expiring January 15, 2027) have already seen 50%+ ROI.

I also recently purchased SPY ETF $670 put options dated for January 15, 2027, and January 21, 2028, respectively. The cost of these options represents a single-digit percentage of my investments, and I think of them as an insurance policy should the AI bubble burst while I’m dollar cost averaging into my 401(k).
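The “insurance policy” framing follows from the payoff profile of a long put: it pays off only when the underlying falls below the strike, and the most you can lose is the premium. The premium below is an illustrative number I made up, not my actual fill price.

```python
def put_payoff(spot_at_expiry, strike, premium):
    """Profit per share of a long put held to expiry (fees ignored)."""
    intrinsic = max(strike - spot_at_expiry, 0.0)
    return intrinsic - premium


strike, premium = 670.0, 25.0  # $670 strike; hypothetical $25 premium

# If SPY stays above the strike, the "insurance" expires worthless and
# the loss is capped at the premium; below the strike, the put offsets
# market losses dollar for dollar.
for spot in (750.0, 670.0, 550.0):
    print(spot, put_payoff(spot, strike, premium))
# 750.0 → -25.0 (market up: premium lost)
# 670.0 → -25.0 (at the strike: premium lost)
# 550.0 →  95.0 (market down: put pays out)
```

Like home insurance, the expected value of the position is negative in calm markets – the premium is the price paid for protection against the bad scenario.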

I will eventually start dollar cost averaging my other investments back into the market. That said, given the irrational exuberance I am perceiving in the market at the moment, I’m following Warren Buffett’s strategy and keeping most of my money on the sidelines [15].

How are you investing your money right now?

How I’m investing my time right now

Since recognizing that Language Model technologies are here to stay, I’ve been making time to learn how they operate. I generally recommend starting with 3Blue1Brown’s 7-minute explanation of how Language Models work [16], and then watching some of his longer-form content to gain a deeper understanding [17].

I am also actively experimenting with the technology to discover what problems it can (and can’t) solve right now. By way of example, I recently published a blog post about one such experiment [18].

That said, I’ve adopted a mindset of being slow and methodical about my approach to adopting Language Models. This technology space is evolving so rapidly that yesterday’s knowledge quickly becomes stale and/or irrelevant. It’s likely the technology will continue evolving at this pace until the financial bubble pops, at which point there will be plenty of time to get caught up.

On the other hand, if you’re someone who outright refuses to learn about and adopt Language Model technologies, then you’ll need to be at least two to three standard deviations to the right of the bell curve in your field. You’ll have to aim for Banksy-levels of fame and immerse yourself deeply in your craft to survive in a Language Model-filled future [19].

How are you investing your time right now?

Additional thoughts

If you want to read more about my evolving thoughts on Artificial Intelligence, critical thinking, and life after the AI-pocalypse, you can find a handful of posts I’ve published on my personal blog [20].

So, what do you think? Let me know how you’re answering the questions I asked above.

Keith

Sources
