My neighbor Daniel owns a mid-sized accounting firm. Last Tuesday, over coffee that had gone cold during our conversation, he told me something that has been weighing heavily on me ever since.

His firm recently signed a three-year contract with an AI platform capable of automating 60% of their data entry and reconciliation work. The sales pitch was impressive. In the demos, their existing workflow was portrayed as if they were doing arithmetic with an abacus.

"The board loved it," he said, eyes fixed on his cup. "Gave the green light at the very first meeting."

Then he paused. "I haven't told the team yet that we might only need half the people we currently have."

What unsettles me is that Daniel is a good man. He coaches Little League and remembers the names of his employees' children. Yet here he is, making decisions that could drastically change people's lives, simply because everyone else is doing it, and the thought of being left behind feels more dangerous than not acting at all.

There's an odd feeling of upheaval in the air. It's not exactly the start of a whole new world, but more like those few minutes before a storm, when the air is heavy and everything is quiet. You sense that a big change is coming. You just can't decide whether to take cover or if you're being overly cautious.

The House That Confidence Built

I work on my laptop in coffee shops quite often, probably like half the people under 40 these days. Last week, in one of those places, I overheard a conversation between two young women, maybe 25 or younger. One of them was explaining why she had just invested her entire signing bonus, probably around $8,000 to $10,000, into a few tech stocks.

"The risk is virtually zero," she said, as if she had never once seen her account value drop dramatically overnight. "AI is the way forward. Think about it, what we're doing is like buying Apple stock in the '90s."

Her friend's expression showed that she wasn't entirely convinced, but she didn't argue. After all, no one wants to be the one left out.

It wasn't her confidence that surprised me; young people have always been overly confident about things they don't understand. What surprised me was her explanation: specific, yet at the same time very vague. She could name companies and even cite their growth percentages. But when her friend asked what these companies actually did, there was a pause, followed by something like "infrastructure for machine learning ecosystems."

Do you know what that made me think of? 2007. A different technology, but the same kind of energy. Back then it was "mortgage-backed securities" and "collateralized debt obligations": fancy terms that made people feel smart for knowing them, even though hardly anyone actually understood them.

When the Smart Money Stops Being Smart

There are things that, if I dwell on them too much, will ruin my sleep. For example, I know some truly intelligent people: financial analysts, engineers, genius-level people, the kind everyone wants to listen to. And yet they are essentially gambling on the same things as a 25-year-old coffee shop investor. They have access to data I will never have: Bloomberg terminals and research reports that cost thousands of dollars. And yet, they all seem to agree on the same story.

The narrative is that AI is a major shift in productivity. If companies don't use AI, they will disappear. The market is pricing in future profits that are practically guaranteed. Any decline is seen as a buying opportunity.

I'm not saying this perspective is wrong. (Though honestly? I think it might be.) What frightens me is that almost no one in these discussions considers the opposite scenario. Doubting AI's progress is treated as backward or naive, as if those raising concerns are just trying to deceive us.

My friend Sarah, who works in venture capital, shared something revealing with me not long ago. We were at a birthday party, and after a couple of drinks, her professional mask slipped a little.

"Do you want to know the real truth?" she asked. "I don't really understand half the pitches I get. I think they're creating solutions without having problems. But the valuations…" She whistled. "The valuations imply that the next OpenAI will come out of them."

"So why invest in them?" I asked.

She looked at me as if I had asked why fish swim. "Well," she said, "what if they are?"

It's basically a case of fear of missing out turned into an investment strategy, dressed up with enough jargon to sound respectable.

The Thing About Cascades

Imagine a situation like this: you're hiking and accidentally kick a small rock loose near the top of a slope. At first, it's just one rock, barely noticeable. Then it rolls and hits another rock. A few more rocks get involved. Before you know it, you have a full-blown rockslide.

The economy is like that slope, and we keep kicking rocks.

Think about it: the tech giants that dominate the indices, the retirement accounts, and the pension plans are borrowing enormous sums to develop AI infrastructure. Data centers use more energy than some small countries. The chip industry depends on the geopolitical stability of regions that are far from stable. Supply chains are stretched so thin that even a minor disruption can trigger a chain reaction. We call this "strategic vulnerability," but in reality, it's just poor planning.

And all of this assumes perfect conditions. No major wars. No lasting regulatory actions.

So what happens when, not if, one of those assumptions fails?

I asked Daniel, my neighbor with the accounting firm. He was quiet for a moment.

"I can't help but think about it," he admitted. "But really, what else can I do? Just sit back and watch my competitors take over because I haven't invested in technology? The danger of staying put feels riskier than the danger of moving forward."

And that is quite the predicament. We are trapped in a prisoner's dilemma of our own making. What seems like a clever move for one business can create unintended consequences for the entire system.

The Regulation Paradox Nobody Wants to Talk About

This is where it gets really interesting, like that uneasy lurch when a roller coaster starts to drop.

My buddy Marcus put it bluntly over beers last weekend: "We made the AI industry too big to fail, and we did it without even knowing whether it should succeed."

What Breaking Looks Like

I doubt the system will collapse all at once. It won't be a single, dramatic Monday morning when everyone wakes up and the market is gone. That's not how these things happen.

Most likely, it will be a series of small breakdowns that we rationalize, until we simply can't anymore.

It might start with a few well known AI projects underperforming. They wouldn't be failures exactly, just not the revolutionary productivity gains that had justified their valuations. Companies quietly abandon their investments. Share prices "correct." We call it a healthy market cycle.

Then maybe the supply chain gets affected, something in chip production, or rare earth minerals, or just the growing complexity of global logistics that we've been neglecting because everything has been going smoothly. A minor disruption, that's what everyone says. Prices will normalize.

After that, there could be a major regulatory move. Europe or California might decide that some guardrails are necessary. Tech companies would comply because they have to, but compliance costs money and slows innovation. Margins compress.

Each of these events on its own is manageable. Markets have faced worse. But what if they start to compound? At what point do investors who were happy with high valuations based on revolutionary potential begin to measure those valuations against real, current quarterly earnings?

That's when the rockslide really begins.

The Uncomfortable Question We're Not Asking

This thought keeps coming back to me, sometimes waking me up at 3 a.m.: What if we're not at the start of the AI revolution? What if we're at the end of the AI bubble?

Not because AI isn't real or useful (it obviously is both), but because perhaps we've already priced in a future in which everything goes perfectly, and reality doesn't work that way.

I was discussing this with my sister. She's a doctor and completely outside the tech world, and she offered a perspective I hadn't considered:

"In medicine," she explained, "we see this phenomenon with new treatments almost every time. At first, the results are so promising that everyone gets incredibly excited. That leads to a lot of investment. But when we start using the treatment in the real world, on actual patients with complex, problematic conditions, we discover all sorts of limitations that weren't apparent in clinical trials. The treatment is usually still effective, just not to the extent that the whole world expected."

"So what happens to the companies that advertise miracle cures?" I asked.

She gave a noncommittal reply. "That depends on how much debt they took on while making those irresponsible promises."

That conversation has been echoing in my mind for quite some time.

What's Next After Pretense

Look, I'm not suggesting that you sell everything, buy gold, and move to a cabin in Montana. (Though if you do, I hear Montana is beautiful.) What I am saying is that maybe we need to get back to having more straightforward conversations about risk.

Real risk. Not the sanitized, portfolio diversification type that assumes markets are rational and that every correction is always followed by a recovery. I mean the messy, complicated kind of risk that arises when you build too much, too fast, based on assumptions that haven't yet been tested by reality.

The thing about tripwires is that you only notice them when you're already on your way down.

Daniel, my neighbor at the accounting firm, called me yesterday. The AI platform they committed to on a three-year contract is having integration issues. Apparently, their current systems aren't compatible in ways the demo conveniently didn't reveal. Now they're paying for software they can't fully use while still paying employees to do the work manually.

"We're committed now," he said. "Too expensive to back out."

And that is, on a small scale, where every one of us is. We commit to a direction we cannot afford to reverse, hoping that by moving forward faster we can somehow make the problems behind us irrelevant.

Perhaps it will work out. There's a strange thing about technology: it often manages to work, even when logically it shouldn't. We've stumbled into the future before and survived.

But that woman in the coffee shop with her tech stock signing bonus? Daniel with his AI platform that doesn't quite work? Those pension funds and retirement accounts dependent on continuous growth in a possibly overextended sector?

They're all gambling on "maybe." And at this height, "maybe" is hardly solid ground.

The storm is on its way. The air pressure has already changed. Whether we decide to take cover or not, at least we should stop fooling ourselves that we don't feel it.
