Note: I will explain certain terms in the below essay because I have found that many people do not know what I’m talking about when I start ranting about the AI bubble if I don’t explain what I’m saying. However, nobody should ever make you feel stupid for not “getting” AI, because the industry is purposefully filled with jargon and incomprehensible, meaningless statements. If people really understood what AI does (and fails to do), they would be terrified by how much the U.S. economy is propped up on it. And that is why the AI industry uses language that purposefully obscures.
People ask me to write about AI. I have avoided it because it makes me feel like my job is bleeding into my writing. However, duty called today.
It's been a big month for tech stocks. Two weeks ago, cloud behemoth Oracle saw its stock price jump 43 percent. This was after announcing a deal with OpenAI: OpenAI will pay Oracle 300 billion dollars for data center construction across the U.S. over the next five years. And then yesterday, Nvidia’s stock price jumped 4 percent after announcing a 100-billion-dollar investment in OpenAI’s computing infrastructure. This was a real “you scratch my back, I scratch yours” investment, because OpenAI will purchase an estimated 35 billion dollars in Nvidia-made graphics processing units (GPUs) for every 10 billion dollars of Nvidia’s investment. The announcement added 160 billion dollars to Nvidia’s 4-trillion-dollar valuation. If you are holding Nvidia and Oracle stock, congratulations. But also, best of luck. Because little of this money actually exists.
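To put rough numbers on that circularity (assuming the 35-billion-per-10-billion ratio scales linearly across the full investment, which is my simplification, not a stated term of the deal):

```python
# Back-of-the-envelope sketch of the Nvidia/OpenAI circular flow.
# Assumption (mine, not a deal term): the purchase ratio scales linearly.
nvidia_investment = 100e9       # Nvidia's announced investment in OpenAI, USD
gpu_spend_per_10b = 35e9        # estimated OpenAI GPU purchases per $10B invested

total_gpu_spend = nvidia_investment / 10e9 * gpu_spend_per_10b
print(f"${total_gpu_spend / 1e9:.0f}B")  # roughly $350B flowing straight back to Nvidia
```

In other words, on these estimates every dollar Nvidia puts in comes back as roughly three and a half dollars of chip orders, which is why the deal reads less like an investment and more like financing your own customer.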
To explain why these massive deals are happening and why they are concerning, I will first outline some terminology. AI comes in many forms.1 But most American AI models receiving tons of investment—including large language models (LLMs) like ChatGPT, Gemini, and Claude—are basically statistical machines.2 These models do not “reason”; rather, they perform complex mathematical calculations to detect patterns in data and then produce outputs. As this is resource-intensive, AI companies cannot house all of their models’ infrastructure in their offices. Thus, they use data centers, which store the physical machines and hardware that AI models use virtually, or via “the cloud.” GPUs are an essential piece of that hardware: chips that process vast amounts of data simultaneously and therefore have the capacity to perform the models’ calculations. Most American AI companies believe that the answer to improving their models is scaling up their computing power. That means access to more data centers and GPUs. Thus, to develop its models, OpenAI is signing computing infrastructure deals in the hundreds of billions of dollars.
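If “statistical machine” sounds abstract, here is a toy version of the idea—my own illustration, orders of magnitude simpler than a real LLM, but the same in spirit. It counts which word tends to follow which in its training text and then “predicts” the most frequent follower. No understanding, just pattern frequency:

```python
from collections import Counter, defaultdict

# Toy next-word predictor. Real LLMs are vastly more sophisticated,
# but the core move is the same: learn statistical patterns from data,
# then emit the most likely continuation.
def train(text):
    words = text.split()
    followers = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        followers[prev][nxt] += 1  # count every observed word pair
    return followers

def predict(followers, word):
    if word not in followers:
        return None  # no pattern seen, no "reasoning" to fall back on
    return followers[word].most_common(1)[0][0]

model = train("the cat sat on the mat the cat ran")
print(predict(model, "the"))  # "cat": the most common follower in the data
```

Scale that idea up by a few hundred billion parameters and a chunk of the internet and you have, very roughly, the shape of the thing. Which is also why the output is only ever as good as the patterns in the data.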
It would really be great if OpenAI had the money to pay for these things. However, the company’s financials raise perplexing questions. OpenAI has not turned a profit and will not do so until 2030 at the earliest—and that is by its own optimistic figures. Of course, the expectation that a company must turn a profit to be valuable was forever altered by the Great Recession of 2007, after which the U.S. Federal Reserve pumped tons of money into Wall Street, and Wall Street then lucratively invested in unprofitable companies (like Tesla, Uber and Netflix) via financial instruments like stock buybacks (to the detriment of the public). These companies eventually became profitable. Thus, a few years of massive unprofitability does not necessarily doom a business.
However, OpenAI’s projected profit is based upon a pixie dust business model. OpenAI is betting that its LLMs will restructure the workforce and fundamentally alter the course of humanity. However, LLMs are not actually producing revenue in the vast majority of businesses, and corporations that mass-fired workers due to “AI optimization” are now having to hire them back because it turns out AI cannot actually do their jobs. LLM evangelists say that this is just because LLMs aren’t good enough yet, but soon they will “replace humans.” Unfortunately, many AI scientists do not believe this is possible because this is not how LLMs work. LLMs are not actually intelligent: they are pattern recognition machines trained on specific data for specific kinds of tasks. That is why ChatGPT can’t do your math homework. It doesn’t matter how much you scale up their computing power, because at their core, LLMs are not built to learn and reason like humans, and thus will not rearrange the economy and make all our jobs obsolete or whatever. They will not suddenly be able to do something that they are not hardwired to do—OpenAI’s LLMs included. OpenAI’s infrastructure plans are also disconcerting on an environmental level. Data centers require a vast amount of energy for powering and freshwater for cooling, as saltwater is highly corrosive to metal. If OpenAI’s data centers grow at the rate its deals lay out, U.S. power grids and freshwater sources will be massively strained, if not depleted. Investors are relying on OpenAI’s LLMs to finally be able to “do something” before that happens. They won’t.
OpenAI is not alone in these issues—all LLM-focused companies face them. And these companies are also cutting deals with computing infrastructure giants like Oracle and Nvidia. That is concerning as a whole. However, it is particularly so for Nvidia, the most valuable company in the world. As the dominant GPU provider, Nvidia uses a cyclical method to gain customers and inflate its valuation: invest in small, cash-strapped AI start-ups with the knowledge that they will then buy a bunch of chips. Some of these start-ups are employing AI in ways that are useful. But many don’t actually do anything, have valuations pulled out of the crack of some overworked venture capital analyst’s ass, and will fail. The sustainability of Nvidia’s business model is thus questionable, casting doubt upon the sustainability of a financial market that relies upon it so heavily. And to make matters more alarming, there may come a time when GPU demand falls. We got our first glimpse of it in January 2025, when AI company DeepSeek released the LLM R1, and revealed that it circumvented the need for tons of expensive GPUs by using technical workarounds to make R1 less energy intensive. The model went on to outperform many of Silicon Valley’s own, and sent U.S. financial markets into a tailspin. When LLMs don’t need as many GPUs, what happens to the massive infrastructure deals? When investors realize that LLMs cannot do what they have promised to do, what happens to the AI companies themselves? And what happens to a stock market reliant upon these valuations?
If this is all true—which it is—then why is the U.S. economy so dependent on AI? It’s simple: because people put a lot of money into AI, so they keep putting more money into it because they need it to work. This is not detached from the concept of FOMO, nor of personal pride.3 Many AI investors feel smart talking about AI, and AI CEOs appeal to this instinct. AI CEOs are successful for the same reason as their LLMs: they are sycophants. An LLM tells us how eloquent we are before we hit “Submit” on the paper it wrote that will make us fail the class; AI CEOs butter up venture capitalists with aphorisms like “you are pushing humanity forward” as the venture capitalists make investments that are completely divorced from reality.
This whole thing—putting more and more money into something until something awesome happens—is not at all dissimilar to the American approach towards AI: scale until the model hits a breakthrough. But that’s not quite how AI works. And it’s not quite how financial markets work either.
This is not to say that AI is bullshit. It is extremely effective at data collection, analysis and surveillance, and in some medical, anthropological, and technical settings, it has proven to be very useful. However, the idea that LLMs are going to take all of our jobs, much less solve the climate crisis, is absurd. And investors are slowly beginning to realize, and publicly admit, that LLMs—OpenAI’s included—are glorified chatbots. OpenAI CEO Sam Altman probably knows it too. That is why he is now running around trying to sign multi-billion-dollar deals with governments to give citizens premium access to ChatGPT. What else can he do? The scaling is not working; the underwhelming performance of ChatGPT’s most recent model (which Altman initially sold as having “artificial general intelligence,” whatever the fuck that means) is proof.4 LLMs can write your CV, emails and some of your code. They have their place and will not just disappear. But these investments? These valuations? These multi-billion-dollar deals made from thin air? Those, my friends, will not go on forever.
In sum, do not put all your savings into tech stocks.
1. …and is a contested term in and of itself.
2. Again, not all AI models are built this way; it’s just what’s currently popular in Silicon Valley.
3. As many who work in Silicon Valley have noted, in some places AI also has a sort of cult around it. This is not unrelated to the argument of my Taylor Swift essay: that the U.S. is suffering a religious breakdown and people are filling their need for the divine with other things. AI is an omnipresent, pseudo-supernatural thing that, unlike God, will talk to us and make us feel real good inside.
4. I know what it means and it means nothing; don’t come crawling up my ass.


