This is the second part of my project outlining the current and future landscape of AI. For those who didn't read it, the first part went over the history of the development of AI technology. Some terminology that I will use:
Large Language Model (LLM) – A general-purpose assistant that produces human-like responses and reasoning. The most common example is ChatGPT 5.
Foundation Model (FM) – A pre-trained LLM used in the background by companies that don't train their own LLMs, so they don't have to spend hundreds of millions of dollars on the training process.
Graphics Processing Unit (GPU) – A graphics card. GPUs are used to train LLMs because they can process many operations in parallel, which is extremely useful for speeding up training.
Compute – Processing power, obtained from GPUs, needed to train and run models. Think of it as a unit for using GPUs: more compute = more AI training and usage.
One big question is who will lose their jobs. Not if, but who. Even now, there has been a notable drop in graduate roles available since 2023. In the UK at least, it's tough to say what portion of this drop is a result of AI-backed automation, given the economy was weak over this period, but I'd say it's fair to assume a sizeable portion of it is AI related, given the same phenomenon has been observed in lots of different countries.
The general consensus is that repetitive, rule-based work is most at risk, since it is the easiest to automate. On top of this, entry-level jobs are at risk and are already dropping, as they tend to be repetitive, rule-based grunt work. Jobs that require a lot of dexterity are harder to replace due to the current bottlenecks in AI and robotics technology. Managerial roles, where you need to take in lots of information from different sources and consistently come up with good decisions, are also thought to be harder to replace.
That only covers who is easiest to replace, though. Down the line, what will happen? Will everyone be replaced? If not, how many? I'll try to answer with current wisdom and my own two cents sprinkled in.
This is a very common analogy drawn with the AI revolution. Back in the late 1700s, the first industrial revolution started after we created steam-powered engines and moved production from households to localised buildings called factories. This was transformative to society and led to a lot of people losing their jobs. More specifically, the jobs that were available changed. Factories were built in cities, since cities were more populated, and these factories needed workers to create goods using the new technology. Consequently, jobs became much more concentrated in cities, as people with certain occupations could no longer work from their homes or on nearby farms. So rather than jobs being destroyed, they were replaced by different jobs.
That's not to say this didn't cause economic hardship. It forced people into cities to work gruelling and dangerous jobs, and they had to actually secure those new jobs in the first place. Some of those who were particularly affected joined forces and were dubbed "Luddites". They were against the new technology and would destroy machinery in protest.
This revolution was actually very slow, taking place over several decades. As the technology developed slowly, factory jobs slowly replaced other jobs, and people migrated slowly: one family member settles in the city, then a second comes and stays with the first, and so on. The slowness helped ease the stress put on affected working-class families.
The similarity is that both the industrial revolution and AI automate parts of human labour, which is not the case with most innovation. However, neither fully replaces human labour; they change it, automating certain parts and creating a need for control over that automation.
For example, the decentralised production of cotton cloth (i.e. people making it in their own homes rather than in one specific area) gave way to jobs making cotton cloth in factories using a loom. Likewise, rather than jobs disappearing due to AI, they are replaced with jobs that involve controlling and monitoring AI. The key similarity to the industrial revolution is that, just like factories can't produce goods on their own, we won't be getting AI to produce products on its own either. AI at the moment is nowhere near reliable enough to go without heavy supervision. Even if AI improves dramatically, it would have to be amazing to be trusted with no supervision. I personally don't expect the future to look totally autonomous with no human oversight. If it does somehow happen though… now that would be a genuine issue for jobs.
That said, we can't guarantee the number of AI jobs will exactly fill the gap left by the jobs lost to AI. For all we know, the net change in available jobs could be negative. It is also likely that certain groups of people will feel the pinch much more than others when this transition happens. The scariest part of this transition, and where it deviates from the analogy, is how much faster AI is developing. As mentioned earlier, the first industrial revolution took decades to reach widespread adoption, counting from when the first genuine products and factories were being built (rather than from when the theory behind the technology was first conceived). If we count from when we first started building products using LLMs, in just a few years we are already seeing large progress in widespread adoption. I think most people are much too afraid of AI in general but, to be fair, the immense speed of adoption is intimidating.
I have categorised what are, in my view, the factors which determine the extent to which those worst affected by the widespread adoption of AI will experience economic hardship:
1) Generally speaking, which of the following three options will happen as a consequence of efficiency gains made by AI? (And will this vary by sector?):
a. Producing more product (Hyper-consumerism)
b. Producing the same amount of product and firing unneeded employees to maintain the same output (Post-AI Dystopia)
c. Keeping the same number of employees and lowering their weekly work hours (Post-AI Utopia)
2) How many lost jobs will be replaced with jobs using, implementing or supervising AI?
3) Is there a much higher barrier to entry for these new jobs compared to the jobs they replace? And are the skills and qualifications required similar?
4) How long after jobs start disappearing will new jobs start appearing?
These are all questions we can only speculate the answers to; more on 1) in the coming three sections. To be honest, I don't have an answer to the other three questions! There are people far more qualified than me for those issues.
I think it is totally feasible, though far from guaranteed, that the answer to all these questions is what you would want for a non-detrimental economic outcome for those worst affected by the adoption of AI. New non-technical roles could appear too: AI workflow designers who integrate AI into the processes of an existing company, AI quality reviewers who check AI outputs, and maybe even a whole new sector of jobs revolving around the governance of AI as the field develops. On an individual level, I think there will unfortunately be people negatively affected by the widespread adoption of AI, but maybe as this happens, governments can plan some sort of initiative to help these people out. Of course, the quality or existence of such services will vary greatly by country, if they are implemented at all. This also relies on government legislation keeping up with the pace of AI adoption, which is demonstrably a tall order.
I would categorise the economy into three levels. The first is people who work at companies that train and optimise cutting-edge LLMs, working on these models in some way: companies like OpenAI, Google, Anthropic etc. Call this group the "AI producers".
The next stratum is people who don't train or create LLMs, but who work at companies that utilise LLM outputs and reasoning as part of their product. Perplexity is an example, though not a typical one. It's hard to give concrete examples given the amount of noise in this space that probably won't last long term (more on this later), but any tool you've inevitably seen advertised online that wants to automate or speed up parts of your life falls into this category. These companies indirectly sell LLMs' human-like reasoning by utilising it in their product. Call these the "AI consuming producers".
The final stratum, consisting of the majority of the population, is people who work for companies whose product does not inherently utilise LLMs, and who only consume LLMs to improve work efficiency. Call these the "AI consumers".
So, we have the AI producers, the AI consuming producers and the AI consumers (I may omit the AI prefix when using this terminology). Now, to cover each of the three options from factor 1) above.
There have been talks of a universal basic income for the unemployed in the scenario where there is widespread unemployment. The "AI consumers" defined above would be the people at risk of falling into this group. Sam Altman, the CEO of OpenAI, has himself mentioned this in an interview. I will go over why in this section.
There is potential for an enormous concentration of money and power around the companies that create LLMs. When widespread adoption is more or less complete, essentially everyone will be paying the "AI producers" to use their models in the background as foundation models. These are companies like OpenAI, Anthropic, Google, DeepSeek etc. I mentioned in my previous blog that cutting-edge models cost upwards of $100 million to train! This massive barrier to entry inevitably means very few companies produce their own LLMs, creating an oligarchy of power as a handful of companies hold all the money and power over an AI-reliant society.
As these "AI producers" amass ridiculous amounts of money in this scenario, and as unemployment soars, we land in a situation of immense economic inequality: swarms of people who can't get a job due to a shortage of work on one end, and on the other, a few companies holding all the capital received from the entire world using their LLMs for automation and productivity gains. The proposed solution to this problem is universal basic income.
The companies who own these LLMs, with their obscene amounts of capital, fund a basic income so that unemployed people can get by even without employment. The companies would not be paying people directly; rather, there would be some form of AI tax, and this extra tax revenue would then be used by the government to provide the universal basic income.
Even if they are heavily taxed, imagine the amount of power these "AI producers" would have in this scenario! Also, for a lot of people, their job is deeply tied to their identity, their livelihood, or is just what they do to pass the time. I remember hearing several people at my old job say they wouldn't know what to do with themselves if it weren't for their job! Even if this solution prevents mass homelessness, it could cause a lot of other problems.
The "AI consuming producers", in my view, would be the bourgeoisie of this new society, distinct from the "AI producers" who are the upper-class elite, and the "AI consumers" who are the working class, or maybe more aptly named in this scenario, the not-working class! Potentially, the not-working class would be an extra class for the unemployed, sitting below and distinct from the working class, made up of the people who receive universal basic income. Both the working class and not-working class would then be "AI consumers", and the split between the two would reflect which industries lost more jobs than they gained through AI automation. The middle class of "AI consuming producers", being positioned in the AI space, would be able to shield themselves from unemployment as they still sell to consumers, granted their product actually lasts.
This has been dubbed "techno-feudalism". Feudalism is a very old economic system used in Europe hundreds of years ago. To gloss over the details, you have the king at the top as the wealthiest; the serfs, who live on and work the land and are the poorest; and the knights and nobility in the middle. Serfs were "paid" by being allowed to live on the land they worked. This is key to feudalism, as it left no room for social mobility: you cannot accumulate a surplus from that method of payment. Techno-feudalism draws parallels to this system, where the kings are the AI companies and the serfs are the working class who, say, live on a universal basic income. Though a key difference is that, since the payment is genuine money, there would be some possibility of accumulating wealth, the extent of which depends on the amount given as the universal basic income.
On the other hand, the potential utopia post AI adoption is the "two-day work week", as it has been dubbed by some AI leaders. If employees become 2x as efficient, why not let them work half as much without changing their pay? They still produce the same output, so the company can function as usual. In my opinion, this scenario is more likely where there is no financial incentive to produce more, specifically where efficiency is not the bottleneck on sales.
Take insurance companies as an example. You only have to employ labour proportional to your customer base. Efficiency gains may help you handle claims faster and more accurately, price customers more competitively and profitably, and answer customer calls faster, but these don't get you more customers. And if everyone has AI in advertising, nobody has gained an advertising advantage, so there is no reason to expect a big shift in the number of insured customers. In this scenario, then, I think it's plausible that employees end up with shorter workdays post AI adoption.
Another point of interest is the potential for massive deflation, for several reasons. The first is the productivity gains caused by AI adoption. If companies are producing significantly more, and consumers can't increase consumption at the same rate, simple supply and demand will force companies to decrease their prices. The big "if" is the extent to which consumption fails to keep up: as prices drop, people naturally consume more, which will, at least to some extent, curb the deflation by narrowing the disparity between production and consumption.
The second reason is that efficiency gains drop the cost of creating goods, which drops the price needed to sell the product at the same profit margin. The ability to decrease the price of your service at the same profit margin allows for easy growth of your customer base, or (more likely in this case) is necessary for staying competitive, since all your competition can also drop prices when everyone enjoys the benefits of cheaper production.
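To make that second mechanism concrete, here's a tiny sketch (all numbers made up by me for illustration) of how a fall in unit cost passes straight through to the price when the profit margin is held fixed:

```python
# Toy illustration (made-up numbers): holding the profit margin fixed,
# a fall in unit cost translates directly into a lower selling price.
def price_at_margin(unit_cost: float, margin: float) -> float:
    """Selling price that earns `margin` profit on top of unit cost."""
    return unit_cost * (1 + margin)

margin = 0.20                        # 20% profit margin, held constant
cost_before, cost_after = 10.0, 6.0  # hypothetical unit costs pre/post AI

print(price_at_margin(cost_before, margin))  # 12.0
print(price_at_margin(cost_after, margin))   # 7.2 -- same margin, lower price
```

If every competitor enjoys the same cost drop, that lower price quickly becomes the market price.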
Another factor that would curb deflation is government intervention. The target for inflation is typically 2%, and if we saw massive deflation, governments and central banks would try to oppose it through the usual means, such as cutting interest rates, buying government bonds and increasing government spending.
Given there are factors that would oppose deflation, the extent to which we would see it occur is not certain. What is also not certain is which sectors would be hit harder by deflation, which matters because government intervention does not target specific sectors. Generally speaking, past technological innovations that brought large improvements in efficiency led to deflation in the sectors they affected, with computing being the classic example. So, for AI, sectors that rely on software could be disproportionately affected by deflation, and that deflation would be more resistant to being fully curbed by country-wide government intervention.
Potentially, we see more than one of these three options happen. For example, we start in the post-AI dystopia with initial mass unemployment, then come out the other end and reach the post-AI utopia.
If a company over-commits to this cost-cutting strategy of firing and not hiring at the entry level, it could easily cause problems down the line if done too aggressively. The reason is simple: your middle and upper management didn't start there! You need to either train people up or nab them from other companies. You can't train people up if you keep the volume of junior-level roles in your company meagre, and you can't nab experienced people if everyone else is also firing juniors and consequently leaving a hole of experienced workers down the line! There have been mentions of companies that have fired a lot, but in my view these companies are being far too rash. I think that is a sign of prioritising money now over the long-term strategy, but the long-term strategy is what makes or breaks businesses. There have also already been instances of companies doing mass firings to replace workers with AI, only to immediately re-hire because they realised the mistake they had made! A PR nightmare and a waste of training and talent.
For the post-AI dystopia, there is a big question mark over how people in higher-up positions are replaced when they move companies or retire. Training people has always been an annoying entry cost for companies, especially when employees don't stay and you don't get the dividends from the time invested in them. Some companies are stingier with developing talent than others. But I just don't see how saving some money and resource on hiring and training juniors is worth the future problems it creates for your department's structure.
The .com bubble at the turn of the millennium and the housing market bubble that led to the 2008 financial crash are often cited when talking about an AI bubble. Are we headed for the same fate? First, we need to establish what a bubble actually is before we can discuss whether the AI industry is in one.
A bubble is created when speculative future income far outweighs what the future income actually turns out to be. I'll give a concrete example. It's a bit lengthy, so I've put it in the grey dropdown below, which you can click to reveal, and I summarise it in a few lines in the paragraph after, in case you don't want to read my massive tangent on how the financial crash of 2008 happened.
Say you are a bank that lends mortgages to people who want to buy a house. For simplicity's sake, let us assume all houses cost £100,000 (God I wish). The bank buys the house on behalf of the customer; the person takes full ownership of the house and has to pay off the loan to the bank for essentially buying the house for them. The bank charges interest on top of the regular repayments to make this lending profitable. These loans might last 20-30 years though, so you don't get all of the interest income immediately, and you don't immediately get back the money you spent buying the property on behalf of the customer. This means banks need to project their income into the future. Since, in theory, you will gain back that £100k you initially paid out plus interest on top, one way to project the income from all mortgages given is to approximate the interest you will get from each customer, add it up, and call that your income. At 5% interest over 25 years, a customer might pay around £75,000 in interest, which is nearly the entire property value! So, you could assume £75k per customer. But then, say 5% of customers don't manage to pay the loan and get the house repossessed. You then don't get all the interest, and by selling the house in a hurry, you may sell it under market value.
Say, on average, a customer who doesn't pay off the whole loan loses the bank £10k. Then, when the bank is audited to work out its profits, you could say your mortgage income is roughly 75k for each of the 95% of loans that get paid off, minus 10k for each of the 5% that default. It would be much more complicated than this; this is just an example. Now, say houses are going up and up in price and more people decide to get on the property ladder because "house prices only go up". Slowly, as word spreads, you get more and more people taking out mortgages, and it just looks like easy money for the bank. At the same time, bundles of mortgages are being sold to investors due to the rising interest and money in this area. Then these bundles are repackaged and sold on again. The money made by investors from these bundles, as well as from bundles of bundles, all relies on the banks gaining all this extra interest from these extra mortgages, which in turn comes from people thinking "house prices never go down, I'll get on the property ladder". The issue is, there weren't sufficient checks to make sure these customers could actually afford their mortgages. Ultimately, a lot of these people ended up defaulting (not being able to pay).
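If you want to sanity-check those numbers, here's a minimal sketch using the standard mortgage amortisation formula (the 5% default rate and the £10k average loss are the made-up figures from my example above, not real data):

```python
# Sanity check for the example above: a £100,000 mortgage at 5% over 25 years.
loan = 100_000
annual_rate, years = 0.05, 25
r, n = annual_rate / 12, years * 12      # monthly rate, number of payments

# Standard annuity (amortisation) formula for the fixed monthly payment.
payment = loan * r / (1 - (1 + r) ** -n)
total_interest = payment * n - loan
print(f"Total interest: £{total_interest:,.0f}")  # ~£75,000, as quoted above

# Expected income per loan, using the example's made-up 5% default rate
# and £10k average loss per defaulted loan.
p_default, loss = 0.05, 10_000
per_loan = (1 - p_default) * total_interest - p_default * loss
print(f"Expected income per loan: £{per_loan:,.0f}")
```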
Suddenly, as reality kicks in and all these defaults come in, banks realise they hugely overestimated their income: they don't get the full interest from the non-payers, and they even sell the repossessed houses at sub-optimal prices. This in turn hurts the value of the mortgage bundles sold to investors, as well as the repackaged bundles. These bundles had spread the damage from this assumption of income all over the economy, rather than the issue staying within banking. Thus, the entire economy collapsed when reality set in and banks became extremely risk-averse with new mortgages.
Fundamentally, it was assumed the market was worth much more than it actually was due to counting income from incomplete mortgages from customers who had no chance of paying them off due to irresponsible lending. This put banks in a terrible future position. Then, deals and investments were being made off this false assumption which spread the issue across the entire economy.
Now, the .com bubble. With the internet blowing up, venture capitalists were very liberally investing millions into any company that had .com in its name on the promise of easy internet profits, some of these companies having a basically non-existent profit model. On top of that, telecom companies were providing loans to customers so they could buy those telecoms' own products, and this circular financing of lending to customers to buy your own product led to disingenuous income inflation and hence inflated market valuations (more on circular financing later). Then, slowly, as no revenue came in, it became apparent that the majority of these companies were duds. People started selling shares to other investors who demanded a lower price as the stock became visibly overvalued, which made even more people sell as the price dropped. At the same time, the loans given by telecoms would never be paid back because their customer companies had gone under, causing a crash for telecom companies too. This process snowballed into a financial crash. The classic example is pets.com, but there were many companies with the same issue.
In both scenarios, the bubbles came about because investors were buying extremely overpriced shares or bundles at a very large scale. Then, once it was realised they were overpriced, the price tanked and shook the economy. The key component is speculation about the value of a company: expecting significantly more than what actually happens in reality is what inflated the value of these companies. Had those huge speculative incomes been matched in reality, no bubble would have burst. Bubbles don't come from huge valuations of companies, but from unjustified valuations of companies.
So, does AI fall into this? Well, first I'll give the reasons why you could very sensibly believe so. The first big reason is that not a single company that has trained an LLM from scratch has made that LLM profitable. ChatGPT, Claude and Gemini, for example, are not profitable endeavours for OpenAI, Anthropic and Google respectively. Despite this, OpenAI, who has no other product, has a valuation of $850 billion! This comes just a month after OpenAI raised an extra $100 billion of funding. They are currently trying to get an IPO going to get on the stock market, and it's thought the company could reach a valuation of $1 trillion. That is a third of the GDP of the entire UK! This company, which has not made a penny of profit and is losing money, would be as valuable as a third of the UK's output for an entire year. Likewise, Anthropic is valued at $380 billion despite not being profitable. I think investors have valid reasons to be investing so much, and while it's hard to say whether the valuations are inflated, I can see the logic behind them. I will get into why in a bit. In my opinion, these companies are very likely to become profitable in the next few years, before 2030.
Like I mentioned before, bubbles don't inherently come from big valuations. They come from a company's inability to match its valuation with results. What we should be asking isn't what the profits are now, but what they will look like in the future. Ultimately this is the million-dollar question (well, more like a trillion-dollar question!). The leaders of AI have had a pitch convincing enough that investors now believe that, at some point in the future, the profits will start coming in en masse. Whether a bubble bursts depends on whether they can achieve this before faith in the technology dies out. So the real question is: why are LLMs not profitable at the moment, and what is expected to change for profits to skyrocket to the extent the valuations suggest investors believe they will? Before I go into that, there is a different mechanism in play which has led to fear of bubble-like behaviour within these tech companies, though there is an explanation as to why the following does not necessarily imply a bubble blowing up.
These companies have a very peculiar profit model lying ahead of them. They have their current costs, which consist of the GPUs needed to train models and to run inference (the use of their models), paying employees, and investment into research to improve the models. They then have their current cash inflow, which is investment from venture capitalists as well as subscriptions from customers for premium versions and other benefits of their product. When you weigh just these against each other, the companies come out comfortably ahead. There is one glaring cost that makes them unprofitable: buying compute in anticipation of increasing future demand. Each year, the customer base of LLMs has grown roughly 10x. This means you need roughly 10x the compute every year. But you can't just spawn compute in and buy it; you need to build data centres to house all the infrastructure needed for compute (remember, I have been simplifying when I just say "GPUs"). This process of building data centres, as well as securing massive deals with companies who can provide quality parts for the infrastructure, takes time! This means you must buy in advance. That leads to a very difficult question: how much will we grow in the next few years?
This 10x year on year cannot continue forever; in fact, given the number of people on the planet, it will have to slow down drastically very soon. If you overestimate the number of people using your product, you buy too much compute, your income can't sustain the massive cost of the excess, and you go bust. If you underestimate, your servers get overloaded because you don't have enough infrastructure to handle all the requests, which means a poor experience for customers whose LLMs aren't responding (and who may move to a competitor), as well as a massive loss of potential income from all the extra customers that could have been served had more compute been bought. Just to reiterate: it is make or break for these companies whether they can, to at least some accuracy, predict the amount of compute needed for the next few years. Paying for all this future compute, based on projected customer growth, is why they are getting so much funding from VCs. The 10x growth year on year completely justifies it in my view.
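As a deliberately crude sketch of why this forecast is make-or-break (every number here is invented for illustration), imagine committing today to the compute you think you'll need, then seeing what happens when growth comes in differently:

```python
# Crude sketch (all numbers invented): compute is ordered years in advance
# against a forecast of user growth, so the forecast itself is the bet.
def procurement_outcome(forecast_growth: float, actual_growth: float) -> str:
    bought = forecast_growth   # compute provisioned now, relative to today
    needed = actual_growth     # compute demand that actually materialises
    if bought > needed:
        return f"over-bought {bought / needed:.1f}x: idle hardware, sunk cost"
    if bought < needed:
        return f"under-bought {needed / bought:.1f}x: overload, lost customers"
    return "forecast matched demand: maximum income, no waste"

print(procurement_outcome(forecast_growth=10, actual_growth=3))   # bust risk
print(procurement_outcome(forecast_growth=3, actual_growth=10))   # poor service
print(procurement_outcome(forecast_growth=10, actual_growth=10))  # nailed it
```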
Once growth slows down, these future deals will diminish in size and cost, eventually not being needed at all once customer volumes level out. At that point, when these massive payments for the future aren't needed and the income from the current customer base covers both present and future costs, these companies will be profitable. Remember, as mentioned earlier, this buying of future compute is why they are not profitable at the moment.
On top of this, the deals for compute tend to span multiple years, making them an even more expensive down payment than if the deals were renewed more frequently. This also makes them riskier, since you have to project further into the future, meaning more uncertainty and less margin for error in projecting future income.
OpenAI have been the most aggressive by far, and if they overshoot it could mean bankruptcy. But if they nail it and have the best partnerships and the largest supply compute-wise, they'll come out on top of a multi-trillion-dollar industry! Anthropic, on the other hand, have been quite timid with ordering compute. If they undershoot by a large margin, a lot of money will be lost from not being able to serve their entire base, and the service could become completely unusable if the acquired compute is insufficient for the usage of their LLMs.
For these reasons, I think it’s very reasonable for these companies to have large valuations, and the valuation sizes on their own are not the sign of a bubble. There is another factor which I would agree could lead to some bubbling of the AI industry if a close eye is not kept on it, which is circular financing.
Circular financing is the process by which money flows around the same set of companies in a loop, artificially inflating apparent demand, and it is argued this can lead to bubble-like behaviour in our case with AI companies. Imagine I am a company worth £100. Now, imagine you have £100 and I give you £10 as an investment. Now I have £90, you have £110. Then, using this £10, you buy something from me for £10. Now I have £100 and you have £100 (when liquidised), and I have £10 in sales. Then, using this £10 I just got, I buy something from you for £10. Again, I have £90, you have £110, but we both have £10 worth of sales. Then you buy something from me for £10: we are both back to £100, but I have £20 of sales and you have £10 of sales. The actual value of the two companies is exactly the same, but we've artificially grown our sales, which can then be used to justify higher valuations, because we've manufactured these sales and presented them as demand. The higher valuations come from people looking at these sales and deciding to invest, not seeing how misleading they are about the actual finances and profit of the company. This process played a big role in the .com crash.
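That toy loop is simple enough to trace in a few lines. The balances end exactly where they started, yet both companies can now report "sales":

```python
# The toy loop from above: the same £10 circulates between two companies.
# Net worth never changes, but reported sales grow with every hop.
me, you = 100, 100
my_sales, your_sales = 0, 0

me -= 10; you += 10                    # I invest £10 in you
you -= 10; me += 10; my_sales += 10    # you buy £10 of my product
me -= 10; you += 10; your_sales += 10  # I buy £10 of yours
you -= 10; me += 10; my_sales += 10    # you buy from me again

print(me, you)               # 100 100 -- balances back where they started
print(my_sales, your_sales)  # 20 10  -- but the books show £30 of "demand"
```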
This issue could be happening with AI. The first case is Microsoft and OpenAI. Microsoft invested billions of dollars into OpenAI, and OpenAI in turn pays to use Azure, Microsoft's cloud computing platform, so part of Microsoft's investment flows straight back to Microsoft as revenue.
My second example involves Nvidia, CoreWeave and OpenAI. Firstly, CoreWeave is a company that provides GPU infrastructure via cloud computing. Basically, you can use GPUs provided by CoreWeave over the cloud, rather than having to buy and physically house GPUs yourself.
Now, Nvidia invested a lot of money into CoreWeave, and CoreWeave in turn bought a lot of graphics cards from Nvidia for its GPU infrastructure (Nvidia make GPUs; CoreWeave put them in infrastructure so people can use them over the cloud). Then, Nvidia pays CoreWeave for cloud computing power, indirectly renting back the very GPUs it just sold to CoreWeave. In both of these cases, companies are cyclically buying from their own customers.
A third case is between OpenAI, Oracle and Nvidia. Oracle is, more or less, a very large company that deals with storing databases. This example joins onto the second one, so you could argue the two are part of one larger case of circular financing. Nvidia, as well as investing a lot of money into CoreWeave, has also invested billions into OpenAI. Using its funding, OpenAI is building a bunch of data centres to train its LLMs. Oracle has a partnership with OpenAI called Stargate, where they join forces to use Oracle's database capabilities in these data centres. Then, using Nvidia GPUs running on Oracle infrastructure, the LLMs are trained, and Oracle in turn spends heavily on Nvidia for the use of their GPUs.
We see OpenAI and Nvidia at the centre of a lot of this circular financing, though any company involved is potentially at risk of a bubble. The issue, as discussed before, is artificially pumping up projected valuations of a company to levels that cannot be matched by reality, which is exactly what circular financing has the potential to do.
Despite all this, I have seen arguments that the circular financing going on here is not necessarily an issue or the start of bubble-like behaviour. If the companies involved are genuinely serving each other with useful products and investment, and since these circular investments don't affect profits and only potentially inflate revenue, then, provided these companies can actually hit their valuations (which they are presumably betting on, given their vision of immense compute demand for AI), these circular deals could do no harm. But of course, you have to be very, very careful, especially given some of these circular deals are in the hundreds of billions!
Also, as a last point, it is worth noting that in the .com bubble and the housing bubble, one major issue was that they completely blindsided everyone. Nobody had any idea what we were headed towards. With AI, everyone is going on about it. If you read any article about a bubble, it will bring up the .com bubble. And if the public know about it, the incredibly rich investors who do this for a living, who have millions of dollars on the line, and who actually control whether bubbles develop and burst, definitely know about it. I don't think the situation is quite as simple as it was in the other two scenarios for this reason. Investors will be taking the potential for a bubble into account when supply and demand price these stocks, as well as when funding rounds grow these AI companies' valuations.
When the .com bubble burst in March 2000, a lot of companies went bust. The vast majority of companies that received venture capital funding went under. However, some survived. These were companies with strong fundamentals, which solved a genuine problem for customers, had an actual profit model, and were forward-thinking with a future vision rather than following the hype of the latest trend. While the dust was settling from the burst, these companies continued to grind quietly, and eventually, when the internet was ready to yield the value that .com-era investors had been far too presumptuous about (in terms of how quickly and easily it would arrive), these companies flourished. One example is a small online bookstore. It lost 90-95% of its value when the bubble burst and was predicted to go under within the year, but a few years later, in 2003, it had its first profitable year, and it was only up from there. That company's name is Amazon.
Likewise, we have the same issue at the moment with AI. Lots of "noise" businesses are being built as people join the new fad of AI. Every other ad I see on YouTube is for some gimmick software that I obviously don't need, and I'm sure you've seen the same. But eventually, as these companies die out, it is thought that the Amazons of the AI space, the ones that focused on fundamentals and thought long term rather than jumping on a hype train, will appear from the smoke as the survivors and go on to be successful businesses. How long it will take to weed out the "noise" companies is hard to say, but that time will come eventually.
All in all, it's very hard to predict what AI will do to the economy. The stochastic nature of the market makes any sort of economic prediction extremely difficult, AI or not. It's much more common to explain in hindsight why a certain economic outcome happened than it is to predict what will happen.
That being said, I don't think it's total doomsday for humanity economically, but I do think there is potential for certain groups of people to face financial hardship. I also think these hardships will be transient, as the dust settles and governments figure out what to actually do. And I don't think these are issues of the next few years, but rather of the next decade or two. If someone asked me what I'd recommend they do to survive the AI wave, it would be this: stay informed about AI technology in general, keep an eye on which jobs are disappearing and which are appearing, and do any studying and qualifications that can better your chances of being in the running for more future-proof work.
As for the profit models of these LLM producers, you must not fixate on the present numbers. So much of successful business involves looking into the future and making prudent predictions. None of these companies are profitable, but if you look just beneath the surface, you see why. Buying compute for the future will not be needed once customer volumes level out, and when the future no longer sinks billions, it's very likely these companies will be very, very rich IF they order the correct amount of compute now. Any company that over-orders now could be bankrupt within five years, and any that under-orders could massively dampen its income and user experience through huge response times from insufficient compute. A genuinely stable flow of revenue could make the circular financing point harmless, but care is definitely needed on this front.