The Magnitude of the AI Bubble (apolloacademy.com)
47 points by helsinkiandrew 4 months ago | hide | past | favorite | 68 comments

I think many people on both sides of the "AI is a bubble" debate seem to continuously carry the same misconception: that significance is somehow tied to merit.

On the "AI is just a bubble" side we have: AI is not as good as claimed therefore its impact is not as significant as advertised - ergo it is a passing fad & market caps are inflated.

On the "AI is legit" side we have the counter: AI is at least nearly as good as claimed and/or has the potential to be - ergo the hype is warranted & it's here to stay & market caps are warranted.

I'm in the third camp: impact is not related to merit, and AI is impactful in not-necessarily-positive ways that are societally transformative & will reach far outside any bubble & persist. Probably to a more significant degree even than predicted. And I think this extends far beyond the impact of purely economic knock-ons as seen with some latent impacts of blockchain speculation.

Given that third thesis I don't think the market caps are significantly inflated (beyond any other comparable market caps outside AI): investment is not merit based, it's impact based.

AI has seen so many hype cycles that there is a term for it in the industry: AI winter.


Origin & usage of the term are fine, but structurally a lot of that Wikipedia article seems like original research.

Also, grouping the current "AI summer" in the "2005-" category seems questionable.

Spring is coming.

Question is if summer is coming or if another winter is looming after this spring.

The way I see it, either AI takes off and capital becomes an even greater proportion of production than in the late 19th century, or my job stays relevant.

To hedge against the first possibility, I want to at least have some presence in the stock market. (A broad presence, not only "AI stocks"). If the current AI boom is a bubble, I will stay employed for another couple of decades, and don't need those savings yet.

> or my job stays relevant

As an abstract, I would wager AI will take off AND your job will stay relevant.

As an individual, the relevance of your skillset within the professional sphere is more variable - some jobs change more frequently than others, & I do think AI will change the nature of some jobs.

The most discussed example is graphic design: if you're a graphic designer "pre-AI take off", there's a reasonable chance you're balancing a more lucrative corporate slog with a less lucrative creative outlet. I'm banking on AI becoming favoured for the slog, but never approaching anything competitive in the creative space. That will lead to job losses or jobs becoming even more vacuous than before, but as an abstract it won't eliminate the actual creative role.

If your job does not stay relevant then a large number of other jobs will not stay relevant either. This would impact consumption and therefore also negatively impact those stocks. Unless, of course, political steps are taken, in which case holding stock will not matter.

Consumers are not _really_ needed in an economy, and in particular mass consumers are not needed.

It's perfectly possible to imagine an economy where most of the productivity goes into catering to the needs of those who hold capital, such as personal consumption, security and investment into activities that increase the value of their capital.

As an example, consider North Korea. Very little goes to the consumption of regular citizens, as most goes to the regime and its police and military forces.

With AI, it potentially becomes easier for an elite to do this, as the police force can be robotized, meaning a handful of people (or none at all) can monopolize violence without relying on the support of a large group of humans.

For instance, if democracy collapses, the tech industry could step in as "saviors" and institute some kind of oligarchic republic or empire where the government is under the control of a wealthy elite (or an individual).

Even without a full collapse of democracy, I imagine it may be difficult for a government to fully control industry if AI becomes sufficiently powerful.

It is quite plausible to me that the result will be that international corporations will be able to protect 50% or more of value generation from taxation, by relocating to favorable markets, lobbying, clever lawyers, etc, just like today.

In both of the scenarios above, we may be seeing a massive increase in the gap between those who own capital and those who do not as labor is no longer needed to generate this value.

While it IS conceivable that at least some countries are able to adopt full socialism based on an AI economy, I wouldn't take that for granted.

> the third camp: impact is not related to merit, and AI is impactful

I suppose I'm in Camp Zero: If you want me to invest, tell me exactly what it does and what you think it can do.

If you have created a better probabilistic language generator, you may have created some business value. But calling it "A.I." and implying that it is "A.G.I." is the kind of deceit practised in a typical economic bubble.

AI is a superset of ML, and calling it AI is perfectly acceptable by academic and industry standards regardless of your personal feelings about it.

Following your logic, we could order the terms thus: A.G.I. > A.I. > M.L.

No marketing team has yet had the audacity to claim that A.G.I. == M.L. but I am alas not aware of any robust long-term protection against marketing audacity. ^_^

A.I. remains a theoretical hypothesis, and that is not an opinion, regardless of the marketing tactics and money thrown at the question. As for probabilistic text generation: generating dubious text at scale isn't intelligence.

It's good enough for chat-bots but will not do for any task requiring serious responsibility.

It may nonetheless prove to be a viable business... or short-term pump-and-dump. ^_^

My take is that even without significant improvements over the current level of LLMs, you are going to see a significant impact on jobs and society over the coming years. There's so much low-hanging fruit just from improving the UI around the existing models. I've seen prototypes that can automate a huge amount of grunt work and common manual workflows that office workers do every day. They just need to be integrated at the web browser or OS level so they can be used across apps, with some ability to preview/rollback any changes via a human sanity check for the cases where the AI agent messes up. Microsoft and Apple can destroy most of these AI startups by streamlining the workflow; Microsoft is already rolling stuff out. Apple really needs to back one of the open source models to support their on-device privacy message and include it with their devices as an improved Siri.

And all of this is assuming we don't see the exponential AGI model improvement a lot of the AI bros hype up. If we continue to see improvements on the level of GPT-2 to GPT-3 to GPT-4, then the economy as we know it changes forever. I've already seen research prototypes using LLMs to do automated reinforcement learning and optimization for robotics, so you could rapidly see automation there as well.

It's true that LLMs can be used to automate things that couldn't be automated before. But even without LLMs there are untold millions of jobs that could be replaced with a simple shell script. Staggering inefficiency is everywhere. So maybe the GPTs won't be that much of a gamechanger?

The reason the scripts are not in use is that they're costly to maintain due to sprawling edge cases. One LLM argument is that they are better able to handle these ambiguities without expensive developer support.

I for one look forward to the 737 Max's on-board flight-critical LLM; it'll help with our emissions goals.

unless the reason why these jobs weren't replaced with a simple shell script was that we didn't have enough people for whom such a shell script was "simple".

maybe LLMs will change that: maybe it will help write all those "simple" shell scripts?
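A minimal sketch of the kind of "simple" shell script being discussed (the filenames and CSV layout here are made up for illustration): it merges a folder of per-month report files into one, keeping the header row only once, the sort of copy-paste chore an office worker might otherwise do by hand in a spreadsheet.

```shell
#!/bin/sh
# Hypothetical grunt-work automation: combine per-month CSV reports
# (all sharing the same header row) into a single merged.csv.
set -eu

# Sample input data, just so the sketch is self-contained.
mkdir -p reports
printf 'id,amount\n1,10\n' > reports/jan.csv
printf 'id,amount\n2,20\n' > reports/feb.csv

out=merged.csv
first=1
for f in reports/*.csv; do
  if [ "$first" -eq 1 ]; then
    cat "$f" > "$out"          # keep the header from the first file
    first=0
  else
    tail -n +2 "$f" >> "$out"  # skip the header on every later file
  fi
done
```

Trivial for anyone here, but exactly the kind of thing most office workers can't write today and an LLM plausibly could write for them.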

I think there's probably a lot of truth to the meme that goes around saying that our jobs are safe because effective LLM use requires people to know what they want.

I think the exact opposite is true.

The more you play with these tools, the more you realize that they are not nearly as robust as they need to be to have a significant impact on society. We need much more reliability.

We will undoubtedly continue to see new generative AI tools rolling out, but I think the end result is going to be a lot less disruptive than many claimed when ChatGPT was initially released.

Of course if there is exponential improvement all bets are off, but we have had nearly a year of stagnation where GPT-4 hasn’t been beaten. This year we’ll see if this is an actual ceiling.

Weird title. Just to pick one point: considering Apple has very little to do with the AI hype (Siri still sucks!), it seems pretty crazy to tie Apple's stock value to the "AI bubble".

Apple has actually created ML chipsets and SDKs[0], so AI can be executed natively, on-device.

They tend to lag the hypewave, letting the dust settle, before they move in.

[0] https://developer.apple.com/machine-learning/

Won't local inference cause either slow responses and/or increased battery usage? Also storage space.

I wonder if it's worth it. Some here will say privacy but in my experience most users outside HN don't really care about that.

> in my experience most users outside HN don't really care about that.

In my experience, they very much do care, but don’t understand the ramifications of the convenience, pushed by … um … HN users.

In my experience, once I explain to them what kinds of risks they take, they tend to batten down the hatches.

Out of curiosity, what do you tell them?

I don't have a prepared script.

I basically fill them in on the kinds of stuff that we all take for granted.

But the folks I Serve, with my software, tend to be a wee bit more tinfoil than most. It's a long story, and not one for this venue.

So is it safe to say, reading your last line, that it's not really most users outside HN but a specific group?


I'm usually careful to couch things in terms of "It has been my experience," etc.

> Apple has very little to do with the AI hype

The AI Hype is boosting all large tech players because investors think they will be in a good position to benefit from it.

It's rumoured/estimated that their AI hardware spending will be over $4 billion next year [1].

I would guess that at some point Apple will release a better Siri, or an iPhone with onboard hardware for local LLM capabilities, or whatever else it takes to compete with MS.

[1] https://medium.com/@mingchikuo/how-much-does-apple-need-to-i...


Look at the 5 year trend. Please tell me where on that graph the AI investor hype starts. I'll wait.

Apple stands to benefit significantly as soon as they put base models for transcription, text generation, image generation, etc behind apis so their developers can leverage the latest improvements

When some of the best models are open source and allow commercial use (llama 2, Mistral), assuming there is economic value here [1], apple will be in the conversation

[1] feels like a big assumption, maybe! Isn't this just autocomplete? I think good semantic search is pretty valuable (as are other high level semantic concepts beyond similarity), but this is the part that needs more evidence for sure

>Apple stands to benefit significantly as soon as they put base models for transcription, text generation, image generation, etc behind apis so their developers can leverage the latest improvements

OK, but most companies stand to benefit significantly as soon as they do things that are beneficial to their bottom line that they have not yet managed to do. Sure, AI is probably a thing that they will do and benefit from, but it seems like they have wanted that benefit, without success, for a long time now.

Huh? Apple is investing huge amounts into AI, mainly generative AI.

I read about the art market a while ago.

Some artist's works have an interesting dynamic. One such is Andy Warhol.

A great number of his works were bought by a few families in the USA. These were then used as collateral for loans that underpin empires of real estate & businesses. The value of the collateral can never be allowed to sink, and obviously - if it rises further leverage can be created.

Should a Warhol hit the market, it will have a bottom price set by these investors. No matter what, it will sell for £300k or £1m or whatever, depending on which category it falls into. If the families have to buy it, it will enter the collateral pool; if they don't, their collateral remains good, and potentially grows.

So Warhols, at least, are just disconnected from their value as anything but a token. I suspect, very strongly, that at least some of these stocks have been given similar dynamics by their ownership and control structures. That's sort of OK while it lasts, but at some point someone may drop the ball, and if they do, the extent of the pretend value will become very apparent very quickly.

It's an interesting anecdote, but the floor on the stocks being talked about in this article is due to them already making tons of money. Warhols are worth a lot of money because a select group of people say they are worth a lot of money. The average person on the street probably wouldn't be willing to pay the price of an iPhone for a Warhol if they didn't know who he was. On the other hand, billions of people use Microsoft and Apple products, so it's not just due to an elite opinion that the stocks have lots of value. And it's not unreasonable to imagine a world where these companies dominate in AI and make much more money in the future.

I do feel like there are WAY too many people that don't take the tech seriously. I don't know if it is ignorance, hubris, or something else. Even though the stocks are hot, I don't feel like there's a bubble on the basis of bogus tech. The dot-com bubble was a bubble because investors got into a frenzy and invested in every novel idea out there - but the current AI/ML wave feels much more focused, like there's a clear path for how people want to use the models.

I'm not saying that the models will make workers obsolete right now, but there are many white-collar jobs/tasks that will be seriously affected in the next 5-10 years, all while the workers themselves envision a 20-30 year career doing the same stuff they do today.

If someone had asked me 5 years ago, if these things were possible, I'd react the same way - and say "Yeah, maybe 10-20 years from now", but for me that timespan has effectively been halved after all the progress of 2023.

Anyone working in anything remotely tech adjacent is fooling themselves if they thought their career would somehow be static for 30 years. Let's look backwards and think about the drastic paradigm shifts in tech however you want to slice it whether deployment models, monetization models, languages, etc. Server/desktop/web/mobile OR paid-apps/e-commerce/subcription-apps/microtransactions/saas OR C/Java/HTML&CSS/JS/Python/Rust/.. etc. Remember when we were all going to be out of a job when Dotcom imploded, or because of India outsourcing boom?

LLMs give me a similar feeling to some of the car ADAS or Voice Assistant stuff. The first time you interact with the state of the art it is startling and impressive, going to change everything. The rate of change however is not that dramatic on an annual time scale, the rough edges become more obvious and you understand the product is actually quite limited, interest fades.

So, I think a lot of the 1-3 year time frame forecasts are extremely optimistic. However in a 10 year time frame I think we'll have some very interesting viable products. We probably have no idea today what those viable products will look like.

> ignorance, hubris, or something else

People that think brains are fueled by magic tend to assume AI will never fully take off.

> The dot com bubble was a bubble

There is not a 1:1 relationship between how big an impact AI will have in the coming years and how it will impact the valuation of "AI companies". Even if the progress slows down, it's possible that some companies will find ways to monetize it in ways that generate immense profit, if they can create some moat.

On the other hand, even if AI development takes off beyond expectation, it may be hard for companies like OpenAI or even Nvidia to continue to extract profit if they lose their moat.

In fact, AI itself can potentially cause this, as a sufficiently strong AI may be able to remove most of the costs associated with developing new chips or AI models, to the point where the marginal value of intelligence and compute becomes really low.

The tech is very interesting and quite useful in certain contexts, but no question we’re in a big bubble.

Like any bubble/hype wave there’s a lot of low value-add stuff that needs to get washed out of the system before a small number of long term winners are established.

The dominance of the Big 7/Big Tech is concerning (and may be driven primarily by passive index flows), but what has that to do with AI? Going through the list, just off the top of my head:

nVIDIA - clearly riding the AI wave

Alphabet/Google - supposedly a mortal risk to their current model (search) so negative for them

Microsoft - Depending on the drama of the week with OpenAI, either the biggest beneficiary or biggest loser

Apple - supposedly not a player but again maybe will do something on-device and leapfrog everyone

Tesla - Depends on the moods of Musk that week and what he blurts out with Cathie Wood

Amazon/Meta - not clear

> nVIDIA - clearly riding the AI wave

More like "the ones selling shovels during the Gold Rush", in my opinion.

...If the shovels had been made of 24 carat gold, perhaps.

This is a somewhat cynical take for the last two on your list.


Amazon:

- Currently hosting AI/ML-generated reviews submitted by users of the site [0]

- Using AI-generated review summaries (search for "AI-generated from the text of customer reviews" on any product page)


Meta:

- Currently funneling AI-generated propaganda to their users

- Attempting to help funnel AI-generated propaganda for their users [1]

- Attempting to help funnel AI-generated propaganda for their customers [2]

[0] https://www.cnbc.com/2023/04/25/amazon-reviews-are-being-wri...

[1] https://about.fb.com/news/2023/09/introducing-ai-powered-ass...

[2] https://www.facebook.com/business/news/generative-ai-feature...

IMHO Apple should be able to launch the Vision Pro with functionality/apps that let you describe some 3D scene in words or pictures/gestures and render it on the fly.

https://eckertzhang.github.io/Text2NeRF.github.io/ + 3D Gaussian splatting + time-varying flow on splats (for which transformers would be quite apt, to ground changes based on the prompt) is my bet for how they could do it.

The AI bubble only exists because search sucks. Remove the novelty of chat, and it is nothing but a filter on bad search results.

Google will hopefully reinvent search for the post-AI era. If AI is search, AI is indexing, and they are good at it. All the others are using stale data, which won't cut it beyond lame, limited assistants for specific use cases.

But yeah, more likely they won't do anything good in the end.

I've been thinking that for a while. What I also find interesting is the limited vocabulary we use for writing code. There are obviously huge benefits if others are supposed to be able to read your code, but there are also different approaches to accomplishing the same thing which are not very readable if you are used to another approach (different implementations, but also different languages). If we appreciate how hard it is to explain some technical goals (in human languages), it doesn't seem a stretch of the imagination to have an almost-human programming language that is super easy to access for untrained "programmers".

Then there are a good number of occupations that require so much memory that even the best of humans are a poor fit. I can't wait for a help desk "employee" who answers the phone immediately, knows who I am, is familiar with all my previous interactions with the company, is able to guess why I would be calling, and is allowed to make decisions. It seems much better than having 100 employees who know very little trying very hard to deal with 300 calls, with poor results, while it costs $1500 per hour × 90 hours per week ($135,000).

Not sure what you are smoking :) but you started with the divide between system programmers and business-logic programmers... and ended up with some thoughts about access to information. Most call centers already know everything just from the phone number you call in; they just don't tell you, or won't/can't act on what they know.

I'm pretty sure there is more to it than just search.

The topic reminds me of early computers. People could imagine games and some niche company archives. The idea that it would touch everything didn't sound very likely to most.

All Nvidia is doing is unlocking a different growth curve: Huang's Law vs Intel's leisurely Moore's Law.

They finally found a case where this matters and that’s all that mattered, it’s gonna saturate them for years to come.

Nvidia is clearly riding the AI wave, but on the expectation that the demand for compute will keep increasing.

As for Tesla, I thought the general consensus was that they are behind Cruise, who are far behind Waymo on self-driving?

Waymo has self-driving ready and is operating in a few cities.

It's a decade ahead of Tesla or Cruise (which just had some drama with their permits to operate because their software is crap.)

We are in a world where self-driving is already a reality. It's just missing the scale aspect.

Where does the Tesla << Cruise come from? Tesla has the data moat, Cruise doesn't.

The data from Tesla is of lower value at present, as with our current hardware/software/AI knowledge they can't make meaningful use of it to drive a car in complex traffic scenarios.

Meanwhile Cruise uses lidar, which simplifies a lot, because lidar gives out better data. It's still crap compared to Waymo, which can actually drive a car very well. Waymo has already won this battle.

Ride a Tesla using FSD, Cruise and then Waymo, I'm sure you'll be surprised by how good Waymo is.

Tesla's data faces a similar situation to training an LLM with data from the internet: it will be utter garbage. But if you use less data, but good data + RLHF, you get better results.

The main gist is that there's a whole lot of money chasing AI right now. The market value of the corporations involved are not limited to their AI efforts. I'd go so far as to say, for them, AI is just the latest feature to add to their products.

Yeah, they are chasing the position of being the first to have that added feature to lock in their position in whatever space they are in, or to take it from the current holder. It will become a commodity at some point. That is when the bubble bursts.

Putting hype and usefulness on the same axis is a common mistake.

It helps to think of AI as a discovery, a feat of science we now are unlocking using experimentation rather than a traditional engineering product.

Once you make that shift, the expectations we project on it start to look silly. It's marginally useful as a search engine, but is discussed primarily as one because Microsoft adopted that framing as the business-model angle they saw as most damaging to Google - not because it's a particularly good fit for that use case.

Once you understand it as a discovery, you understand how silly much of the framing and opinion really is - the "flaws" usually being projections of the expectations imposed by the companies trying to monetize it.

Just because the technology isn’t fitting their business model framing doesn’t mean it can’t be phenomenally transformative to society and wreck a whole bunch of stuff.

Scientific discovery this year has been breathtaking on every dimension, not even taking hardware into account. And we don't have a sense of where it ends, beyond: not going on forever and not stopping tomorrow. Most people argue about the technology from the perspective of today's limitations in regard to today's use cases. And that's not a good way to approach the problem.

This has far less to do with an AI bubble and far more to do with the fact the US has a lot of very high margin, relatively high growth international businesses.

These companies are richly valued primarily because they make massive sums of money and all seem to have bright futures. It's hard to find companies with characteristics like that (if they exist at all) outside of the US.

It's such a feedback loop though. Give promising european talent the same amount of cash and it's likely they'd do as well. Sometimes money is the main constraining factor (especially ML related things).

> Give promising european talent the same amount of cash and it's likely they'd do as well.

I don't think so. The mentality in most of Europe is completely different. Most people don't strive for the extreme levels of success of companies like Google, Tesla or Apple, but are happy to merely get rich. Also, most gain beyond "merely rich" tends to be absorbed by governments here, not the founders. That severely hinders rapid growth.

The few people with a similar mindset to Americans either migrate there or end up selling to some American company before scaling.

For those not actually clicking through to the article, it makes no statement whether AI itself is a bubble.

This is simply a chart of the frothiness of the Magnificent 7 post-COVID in what has been an AI sentiment driven tech rally.

The tech market rolled over after the initial COVID recovery WFH/remote tech rally as money tightened & rates went up. We then rotated into this AI sentiment rally about a year ago.

Whatever you think about AI, it is often important to understand the sentiments driving broad stock movements and when they turn.

This is incredibly misleading, as it has little (to nothing) to do with any AI bubble. To tie this to a current bubble, you’d have to show some sort of outsized growth in these companies since the AI hype train started, and that’s not the case.

3 years ago, these same 7 companies combined were worth… 4 times the Russell 2000 [0].

So the only story here is that stocks have managed to mostly keep pace with each other before and during the current AI hype cycle.

[0] https://edition.cnn.com/2020/08/20/investing/faang-microsoft...

I have yet to see the bubble. Where are the triple-digit P/E values for vast amounts of tech stocks? Where are the companies nobody knew before (like pets.com in 2000) but ranking at the top? Where are the daily IPOs popping out of nowhere? Where are the instant paper millionaires via employee stock options, but caught up in lock-up periods? Cannot see any of that.


Not to dunk on the author, but after reading a bunch of the literature and living through two asset bubbles myself, it's unfortunate how people dilute the word/economic phenomenon without any research into past bubbles.

What is happening is that we have a great fragmentation of technology due to its many barriers to adoption and commoditization, where everyone will surely try to ride their R&D teams and investments.

At the end of the day, markets will pursue the maximum edge at the cheapest price available (in this case open source models, libs, and data(?)), and consumers will converge to the cheapest of the best networks; so the most rational move for a lot of businesses is to bet on something that can give some positive returns rather than miss out and be eaten.

Half of the companies in the Russell 2000 are unprofitable and chronically mismanaged. By contrast, the Magnificent 7 are effectively monopolies with good management teams that make money hand over fist.

OK, Apple and Tesla are overvalued. They're both struggling with growth. But Microsoft has a market cap of 2.8 trillion and it's still growing at 15% year over year. It's absolutely unreal. Amazon, Meta and NVIDIA are also doing great.

It's easy to yell "bubble!" and certainly it's not hard to find stocks that are overvalued. But it's nothing like yesterday's zaniness of NFTs and DAOs.

Slightly misleading, to the extent that decent chunks of these businesses' revenues come from the very countries whose stock markets are compared without the presence of these multinationals, which operate in all of them. Survivorship bias, in other words.

The financial economy feels like a python that consumed a giant lunch, and has that giant lunch moving through the thing in one big lump trying to get digested.

I think the primary issue is that during the pandemic we printed a bunch of dollars, and those dollars have been chasing a productive position ever since. AI is clearly a tech that is going to change the world, that much should be clear. But how do you invest in it? I guess throw it in Microsoft and Nvidia.

Link to archive:


What's the Magnificent Seven?

I mean, the pursuit of AI seems very pure. Similar to the pursuit of cleaner energy. So, in that way, whether AI is currently a bubble or not doesn't matter too much. In the longer term, if the pursuit is to have infinite intelligence, there can actually be no value or market cap associated with it.

I don't understand why the market cap of a certain group of companies is correlated to the "AI bubble".

AI is just a tailwind for these fantastic companies (regardless of your opinion on how they make the money, they ARE fantastic companies, at least in the brutal capitalistic sense - they make a boat load of money!)

I mean these companies have been going up since the 90s. Obviously the author doesn't understand the geometric nature of markets - especially in the case of market leaders.
