AI Winter Is Coming (leehanchung.github.io)
67 points by fzliu 4 days ago | 56 comments





I don't know about "Winter". The original "AI Winter" was near-total devastation. But it's probably reasonable to think that after the hype train of the last year or two we're due to be headed into the Trough of Disillusionment for LLM-based AI technologies on the standard Gartner hype cycle: https://en.wikipedia.org/wiki/Gartner_hype_cycle

Maybe, but I already find modern AI immensely useful in a lot of ways, and I see things constantly improving. E.g. realistic video generation, music generation, OpenAI's advanced voice mode - it's still wild to me how good these are and how well LLMs can perform.

I still remember, even when first seeing GPT-3.5, thinking that what it could do must be impossible and that there had to be some sort of trickery involved. But no.

I'm still impressed and amazed daily by what AI can do.


Infinite content produced by AI has close to no value.

Note the "trough of disillusionment" does not drop to 0.

It is also a measure of hype, not utility.

The current crop of AI tools have their uses and they aren't going away. However, the hype was basically built on the principle of "YOU CAN JUST WAVE AI AT ANYTHING AND REMOVE ALL THE PEOPLE!!!1!", without any need to think about how the AI will be useful, or think about how it will fail, or, you know, doing any of the usual engineering that new technologies inevitably need. You won't need to! The AI will just engineer itself!

This is, of course, bunk.


I wonder if people are part of very different environments? I haven't witnessed this kind of hype except for a few cherry-picked, maybe even out-of-context statements.

Where do people actually notice all of this hype? Because what I notice more is people complaining about the AI hype than the hype itself. Since the beginning, basically.

I'm not from the US, so maybe I'm not exposed to those things. When I traveled to SF some time ago I did see a lot of weird banners - is that the hype people are talking about?

I do see people talking about the future and what AI could mean for it - that it could replace X and Y - with many different opinions and many different timelines, but I think all of that is still plausible.

Do companies heavily invest in AI? Do they talk a lot about AI in their earnings release? Do they try to put AI into a lot of products? For sure - but I think that's a very reasonable thing to do when a new technology like this arises. It does have promise, we don't know exactly how much yet, but if we don't try it out, we won't find out either. You should try it out and see where it works and where it doesn't. Given the seeming promise of this, I think current investments in it are very, very reasonable.

It could fail and plateau, but given where it is right now and how much it has evolved in the past years, I think it makes sense to invest that much in it.

Despite seeing amazing things every day that make me wonder how they're even possible - things I never would've thought, five years ago, would be possible now - people are writing articles about how the AI hype is dying out and the AI winter is coming, which seems crazy to me. It's only been a few years of amazing advancements. Tech has evolved at a pace never seen before, and people are claiming things are slowing down?


Well, one of the measures is, is anybody actually making money with this AI yet, in the sense of mining gold and not selling shovels?

The answer is that it's fairly hard to find a solid yes. The ad companies claim it's contributing to revenue, but many of them have plenty of reasons to keep the bubble going and aren't going to say anything else, and it's hard to establish what contributes to what in the ad space anyway.

It's been put in a lot of things, but it is not at all clear that it is contributing money, as opposed to being in things that were loss leaders anyhow. How, exactly, does an AI assistant on Amazon's shopping site "make money"? It's an intrinsically fuzzy question under the best of circumstances.

Evidence that it has helped programmers is decidedly mixed at best. For everyone saying it has given them superpowers we have an awful lot of reports of buggy generated code and more bugs making it into final code as a result.

You might say "it's not all about the money", which is true, but again, this is about hype, not social utility. I don't see AI living up to the hype. All the moneymakers are the shovel-sellers. If AI was living up to the hype, somebody ought to be making money by now.

Part of the reason it has not lived up to the hype is the sky-high bar the hype has set. The stock market bubble is not pricing companies like Nvidia to make some decent money on AI over the next few years. It's priced as if they're going to be the only company that can do AI and as if everyone has not yet even begun to spend on AI. But if returns don't start coming back on the AI spend, that valuation is going to prove to be premature.

It can help to look back at a previous example of this exuberance to see what I'm talking about: the dot-com boom. The reality is, basically everything the dot-com boom promised happened! Even the thing people mocked for years, "selling pet food online", happens now.

But it happened 20 years later. Far too late for a company founded in 1998 that absolutely depended on having 2020-levels of internet infrastructure.

AI isn't going to disappear and we may even be underestimating the change it will bring in the long term. But that doesn't mean that the curve is going to smoothly slope up over the next 50 years. These things often get out "over their skis". AI seems badly over its skis to me, not because it won't be as useful in 20 years as it is promised, but because it is not as useful right now as is promised.


> Well, one of the measures is, is anybody actually making money with this AI yet, in the sense of mining gold and not selling shovels?

With new tech, or when starting something new, it's rare to be profitable within the first five years in the first place. And there's certainly revenue coming in, in many places. I spend a lot on AI services myself, from all the current popular LLMs - Claude, GPT, Perplexity - to music generation tools like Suno. Some of them I use for fun, and many just for experimenting or out of curiosity, but for many of them I think my productivity and value output has increased by more than I pay. There's a lot of revenue coming in, and at the same time it makes sense that costs right now are higher than earnings. But I have also directly increased my income thanks to AI tools: I do my usual work faster, and I do freelancing on the side, which I charge quite a bit for by the hour - far more than I spend on those AI tools. Without AI tools I couldn't do the work as fast or have the energy to produce this much.

> How, exactly, does an AI assistant on Amazon's shopping site "make money"?

It depends on how the AI assistant is built. I have a lot of thoughts about shopping UX, and I think this is really a UX question - how better UX increases e-commerce conversion - but I don't want to go that deep into it here. I can definitely imagine ways AI could improve the UX so that it finds matching products for the customer much faster than the standard UX would. That provides value because it takes less time to find the product, and it would possibly find a higher-quality match.
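For what it's worth, here is a minimal hypothetical sketch of one way that could work: an LLM turns a vague shopping query into structured filters that the ordinary product search then consumes. call_llm() is a stand-in for whatever chat-completion API you'd use, and the catalog data is made up.

    import json

    def call_llm(prompt: str) -> str:
        # Placeholder: a real system would send the prompt to an LLM API
        # and ask it to emit search filters as JSON.
        return json.dumps({"category": "boots", "waterproof": True, "max_price": 150})

    def search_products(filters: dict) -> list[str]:
        catalog = [
            {"name": "TrailMaster boots", "category": "boots", "waterproof": True, "price": 120},
            {"name": "CityWalk sneakers", "category": "sneakers", "waterproof": False, "price": 80},
        ]
        return [p["name"] for p in catalog
                if p["category"] == filters.get("category")
                and p["waterproof"] == filters.get("waterproof")
                and p["price"] <= filters.get("max_price", float("inf"))]

    query = "something to keep my feet dry on muddy hikes, under $150"
    filters = json.loads(call_llm("Extract search filters as JSON: " + query))
    print(search_products(filters))  # ['TrailMaster boots']

The conversion win, if there is one, comes from the customer typing one sentence instead of clicking through facet menus.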

> Evidence that it has helped programmers is decidedly mixed at best. For everyone saying it has given them superpowers we have an awful lot of reports of buggy generated code and more bugs making it into final code as a result.

I know it's definitely helped me a lot. I don't know if it's a skill issue or a workflow issue or what it is that keeps some people from seeing it as a valuable multiplier for their productivity, but I haven't noticed myself producing buggier work because of it.

> You might say "it's not all about the money", which is true, but again, this is about hype, not social utility. I don't see AI living up to the hype. All the moneymakers are the shovel-sellers. If AI was living up to the hype, somebody ought to be making money by now.

I wouldn't say that. I do think it has to make money eventually, but I also think that with new tech there's always a period when it makes strategic sense to lose money, just like when starting any new company. And some definitely do make money. Again, I individually make more money because I can work more, and I can translate that into freelance work, which purely salaried work might not reward as directly. I also use AI at my day job to spend fewer hours on it.

> Part of the reason it has not lived up to the hype is the sky-high bar the hype has set. The stock market bubble is not pricing companies like Nvidia to make some decent money on AI over the next few years. It's priced as if they're going to be the only company that can do AI and as if everyone has not yet even begun to spend on AI. But if returns don't start coming back on the AI spend, that valuation is going to prove to be premature.

This argument requires coming up with specific numbers, and market valuation is very nuanced.

> It can help to look back at a previous example of this exuberance to see what I'm talking about: the dot-com boom. The reality is, basically everything the dot-com boom promised happened! Even the thing people mocked for years, "selling pet food online", happens now.

Yeah, but I think that's an argument for AI rather than against it?

> But it happened 20 years later. Far too late for a company founded in 1998 that absolutely depended on having 2020-levels of internet infrastructure.

The major players right now have a lot of funds to keep going with it, though.

> AI isn't going to disappear and we may even be underestimating the change it will bring in the long term. But that doesn't mean that the curve is going to smoothly slope up over the next 50 years. These things often get out "over their skis". AI seems badly over its skis to me, not because it won't be as useful in 20 years as it is promised, but because it is not as useful right now as is promised.

We don't know the future or the curve, and no one can predict the timelines for sure, but based on the knowledge we have, I think the money currently being put into AI makes sense, given its capabilities and how fast those capabilities are improving. If we estimated a 50% chance of AGI by 2035, people are absolutely not putting enough money in right now - because if AGI were to happen by 2035, it would make sense for very many to go absolutely all in on it.


You seem to be unable to separate the concept of "hype" from "value".

Until you succeed in doing that, you're going to be terminally confused by a lot of things.


It can't be economically sustainable if this is it, right?

Feels different to past hype cycles (Internet bubble, Crypto bubble).

LLMs gained meaningful capabilities very quickly - e.g., one week they were not that useful, the next week they were.

A function that takes text and returns text isn't that useful without it being integrated into products, and this takes time.

The next 12-24 months will be the AIfication of many workflows: that is, discovering and integrating LLM-based reasoning into business processes. Assuming even gradual improvement in LLM capabilities over time, all of these AI-enhanced business processes will simply get better.
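To make "integrating LLM-based reasoning into business processes" concrete, here is a minimal hypothetical sketch of one such workflow step - routing support tickets - with the LLM call stubbed out. call_llm() stands in for whatever provider you use.

    def call_llm(prompt: str) -> str:
        # Placeholder: a real system would send the prompt to an LLM API.
        return "billing"

    ROUTES = {"billing": "finance-team", "bug": "engineering", "other": "front-desk"}

    def route_ticket(ticket_text: str) -> str:
        label = call_llm(
            "Classify this support ticket as one of billing/bug/other:\n" + ticket_text
        ).strip().lower()
        # Fall back safely if the model returns something unexpected.
        return ROUTES.get(label, ROUTES["other"])

    print(route_ticket("I was charged twice for my subscription last month."))  # finance-team

The point is that the integration itself is ordinary plumbing; as the underlying model improves, the same step just gets more accurate for free.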

Diffusion of technology is slow, slow, slow, and then fast. As I become more capable with AI (e.g., learning which of my tasks as an engineer it actually helps with), I'm getting better and better at it. So there's a non-linear learning curve: as you learn to use the technology better, you unlock more productivity.


The AIfication of products sounds to me like making them both less reliable and less predictable. Not a good thing.

Honestly I think we're already there, it just takes a bit before the realisation trickles down.

The successful uses of LLMs don't seem to depart too far from the basic chatbot that started the whole hype. And the truly 'magic' uses seem to fail in practice because even a small error rate is way too high for a system that cannot learn from its mistakes (quickly).
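The arithmetic behind that is unforgiving. A back-of-the-envelope sketch (the 1% per-step error rate is purely illustrative, not a measured number):

    # If each step of an autonomous multi-step workflow has a small,
    # independent chance of error, the chance that at least one step
    # fails compounds quickly with workflow length.
    per_step_error = 0.01
    for steps in (1, 10, 50, 100):
        p_any_error = 1 - (1 - per_step_error) ** steps
        print(f"{steps:>3} steps -> {p_any_error:.0%} chance of at least one error")
    # prints: 1% / 10% / 39% / 63%

And without the ability to learn from its mistakes, the system pays that tax on every run.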


>don't seem to depart too far from the basic chatbot that started the whole hype.

Is ChatGPT-3.5 a basic chatbot now? It's been less than two years since it was SOTA.


Nicely put

> Or take Ed Thorp, who invented the option pricing model and quietly traded on it for years until Black-Scholes published a similar formula and won a Nobel Prize

Hardly quietly. Thorp published "Beat the Market" in 1967 detailing his formulae - six years before Black and Scholes published their similar formula, and thirty years before Scholes shared the Nobel.


I like the distinction between producers and promoters. This is why I am naturally skeptical of polished demos and people posting in their real name. If you post in your real name, you are at a minimum promoting yourself (generally boils down to “I am a smart, employable person”).

I wish I had a better heuristic, but the best I've found on Twitter is pseudonymous users with anime profile pics. These are people who don't care about boosting a product. They're possibly core contributors to a lesser-known but essential Python library. They deeply understand a single thing very well. They don't post all day because they are busy producing.


I also second that anime pfps and other blue-checkmark-less accounts on X produce far more grounded takes (not just on AI).

X is a very good microcosm of that producer/promoter model from Dalio, except that the promoters are seemingly the entirety, and they are so extremely loud that it trumps all common sense and reasoning.

It's also very tiring to scroll through "I made $XXXX in 30 days with AI and I'm only a 17-year-old high school student" or "we shipped a ChatGPT wrapper and used dark patterns for subs".

On LinkedIn it's far worse: everybody is a genius, and everybody needs to pay attention to me, on the remote chance a recruiter from big tech will reach out and pay me a large salary for managing their impression.

All in all, it really feels like the American economy is running on pure hopium and fumes. This cannot be good for it in the long run.


> On LinkedIn it's far worse: everybody is a genius, and everybody needs to pay attention to me, on the remote chance a recruiter from big tech will reach out and pay me a large salary for managing their impression.

Right. So much content, but it feels so empty. Do people actually network there?


You can network perfectly fine with the chat functionality. The LinkedIn feed doesn’t serve any purpose other than to deliver ads.

These, indeed, are the actual accounts worth following - the ones still bearing some resemblance to the early internet adopters who were there for the fun, not the profit. I never thought about the name perspective, though; that's something I can only agree with you on, and it immediately cancels out people such as Lex Fridman and Alex Volkov - which seems like the right thing to do, tbh. Some very obscure accounts are, to me, the real opinion leaders; they know how to ride the viral wave on repeat. Grimes doesn't, though.

Soon: "LLM, take my self-promotional content and rephrase it as if I was a producer."

People warning about a coming AI winter are almost as annoying as people doomsaying about AGI. It's going to be somewhere in between. It can be disappointing and revolutionary at the same time. We had the dot-com crash, and yet out of it grew some of the largest corporations in the world: Microsoft, Facebook, Apple, Amazon, etc.

The article is less about a winter for the field than a winter for AI boosters, who will soon move on to become “experts” in The Next Big Thing.

For people working in the field, deep learning has already proven itself to be self-funding. It’s the main source of Google’s profits. It’s TikTok’s algorithm. Et cetera.


Every one of those companies predates the dot com bomb by quite some time.

And AGI is science fiction with no credible plan of how to get there. If you can even get everyone to agree on the same definition.

An AI winter is something that can be measured and is factual, e.g. lacklustre spending on AI products and a dry-up in VC funding.


Kind of depends on what you call “AI”. Large language models, maybe. AI is a lot more than that though. Deep learning isn’t going anywhere.

The fact that VCs aren’t throwing millions of dollars after every CS undergrad who figured out how to make an API call to OpenAI means they are wising up. The main question is why it took this long.


Microsoft and Apple preceded the dot com bomb by several decades. (Microsoft 1975, Apple 1976)

Amazon was a company that was around and survived the dot com bomb (founded in 1994, roughly around the time of the beginning of the bubble) [though its stock took about 7 years to recover]

Facebook was post dot com bomb. (founded 2004)


> And AGI is science fiction with no credible plan of how to get there.

I mean... you can't really have a (strict) plan for how to build something that nobody knows how to build (yet). But that doesn't necessarily mean it's "science fiction". There are credible reasons[1] to believe that AGI will happen - eventually. To me, the biggest question is around timeline, not "will it happen or not". Now granted, that allows for anything from "tomorrow" up to "the heat death of the universe", so you can accuse me of dodging the issue if you'd like. But I'd bet money on it happening closer to "tomorrow" than "the heat death of the universe".

[1]: among others, the progress on AI that's already been made. While we may not have AGI, it's hard to deny that we have AI that's a far sight better than what we had in 1956. The other is that, unless you believe in magic, the human brain is an existence proof that human-level AGI is achievable on a deterministic machine that operates according to the physical laws of the universe. It would seem to follow that it should be possible (albeit perhaps very difficult) to achieve that same level of intelligence on some other deterministic machine. And note that even if "Penrose is right" about the brain relying on quantum mechanical phenomena, there's no particular reason to think that those can't also be mirrored on a human-made machine.


There are credible reasons to believe that anything will happen eventually.

You could say that faster than light travel, curing every disease, etc are all possible because progress has been made.


> There are credible reasons to believe that anything will happen eventually.

Yes. To paraphrase something I read once (I think it was from Eric Drexler):

Any technology that isn't physically impossible, will be realized one day. The question is always "when" not "if".

Now of course one might counter that there isn't much value in that knowledge when you can't say anything meaningful about that eventual time-frame. But I personally think it's worth considering that binary distinction between "absolutely will not happen, ever, period" and "will almost certainly happen eventually."

> You could say that faster than light travel,

I generally agree with your overall sentiment, but I might quibble on this specific example a bit. Yes, we've "made progress" at traveling faster and faster, but I would argue that that doesn't count as "progress" towards FTL travel, because we have a specific, generally accepted, theoretical reason why FTL travel is actually physically impossible. Of course some hypothetical new finding could overturn Einstein's work, but that seems unlikely. In this context, I'd treat FTL as "impossible" and something like AGI as "will happen, we just don't know when."


Part of the issue is people don’t even agree on what “intelligence” is or how to measure it. This isn’t like fusion power where we will definitely know when we have it and the goalposts aren’t moving.

That's a fair point. We certainly lack rigor in defining intelligence and AGI - especially in the vernacular sense and among the lay public. But among people who work on this stuff, there are useful definitions that are widely used, if not universally accepted as "the" definition.

I would say that the material from Chapter 4 of Engineering General Intelligence - Volume 1[1] by Ben Goertzel reflects a pretty spirited and useful attempt to capture the important details, at least vis-a-vis the discussion at hand.

Excerpt:

Many attempts to characterize general intelligence have been made; Legg and Hutter [LH07a] review over 70! Our preferred abstract characterization of intelligence is: the capability of a system to choose actions maximizing its goal-achievement, based on its perceptions and memories, and making reasonably efficient use of its computational resources [Goe10b]. A general intelligence is then understood as one that can do this for a variety of complex goals in a variety of complex environments. However, apart from positing definitions, it is difficult to say anything nontrivial about general intelligence in general. Marcus Hutter [Hut05a] has demonstrated, using a characterization of general intelligence similar to the one above, that a very simple algorithm called AIXI can demonstrate arbitrarily high levels of general intelligence, if given sufficiently immense computational resources. This is interesting because it shows that (if we assume the universe can effectively be modeled as a computational system) general intelligence is basically a problem of computational efficiency. The particular structures and dynamics that characterize real-world general intelligences like humans arise because of the need to achieve reasonable levels of intelligence using modest space and time resources.

[1]: https://www.amazon.com/Engineering-General-Intelligence-Part...
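For the curious, Legg and Hutter's universal intelligence measure (from the same line of work cited in the excerpt) compresses that characterization into one formula; in LaTeX, with E the set of computable environments, K(mu) the Kolmogorov complexity of environment mu, and V_mu^pi the expected value an agent pi achieves in mu:

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

In words: intelligence is expected performance across all computable environments, with simpler environments weighted more heavily - which is why, as the excerpt says, real-world general intelligence becomes a problem of computational efficiency.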


The original "AI Winter" was primarily a government funding phenomenon [1]. There was no "bubble" in the private sector. I.e., the winter was the result of responsible people in government realizing the hype was over-extended and standing up for the taxpayer. Progress would be made, eventually, but not in that moment. (Those people were correct, btw.)

> But beneath the surface, there are rampant issues: citation rings, reproducibility crises, and even outright cheating. Just look at the Stanford students who claimed to fine-tune LLaMA3 to be multimodal with vision at the level of GPT-4v, only to be exposed for faking their results. This incident is just the tip of the iceberg, with arXiv increasingly resembling BuzzFeed more than a serious academic repository.

Completely agreed. Academia is terminally broken. The citation rings don't bother me. Bibliometrics are the OG karma -- basically, fake internet points. Who cares?

The much bigger problem is that those totally corrupt circular influence rings extend into program director positions and grant review committees at federal funding agencies. Most of those people are themselves academics (on leave, visiting, etc.) who depend on money from the exact sources they are reviewing for. So this time it's their friends' turn, and next time it's their turn. And don't dare tell me that this isn't how it works. I've been in too many of those rooms.

It's gotten incredibly bad in ML in particular. Our government needs to cut these people off. I am sick of my tax money going to these assholes (via the NSF, DARPA, etc.). Just stop funding the entire subfield for a few years, tbh. It's that bad.

On the private sector side, I think the speculative AI bubble will deflate, but also that some real value is being created, and many large institutions are actually behaving quite reasonably compared to previous nonsense cycles. You just have to realize we're mid-to-late cycle, and companies/groups that aren't finding product-market fit with LLM tech in the next 2-3 years are probably not great bets.

--

[1] https://en.wikipedia.org/wiki/Lighthill_report


> There was no "bubble" in the private sector.

There was a small bubble.

There were 1980s AI startups: IntelliCorp and Teknowledge. IntelliCorp pivoted from expert systems to UML and was acquired. Teknowledge seems to have disappeared. (The outsourcing company called Teknowledge today seems to be unrelated.) There were the LISP machine companies, Symbolics and LMI. There were a few others, mostly forgotten now.


The first (big) AI winter he refers to was in the mid-70s.

The mid-to-late-80s AI winter did affect private companies, but that too was mostly because government funding - where their revenue was coming from - was reduced or eliminated. Much of the revenue of the computer hardware and software companies in the 80s AI bubble came from government programs like the Strategic Computing Initiative and the Strategic Defense Initiative ("Star Wars"), both running from 1983 until 1993 with various levels and aims of funding. That was part of the effort to win the Cold War against the Soviet Union (here by investing huge amounts of money into modern weapons and defense systems, which also meant computing and AI), and the Soviet Union eventually collapsed in the late 80s/early 90s. Also, many of the promises of the AI technology did not materialize, so the private sector did not take over the funding.


An "AI Fall" maybe. But "AI Winter"? I really doubt it. And the author of this piece presents very little in the way of compelling arguments for the advent of said AI Winter.

For all the valid criticisms of "AI"[1] today, it's creating too much value to disappear completely and there's no particular reason[2] to expect progress to halt.

[1]: scare quotes because a lot of people today misuse the term "AI" to exclusively mean "LLMs", and that's just wrong. There's a lot more to AI than LLMs.

[2]: yes, I'm aware of neural scaling laws, some related charts showing a slowdown in progress, and the arguments around not having enough (energy|data|whatever) to continue to scale LLMs. But see [1] above - there is more to AI than LLMs.


> This is how we’re headed for another AI winter, just as we saw with the fall of data science, crypto, and the modern data stack.

The fall of data science??? When did that happen? I’m not squarely in the field, but I thought I would have heard about it


> The fall of data science??? When did that happen?

It didn't. "Data science" may not be the latest trendy, catchy buzzword of the day, but nothing holds onto that title forever. Losing the crown to the trendy tech du jour isn't the same as "falling off", IMO.


A similar phenomenon, on a smaller scale, is happening with what I call meta-cloud PaaS, which facilitates web app deployments/provisioning. These usually run on top of AWS or other large clouds, hence meta-cloud.

It started with Heroku, but now it has gained VC attention in the form of Next/Vercel, Laravel Cloud, Void(0), Deno Deploy, and Bun's yet-to-be-announced solution. I'm probably forgetting one or two.

Don't get me wrong, they are legit solutions. But the VC money currently being poured into influencers to push these solutions makes them seem much more appealing than they would be otherwise.


Heroku has been around for almost 20 years, Vercel was Zeit ~10 years ago, and both have always been widespread solutions, so I wouldn't say there is hype only now.

I can't vouch for Laravel Cloud or Void(0), since I've never used them, nor will I comment on Deno/Bun, since they are far more recent.


> That leading edge research paper is most probably someone’s production code.

Very powerful, albeit sad, statement.


So the argument is that during a gold rush there are scammers selling pyrite and misleading prospective prospectors to quarries where there is no gold, and because all of this is happening, the gold rush is therefore nearly over. Okay. Good article otherwise. But Geoffrey Hinton takes the opposite stance (so does Eric Schmidt), recently stating that the last 10 years of AI development have been unexpected and that the trend will continue for the next 10 years. Though perhaps that can be handwaved off as cheerleaders/promoters.

Here is the thesis at the end:

> the real producers will keep moving forward, building a more capable future for AI.

This is one of many signal flares going up.

Do something or cash out of the AI space. Engineers are tired.


I appreciate a good original perspective, but much of this seems overblown...

“Meanwhile, data scientists and statisticians who oftentimes lack engineering skills are now being pushed to write Python and “do AI,” often producing nothing more than unscalable Jupyter Notebooks”

Most data scientists are already well versed in Python. And there are so many platforms emerging that abstract away a lot of the infra required to build semi-scalable applications.


I don't think there'll be much of a winter for a while. The winters were mostly economic effects where the funding dried up, and current AI seems to be getting near human levels, which will be a big economic incentive to plow ahead.

Our take on this from the industry POV: https://www.latent.space/p/mar-jun-2024 (there's a podcast version too if you click through).

Broadly agree, but I think predicting an AI winter isn't as useful as handicapping how deep it will be and still building useful things regardless.


At least we got a new keyboard Super modifier key out of it. Or maybe we should make it the Compose key?

Interesting that this article is right next to one making the opposite point:

https://news.ycombinator.com/item?id=41813268


> as we saw with the fall of data science, crypto, and the modern data stack.

Has data science or the modern data stack fallen? And what relevance does crypto (I assume cryptocurrency) have to an AI winter?


Again?

If anyone had this knowledge, they wouldn't tell us; they'd keep their market edge and make a bet out of their own selfish greed.

Anything else is PR

Discuss amongst yourselves: Rhode Island, neither a road nor an island.


I am a pure mathematician by training. I _hate_ machine learning. The entire field seems to me like a bunch of unprincipled recipes and random empirics. The fact that it works is infuriating, and genuinely seems like a tragedy to me. The bitter lesson is very bitter indeed.

But I've been hearing the refrain of this article for a decade now. I just don't believe it anymore.


> The bitter lesson is very bitter indeed.

Assuming you mean this[1] bitter lesson... sharing the link for anyone who isn't familiar with the term in this context.

[1]: http://www.incompleteideas.net/IncIdeas/BitterLesson.html


Just like most other criticisms I've seen of AI, this one criticizes the hype around AI, not the technology itself. It isn't clear whether the author conflates the two, but a lot of people wrongly do. AI isn't one-to-one with NFTs; there being a lot of grift around something doesn't make it useless or mean it won't change the world.

I hate to post a reply that amounts to a long-winded "this". But you nailed it, IMO. I agree with everything you said here.

Awesome.

chatgpt wrapper startups are ngmi

who's going to tell him #feeltheagi



