Is AI Winter coming again?
40 points by gpt4o 53 days ago | 53 comments
I heard someone say that most AI startups aren't getting funding. I know the first AI winter started with a lack of funding.

So what's your opinion?




The first AI winter was triggered by a book about perceptrons (Minsky and Papert's Perceptrons) that was perceived as extremely critical of them, which killed interest. It took decades for neural nets to finally prove their effectiveness.

The second came about because "expert systems" were oversold and didn't prove as beneficial as promised.

There will be a heavy correction in the market caps of companies that bet heavily on AI, as the promises don't live up to the hype. In this case, however, there clearly is benefit to using LLMs, even though the cost of training has proven prohibitive. Slower, more prudent approaches, combined with better hardware, will allow progress to continue. If nothing else, it'll be a mild AI winter.


> However, in this case, there clearly is benefit to using LLMs

Legitimately curious question: what are the clear benefits, exactly? I mean things that objectively improve society, not the plaything ChatGPT has mostly become (there are exceptions here too, of course; a CFO recently told me he used ChatGPT to find mistakes and bad clauses in contracts).


Uses for LLMs, and neural nets in general, include: searching documents in discovery, sentiment analysis, summarization, fleshing out ideas, first-level support, writing code, speech-to-text, text-to-speech, and translation.

There are many things we can do now that were previously expensive or cost prohibitive.

This is about the same as the computerization of records. No longer do we have to pay people to keep rooms full of files in order to aid retrieval and long-term record storage.

--- but I see you're asking about non-cost things

We can now read documents that are translated from other languages, lowering those barriers to learning about other cultures. It's now easier to interact with the information available to all via the internet.

It also makes some evil things easier, but that comes with every technology.
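To make the "sentiment analysis" item concrete, here's a minimal sketch of what it looks like in practice. The prompt wording is my own, and the model call is stubbed out with a placeholder function so the example is self-contained; a real version would call a hosted or local model there.

```python
# Sketch: zero-shot sentiment classification via a chat-style LLM.
# send_to_llm is a stand-in for a real client call; it's stubbed here
# so the flow can be exercised end to end without an API key.

PROMPT_TEMPLATE = (
    "Classify the sentiment of the following review as exactly one of "
    "POSITIVE, NEGATIVE, or NEUTRAL. Reply with the label only.\n\n"
    "Review: {review}"
)

def build_prompt(review: str) -> str:
    return PROMPT_TEMPLATE.format(review=review)

def send_to_llm(prompt: str) -> str:
    # Stub: a real implementation would send the prompt to a model here.
    return "POSITIVE"

def classify(review: str) -> str:
    label = send_to_llm(build_prompt(review)).strip().upper()
    # Guard against chatty replies: fall back to NEUTRAL if unparseable.
    return label if label in {"POSITIVE", "NEGATIVE", "NEUTRAL"} else "NEUTRAL"

print(classify("Great battery life, terrible keyboard... mostly great."))
```

The point is that tasks which used to need labeled training data and a bespoke model now need a prompt and a parser for the reply.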


OK, I can partially agree with some of the points, but... writing code? Any tangible proof of that one? I have spoken with many colleagues who were adamant that LLMs can at best generate some boilerplate and not much beyond that. One or two mentioned it could find a bug in their Python code.

> This is about the same as the computerization of records. No longer to we have to pay people to keep rooms full of files in order to aid retrieval and long term record storage.

Eh, have you read the "Bullshit Jobs" book? It's a social and economic problem, rarely has been a technological one... TL;DR people want headcount, not efficiency. :/


> writing code? Any tangible proof on that one?

Personal experience? Not just me, but e.g. all the comments on HN on ChatGPT, Copilot, et al.?

I’ve been using it for about two years now to draft new code, after having programmed for two decades. It provides value to me in that some tasks that previously took me a whole day can now be done in a couple of hours: maybe one hour of back-and-forth with ChatGPT and then one hour of refactoring the code myself. I have plenty of colleagues with similar experiences, especially when using it for tasks that are “tricky to write but easy to verify” (e.g. visualizations).

Just keep in mind that (i) the paid version is miles ahead of the free version for coding, and (ii) using an AI assistant is like any other skill, it requires some training to learn what tasks they’re good at and how to talk to them.
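To give a concrete flavor of “tricky to write but easy to verify” (this example is mine, not something ChatGPT actually produced): a validation regex is fiddly to write from memory but takes seconds to check against a handful of cases.

```python
import re

# The kind of thing an assistant drafts well on request: match ISO-8601
# dates (YYYY-MM-DD) with basic range checks on month and day.
ISO_DATE = re.compile(
    r"^(?P<year>\d{4})-"
    r"(?P<month>0[1-9]|1[0-2])-"
    r"(?P<day>0[1-9]|[12]\d|3[01])$"
)

def is_iso_date(s: str) -> bool:
    """True if s looks like a YYYY-MM-DD date (no calendar validation)."""
    return ISO_DATE.match(s) is not None

# Easy to verify: eyeball a handful of cases.
print(is_iso_date("2024-09-15"))  # True
print(is_iso_date("2024-13-01"))  # False: month 13
print(is_iso_date("15-09-2024"))  # False: wrong order
```

Whether the draft is right is obvious from a few test inputs, which is exactly why these tasks are a sweet spot.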


Saving programmer time writing code does not, in and of itself, "objectively improve society" as the OP above sought examples of. Perhaps, if you are doing something extraordinary. However, AI also makes spambots and malware easier to write, as well as enormous amounts of simply mundane stuff - shopping cart APIs or credit card comparison websites and ad servers.


Fair enough, but I feel that the same can be said for e.g. the combustion engine…

The only thing cars did was to save time and resources doing things humans were already capable of doing. Now, such engines are used for life saving things like ambulances and food transport, but also nefarious things like invading other countries, or mundane things like dropping off your kids at karate class.

I suppose whether you think the combustion engine actually improved society depends on who you ask as well :)


Huh, didn't realize humans could fly or go to the moon without combustion engines.


I guess we’re stretching the analogy a bit far... But we’re mostly flying using jet engines, a distant descendant of the combustion engine developed decades after the car engine. If we want the AI analogue of that, it’s probably a future AGI and not what we have today :)


"We" aren't stretching any analogy. Your statement was ridiculous from the start. Pat yourself on the back for saving a couple of hours at work, and leave it at that.


> I have spoken with many colleagues who were adamant that the LLMs can at best provide some boilerplate generation and nothing much beyond that.

Why don't you try it out yourself, eh? There are a lot of free options.


Because my time is valuable, dude, and having to spend an entire weekend -- likely much more than that, even -- is not very feasible. And because I trust my circle of programmer acquaintances; they are extremely pragmatic folk who usually get called in to fix messes left by juniors, wannabes, poseurs, and "AI" enthusiasts.

Still, I am not entirely closed-minded about "AI", but I don't think it's unfair to say that if it were that good, our field would have been hugely disrupted by now. And it does not seem like it has been.


I don't use them that much, but a relative does, and they've told me they like using one that can do recipes (they have a household with very specific food restrictions) and another one (I don't remember what it's called, but it sounds vaguely like Magic School Bus) that's good for doing forms and rewriting things to be less, say, sweary (they're a grader, and the stories they tell about students submitting stupid things...)


Ohhh, those are actually awesome use cases, thanks for bringing them to my attention!


> There will be a heavy correction in the market caps of companies that bet heavy on AI, as the promises don't live up to the hype.

This might be your point, but I think in most cases the disappointment will be entirely financial. Some, perhaps even most of the tools and products being developed are pretty cool, but in the majority of cases they're basically just AI wrappers or nice to haves.

It's like launching an email service or a website that tells you the current time in different countries in the late 90s or something. Email might be great, and your service might be really popular, but that doesn't mean there's any money to be made because anyone can implement POP3. Same with a website that tells you the current time in different countries – nifty tool, but not a real profit making business.


It would be really interesting if the for-profit OpenAI failed, thereby letting the nonprofit OpenAI achieve its original purpose.


> AI startups aren't getting funding

Corollary: 99% of AI startups are not providing real customer value and do not deserve to get funding on merit.


There is a lot of funding for capital-intensive projects and not much for consumer-level projects that will bring AI to the masses. If just 10% of what's going to Nvidia went to the end-user ecosystem, we would see AI bringing much more ROI.


Correction: 99% of all startups are not providing real customer value. They will all die. The other 1%, however, bring real customer value and go on to become successes. AI startups included.

> do not deserve to get funding on merit

What are the criteria for deciding here? If you know, please share. I want to drop everything and invest in them.


I've seen an obvious ChatGPT wrapper get funded on a local variant of Shark Tank just because it was a nice idea. We've gone way too far already.


The criterion is providing real customer value. Now go drop everything and figure out who is doing that.


Many do. The question is can I recreate their exact business with a few lines of code?


No, you can't create a business with a few lines of code. Developing and marketing an app is so difficult that it is called "fullstack" and most people that call themselves fullstack developers aren't capable of doing the full stack.


AI tools and LLMs have been overfunded. A few are able to stand out, but many aren't able to catch up with the likes of OpenAI.

Image, video, music, speech aren't fully solved yet, and new models are still competing with giants like Imagen and Midjourney.

The startups that build on AI can still be heavily funded. Most likely you aren't going to see many more AI startups like character.ai; instead, they'll be things made possible by AI.

Uber wasn't an app company or a cloud company; it's a company made possible by the two. Reddit and FB/Meta didn't need mobile apps, but apps certainly increased engagement.


To be honest I wish the Attention paper never came out.

All of this AI/LLM stuff is exasperating. I don't think I can stomach hearing another MBA talk about "leveraging AI" in their next startup/product initiative.

Cool, better autocomplete in my IDE is handy. But that's about it. I'd trade it back in a heartbeat if I didn't have to have awful chatbots shoved in my face every time I open a webpage.

Retirement has never felt so far away.


MBAs? All I hear are programmers (including on this page) praising ChatGPT for writing some subroutine so that they didn't have to themselves.


> for writing some subroutine

Hadn't realised there were that many FORTRAN programmers still around (in fairness, various languages have used 'subroutine' to describe various special cases of functions, or sometimes just all functions, but generally those languages have largely died out.)


Unless there is another jump in quality with GPT-5, we have plateaued. There is still time, as there was a significant jump between GPT-3.5 and GPT-4; I'd say we have at most one year to find out. The fact that OpenAI keeps releasing new products means they are quite pressured by market forces to demonstrate progress, as investors grow anxious about the billions being burned on compute.

So in a nutshell, it'll boil down to the next year or so.

Regarding people who say that GPT-4 increases productivity and whatnot: that's not the point. GPT-4/Claude 3.5 do increase productivity, and I use them daily and frequently. The question is whether that productivity increase will match the investments made. And investors have poured too much into AI.


The question is whether completely new ways of using GPTs/LLMs will come up. If they stay what they are now (Copilot, summarizing text, etc.), a better GPT will bring only marginal gains, and 99% of investment in AI is wasted.

If they are somehow useful in very novel ways maybe things look better.


We've had an explosion of wrapper companies around ChatGPT. It seems text -> images -> code are the only things they are capable of, in order of diminishing returns. I can't really imagine them being useful for much else, because the ability to reason shouldn't be scoped to one particular field: if they can reason in math, they should be able to reason in programming or medicine too.


AI winter reasons: funding is lost due to unmet promises.

- The Lighthill Report (1973) shuts down funding in the UK.

- Dreyfus at MIT argues that a lot of human reasoning is not based on logic rules, involving instinct and unconscious reasoning. (No AI researcher will eat lunch with Dreyfus for the next decade.)

- Sussman: "using precise language to describe essentially imprecise concepts doesn't make them any more precise."


Winter is seasonal and localized. There will be a winter, but it will not be everywhere. There are a lot of folks building AI companies with no path to a useful product or profit; it's just the next hot thing. Likewise, quite a number of investors didn't want to miss the train, so they hopped on whatever train they could, irrespective of destination. There will be a moment of reckoning.

With that said, generative AI has proved to be very useful and we have absolutely no idea what the limits are; there's a lot to be built, and in that sense things will be good for quite a while. It might be that as the first useful ideas emerge, the companies that are currently lost will pick them up and pivot their way to usefulness and profit. Or perhaps there will be enough useful companies to drown out the lost ones.

That said, plenty of startups will die and plenty will succeed. It doesn't matter; just figure out a way to contribute positively and be among those that survive.


> I know that the first AI winter started by not getting enough funding.

Well... not quite. The funding was government funding. The opponents got behind things like the Lighthill Report [1] and had the government funding cut.

IMNSHO this was the right thing to do. The claims were overblown, and it was wasting national research funds. But it was pretty political; people stopped talking to each other for decades. I was a 12-year-old at the time, and my dad was in the trenches of the anti-AI-funding camp, so that's my bias. Nothing that has come since makes me reconsider this. I might say that modern-day private JV/VC funding has been a bit silly, but in the end it's private money, so it's their call.

[1] Lighthill, James (1973). "Artificial Intelligence: A General Survey" http://www.chilton-computing.org.uk/inf/literature/reports/l...


The only thing that could cause an AI winter at this point is a thermonuclear war.

The release of o1 alone hasn't been fully appreciated yet. And in the next few months and the year ahead, we're getting a deluge of new SOTA models that will completely change the assumptions about what AI can do.

A lot of the AI startups are just cash-grab wrappers with little understanding of the fundamentals under the hood. Their failure isn't reflective of the massive gains in AI and the track to AGI in the next few years.


Common programming problems will be solved because they can be approached in a probabilistic fashion by the very nature that they are common problems. I would expect harder problems to be much harder.

I am a non-software engineer at a small company where we could do whatever we want with language models in terms of the business. The issue, though, is that all of our business problems are stacks of one-off problems that the current models offer us basically no help on. Even if the models could help us, we would basically have to rebuild the business from the ground up around them to really benefit much.

As with the dotcom bubble, it will take a long time for culture, business, and society to leverage the technology. Between then and now, I suspect, sits an absolutely brutal AI winter brought on by nothing more than a good old recession.


Not a winter, but a correction akin to crypto a couple years ago.


Yep. A lot of eggs are currently in one, barely-solvent basket. What happens next is anyone's guess.


Big Tech is still pouring billions into AI. We're seeing real-world applications and breakthroughs. It's just harder for the "we'll sprinkle some AI on it" startups to get funded.

Investors are looking for solid business models and actual problems solved, not just hype. The wheat's being separated from the chaff. Probably healthy for the ecosystem long-term.


Remove startups from your thinking; they have nothing to do with what the AI winter was. The only winter will be for those who do not use AI.


We were talking about the new AI winter coming 10 and 5 years ago. Advances keep coming that put it off. As far as I can tell, we keep getting further away from the next AI winter, but that could change in a decade or so.


OpenAI is in hibernation now. If you look at OpenAI LLC's articles of incorporation and address, you will see them signaling this themselves. This is hype-cycle maneuvering.


Yeah, I have the impression they are delivering the bare minimum (or even less than that) ahead of a big, shocking release as soon as they can (but a year and a half has gone by already).


I wonder if they want to go for-profit before major updates


Most of the AI startups are also shameless API proxies. I don't see why funding or payments should go to yet another system-prompt wrapper.


Yes, LLMs are hitting a plateau and there's no new clear path to AGI or something close.


I heard someone say that we're at the AI version of 1996 during the dotcom boom.


People said the same thing about crypto right before it crashed. People say lots of things, it doesn't make them any more likely to be true.


Most people didn't even notice the crypto boom/bust, even people in tech.

It wasn't on the scale of dotcom.


Crypto never crashes, it just allows you to buy it sometimes.


I said that. I still think so.


I think so, yet for my career I’d hate to be wrong. I’m not sure how to weight my biases in the matter.

I use AI daily and like it for a few use cases, but I’m not sure how it could bridge the gap from “useful boilerplate generator and rubber duck” to something as basic as “can actually find useful ways to clarify or optimize code in ways I can’t”. It’s not a big ask but it feels very far away despite recent progress.


We're really early. It's been only 1.5 years since GPT-4, which caught everyone with their pants down. Hardware has to catch up; we need to produce a lot more compute. People need more time to build on top of the foundational models.


GPT3, right?



