Hacker News new | past | comments | ask | show | jobs | submit login

I've been saying the same things for weeks, right here and in the usual places. Basically: OpenAI will not be able to continue commercialising ChatGPT-3.5; they will have to move to GPT-4 because the open-source alternatives will catch up. Their island of exclusivity is shrinking fast. In a few months nobody will want to pay for GPT-4 either, when they can have private, cheap equivalents. So GPT-5 it is for OpenAI.

But the bulk of tasks can probably be solved at the 3.5 level, and another, more difficult chunk with 4. I wonder how many requests will be so complex as to require GPT-5. Probably less than 1%.

There's a significant distinction between web search and generative AI: you can't download "a Google", but you can download "a LLaMA". This marks the end of the centralisation era and an increase in user freedom. Chat and image generation without being tracked are now possible, while searching, browsing the web, or torrenting are still tracked.




> I've been saying the same things for weeks, right here and in the usual places. Basically - OpenAI will not be able to continue to commercialise chatGPT-3.5, they will have to move to GPT-4 because the open source alternatives will catch up. Their island of exclusivity is shrinking fast. In a few months nobody will want to pay for GPT-4 either when they can have private, cheap equivalents. So GPT-5 it is for OpenAI.

It is worth $20 a month to have one UI on one service that does everything.

Unless specialized models can far exceed what GPT-4 can do, being general purpose is amazing.

IMHO the future is APIs written for consumption by LLMs, and then natural language interfaces and just telling an AI literally anything you want done.


>It is worth $20 a month to have one UI on one service that does everything.

Competition will drive profit margins and prices down to nothing, because the number of companies that can spin up a UI is unlimited. Markets don't pay you what something is worth; they pay what the cheapest participant is willing to sell it for.


>Markets don't pay you what something is worth, they pay what the cheapest participant is willing to sell it for.

I believe 'what something is worth' is defined as what the market is willing to pay.

And sometimes the customer will pay for something that isn't the cheapest option, which is why I'm writing this on a Mac.


> competition will drive profit margins and prices down to nothing

I strongly suspect the profit margin on ChatGPT is already pretty low!

> Markets don't pay you what something is worth, they pay what the cheapest participant is willing to sell it for.

Correction: markets pay what companies are able to convince consumers to pay. Some products bring negative value to the buyer but are still sold for hundreds of millions of dollars (see: enterprise sales and integrations, which oftentimes fail).


That last argument is a tautology btw


I'm paying, but I hate the UI. I had to add labels myself with a Tampermonkey script; it would be much better if they gave API access to what I'm paying for and let UIs compete.
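For anyone curious what that kind of userscript amounts to: the core is just a small title-to-label mapping. A minimal sketch, where the labeling rules and the DOM selector are entirely made up for illustration (ChatGPT's real sidebar markup differs and changes often):

```javascript
// Hypothetical labeling rules: map a conversation title to a label.
const RULES = [
  [/\b(bug|error|stack trace)\b/i, "debugging"],
  [/\b(draft|email|letter)\b/i, "writing"],
];

// Pure function: first matching rule wins, otherwise "misc".
function labelFor(title) {
  for (const [pattern, label] of RULES) {
    if (pattern.test(title)) return label;
  }
  return "misc";
}

// In the actual Tampermonkey script you'd then prefix sidebar links,
// with whatever selector matches the page at the time, e.g.:
// document.querySelectorAll("nav a").forEach((a) => {
//   a.textContent = `[${labelFor(a.textContent)}] ${a.textContent}`;
// });
```

The point of keeping the rules as a plain data table is that the brittle part (the selector) is isolated to one line you fix whenever the site's markup changes.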


> It is worth $20 a month to have one UI on one service that does everything.

Today it is. When there is an open source, capable “one UI for everything” that runs locally and can consume external services as needed (but keeps your data locally otherwise), will it still be?


You can't train ChatGPT with your own data and it has the infamous "As a language model..." problem. This is why an alternative that can be run locally is a better option for many people.


> I've been saying the same things for weeks, right here and in the usual places. Basically - OpenAI will not be able to continue to commercialise chatGPT-3.5, they will have to move to GPT-4 because the open source alternatives will catch up. Their island of exclusivity is shrinking fast. In a few months nobody will want to pay for GPT-4 either when they can have private, cheap equivalents. So GPT-5 it is for OpenAI.

I wonder if this effect will be compounded by regulatory pressure that seems poised to slow down progress at the bleeding edge of LLMs.

Open source closing the gap at the bottom, and governments restricting further movement at the top...


> I'm wondering how many of the requests will be so complex as to require GPT-5

I am not sure the pessimism is warranted. It's true that few people need to upgrade from GPT-3.5 to GPT-4 now, but if GPT-5 is another serious leap in capabilities, it might have an effect closer to the difference between old chatbots (useless, if interesting) and ChatGPT (immediate economic impact, transforming some jobs). Or at any rate, we should expect such a leap to occur soon, even if it's not GPT-5.


It's also significant that much of this AI boom was due to the UI of ChatGPT, which gave everyone easy access to the model. Perhaps much of the improvement in GPT-5 will also be found in the UI. I mean UI in the broadest possible sense; I'm sure we'll come up with very creative ways to interact with this over the coming years.

But the moat problem addressed in the article remains. Good luck patenting your amazing UI change in such a way that open source models can't catch up within a few weeks.


I would also like to believe that, but there are countless examples showing the difference. Companies have no time to figure out which of the open-source offerings is the best. Even worse, they don't have time to switch from one project to another, or back to OpenAI if OpenAI releases a new state-of-the-art model.


And where are these open source models where I can go to a URL and do all the things I can do in ChatGPT or through OpenAI API keys? I googled a couple of weeks ago to find hosted versions of these open source models to try, and every one was either down or woefully poor.

OpenAI and MS are going to win because they have a package to go and it’s ready and available and working well - they have set the benchmark. I’m not seeing any evidence of this in the OSS community thus far.

Until I can spin up a Docker image capable of the same as OpenAI on Hetzner for 30 bucks a month, it's not in the same league.


>Until I can spin up a docker image capable of the same as OpenAI in hetzner for 30 bucks a month

I do exactly this with https://github.com/nsarrazin/serge

Hetzner will install any hardware you send them for $100. So you can send them a $200 Tesla P40 (24 GB) and run 33B-parameter models on the GPU at ChatGPT speeds without increasing your monthly cost.
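For reference, deploying serge is roughly a one-container affair. A docker-compose sketch, loosely based on the project's README at the time (the image name, port, and volume paths are assumptions and may have changed, so check the repo before relying on this):

```yaml
# Illustrative compose file for serge; verify image name and ports
# against https://github.com/nsarrazin/serge before use.
services:
  serge:
    image: ghcr.io/nsarrazin/serge:latest
    ports:
      - "8008:8008"                    # web UI and API
    volumes:
      - weights:/usr/src/app/weights   # downloaded model weights
      - datadb:/data/db                # chat history
    restart: unless-stopped

volumes:
  weights:
  datadb:
```

That's the sense in which it competes with the hosted offerings: one `docker compose up -d` on a rented box and the data never leaves it.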


That $200 card's price seems to have been hit hard by inflation in Finland [1]

[1] https://www.proshop.fi/Naeytoenohjaimet/HP-Tesla-P40-24GB-GD...



One issue with the current generation of open source models is that most are based on some LLaMA core architecture, which isn't licensed for commercial use. Once you get to the point of spinning up a full and easy API and selling API credentials, you're running into the commercial-use clause. Once we have a LLaMA alternative (or a more permissively licensed separate architecture), I guarantee hosting providers like Render or Modal will come in with an API offering. Just waiting on those core models to improve their licensing, would be my guess.


> Until I can spin up a docker image capable of the same as OpenAI in hetzner for 30 bucks a month - it’s not in the same league.

Yes, you are right

But that's irrelevant to the point here, which is about the dynamics of the market over a longer window than "what is available to use immediately today": a "moat" is a different thing from "a current lead".



