
I find the $5 billion a year burn rate amazing, and OpenAI’s competition is stiff. I happily pay ABACUS.AI ten dollars a month for easy access to all models, with a nice web interface. I just started paying OpenAI twenty a month again, but only because I am hoping to get access to their interactive talking mode.

I was really surprised when OpenAI started providing most of their good features for free. I am not a business person, but it seems crazy to me not to try for profitability, or at least to get close to profitability. I would also like to know what the competitors’ burn rates are.

For API use, I think OpenAI’s big competition is Groq, serving open models like Llama 3.1.




> it seems crazy to me to not try for profitability

A business is worth the sum of future profits, discounted for time (because making money today is better than making money tomorrow). Negative profits today are fine as long as they are offset by future profits tomorrow. This should make intuitive sense.
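Concretely: at, say, a 10% discount rate, a dollar of profit ten years from now is worth about 1/1.1^10 ≈ $0.39 today, so large losses now can still pencil out if the eventual profits are big enough.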

And this is still true when the investment won't pay off for a long time. For example, governments worldwide provide free (or highly subsidized) schooling to all children. Only when the children become taxpaying adults, 20 years or so later, does the government get a return on its investment.

Most good things in life require a long time horizon. In healthy societies people plant trees that won't bear fruit or provide shade for many years.


Yes. If ChatGPT-like products will be widely and commonly used in the future, it's much more valuable right now to acquire users and make their usage sticky (through habituation, memory, context/data, integrations, etc.) than to monetize them fully.


I’m not super familiar with the latest AI services out there. Is Abacus the cheapest way to access LLMs for personal use? Do they offer privacy and anonymity? And what is their stance on censoring answers?


I don’t use Groq, but I agree the free models are probably the biggest competitors, especially since we can run them locally and privately.

Because I’ve seen a lot of questions about how to use these models, I recorded a quick video showing how I use them on macOS.

https://makervoyage.com/ai


Local private models are not a threat to OpenAI.

Local is not where the money is; it’s in cloud services and API usage fees.


They aren’t in terms of profitability, but they are in terms of future revenue. If most early adopters start self-hosting models, then a lot of future products will be built outside of OpenAI’s ecosystem. Corporations will also start looking into how to self-host models, because privacy is a primary concern for AI adoption. And we already have models like Llama 3.1 405B that are close to ChatGPT.


Have you paid much attention to the local model world?

They all tout OpenAI-compatible APIs because OAI was the first mover, so there’s no real threat of incompatibility with OAI.
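For anyone who hasn’t tried it, here is roughly what that compatibility looks like (a minimal sketch: the official openai Python client pointed at a local Ollama server; llama3.1 and the default port 11434 are just illustrative, swap in whatever you actually run):

    # Sketch: use the OpenAI Python client against Ollama's
    # OpenAI-compatible endpoint instead of api.openai.com.
    # Assumes Ollama is running locally and llama3.1 has been pulled.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:11434/v1",  # Ollama's local endpoint
        api_key="ollama",  # required by the client, ignored by Ollama
    )

    resp = client.chat.completions.create(
        model="llama3.1",
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(resp.choices[0].message.content)

Change base_url back and the exact same code talks to OpenAI, which is the point: the tooling moves freely in both directions.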

Plus these LLMs don’t have any kind of interface moat. It’s text in and text out.


Just because Ollama and friends copied the API doesn't mean they're not competitive. They've all done it for the same reason others copied the S3 API: ease of integration and a lower barrier to entry during a switching event, should one arise.

> Plus these LLMs don't have any kind of interface moat.

The interface really has very little influence. Nobody in the enterprise world cares about the ChatGPT interface because they're all building features into their own products. The UI for ChatGPT has been copied ad nauseam, so if anyone really wanted something that looks and feels the same, it's already out there. Chat and visual modalities are already there too, so I'm curious how you think ChatGPT has an "interface moat"?

> Local private models are not a threat to openai.

There are lots of threats to OpenAI, local models being one of them. If OpenAI's approach is to continue at their current burn rate and hope they will be the one and only, I think they're very wrong. Small, targeted models cover many more use cases than a bloated, expensive, generalized model. My guess is that long term, OpenAI either becomes a replacement for Google search or it ultimately fails. When I look around I don't see many great implementations of any of this, mostly because most of them look and feel like bolt-ons to a foundation model that try to do something slightly product-specific. And even in those cases, the confidence I'd place in these products today is relatively low.


My argument was that because Ollama and friends use the exact same interface as OpenAI, tools built on top of them are also compatible with OpenAI’s products. Those tools don’t pull users away from OpenAI, so the local model world isn’t something OpenAI is worried about.

There is no interface moat, and no reason for a happy OpenAI user to ever leave, because they can enjoy all the local-model tools with GPT.


Who cares about the interface? Not everyone is interested in conversational tasks. Corporations in particular need LLMs to process their data, and a RESTful API is more than enough.


By interface, I meant API. (The “I” in API)

I should’ve been more clear.


I use Ollama running local models about half the time (from Common Lisp or Python apps) myself.


OpenAI’s features aren’t free; they take your mind-patterns in the “imitation game” as the price, and you can’t do the same to them without breaking their rules.

https://ibb.co/M1TnRgr


> it seems crazy to me to not try for profitability

I'm reminded of the Silicon Valley bit about no revenue: https://youtu.be/BzAdXyPYKQo

It probably looks better to not really be trying for profitability and lose $5bn a year than to try hard and lose $4bn.




