
That's like saying I have a healthy revenue stream from my credit card.



Not quite. In 2 years their revenue has grown ~20x, from $200M ARR to $3.7B ARR. I believe the inference costs pay for themselves (in fact, inference is quite profitable). So what they're putting on their investors' credit cards are the costs of employees & model training. Given it's projected to be a multi-trillion-dollar industry and they're seen as a market leader, investors are more than happy to throw in interest-free cash now in exchange for a variable future stake in the form of stock.
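For the arithmetic, a quick sanity check (the $200M and $3.7B figures are the ones quoted above):

    # Growth multiple and implied annual rate from the ARR figures above.
    start_arr = 200e6   # ~$200M ARR two years ago, as quoted
    end_arr = 3.7e9     # ~$3.7B ARR now, as quoted
    years = 2
    multiple = end_arr / start_arr          # total growth multiple
    cagr = multiple ** (1 / years) - 1      # implied compound annual growth
    print(f"{multiple:.1f}x over {years} years, ~{cagr:.0%}/year")
    # -> 18.5x over 2 years, ~330%/year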

That's not quite the same thing at all as your credit card's revenue stream, because you'd pay ~18%+ annual interest (APR) on that "revenue." If you recall, AMZN (& really all startups) go through an early phase where they over-spend on R&D to grow faster than their free cash flow would otherwise allow, to stay ahead of the competition and dominate the market. Indeed, if investors agree and your business is actually strong, this is a sound play, because you're leveraging some future value into today's growth.
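To make the contrast concrete, here's a toy comparison of $1 of credit-card debt at ~18% APR vs. $1 of interest-free investor cash over the same two years (purely illustrative):

    # $1 financed on a card vs. $1 of investor cash, after 2 years.
    apr, years = 0.18, 2
    card_debt = (1 + apr) ** years   # ~1.39: you owe ~39% more
    investor_cash = 1.0              # no interest accrues; investors hold equity instead
    print(f"card: ${card_debt:.2f} owed, investors: ${investor_cash:.2f} owed")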


All well and good, but how well will that work if the pattern continues and the best open models remain less than a year behind what OpenAI is doing?

How long can they maintain their position at the top without the insane cashflow?


One system will be god-like, and then it doesn't matter.


These types of responses always strike me as dogmatic.


Reminds me of the crypto craze where people were claiming that Bitcoin was going to replace all world currencies.


Only the USA is running a Manhattan Project. Nothing to see here, really. Go back to bed.


Platform economics "works" in theory only up to a point. It's super inefficient if you zoom out and look not at the system level but at the ecosystem level. It hasn't lasted long enough to hit its failure cases. Just wait a few years.

As for OpenAI, given DeepSeek and the fact that a lot of use cases don't even need real-time inference, it's not obvious this story will end well.


I also can't see it ending well for OpenAI. This seems like it's going to be a commodity market with a race to the bottom on pricing. I read that NVIDIA sells H100s at roughly a 10x markup over cost (i.e. a ~90% gross margin), which means that someone like Google making their own TPUs has a massive cost advantage.
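To be pedantic about the terminology (a margin can't exceed 100%): a 10x price-to-cost ratio is a ~900% markup, or equivalently a ~90% gross margin. A quick sketch, where the $3k unit cost is a hypothetical number just to make the arithmetic concrete:

    # Markup vs. gross margin on a 10x-priced part (illustrative cost only).
    unit_cost = 3_000       # hypothetical manufacturing cost per unit
    sale_price = 30_000     # 10x the cost
    markup = (sale_price - unit_cost) / unit_cost     # 9.0 -> 900% markup
    margin = (sale_price - unit_cost) / sale_price    # 0.9 -> 90% gross margin
    print(f"markup: {markup:.0%}, gross margin: {margin:.0%}")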

Moore's law seems to be against them too... hardware is getting more powerful, small models are getting more powerful... It's not at all obvious that companies will need to rely on cloud models vs. running locally (licensing models from whoever wants that market). Also, a lot of corporate use probably isn't that time-critical, and can afford to run slower and cheaper.
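Back-of-envelope on that point. Every number below is a hypothetical placeholder (GPU price, throughput, API rate); the point is only the shape of the math:

    # Local-vs-cloud break-even sketch; all inputs are made-up placeholders.
    gpu_cost = 30_000          # hypothetical GPU purchase price, $
    gpu_life_years = 3         # amortization period
    tokens_per_second = 1_000  # hypothetical sustained local throughput
    api_price = 2e-6           # hypothetical API price, $ per token

    seconds = gpu_life_years * 365 * 24 * 3600
    local_price = gpu_cost / (tokens_per_second * seconds)  # amortized $/token
    print(f"local: ${local_price:.1e}/token vs API: ${api_price:.1e}/token")
    # At high utilization, amortized local cost can undercut the API price;
    # idle hardware or latency-sensitive workloads flip the comparison.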

Of course, the US government could choose to wreck free-market economics by mandating that powerful models be run in "secure" cloud environments, but unless other countries did the same, that might put the US at a competitive price disadvantage.


Have they built their own ASICs for inference like Google and Microsoft have? Or are they using NVIDIA chips exclusively for inference as well?


The rumors I've heard are that they have a hardware team targeting a 2026 release, but no production ASICs at the moment.



