
$6.6B raise. The company loses $5B per year. So all this money literally gives them just an extra ~year and change of runway. I know the AI hype is sky high at the moment (hence the crazy valuation), but if they don't make the numbers make sense soon then I don't see things ending well for OpenAI.

Another interesting part:

> Under the terms of the new investment round, OpenAI has two years to transform into a for-profit business or its funding will convert into debt, according to documents reviewed by The Times.

Considering there are already lawsuits ongoing about their non-profit structure, that clause with that timeline seems a bit risky.



> if they don't make the numbers make sense soon then I don't see things ending well for OpenAI.

This is pretty much obvious just from the valuations.

The wild bull case where they invent a revolutionary superintelligence would clearly value them in the trillions, so the fact that they're presently valued an order of magnitude less implies that it is viewed as an unlikely scenario (and reasonably so, in my opinion).


You don't need science fiction to find the bull case for OpenAI. You just have to think it stands to be the "next" Google, which feels increasingly plausible. Google's current market capitalization is in the trillions.


Google is a digital advertising company. OpenAI hasn't even entered the ads business. In the absolute best case they can take over a large chunk of Google's search market share, sure, but that still doesn't make it anything similar to Google in terms of finances. How do they start making the queries profitable? What do they do when their competitors (Claude, Gemini, Llama, Mistral, Grok and several others) undercut them on price?


Google didn’t start as an ads company. It started as a blank text box that gave you a bunch of good answers from the internet in a list of links.

Were there competitors that did the same thing? AltaVista? Yahoo? Did they undercut on cost? Google was free, I guess. But Google won because it maintained its quality, kept its interface clean and simple, and kept all the eyeballs as a result. Now Google is essentially the entry point to the internet, baked into every major browser except Edge.

Could ChatGPT become the “go-to” first stop on the internet? I think there’s a fair chance. The revenue will find its way to the eyeballs from there.


Well when you describe it that way, OpenAI also started as a blank text box that gave you a bunch of good answers, and they've already expanded with other services.

I already use ChatGPT as my first go-to stop for certain search queries.


I guess the difference (at least at comparable development stages) is that a single user query cost almost nothing for Google, compared to how much each ChatGPT query costs to run.

I wouldn’t be surprised if OpenAI were still losing money even with the same CPM that Google search has.


I am not bearish on OpenAI, but the analogy is flawed in that Google probably raised, I don't know, less than $50 million before it was actually profitable.


It quickly moved into ads though. Incorporated late 1998 and started selling ads in 2000.

Normal people would need to start using a ChatGPT-owned interface for search to make an ad-based business viable, surely? And there's no real sign of that even beginning to happen.


This subthread is full of people explaining why they don't believe OpenAI could successfully match Google's financial performance. Sure. I'm not investing either. My point isn't that they're going to be successful, it's that there are plausible stories for their success that don't involve science fiction.


People don’t seem to understand that investment portfolios are personal, that just because an investment doesn’t make sense for their portfolio doesn’t mean that it doesn’t make sense for anyone’s. Allocating a tiny fraction of a portfolio to high risk/high reward investments is a sound practice. When those portfolios are, say, large pension funds, the total sum to invest can be hundreds of millions.

Dissenters should consider that there might be short plays: if what they think is true, they could make some money.


Google also offered a free product. OpenAI isn’t offering a free product but a subscription product plus a metered API product, among others. Their economics are structurally better than Google’s, assuming they can keep growing their captured market share. Their outrageous costs are also opportunities to optimize, including massive amounts of R&D, etc. They don’t need to be profitable now; in fact, as Bezos demonstrated with Amazon for many years, profit is an indication you’ve run out of uses for capital to grow.


> OpenAI isn’t offering a free product

I encourage you to visit https://chatgpt.com in incognito mode.


That’s demoware.


That's like saying Youtube isn't free because there's a subscription...


It’s saying that just because there’s a free offering doesn’t mean it’s a free product. Anyone who has ever built a shareware or demoware product understands the free offering is a funnel to the paid offering and doesn’t exist as an independent product but as an advertisement for the paid one.


Yes, but still a free product. Unless you say that in the future the product won't have a free offering anymore, it's a free product.


That’s literally not how people in the business of creating product strategies classify things.


Yes, it is? Usually, products with a free offering + subscriptions get classified as "freemium".


No it isn’t. The free tier of YouTube makes money from ads. The free tier of ChatGPT is just a funnel for the paid tier. It’s a marketing cost.


Don't you think they will put ads on ChatGPT?

There are several ways to monetize a product (they already make use of some of them).


> Their economics are structurally better than googles assuming

Are they? I would guess that the cost per query for Google, even back then, was insignificant compared to how much OpenAI is spending on GPU compute for every prompt. Are they even breaking even on the $20 subscriptions?

During their growth phase Google could make nothing from most of their users and still have very high gross margins.

OpenAI not only has to attract enough new users but also to ensure that they are bringing in more revenue than they cost. Which isn’t really a problem Google or FB ever faced.

Of course, presumably more optimized models and faster hardware might solve that long term. However, consumer expectations will likely keep increasing as well, and OpenAI has a bunch of competitors willing to undercut them (e.g. they have to keep continuously spending enough money to stay ahead of “open/free” models, and then there is Google, who would probably prefer to cannibalize their search business themselves than let someone else do it).

> in fact as bezos demonstrated with Amazon for many years, profit is an indication you’ve run out of uses for capital to grow.

Was Amazon primarily funding that growth using their own revenue or cash from external investors? Because that makes a massive difference which makes both cases hardly comparable (Uber might be a better example).


Ads are the most likely monetization path for OpenAI. They want to capture as many users as possible right now and can pull the trigger on ads whenever they want to start juicing users further. As long as the funding flows they can delay the ads. Google and Facebook were ad-free initially for years, only switching to ads for monetization after building up a critical user mass.


The cost per user for Google and FB was/is almost insignificant (relative to LLMs). So all the ad revenue was almost free cash.

It’s not even clear if OpenAI is breaking even with the $20 subscription just on GPU/compute costs alone (the newer models seem to be a lot faster so maybe they are). So incrementally growing their revenue might be very painful if they keep making the UX worse with extra ads while still simultaneously losing money on every user.

Presumably the idea is that costs will go down as HW becomes faster and the models themselves more optimized/efficient. But LLMs already seem to be almost a commodity, so it might become tricky for OpenAI to compete with a bunch of random services using open models that offer the same thing (while spending a fraction on R&D).


They’re already monetized in lots of ways. I doubt ads make much sense or are necessary.


I would say they are likely already working through a potential ads experience


With slipping consumer standards around separating ads from real content, OpenAI is in a position to advertise much more insidiously than Google.


Google started as a useful search service then corrupted itself with ads. This is the same thing that Facebook and Reddit did. It’s not hard to imagine an LLM that provides “sponsored” responses.

So it’s a long term bet but the idea that Google would lose to an LLM isn’t far fetched to me.


The unit economics appear to be substantially different.


Also, Google's ads are not just in their own products; they are absolutely everywhere, a step above the other big players. And I don't think OpenAI can beat that moat. It is an entirely different game, and very hard to enter, or someone else would have done it already.


But don't they need a moat? They're running against not only every major tech company with access to training data but also all the open source models.

The models will have diminishing returns and other players seem better suited to providing value added features.


You don't need a moat during the gold rush. You need scale: the largest number of the biggest shovels, so the stuff to be shoveled gets shoveled the fastest. There is so much money right now to be sucked up from the world. We're talking valuations in AI at $100M+ per employee.

https://finance.yahoo.com/news/uae-backs-sam-altman-idea-095...

>The models will have diminishing returns

Wasn't that the going thinking before ChatGPT? And before AlexNet. Of course, we'll again be having some diminishing returns until the next leap.


> the stuff to be shoveled

They are spending a lot on shovels but it’s not clear that there is that much “stuff” (consumer demand) to be shoveled.

VC money can only take you so far, you still need to have an actual way of making money.

LLMs might effectively replace Google but they are already a commodity. It’s really not clear what moat OpenAI can build when there are already a bunch of proprietary/open models that are more or less on the same level.

That basically means that they can’t charge much above datacenter cost + a small premium long term and won’t be able to achieve margins that are high enough to justify the current valuation.


The moat is a first-party integration with Windows, a third-party integration with iOS, and first-mover advantage. The discount rate still isn't very high; ~5% is the risk-free rate. 157 billion is a reasonable valuation.


Integrations are not OAI's moat as those are primarily UX developments that are kept by Apple/MSFT. Right now, and if they want, they can change some lines of code to get up and running with another provider, like Anthropic or whatever.

The only moat OAI has right now is advanced audio mode / the real-time audio API, plus arguably o1 and the new eval tools shown the other day, as those are essentially vertical integrations.

And maybe, like you said, first-mover advantage. But it's not that clear, as even Anthropic got ahead in the race for a while with Claude 3.5 Sonnet.


> Google's current market capitalization is in the trillions.

2 trillion. Approximately 13x OpenAI's current valuation. Google nets almost 100 billion a year. OpenAI grosses 4 billion a year.

Wild numbers.


A private pre-IPO investment is a bet on where OpenAI will be 10 years from now, not where they are now.


Yes, but they generally shoot for 5-10x. So they are betting on OpenAI growing gross revenue like 20x assuming their costs stay the same.


The later the investment stage, the lower the expected multiple, but also I don't know that 20x is a crazy expectation to have about OpenAI? It might not happen, but people really fixated on that "might", which is not what venture investing is about.


20x isn't crazy, but it's what they need just to reach parity P/E with Google (assuming their costs remain flat). In order for the investment to make sense they have to grow much, much more than that to account for the extra risk, otherwise you're better off buying Google stock.

If we throw out some conservative numbers, and assume costs will rise super modestly, you have to believe OpenAI's earnings will grow 50-100x for the investment to make sense. They'd have to maintain their current growth rate for 5+ years, but I wouldn't be surprised if their revenue growth is already slowing.


As someone who uses OpenAI's tools every day and generally finds them to be genius morons, I disagree that it feels increasingly plausible they stand to be the next Google.


You're kind of selling investing in Google instead, given that they're one of OpenAI's competitors.


If you think Google wins and crushes them, sure.


In 2023 Google had $307.39B in revenue, and $24B in profit last quarter (suggesting ~$100B in profit this year). Meanwhile OpenAI is losing money and making nowhere near these sums.


In fairness, whilst Google did reach profitability early (given the VCs had got their fingers burned on internet companies in 1999, they didn't have much choice), its revenues were lower than OpenAI's at the IPO stage. The IPO was both well below Google's original hopes and considered frothy by others in the Valley, because at the time the "impressive and widely-used tech, limited as a business" argument seemed to apply completely to a company that did search really well and had just settled the litigation over cloning Yahoo's Overture advertising idea. And their moat didn't look any better than OpenAI's.

And much as AI hype irritates me, the idea that the most popular LLM platform becomes a ubiquitous consumer technology heavily monetised by context-sensitive ads, or a B2B service integrated into everything, doesn't seem nearly as fanciful as some of the other "next Googles". At the very least they seem to have enough demand for their API and SaaS services to be able to stop losing money as soon as VCs stop queuing up to give them more.


Facebook got all the way to an IPO with business fundamentals so bad that after the IPO, Paul Graham wrote a letter to all the then-current YC companies warning them that the Facebook stink was going to foul the whole VC market in the following years. Meta is now worth something like 1.4T.


Facebook grew 100% and had 45% GAAP operating margins the year before their IPO.

Facebook's IPO financials were among the best financials at IPO ever.

OpenAI has negative 130% adjusted operating margins.


I don't think OpenAI is about to IPO.


FB financials were incredibly good at their IPO.

Revenue: $3,711M (88% YoY growth)

Net income: $1,000M

Cash: $3,908M

Tell me how those are bad?


Facebook made it out by committing click fraud against advertisers on a massive scale, which I don't see as a viable path for sama (even ignoring any legal concerns) considering that openAI isn't a platform company.


Look, I don't care. At some point we're just arguing about the validity of big tech investing. I don't invest money in tech companies. I don't have a strong opinion about Facebook, or, for that matter, about OpenAI. I'm just saying, you don't need a sci-fi story about AGI to see why people would plow this much money into them. That's all.


Google has also magnificently shit the bed with Gemini; their ads business is getting raked over the coals in court; they are a twice-convicted monopolist; and they are driving away top talent in droves.

It reminds me of the old joke:

Heard about the guy who fell off a skyscraper? On his way down past each floor, he kept saying to reassure himself: So far so good... so far so good... so far so good.


People have been talking about the downfall of Google and FB/Meta for years now and yet every single year both of them still grow, still print money, and run the most used products in the world by far.

Google's generative AI models are probably used more in a day than the rest combined. Google is a highly profitable business that still has never not grown YoY in its nearly 3-decade history.

In your mind you might think Google is going down, but in reality they have only been going up for nearly 3 decades now.


> their ads business is getting raked over the coals in court

Only the display ads business, which is a fraction of total ads revenue.


It’s all wired together.


Display ads are probably wired to Google infra, but Google search + YouTube + ads can exist on their own.


[flagged]


Product adoption counts, imaginary benchmarks don’t.


Tell me how Search AI Overviews aren't using Gemini?


I used the newly released Gemini Live mode. I thought they were positioning it against ChatGPT's live mode. But the voice is hilariously unnatural. It makes you not want to talk to it at all. If this is the best Google can do, they are years behind the competition.


I just tried it after your message. It has like 5-10 voices. Many of them are very reasonable, some of them quite good on my Pixel phone.


I don't even have any options to choose a different voice. It's just a very mechanical voice. I feel the default Google Assistant voice was way better than this one.


Uber has not had a profitable year from its core business during its lifetime.


> which feels increasingly plausible.

Not really; ChatGPT may have the brand name, but there are other offerings that are just as good and which can be incorporated into existing apps that have a captive userbase (Apple, Google, Meta). Why should I switch to another app when I can do genAI within the apps I'm already using?


> You just have to think it stands to be the "next" Google, which feels increasingly plausible. Google's current market capitalization is in the trillions.

For this, in addition to the "Google" part, they also need to build the "ads" part for monetization, which is not a trivial task either.


Besides name recognition, what’s special about OpenAI at this point?

I was prototyping some ideas with ChatGPT that I wanted to integrate into an MVP. It basically involved converting a user's query into a well-categorized JSON object for processing downstream. I took the same prompt verbatim and used it with an Amazon Bedrock-hosted Anthropic model.

It worked just as well. I’m sure there will be plenty of “good enough” models that are just as good as OpenAI’s models.
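
For illustration, here is roughly what that provider swap looks like in Python. The prompt, example query, and JSON keys are made up for the sketch, and the Bedrock model ID is just one current Anthropic ID that may change over time:

    import json
    import boto3
    from openai import OpenAI

    # Hypothetical extraction prompt: turn a free-form user query into categorized JSON.
    PROMPT = (
        "Extract the user's request as JSON with keys "
        "'category', 'product', and 'urgency'. Respond with JSON only."
    )
    query = "my card reader stopped working and I need a replacement before Friday"

    # Same prompt against OpenAI's API...
    openai_client = OpenAI()
    oai = openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": PROMPT},
                  {"role": "user", "content": query}],
        response_format={"type": "json_object"},
    )
    print(json.loads(oai.choices[0].message.content))

    # ...and verbatim against an Anthropic model hosted on Amazon Bedrock.
    bedrock = boto3.client("bedrock-runtime")
    resp = bedrock.invoke_model(
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 512,
            "system": PROMPT,
            "messages": [{"role": "user", "content": query}],
        }),
    )
    print(json.loads(resp["body"].read())["content"][0]["text"])

The only thing that changes between the two is the client plumbing; the prompt itself carries over untouched.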


> Besides name recognition, what’s special about OpenAI at this point?

nothing


Do you want to know how much traffic OpenAI has pulled off Google in the past two years? Because it's not pretty lol. It's definitely single percentage points if not less than a percent (can't remember the exact numbers). They're a rounding error compared to Google.


Is there a source for this data?

I personally always use ChatGPT over Google.


https://datos.live/predicted-25-drop-in-search-volume-remain...

Love the use of the personal anecdote to refute my point BTW.


It's also plausible they are the next AltaVista.


That assumes that the revolutionary superintelligence is willing to give away its economic value by the trillions. (Revolutionary superintelligences are known to be supergenerous too)


I assume it needs super amounts of hardware and super amounts of energy. No one gets a free ride not even superintelligences or people working for health insurance.


I would put odds on whatever it is generating the revenue having the leverage at the table. 'We pay your salary which allows you to eat' is a poor argument when the opposing one is 'without me your company would be losing money'.


That's true. Thankfully, superintelligences are also poor at negotiating and projecting their own income, so you can still make a handsome profit off of them.


But even if they invent revolutionary superintelligence, big if, what's stopping other companies from following suit? Talent moves fast between these companies, and some people start their own.

Hell, even open-source models are nowadays better than the best models this billions-burning company had just 6 months ago.

I'm lost as to what the moat is here, because I really don't see it, and I don't believe any of these companies has any advantage at all.


It actually represents the scenario where they invent a revolutionary superintelligence that doesn't kill the VCs investing in the firm, and allows them enough control to take profit. In the top range ASI capacity outcomes, the sand god does not return trillions to the VCs.

This actually represents only the narrow "aligned" range of AI outcomes, so it makes sense it's a small one.


Judging by the ones I have met, the VCs probably believe that any kind of superintelligence would by definition be something that would like them and be like them. If it wasn’t on their side they would take it as incontrovertible proof that it wasn’t a superintelligence.


Thanks, made me giggle because it rings true! But what if... its training actually caused it to acquire similar motivation?

(Don't want to think about that too much but man just imagine... A superintelligence with Sam Altman mindset)


I am not sure who you have met, but I have mostly talked to VCs with the same range of optimism and concerns regarding AI as normal technologists.


OpenAI's finances are a bit tricky, since most of their expenses are cloud costs while their biggest investor/shareholder, Microsoft, invested in them mostly with Azure credits. So although their finances seem unsustainable, I think the investors are banking on Microsoft buying them out if things go bad and making even a small ROI, like what they did with Inflection.


I think OpenAI will drop their ambition for AGI and focus on product. They'll never state this of course, but it's clearly telegraphed in this for-profit move.

Research and safety have to take a backseat for cost reduction. I mean, there are many avenues to profitability for them; one I can think of is that they could cut their costs significantly by creating smaller and smaller models that match or nearly match GPT-4, while paid subscribers wouldn't really be able to tell the difference. No one is really challenging them on their benchmark claims.

I think their main challenge is that 5-10 years from now, if their current definition of AGI is still elusive, models of GPT-4 capabilities or similar (Llama 3 can fool most people, I think) will be running locally and freely on pretty much any OS of choice without having to make a single outside API call. Every app will have access to local inference that costs neither developers nor users anything to use. Especially after the novelty has worn off a bit, it's hard to see consumers or developers paying up to use something that's technically better but not significantly enough to justify a $20+/mo subscription or per-token cost. Right now though, local inference has a huge barrier to entry, especially when you think across platforms.

Honestly, I think Google and Apple can afford to spend the cash to develop these models in perpetuity, while OpenAI needs to worry about massive revenue growth for the next few years, and they probably don't really have the personnel to grow revenue aggressively either. It's a research lab. The downside of revenue seeking, too, is that sometimes the pursuit kills the product.


> 5-10 years from now

> models of Gpt4 capabilities or similar

It took Apple how many years to change their base config memory from 8 GB to 16? Somewhere between 8 and 10...

Regardless, I’m not sure running reasonably advanced models locally will necessarily become that common anytime soon on mainstream devices. $20 per month isn’t that much compared to the much higher HW costs; of course, it’s not obvious that OpenAI/etc. can make any money long term by charging that.


Check out this more in-depth financial analysis: https://www.wheresyoured.at/oai-business/

> OpenAI's monthly revenue hit $300 million in August, and the company expects to make $3.7 billion in revenue this year (the company will, as mentioned, lose $5 billion anyway), yet the company says that it expects to make $11.6 billion in 2025 and $100 billion by 2029

> For OpenAI to hit $11.6 billion of revenue by the end of 2025, it will have to more than triple its revenue. At the current cost of revenue, it will cost OpenAI more than $27 billion to hit that revenue target. Even if it somehow halves its costs, OpenAI will still lose $2 billion.
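
For what it's worth, the article's arithmetic is easy to reproduce from the quoted figures alone. A rough sketch (all numbers are the article's, none of them mine):

    # Rough reproduction of the quoted arithmetic, using only the figures above.
    revenue_2024 = 3.7   # $B, expected 2024 revenue
    loss_2024 = 5.0      # $B, expected 2024 loss
    spend_2024 = revenue_2024 + loss_2024          # ~$8.7B total spend
    cost_of_revenue = spend_2024 / revenue_2024    # ~$2.35 spent per $1 earned

    revenue_2025 = 11.6  # $B, the company's own 2025 target
    spend_2025 = revenue_2025 * cost_of_revenue    # ~$27B at the same cost of revenue
    loss_if_halved = spend_2025 / 2 - revenue_2025 # ~$2B lost even with halved costs

    print(f"implied 2025 spend ~${spend_2025:.0f}B, "
          f"loss with halved costs ~${loss_if_halved:.1f}B")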


Same thing was said of Netflix, of Uber, etc...

Venture capital loses money to win marketshare, tale as old as time


Were Microsoft, Google, Amazon and Tesla/Twitter (X), and a whole bunch of other gigantic corporations (Chinese as well) trying to compete for the same market back then?

I don't think Netflix and Uber had even a fraction of the competition that this field will have.


Netflix had all the paid TV channels worldwide, physical media sales and video-clubs as competitors. Uber had (still has) taxis. All of these were entrenched competitors with strong moats.

It's easy with hindsight to underestimate the forces of Netflix's competitors, but consider that even today, the revenue of Pay TV worldwide is still bigger than the revenue of streaming platforms. And unlike streaming platforms, Pay TV makes profits hand over fist and didn't incur crazy debts to gain marketshare. They may be on the way to extinction, but they'll make tons of profits on their way out.


Netflix and Uber both created new product categories in their respective markets. And both were quite established before there was any real competition in those product categories.

Netflix discovered that content was the only moat available to them, after which it was effectively inevitable that they become a major production studio. I think Uber is still trying to figure out its moat, but ironically regulation is probably part of it.

OpenAI also created a new product category, but the competition was very quick to move into that category and has very deep pockets. At some point, they clearly felt regulation might be a moat but it’s hard to see them landing that and winning.


I think the simplified question is if OpenAI is a natural monopoly like Google was/is or if that market has a different structure like an oligopoly (e.g. mobile phone market), "perfect" market, etc. On the technological side it seems obvious (does it?) that we can run good models ourselves or in the cloud. What is less clear if there will be a few brands that will offer you a great UX/UI, and the last mile wins.


> I don't think Netflix and Uber had even a fraction of the competition that this field will have.

Uber had significantly more competition than all of these companies combined. Including from China which they were forced out of.


They totally did in their own ways. Blockbuster. YouTube. Taxis. Google self-driving.


Neither had Zuckerberg giving free rides and movies to people.


Most of the loss comes from the hefty cost of inference, right? OAI runs on Azure, so everything is expensive at their scale: the data, the compute, the GPU instances, the storage, etc.

I'd venture to guess that they will start building their own data centers with their own inference infra to cut the cost by potentially 75% -- i.e., the gross markup of a public cloud service. Given their cost structure, building their own infra seems cheap.


Given their scale and direct investments from Microsoft, what makes you think this is any cheaper? They’ll be getting stuff at or near cost from Azure, and Azure already exists, with huge scale to amortize all of the investments across more than just OpenAI, including volume discounts (and preference) from Nvidia.


I assumed that Microsoft gave them discounts and credits as investment, but the cost by itself is like what any other big customer can get.


Of course the cost isn’t near zero, but is the pricing “at Microsoft’s cost”? If it is, then them building a datacenter wouldn’t save anything, plus would have enormous expenses related to, well, building, maintaining, and continuously upgrading data centers.


Good point. I updated the assumption accordingly. I assume the cost of using Azure cloud after discount but before the credit will be on par with Azure's big customers. And given the scale of OAI, I suspect that the only way to be profitable is to have their own infra.


Wouldn't it be foolish for Microsoft to give them a discount?

Like you could discount say a 10B investment down to 6B or w/e the actual costs are but now your share of OpenAI is based on 6B. Seems better to me to give OpenAI a "10B" investment that internally only costs 6B so your share of OpenAI is based on 10B.

Plus OpenAI probably likes the 10B investment more as it raises their market cap.


The cost of vertically integrated inference isn’t zero either.


As part of Microsoft's last investment in 2023 OpenAI agreed to exclusively use Azure for all their computing needs. I'm not sure what the time limit on that agreement is, but at least for now OpenAI cannot build their own datacenters even if they wanted to.


Microsoft restricted their computing needs provider, but they could have allowed them to build their own datacenters.


If they only have enough for a year or two of runway at current costs how the heck would they have enough capital to build an AI data center?


I cannot believe they don't already get steep discounts via custom contracts with providers.

Someone has to pay for the hardware and electricity.


Building datacenters will take a significant amount of time. If they don’t have locations secured, then even more so.


Are there enough GPUs available at the scale that they need them?


All they have to do in order to make the bet pay off is create a heretofore only imagined technology that will possibly lead us into either a techno-utopia or post-apocalyptic hellscape.

I don’t see why that’s so far-fetched?


My hope is that they verge on failure in 2 years, then turn into a non-profit. This in turn gets subsidized by DARPA, who by then realize they're WAY behind the curve on AI and the investment is helpful towards keeping the US competitive in the race to AGI that many believe to be a 'first to the goal wins forever' scenario. And the contingency for this prop-up? OpenAI has to actually be 'open' to receive the money.

Inevitably this will result in this one company's eventual decline but it'll push everyone else in the space forward.


> OpenAI has two years to transform into a for-profit business or its funding will convert into debt

Last I saw, the whole thing was convertible debt with a $150bn cap. I’m not sure if they swapped structures or this is some brilliant PR branding the cap as the headline valuation.


>I don't see things ending well for OpenAI.

I mean, what exactly do you see happening? They have a product people love and practically incalculable upside potential. They may or may not end up the winners, but I see no scenario in which it "doesn't end well". It's already "well", even if the company went defunct tomorrow.

>that clause with that timeline seems a bit risky.

I'm 99% certain that OpenAI drove the terms of this investment round; they weren't out there hat in hand begging. Debt is just another way to finance a company, can't really say it's better or worse.


I don't think it really matters how much people love their product if every person using it costs them money. I'm sure people would love a company that sold US$10 bills for 25c, but it's not exactly a sustainable venture.

Will people love ChatGPT et al just as much if OpenAI have to charge what it costs them to buy and run all the GPUs? Maybe, but it's absolutely not certain.

If they "went defunct" tomorrow then the people who just invested US$6bn and lost every penny probably would not agree with your assessment that it "ended well".


Model training is what costs so much. I would expect OpenAI makes a profit on inference services.


Running models locally brings my beefy rig to its knees for about half a minute for each query with smaller models. Answering queries has to be expensive too?


The hardware required is the same, just in different amounts.

It’s less (gross) expensive for inference, since it takes less time, but the cost of that time (per second) is the same as training.


Obviously, that's my point.

We can do the math. GPT-4o can emit about 70 tokens a second. API pricing is $10/million for output tokens and $2.5/million for input tokens.

Assuming a workload where input tokens are 10:1 with output tokens, and that I can generate continuous load (constantly generating tokens), I'll end up paying $210/day in API fees, or $76,650 in a year.

Let's assume the hardware required to service this load is a rack of 8 H100s (probably not accurate, but likely in the ballpark). That costs $240k.

So the hardware would pay for itself in 3 years. It probably has a service life of about double that.

Of course we have to consider energy too. Each H100 is 700 watts, meaning our rack is 5.6 kilowatts, so we're looking at about 49 megawatt-hours to operate for the year. Let's assume they pay wholesale electricity prices of $50/MWh (not unreasonable), and you're looking at a ~$2,500 annual energy bill.

So there's no reason to think that inference alone isn't a profitable business.
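
For anyone who wants to poke at the assumptions, here's the same back-of-the-envelope math as a small Python sketch. The token rate, hardware price, and electricity price are the rough guesses from above, not real OpenAI numbers:

    # Sanity check of the back-of-the-envelope inference economics above.
    SECONDS_PER_DAY = 86_400
    output_tok_per_sec = 70                  # observed GPT-4o generation speed
    price_out = 10 / 1_000_000               # $ per output token (API list price)
    price_in = 2.5 / 1_000_000               # $ per input token (API list price)
    input_to_output = 10                     # assumed 10:1 input:output workload

    out_tokens_day = output_tok_per_sec * SECONDS_PER_DAY
    in_tokens_day = out_tokens_day * input_to_output
    revenue_day = out_tokens_day * price_out + in_tokens_day * price_in   # ~$210
    revenue_year = revenue_day * 365                                      # ~$77k

    hw_cost = 240_000                        # guessed price of an 8x H100 server
    payback_years = hw_cost / revenue_year   # ~3 years

    rack_kw = 8 * 0.7                        # 8 GPUs at 700 W each = 5.6 kW
    energy_mwh = rack_kw * 24 * 365 / 1000   # ~49 MWh per year
    energy_cost = energy_mwh * 50            # at $50/MWh, ~$2,500 per year

    print(f"${revenue_day:.0f}/day, ${revenue_year:,.0f}/year, "
          f"payback {payback_years:.1f} yr, energy ${energy_cost:,.0f}/yr")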


That doesn't sound like brilliant margins, to be honest. You've left out the entire "running a business" costs, plus the model training costs. They need to pay their staff, offices, and especially lawyers (for all the lawsuits over the scraped content used to train the models).

It's not unusual for a startup to not be profitable, and they're obviously not as the company doesn't make a profit, but I'm not sure why isolating one aspect of their business and declaring it profitable would justify the idea that this company is inevitably a good investment "even if the company went defunct tomorrow".

Perhaps you meant "win" in the sense of "being influential" or something, but I'm pretty sure the people who invested billions of dollars use definitions that involve more concrete returns on their investment.


Oh they are 100% losing money hand over fist if you include training costs and the eye-watering salaries they pay some of their employees.

I was responding to someone upthread suggesting that they were running even inference at a loss.


You're missing the fact that requests are batched. It's 70 tokens per second for you, but also for 10s-100s of other paying customers at the same time.


All these efficiencies just increase OpenAI's margin on inference. Of course it's not "one cluster per customer" and of course a customer can't saturate a cluster by themselves, my illustration was only to point out that the economics work.


Inference alone totally can be. Just look at banana.dev, runpod, lambda labs, or replicate.

The issue is OpenAI is not just selling inference.

Though I wouldn’t be surprised if there were some hidden costs that are hard for us to account for due to the sheer amount of traffic they must be getting on an hourly basis.


Oh actually banana.dev shutdown. Maybe it’s not as profitable.


70 somethings per second is slow. So that means it does take a very significant amount of resources, considering it's running on the same or better hardware. To sustain 70 things per second for thousands of users, it gets expensive really quickly.


My point is that at current API pricing the users are paying enough to cover inference costs.


> The have a product people love and practically incalculable upside potential

I'm willing to bet that if you swapped out GPT with Claude, Gemini or Llama under the hood 95% of their users wouldn't even notice. LLMs are fast becoming a commodity. The differentiating factor is simply how many latest NVIDIA GPUs the company owns.

And even otherwise, people loving a product isn't what makes a company successful. People loved WeWork as well. Ultimately what matters is the quarterly financial statement. OpenAI is burning an incredible amount of money on training newer models and serving every query, and that's not changing anytime soon.


> I'm willing to bet that if you swapped out GPT with Claude, Gemini or Llama under the hood 95% of their users wouldn't even notice

You can say exactly the same about Google and Bing (or any other search engines), yet Google search is still dominant. Execution, market perception, brand recognition, momentum are also important factors, not to mention talent and funding.

Not everyone who wants to invest can invest in this round. You may bet the investors are wrong, but they put their money where their mouth is. Microsoft participated, even though they already invested $13b.


Thing is, when I go onto Google, I know I'm using Google. When my employees use the internal functions chatbot at my company (we're small but it's an enterprise use case), they don't know whether it's OpenAI or Claude under the hood. Nor do they care honestly.


From an API point of view (i.e. developer/corporate usage), I am quite sure that OpenAI has lower usage than Gemini now, and Anthropic API revenue is like 60% of OpenAI's based on recent reporting. OpenAI is definitely not dominant in this area.

The aspect of corporate usage where OpenAI seems to be ahead is in direct enterprise subscriptions to ChatGPT.


Yeah because no company that attracted funding from lots of top VCs has ever failed...

How is A16Z's massive crypto fund doing again? And what about Softbank's other big bets?


I use LLMs for coding and I would instantly notice. It's GPT-4 or Claude, with Gemini a close third; Llama and the rest are far away. The harder the question, the better OpenAI performs.


> they have a product people love and practically incalculable upside potential.

They are not the only ones with the product. They don't have a moat. They are marginally better than competing models, at best. There is no moat into LLMs unless you happen to find the secret formula for super intelligence, hope nobody else finds it, and lock all of your R&D in the Moria mines so they don't go working and building it elsewhere.

There is no moat here. I can't believe so many intelligent people, even on HN cannot grasp it.


the default answer is to love ChatGPT but be unable to use it because of the prohibition on competition. Who wants to chat with something that learns to imitate you and you can’t learn to imitate it back? Seems like everyone using ChatGPT is sleepwalking into economic devastation…

also, for my use case I eventually found it’s faster and easier and less frustrating to write code myself (empowering) and not get sucked into repeatedly reminding AI of my intent and asking it to fix bugs and divergences (disempowering)

Plus, you can find alternatives now for the random edge cases where you do actually want to chat with an out of date version of the docs, which don’t train on user input.

I recommend we all “hard pass” on OpenAI, Anthropic, Google, basically anyone who’s got prohibitions on competition while simultaneously training their competing intelligence on your input. Eventually these things are going to wreck our knowledge work economy, and it seems like a form of economic self-harm to knowingly contribute to outsourcing our knowledge work to externals…


We already freely distribute entire revision histories with helpful notes explaining everything in the git history.

The git repo and review history for any large project is probably more helpful for training a model than anything, including people using the model to write code.

It sounds like you are happy to not use LLMs. I’m the opposite way; code is a means to an end. If an LLM can help smooth the road to the end result, I’m happy to take the smooth road.

Refusing to learn the new tool won’t keep it from getting made. I really don’t think that code writers are going to influence it that much. The training data is already out there.


It seems like not ending well is the vast majority of outcomes. They don't have a profitable product or business today.

It seems to me the most likely outcome is that they have one replaceable product among many and few options to get a return commensurate with the valuation.

My guess is that investors are making a calculated bet. 90% chance the company becomes irrelevant, 10% chance it has a major breakthrough and somehow throws up a moat to prevent everyone else from doing the same.

That said, I have no clue what confidential information they are showing to investors. For all we know, they are being shown super human intelligence behind closed doors.


> That said, I have no clue what confidential information they are showing to investors. For all we know, they are being shown super human intelligence behind closed doors.

If that were the case, I wonder why Apple passed on this investment.


If that was the case then the valuation would have a couple extra 0s at the end of it. Right now this "super human intelligence" is still figuring out how to count the number of Rs in strawberry, and failing.


> being shown super human intelligence behind closed doors

This seems to be the "crypto is about to replace fiat for buying day to day goods/services" statement of this hype cycle. I've been hearing it at least since gpt-2 that the secret next iteration will change everything. That was actually probably most true with 2 given how much of a step function improvement 3 + chatGPT were.


> The have a product people love and practically incalculable upside potential.

...yet they struggle to find productive applications, shamefully hide their training data and can't substantiate their claims of superhuman capability. You could have said the same thing about Bitcoin and been technically correct, but society as a whole moved in a different direction. It's really not that big of a stretch to imagine a world where LLM capability plateaus and OpenAI's value goes down the toilet.

There is simply no evidence for the sort of scaling Sam Altman insists is possible. No preliminary research has confirmed it is around the corner, and in fact tends to suggest the opposite of what OpenAI claims is possible. It's not nuclear fusion or commercial supersonic flight - it's a pipe-dream from start to finish.


They took money from ARKK and SoftBank this round which suggests that there were a lot of funds passing.


SoftBank wrote a ticket of $500m which is only 2x the min ticket size of $250m.


The AI hype train is like gasoline for somebody’s car. It’s something people pay for to protect themselves against risk.


How does OpenAI manage to spend $5B in one year?

Is that mostly spend in building their own datacenters with GPUs?


Training and inference costs: Paying for GPU time in Azure datacenters.


Yeah, they're buying a bit of time.

Everyone involved is hoping OpenAI will either

a) figure out how to resolve all these issues before the clock runs out, or

b) raise more money before the clock runs out, to buy more time again.


Valued at $157B, and losing $5B a year. What a price-earnings ratio.


With all new tech you've got to value them on expected future earnings really.



