OpenAI may lose $5B this year and may run out of cash in 12 months (twitter.com/garymarcus)
86 points by robertguss 85 days ago | 82 comments



Why is this a link to a tweet by a guy whose entire public persona is "AI bad! AI bad! AI bad!" rather than the actual article?


Probably because theinformation.com is hardwalled and thus, unfortunately, off topic for HN. If there's a more substantive article that people can actually read, we can switch the URL to that.


Here's a non-paywalled article that was posted recently: https://news.ycombinator.com/item?id=41063097


Thanks!


By the way, this article is complete BS. OpenAI partners with MSFT and AAPL; it's impossible for them to run out.


Serious question, is this an actual risk or is this the business equivalent of writing a panicked tweet that my car has lost 6 gallons of fuel already and may run out in a matter of hours? Because to me it sounds closer to the latter


We're in 2024, a year where venture capital is drying up and interest rates are much higher.

> that my car has lost 6 gallons of fuel already and may run out in a matter of hours?

Are you driving through a large desert where gas stations are known to be few and far between?

Current economic conditions do not look good for anyone raising capital today. Maybe things are fine a year from now but this ain't the 2010s anymore.

OpenAI has enough name brand recognition that they're probably fine (maybe they get a loan at a % higher than healthy, but that's survivable). But there are clearly risks of running out of cash this year.


AI companies do not seem to be having issues finding cash from VCs, and the idea that OpenAI of all companies might not be able to find investors willing to give them money seems crazy to me, but maybe that's discussed in the paywalled article; I wouldn't know.


Given how much stock Microsoft is putting into them, I would fully expect them to prop OpenAI up with funding, or possibly even buy them outright once the cost is low enough.

Plus, they're still giving away ChatGPT for free and not forcing payment for it. If they were imminently worried about solvency, you'd be seeing changes in business tactics to try and prevent that.


They are desperately trying to get it entrenched in company workflows, so that when they close it off people will pay, no questions asked. It's not clear to me whether people find it useful enough to pay for, and even where they do, I can't make the basic economics work at this sort of cash burn rate.

Like at the moment there's little chance our org would pay more for it per head than we do for our groupware (about $20/mo).

I guess if every single business does it for every seat it'll pan out - and that's why they're keeping it free - but it seems more likely the useful bits of it are gonna be embedded in all the other software we already pay for.


Probably the latter. OpenAI has a ton of revenue, and I'm guessing their massive amount of spending is for developing cutting-edge LLMs. If there's a danger of running out of money, they could always cut back on spending. Or they could raise more, which is probably pretty feasible given that they're OpenAI.


OpenAI is not in the AI business. It's in the platform business. In other words, its biggest customers will make their own AI and LLM-based apps from the services that OpenAI provides.

As the first commenter said, they are the first to market with a service that's usable enough and cheap enough that regular people can now dream up their own AI apps and implement them quickly. This is a going and growing business for them and they're far ahead of any competitor. Don't expect to see OpenAI going away anytime soon.


While I agree with your platform point, I think you still need to address OP's question of

> What is their route to profitability when Meta is giving away similar tech for free?


Quality and accessibility is my belief.

Meta's Llama3x are great models but they're not providing the same quality and/or accessibility as OpenAI does. Take a look at all the products attempting to sit themselves on top of OpenAI's APIs vs those running on Llama3x; OpenAI dwarfs Meta in that regard, today.

We avoid OpenAI's models/API because of client data sharing constraints and are instead using OSS models, including L3x, but it appears most do not see that as a barrier to adoption and are moving forward with OpenAI's offerings.

We shall see, however, whether the work OpenAI is doing will pay for itself in the long term, and whether OSS models can match the gains made with the funding the commercial offerings are receiving.


You really think people aren't iterating on OpenAI because it's easy and will then look for cheaper alternatives if their apps take off? I will if the economics make sense.


Meta is giving away the model for free, but running that model is not free at all; it's quite expensive. At some point it becomes the usual discussion: pay for a service, or pay for infra resources plus devops.


A few come to mind...

Brand power

Exclusive data deals

Being able to call an API vs having to pay for & devops an always on model yourself


Meta is giving away the models, but I don't have the hardware to run them. OpenAI offers me a straightforward way to use them.
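As a rough illustration of the hardware problem (a back-of-the-envelope sketch only; the 70B parameter count and bytes-per-parameter figures are illustrative assumptions, and KV cache and activations need more memory on top):

    # Back-of-the-envelope VRAM estimate for hosting a 70B-parameter model.
    # Weights only; KV cache and activations are extra.
    def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
        # params_billion * 1e9 params * bytes, divided by 1e9 bytes per GB
        return params_billion * bytes_per_param

    for label, bytes_per_param in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
        print(f"{label}: ~{weight_memory_gb(70, bytes_per_param):.0f} GB")
    # fp16: ~140 GB, int8: ~70 GB, int4: ~35 GB -- all beyond a typical
    # consumer GPU, which is why a hosted API is the easy path for most people.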


That's kind of like saying in 1999 that Google is really in the business of providing custom search bars to companies' web sites. They were (and maybe still are) providing these custom search bars but they didn't become a separate product because search is "really just one thing". And similarly LLM output only seems unique. Everything a company can do to customize LLM output can be done by their customers as well since it's all "prompt engineering" (simplistic stuff plus some voodoo).


My anecdotal evidence to this effect:

The OpenAI Playground. I keep it open as my main chat interface to their product for random queries, curiosity, etc. It's handy enough to load up the page, write a query, and get a result.

That is - until they instituted some sort of weird "logout policy" that makes it time out randomly every X-days or after Y-time of non-use. Okay, fine, I don't mind logging in each random time this happens.

Then they added Captcha to it, except it's the weirdest and most-failing type I've ever encountered (no idea how they managed to handroll their own and F it up this bad.)

So now I don't use it anymore, as it was always inconvenient when you just really wanted to "quickly ask AI something" like you do with Google. They don't want me as a client; they want me to pay someone else who will generate queries to them via some stupid API key. So now I pay someone else, who, bear in mind, is spending lots of time and effort to remove the need for OpenAI entirely by answering my queries in-house.


You could say the same thing about virtually any large tech company that does B2C and B2B, which is basically all of them.


"regular people can now dream up their own AI apps and implement them quickly" - this isn't going to happen, at least not soon enough to offset burning $5B/yr


What I mean here is "companies that don't run LLM servers." That's an expensive business and OpenAI offers it as an easy to use service with an API.


I wonder about OpenAI's moat. Thanks to advances in hardware and rapidly improving open-source ML frameworks, it's getting significantly easier and cheaper to replicate what they've built over time. That's not the case for most startups: it's not really any easier to build an Uber clone today than it was ten years ago.

OpenAI depends on spending vast amounts of money to stay a year or two ahead of the competition. I'm doubting whether that's a justified tradeoff.


Open source just caught up to GPT-4, which was released over a year ago. Don't you think all of the advances in hardware also play into their hands? They have GPT-5 in the pipeline and are likely hard at work planning and prepping for GPT-6. A year or two beyond the competition, at this point, is their moat.


> A year or two beyond the competition, at this point, is their moat.

That doesn't seem like much of a moat.

> They have GPT-5 in the pipeline and are likely hard at work planning and prepping for GPT-6.

I wouldn't be surprised if there's an element of diminishing returns here. The improvement from GPT-4 to GPT-5 is likely much less than from GPT-3 to GPT-4.


And they are going to run into source data problems. The internet is not producing that much more quality content, and now they have a problem with AI-generated clickbait.


If you think Nadella will let OpenAI run out of money, you're dreaming.


Of course he will. He's giving them Azure credits, and those cost him less than the alleged $13bn. He has thrown a spanner in Google's works with his "investment" into OpenAI, and that is worth something to MSFT, because it distracted Google for a few years. Unfortunately, LLMs and the rest of AI today are still not good enough to do any useful work, so it's all going down the same path as VR/AR. Ultimately, AI has no platform, no APIs, no standards, so it's impossible to achieve the economies of scale and the low barrier to entry you get building on top of something like TCP/IP. Google hoovered up information and made it accessible via their search service; LLMs are amazing at turning any amount of information into garbage.


> LLMs and the rest of AI today are still not good enough to do any useful work, so it's all going down the same path as VR/AR

As a former AR/VR dev, this is a wild take to me. Have you used Cursor? Have you ever had to write copy for a website? Have you ever tried to learn something new or ideate with ChatGPT?

When I was working on immersive apps, I would only ever pick up my device to do dev work. Very rarely would I pick it up outside of that (admittedly, I'm not really a gamer). But I use these new generation of AI tools many times a day.


I have. I tried LLMs for creative writing; it was shit. I tried using them for translation; it went off the rails within the first page, then refused to continue. I tried using them to write code; I got either an ethics lecture or perfectly testable code that does not do what I asked for. I got tired of trying different LLMs, as it makes no sense to waste time and money on this shit. AI companies are the most incompetent IP thieves: they steal content and can't produce anything of value with it.


I think my previous comment came off a little combative. I am genuinely curious about your experience.

I fall somewhere in the indie-hacker/entrepreneur/not-quite-solo-preneur category, and these tools have provided a lot of value to me. I am maybe 3x more productive with them. They are definitely not without their flaws.

If you haven't tried Cursor yet, I recommend giving it a shot.


Not OP

To me AI is like an Ouija board. It works if you believe it works, and if you doubt it, it falls completely flat. However, it's not magic; I think it's something self-fulfilling in the phrasing. If you approach it with suspicion and prompt it to 'see if it can', the model will auto-complete itself into failure. If you take a sunny, optimistic approach, the auto-complete grants your wish.

It's also just straight-up non-deterministic, like a roulette wheel, and some people get 100 jackpots in a row (this sucked me in at first, believing the world was about to change) and some people run out of luck so quickly they never got to feel the magic. On average it's just OK and kind of annoying and not worth $20/month.


Yeah, agreed. It's not garbage garbage, it's just silly and useless, and all I can think is "meh". I really find it incredible they call those token generators "AI".


If you say so. I encourage you to invest according to your beliefs. But it's worth remembering that millions of people are already paying OpenAI and others for the value their models provide. GPT-4 can write code at least as well as a new college grad.

Also, there's no such thing as "Azure credits" at this scale. What they give OpenAI is control over billions of dollars of their capex, which is invested in datacenter scale GPU clusters designed by OpenAI, for their own exclusive use. It has as much in common with a startup getting a $1000 cloud credit coupon as a paper airplane does with an A380.


MSFT invested $2bn in a 4% chunk of the London Stock Exchange. https://news.microsoft.com/2022/12/11/lseg-and-microsoft-lau... My hunch is that it will provide a better ROI than OpenAI and its "millions of people" paying for the bullshit generator.


Nvidia/Microsoft would probably buy all of OpenAI before letting it fail.


And then do the typical parting out and consolidation


Hallucinating*


"BSing", these days.-


How will he stop it in your opinion? I don't think he has tens of billions in personal wealth...


He will direct Microsoft to continue investing in them? He's the CEO, and their investment in OpenAI has already paid dividends in the MSFT stock price. There is absolutely no world in which he voluntarily decides, "Actually, a few billion is too expensive for my 3 trillion dollar company, time to jettison the biggest growth driver of the past three years."


Stock price driver. Not growth.


no doubt he won't - he will instead buy it all in a firesale


It would appear Gary Marcus is not a fan of OpenAI [1].

[1] https://news.ycombinator.com/from?site=twitter.com/garymarcu...


OpenAI is at the center of the AI hype vortex. VCs want AI startups. AI startups don't have their own tech; they are reselling OpenAI APIs. Microsoft wants to dump money into OpenAI because they immediately book their investment as Azure revenue, and because it helps them make Google look bad. Nvidia is obviously happy to participate by selling Microsoft the hardware it needs to rent to OpenAI, paid for with the money Microsoft gave OpenAI in the first place.

Nothing about the tornado of money looks like it would end in the next year. It has plenty of momentum.


> OpenAI is at the center of the AI hype vortex

Their logo even depicts this.


Gary Marcus is unquestionably one of the most negative and consistently wrong voices in the AI community. I do not understand why he continues to be given credence or an audience for anything he claims.


From the cases against OpenAI, where ChatGPT was banned in Italy and then a data audit was opened in Greece, what I have learned is that OpenAI is a big liability for Microsoft, its major funder; after my case, Microsoft now claims only a 'Minority Economic Interest.' However, if the data audit reveals deeper involvement, fines could escalate into the billions for any company investing in OpenAI. This risk, along with a large fine, may deter future investments in OpenAI, potentially leading to financial troubles.


The Saudis will happily take up the slack.


Saudis are rich, not foolish.


A major AI washout is coming. I know the owner of a niche, highly profitable medical-billing product, and he added an LLM feature to auto-write and correct notes in the record. Small feature, definitely valuable, but his own engineers built it themselves with open-source software trained on their own data. No third party was involved.

I have no idea how all of these companies will make any money. The funding is washed: it comes from the owners of compute, it's stipulated to be spent only with them, and then the money comes right back. What insane nonsense this bubble is, and in a few years we will look back on it the way we look back on the dot-com crash.


It's also becoming clear that the tech isn't really ready for prime time. It hallucinates too much. Categories like image generation have also not been able to improve beyond slop.

You essentially can't use it for anything mission-critical or production-ready.


It feels like the very early days of aviation. But with people now trying to do transoceanic.-


Don't get me wrong: it's an incredible tech demo, and so much of the stuff feels like magic.

The problem is that there is still a substantial gap between what looks cool and what can be put into production at any scale.

Midjourney churns out fantastic images. But for most paying customers, the goal of image generation is to create something that can be used in a financially useful way - such as an ad campaign.

Midjourney's images just don't give me enough control to create exactly the kind of ads I want. I get a poor approximation at best. Which might be fine if I'm a small business running Facebook ads with a $2,000 monthly budget. But if I'm a large brand with a reputation and creative vision to protect, Midjourney just doesn't give me the control I need.

This is the same problem across all current LLM applications. Great for small businesses, hustlers, entrepreneurs. But not enough control for the big boys to deploy it at scale.


> Midjourney's images just don't give me enough control to create exactly the kind of ads I want. I get a poor approximation at best.

It's quite paradoxical, is it not?

The genie is bottle-unbound. But, not yet controlled ...

> This is the same problem across all current LLM applications. Great for small businesses, hustlers, entrepreneurs. But not enough control for the big boys

What you posit - which may be true - does offer an interesting corollary: It would act as an "equalizer" favoring the 'small guy'?


It can be an equalizer, provided it's neatly tucked away underneath a more performative layer. Coding is a great example. The majority of webapps are so mediocre that AI has no problem replicating their mediocrity. Most webapps are also barely more than glorified CRUD apps - something a cutting-edge AI can string together with brute-force prompting.

Slap a professional looking veneer on top and no one would know you’re running spaghetti code underneath.

Where AI is visible - marketing imagery, written content, ad copy - I feel it has the opposite effect of being an “equalizer”. Its mediocrity stands out like a sore thumb and makes you, the small business owner, appear a tasteless connoisseur of slop.


Winter is coming.


OpenAI has a strong "the real deal" feel to me.

I entered my credit card number and now I can get answers from the best LLM out there via a simple curl command.
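For context, the workflow being described is roughly the following (a minimal sketch in Python rather than curl, hitting OpenAI's chat completions endpoint; the model name is an illustrative assumption):

    # Minimal sketch of "enter a card, then get answers with one request".
    # Assumes OPENAI_API_KEY is set in the environment; "gpt-4o" is illustrative.
    import os
    import requests

    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4o",
            "messages": [{"role": "user", "content": "Explain OpenAI's burn rate in one sentence."}],
        },
        timeout=30,
    )
    print(resp.json()["choices"][0]["message"]["content"])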

I have tinkered with all the competitors, but none had low enough friction for me to become a paying customer.


Have you tried Sonnet? 3.5 is IMO even better than ChatGPT.


OpenAI is so hot and already makes so much revenue, it can raise money at the snap of a finger.

I wouldn't put much stock in this tweet.


They have a hit product, and no investor is going to miss out on that potential. They could IPO and not have to worry about profit for 10 years, and their stock price would still be sky-high.

Or they get acquired by Microsoft or some other massive corporation for $100B.


They're losing advantage on every front: compute, model, deployment, cloud services, community.


And people were afraid of them achieving AGI


Probably why Sam takes very little salary, and makes his money via other methods.


y'know, aside from the massive tax advantages that come from doing that.

You put money in Sam's jar. Sam doesn't put money in the jar. He takes the money out.


Also aliens may visit earth. Everything is possible.


If Earth's governments refuse to act, maybe the aliens can save us from the AI labs.


But Gary Marcus will never run out of takes.


I guess they need that IPO to happen sooner


How can employee costs be $1.5B/year?


Keyword is "could". Currently 2.5k employees; that's $600k per employee, but if there are plans to significantly increase headcount, the per-head figure could be quite a bit lower. It likely includes equity grants. It could also have to do with how the accounting works for equity grants; I heard they were charged in year 0, at the time of hire, but I'm not sure. Does anyone know better?

Clearly they're paying to attract talent.


$600k per employee doesn't sound unreasonable even without a hiring plan; fully loaded cost is typically 150-200% of base pay, which puts average pay at $300-400k. For Bay Area salaries at one of the most prestigious places to work, requiring a highly specialized skill set, that sounds pretty reasonable.
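Spelling out that arithmetic (a rough sketch; the $1.5B and 2.5k figures come from this thread, and the 150-200% loading is just the usual rule of thumb):

    # Back-of-the-envelope: implied average pay from the numbers in the thread.
    total_employee_cost = 1.5e9   # reported annual employee cost, USD
    headcount = 2500              # ~2.5k employees

    fully_loaded = total_employee_cost / headcount   # $600k per head
    for loading in (1.5, 2.0):    # fully loaded cost as 150-200% of base pay
        print(f"implied average pay at {loading}x loading: ${fully_loaded / loading:,.0f}")
    # -> $400,000 at 1.5x and $300,000 at 2.0x, i.e. the $300-400k range above.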


That rule of thumb tends to break down at high salaries. Most of those costs don't scale with salary (like desk space, insurance costs, 401k match which is usually capped, etc).


Is "a highly specialized skill set" consistent with "2.5k employees" ?


How do you normalize for prestige and clout chasing to actually get quality though?


That's a question for their hiring team, not me. But normally companies apply a culture-fit interview. Also, it's not inherently bad: non-compete agreements allow employees to become field experts in niche areas and to jump between companies standing up their infrastructure, getting better and learning from their mistakes each time. Hiring those individuals is a costly but effective way to scale up quickly.

I don't think we should shame anyone for doing that if they're delivering value.


Why not? Employment can cost a lot. They have 2.5k employees, so the estimate works out to $600K per employee on average. Even if you don't count equity compensation, their base salaries are known to be the best in the industry ($250K~$500K?), so $600K makes sense once you include all the other costs of employment beyond just cash.



Good riddance.


The actual source article is in The Information. But it is paywalled. Anyone know why their subscription is so expensive? Is it actually worth it?


Maybe, if he keeps saying the same thing for enough years, one day it will actually come true!

> Remember how I said that deep learning was “hitting a wall” with compositionality in 2001, in 2018 and again 2022?

We get it. We really, really get it. We get it so hard I'm tempted to build a Gary Marcus GPT bot which only knows how to talk about how AI will never ever work and is doomed... any day now.

Talk about having your job replaced by a shell script...




