To me it's a red flag when a company takes on Softbank funding. I worked at a portfolio company earlier in my career.
Their MO is to offer lots of money at inflated valuation vs domestic investors. This is compelling for founders - lots of money to grow, for less dilution.
That said, there's very little value Softbank adds other than the money. No connections, no advice, and it's generally not a helpful long term partnership. They also don't seem to conduct the level of scrutiny that other investors do, because they have so much cash and want to muscle into hot deals. And possibly also because founders wouldn't want to deal with their scrutiny vs domestic options.
Ultimately you take the money when you are greedy or don't have other good options. And neither is a good signal.
I think Sam is driving as aggressively as he can, given AI seems like a winner takes all type market. Domestic investors are balking at the exponential increase in needed investment amounts given economic uncertainty and lack of justified return. Meanwhile Softbank has been catching up and has been dying to get in on OpenAI. So here's the opportunity.
Maybe this works out and OpenAI lands this. But more likely, OpenAI is acting like the music is about to stop and throwing a Hail Mary. And SoftBank's limited partners are going to be left holding the bag.
> given AI seems like a winner takes all type market
Does it, though? LLMs seemed magic when they arrived, and they continue to get better, but it seems like it takes a ton of hand-holding and experimentation to get useful work out of them. That opens up the field for different players to thrive in different niches, finding ways to make AI work for different applications in different industries.
In the realm of using LLMs for software development, for which you'd expect HNers to have a decent amount of hands-on experience, you see multiple LLMs from multiple companies mentioned in every conversation.
I think the LLM success stories are going to be companies that discover niches where the state of the art is sufficient to significantly reduce the amount of labor required for labor-intensive jobs, but it takes a combination of AI mastery and domain savvy to make it happen. Theoretically, companies like OpenAI should have a head start at finding and exploiting those opportunities, but history says the big success stories will emerge as the survivors of a gold rush where thousands or tens of thousands of companies are founded to "bring AI to X" where X is healthcare, insurance, shipping, underwear, etc. 99% of those companies will fail, a few will find seams of ore to exploit, and one or two will become the Microsoft or Amazon of their generation.
What is the support for the idea that AI is a winner takes all market? I don't see any network effects or lock-in in this market. If you built an IDE that queried two different services, I don't think anyone would object or even notice. Is the idea that all of those users providing actual usage data makes the next generation better? I haven't seen much evidence for that either; everyone seems to have slowed down and been disappointed with the pace of their improvements.
I do see that OpenAI has brand recognition that, e.g., Anthropic doesn't have, but what else leads you to think that it is winner take most/all?
> What is the support for the idea that AI is a winner takes all market?
A lot of people seem to have the idea (or at least SELL the idea, I'm not sure if they actually believe it) that this all leads to superhuman intelligence AGI and whoever hits that first is the "winner takes all" winner.
But yeah, firstly, as someone who uses LLMs often, I don't believe there is reason to think that LLMs lead to AGI (and I think people selling the idea that this is a foregone conclusion are hucksters).
And secondly, even if I end up being wrong about that, there's no indication that any of the known groups working on LLMs is very far ahead of the others on LLM technology. So even if you believed that LLMs are going to lead to AGI, I don't see why that wouldn't just mean that 3-5 superhuman intelligences were created around the same time, which is still not a winner takes all situation unless those AIs go full Skynet and only one ends up standing at the end. (This last part is a joke, hopefully).
> A lot of people seem to have the idea (or at least SELL the idea, I'm not sure if they actually believe it) that this all leads to superhuman intelligence AGI and whoever hits that first is the "winner takes all" winner.
... Wait, why would making the basilisk give you a monopoly? Do they think their magic super intelligence would do their bidding or something? Have they never seen any sci-fi?
(The idea that LLMs will lead to superhuman intelligence seems ludicrous on the face of it, but even going along with that it doesn't make any sense.)
> unless those AIs go full Skynet and only one ends up standing at the end. (This last part is a joke, hopefully).
But is it a joke? One could argue that Skynet follows from Omohundro's Basic AI Drives: It's self-protection technologically extended backward in time.
> given AI seems like a winner takes all type market
Unfortunately for OpenAI and Softbank, it seems like AI will not be "winner take all", and may actually be quite commoditized. Switching is as easy as choosing a different model in a dropdown in Cursor or whatever your tool of choice is.
When your local VC can't keep up with other investors, they make bold claims about intangible things like connections and advice that you can't easily verify. Safer to go with the money.
This is a rather tediously long article. The summary as I see it:
1. OpenAI hemorrhages money (on the order of tens of billions of dollars a year).
1a. A subargument that this hemorrhaging is rather fundamental--OpenAI isn't anywhere close to breaking even on operational costs, and it seems that OpenAI is getting sweetheart deals on compute that aren't going to last very long.
2. There are very few entities capable of maintaining the pipeline of money that OpenAI desperately needs.
3. Most of those (this article claims) are unwilling to stump up the cash.
4. OpenAI's capital expenditures (this article claims) are a major (if not existential) source of revenue for its suppliers, so if OpenAI implodes, it presents a risk to many other tech companies as well via the network of suppliers.
The problem with this article is that, as much as I might be inclined to agree with it based on my priors, I just don't see any plausible way that OpenAI implodes spectacularly like that. If the funding dries up, the most likely scenario to me is that OpenAI enters a crunch mode where it tries to eke out an operational profit while begging everybody else (including probably the government) to finance capital expenditures at a reduced rate. Instead of a big bang like Lehman Brothers, it would look a lot more like a long, slow decline where the tech industry underperforms the market rather than explosively driving it.
Sometimes piling on more, weaker evidence weakens the entire argument.
Who cares if Oracle loses $1B? Or $10B? Oracle has $17B of cash on hand.
Who cares if an entity that Nvidia gets 6% of its revenue from collapses? Nvidia's revenue grew over 100% YoY; at that pace, the shortfall would be made up within weeks.
The assertion that OpenAI is the generative AI industry is absurd. There are plenty of other players. There are plenty of other applications. OpenAI's biggest problem is that it's not the generative AI industry, it's just another player. One that has nothing unique to offer.
Also, didn't DeepSeek demonstrate that enormous compute is not a prerequisite? All these analyses completely ignore the possibility of a breakthrough in the technology that drives costs down by orders of magnitude.
It showed that a single training run of a particular model could be done with an amount of compute that is massive, but not so massive that only OpenAI/Google/etc. can afford it. Still far out of reach for you or me, a university, or a mid-tier company, though.
In any case, that's small potatoes here. OpenAI spends most of its money not on training, but on inference. Inference is still way too expensive.
Even if inference got cheaper that's not great for OpenAI. It means that other people will launch a similar chat experience for fewer dollars.
Fundamentally, OpenAI needs a moat. And it has nothing.
It has a long list of content partnerships. And by far the largest user base, which means lots of unique training data. If it can succeed in spamming the open Internet enough to crowd out competition through costs and bot filters, it'll have a pretty good data moat.
The userbase is for an undifferentiated swappable product.
Its data hoard is nothing special compared to any of the other players (Google, Meta, etc.).
And you can see this play out. If its data were really that good, it would be significantly ahead of the competition. Instead, everyone is pretty much in the same place, moving at the same speed.
I've liked some of Ed's previous writing, but this is a craaaazy statement: "The Future of Generative AI Rests On OpenAI".
OpenAI is an over-hyped, over-priced, and under-performing AI company. Fortunately, the underlying LLM/transformer technology is real and valuable and not at all dependent on OpenAI. It would be good to deflate some of the hype around OpenAI and other non-viable companies.
You can adjust the headline like this: "The Future of Generative AI (Market) Rests On OpenAI", then it would be more precise. Basically: if OAI crashes, it will take down all competitors like bowling pins. No one will erase existing NN software from their hard drives, of course.
o1-pro at $200/month isn't holding up very well compared to Gemini 2.5 Pro at $20/month, for instance. It offers similar benchmark performance at perhaps 10% of the speed.
> Before we go any further: I hate to ask you to do this, but I need your help — I'm up for this year's Webbys for the best business podcast award. I know it's a pain in the ass, but can you sign up and vote for Better Offline? I have never won an award in my life, so help me win this one.
What does it mean to canvass people to vote for you without any suggestion that they have actually listened to the podcast?
It goes as far as to undermine the award if there's a suggestion you've received votes from people simply because you were good at marketing. It seems to confuse savvy business practice with an award for merit.
I'd be more interested in user retention. Every AI product pitch I've seen (from the investment side) is silent on this. I have seen some data, though, suggesting user retention is short (3-month average). People try it, find it's not that useful, or even harmful, after the initial experimentation, and dump it.
Gotta keep hype to keep MRR up though even if it’s from different people. They will run out of interest and new users soon. Going to be a big fall.
Models will stagnate on this funding crush and the promises will be gone in a puff of smoke. And everyone will have to unfuck their dependency on it for upselling their existing crap to end users.
What's interesting is that they stopped reporting monthly active users and prefer weekly active users. The author argues that this is a bad sign, but I'm having trouble understanding why.
It’s likely bad because it may show volatility quicker and people will trade on it. That’s bad for the company and the existing users generally.
Anyway, active users alone don't matter, regardless of the period. It's retention AND active users; having the two figures together gives you user-base churn.
Well, I haven't kept any of my subscriptions for longer than 3 months...BUT I'm always subscribed to something... innovation has been so quick, I've just hopped around quite a bit.
Their weekly active user number is 10% of the estimated total number of Internet users (and growing rapidly). It's quite hard for that to happen with a high churn rate, because they'd very quickly run out of new users. For the numbers to remain high, either the users are retained or they return.
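As a rough sanity check, here's the arithmetic, under two assumptions of mine: ~5B addressable Internet users, and the 3-month average retention claimed upthread:

    # Can 500M weekly actives coexist with 3-month average retention?
    wau = 500e6                # reported weekly active users
    retention_months = 3       # average retention claimed upthread (assumption)
    churn_per_month = wau / retention_months    # ~167M users lost per month
    new_users_per_year = churn_per_month * 12   # ~2B fresh users needed per year
    addressable = 5e9          # rough total Internet users (assumption)
    print(addressable / new_users_per_year)     # ~2.5 years to exhaust everyone

Even with generous assumptions, that level of churn burns through everyone who has never tried it in a couple of years, so sustained growth implies people stick around or come back.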
Oh, well, if that is the standard of evidence that is acceptable to you, then here is some contradictory evidence: I paid for ChatGPT for 2 months and then canceled it. Did not even bother getting it through work after that.
On a more serious note, the only party that could prove that ChatGPT's user retention numbers are really good is OpenAI. Since it would be very good for them to do that and they haven't, one should infer that the numbers aren't that good.
That's fine. I'm not the one making claims about ChatGPT's retention. But assuming they have poor retention is just silly.
Why would OpenAI share their retention numbers for their competitors to know? Nobody outside of silly AI-skeptic HN devs would think they have crappy retention.
> That's fine. I'm not the one making claims about ChatGPT's retention
But you did. Here, I'll quote you: "There is no evidence that ChatGPT has a retention problem. Quite the contrary." That is your claim about ChatGPT's retention: that it is good. Later we find out that there is no evidence, only anecdotes, to support your claim.
> But assuming they have poor retention is just silly.
What would be silly is to assume that they have good retention numbers. I laid out the logic in my previous post. What logic are you using to arrive at your conclusion?
> Why would OpenAI share their retention numbers for their competitors to know?
One would usually share those numbers to get more money from investors.
What the fuck would their competitors do with their retention numbers?
> Nobody outside of silly AI-skeptic HN devs would think they have crappy retention.
They say that name calling is the last resort of the truly desperate. No wonder your post ends like this.
> One would usually share those numbers to get more money from investors.
.... yes they share them with their prospective investors, not publicly lol. did you seriously just say this?
> What the fuck would their competitors do with their retention numbers?
The fact that you would even ask this question (and your ridiculous previous comment) shows your ineptitude and how pointless it is to discuss this with you
Run along now, go tell everyone how Dropbox is just a pointless FTP wrapper and will never catch on
> The fact that you would even ask this question (and your ridiculous previous comment) shows your ineptitude and how pointless it is to discuss this with you
Maybe look in the mirror. The fact that you wrote this instead of writing what their competitors could do with their retention numbers shows that there is no point in any discussion with you.
There seems to be a fairly large cohort among the HN crowd that does not understand the hold ChatGPT has on the cultural zeitgeist.
No normie in my life ever talks of using Gemini or Llama in their day to day life. Maybe some turbo twitter addict boomies talk about using grok, but that's about it.
Regular people are leveraging ChatGPT to pick up the slack that Google search enshittification left in their lives. It's a Skype-to-Zoom shift, and lots of people here don't seem to really appreciate it. They are too caught up in esoteric benchmark scores to recognize the tectonic plates moving beneath them.
> And there aren’t enough technical people on the planet to pay those bills.
This rings so true to me. The tech side seems hyperfocused on these diminishing-returns results around benchmarks, but I just don't see a 4% improvement on benchmark X as any kind of market mover.
Props to the author for the well-researched original article.
I disagree with the conclusion. In the current environment, OpenAI can raise money as if it were water pouring from a faucet. If SoftBank can't meet its agreements, there are 50 others waiting to take their place. In the current environment, OpenAI's revenue and capital requirements are not meaningful given their ability to raise.
The environment can change quickly; look at early-2000 vs late-2000 funding for .coms, for example: money went from open-faucet to you're-not-getting-a-dime in a few months. So if the funding environment for AI suddenly shifts, yes, OpenAI is cooked, but so is the entire AI industrial complex, from the smallest barely-billion-dollar startup all the way up to Nvidia.
My conclusion is that OpenAI is not a systemic risk, it's not going to fall or take down a large portion of the tech industry on its own, it will fall if investors sour on the entire AI industry for some reason.
I agree that OpenAI is not technically a single point of failure because if it were to disappear, then all the companies that depend on it could simply switch to Gemini or Deepseek or Llama, etc. It's been well established that they have no moat and there are no significant barriers to switching.
I think the author is essentially using OpenAI as a synecdoche for the entire AI industry. Essentially every AI company relies on frequent, massive cash infusions to stay alive, and if the money starts drying up for OAI, it will dry up for everyone else as well. The author persuasively argues that OAI will need ~$40B per year to stay alive through the end of the decade. Let's assume that the whole industry combined will need something closer to $100B. Assuming that faucet will stay open that long seems pretty crazy to me.
So AI is in this interesting place because while it seems obvious that it's an amazing, powerful, transformative technology... that actually hasn't materialized at an economic level, yet. Arguably the only place where it has a proven positive impact on productivity is in software development, and even there it's pretty obvious it's not a silver bullet.
I'm not interested so much in if or when it will materialize economic value (that gets discussed here ad nauseam) but... how long a runway do you think we have where investors are going to continue to invest based on the promise, before the funding environment does shift? Because it will, eventually. And the LLM industry had better have something to show that justifies current valuations, or things are going to get very messy.
My fear is that we've entered "too big to fail" territory in which too much of the tech sector has too much to lose to be the first ones to start backing out. But that only means the bubble is going to get that much bigger before it detonates (and takes down half the economy with it.)
> Arguably the only place where it has a proven positive impact on productivity is in software development
There are no studies demonstrating this. Having to double-check this randomly hallucinating AI pair-programming colleague is not helping with productivity.
Absence of Evidence is not Evidence of Absence. Software developer productivity is very hard to measure, so we may never get a "good" study of this. It might be like IDEs: at first no one was using them, and people complained that they didn't actually help developer productivity. Fast forward 10 years and almost everyone is using them.
There are probably differences across a few market segments here. You have the AI tooling companies (foundation models, tooling frameworks, vector DBs, etc.), AI product companies ("Agents for Foo"), companies with existing businesses investing in AI projects, and also existing tooling/compute companies, especially the hyperscalers.
The AI tooling companies could lose a lot of hype and valuation, while the hyperscalers, and companies just building incremental automation with LLMs/agents, continue, albeit with less internal investment. How much would that take down the industry? How many existing tech players have fully bet on AI? Even a company like Salesforce, which has pivoted its marketing to all-AI, probably has only a small fraction of its revenue tied to it.
I think it's not clear that AI is driving significant revenue _anywhere_ outside its own ecosystem. The only companies with real AI-driven revenue are selling to people investing in the hype, not to customers with realized revenue of their own.
Have you considered the monetary interactions of the full environment?
For instance, the fact that the only way to do this is through money printing, which non-reserve debt under modified Basel 3 (objective value) qualifies as, and which itself creates artificial distortions that self-sustain until something breaks?
This effectively drives inflation forward while GDP growth trends negative, which only further exacerbates a stagflationary environment.
Imagine the interest-rate exposure risk in an environment of uncontrollably higher interest rates from petrodollar withdrawal, where all production stalls because tech integration has been going on without anyone rational at the helm to mitigate systemic risk.
There really is no point at which OpenAI becomes profitable, because the profit is taken from jobs that would normally pay a wage.
A mechanical sieve, if you will, operating solely on the factor markets, one that absorbs liquidity, creating and sustaining various forms of whipsaw distortion: an economic calculation problem that grows until something breaks.
The price of things, generally speaking, is the ratio of the amount of currency in circulation to that which is available to the general worker. The fewer workers capable of earning it, the closer you get to the point where a critical stall occurs and people stop transacting in the currency.
I agree - and I don't think investors will sour. Considering that AI and accelerationism is becoming close to a religion, and that there are many wealthy people who are tied to it, I think that the money will go somewhere. Even if it doesn't go into OpenAI, it'll go into one of the competitors.
I wish people would understand ChatGPT is a toy for consumers, and the real prize of AI is handling the mountains of data tech has been leeching for years.
Sam puts out vague worries of AGI Armageddon so CNBC anchors, who can't even turn their computers on, can argue about it all day because it's juicy. These are the same people who thought the Metaverse was a huge deal, and not just unfinished plans for a VR Second Life.
Meanwhile, any company with good AI tech and enough data can classify, minimize, eradicate, and automate as much of our lives as possible. There's going to be no regulation of it, because we have never been able to keep up with tech through regulation.
And we are told we signed up for it when we were 15, trying to log into an app store on the new device all of our friends had.
customer support automation - "Sorry, I didn’t understand your request. Please type your issue again…" is the fastest way to make someone hate your brand.
marketing copy - Not paying my marketing team to put out obviously AI-generated text. People will notice, and it will hurt the company's reputation.
sales automation - As in? Perfect place for random hallucinations?
internal knowledge base - Full-text search is pretty good already. Training a model on all company data will be too expensive to justify the benefit of finding the correct technote one second faster. Plus, do we want hallucinations while doing airplane maintenance?
analytics - Not enough to justify current valuations.
Please provide a reference for this outrageous claim. Unless you are just looking for an argument? I am ok with it, but it will cost you: https://youtu.be/uLlv_aZjHXc
Here's some quick math, which I wish the article had made more prominent:
If there are 500 million active users and OpenAI is burning $40 billion per year, then at most, each user costs $80 per year or $6.67 per month. That's the upper limit because there are development costs, so the actual operating cost per user is probably half that (maybe $3 per month).
Thus even assuming they don't come up with new revenue models, the $20 per month Plus plan is profitable.
Moreover, since there are 20 million Plus subscribers, each subscriber is currently subsidizing 24 other users. If they can get the ratio down to 1:6 (each Plus subscriber subsidizing 6 free users), the math would work out and OpenAI would be profitable (at least operationally).
And that's assuming that they don't unlock the huge enterprise business models that, IMO, are going to be the real drivers of revenue.
The whole article is predicated on OpenAI being unable to find profit, but with the article's own numbers, it doesn't seem hard to convince investors that profit will be there.
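If it helps, here's the back-of-envelope written out; the inputs are the figures above, including the rough 50/50 split between development and operating costs:

    # Rough OpenAI unit economics (all inputs are estimates from above)
    users = 500e6         # active users
    burn = 40e9           # yearly burn, USD
    plus_subs = 20e6      # Plus subscribers paying $20/month

    cost_per_user_mo = burn / users / 12        # ~$6.67/month, upper bound
    op_cost_per_user_mo = cost_per_user_mo / 2  # ~$3.33/month, the 50/50 guess

    free_users_per_sub = users / plus_subs - 1         # ~24 free users today
    free_users_covered = 20 / op_cost_per_user_mo - 1  # ~5 covered by one sub

    print(cost_per_user_mo, op_cost_per_user_mo,
          free_users_per_sub, free_users_covered)

On these (very rough) numbers, one $20 subscription covers the subscriber plus about five free users at $3.33, or six at the rounder $3 figure, which is where the 1:6 target comes from.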
[The usual caveats apply: I'm just a random idiot and not a financial analyst. Also, I'm bad at mathing, so please correct me if I'm wrong.]
Clearly it isn't hard to convince investors that profit will be there, they've passed that (pretty low) bar with flying colors so far. The question on everyone's minds is if investors are nuts (which they frequently are).
By your numbers, the math they have to make work requires converting to Plus at 4x the rate they currently do. A 4x increase in conversion rate isn't an optimization; it requires a complete overhaul of their business plan, and we've yet to see any evidence of them having a plan for such an overhaul.
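(Spelling out the 4x with the parent's numbers: 20M paying out of 500M is a 4% conversion rate today. A 1:6 subsidy ratio means 1 paying user in every 7, or about 14%, which is roughly a 3.6x increase. Call it 4x.)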
Interesting analysis, but I'm not sure it's correct. The subscribers are likely consuming much more in resources than the free users. A $20/month user might consume much more than that in compute, thus creating a loss.
The article specifically mentions that OpenAI is losing money on all plans, including Plus and Pro. The 500 million number is strongly inflated by those who only use the service on and off.
It does seem to me that if, for example, OpenAI suddenly could raise no more money, it could drop the free tier, raise the paid price, and make a profit, albeit with far fewer users. The Zitron article seems to assume they are forced to provide free service to millions, but it seems to me that's a choice made to grab market share, one they could step back from.
A lot depends on where you think AI will go. Zitron is obviously skeptical and calls the AI 2027 article "a truly offensive piece of fan fiction." That article starts:
>We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution.
And if that happens investing a lot will make sense. If not I guess OpenAI will have to scale back and drop much of its free service. Or maybe go bust. I'm not sure it's as doom and gloom as Zitron makes out. Maybe a bit of an over allocation of capital to OpenAI and LLMs but things will go on. Even if OpenAI folds Google and others will probably do fine.
Are there any precedents for a subscription service that went from 0 to $5 billion in revenue in two years? Also: they are doing a ton of expensive R&D, but as far as I can tell, that $5 billion in revenue is a profitable and sustainable business. It's not like they're selling compute below cost.
They went from 0 to $5 billion when they were the only game in town. But now the crown of "best model" changes hands every few weeks, from OpenAI to Anthropic to DeepSeek to Google, and that looks likely to be the case for the foreseeable future. People "in the know" have already stopped treating OpenAI as automatically the best, and this knowledge will diffuse to everyone else over time. Without the best model as a moat anymore, there's no reason to expect OpenAI's growth to continue on the trajectory it used to have.
This is actually a SaaS product that has good unit economics. I am skeptical of it, but it's not like Tesla or Uber where the unit economics to justify their valuation are simply impossible.
It’s the complete lack of profitability all the way down, from the deep Azure discounts, to OpenAI’s training and inference costing more than it brings in, to all the OpenAI API wrapper apps losing money on OpenAI subscriptions to get some AI into their product.
There’s nowhere in the chain that gets anywhere close to zero marginal costs.
In any other industry, a business claiming success for selling $100 bills for $50 would be laughed out of the room.
OpenAI is a systemic risk only to current tech valuations and to the near-term availability of fresh capital for new AI infrastructure.
Whatever happens to OpenAI -- and to valuations, and to the availability of capital -- in the short run, technology will continue to progress, in fits and starts, as always.
Some interesting stuff in the article, but it doesn't substantiate its headline conclusion.
Not anywhere close, IMO.
E.g., I'm sure Oracle doesn't want to lose $1B on a datacenter, but that's a bruise, not a catastrophic loss to them. I'm sure NVIDIA likes all the OpenAI revenue, but we're talking about how steeply the slope of their revenue line goes up. (They will inevitably face the problem where the line no longer shoots straight up, and a faltering OpenAI could make that happen sooner, but OpenAI itself isn't a life-or-death problem for them.)
As with many startups (especially ones with high burn rates), OpenAI is risky. It could take down SoftBank and its data center vendors. 6% of Nvidia's revenue is not that concerning, as I'm sure they can find other buyers for those GPUs. But I really don't buy the argument that OpenAI is the gen-AI industry. If they ceased to exist tomorrow, the tech/gen-AI industry would just trundle along. At this point the tech is quite commoditized.
It seems improbable to me that Sam doesn't have a plan lined up for this.
Also not so sure about systemic risk. Sure, there would be a panic and a scramble, but realistically all the frontier models are reasonably close to each other these days. It's not like there is no substitute at all, and the market will rebalance pretty fast.
"It's unclear how OpenAI intends to quadruple its revenue"? They just doubled their active user count in only a few weeks, and their Pro subscription was a new offering as of January. I'd be surprised if their revenue wasn't at least four times larger by the end of this year.
I do not know one person who has bought a Pro plan. And especially now that Gemini 2.5 Pro Experimental is leaps and bounds better than their Pro plan models.
This guy is verging on a crank at this point; every article I've read from him is just spewing FUD about OpenAI, and AI in general.
Just click his name at the top of the article and read the titles. With someone like this, even with tons of cited numbers you have to be very, very careful. The conclusions are motivated, and it would take enormous amounts of time to vet what he's saying, because even though the numbers may be correct, it's what he didn't report that may actually matter.
He's a guy who ran a PR firm; he has no technical background or experience. He specifically stopped covering crypto because AI "generated more clicks". He only covers OpenAI because it's the one with the biggest brand recognition. He doesn't know anything about the underlying technology. His entire thesis is that a single hyped company, one that has no moat by the way, is some kind of single point of failure for the entire tech sector.
> His entire thesis is that a single hyped company, one that has no moat by the way, is some kind of single point of failure for the entire tech sector.
It's a perceptual/psychological point of failure, not an actual one. If OpenAI (the "blue chip" of AI) collapses, that will bring all other AI companies—both LLM providers and AI-powered apps—under scrutiny.
Not only that, but if OpenAI collapses in a way where their APIs become unavailable (not likely, but possible), all of the products that depend on those APIs will have to scramble to replace them. And replacing an LLM is tricky as parity between them on certain tasks is currently limited.
Keep in mind, tech is already in a shaky market. An insane amount of money has been pumped into tech (really, software) with relatively low ROI, both financially and societally. What looked like a burgeoning market was really just a lot of cheap money being pumped into massive gambles for ~10 years.
OpenAI collapsing would be a massive stain on the veracity of anything any tech company says. It could accidentally trigger a short-term stock market crash as investors flee tech stocks in a panic. If it did, you can bet the vultures in the media would run hit piece after hit piece about the unreliability of tech, the limited ROI, etc., to get eyeballs. That would be guaranteed to cause mass panic among retail investors (and force the hand of institutional investors).
How would they look similar? I just told you I don't use the app at all, not 50%/50%. Anthropic targets the enterprise, while OpenAI targets the consumer. Anthropic's primary product today is the API.
People really don't get it. The frontier labs' ability to make money right now is irrelevant. If you build the first AGI you win capitalism. That's what is at stake here. That's potentially worth trillions or quadrillions of dollars (infinite, even?). Everything else is noise.
Oh believe me I'm right there with you but in the few timelines we don't all die this is clearly what's at stake. Altman can crown himself Emperor of the Galaxy...
> If you build the first AGI you win capitalism. That's what is at stake here.
Why? If you had a GPT version right now that was even more intelligent than an average human, what exactly would change?
The big assumption is that you could leverage that into an even bigger lead, but this seems questionable to me-- just throwing twice the compute at it is not going to make it twice as clever. Considering recent efforts, throwing more man(?)-hours into self improvement is also highly likely to run into quickly diminishing returns.
Sure, you might get significant market share for services that currently require a human-in-front-of-screen and nothing else, but what prevents your competition from building the exact same thing?
> This obviates all human labor that can be done on a PC
Sure, this is massive for the actual people doing such things right now (designers, programmers, callcenter employees) because their work becomes almost worthless, and the impacted sector of our economy is pretty large.
You cannot hope to earn what a human used to get for the same work, however, because that is not how markets work (and this should be very obvious on second thought; for examples, just consider how ChatGPT is not priced like a human personal assistant/concierge service, or how agriculture went from "half the economy" status pre-industrialization to a reluctantly subsidised, barely profitable economic footnote nowadays).
So AI is not "winning" capitalism-- because the piece of the economic pie that you can hope to get is not the size of the "human-in-front-of-screen" sector-- you are basically just gonna compete to provide computing services, just like AWS or OpenAI today (which certainly can be profitable, but not really in a capitalism-shattering way).
Don't get me wrong-- I'm not saying that a human-level AI would not be an immensely impactful achievement with far-reaching implications, but I think you are largely overestimating the expected economic gain for the "inventor".
I think you seriously lack imagination... You've read too much Tyler Cowen and imagine AGI only adding .5% to GDP per year lol. Instead of what it actually is which is the beginning of the Singularity.
> You've read too much Tyler Cowen and imagine AGI only adding .5% to GDP per year
I didn't know who Tyler Cowen was.
I'm not making any assumptions about how much growth we would get from machine intelligence. I'm simply stating that the value you can expect from that whole new sector is gonna be much closer to the cost in silicon and energy to provide it than to the former value of the human labor it replaces (history reinforces that view).
In my view, building machine intelligence makes you into a (groundbreaking, innovative) commodity provider medium-term at best, not a god.
I'm personally extremely skeptical of singularity scenarios in general.
Most models of it implicitly assume a sub-exponential, somewhat proportional or even constant relation between achieved "intelligence/utility" and required energy (thanks to "self improvement"), which is an extremely flawed assumption in my view. But what kind of singularity do you actually believe in?
I do agree that machine intelligence is a significant existential risk for our species, though.
>I'm simply stating that the value you can expect from that whole new sector is gonna be much closer to the cost in silicon and energy to provide it than to the former value of the human labor it replaces (history reinforces that view).
Again, lacking imagination. This not only replaces human labor but enables entirely new types of endeavors. Did ICs simply replace the value of vacuum tubes? Did the Internet simply replace the value of snail mail or libraries? You just aren't framing the problem correctly.
>I'm personally extremely skeptical of singularity scenarios in general.
Naturally...
>Most models of it implicitly assume a sub-exponential, somewhat proportional or even constant relation between achieved "intelligence/utility" and required energy (thanks to "self improvement"), which is an extremely flawed assumption in my view.
Until you get close to the efficiency of a human brain, this is a total nothingburger, and AI undergoing RSI has enormous room to improve purely within the algorithmic space. It's virtually certain that an AGI could run on average consumer hardware.
> This not only replaces human labor but enables entirely new types of endeavors.
I completely agree with that! I think we are talking past each other a bit, because to me all your examples seem supportive of my main point: None of those revolutions in tech really made their inventors "break capitalism", but instead the "commodity providers" of the digital age made good business together with all the new industry that sprouted up and used the tech...
> It's virtually certain that an AGI could run on average consumer hardware.
Sure. But then what? I can totally see you getting the equivalent of 10 human white-collar workers for a kilowatt within a decade or two. (Sidenote: I don't believe we'll ever get past 0.1W per human-equivalent brain in efficiency, and even that is an optimistic estimate that I would not stake anything on.)
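(For scale, taking the usual ~20W figure for a human brain: 10 human-equivalents per kilowatt is 100W each, about 5x worse than biology, while 0.1W per human-equivalent would be about 200x better than biology. That's why I call even that estimate optimistic.)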
You might be able to further improve those systems, getting up to a hundred or even a thousand clever humans-- a veritable army of consultants/contractors at everyones fingertips.
And that's great! But an army of consultants for almost free is NOT a singularity. It is extremely doubtful to me that you are even gonna get "the sum of its parts" out of such an arrangement; expecting such a construct to be capable of rapid self-improvement is extremely optimistic (just compare real-life armies of consultants, which are typically agreed to be incapable of rapid self-improvement, even if you throw exponentially increasing numbers of them at the problem).
Just like with real-life armies of consultants, we will have to be careful that the work is actually aligned with our interests, though. And having many more consultants than actual people is dangerous business already...