I imagine that OpenAI's operating revenue greatly exceeds their operating costs. They spend virtually all their money on training and R&D salaries, not on inference and operations salaries.
There are three likely outcomes:
1. AI plateaus. OpenAI slashes the R&D budget to become profitable, with revenue in the double-digit billions and profit in the single-digit billions. Valuation likely similar to today's.
2. AI doesn't plateau. OpenAI makes a killing. (Hopefully metaphorically, not literally)
3. Scenario 1 or 2, but it's a company other than OpenAI that wins.
Bold of you to assume AI plateauing would be somehow intrinsically obvious to everyone, or even to the minds at OpenAI. Tunnel vision is common in tech, particularly when your salary (or funding) depends upon it.
> I imagine that OpenAI's operating revenue greatly exceeds their operating costs.
The Information estimates that OpenAI is spending $4 billion just to run ChatGPT and their APIs, along with $3 billion in training and $1.5 billion in salaries.
Don’t agree. Inference costs have been in a race to the bottom for a while and it’s likely imo that OpenAI is gross profit negative on inference right now
If AI were to plateau, OpenAI would be one of many providers without a clear edge, and they'd lose market share. Companies might even start competing on price. Imo, it's not clear any of the software providers would really do that well in an "AI is a commodity" scenario; HW companies might, though.
I'm running Phi3, Llama 3.2 and Mistral Nemo locally and they're decent enough for many things.
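For anyone curious what "running locally" looks like in practice, here's a minimal sketch, assuming an Ollama server on its default port (the model tag matches one of the models mentioned above; the prompt is just an example):

```python
# Query a locally served model via Ollama's HTTP API (default port 11434).
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2",            # any locally pulled model tag works
        "prompt": "Summarize the trade-offs of running LLMs locally in two sentences.",
        "stream": False,                # return one complete JSON response
    },
)
print(resp.json()["response"])
```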
That's delusion, not imagination. Inference is far and away more expensive than R&D and training; it dominates costs at these companies because it scales with usage and drives the unit economics. If OpenAI is not profitable, it's because each marginal customer costs more to serve than they bring in. That's especially true for heavy users, since OpenAI charges a fixed amount per customer per month.
Couple that with a lack of pricing power thanks to all the other similar products in the market.
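To make the flat-fee argument concrete, here's a back-of-envelope sketch. Every number below is an assumed, illustrative figure, not OpenAI's actual data:

```python
# Back-of-envelope: a fixed subscription vs. usage-based serving cost.
# All figures are illustrative assumptions.
flat_fee = 20.0          # assumed $/month subscription price
cost_per_mtok = 5.0      # assumed serving cost, $ per million tokens
heavy_usage_mtok = 10.0  # assumed millions of tokens a heavy user burns per month

serving_cost = cost_per_mtok * heavy_usage_mtok  # $50/month to serve this user
margin = flat_fee - serving_cost                 # -$30/month: the heavier the use, the worse
print(f"serving cost: ${serving_cost:.2f}, margin: ${margin:.2f}")
```

Under these assumptions, every heavy user deepens the loss, which is the crux of the marginal-customer point.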
How will they continue to operate into the future with those kinds of losses? More rounds of funding? Loans? Charge more? Most companies with losses that big would be bankrupt, e.g. GM in 2009.
Why do you think cost would expand linearly with revenue? That's typically not how technology companies work. Compute gets cheaper over time and R&D expenses eventually cap out.
We're at the "add billions of dollars of datacenters" and "spin up nuclear power plants" level of infrastructure needed for their growth, so if anything, I might be underestimating the costs if they grow as much as Sam claims they can.
Didn't Sam ask TSMC to spend $7 trillion on new fabs? By comparison, an $18 billion/yr spend seems very small.
> Commodities can be massive businesses with competitive moats. Oil is a commodity; BP and Exxon do just fine, financially speaking.
They invested in exploration, and now they control those oilfields. They built refineries and have the systems and experienced people to operate them. Meta can't release a LlamaOilfieldAndRefinery that I can operate by just spending a few thousand on GPUs.
Oil is a special commodity in the sense that it's as important to civilization as water is to humans. It's also special in the other sense that we have geopolitical strife over who gets to extract it, sell it and to which market...
I don't see how a chatbot meets any of those criteria.
> I don't see how a chatbot meets any of those criteria
Calling these things "a chatbot" is likely limiting your vision: some of the stuff people build by fine-tuning LLMs, such as the ones OpenAI offers, uses them to generate database queries matching their customers' database schemas.
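As a minimal sketch of that pattern, using the OpenAI Python SDK (the model name, schema, and prompt here are illustrative assumptions, not any particular customer's setup):

```python
# LLM-generated SQL constrained to a known schema.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative schema; a real deployment would inject the customer's own.
schema = "CREATE TABLE orders (id INT, customer_id INT, total NUMERIC, placed_at TIMESTAMP);"

completion = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": f"Write a single SQL query for this schema:\n{schema}"},
        {"role": "user", "content": "Total revenue per customer, highest first."},
    ],
)
print(completion.choices[0].message.content)
```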
"Chatbot" is simply a convenient UI for an LLM in the same way that a web browser is a conventional UI for email. (And in this analogy, anyone calling an LLM "autocomplete on steroids" would be making the same mistake as someone saying "Wikipedia is just TCP on steroids").
I expect LLMs to continue to be extremely generic in the same way that web browsers are (market history shows periods where one browser dominates the market despite open source) or like spreadsheets (where Microsoft Office is, or was last I looked, dominant despite free offerings being good enough for most people).
Society needs intelligence. We started using mechanical aids because it became impractical to perform census work by hand as the population increased; AI is a continuation of this process: we need it, it's a commodity, there may be a market opportunity despite free and/or open source competition, and (like Netscape, like Internet Explorer) there's no guarantee that the winner in one year will still be leading the next, or even exist a decade later.
Legal systems becoming complex predates the emergence of lawyers.
Lawyers have also led significant efforts to simplify the law. For example, the American Bar Association has consistently created simple model statute frameworks that are eventually adopted.
> For example, the American Bar Association has consistently created simple model statute frameworks that are eventually adopted.
"Simple" is not an accurate description of typical model legislation.
> Law is complex because society is complex.
Law is complex because it's an evolved system influenced by politics and corruption. The extent of its complexity is not intrinsic and much of it is specifically a defense mechanism against public understanding, because the public wouldn't support many things in the status quo if they understood the workings of them, and the people who do understand the workings but prefer the status quo use this to their advantage.
Your description is not consistent with history. Politics and corruption are not outsized drivers of law, especially case law built through the courts. It's all edge cases.
Try the example of drafting a standard apartment lease: over millions of transactions between landlords and tenants, lots of edge cases emerge. So over time leases get more complicated, and then the law around interpretation and enforcement gets complicated.
> Politics and corruption are not outsized drivers of law, especially case law built through the courts. It's all edge cases.
Case law is full of politics. How do you think courts resolve the ambiguities? If there were an objective standard for how to do it, then judges could be replaced by computer programs. Judges are used instead because rigorous and consistent application of rules would lead to outcomes that are politically inexpedient, so judges only apply the rules as written when politics fails to require something different.
> Try the example of drafting a standard apartment lease: over millions of transactions between landlords and tenants, lots of edge cases emerge. So over time leases get more complicated.
This is just a facet of how contracts and lawyers work. The law creates defaults that a contractual agreement can override, so each time the law establishes a default that landlords don't like but are allowed to change, they add a new clause to the lease to turn it back the other way. What they really want is a simple one-liner saying that all disputes the law allows to be resolved in favor of the landlord will be. But politics doesn't allow them to get away with that, because what they're doing would be too clear to the public, so politics requires them to achieve the result they want through an opacifying layer of complexity.
"Politics and corruption" is exactly what is complex about society, and it is absolutely intristic to society. That's why we have laws in the first place.
> because the public wouldn't support many things in the status quo if they understood the workings of them
Personally, I've observed the opposite more often: somebody feeds the public a clickbaity and manipulative "explanation" of how things work, and the public becomes enraged without any real understanding of the complexities and trade-offs of the system, or of the unintended consequences of proposed "fixes". It is the main reason why socialism is a thing.
> "Politics and corruption" is exactly what is complex about society, and it is absolutely intristic to society. That's why we have laws in the first place.
The reason we have laws is to facilitate corruption? That seems like something we ought not to want.
> Personally, I've observed the opposite more often: somebody feeds the public a clickbaity and manipulative "explanation" of how things work, and the public becomes enraged without any real understanding of the complexities and trade-offs of the system, or of the unintended consequences of proposed "fixes".
That's the media. The government over-complicates things. The media over-simplifies things.
It has the same cause. People tune out when something becomes so complicated they can't understand it. So if they want people to pay attention to them, they over-simplify things. If they want people to ignore what they're doing, they over-complicate things.
> The reason we have laws is to facilitate corruption? That seems like something we ought not to want.
No, the reason we have laws is that politics, corruption, and crime are intrinsic to society. They are realities of the human condition that can't go away and can't be ignored, so we have to deal with them.
> That's the media. The government over-complicates things. The media over-simplifies things.
I had in mind the part of the "media" which "rebels against the media": the Noam Chomskys and Michael Moores of the world.
> No, the reason we have laws is that politics, corruption, and crime are intrinsic to society. They are realities of the human condition that can't go away and can't be ignored, so we have to deal with them.
Public corruption is intrinsic to government action, but the way you constrain it isn't by passing laws that limit the public, it's by limiting what laws can be passed by the government.
> I had in mind the part of the "media" which "rebels against the media": the Noam Chomskys and Michael Moores of the world.
Chomsky probably isn't a great example of over-simplifying things. Many of his criticisms are legitimate.
But having a legitimate criticism of the status quo is a different thing than having a viable solution.
There are a lot of cases (to the point where I expect your average person sees dozens of them every day) where the media isn't just "over-simplifying"; they're presenting things that are specifically crafted to both
- Be factually correct
- Make the reader leave with a false understanding of the situation
This exact same thing happens with political campaigns.
Over-simplifications are false. They don't even meet the bar of being factually correct, whether because the proponent is willfully leaving something out or because they're ignorant themselves. It's not impossible for it to happen innocently, because people selling simplistic narratives often build a following even when they're true believers.
The thing you're talking about is selection bias. It's the thing assholes do when they want to lie to people but don't want to get sued for defamation. Whenever you discover someone using this modus operandi, delete them from your feed.
I think the point of the OP was that the size of systemic shocks increases to the point where societal collapse is inevitable. Centralization decreases resilience, and at a certain point the system can't handle the standard changes it will experience.
Students perform better than in the past. One could argue that grades aren't about comparing peers in a class but about individual performance.
Put another way, if I am hiring a student for writing, physics, or engineering, I don't necessarily care if they are better than their peers as much as I care if they can write well and solve the math or engineering problems to get the right answer.
Finally, take law school as an example. LSAT scores, which are considered a reasonable proxy for IQ testing, are significantly higher now for top schools than they were 30-40 years ago. That supports the thesis that law students today have a higher aptitude than previously, so if grading is an objective measure of performance rather than a comparative measure of class rank, it's expected for average grades to increase.
No; there's just been grade inflation combined with greater awareness of what will be on the tests. Objectively, our students are coming out of high school and college with lower preparedness for jobs than at almost any time prior, as ranked by employer surveys.
Also, the LSAT is not static. It has changed over the years as instruction methods have also changed and as demographics have changed; so it is not a reliable measure of aptitude in any way.
How would changing the mechanics of grading affect graduate readiness for jobs? I think two separate ideas are being conflated.
Whether or not someone gets an A or a C in a course for Physics likely will not have any bearing on the needs of an employer who needs someone who is a Python wiz for data science.
Maybe the bigger problem is colleges offering undergraduate majors which lack demand, coupled with in-demand majors not having enough relevant course content for the job market, or maybe employers have just gotten more unreasonable in terms of expectations for graduates over time?
It's hard to make the argument that these students are less qualified: look at the acceptance rates of top schools over time; they're essentially at or near all-time lows for the majority.
Admission competition has risen across the board, and the emphasis on grades and tests doesn't translate to work performance. In fact, I'd call it a detriment, but you can't be well rounded if you need to devote so much time to keeping up. My dad and I went to the same school, 30 years apart, and he straight up would not have a chance in hell of getting in when I did, and he's got no problem admitting it.
Likewise, when I got into the CS major the entry requirement was a 2.5 GPA; by the time I graduated, the next crop needed a 3.4 or 3.5, and I'd be screwed if I were just a couple semesters behind. That's a microcosm, but that's the general trend. Look at the shit show '23-'25 graduates are heading into now.
They surely haven't. The 2000s and 2010s spoiled employers because every other kid joining the labor pool spent hours a day on their computer, developing an aptitude for technology in general. For them it was a short leap from their home computer to running a retail or fast food POS.
That party is now over but most businesses still depend on desktop and laptop computing for much of their business operations and expectations haven't caught up.
Considering 1 in 3 companies have reached the point where they are dropping the college requirement altogether for most roles offered, despite the percentage of college graduates having never been higher, that should be enough of a sign that college is slowly becoming a training charade.
Meanwhile, we're also at the point where 52% of graduates are underemployed within a year of graduation, and 45% still are after a decade. Employers don't care nearly as much anymore. Tell your children their odds of it working out are literally a coin flip, and it changes the conversation.
I'm encouraging my kids to be entrepreneurs or go into the trades. College is a joke to extract fiat from unsophisticated consumers via non-dischargeable, government-backed debt. If they end up unemployed or underemployed, it's cheaper to do it without the paper credential.
I don't agree with this. Frankly, every generation complains about the rising generation, forgetting that a new employee needs to be trained over years. No one ever starts their career fully formed.
If it were the case that employees are worse, why has productivity increased steadily? I'd instead posit that employers expect way more than they did before. Technology and skill-level expectations are drastically higher, and employees are delivering.
> Put another way, if I am hiring a student for writing, physics, or engineering, I don't necessarily care if they are better than their peers as much as I care if they can write well and solve the math or engineering problems to get the right answer.
I think the article's suggested solution of making all classes pass-fail would give you the information you're asking for.
Seems like you’re missing the implication of my argument.
If you take an exam with 100 questions that have right answers and score a 98 and I score a 98, we are equivalent. Why would anyone care which student they hire?
Further, if you score an 88 to my 98, there is still a comparison between two students, just on an absolute score of performance rather than a cohort analysis. My point is: why do we need to lower that to a 2.5 GPA instead of a 3.75 and a 4.0?
Finally, my point is that at a certain level of performance minute differences don’t matter.
Clawing back vested equity is materially different from a general "incentive". It's way outside of standard practice and borderline abusive given the imbalance of negotiating power. Further, if OpenAI or Sam lied publicly by saying they weren't aware it was going on, that seems to be the very type of untrustworthy lack of candor that the board identified when they ousted him.
I was a vocal defender of Sam in the immediate aftermath but these facts are definitely concerning. OpenAI is moving fast with a critical and dangerous technology. If their culture isn’t profoundly ethical they shouldn’t be trusted with the privilege and responsibility of what they are up to.