Hacker News | max51's comments

"I give you 30B$ worth of hardware that costs me <10B$ to make in exchange for 30B$ worth of shares in your company" would be a more accurate description.

How does this work with private companies? It feels like Nvidia could find that the market does not value OpenAI stock the same way they did.

Public or private, there's never a guarantee of being able to sell back for some nominal price.

The "circular investment" is mostly start up companies using their stocks instead of cash to pay for server hardware and cloud computing. There is a few extra steps in between that make things look weird and convoluted, but the end results is really just big companies giving hardware and getting shares of ai companies in exchange for it.

I think you’re just describing how it’s circular.

It’s like Toys R Us not having enough money to pay Mattel for Barbie dolls and telling Mattel they can have partial ownership of the company if they just supply them with some more toys.

But the problem is that Toys R Us is spending $15, 20, or maybe even $50 (who knows?) to sell a $10 toy.

Toys R Us continues selling toys faster and faster despite a lack of profit, making Mattel even more dependent on Toys R Us as a customer. It blows up the bubble where a more natural course of action would be for Toys R Us to go bankrupt or scale back ambitions earlier.

Because it’s circular like this, it lends toward bigger crashing and burning. If OpenAI fails, all these investors that are deeply integrated into their supply chains lose both their investment and customer.


> But the problem is that Toys R Us is spending $15, 20, or maybe even $50 (who knows?) to sell a $10 toy.

It's like how Uber and Airbnb in the early days were burning loads of cash to build market share. People went to these services because they were cheaper. Then they would increase prices once they had a comfortable position.

OpenAI is also in a rapidly transforming field where there are a lot of cost reductions happening, efficiency gains etc. Compared to say Uber which didn't provide a lot of efficiency gains.


A little bit, but the scale is an order of magnitude higher. I just saw a chart yesterday showing Uber burning $18B, Tesla burning $9B, and Netflix burning $11B before reaching profitability. OpenAI has so far spent $218 billion.

The opportunity is disproportionately greater as well though.

Unfortunately that doesn't change the fact that even a small miscalculation could have an enormous impact. We are approaching levels of risk comparable in size to the subprime crisis of 2008.


Is it? AI isn't going to be a winner-take-all market. Competition between American AI labs, and even Chinese ones, has seen to that.

The winners for AI will be the product companies, because soon enough the top-tier models are all going to have good enough performance that companies can just pick the cheapest. It'll be a race to the bottom for inference and OpenAI is very poorly placed to compete in that kind of thing.


> It's like how Uber and Airbnb [...]

I disagree. It's like Uber and Airbnb in how they try to gain market share, with one big difference: for Uber (and once it got big, basically everybody I know has used it once in a while) and Airbnb, you paid for each transaction. With OpenAI, most people are on the free tier. And if there is something incredibly hard, it's converting free users into paid users. That will, IMHO, be the thing that blows up (many of) the AI companies. They won't ever reach break-even.


I agree with this. For the casual user, I feel AI is only a "nice to have".

> OpenAI is also in a rapidly transforming field where there are a lot of cost reductions happening, efficiency gains etc.

But also ever increasing quality requirements. So we can't possibly know at this point if this is a market with high margins or not.


And unlike Uber and Airbnb, OpenAI has no way to maintain marketshare. It’s a domain name with no moat.

Google has to pay Apple billions of dollars to make Google.com the default search engine. I just looked it up, over 15% of search revenue goes to pay to be the default search engine.

Every Android device defaults to Gemini.

Every Microsoft device defaults to Copilot.

I’d love to see where these cost reductions are. If costs are going to decrease rapidly why does OpenAI’s spending plan look so insane?


> Every Android device defaults to Gemini.

> Every Microsoft device defaults to Copilot.

I don't think it's right to say that these devices "default" to their vendors' AI software when it's impossible to replace it with something else. Yes I can install Claude as a standalone app but I don't have the OS-wide integration that Gemini does for Android for example.


Where are the cost reductions exactly, other than using AI hype as an excuse for layoffs? Can you share a reference? Genuinely interested.

Uber and Airbnb have network effects. You can't increase prices when there is no cost to switching.

I don't see how network effects apply to Uber/Airbnb, because nothing stops drivers/hosts from listing their property on multiple such apps.

People continue using Airbnb because that's where the properties are listed. And owners keep listing properties because that's where the users are.

My point was that nothing stops hosts from listing their properties on Airbnb as well as a competitor. Unless Airbnb penalizes delisting or enforces price parity, I guess?

Do you understand network effects? It’s not hand cuffs. I can also sell my rare baseball cards outside of ebay. But…

This is a common misconception

OpenAI and others are already profitable on inference (inference is really really cheap)

They are just heavily investing into the latest frontier

The biggest risk is whether they can stay cutting edge, or if open source or others will catch up quickly.


> OpenAI and others are already profitable on inference (inference is really really cheap)

If it's that cheap I'll soon be doing it self-hosted, or switching to a local provider.

It's a race to the bottom for token providers.


It is that cheap. Look at Deepseek or GLM pricing.

> It is that cheap. Look at Deepseek or GLM pricing.

Then it's a race to the bottom.


Yep.

And unlike competitors, OpenAI has no ecosystem. Just a website and a domain name. Even a VSCode fork like Cursor is an improvement over that state.

Google pays over 15% of search revenue to be the default search engine on various browsers.


If you need to do the latter to be able to make money on the former, then you're not making money. Because if the latter requirement would disappear, inference margins would also drop.

At the end of the day, they're still burning cash. Even if inference is cheap, it's also not hard to compete on. They aren't going to be a trillion dollar inference company.

Eventually there will be a race to the bottom on inference price to the customer by companies that aren't trying to subsidize their GPU investments.

OpenAI is spending money because they think they need to for their business to survive. They're hoping that the next big breakthrough just requires more compute and, somehow, that'll build them a moat.


OpenAI and quite honestly the others think they are in a race to AGI not the bottom. That's why they aren't concerning themselves with moats or cost. This is quite simply a massive bet that we've already cracked AGI and the rest is just funding the engineering to make it happen.

I personally think we haven't cracked AGI yet but it doesn't change their calculus.


>inference is really really cheap

cough Sora cough


OK, so absolutely good faith here what is the end game?

Obviously, there’s a scenario of super power AI and then it’s a matter of continuing course. Electricity and silicon.

What if you are right and the scaling doesn't work? It takes too much power, time, and hardware to improve… does OpenAI fold?

Do they just actually use the models they have?

Does everyone just decide that AI didn’t work and go back 5 years like it didn’t happen?

Does the price change so that they have to be profitable making AI services expensive and rare instead of today where they are everywhere pointlessly?

Or does this insane valuation only make sense with information you don’t have like insider scaling or efficiency news?

Does China’s strategy of undercutting US value of models pay off bigly?


Why so extreme? Most likely just an AI winter for a while; then, when tech and society have caught up, the advancement begins again.

It's not like we threw away the dotcom advances; they were just put on hold for a while.


The people running these companies have a perverse incentive to keep the ball rolling as long as possible so that they can extricate as much personal wealth and influence as possible. Maybe AGI makes all the problems go away. But, failing that, they get out relatively scot-free when it all collapses. And they don't owe anything to the public. And no one is going to bring them up on fraud charges or any other kind of criminal charges. So, while the world is burning around them (including their former companies), they have the money and connections to acquire property and businesses that are actually productive. It's the Russian oligarch playbook. They're the kings of a struggling society on the brink of failure, but they heard "kings" and said, "Let's go."

I generally agree with the sentiment, but it's not the Russian oligarch playbook. The playbook is some variation of buying out a productive asset in a legacy industry under its market price (because everything is on fire already), then using political or monopoly power to funnel (tax) money through it and into your pockets (the asset has to function, but doesn't have to provide a good quality of service, since proper maintenance is never allocated). The sovereign AI fund and Microsoft are very close to that setup. If the NYC subway were sold to a certain Elon, who then jacked up prices and still had city hall subsidize it while keeping the quality of service the same, that would be more or less it.

The other variation goes in reverse: using the legacy asset and its captive labor force to output some kind of commodity that is sold below market price to a controlled company in a different jurisdiction, where it's resold at a small discount to market price. The company still has to function here too.

Bonus points for not even owning the asset in question, but having effective control over it through the corrupt management, this way the government still pays the bills to keep it running at loss.

What you are describing is actually a very Western thing, because it assumes you can exchange the asset for cash directly and then buy something with that liquidity, which assumes solid property rights. I'm not even talking about OpenAI being an actual tech company that just wasn't there before. That's not how oligarchy works in those places.

Since the US is slowly moving in a direction of oligarchy, I think the actual reference will be helpful.


Please read Sarah Kendzior. What's happening under Trump is different from what's happened under other admins precisely because he's drawing from the Russian quasi-state/mob playbook, and not from the normal "socially-caustic Capitalism" one. The difference is that one seeks to maintain a state, and one seeks to dismantle it and replace it with a quasi-state, which exists mainly to interface with the other entities that are still playing in the nation-state system, but which internally functions almost completely as a projection of the power of the elites.

You're conflating the assets the elites own before the state collapse with the ones they seek to acquire afterwards. They don't care if the ones from before function, because their only purpose is to be maximally extractive. Afterwards, there's no need to funnel tax money through the functional businesses they acquire; they are the company and the state, and the company is the service or product, so anyone interfacing with that product or service within the state is handing them their money. No laundering games necessary.


>replace it with a quasi-state, which exists mainly to interface with the other entities

I don't exactly disagree with that assessment, and I think you should stay vigilant for that indeed. What I'm saying is that selling a hot potato to get cash is the opposite of what oligarchs are known to do. It could be that it's but a step toward buying something else with oligarchic intentions in mind, but alternatively it could be normal Western money-handling behavior.

>they are the company and state and the company is the service or product, so anyone interfacing with the product or service within the state is handing them their money.

That doesn't contradict what I wrote, or at least meant. The asset in question is not the means of laundering but a pretext for extracting money from everyone unfortunate enough to live in the forsaken place.

The laundering part usually comes when the oligarch wants to safeguard their own money from political risks, which they do by keeping the funds in a place that is outside of their (and their potential rivals') political influence. Otherwise, once the political balance shifts, the money is just gone, because no laws exist to guard it anymore. I'm not sure what this "outside" place could be for Americans, but I'd guess (with no confidence in the answer at all) it's either Swiss or Gulf banks. Maybe the UK or whatnot. Some structure with a combination of impartiality to their disputes, strong enough property and privacy regimes, but next to no ethical constraints about walking away from it.


"so that they can extricate as much personal wealth and influence as possible"

I've always thought this. If you're running something like OpenAI, it really doesn't matter to you if the company fails because you're already comfortably wealthy. But, it sure would be nice to be worth another 10x billion - though I'm not totally sure why.

So these individuals perceive a large upside and no downside. It's more of a hobby than a job. Like learning to play piano. It would be amazing to be a badass pianist...but not a big deal if that never happens.


Growth decoupled from labor costs

Cisco did this in 1999. That's how my smallish apartment building in Sweden ended up with a kick-ass Cisco 10 Gbps switch in its basement a year later - when these cost real money.

I think the HOA still only pays like $10/month/apartment for an entry-level tier that's now defined as 250/250 Mbit/s. Someone must have been unusually savvy with the contracts.

https://newsroom.cisco.com/c/r/newsroom/en/us/a/y1999/m11/ci...

Cisco survived but it took them until late last year to recover their 1999 stock value (that's 26 years).


Nope wrong framing.

Nvidia is investing assets into OAI - it has to. Because OAI needs to become successful for Nvidia's story in the long-term to play out, to justify its current stock price.


You say calling it circular is the wrong framing, then immediately proceed to describe a circle.

Nvidia just needs the winner to be an Nvidia customer. OpenAI is replaceable.

If OpenAI folded, you’d have the one LLM company that consumers know suddenly gone. Which seems like the opposite of an AI success story.

People will start looking at valuations more carefully. Investors will get jittery. Spending on GPUs will drop, as will NVidia’s stock price.

I’m not sure that NVidia views OpenAI as replaceable.


If OAI folded, there would also be a sudden tsunami of recent Nvidia hardware on the used market.

Specifically built for training and inference and not much else, and also they age like milk. I don’t see how that helps anyone.

It would be a fun day for hobbyists who want to run big open source models locally, if nothing else.

Customers comparable to OpenAI are trending towards designing and/or using their own silicon, though.

For a power user, there is nothing even remotely comparable to Excel that exists today.

Not anymore. Today I tried to copy-paste a string of 15 ASCII characters into an Excel cell. Excel spun for 20 seconds, then blurted out an error that "the data is too big". I hit F2 (entering cell edit mode), pasted the 15 characters in the edit window, and this way I was able to get the data into the cell.

Excel has gone downhill massively.


Can you name a single product that is comparable?

If it doesn't have something equivalent to Pivot Tables, it's not even worth talking about.


>I would argue that it's going to be the opposite. At re:Invent, one of the popular sessions was in creating a trio of SRE agents, one of which did nothing but read logs and report errors, one of which did analysis of the errors and triaged and proposed fixes, and one to do the work and submit PRs to your repo.

If you manage a code base this way at your company, sooner or later you will hit a wall. What happens when the AI can't fix an important bug or is unable to add a very important feature? Now you are stuck with a big fat dirty pile of code that no human can figure out, because it wasn't coded by a human and was never designed to be understood by a human in the first place.


I treat code quality, and readability, as one of the goals. The LLM can help with this and refactor code much quicker than a human. If I think the code is getting too complex I change over to architecture review and refactoring until I am happy with it.


What happens when humans can’t fix a bug or build an important feature? That is a pretty common scenario, that doesn’t result in the doomsday you imply.


There will always be bugs you can't fix, but that doesn't mean we should embrace having orders of magnitude more of them. And it's not just about bugs; it's also about adding new features.

This is tech debt on steroids. You are building an entire code base that no one can read or understand, and praying that the LLM won't fuck up too much. And when it does, no one in the company knows how to deal with it other than by throwing more LLM tokens at it and praying it works.

As I said earlier, using pure AI agents will work for a while. But when it doesn't you are fucked.


If you think making sure only citizens can vote equals "suppressing liberal voters", that sounds like a big self-report. The voter lists don't tell you how people voted, only who did.


Generally speaking, trackers that require a ratio above 1.0 and don't have a freeleech/points system are designed so that you pay the website to fix your ratio and/or rent a seedbox from one of their partners.

It's a zero-sum game: for every account with a ratio above 1.0, other accounts must be below 1.0.

And when you're competing with 10 Gb/s seedboxes running scripts that automatically grab every new torrent the second it gets posted, it's extremely difficult to improve your ratio. Even for super popular torrents, you have a few minutes to seed as much as you can before your upload speed drops to 0 forever. You can't slowly accumulate upload over time the way you would with a torrent from a public tracker.
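The zero-sum point follows from conservation of bytes: every byte an account downloads was uploaded by some other account, so the swarm-wide ratio is pinned at exactly 1, and any credit hoarded above 1.0 must come out of someone else's ratio. A toy sketch (all numbers hypothetical):

```python
import random

# Toy swarm: every byte one account downloads was uploaded by another,
# so total upload == total download across all accounts.
random.seed(1)
n = 20
downloaded = [random.randint(1, 100) for _ in range(n)]
total = sum(downloaded)

# Hand out the upload credit unevenly (think seedboxes grabbing most of it).
weights = [random.random() ** 3 for _ in range(n)]
uploaded = [w * total / sum(weights) for w in weights]

ratios = [u / d for u, d in zip(uploaded, downloaded)]

# Conservation of bytes: the swarm-wide "ratio" is exactly 1 ...
print(round(sum(uploaded) / total, 6))  # -> 1.0

# ... so any account above 1.0 forces other accounts below 1.0.
if any(r > 1.0 for r in ratios):
    print(any(r < 1.0 for r in ratios))
```

However the upload credit is distributed, the invariant holds: the tracker economy as a whole can never average above 1.0, which is why ratio requirements above 1.0 are unmeetable for everyone at once.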


Because the ones pushing it down your throats are trying to capture the entire market and get you to adopt their AI instead of a competitor.


>LLMs don't do this

They did at the beginning. It used to be that if you wanted a full answer with an intro, bullet points, and lists of pros/cons, you had to explicitly ask for it in the prompt. The answers were also a lot more influenced by the tone of the prompt, instead of being forced into a specific format like they are right now.


>While she was moving my legs using her upper body, it felt quite intimate and I admired her for being so professional while doing her work physically and giving psychological support as a bonus.

Have you considered that from her POV, there was nothing intimate about it? I wasn't there to watch, but in my experience these situations are only "intimate" or awkward AFTER you start talking about how intimate and awkward they are. For people who have to touch bodies regularly at work (e.g. me, when I was a gymnastics coach), there is nothing intimate about it. The only ones who think it's sexual/intimate/awkward/weird/etc. are those who have no experience with it.

It's the same thing when you get a medical procedure done. Believe it or not, the nurses and the surgeon do not give a single fuck about seeing your dick. It's not intimate or sexual for them.


For her it probably did not feel intimate indeed. Still, giving care can create a sense of emotional connection, with or without physical contact. Like I wrote, what made it most satisfying was the combination of the physio with empathetic conversations.


I participated in that competition a decade ago. The best teams had a hull that was less than half an inch thick and it didn't leak. We put glass fibers and iirc latex in the concrete mix.

