Nvidia dethrones Apple as the most valuable company (businessinsider.com)
72 points by elsewhen 1 day ago | 42 comments





The absolute number of a $3.5T market cap seems high. But Nvidia's earnings last quarter were $17B; annualized, that is $68B.

The P/E ratio, calculated as the current market cap of $3.5T divided by those annualized earnings, is about 52. Not unusual for a growing tech company.
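A quick back-of-the-envelope check of that arithmetic (the figures are just the ones quoted above, with quarterly earnings annualized naively by multiplying by four):

    # Rough P/E from the figures quoted above; quarterly earnings are
    # annualized naively (x4), so the real trailing P/E will differ a bit.
    market_cap_b = 3500                 # $3.5T, in billions
    quarterly_earnings_b = 17           # last quarter, in billions
    annualized_earnings_b = 4 * quarterly_earnings_b   # = 68
    print(round(market_cap_b / annualized_earnings_b, 1))  # 51.5, i.e. ~52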

The arguments I hear on HN these days ("No company can be that big and still grow. Also, competition will catch up!") are the same ones I remember from the early 2000s, when Apple's market cap crossed $100B.

And from 2017 when Tesla's market cap crossed that of BMW.

It's not that easy. Companies are moving targets. Their future will be determined by revenue sources that aren't generating any revenue today. With Apple it was the iPhone. With Tesla it was self-driving. With Nvidia it will be ... ?


> The arguments I hear on HN these days ("No company can be that big and still grow. Also, competition will catch up!") are the same ones I remember from the early 2000s, when Apple's market cap crossed $100B.

In the early 2000s, the iPhone hadn't been invented yet.

Apple’s market cap would be a tiny fraction of what it is today if they had remained a Mac + iPod + software company.

Apple’s genius has been their ability to enter and dominate (and sometimes create) completely new markets.

Since the early 2000s, the following new markets have all eclipsed the Mac or the iPod in annual revenues for Apple:

iPhone, iPad, Services, Accessories.

Basically, whatever company Apple was from the early to late 2000s is now the least financially significant bit of Apple today.


> In the early 2000s, the iPhone hadn't been invented yet

That was my point.

Companies - especially tech companies - go through phases. Apple is now a phone company, Microsoft a cloud computing company, Tesla an AI company ...

The value of a tech company is not in their current product line. It is in the product lines they will build in the future.


iPhone sales make up 46% of Apple's revenue, so it has expanded beyond just being "a phone company". They offer an integrated experience across their products.

Another 28% is services; a good chunk of that is probably fees charged for the App Store, which is part of the phone?

The growth of the App Store business is another example of what I am trying to convey here: companies are moving targets. When the iPhone came out, people were estimating how fast the competition could catch up with the hardware and software. AFAIR, App Store fees were not part of the discussion back then.


The growth is one thing, but maintaining gross margins of 76% in big tech is quite challenging. Ignoring AI bubble risk, in the near term their main risk is margin compression.
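To put a rough number on that risk (the revenue figure below is a made-up round number for illustration, not Nvidia's actual revenue):

    # Sensitivity of gross profit to margin compression at flat revenue.
    # The revenue figure is an illustrative assumption, not a reported number.
    revenue_b = 100.0                       # assumed annual revenue, $B
    for gross_margin in (0.76, 0.65, 0.55):
        gross_profit_b = revenue_b * gross_margin
        print(f"margin {gross_margin:.0%} -> gross profit ${gross_profit_b:.0f}B")
    # At a fixed multiple on gross profit, sliding from 76% to 55% margin
    # takes roughly 28% off the valuation with zero change in revenue.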

Yes, the big difference between them and Apple is that Nvidia's customers would jump in a flash if one of their competitors offered a better price/performance ratio.

Judging by the history of this industry that seems inevitable.


Seems evitable to me, given how poor the competitors are software-wise and how hard it is to make good software quickly.

AFAIU, Nvidia is a software company. It has built itself a head start of a decade.

GPUs are just a platform for running it, in a sense. It's Nvidia that could run on a competitor's GPUs if it wanted, but not vice versa. With its own GPUs it has a full lock-in mechanism.

> Judging by the history of this industry that seems inevitable.

You mean games?


Apple had a similar problem in the last century.

They have a nice money cushion nowadays, but mankind isn't going to keep rebuying iPhones and iPads forever.


CUDA has been a decent moat.

But developers won't; they will stick to CUDA.

Those margins also attract other tech firms to get into the industry, either to compete with you or to cut costs because you are overcharging them.

Yes, competition is what causes the margin compression.

3500/68=51.5

Thanks, fixed.

The valuation is completely nuts. It assumes that Nvidia's revenues will continue to grow at the same pace for many many years, without any competition. That's a very silly assumption IMO.

If AI really is as productive as this valuation claims, then by then someone will have used it to invent a better CUDA, and unless that's Nvidia themselves, they will lose their moat.

If AI isn't this productive, then they are still the safest bubble company in the bubble.

This situation, to me, says that most people are unsure of how productive AI will be, so they don't want to invest in it directly. But following dot-com logic, companies partaking in tech bubbles will still beat the market as long as their P/E isn't in the multiple hundreds year on year. Nvidia's P/E is similar to Intel's back then, IIRC.


> If AI really is as productive as this valuation

The modern AI flavor has been around for almost two years now, yet nothing hints at a productivity boost, nothing even remotely close to a scale that would be relevant.


Sounds like Tesla! Actually, Tesla has a higher rating...

Just wrote a blog post about this, it's time to short!

https://news.ycombinator.com/item?id=41897679


The market can remain irrational longer than you can remain solvent.

Yes! Tesla is the best example for this...

I’m always skeptical of people giving away their alpha.

The thing I worry about with Nvidia in terms of valuation is that they have very successfully ridden two waves and I am not sure if a third is on the immediate horizon.

Crypto mining kept demand for Nvidia processors artificially high throughout the pandemic, and then they transitioned immediately into the AI hysteria. They, of course, have a healthy business without those things, but it is not entirely clear to me whether such accelerated growth is sustainable.


Well what's the next "thing" that will require massively parallel processors?

What if fast new LLMs don’t need a GPU?

The problem with this valuation is that the AMD MI300 exists. It is directly comparable to NVIDIA's accelerators, with differences measured in tens of percentage points, not orders of magnitude.

Sure, the software may not be as good as NVIDIA's at the moment, but would it take AMD $3 billion to match it, or $3 trillion? It didn't take NVIDIA trillions to develop CUDA!

Similarly, for users of NVIDIA hardware, are they willing to fork over $3 trillion of their own money instead of, say, a "mere" $2T to AMD and work out the kinks in the software compatibility with the remaining one trillion dollars!?

These valuations are not just insane, they are certifiably nuts.

I'm still waiting for Apple Intelligence to ship, Windows Copilot is looking more like Clippy every day, and ChatGPT might partially replace Google Search for me, but not the rest of Google's products.

I just don't see that $3.5T materialising as revenue before the bubble bursts, and if it doesn't burst, not before AMD starts taking a larger slice of the trillion-dollar pie.


> The problem with this valuation is that the AMD MI300 exists

The problem is that AMD seems to be allergic to writing a software shim that allows users to keep using their existing TensorFlow and PyTorch code at a 10% performance penalty.
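For what it's worth, the surface such a shim has to cover is fairly small for typical training code, since most PyTorch programs only touch the GPU through the "cuda" device string. A minimal sketch of that surface (plain PyTorch, nothing AMD-specific assumed; ROCm builds of PyTorch do answer to "cuda", and whether that is robust enough is the debate here):

    # Typical device-agnostic PyTorch code: everything below goes through the
    # "cuda" device string, which is the surface a drop-in backend must honor.
    import torch

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = torch.nn.Linear(1024, 1024).to(device)
    x = torch.randn(64, 1024, device=device)
    y = model(x)                      # runs on whatever backend answers to "cuda"
    print(y.shape, y.device)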


It might exist, but its software ecosystem sucks, which is the part Intel and AMD keep missing.

Yes, but does it suck billions or trillions? The market is saying that it's willing to throw the latter at the overall problem of AI training, so even if it did take a trillion dollars to fix AMD's software issues, that ought not be a significant hurdle either.

Some things are hard and just throwing more money at it doesn't work because it's about company culture and leadership.

Compare ICE auto industry profits since 2003, when Tesla was founded, to the money invested in Tesla up to 2019, when the Model Y was finished. Tesla had far less money to work with.

Multiple iterations of Volkswagen's car software sucked badly, and the company culture and leadership just couldn't develop good software, so Volkswagen created a separate startup-style subsidiary just to make software. Guess what? That startup also failed, and they recently bought a $5 billion stake in Rivian so they could use Rivian's car software. Meanwhile, their sales are tanking.

Also look at SpaceX vs. ESA or Boeing/ULA with reusable boosters, or China with EUV and chips. Even at the nation-state level, essentially unlimited funding may not work.


> Some things are hard and just throwing more money at it doesn't work because it's about company culture and leadership.

Indeed, though that sword cuts both ways: if a hard problem gets fixed by throwing lots of money at it, then a competitor may be able to fix it for less if they have the right culture and leadership.

> look at SpaceX vs. ESA or Boeing/ULA with reusable boosters.

The way I understood it was that SpaceX was granted $3B of taxpayer money to set up a moon mission, and all they achieved was a single booster catch. That project is gonna go so far over budget, in both time and money, that it gives me a stomach ache just thinking about it.


A Starship plus its booster costs less than a single one of the non-reusable rocket engines of the Space Launch System... which has five engines.

There's simply no comparison when it comes to cost efficiency.


GPUs do more than PyTorch.

Sure, but most of the money is being sunk into the execution of a small number of distinct codes, scaled out.

AMD doesn't need to duplicate everything NVIDIA provides, they just need to duplicate the parts relevant to most of the $3T spend the market seems to be expecting.

Just make llama.cpp work robustly on AMD accelerators, and that might unlock a $500B slice of the pie by itself.


And this is why NVidia keeps winning.

cough Apple could be winning if they weren't a pussy about OpenCL. cough

Sorry, got something in my throat.


Why? Intel, AMD and Google never made anything useful with OpenCL.

That is, OpenCL 3.0 is OpenCL 1.0 without the OpenCL 2.x stuff that no one ever adopted.


> AMD doesn't need to duplicate everything NVIDIA provides, they just need to duplicate the parts relevant to most of the $3T spend the market seems to be expecting.

In effect, they already have. AMD, Apple, and a number of smaller OEMs have all written GPU compute shaders to do "the AI inference stuff" and shipped them upstream. That's about as much as they can do without redesigning their hardware, and they've already done it.

Nvidia wins not because they have everything sorted out in software. They win because CUDA is baked into the design of every Nvidia GPU made in the past decade. The software helps, but it's so bloated at this point that only a small subset of its functionality is ever used in production at any one time. What makes Nvidia fast and flexible is the integration of compute in hardware at the SM level. This is an architecture AMD and Apple both had the opportunity to reprise, even working together if they wanted, but they chose not to. Now we're here.

I tend to steelman the idea that it was AMD's, and especially Apple's, mistake to eschew this vision and abandon OpenCL. But apparently a lot of people think AMD and Apple were right, despite being less efficient at both raster and compute operations.


With CUDA, a researcher can use C++20 (minus modules), Fortran, Julia, Python, Haskell, Java, C#, and a couple of other languages that compile to PTX, and gets nice graphical debuggers for the GPU, IDE integration, and a large ecosystem of libraries.

With OpenCL: C99, some C++ support, printf debugging, and that is about it.

For a good C++ experience, one needs to reach for Intel's Data Parallel C++, which adds Intel's special sauce on top of the SYCL efforts; SYCL only became a reality after a British company specialised in compilers for game consoles decided to pivot its target market and produce ComputeCpp, before eventually being acquired by Intel.
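As a small taste of the Python end of that list, something like Numba will JIT a plain Python function into a CUDA kernel (PTX under the hood). Illustrative snippet, assuming the numba and numpy packages and an NVIDIA GPU are available:

    # Tiny illustration of "Python that compiles to PTX": Numba JIT-compiles
    # the decorated function below into a CUDA kernel.
    import numpy as np
    from numba import cuda

    @cuda.jit
    def add_one(x):
        i = cuda.grid(1)              # global thread index
        if i < x.size:
            x[i] += 1.0

    data = cuda.to_device(np.zeros(1024, dtype=np.float32))
    add_one[4, 256](data)             # launch 4 blocks of 256 threads
    print(data.copy_to_host()[:4])    # -> [1. 1. 1. 1.]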


The AMD MI300 is nowhere near Hopper.

A real threat is Cerebras. Their approach avoids the memory bottleneck of loading the whole model from DRAM to SRAM for each token batch. I hope their IPO goes well before the AI bubble pops.
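To make that bottleneck concrete: on a conventional GPU, decode speed is roughly bounded by memory bandwidth divided by model size, because every token batch has to stream all the weights from HBM into on-chip SRAM. A back-of-the-envelope sketch (the bandwidth and model figures are illustrative assumptions, not vendor specs):

    # Rough upper bound on decode speed for a bandwidth-limited accelerator:
    # tokens/s <= HBM bandwidth / model size in bytes (per sequence).
    # Figures below are illustrative assumptions, not measured specs.
    model_params = 70e9              # a 70B-parameter model
    bytes_per_param = 2              # fp16/bf16 weights
    hbm_bandwidth = 3.35e12          # ~3.35 TB/s, roughly H100-class HBM

    model_bytes = model_params * bytes_per_param
    print(hbm_bandwidth / model_bytes)   # ~24 tokens/s upper bound
    # Keeping the weights resident in on-chip SRAM, as Cerebras does,
    # sidesteps this per-token reload entirely.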


I can see variants of the Cerebras approach taking slices of the VC investment pie.

For example, arrays of identical, simple chips with in-chip memory could have performance similar to the monolithic Cerebras wafer-scale chips. Not for all workloads, but some, such as inference.



