[flagged] Nvidia is the most valuable company (companiesmarketcap.com)
16 points by cal85 16 days ago | 16 comments



Why has no one created an ASIC for matrix multiplication that is cheaper and easily outperforms a general-purpose GPU? GPU demand shouldn't be as big as it is.


Your observation that GPUs are, in theory, TOO FLEXIBLE for ML, and that a more specialized ASIC could do better in perf per watt and perf per mm^2, is a notable one, and much attention has been given to this thought.

Modern ML is mainly made of a few parts (a toy sketch follows the list):

1. large matrix multiplication

2. a few other things like element-wise operations, often exp() or other sources of nonlinearity.

3. data movement and reshaping
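
To make those three concrete, here is a minimal NumPy sketch of one attention-like step. The shapes and variable names are invented for illustration, not taken from any real model.

    import numpy as np

    rng = np.random.default_rng(0)
    seq, d_model, n_heads = 128, 512, 8
    d_head = d_model // n_heads

    x = rng.standard_normal((seq, d_model)).astype(np.float32)
    w_q = rng.standard_normal((d_model, d_model)).astype(np.float32)
    w_k = rng.standard_normal((d_model, d_model)).astype(np.float32)

    # 1. large matrix multiplications
    q = x @ w_q
    k = x @ w_k

    # 3. data movement / reshaping: split into heads, move the head axis out
    q = q.reshape(seq, n_heads, d_head).transpose(1, 0, 2)  # (heads, seq, d_head)
    k = k.reshape(seq, n_heads, d_head).transpose(1, 0, 2)

    # 1 again: per-head score matmul
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)     # (heads, seq, seq)

    # 2. element-wise nonlinearity built on exp(): softmax over the last axis
    scores -= scores.max(axis=-1, keepdims=True)            # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)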

Accelerating all of those is easy. The issue is that ML has not yet stabilized in terms of data flow between those steps. RAM bandwidth is not growing fast enough, which is why some of the latest ML developments trade off more compute for less RAM bandwidth. The field is developing fast: new proposals come out very often that change the way those building blocks get used.
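
To see why bandwidth matters so much, compare the arithmetic intensity (FLOPs per byte moved) of the building blocks against a machine's balance point. The hardware figures below are round, made-up values for illustration, not the specs of any real chip.

    # Back-of-the-envelope arithmetic intensity, assuming fp16 (2 bytes/element).

    def matmul_intensity(n, dtype_bytes=2):
        flops = 2 * n**3                       # n^3 multiply-adds
        bytes_moved = 3 * n * n * dtype_bytes  # read A and B, write C (ideal caching)
        return flops / bytes_moved

    def elementwise_intensity(dtype_bytes=2):
        # y = exp(x): roughly one FLOP per element, read x and write y
        return 1 / (2 * dtype_bytes)

    peak_flops = 100e12            # 100 TFLOP/s, illustrative
    mem_bw = 2e12                  # 2 TB/s, illustrative
    balance = peak_flops / mem_bw  # FLOPs/byte needed to stay compute-bound

    for n in (256, 1024, 4096):
        ai = matmul_intensity(n)
        bound = "compute" if ai > balance else "bandwidth"
        print(f"matmul n={n}: {ai:6.1f} FLOPs/byte ({bound}-bound)")
    print(f"exp(x): {elementwise_intensity():.2f} FLOPs/byte (bandwidth-bound)")
    print(f"machine balance point: {balance:.0f} FLOPs/byte")

Big matmuls clear the balance point easily; element-wise ops sit far below it, which is why trading extra compute for fewer bytes moved can be a net win.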

GPUs are used because they are:

1. fast enough at mat-mul

2. flexible enough to reorder things and change what is sent where when and how.

Once the ML field stabilizes a bit, and we're done with new papers coming out every month proposing new ways to flow data through the steps, you can fully expect real ASICs for this. But if you build one now, either you need enough flexibility to change how it works on the fly (read: you're building a GPU) or your accelerator becomes useless once a new paper comes out (see: the 4NX from Imagination once the attention paper was published). Once the field settles, it will happen. The reason bitcoin ASICs happened is that SHA-256 was stable and nobody had found new ways to calculate it much faster.
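
For contrast, the entire fixed function a bitcoin ASIC needs to implement is SHA-256 applied twice to an 80-byte block header. The snippet below shows how little there is to freeze into silicon; the all-zero header is a placeholder, not real block data.

    import hashlib

    # Bitcoin's proof-of-work: SHA-256 applied twice to the 80-byte block header.
    header = bytes(80)  # placeholder header, all zeros

    digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
    print(digest[::-1].hex())  # Bitcoin conventionally displays hashes byte-reversed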

"But what about TPUs?" you might scream! They are a large mat-mul block with a nearby general-purpose core with a large vector unit to try to do "the rest of the things". This is the alternative approach, with pros and cons compared to GPUs, but it is also basically a hedge against "new ML paper comes out between chip design being locked and chip taped out and in use". You could argue whether this is better or worse than GPUs. There are valid points on both sides


What do you think a TPU is?


I guess the real question is, why have we been seeing exploding demand for GPUs rather than TPUs (or a non-Google equivalent) in the AI space?

Why has Nvidia's stock exploded, rather than that of some company manufacturing TPU equivalents, which you'd think would be higher-performance and more cost-effective for AI?


Don’t own any shares. Might finally go buy a few shares of Nvidia now. I know it will dip the moment I buy. Might even trigger a recession.


What would hurt more: if Nvidia ceased to exist, or if ASML or TSMC ceased to exist?


Nvidia doesn’t do the manufacturing, so presumably it’d be worse if the fabs ceased to exist. We’re only dependent on Nvidia because they exist and the fabs need to pay them to license their IP…


The fabs don't "license" the IP. They manufacture the product based on the designer's design and the contract's terms.


Sure, but if Nvidia “ceased to exist,” the fabs could keep manufacturing the same chips. So at least in the near-term, the world wouldn’t change much. Whereas if TSMC “ceased to exist” then there would be an immediate worldwide chip shortage.


That's not how it works. They're Nvidia's chips. Would Foxconn go on and keep making iPhones if Apple whatevered?


If someone wanted to buy them from Foxconn, then probably…


Not without purchasing the license or signing additional contracts. It's Nvidia's IP.


And if Nvidia vanished, there would be no one to care about the IP violation.


It's either a bubble or we're all getting UBI :)


AI boom propping up the US economy


Basically a meme stock at this point



