How Jensen Huang's Nvidia is powering the A.I. revolution (newyorker.com)
149 points by paladin314159 9 months ago | hide | past | favorite | 94 comments



I remember when GLSL was being proposed and there was a lot of debate about how General Purpose it should be. Would people run non-graphical shaders on their GPUs? Yup.


Aren't compute shaders basically a poor man's CUDA?



Should never have sold the NVDA stock I bought in 2007


And I should never have sold the AMD I bought just before ryzen 1.

Hidesight though


Better to showsight


Did Nvidia power it, or are they stifling it behind their monopoly?

Tricky question to answer.


Nvidia doesn't have a monopoly here. Google have their TPU, AMD have MI300, Intel have Data Center GPU Max, and then there are startups like Graphcore. There are many alternatives. I think people choose Nvidia because it's easier for them to use. That's because everyone else uses them, and that's because for years and years their competitors (other than Google, perhaps) have dropped the ball on good software.


NVIDIA is the leader because most academic AI setups are NVIDIA-based.

When AI moves further away from academia NVIDIA will have less of a grip.

Proprietary, hardware specific APIs never stand the test of time. Ask 3Dfx.

Either CUDA will open up, if it is to survive, or open API use will spread.

Weirdly, NVIDIA hardware only outperforms competitors on its own API. When you compare NVIDIA on a level playing field, they aren't the clear winners. Nobody is right now.

I suspect the battleground for AI will be accuracy rather than speed in the medium term, and on paper AMD could win there...purely because they aren't shy about over-speccing the RAM in their kit at certain price points.

For me, I want to run the largest models I can with the least amount of quantization for the best bang for the buck...and AMD is right there as soon as people start picking up APIs outside of CUDA.


I work for one of their big competitors, and all my conversations with customers tend to follow the same script: "NVIDIA is milking us dry, we want an alternative, but all the alternatives require significant redesign in languages and tools people are unfamiliar with, and we can't afford that overhead". It tends to be very cut and dried.

Until university labs get people working in open frameworks and not CUDA, every student joining the industry will default to NVIDIA GPUs until they're forced otherwise. The few people I've managed to convert have been forced by supply constraints, not any desire to innovate or save themselves money. As long as NVIDIA can keep the market satiated with a critical mass of compute, they'll sit on their throne for a long ol' while.


> but all the alternatives require significant redesign in languages and tools people are unfamiliar with and we can't afford that overhead

Where I work, we've made it a principle to stay OpenCL-compatible even while going with NVIDIA due to their better-performing GPUs. I even go as far as writing kernels that can be compiled as either CUDA C++ or OpenCL-C, with a bit of duct tape in the form of adapter headers:

https://github.com/eyalroz/gpu-kernel-runner/blob/main/kerne...

https://github.com/eyalroz/gpu-kernel-runner/blob/main/kerne...

Of course, if you're working with higher-level frameworks then it's more difficult, and you depend on whether or not they provide different backends. So, no Thrust for AMD GPUs, for example, but PyTorch and TensorFlow do let you use them.
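(For anyone curious, the duct-tape approach is roughly this: one kernel source plus a tiny adapter header. The macro names below are my own invention for illustration, not the ones from the linked repo.)

```cuda
/* Hypothetical adapter header: map the two dialects' qualifiers and
 * index intrinsics onto shared macros. nvcc defines __CUDACC__; an
 * OpenCL compiler takes the #else branch. */
#ifdef __CUDACC__
  #define KERNEL extern "C" __global__
  #define GLOBAL_MEM            /* CUDA pointers need no address-space qualifier */
  #define GLOBAL_ID_X (blockIdx.x * blockDim.x + threadIdx.x)
#else
  #define KERNEL __kernel
  #define GLOBAL_MEM __global
  #define GLOBAL_ID_X get_global_id(0)
#endif

/* One kernel source, two toolchains: scale n floats in place. */
KERNEL void scale(GLOBAL_MEM float* data, float factor, unsigned n)
{
    unsigned i = GLOBAL_ID_X;
    if (i < n)
        data[i] *= factor;
}
```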


Yeah - which is to say that competitors aren't actually going to create a CUDA replacement any time soon. And, correct me if I'm wrong, it would be quite possible to create such a thing - AMD had a system which had a tool to do conversion a while back but I recall them not supporting it seriously.

The problem is that when a company has done serious capital investment to advance a market, anyone who invested equivalently wouldn't reap the same rewards - competition would just eat away each company's profits so no one will challenge that.


> AMD had a system which had a tool to do conversion a while back but I recall them not supporting it seriously.

They are supporting it seriously now. It is being actively developed and improved.

https://github.com/ROCm-Developer-Tools/HIPIFY
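(To give a flavor of what the tool does: HIPIFY is largely mechanical source-to-source renaming of the CUDA runtime API. The fragment below is illustrative only, assuming the usual saxpy example; it is not taken from the HIPIFY docs.)

```cuda
// CUDA input (fragment; d_x, d_y, h_x, saxpy, etc. assumed declared elsewhere):
cudaMalloc(&d_x, n * sizeof(float));
cudaMemcpy(d_x, h_x, n * sizeof(float), cudaMemcpyHostToDevice);
saxpy<<<blocks, threads>>>(n, 2.0f, d_x, d_y);
cudaFree(d_x);

// What hipify-perl emits, roughly:
//   hipMalloc(&d_x, n * sizeof(float));
//   hipMemcpy(d_x, h_x, n * sizeof(float), hipMemcpyHostToDevice);
//   hipLaunchKernelGGL(saxpy, blocks, threads, 0, 0, n, 2.0f, d_x, d_y);
//   hipFree(d_x);
```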


It's being done. The Mesa project has drivers for OpenCL (RustiCL) and Vulkan under development on any hardware that can provide the underlying facilities for that kind of support. This provides the basic foundation (together with other projects like SYCL) for a high-level alternative that can be properly supported across vendors (minus the expected hardware-specific quirks).


The second mover advantage though is that you can just copy the first company and avoid blind alleys.


> Either CUDA will open up, if it is to survive, or open API use will spread.

I don't really think so, at least not anytime soon while the hardware functionality continues to evolve so much, and while they seem to be concentrating on the high end devices/architecture rather than low-end stuff.

I've been more or less exclusively writing CUDA for the past decade in the AI/ML space (though I have spent some time with OpenCL, Vulkan and other things along the way too). What a GPU is or should be has not, I think, reached an evolutionary end yet. CUDA is not a static thing either; it has co-evolved with the hardware rather than being locked into some static industry standard with a boatload of annoying glExtWhatever dangling off of it. Over the past decade or so, Nvidia has introduced new ways the register file can be used (Kepler shuffles), changed the memory model of GPUs and the warp execution model (to avoid deadlock/starvation by breaking the lockstep behavior somewhat), slowly changed the grid/CTA model (cf. cooperative groups, CTA clusters), added more asynchronous components to the host APIs and the hardware (async DMAs), and constantly changed the underlying instruction set, all of which leaks into CUDA in some way.
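To make one of those concrete: Kepler-era warp shuffles let threads exchange registers directly, and the later warp-model change is visible right in the API, where the _sync variants with explicit participation masks became mandatory on Volta and later. A minimal sketch (my own illustration, assuming blockDim.x is a multiple of 32):

```cuda
// Warp-level sum reduction via register shuffles: no shared memory needed.
// Pre-Volta code could use __shfl_down(); under independent thread
// scheduling (Volta+), the _sync variant with an explicit full-warp
// mask (0xffffffff) is required.
__global__ void warp_sum(const float* in, float* out)
{
    unsigned tid = blockIdx.x * blockDim.x + threadIdx.x;
    float v = in[tid];
    for (int offset = 16; offset > 0; offset >>= 1)
        v += __shfl_down_sync(0xffffffffu, v, offset);
    if ((threadIdx.x & 31) == 0)   // lane 0 of each warp holds the warp's sum
        out[tid >> 5] = v;
}
```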


> Either CUDA will open up, if it is to survive, or open API use will spread.

CUDA won't die if open APIs take over AI inferencing operations. It's still used and applied in so many niche industries that it can only be "replaced" in industries like AI, where companies invest in moving digital mountains. Stuff like Microsoft's ONNX project will go a long way towards making CUDA unnecessary for AI acceleration, but it won't ever kill the demand for CUDA.

Just look at how lethargic the industry's response has been in the wake of AI, and look at how other companies like AMD and Apple abandoned OpenCL before it was ready. Now Apple is banking on CoreML as an integration feature and AMD is segmenting their consumer and server hardware like crazy.

> Weirdly, NVIDIA hardware only outperforms competitors on its own API. When you compare NVIDIA on a level playing field, they aren't the clear winners.

That does not reflect any of the benchmarks I've seen at all, unless by "level playing field" you mean comparing old Nvidia chips to modern AMD ones. The only hardware comparable to the DGX pods Nvidia sells is Apple's, which lacks the networking and OS support to be competitive server-side.

AMD is an amazing company for being open and transparent with their approach, but nice guys always finish last. This is a race between the highest-density TSMC customers, which means it's Apple and Nvidia laughing their respective paths to the bank.


Calling AMD a nice guy is a huge stretch in my opinion. From my understanding they didn't even allow use of ROCm with consumer GPUs until this year... CUDA is and always was very accessible to a broad audience.


> AMD is an amazing company for being open and transparent with their approach ...

Is it known why consumer cards do not have fp64:fp32 performance at about 1:2?


> Proprietary, hardware specific APIs never stand the test of time. Ask 3Dfx.

3Dfx lost for other reasons. DirectX is doing fine despite having had open competition for decades.


How did NVIDIA entrench itself as the leader in academia? From a market perspective it's interesting how they cornered it.


CUDA runs on most recent Nvidia GPUs, which are plentiful on college campuses and well-supported in server software. AMD's GPGPU compute support differs from GPU to GPU, and Apple didn't start contributing acceleration patches to PyTorch and TensorFlow until stuff like Llama and Stable Diffusion took off.


They built the entire market from scratch.


They’re powering it.

They have the best hardware for it. Their vendor lock-in is just CUDA, but that's not really much lock-in.

You can train models on AMD cards too, but why would you, when Nvidia cards are just better for it atm.


How has Nvidia stifled anything?

Nothing besides $ prevented their competitors from developing their own GPUs and software stack comparable to CUDA.


Would have been funny to see crypto competing with AI to splash ridiculous amounts of money on those chips.


Did they use AI to make that Jensen Huang face out of chips and PCBs?


Nvidia should enjoy this period while it lasts. I fail to see any future where custom ASICs don't eat their lunch.

There's so much opportunity for custom silicon to massively improve compute-per-watt -- for both training and inference. Nvidia got lucky with their high-memory GPUs being early to this space, but the usefulness of that architecture has already peaked.


I see plenty of custom ASIC startups but they all struggle to get decent funding. Which means they are all one tape-out away from insolvency.

I don't really see NVIDIA's dominance changing (other than whatever AMD has in the pipeline) unless VCs change their risk appetite and take an interest in HW startup funding, which seems unlikely considering the average returns and success rate of HW startups.


In the short-term the ASICs won't come from startups. Think companies that already have custom silicon + deep pockets + a need to squeeze every watt at scale.

E.g., Google (TPU), Apple (NPU), Amazon, Tesla (Dojo), Microsoft/OpenAI, etc.


“Employee demographics are “diverse,” sort of—I would guess, based on a visual survey of the cafeteria at lunchtime, that about a third of the staff is South Asian, a third is East Asian, and a third is white.”

This part of the article bothered me. Why was diverse placed in quotation marks? Aren’t South Asians and East Asians considered minorities?

Also, South Asians are a very diverse group in and of themselves. As are East Asians. Many different languages and cultures….

So why was diverse in quotation marks? Why did the author say “sort of”???


Why did you drop the second sentence on that topic?

"Employee demographics are “diverse,” sort of—I would guess, based on a visual survey of the cafeteria at lunchtime, that about a third of the staff is South Asian, a third is East Asian, and a third is white. The workers are overwhelmingly male."

"Overwhelmingly male" is an obvious counterpoint the author is making to the ethnicity distribution and by itself explains the quotation marks.

Everyone is free to read in additional subtext but your question can be answered solely by not dropping the second sentence of the author's.


"Also South Asians are a very diverse group in it itself. As are East Asians. Many different languages and cultures…."

Europeans and their diaspora (aka White people) are also very diverse in the sense of many languages, many cultures, many phenotypes, varied histories, etc.

But we all know what capital-D "Diversity" means here, and it is not this.


What really pains me is that the topic of diversity must somehow crop up, even though we are discussing things that have absolutely no relevance to it at all.

What does whether someone is White or Asian have to do with their competency on the job?


I think the writer is insinuating that ethnicity is only one type of diversity. What if 95% of those Asian employees were male, for instance?


[flagged]


The full quote is:

"Employee demographics are “diverse,” sort of—I would guess, based on a visual survey of the cafeteria at lunchtime, that about a third of the staff is South Asian, a third is East Asian, and a third is white. The workers are overwhelmingly male."

That second sentence that the original poster here dropped gives you explicit clarification vs solely trying to infer what the quotation marks are doing.


As well as the overwhelming majority-male workforce, which is mentioned right below this in the article.


Rhetorical question, what do you think?


> In May, hundreds of industry leaders endorsed a statement that equated the risk of runaway A.I. with that of nuclear war. Huang didn’t sign it. Some economists have observed that the Industrial Revolution led to a relative decline in the global population of horses, and have wondered if A.I. might do the same to humans. “Horses have limited career options,” Huang said. “For example, horses can’t type.” As he finished eating, I expressed my concerns that, someday soon, I would feed my notes from our conversation into an intelligence engine, then watch as it produced structured, superior prose. Huang didn’t dismiss this possibility, but he assured me that I had a few years before my John Henry moment. “It will come for the fiction writers first,” he said. Then he tipped the waitress a thousand dollars, and stood up to accept his award.

If you ever wondered if you had what it took to be a NYer writer, consider if you could have provoked & recorded this vignette.


My all time favorite from Ian Parker's NYer profile of Jony Ive:

I asked Jeff Williams, the senior vice-president, if the Apple Watch seemed more purely Ive’s than previous company products. After a silence of twenty-five seconds, during which Apple made fifty thousand dollars in profit, he said, “Yes.”


Not sure if you're implying that it's easy or hard to be a NYer writer


I’m going to say “hard”. That’s a beautiful piece of prose, both because of the social skills needed to elicit the behavior from Huang and because of the narrative skill to tell the story well.


I want to stand up and applaud them. Wonderful… no, _ideal_ non-fiction prose.


People using the horse analogy often use it to illustrate that it could be bad for us. As if the horses loved hauling stuff for humans!


But now we don't need horses so there are far fewer of them, and they are so completely unimportant to the economy that if they caused us any significant inconvenience we could kill them all without it being a significant issue.

From our perspective the lives of horses look pretty horrible, particularly their lives in service to us. And I don't think it would be very comforting to have an AI explain to me that humans aren't fully aware of the world in the way that greater intelligences are, or that, because of the mental issues in humanity which cause them to do annoying things like bombing data centres or cutting fibre-optic cables, the human population needs to be significantly resized to a more sustainable number, or anything else a human-equivalent that actually cared might explain to a horse-equivalent that could actually listen.

The point is that horses don't control their own destiny, and they aren't even important enough for us to particularly care about said destiny in anything other than vagaries. Horses that interfere with human society in some way get put down. I don't want to be put down or have no say in the future, so we shouldn't attempt to build something which has a reasonable likelihood of being as much smarter than us as we are smarter than horses.


It's not hard to insert oneself into a story. It's harder not to do it.


So did he try to feed in his notes?


The horse anecdote has always struck me as stupid as hell. Horses were replaced because they have no political power, they are tools. As long as we live in a democracy people will not become horses.


People have political power ultimately because they have military and economic power. That would potentially change if politicians have artificial people under their control that can patrol the streets and enforce any law they want and man the factories.


I think the idea that this will come from the politicians and government is misguided.

This will come from the capitalist system, businesses will be driving the obsolescence of humans, not governments.

Once the owners of the means of production don't need humans in their workforce, what value do they bring?

It will be much harder to "revolt" against capitalism/businesses than a central government.

And the biggest danger here is that it means the end of the capitalist system as the best way to increase living standards. If capitalism doesn't work as a means of redistribution anymore, because humans have been devalued, what economic model can replace it and be as successful?

We're re-entering an era where economic systems are all going to favor a few classes, and impoverish the rest.


>This will come from the capitalist system, businesses will be driving the obsolescence of humans,

What's the point of owning the factories if there is no one to buy the crap you produce? They need us more than we need them, at least until they have their own spaceships with AI and robots to fluff their balls for them.


> what's the point of owning the factories if there is no one to buy the crap you produce

You produce things you yourself want to consume. Why produce things for others if they have nothing you need?

The only reason there is a need to sell to workers today is so you can pay them a salary to work in your factories, and then take a portion of that for yourself. When you don't need workers you don't need to sell to workers, you just take the entire output for yourself.

Edit: If you can't imagine how someone could consume that much, then think of a golf enthusiast who tells his AI factories and builders to prepare new golf courses at the same rate he can finish them, so he can play golf all day long and never replay a course.

Basically production will become almost 100% luxury focused instead of median consumer focused like it is today.


Most everything produced today is a luxury. Compare the contents of your home with that of one 300 years ago.


Numbers still go up for whatever human sits at the top, that's the demand. But the general principle of demand and supply exists even among AI internally. AI trading bots are their own economy, it would be the same for intelligent robot factories. They will have demand for things no different from a human that has a demand for food, robots need to be repaired, maintained. They will produce whatever they were told to produce. This is even without any human or human level intelligence involved. Once robots are truly as intelligent as humans they can even have entirely fanciful demands like buying cards or flowers.


There are 400 million guns in America. The citizen's military power is not going anywhere.


In relation to artificial robotic power it will go down


The citizens will have artificial robots too. Or do you think there's some reason why AI shouldn't be protected by the second amendment? Either AI can be weaponized - in which case it's protected by the second amendment - or it can't, in which case there isn't a threat from it in the first place.


I'm not ideologically opposed to what I think you're saying, but it is actually the case that many weapons are not allowed to be owned regardless of the second amendment. Nuclear weapons, for example. It doesn't matter if you build it entirely yourself using completely novel processes not owned by anyone, and acquired all inputs legally, it is still 100% illegal for a private citizen to own a nuclear weapon.

There is no legal reason why AIs couldn't be governed under the same principle.


You think the government is going to create an ai military to kill millions of civilians without being overthrown by the other hundreds of millions of armed civilians in this country?


I think the question is more, will there be a country somewhere that emerges as an AI/robotic super power.

You shouldn't just be worried about your own government here.

And the other issue is, why would the post-revolt government/businesses abandon AI/robotics?

The truth is, this won't happen over night. People will have less work over time, be paid less and less, the gap between classes will grow bigger, etc.

I guess another way to put it, take a country like India with lots of unemployment and high corruption, I don't think if they had guns they'd all revolt. You have to believe there's a point to revolting, an alternative that's better and worth the risk.


Maybe we’ll need to have a real life Butlerian Jihad.


This is ironically a simple issue of demographics. Whether the biological humans like it or not there will be a larger demographic of artificial entities that do more of the economic and military work and therefore will have more weight in all things political.


A democracy of whom? When AGI comes and the AIs are as smart or smarter than you, what's going to keep you enfranchised? Huang's suggestion is that it will be your ability to... type faster than a computer can. (Good luck with that.)


The 400 million guns owned by the people of this country. Humans will not go quietly into the night. They might lose some sense of purpose as they work less, but they will never accept a government that is not fundamentally based on humanism and they will never accept reduction in quality of life as GDP expands.


> Horses were replaced because they have no political power, they are tools.

How much political power do you think the purveyors of AI wield?

> As long as we live in a democracy people will not become horses.

It’s estimated that 2.3 billion people—about 29% of the global population—lived in a democracy in 2021 (https://www.visualcapitalist.com/cp/how-many-people-live-in-...)

And if you live in a democracy today, as you yourself state, it is not guaranteed to remain that way.


To wrench the metaphor, if horses had banded together, they could have fought back against humans and seized some power and rights for themselves. Instead they were conditioned to accept their bad conditions. The same is true of humans. Most humans accept terrible indignities and injustices routinely through apathy.


No, horses could not have banded together and fought back. Humans have almost never accepted a backslide in quality of living. All the examples I can think of are because of short lived war time austerity.


Here's an example of a decline in living that persisted in the long-term, although it was a long time ago:

Britain experienced an economic collapse in the early 5th century AD after the Roman Empire left. Economic activity and urban life declined, and buildings were abandoned. Their supply chain was suddenly much more limited, and localized to specific areas, whereas before, the Roman Empire had allowed them to trade with other countries more. Their standard of living didn't really change for the next few centuries. I won't compare beyond that.


What is your basis for this statement? We have absolutely _numerous_ examples of people throughout history receiving and accepting horrible conditions cast upon them. Even today. And as far as I can tell, we still have similar power structures that would enable those situations to continue occurring.


Name some examples of large, militarily powerful groups accepting living conditions that were worse than the previous generation experienced, not due to natural disasters. People have never accepted that. Sure they've had shitty living conditions, but they started shitty, they didn't get worse. Sure they've been enslaved, but the civilians in this country are heavily armed enough to fight back against a government they see as allowing living conditions to slip.


In many societies, the rights of women have declined from one period to the next, although I guess "women" don't constitute a "large, militarily powerful" group.

For example, in Korea during the Joseon/Chosŏn period, the status of women gradually declined due to Neo-Confucianist ideology. Women stopped inheriting property in the seventeenth century. They lost the right to initiate divorce, while men could still initiate divorce on seven grounds (disobedience to parents-in-law, failure to bear a son, adultery, jealousy, hereditary disease, larceny, and talkativeness). Widows lost the right to remarry, and were seen as inconvenient for the family. Women were forbidden from playing games, partying outdoors, and riding horses.

The decline of women's rights happened in other societies, too; I just happened to have a book about Korean history on my desk.


There's tons of examples:

North Korea, after the Korean war, their living conditions got way way worse than when under Japanese occupation, even though their military has improved.

Afghanistan, since it was taken over by the Taliban: even though they are mightier from a military standpoint, the living conditions have become worse.

The Soviet Union post-WWII: they came out of it as a military superpower, but the damage from the war meant their living conditions were way worse.


You keep coming back to the idea that armed civilians will not accept worse living conditions. If some calamity comes, like global warming removing the US's ability to grow enough food, or we run out of oil and there are no other energy sources, the guns aren't going to fix things. Are you going to have a revolt against the government because inflation is too high, or because China nuked us and ruined our country?


I thought the point of the horse analogy is that total wealth will increase but some large group of people will be so left out that their quality of living actually decreases. Like mass unemployment due to robots doing all the jobs. In that scenario I do believe there would be an armed revolt without the institution of a massive welfare state.


I can’t tell if you’re being very clever here.

At the risk of being whooshed, I’m going to say that the common man has a lot more in common with horses than billionaires.


Good point! People are already horses in Russia.


Horses were bread for that work as well, there was a direct 1-to-1 correlation between the two. Not so with humans.


So what exactly is it you do all day? How do you pay rent?


Survive. My survival is my own will. My parents didn't have me to benefit themselves in industry, they only want me to survive in the best capacity possible. I'm not sure how that distinction is not 100% obvious in 2023. It's not the 1700s in a planet covered in subsistence farms who need kids to work.

This is like handwringing over the decline of paper or pencils after computers came about, when paper didn't magically come about by itself. Horses were not wild animals in North America at any point in modern history, and barely in other continents. They were purely bred for industry, travel, and hobby. When industry and travel no longer needed them, we stopped producing them: a clear 1-to-1 correlation.

There's already been reductions in human populations without AI from leisure, abundance, culture, etc, - a very very different causality than horses.


> My parents didn't have me to benefit themselves in industry, they only want me to survive in the best capacity possible. I'm not sure how that distinction is not 100% obvious in 2023. It's not the 1700s in a planet covered in subsistence farms who need kids to work.

You make a very compelling argument. It made me think from another angle. Consider that there's evolutionary pressure on humans to want to procreate. So in a certain sense, we are conceived to serve the gene and to survive we have to labor.


And what is it you do to survive?

And when you're done doing whatever it is that is, do you go home to your home, your stable, if you like, to sleep and eat?

Are you truly so different from a horse in the scheme of things just because you spend a portion of your free time staring at a mobile phone?


I think they make weird horse related spelling errors


By “bread” do you mean “bred”?



I'm an Nvidia fanboy, but the lack of competition they have right now is concerning. Without competition, the price of computing will stop going down, which means only big players will be able to play. It means that most of the economy will be cut off from most of the benefits of AI[^1].

[^1] That could be a good thing if our AIs are destined to replace us...it means it will take longer, because the AIs will have to pay very high taxes to NVIDIA[^2].

[^2] Or to the AIs in control of NVIDIA.


There is competition in the AI accelerator field: Google's TPU, in addition to Intel Arc and AMD GPUs. The latter two have incomplete PyTorch and JAX support.


Yes, but those aren't whole ecosystems of hardware and software like what Nvidia has.


In the long run, none of that matters.


In the long run we're all dead.


Why is this guy portrayed as some kind of genius?


"This guy" founded the company in 1993 and is still leading it, quite capably, 30 years later. It is often said that many startups need one type of CEO at founding but it takes a different skill set to be CEO of an established company. To lead it from 3 people to 25,000+, to have both technical chops, business chops, and PR chops is a rare combination.

Back in the early 90s there were countless 3D chip startups, probably more than 35 (I was at two of them during those years) and they fell one by one until essentially only Nvidia and ATI remained. Ask the other 33 CEOs if they think Jensen was just lucky.

Jensen believed that GPGPU could be a new market and spent heavily to push the idea and to make it real. Significant design work and silicon area was spent adding GPGPU features to chips before there was any software or any market for it. He invested in building compiler teams and writing libraries. He invested millions each year hosting a GPGPU conference and driving the vision. No, he didn't write those libraries or design the silicon, but it was his vision and willingness to invest in it that deserves respect. He isn't some schlub that simply got lucky one time. He has been manufacturing "luck" for 30 years.


Lots of people have vision and can do all of the above. I think what you meant is that he was good at convincing investors?


State your opinion of what you think instead of pretending to put words in my mouth or know what is in my head.


My opinion is that it is common to confuse skill/genius with luck, timing, social network.



