Hacker News
Ask HN: Are Nvidia H100s that good?
4 points by jonathanlei 9 months ago | 5 comments
I was playing around with some different GPUs yesterday and put all of the results here: https://www.tensordock.com/benchmarks

I tried a vLLM inference workload and a ResNet training workload. The H100 consistently outperforms the A100 by about 45% to 80%, but it isn't that much faster…

Which workloads would see the biggest speedup? I'm really not seeing 3x+ on vLLM or on simple training workloads.
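For anyone reproducing this kind of comparison, a minimal timing harness is worth getting right before trusting the numbers. A sketch in pure Python (the `fake_step` workload is a stand-in, not the actual vLLM/ResNet benchmark; for CUDA work you'd also need the synchronization noted in the comment):

```python
import time
import statistics

def bench(step_fn, warmup=3, iters=10):
    """Time a workload callable and return the median step time in seconds."""
    for _ in range(warmup):
        step_fn()  # warmup runs: exclude compile/cache/allocator effects
    times = []
    for _ in range(iters):
        t0 = time.perf_counter()
        step_fn()
        # For CUDA workloads, call torch.cuda.synchronize() here before
        # reading the clock -- kernel launches are asynchronous, so timing
        # without a sync measures launch overhead, not kernel time.
        times.append(time.perf_counter() - t0)
    return statistics.median(times)

# Stand-in workload; replace with a real training or inference step.
def fake_step():
    sum(i * i for i in range(100_000))

median_s = bench(fake_step)
print(f"median step: {median_s * 1e3:.2f} ms")
```

Median rather than mean keeps a single slow outlier iteration (e.g. a background allocation) from skewing the result.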




You're doing inference, not training.

https://lambdalabs.com/gpu-benchmarks


Interesting summary. For throughput per watt, nothing beats the A100 40GB PCIe cards. In terms of throughput per dollar, 4090 cards are >8x better than the best H100.
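The throughput/$ comparison is just a ratio. A quick sketch with illustrative figures (the throughputs and prices below are assumptions for the arithmetic, not measurements from the linked benchmarks):

```python
# Hypothetical figures for illustration only: throughput in arbitrary
# units/sec on some fixed benchmark, and rough street prices in USD.
cards = {
    "H100 80GB": {"throughput": 1000.0, "price": 30000.0},
    "A100 80GB": {"throughput":  600.0, "price": 15000.0},
    "RTX 4090":  {"throughput":  450.0, "price":  1700.0},
}

perf_per_dollar = {
    name: c["throughput"] / c["price"] for name, c in cards.items()
}

for name, ppd in sorted(perf_per_dollar.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {ppd:.4f} units/s per $")

# With these assumed numbers the 4090's perf/$ works out to ~8x the H100's,
# despite the H100 having the highest absolute throughput.
ratio = perf_per_dollar["RTX 4090"] / perf_per_dollar["H100 80GB"]
```

The point of the exercise: a card can lead every absolute benchmark and still lose badly on perf/$ once datacenter pricing is in the denominator.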


Hmm, I did include a training workload as the second chart. My test workload was relatively small, so I suppose that if it spends comparatively less time on the GPU relative to the CPU (all tests used equal CPUs), the CPU becomes an equalizing factor.

But even looking at the Lambda Labs benchmarks, I'm surprised that the H100 PCIe barely outperforms the A100 SXM, for example, and it's meant to be a replacement for the A100 PCIe. A 20% generational improvement, yes, but I would have expected more.


>> My test workload was relatively small

This is the game changer: more memory and more interconnect speed = better.

>> H100 PCIE barely outperforms the A100 SXM

That's the better interconnect: it's only useful if you're actually using it. If you can fit your workload in the 80 GB of the H100, then the SXM variant becomes far less useful.
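A back-of-the-envelope memory check is usually enough to decide whether a workload fits on a single 80 GB card. A sketch (the 7B-parameter example and fp16 assumption are illustrative, not from the thread's benchmarks):

```python
def model_memory_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Rough weight-memory footprint: params x bytes per param (fp16 = 2)."""
    return n_params * bytes_per_param / 1e9

# e.g. a hypothetical 7B-parameter model in fp16:
weights_gb = model_memory_gb(7e9)  # 14 GB of weights alone
# Inference additionally needs KV cache and activations; training also needs
# gradients and optimizer state (with Adam in mixed precision, total memory
# is often several times the raw weight footprint).
fits_on_h100 = weights_gb < 80
```

If the whole working set fits on one card, the model never crosses the interconnect during the hot loop, which is exactly why the SXM's NVLink advantage stops mattering.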


Oops, I just noticed the link isn't clickable; here you go! https://tensordock.com/benchmarks



