One nice thing about this (and the new offerings from AMD) is that they will be using the "open accelerator module (OAM)" interface, which standardizes the connector used to mount them on baseboards, similar to Nvidia's SXM connections, which use MegArray connectors to their baseboards.
With Nvidia, the SXM connection pinouts have always been held proprietary and confidential. For example, P100's and V100's have standard PCI-e lanes connected to one of the two sides of their MegArray connectors, and if you know that pinout you could literally build PCI-e cards with SXM2/3 connectors to repurpose those now obsolete chips (this has been done by one person).
There are thousands, maybe tens of thousands of P100's you could pick up for literally <$50 apiece these days, which technically give you more Tflops/$ than anything on the market, but they are useless because their interface was never made open, has not been publicly reverse engineered, and the OEM baseboards (mainly Dell and Supermicro) are still hideously expensive outside China.
I'm one of those people who finds 'retro-super-computing' a cool hobby and thus the interfaces like OAM being open means that these devices may actually have a life for hobbyists in 8~10 years instead of being sent directly to the bins due to secret interfaces and obfuscated backplane specifications.
Pascal series are cheap because they are CUDA compute capability 6.0 and lack Tensor Cores. Volta (7.0) was the first to have Tensor Cores and in many cases is the bare minimum for modern/current stacks.
See flash attention, triton, etc as core enabling libraries. Not to mention all of the custom CUDA kernels all over the place. Take all of this and then stack layers on top of them...
Unfortunately there is famously "GPU poor vs GPU rich". Pascal puts you at "GPU destitute" (regardless of assembled VRAM) and outside of implementations like llama.cpp that go incredible and impressive lengths to support these old archs you will very quickly run into show-stopping issues that make you wish you just handed over the money for >= 7.0.
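For what it's worth, a minimal sketch of the check that modern stacks effectively boil down to, assuming a PyTorch build with CUDA (the torch.cuda calls are the real API; 7.0 is the Volta cutoff mentioned above):

    import torch

    # Compute capability: Pascal is 6.x, Volta (first gen with Tensor Cores) is 7.0.
    if torch.cuda.is_available():
        major, minor = torch.cuda.get_device_capability(0)
        if (major, minor) < (7, 0):
            print(f"sm_{major}{minor}: expect modern kernels (flash attention etc.) to refuse to run")
        else:
            print(f"sm_{major}{minor}: modern stacks should at least load")
    else:
        print("no CUDA device visible")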
I support any use of old hardware but this kind of reminds me of my "ancient" X5690 that has impressive performance (relatively speaking) but always bites me because it doesn't have AVX.
This is all very true for Machine-Learning research tasks, where, yes, if you want that latest PyTorch library function to work you need to be on the latest ML code.
But my work/fun is in CFD. One of the main codes I use for work was written when Pascal was the primary supported target. Other HPC stuff too that can be run via OpenCL and is still plenty compatible. Things compiled back then will still run today; it's not a moving target like ML has been.
Exactly. Demand for FP64 is significantly lower than for ML/AI.
Pascal isn’t incredibly cheap by comparison because it’s some secret hack. It’s cheap by comparison because most of the market (AI/ML) doesn’t want it. Speaking of which…
At the risk of “No True Scotsman” what qualifies as HPC gets interesting but just today I was at a Top500 site that was talking about their Volta system not being worth the power, which is relevant to parent comment but still problematic for reasons.
I mentioned llama.cpp because the /r/locallama crowd, etc has actually driven up the cost of used Pascal hardware because they treat it as a path to get VRAM on the cheap with their very very narrow use cases.
If we’re talking about getting a little FP64 for CFD that’s one thing. ML/AI is another. HPC is yet another.
Easier said than done. I've got a dual X5690 at home in Kiev, Ukraine and I just couldn't find anything to run on it 24x7. And it doesn't produce much heat idling. I mean at all.
All the sane and rational people are rooting for you here in the U.S. I’m sorry our government is garbage and aid hasn’t been coming through as expected. Hopefully Ukraine can stick it to that chicken-fucker in the Kremlin and retake Crimea too.
I didn’t have an X5690 because the TDP was too high for my server’s heatsinks, but I had 90W variants of the same generation. To me, two at idle produced noticeable heat, though not as much as four idling in a PowerEdge R910 did. The R910 idled at around 300W.
There’s always Folding@Home if you don’t mind the electric bill. Plex is another option. I know a guy running a massive Plex server that was on Westmere/Nehalem Xeons until I gave him my R720 with Haswell Xeons.
It looks pathetic indeed. Makes many people question: if THAT'S democracy, then maybe it's not worth fighting for.
> All the sane and rational people are rooting for you here in the U.S.
The same could be said about russian people (the sane and rational ones). But what do both peoples have in common? The answer is: currently both nations are helpless to change what their governments do.
> are rooting for you here in the U.S.
I know. We all truly know and greatly appreciate that. There would be no Ukraine if not for American weapons and help.
I really like this side to AMD. There's a strategic call somewhere high up to bias towards collaboration with other companies. Sharing the fabric specifications with broadcom was an amazing thing to see. It's not out of the question that we'll see single chips with chiplets made by different companies attached together.
Maybe they feel threatened by ARM on mobile and Intel on desktop / server. Companies that think they're first try to monopolize. Companies that think they're second try to cooperate.
IBM didn't want to rely solely on Intel when introducing the PC, so it forced Intel to share its architecture with another manufacturer, which turned out to be AMD. It's not like AMD stole it. The math coprocessor was in turn invented by AMD (Am9511, Am9512) and licensed by Intel (8231, 8232).
They certainly didn't steal it. But Intel didn't second-source Pentiums, or any chip with SIMD extensions. AMD reverse-engineered those fair and square.
The price is low because they're useless (except for replacing dead cards in a DGX); if you had a $40 PCIe AIC-to-SXM adapter, the price would go up a lot.
> I'm one of those people who finds 'retro-super-computing' a cool hobby and thus the interfaces like OAM being open means that these devices may actually have a life for hobbyists in 8~10 years instead of being sent directly to the bins due to secret interfaces and obfuscated backplane specifications.
Very cool hobby. It's also unfortunate how stringent e-waste rules lead to so much perfectly fine hardware being scrapped, and how the remainder is typically pulled apart to the board / module level for spares. That makes it very unlikely to stumble across more or less complete-ish systems.
I'm not sure the prices would go up that much. What would anyone buy that card for?
Yes, it has a decent memory bandwidth (~750 GB/s) and it runs CUDA. But it only has 16 GB and doesn't support tensor cores or low precision floats. It's in a weird place.
The P100 has amazing double precision (FP64) flops (due to a 1:2 FP ratio that got nixed on all other cards) and a higher memory bandwidth which made it a really standout GPU for scientific computing applications. Computational Fluid Dynamics, etc.
The P40 was aimed at the image and video cloud processing market I think, and thus the GDDR ram instead of HBM, so it got more VRAM but at much less bandwidth.
The PCIe P100 has 16GB VRAM and won't go below $160. Prices for these things would pick up if you could put the SXM ones in some sort of PCIe adapter.
As “humble” as NVIDIA’s CEO appears to be, NVIDIA the company (which he’s been running this whole time) made decision after decision with the simple intention of killing off its competition (ATI/AMD). Gameworks is my favorite example: essentially, if you wanted a video game to look as good as possible, you needed an NVIDIA GPU. Those same games played on AMD GPUs just didn’t look as good.
Now that video gaming is secondary (tertiary?) to Nvidia’s revenue stream, they could give a shit which brand gamers prefer. It’s small time now. All that matters is who companies are buying their GPUs from for AI stuff. Break down that CUDA wall and it’s open-season. I wonder how they plan to stave that off. It’s only a matter of time before people get tired of writing C++ code to interface with CUDA.
You don't need to use C++ to interface with CUDA or even write it.
A while ago NVIDIA and the GraalVM team demoed grCUDA which makes it easy to share memory with CUDA kernels and invoke them from any managed language that runs on GraalVM (which includes JIT compiled Python). Because it's integrated with the compiler the invocation overhead is low:
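To make the general point concrete, here's a minimal sketch of driving a raw CUDA kernel from Python with CuPy's RawKernel (not grCUDA, just one of several non-C++ host-side routes; assumes CuPy is installed for your CUDA version). The device code is still CUDA C, but there is no host-side C++ anywhere:

    import cupy as cp

    add = cp.RawKernel(r'''
    extern "C" __global__
    void add(const float* x, const float* y, float* out, int n) {
        int i = blockDim.x * blockIdx.x + threadIdx.x;
        if (i < n) out[i] = x[i] + y[i];
    }
    ''', 'add')

    n = 1 << 20
    x = cp.random.rand(n, dtype=cp.float32)
    y = cp.random.rand(n, dtype=cp.float32)
    out = cp.empty_like(x)

    threads = 256
    blocks = (n + threads - 1) // threads
    add((blocks,), (threads,), (x, y, out, cp.int32(n)))  # grid, block, kernel args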
So these alternatives exist, yes, but are they “production ready”? In other words, are they being used? My opinion is that while you can use another language, most companies for one reason or another are still using C++. I just don’t really know what the reason(s) are.
I think about other areas in tech where you can use whatever language, but it isn’t practical to do so. I can write a backend API server in Swift… or perhaps more relevant- I can use AMD’s ROCm to do… anything.
I had read their documents, such as the spec for the Big Basin JBOG, where everything is documented except the actual pinouts on the baseboard. Everything leading up to it and from it is there, but I never found the actual MegArray pinout connection to a single P100/V100.
But maybe there was more I missed. I'll take another look.
Upon further review... I think any actual base board schematics / pinouts touching the Nvidia hardware directly is indeed kept behind some sort of NDA or OEM license agreement and is specifically kept out of any of those documents for the Open Compute project JBOG rigs.
I think this is literally the impetus for their OAM spec, which makes the pinout open and shareable. Up until then, they had to keep the actual baseboard designs out of the public documents because that part was still controlled Nvidia IP.
Hmm interesting, I was linked to an OCP dropbox with a version that did have the connector pinouts. Maybe something someone shouldn’t have posted then…
I could find the OCP accelerator spec, but it looks like an open reimplementation, not the actual SXM2 interface. That said, the photos of SXM2-to-PCIe adapters I could find look almost entirely passive, so I don't think all hope is lost either.
Couldn't someone just buy one of those Chinese SXM2-to-PCIe adapter boards and test continuity to get the pinouts? I have one; that would take like 10 minutes.
I have a theory some big cloud provider moved a ton of racks from SXM2 P100's to SXM2 V100's (those were a thing) and thus orphaned an absolute ton of P100's without their baseboards.
Or these salvage operations just stripped the racks, kept the small stuff, and e-wasted the racks because they figured that was the more efficient use of their storage space and would be easier to sell, without really thinking it through.
A bit surprised that they're using HBM2e, which is what Nvidia A100 (80GB) used back in 2020. But Intel is using 8 stacks here, so Gaudi 3 achieves comparable total bandwidth (3.7TB/s) to H100 (3.4TB/s) which uses 5 stacks of HBM3. Hopefully the older HBM has better supply - HBM3 is hard to get right now!
The Gaudi 3 multi-chip package also looks interesting. I see 2 central compute dies, 8 HBM die stacks, and then 6 small dies interleaved between the HBM stacks - curious to know whether those are also functional, or just structural elements for mechanical support.
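Back-of-the-envelope on the per-stack numbers (a sketch; stack counts and totals as quoted above, nothing else assumed):

    # Per-stack bandwidth implied by the quoted totals.
    gaudi3_per_stack = 3.7 / 8   # TB/s per HBM2e stack, ~0.46
    h100_per_stack   = 3.4 / 5   # TB/s per HBM3 stack,  ~0.68

    print(f"Gaudi 3: {gaudi3_per_stack:.2f} TB/s per stack (HBM2e)")
    print(f"H100:    {h100_per_stack:.2f} TB/s per stack (HBM3)")
    # More, slower stacks vs fewer, faster ones; the totals end up comparable.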
> A bit surprised that they're using HBM2e, which is what Nvidia A100 (80GB) used back in 2020.
This is one of the secret recipes of Intel. They can use older tech and push it a little further to catch/surpass current gen tech until current gen becomes easier/cheaper to produce/acquire/integrate.
They did it with their first quad-core processors by merging two dual-core dies (Q6xxx series), and by creating absurdly clocked single-core processors aimed at very niche market segments.
We had not seen it recently because they were asleep at the wheel, and then got knocked unconscious by AMD.
Any other examples of this? I remember the secret sauce being a process advantage over the competition, exactly the opposite of making old tech outperform the state of the art.
Intel's surprisingly fast 14nm processors come to mind. Born of necessity, as they couldn't get their 10nm and later 7nm processes working for years. Despite that, Intel managed to keep up in single-core performance with newer 7nm AMD chips, although at a much higher power draw.
For like half of the 14nm Intel era, there was no competition for them in any CPU market segment. Intel was able to keep improving their 14nm process and get better at branch prediction, and moving things into hardware implementations is what kept performance improving.
This isn't the same as getting more out of the same over and over again.
Or today with Alder Lake and Raptor Lake (Refresh), where their CPUs made on Intel 7 (10nm) are on par with, if not slightly better than, AMD's offerings made on TSMC 5nm.
Back in the day, Intel was great for overclocking because all of their chips could run at significantly higher speeds and voltages than what was on the tin. This was because they basically just targeted the higher specs and sold the underperforming silicon as lower-tier products.
Don't know if this counts, but feels directionally similar.
No, this means Intel has woken up and is trying. There's no guarantee of anything. I'm more of an AMD person, but I want to see fierce competition, not monopoly, even if it's "my team's monopoly".
EPYC is actually pretty good. It’s true that Intel was sleeping, but AMD’s new architecture is a beast. It has better memory support, more PCIe lanes, and better overall system latency and throughput.
Intel’s TDP problems and AVX clock issues leave a bitter taste in the mouth.
The ABit BP6 bought me so much "cred" at LAN Parties back in the day - the only dual socket motherboard in the building, and paired with two Creative Voodoo 2 GPUs in SLI mode, that thing was a beast (for the late nineties).
I seem to recall that only Quake 2 or 3 was capable of actually using that second processor during a game, but that wasn't the point ;)
Well, overclocked I don't know, but out-of-the-box single-core performance completely sucked. And in 2007 not enough applications were multithreaded to make up for it with the extra cores.
It was fun to play with, but you'd also expect a higher-end desktop to, e.g., handle x264 video, which was not the case (search for Q6600 on the VideoLAN forum). And depressingly many cheaper CPUs of the time did it easily.
This is a bit snarky — but will Intel actually keep this product line alive for more than a few years? Having been bitten by building products around some of their non-x86 offerings where they killed good IP off and then failed to support it… I’m skeptical.
I truly do hope it is successful so we can have some alternative accelerators.
The real question is, how long does it actually have to hang around really? With the way this market is going, it probably only has to be supported in earnest for a few years by which point it'll be so far obsolete that everyone who matters will have moved on.
We're talking about the architecture, not the hardware model. What people want is to have a new, faster version in a few years that will run the same code written for this one.
Also, hardware has a lifecycle. At some point the old hardware isn't worth running in a large scale operation because it consumes more in electricity to run 24/7 than it would cost to replace with newer hardware. But then it falls into the hands of people who aren't going to run it 24/7, like hobbyists and students, which as a manufacturer you still want to support because that's how you get people to invest their time in your stuff instead of a competitor's.
What’s Next: Intel Gaudi 3 accelerators' momentum will be foundational for Falcon Shores, Intel’s next-generation graphics processing unit (GPU) for AI and high-performance computing (HPC). Falcon Shores will integrate the Intel Gaudi and Intel® Xe intellectual property (IP) with a single GPU programming interface built on the Intel® oneAPI specification.
I can't tell if your comment is sarcastic or genuine :). It goes to show how out of touch I am on AI hw and sw matters.
Yesterday I thought about installing and trying https://news.ycombinator.com/item?id=39372159 (Reor is an open-source AI note-taking app that runs models locally) and feeding it my markdown folder, but I stopped midway, asking myself "don't I need some kind of powerful GPU for that?". And now I am thinking "wait, should I wait for a `standard` pluggable AI computing hardware device? Is Intel Gaudi 3 something like that?".
I think it's a valid question. Intel has a habit of quietly killing off anything that doesn't immediately ship millions of units or that they're contractually obligated to support.
I'm not very involved in the broader topic, but isn't the shortage of hardware for AI-related workloads intense enough to grant them the benefit of the doubt?
Itanium only failed because AMD was allowed to come up with AMD64. Intel would have managed to push Itanium through no matter what if there had been no 64-bit alternative compatible with x86.
Itanium wasn't x86 compatible, it used the EPIC VLIW instruction set. It relied heavily on compiler optimization that never really materialized. I think it was called speculative precompilation or something like that. The Itanium suffered in two ways that had interplay with one another. The first is that it was very latency sensitive and non-deterministic fetches stalled it. The second was there often weren't enough parallel instructions to execute simultaneously. In both cases the processor spent a lot of time executing NOPs.
Modern CPUs have moved towards becoming simpler and more flexible in their execution with specialized hardware (GPUs, etc) for the more parallel and repetitive tasks that Itanium excelled at.
I doubt that Itanium would ever have trickled down to consumer-level devices. It was ill suited for that, because it was designed for highly parallel workloads. It was still struggling with server workloads at the time it was discontinued.
At Itanium's launch, an x86 Windows Server could use Physical Address Extension to support 128GB of RAM. In an alt timeline where x86-64 never happened, we'd likely have seen PAE trickle down to consumer-level operating systems to support more than 4GB of RAM. It was supported on all popular consumer x86 CPUs from Intel and AMD at the time.
The primary reasons we have the technologies we have today were wide availability and wide support. Itanium never achieved either. In a timeline without x86-64 there might have been room for IBM Power to compete with Xeon/Opteron/Itanium. The console wars would have still developed the underlying technologies used by Nvidia for its ML products, and Intel would likely be devoting resources to making Itanium an ML powerhouse.
We'd be stuck with x86, ARM or Power as a desktop option.
I haven’t read the article but my first question would be “what problem is this accelerator solving?” and if the answer is simply “you can AI without Nvidia”, that’s not good enough, because that’s the pot calling the kettle black. None of these companies is “altruistic” but between the three of them I expect AMD to be the nicest to its customers. Nvidia will squeeze the most money out of theirs, and Intel will leave theirs out to dry when corporate leadership decides it’s a failure.
> Twenty-four 200 gigabit (Gb) Ethernet ports are integrated into every Intel Gaudi 3 accelerator
WHAT‽ It's basically got the equivalent of a 24-port, 200-gigabit switch built into it. How does that make sense? Can you imagine stringing 24 Cat 8 cables between servers in a single rack? Wait: how do you even decide where those cables go? Do you buy 24 Gaudi 3 accelerators and run cables directly between every single one of them so they can all talk 200-gigabit Ethernet to each other?
Also: If you've got that many Cat 8 cables coming out the back of the thing how do you even access it? You'll have to unplug half of them (better keep track of which was connected to what port!) just to be able to grab the shell of the device in the rack. 24 ports is usually enough to take up the majority of horizontal space in the rack so maybe this thing requires a minimum of 2-4U just to use it? That would make more sense but not help in the density department.
I'm imagining a lot of orders for "a gradient" of colors of cables so the data center folks wiring the things can keep track of which cable is supposed to go where.
> The Gaudi 3 accelerators inside of the nodes are connected using the same OSFP links to the outside world as happened with the Gaudi 2 designs, but in this case the doubling of the speed means that Intel has had to add retimers between the Ethernet ports on the Gaudi 3 cards and the six 800 Gb/sec OSFP ports that come out of the back of the system board. Of the 24 ports on each Gaudi 3, 21 of them are used to make a high-bandwidth all-to-all network linking those Gaudi 3 devices tightly to each other. Like this:
> As you scale, you build a sub-cluster with sixteen of these eight-way Gaudi 3 nodes, with three leaf switches – generally based on the 51.2 Tb/sec “Tomahawk 5” StrataXGS switch ASICs from Broadcom, according to Medina – that have half of their 64 ports running at 800 GB/sec pointing down to the servers and half of their ports pointing up to the spine network. You need three leaf switches to do the trick:
> To get to 4,096 Gaudi 3 accelerators across 512 server nodes, you build 32 sub-clusters and you cross link the 96 leaf switches with a three banks of sixteen spine switches, which will give you three different paths to link any Gaudi 3 to any other Gaudi 3 through two layers of network. Like this:
The cabling works out neatly in the rack configurations they envision. The idea here is to use standard Ethernet instead of proprietary Infiniband (which Nvidia got from acquiring Mellanox). Because each accelerator can reach other accelerators via multiple paths that will (ideally) not be over-utilized, you will be able to perform large operations across them efficiently without needing to get especially optimized about how your software manages communication.
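To make the arithmetic in those quotes concrete, a quick sketch (all counts taken straight from the quoted description, nothing else assumed):

    accel_per_node  = 8    # Gaudi 3s per server node
    nodes_per_sub   = 16   # nodes per sub-cluster
    leaf_per_sub    = 3    # leaf switches per sub-cluster
    sub_clusters    = 32
    spine_banks     = 3
    spines_per_bank = 16

    accelerators = accel_per_node * nodes_per_sub * sub_clusters   # 4096
    nodes        = nodes_per_sub * sub_clusters                    # 512
    leaves       = leaf_per_sub * sub_clusters                     # 96
    spines       = spine_banks * spines_per_bank                   # 48

    # Inside a node: 21 of each card's 24 ports form the all-to-all;
    # the remaining 3 per card feed the OSFP ports (8 cards x 3 x 200G = 6 x 800G).
    print(accelerators, nodes, leaves, spines)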
I've heard Infiniband is incredibly annoying to procure (among other pain points), so lots of folks are very happy to get RoCE (Ethernet) working instead, even if it is a bit cumbersome.
> RDMA over Converged Ethernet (RoCE) or InfiniBand over Ethernet (IBoE)[1] is a network protocol which allows remote direct memory access (RDMA) over an Ethernet network. It does this by encapsulating an InfiniBand (IB) transport packet over Ethernet.
It will most likely use copper QSFP56 cables, since these interfaces are used either for in-rack or adjacent-rack direct attachments or for runs to the nearest switch.
0.5-1.5/2m copper cables are easily available and cheap, and 4-8m (and even longer) is also possible with copper but tends to be more expensive and harder to come by.
For Gaudi2, it looks like 21/24 ports are internal to the server. I highly doubt those have actual individual cables. Most likely they're just carried on PCBs like any other signal.
100GbE is only supported on twinax anyway, so Cat 8 is irrelevant here. The other 3 ports are probably QSFP or something.
Probably not. A 40GB Nvidia A100 is arguably reasonable for a workstation at $6,000. Depending on your definition, an 80GB A100 for $16,000 is still reasonable. I don't see this being cheaper than an 80GB A100. Probably a good bit more expensive, seeing as it has more RAM, compares itself favorably to the H100, and has enough compelling features that it probably doesn't have to (strongly) compete on price.
Surely NVidia’s pricing is more about what the market will bear than about intrinsic cost to build. Intel, being the underdog, should be willing to offer a discount just to get its foot in the door.
But if your competitor's price is dramatically above your cost, you can provide a huge discount as an incentive for customers to pay the transition cost to your system while still turning a tidy profit.
Macs don't support CUDA which means all that wonderful hardware will be useless when trying to do anything with AI for at least a few years. There's Metal but it has its own set of problems, biggest one being it isn't a drop in CUDA replacement.
I think you're right on the price, but just to give some false hope: I think newish HBM (and this is HBM2e, which is a little older) is around $15/GB, so for 128GB that's $1,920. There are some other COGS, but in theory they could sell this for like $3-4k and make some gross profit while getting some hobbyist mindshare / research code written for it.
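Spelled out (a rough sketch; the $15/GB figure is the guess above, not a quoted BOM number):

    hbm_price_per_gb = 15        # assumed rough HBM2e price, $/GB
    capacity_gb      = 128

    hbm_cost = hbm_price_per_gb * capacity_gb   # $1920
    for ask in (3000, 4000):
        print(f"sell at ${ask}: ~${ask - hbm_cost} left for the rest of the BOM and margin")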
I doubt they will, though; it might eat too much into profits from the non-PCIe variants.
128GB in one chip seems important with the rise of sparse architectures like MoE. Hopefully these are competitive with Nvidia's offerings, though in the end they will be competing for the same fab space as Nvidia if I'm not mistaken.
There's a number of scaled AMD deployments, including Lamini (https://www.lamini.ai/blog/lamini-amd-paving-the-road-to-gpu...) specifically for LLM's. There's also a number of HPC configurations, including the world's largest publicly disclosed supercomputer (Frontier) and Europe's largest supercomputer (LUMI) running on MI250x. Multiple teams have trained models on those HPC setups too.
Do you have any more evidence as to why these categorically don't work?
Just go have a look around the GitHub issues in their ROCm repositories. A few months back the top excuse re: AMD was that we're not supposed to use their "consumer" cards, but that the datacenter stuff is kosher. Well, guess what, we purchased their datacenter card, the MI50, and it's similarly screwed. Too many bugs in the kernel, kernel crashes, hangs, and the ROCm code is buggy / incomplete. When it works, it works for a short period of time, and yes, HBM memory is kind of nice, but the whole thing is not worth it. Some say MI210 and MI300 are better, but that's just wishful thinking, as all the bugs are in the software, kernel driver, and firmware. I have spent too many hours troubleshooting entry-level datacenter-grade Instinct cards, with no recourse from AMD whatsoever, to now pay ten-plus thousand for an MI210, a couple-year-old piece of underpowered hardware, and the MI300 is just unavailable.
Not even from cloud providers which should be telling enough.
We absolutely hammered the MI50 in internal testing for ages. Was solid as far as I can tell.
ROCm is sensitive to matching kernel version to driver version to userspace version. Staying very much on the kernel version from an official release and using the corresponding driver is drastically more robust than optimistically mixing different components. In particular, ROCm is released and tested as one large blob, and running that large blob on a slightly different kernel version can go very badly. Mixing things from GitHub with things from your package manager is also optimistic.
Imagine it as huge ball of code where cross version compatibility of pieces is totally untested.
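A minimal sanity-check sketch along those lines. It assumes the usual /opt/rocm/.info/version file is present (path can vary by install), and the "known good" pairs below are placeholders, not an official compatibility matrix:

    import platform
    from pathlib import Path

    # Placeholder "tested together" pairs; substitute whatever your ROCm
    # release notes actually list for your install.
    KNOWN_GOOD = {
        "5.7": ["5.15.0", "6.2.0"],
    }

    kernel = platform.release()                  # e.g. "6.2.0-39-generic"
    ver_file = Path("/opt/rocm/.info/version")   # present on typical ROCm installs
    rocm = ver_file.read_text().strip() if ver_file.exists() else "unknown"

    series = ".".join(rocm.split(".")[:2])
    ok = any(kernel.startswith(k) for k in KNOWN_GOOD.get(series, []))
    print(f"ROCm {rocm} on kernel {kernel}: "
          f"{'looks like a tested combo' if ok else 'untested combo, expect excitement'}")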
I would run simple llama.cpp batch jobs for 10 minutes when it would suddenly fail, and require a restart. Random VM_L2_PROTECTION_FAULT in dmesg, something having to do with doorbells. I did report this, never heard back from them.
Did you run on the blessed Ubuntu version with the blessed kernel version and the blessed driver version? As otherwise you really are in a development branch.
If you can point me to a repro I'll add it to my todo list. You can probably tag me in the github issue if that's where you reported it.
I feel like this goes both ways. You also don't want to have to run bleeding edge for everything because there are so many bugs in things. You kind of want known stable versions to at least base yourself off of.
Hey man have seen you around here, very knowledgeable, thanks for your input!
What's your take on projects like https://github.com/corundum/corundum ? I'm trying to get better at FPGA design, perhaps learn PCIe and such, but Vivado is intimidating (as opposed to Yosys/nextpnr, which you seem to hate). Should I just get involved with a project like this to acclimatise somewhat?
> Vivado is intimidating (as opposed to Yosys/nextpnr which you seem to hate)
i never said i hated yosys/nextpnr? i said somewhere that yosys makes the uber strange decision to use C++ as effectively a scripting language ie gluing and scheduling "passes" together - like they seemed to make the firm decision to diverge from tcl but diverged into absurd territory. i wish yosys were great because it's open source and then i could solve my own problems as they occurred. but it's not great and i doubt it ever will be because building logic synthesis, techmapping, timing analysis, place and route, etc. is just too many extremely hard problems for OSS.
all the vendor tools suck. it's just a fact that both big fpga manufacturers have completely shit software devs working on those tools. the only tools i've heard are decent are the very expensive suites from cadence/siemens/synopsys but i have yet to be in a place that has licenses (neither school nor day job - at least not in my team). and mind you, you will still need to feed the RTL or netlist or whatever those tools generate into vivado (so you're still fucked).
so i don't have advice for you on RTL - i moved one level up (ISA, compilers, etc.) primarily because i could not effectively learn by myself i.e., without going to "apprentice" under someone that just has enough experience to navigate around the potholes (because fundamentally if that's what it takes to learn then you're basically working on ineluctably entrenched tech).
Yeah, this has stopped me from trying anything with them. They need to lead with their consumer cards so that developers can test/build/evaluate/gain trust locally and then their enterprise offerings need to 100% guarantee that the stuff developers worked on will work in the data center. I keep hoping to see this but every time I look it isn't there. There is way more support for apple silicon out there than ROCm and that has no path to enterprise. AMD is missing the boat.
In fairness it wasn't Apple who implemented the non-mac uses of their hardware.
AMD's driver is in your kernel, all the userspace is on GitHub. The ISA is documented. It's entirely possible to treat the ASICs as mass market subsidized floating point machines and run your own code on them.
Modulo firmware. I'm vaguely on the path to working out what's going on there. Changing that without talking to the hardware guys in real time might be rather difficult even with the code available though.
You are ignoring that AMD doesn't use an intermediate representation, and every ROCm driver is basically compiling to a GPU-specific ISA. It wouldn't surprise me if there are bugs they have fixed for one ISA that they didn't bother porting to the others. The other problem is that their firmware most likely contains classic C bugs like buffer overflows, undefined behaviour, or deadlocks.
This is sort of true. Graphics compiles to spir-v, moves that around as deployment, then runs it through llvm to create the compiled shaders. Compute doesn't bother with spir-v (to the distress of some of our engineers) and moves llvm IR around instead. That goes through the llvm backend which does mostly the same stuff for each target machine. There probably are some bugs that were fixed on one machine and accidentally missed on another - the compiler is quite branchy - but it's nothing like as bad as a separate codebase per ISA. Nvidia has a specific ISA per card too, they just expose PTX and SASS as abstractions over it.
I haven't found the firmware source code yet - digging through confluence and perforce tries my patience and I'm supposed to be working on llvm - but I hear it's written in assembly, where one of the hurdles to open sourcing it is the assembler is proprietary. I suspect there's some common information shared with the hardware description language (tcl and verilog or whatever they're using). To the extent that turns out to be true, it'll be immune to C style undefined behaviour, but I wouldn't bet on it being free from buffer overflows.
You are right, AMD should do more with consumer cards, but I understand why they aren't today. It is a big ship, they've really only started changing course as of last Oct/Nov, before the release of MI300x in Dec. If you have limited resources and a whole culture to change, you have to give them time to fix that.
That said, if you're on the inside, like I am, and you talk to people at AMD (just got off two separate back to back calls with them), rest assured, they are dedicated to making this stuff work.
Part of that is to build a developer flywheel by making their top end hardware available to end users. That's where my company Hot Aisle comes into play. Something that wasn't available before outside of the HPC markets, is now going to be made available.
I look forward to seeing it. NVIDIA needs real competition for their own benefit if not the market as a whole. I want a richer ecosystem where Intel, AMD, NVIDIA and other players all join in with the winner being the consumer. From a selfish point of view I also want to do more home experimentation. LLMs are so new that you can make breakthroughs without a huge team but it really helps to have hardware to make it easier to play with ideas. Consumer card memory limitations are hurting that right now.
Yeah, I think AMD will really struggle with the cloud providers.
Even Nvidia GPU's are tricky to sandbox, and it sounds like the AMD cards are really easy for the tenant to break (or at least force a restart of the underlying host).
AWS does have a Gaudi instance which is interesting, but overall I don't see why Azure, AWS & Google would deploy AMD or Intel GPU's at scale vs their own chips.
They need some competitor to Nvidia to help negotiate, but if its going to be a painful software support story suited to only a few enterprise customers, why not do it with your own chip?
We are the 4th non-hyperscaler business on the planet to even get access to MI300x and we just got it in early March. From what I understand, hyperscalers have had fantastic uptake of this hardware.
I find it hard to believe "everyone" comes away with these opinions.
I wonder if, with LLMs being able to spit out perfect corpo-speak, everyone will recenter on succinct, short "here's the gist" writing, as the long version becomes associated with cheap automated output.
Has anyone here bought an AI accelerator to run their AI SaaS service from their home for customers, instead of trying to make a profit on top of OpenAI or Replicate?
Seems like an okay $8,000 - $30,000 investment, and bare metal server maintenance isn’t that complicated these days.
Only for networking, not for anything measured inside a node. Disk bandwidth, cache bandwidth, and memory bandwidth are nearly always measured in bytes/sec (bandwidth), or ns/cache line or similar (which is a mix of bandwidth and latency).
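The factor-of-eight trap in one place (a sketch using numbers quoted elsewhere in the thread):

    # Networking is quoted in bits/s, everything inside the node in bytes/s.
    port_gbit = 200                 # one Gaudi 3 Ethernet port, Gb/s
    ports     = 24

    nic_total_GBps = port_gbit * ports / 8   # 600 GB/s aggregate NIC bandwidth
    hbm_GBps       = 3700                    # the quoted 3.7 TB/s of HBM bandwidth

    print(f"all 24 ports: {nic_total_GBps:.0f} GB/s vs HBM: {hbm_GBps} GB/s")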
The design, and the decision to fab it at TSMC, came well ahead of Intel's foundry services offering. (And it's not like Intel had the extra capacity planned at the time for Intel's own GPU.)
Process matters. Intel was ahead for a long time, and has been behind for a long time. Perhaps they will be ahead again, but maybe not. I’d rather see them competitive.
I wonder if someone knowledgeable could comment on OneAPI vs Cuda. I feel like if Intel is going to be a serious competitor to Nvidia, both software and hardware are going to be equally important.
I'm not familiar with the particulars of OneAPI, but it's just a matter of rewriting CUDA kernels into OneAPI. This is pretty trivial for the vast majority of small (<5 LoC) kernels. Unlike AMD, it looks like they're serious about dogfooding their own chips, and they have a much better reputation for their driver quality.
All the dev work at AMD is on our own hardware. Even things like the corporate laptops are ryzen based. The first gen ryzen laptop I got was terrible but it wasn't intel. We also do things like develop ROCm on the non-qualified cards and build our tools with our tools. It would be crazy not to.
Like opencl was an open alternative? Or HSA? Or HIP? Or openmp? Or spir-v? There are lots of GPU programming languages for amdgpu.
Opencl and hip compilers are in llvm trunk, just bring a runtime from GitHub. Openmp likewise though with much more of the runtime in trunk, just bring libhsa.so from GitHub or debian repos. All of it open source.
There's also a bunch of machine learning stuff. Pytorch and Triton, maybe others. And non-C++ languages, notably Fortran, but Julia and Mojo have mostly third party implementations as well.
I don't know what the UXL foundation is. I do know what sycl is, but aside from using code from intel I don't see what it brings over any of the other single source languages.
At some point sycl will probably be implemented on the llvm offload infra Johannes is currently deriving from the openmp runtime, maybe by intel or maybe by one of my colleagues, at which point I expect people to continue using cuda and complaining about amdgpu. It seems very clear to me that extra GPU languages aren't the solution to people buying everything from Nvidia.
Yes that's why I qualified "serious" dogfooding. Of course you use your hardware for your own development work, but it's clearly not enough given that showstopper driver issues are going unfixed for half a year.
(reply to Zoomer from further down, moving up because I ended up writing a lot)
This experience is largely a misalignment between what AMD thinks their product is and what the Linux world thinks software is. My pet theory is it's a holdover from the GPU being primarily a games console product as that's what kept the company alive through the recent dark times. There's money now but some of the best practices are sticky.
In games dev, you ship an SDK. Speaking from personal experience here, as I was on the PlayStation dev tools team. That's a compiler, debugger, profiler, language runtimes, a bunch of math libs, etc., all packaged together with a single version number for the whole thing. A games studio downloads that and uses it for the entire dev cycle of the game. They've noticed that compiler bugs move, so each game essentially becomes dependent on the "characteristics" of that toolchain, and persuading them to gamble on a toolchain upgrade mid-cycle requires some feature they really badly want.
HPC has some things in common with this. You "module load rocm-5.2" or whatever and now your whole environment is that particular toolchain release. That's where the math libraries are and where the compiler is.
With that context, the internal testing process makes a lot of sense. At some point AMD picks a target OS. I think it's literally "LTS Ubuntu" or a RedHat release or similar. Something that is already available anyway. That gets installed on a lot of CI machines, test machines, developer machines. Most of the boxes I can ssh into have Ubuntu on them. The userspace details don't matter much but what this does do is fix the kernel version for a given release number. Possibly to one of two similar kernel versions. Then there's a multiple month dev and testing process, all on that kernel.
Testing involves some largish number of programs that customers care about. Whatever they're running on the clusters, or some AI things these days. It also involves a lot of performance testing where things getting slower is a bug. The release team are very clear on things not going out the door if things are broken or slower and it's not a fun time to have your commit from months ago pulled out of the bisection as the root cause. That as-shipped configuration - kernel 5.whatever, the driver you build yourself as opposed to the one that kernel shipped with, the ROCm userspace version 4.1 or so - taken together is pretty solid. It sometimes falls over in the field anyway when running applications that aren't in the internal testing set but users of it don't seem anything like as cross as the HN crowd.
This pretty much gives you the discrepancy in user experience. If you've got a rocm release running on one of the HPC machines, or you've got a gaming SDK on a specific console version, things work fairly well and because it's a fixed point things that don't work can be patched around.
In contrast, you can take whatever linux kernel you like and use the amdkfd driver in that, combined with whatever ROCm packages your distribution has bundled. Last I looked it was ROCm 5.2 in debian, lightly patched. A colleague runs Arch which I think is more recent. Gentoo will be different again. I don't know about the others. That kernel probably isn't from the magic list of hammered on under testing. The driver definitely isn't. The driver people work largely upstream but the gitlab fork can be quite divergent from it, much like the rocm llvm can be quite divergent from the upstream llvm.
So when you take the happy path on Linux and use whatever kernel you happen to have installed, that's a codebase that went through whatever testing the kernel project does on the driver and reflects the fraction of a kernel dev branch that was upstream at that point in time. Sometimes it's very stable, sometimes it's really not. I stubbornly refuse to use the binary release of ROCm and use whatever driver is in Debian testing and occasionally have a bad time with stability as a result. But that's because I'm deliberately running a bleeding edge dev build because bugs I stumble across have a chance of me fixing them before users run into it.
I don't think people using apt-get install rocm necessarily know whether they're using a kernel that the userspace is expected to work with or a dev version of excitement since they look the same. The documentation says to use the approved linux release - some Ubuntu flavour with a specific version number - but doesn't draw much attention to the expected experience if you ignore that command.
This is strongly related to the "approved cards list" that HN also hates. It literally means the release testing passed on the cards in that list, and the release testing was not run on the other ones. So you're back into the YMMV region, along with people like me stubbornly running non-approved gaming hardware on non-approved kernels with a bunch of code I built from source using a different compiler to the one used for the production binaries.
None of this is remotely apparent to me from our documentation but it does follow pretty directly from the games dev / HPC design space.
If your metric is memory bandwidth or memory size, then this announcement gives you some concrete information. But - suppose my metric for performance is matrix-multiply-add (or just matrix-multiply) bandwidth. What MMA primitives does Gaudi offer (i.e. type combinations and matrix dimension combinations), and how many of such ops per second, in practice? The linked page says "64,000 in parallel", but that does not actually tell me much.
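The number that would actually answer this is the peak rate, which for any matrix engine comes down to roughly: peak FLOP/s = 2 x (MACs issued per cycle) x clock. A sketch with placeholder values; the clock is not something this page states, and I'm taking "64,000 in parallel" at face value as a per-cycle MAC count, which may not be what Intel means:

    # Generic peak-throughput formula for a matrix engine:
    #   peak FLOP/s = 2 * MACs_per_cycle * clock_Hz   (2 = multiply + add)
    macs_per_cycle = 64_000     # the page's "64,000 in parallel", read literally
    clock_ghz      = 1.5        # placeholder, not a disclosed Gaudi 3 number

    peak_tflops = 2 * macs_per_cycle * clock_ghz * 1e9 / 1e12
    print(f"~{peak_tflops:.0f} TFLOP/s at {clock_ghz} GHz, per whatever unit '64,000' refers to")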
Gaudi 3 has PCIe 4.0 (vs. PCIe 5.0 on the H100, which gives the H100 2x the bandwidth). Probably not a deal-breaker, but it's strange for Intel (of all vendors) to lag behind in PCIe.
Good point, it's built on TSMC while Intel is pushing to become the #2 foundry. Probably it's because Gaudi was made by an Israeli company Intel acquired in 2019 (not an internal project). Who knows.
I liked my 5700XT. That seems to be $200 now. Ran arbitrary code on it just fine. Lots of machine learning seems to be obsessed with amount of memory though and increasing that is likely to increase the price. Also HN doesn't like ROCm much, so there's that.
What else is on the BOM? Volume? At that price you likely want to use whatever resources are on the SoC that runs the thing and work around that. Feel free to e-mail me.
>Intel Gaudi software integrates the PyTorch framework and provides optimized Hugging Face community-based models – the most-common AI framework for GenAI developers today. This allows GenAI developers to operate at a high abstraction level for ease of use and productivity and ease of model porting across hardware types.
What is the programming interface here? This is not CUDA, right? So how is this being done?
Intel makes oneAPI. They have corresponding toolkits to cuDNN like oneMKL, oneDNN, etc.
However the Gaudi chips are built on top of SynapseAI, another API from before the Habana acquisition. I don’t know if there’s a plan to support oneAPI on Gaudi, but it doesn’t look like it at the moment.
I feel a little misled by the speedup numbers. They are comparing lower-batch-size H100/H200 numbers to higher-batch-size Gaudi 3 numbers for throughput (which is heavily improved by increasing batch size). I feel like there are some inference scenarios where this is better, but it's really hard to tell from the numbers in the paper.
> Twenty-four 200 gigabit (Gb) Ethernet ports are integrated into every Intel Gaudi 3 accelerator
How much does a single 200Gbit active (or passive) fiber cable cost? Probably thousands of dollars, making even the cabling for each card Very Expensive. Never mind the network switches themselves.
My off-the-cuff take: AOC's are a specific kind of fiber optic cable, typically used in data center applications for 100Gbit+ connections. The alternate types of fiber are typically referred to as passive fiber cables, e.g. simplex or duplex, single-mode (single fiber strands, usually in a yellow jacket) or multi-mode (multiple fiber strands, usually in an orange jacket). Each type of passive fiber cable has specific applications and requires matching transceivers, whereas AOCs are self-contained with the transceivers pre-terminated on.
If you search for "AOC Fiber", lots of resources will pop up. FS.com is one helpful resource.
> Active optical cable (AOC) can be defined as an optical fiber jumper cable terminated with optical transceivers on both ends. It uses electrical-to-optical conversion on the cable ends to improve speed and distance performance of the cable without sacrificing compatibility with standard electrical interfaces.
Honestly, I thought the same thing upon reading the name. I'm aware of the reference to Antoni Gaudí, but having the name sound so close to gaudy seems a bit unfortunate. Surely they must've had better options? Then again I don't know how these sorts of names get decided anymore.
'Gaudi' is properly pronounced Ga-oo-DEE in his native Catalan, whereas (in my dialect) 'gaudy' is pronounced GAW-dee. My guess is Intel wasn't even thinking about 'gaudy' because they were thinking about "famous architects" or whatever the naming pool was. Although, I had heard that the 'gaudy' came from the architect's name because of what people thought of his work. (I'm not sure this is correct, it was just my first introduction to the word.)