In the span of a few months, with a small team of researchers and engineers, we trained a 70B parameter model from scratch on our own infrastructure that outperformed zero-shot GPT-4o on reasoning-related tasks. Using our cluster for high performance training meant that every component — InfiniBand, Ethernet, GPUs, and the nodes themselves — had to work perfectly. If even a single one of the over 12,000 connections was a little flaky, it could slow down the entire training run.
We're sharing open-source scripts and an end-to-end guide for infrastructure set-up that details the process of making everything work perfectly, and ensuring that it stays that way.
This is one part of a three-part toolkit on training a 70B model from scratch. The other two sections focus on evaluations and CARBS, our hyperparameter optimizer; you can find them here: https://imbue.com/research/70b-intro/
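To give a flavor of the kind of health checking the guide covers, here is a minimal sketch (not taken from the released scripts; which counters to watch and the zero-tolerance threshold are just illustrative choices) that scans the InfiniBand error counters Linux exposes under /sys/class/infiniband to flag potentially flaky links:

    # Minimal sketch: flag InfiniBand ports whose error counters are non-zero.
    # Assumes a Linux host with IB devices under /sys/class/infiniband; the set
    # of counters and the "any error is suspicious" threshold are illustrative.
    from pathlib import Path

    SUSPICIOUS_COUNTERS = {"symbol_error", "link_downed", "port_rcv_errors",
                           "local_link_integrity_errors"}

    def check_ib_ports(root="/sys/class/infiniband"):
        flaky = []
        for counter_file in Path(root).glob("*/ports/*/counters/*"):
            if counter_file.name not in SUSPICIOUS_COUNTERS:
                continue
            value = int(counter_file.read_text().strip())
            if value > 0:  # any errors at all are worth a closer look
                flaky.append((str(counter_file), value))
        return flaky

    if __name__ == "__main__":
        for path, value in check_ib_ports():
            print(f"{path}: {value}")

In practice you would run something like this periodically across every node and compare counters over time, since a link that accumulates errors slowly is exactly the kind of "a little flaky" that drags down a synchronous training run.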
Loved this and the detail - thank you. It’s the best inside detail on the engineering work behind these models I’ve ever read.
Two things I'm curious about. First, what difference, if any, would you imagine in training a 400B parameter model? It seems that you have plenty of VRAM across the cluster, but I want to know what you think.
Second, do you think this sort of architecture is the end game for model training? It seems sooo fragile. Are there better shared training mechanisms/architectures? Are there better cluster geometries?
> If even a single one of the over 12,000 connections was a little flaky, it could slow down the entire training run
It's an unusual enough sentence to be remarkable, and I was like "I've read this exact same sentence before". Indeed, this and most of the writeup seem to have appeared on Twitter, LinkedIn, and Reddit word for word. Is this just spam?
This is the kind of criticism that could only come from someone without much formal writing experience.
This is a very normal workflow: You write a full-length text detailing the project you worked on. You then trim it down to a summary which you share with a group of people X. You then trim it down into a different summary which you share with a group of people Y.
When you do this multiple times you unsurprisingly end up with some sentences that make it into multiple summaries because they're that important to the thesis!
(Also, the summaries on Twitter and Reddit aren't anything close to "most of the writeup"—the full text is 6000+ words!)
I'd rather a company copy and paste the same text in multiple places than have each of those places obfuscate the same information to appear novel (so I'd have to read all of them to realize they're all just the same info).
I don't understand your issue with this. Is it that they share their work in several places, or that they don't describe their work in a unique way every time?
Eh, seems like legit marketing to me. Yes, they are trying to sell you something, but they are doing that by releasing non-trivial research and open source code.
This is hella cool. Cisco has a new Nvidia collab with 800G per port. I don't recall if it was RoCE or not. The InfiniBand is accessible by the GPUs here? Beautiful.
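For what it's worth, a quick way to sanity-check that the GPUs really are talking over InfiniBand (a generic sketch, not Imbue's tooling) is to run a tiny NCCL all-reduce with NCCL_DEBUG=INFO and look for the IB transport in the log output:

    # Sketch: run a small all-reduce and let NCCL report which transport it picked.
    # Launch with something like:
    #   NCCL_DEBUG=INFO torchrun --nproc_per_node=8 nccl_check.py
    # The log should mention something like "NET/IB" if traffic is going over InfiniBand.
    import os
    import torch
    import torch.distributed as dist

    def main():
        dist.init_process_group(backend="nccl")
        local_rank = int(os.environ.get("LOCAL_RANK", "0"))
        torch.cuda.set_device(local_rank)
        x = torch.ones(1024 * 1024, device="cuda")
        dist.all_reduce(x)                  # sums the tensor across all ranks
        if dist.get_rank() == 0:
            print("all_reduce ok, element value =", x[0].item())
        dist.destroy_process_group()

    if __name__ == "__main__":
        main()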
Thank you for sharing all this. One of the more directly useful posts.
I'm not used to conducting these kinds of interviews and felt out of my depth. Please suggest questions that you felt should have been asked but weren't.
I am fascinated by the total electrical power drawn to build models - power and cooling, I guess. Do you have any numbers on that? (The point being that Zuckerberg suggested in a podcast that the next 1GW model was being planned - basically a data centre with a mid-sized power plant attached.)
This is such a valuable piece.
I've learned so much reading it! And your open-source code is great as well.
Some open questions I have:
1) Why did you choose to set up your own cluster? How was the experience with your cloud partner regarding faulty machines / switches?
2) Which of the considerations behind your choice of cluster architecture have proven the most valuable (apart from the all-to-all comms)?
3) Can you share a bit more about your logging infra, apart from the fact that it was Loki-based?
4) What necessitated the use of a local Docker registry? Did you use other images apart from nvidia-container-runtime?
Honest question: why is there so much PC hardware in the mix here? Why don't we have PCI + infiniband backends with GPUs and a little tiny orchestrating ARM controller and just let them all coordinate with each other? Is it just "momentum" from previous designs and/or lack of "market" for specialized GPU controllers?
Are you asking why pay extra for a CPU and RAM? Not everything can be done on a GPU, for example .png decompression. If you really analyzed your training code and preprocessed your data substantially, you could probably get away with very lightweight CPU/RAM resources, but I think the reality is that it's such a minor contribution to the overall system cost (GPUs are expensive) that wasting development cycles on that degree of optimization isn't strictly necessary. When you're a hyperscaler, you are likely chasing those fractions of a percent of cost efficiency, though. To use my original example, you would likely want to preprocess your .png to either .webp (multi-threaded lossless) or .jpeg (lossy), something like the conversion sketch below, but it likely wouldn't make sense to turn it into a GPU-decompressible format, as you would save on CPU cost during training but would pay more in storage (and maybe transfer) cost.
Edit: To be more clear, if the CPU work is bottlenecking training, you want to optimize that as much as possible by preprocessing your data/tweaking training scripts. What I'm discussing here is the gap between "fast enough" and "faster":
CPU is not fast enough for training < CPU is exactly fast enough for training < CPU is faster than needed for training
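To make the preprocessing point concrete, here is a rough sketch of the kind of one-time offline conversion meant above; the directories, the process count, and the choice of lossless WebP are all just illustrative:

    # Rough sketch: one-time offline conversion of .png training images to
    # lossless .webp so the per-step decode work during training is cheaper.
    # SRC/DST paths and the worker count are hypothetical.
    from multiprocessing import Pool
    from pathlib import Path
    from PIL import Image

    SRC = Path("data/png")        # hypothetical source directory
    DST = Path("data/webp")       # hypothetical output directory

    def convert(png_path: Path) -> None:
        out = DST / png_path.with_suffix(".webp").name
        Image.open(png_path).save(out, "WEBP", lossless=True)

    if __name__ == "__main__":
        DST.mkdir(parents=True, exist_ok=True)
        with Pool(processes=16) as pool:   # spread decode/encode across cores
            pool.map(convert, sorted(SRC.glob("*.png")))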
Cause when you have a quarter million dollars of GPUs on each machine, it is dumb to worry about a few thousand for the controlling hardware. Too risky to use something new.
Another problem is that all the hardware, drivers, and experience for GPUs are on PC. It would take a lot of work to get running on ARM, since you would be starting from scratch. Then more work to get it stable. All to save a little on the processor.
Keeping the GPUs fed is actually a rather demanding job in deep learning training. I do not have experience with LLM/NLP, but for image and audio workloads one can struggle to reach full utilization of even an RTX 2/3/4xxx GPU with a typical 4-8 core CPU. It does not take much to be bottlenecked by the CPU and/or IO.
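One way to see whether you are in that situation (a generic sketch, not tied to any particular training codebase; model, loader, and loss_fn are placeholders) is to time how long each step waits on the dataloader versus how long the GPU work takes:

    # Sketch: measure time spent waiting for the next batch vs. time spent in
    # the forward/backward pass. If wait_time dominates, the CPU/data pipeline
    # is the bottleneck. Omits the optimizer step for brevity.
    import time
    import torch

    def profile_epoch(model, loader, loss_fn, device="cuda"):
        wait_time, compute_time = 0.0, 0.0
        it = iter(loader)
        while True:
            t0 = time.perf_counter()
            try:
                batch, target = next(it)      # blocks if workers fall behind
            except StopIteration:
                break
            wait_time += time.perf_counter() - t0

            t1 = time.perf_counter()
            out = model(batch.to(device))
            loss = loss_fn(out, target.to(device))
            loss.backward()
            torch.cuda.synchronize()          # include the GPU work in the timing
            compute_time += time.perf_counter() - t1
        print(f"waiting on data: {wait_time:.1f}s, compute: {compute_time:.1f}s")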
I wonder if it's possible for a huge number of hobbyists to team up and train a model together in a distributed manner like seti@home or folding@home. Or does this kind of workload not really lend itself to that approach?
Those things were of course characterised by the ability to spread the work into pretty self-contained work packages. Not sure if that can be done with model training.
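Back-of-the-envelope arithmetic (assuming plain synchronous data parallelism and bf16 gradients; other schemes and compression change the numbers, but not the order of magnitude) suggests why home connections struggle with this:

    # Back-of-the-envelope: gradient volume each worker must exchange per
    # optimizer step under plain synchronous data parallelism. Assumes a 70B
    # parameter model and 2-byte (bf16) gradients; a 100 Mbit/s uplink is an
    # assumed typical home connection.
    params = 70e9
    bytes_per_grad = 2
    grad_bytes = params * bytes_per_grad            # ~140 GB per step
    home_uplink_bytes_per_s = 100e6 / 8             # 100 Mbit/s in bytes/s
    seconds_per_step = grad_bytes / home_uplink_bytes_per_s
    print(f"{grad_bytes / 1e9:.0f} GB of gradients per step")
    print(f"~{seconds_per_step / 3600:.1f} hours just to ship them at 100 Mbit/s")

That is roughly 140 GB of gradient traffic for every optimizer step, which is why this workload does not decompose into self-contained packages the way seti@home or folding@home did.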
Imbue was telling Voltage Park how to set up and wire their network and was booting from bare metal, so it's definitely lower level than what the big clouds provide access to.
Thoughts and questions welcome! :)