> while also keeping the maximum accuracy of the results
All of these papers/algos are for the ML hype train. ML algorithms are approximate anyway, so no one cares about the absolute accuracy of individual operations, only about the behavior of the overall pipeline (class labels shouldn't change, or at least not too much). Consider that a great many papers/techniques quantize down to 8 or even 4 bits (yes, sometimes even during training) purely for perf; see the sketch after this comment for what that looks like.
This whole research area should just be renamed to something like "approximate computing" so that people don't confuse it (and its goals) with classical numerical analysis.
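A minimal sketch of the kind of quantization being described, in plain NumPy; the symmetric per-tensor int8 scheme here is just one common choice, not anything prescribed by the comment above:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: store int8 codes plus one float scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float32 tensor; this error is what ML pipelines tolerate."""
    return q.astype(np.float32) * scale

w = np.random.randn(1024, 1024).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
print("max abs error:", np.abs(w - w_hat).max())  # small relative to the weight magnitudes
```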
In NN training, you often get better results at lower cost by increasing the number of parameters than by increasing parameter precision. Not sure we actually know why, but that is usually the case.
E.g., a 7B model at FP32 will perform worse than a 14B model at BF16, all else being equal.
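To make "all else being equal" concrete: the two configurations in that example occupy the same weight memory, so it's a fixed-budget comparison. A quick back-of-the-envelope check (weights only, ignoring activations and optimizer state; the parameter counts are the ones from the comment):

```python
# Rough weight-memory arithmetic for the example above.
params_7b, params_14b = 7e9, 14e9
bytes_fp32, bytes_bf16 = 4, 2
print(params_7b * bytes_fp32 / 2**30)   # ~26 GiB for 7B @ FP32
print(params_14b * bytes_bf16 / 2**30)  # ~26 GiB for 14B @ BF16 -- same footprint
```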
FP32 -> BF16 is really good, because modern GPUs are far faster at BF16 multiplications than at FP32 ones, not to mention the reduced memory and memory-bandwidth requirements. 4-bit quants are much more of a mixed bag. You get the memory savings, which often means the difference between a cheap consumer GPU and a much more expensive datacenter GPU, or between a single GPU and a multi-GPU node with all the interconnect that entails. Raw speed won't improve as dramatically, though, because you still need to dequantize to BF16 (and back) to do the actual computation. BF16 would probably leave you memory-bound anyway, and 4-bit effectively buys you more memory bandwidth, so you get some savings, but the difference won't be as dramatic.
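A toy NumPy sketch of why 4-bit helps memory more than raw compute, assuming a generic int4-code-plus-per-row-scale scheme (float32 stands in for BF16, which NumPy doesn't have); none of the names here come from any particular library:

```python
import numpy as np

# Weights live in memory as 4-bit codes (packed in practice), but get expanded
# to a wider float type right before the matmul, so the multiply itself isn't cheaper.
rng = np.random.default_rng(0)
out_f, in_f = 256, 256

q4 = rng.integers(0, 16, size=(out_f, in_f), dtype=np.uint8)  # 4-bit codes
scale = np.full((out_f, 1), 0.05, dtype=np.float32)           # per-row scale (one common scheme)

def dequant(q4, scale):
    # Map codes [0, 15] back to a signed range and rescale -- extra work on every forward pass.
    return (q4.astype(np.float32) - 8.0) * scale

x = rng.standard_normal((32, in_f)).astype(np.float32)
w = dequant(q4, scale)   # the "dequantize to BF16" step the comment mentions
y = x @ w.T              # the actual multiply still runs at full width
```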
Why surely? E.g., on GPU-style chips today you'd tile to the warp size (or half of it, or thereabouts) to target warp-cooperative matrix primitives (the so-called tensor cores). On AMD and NV that's something like 16x16 or 32x32 depending on the data type. That's not that far from 4x4, and it's not like all chips in the world have 32-lane warps. Anyway, if a trick is good enough and people start trying to shoehorn it into everything (not saying this one is), then the next gen (or the one after that, or the one after) will just grow units to support the trick; it's an expensive way to gain ground on MLPerf rankings.
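For illustration, a minimal NumPy sketch of what tiling a matmul to a 16x16 fragment shape looks like; the tile size is the only thing taken from the comment, the rest is a generic loop nest, not vendor code:

```python
import numpy as np

TILE = 16  # mirrors the fragment shapes tensor-core-style units expect

def tiled_matmul(a, b, tile=TILE):
    m, k = a.shape
    k2, n = b.shape
    assert k == k2 and m % tile == 0 and n % tile == 0 and k % tile == 0
    c = np.zeros((m, n), dtype=a.dtype)
    for i in range(0, m, tile):
        for j in range(0, n, tile):
            for p in range(0, k, tile):
                # Each (i, j, p) step is the tile x tile x tile unit of work a
                # warp-cooperative primitive would consume in one go.
                c[i:i+tile, j:j+tile] += a[i:i+tile, p:p+tile] @ b[p:p+tile, j:j+tile]
    return c

a = np.random.randn(64, 64).astype(np.float32)
b = np.random.randn(64, 64).astype(np.float32)
assert np.allclose(tiled_matmul(a, b), a @ b, atol=1e-4)
```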
Blackwell is doing matrix multiplies across thread blocks these days, and doesn't understand anything smaller than 64x16x16. I assume large matrix multiplies use the bigger shapes like 128x256x16 or whatever.
I get your point: neural networks can be treated as a kind of black box, and how the sequence of operations evolves might not depend all that much on the accuracy of each individual operation. At least, "accuracy at each step matters" is a good intuition, but it isn't rigorously proven, is it? We could check this empirically by training a neural net and adding some random noise at each step.
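A rough sketch of that experiment in PyTorch: train a small classifier while injecting Gaussian noise into every intermediate activation and compare final accuracy across noise levels. The model, synthetic data, and noise scales are all placeholders, not anything from the thread:

```python
import torch
import torch.nn as nn

class NoisyReLU(nn.Module):
    """ReLU followed by additive Gaussian noise -- the "random noise at each step"."""
    def __init__(self, sigma):
        super().__init__()
        self.sigma = sigma

    def forward(self, x):
        x = torch.relu(x)
        if self.training and self.sigma > 0:
            x = x + self.sigma * torch.randn_like(x)
        return x

def make_model(sigma):
    return nn.Sequential(nn.Linear(20, 64), NoisyReLU(sigma),
                         nn.Linear(64, 64), NoisyReLU(sigma),
                         nn.Linear(64, 2))

# Synthetic 2-class data so the sketch is self-contained.
x = torch.randn(2048, 20)
y = (x[:, :2].sum(dim=1) > 0).long()

for sigma in (0.0, 0.01, 0.1, 1.0):
    model = make_model(sigma)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(200):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        opt.step()
    model.eval()
    acc = (model(x).argmax(dim=1) == y).float().mean().item()
    print(f"noise sigma={sigma}: train accuracy={acc:.3f}")
```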