Probably just an extreme version of quantization-aware training? During training you round the values to the target precision, but keep them as floats.
Since rounding isn't differentiable, there are techniques to approximate the gradient through it as well.
> QAT backward pass typically uses straight-through estimators (STE), a mechanism to estimate the gradients flowing through non-smooth functions
https://pytorch.org/blog/quantization-aware-training/
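For intuition, here's a minimal sketch of what that fake-quantize + STE pattern looks like in PyTorch. The class and function names, scale value, and grid are made up for illustration; this is not the implementation from the linked post:

```python
import torch

class FakeQuantSTE(torch.autograd.Function):
    """Round to a coarse grid in the forward pass; use the straight-through
    estimator (STE) in the backward pass so gradients still flow."""

    @staticmethod
    def forward(ctx, x, scale):
        # Quantize: snap to the nearest grid point, but keep the result a float.
        return torch.round(x / scale) * scale

    @staticmethod
    def backward(ctx, grad_output):
        # STE: pretend round() was the identity, so the gradient passes through unchanged.
        return grad_output, None


def fake_quantize(x, scale=0.1):
    return FakeQuantSTE.apply(x, scale)


# The forward output is rounded, yet x still receives gradients.
x = torch.randn(4, requires_grad=True)
y = fake_quantize(x).sum()
y.backward()
print(x.grad)  # all ones: gradient went straight through the rounding
```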