It got me thinking: what might it look like to natively train a binary quantized embedding model? You can't do calculus per se on {0, 1}, but maybe you could do something like randomly flipping bits with a probability weighted by the severity of the error during backprop… anyway, I'm sure there's plenty of literature on this.
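
For a sense of what that could look like: keep a float "latent" embedding, sample the bits from it in the forward pass, and pass the error straight back to the latent as if the sampling were the identity. A rough PyTorch sketch, where all names and shapes are made up:

    import torch

    def stochastic_binarize(x):
        # Sample each bit with probability sigmoid(x), then map {0, 1} -> {-1, +1}.
        p = torch.sigmoid(x)
        b = torch.bernoulli(p) * 2 - 1
        # Forward pass sees the sampled bits; backward pass treats this op as the
        # identity, so gradients flow to the float latent.
        return x + (b - x).detach()

    latent = torch.randn(4, 64, requires_grad=True)   # stand-in for an encoder output
    emb = stochastic_binarize(latent)                 # binarized embedding
    emb.sum().backward()
    print(latent.grad.shape)                          # torch.Size([4, 64])

This isn't quite the "flip with probability weighted by the error" idea, but it keeps the model trainable with ordinary backprop while the embedding itself is binary.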



Probably just an extreme version of quantization-aware training? During training you round values to the target range in the forward pass, but keep the underlying representation as a float.

Since rounding isn't differentiable, there are techniques to approximate its gradient as well.

> QAT backward pass typically uses straight-through estimators (STE), a mechanism to estimate the gradients flowing through non-smooth functions

https://pytorch.org/blog/quantization-aware-training/
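
To make "straight-through" concrete: the usual trick is a fake-quantize op that rounds in the forward pass but behaves like the identity in the backward pass. A minimal sketch of the idea (not the API from the post; bit width and scaling are arbitrary choices here):

    import torch

    def fake_quantize(x, n_bits=4):
        # Symmetric per-tensor scaling onto an integer grid, then back to float.
        scale = x.abs().max() / (2 ** (n_bits - 1) - 1)
        q = torch.round(x / scale) * scale
        # Straight-through estimator: forward sees the rounded values,
        # backward treats the op as the identity.
        return x + (q - x).detach()

    x = torch.randn(8, requires_grad=True)
    y = fake_quantize(x)
    y.sum().backward()
    print(x.grad)   # all ones: the rounding was invisible to the gradient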




