
It’s really interesting to see that the breadcrumb trail goes back that far.

So what are the most important insights in this paper compared to what was previously done?

I assume there’s more context to the story, and it’s not just that no one thought to apply the concepts to LLMs until now?




I don't think there is anything conceptually new in this work, other than that it is applied to LLMs.

But in fairness, getting these techniques to work at scale is no small feat. In my experience, quantization-aware training at these low bit depths was always finicky and required a very careful hand. I'd be interested to know whether it has become easier now that LLMs have so many more parameters.
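
For anyone who hasn't worked with it, the usual trick is to fake-quantize the weights in the forward pass and let gradients flow straight through the rounding step. The sketch below is a generic PyTorch illustration of that idea (straight-through estimator, symmetric per-tensor scaling), not the method from the paper; the STEQuantize and QuantLinear names are just mine.

    import torch
    import torch.nn as nn

    class STEQuantize(torch.autograd.Function):
        """Fake-quantize a tensor to a few symmetric levels; gradients pass straight through."""
        @staticmethod
        def forward(ctx, w, num_bits=2):
            qmax = 2 ** (num_bits - 1) - 1          # num_bits=2 -> levels {-1, 0, +1}
            scale = w.abs().max().clamp(min=1e-8) / qmax
            return torch.clamp(torch.round(w / scale), -qmax, qmax) * scale

        @staticmethod
        def backward(ctx, grad_output):
            # Straight-through estimator: treat round() as identity in the backward pass.
            return grad_output, None

    class QuantLinear(nn.Linear):
        """Linear layer that trains against its quantized weights on every forward pass."""
        def forward(self, x):
            w_q = STEQuantize.apply(self.weight, 2)
            return nn.functional.linear(x, w_q, self.bias)

Dropping QuantLinear in place of nn.Linear and training as usual is roughly what "quantization-aware" means here: the forward pass always sees the low-bit weights, so the optimizer has to learn weights that survive the rounding, which is exactly the part that gets finicky at very low bit depths.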

In any case, full kudos to the authors, and I'm glad to see people continuing this work.



