
https://github.com/microsoft/BitNet

"The first release of bitnet.cpp is to support inference on CPUs. bitnet.cpp achieves speedups of 1.37x to 5.07x on ARM CPUs, with larger models experiencing greater performance gains. Additionally, it reduces energy consumption by 55.4% to 70.0%, further boosting overall efficiency. On x86 CPUs, speedups range from 2.37x to 6.17x with energy reductions between 71.9% to 82.2%. Furthermore, bitnet.cpp can run a 100B BitNet b1.58 model on a single CPU, achieving speeds comparable to human reading (5-7 tokens per second), significantly enhancing the potential for running LLMs on local devices. More details will be provided soon."


Damn. Seems almost too good to be true. Let’s see where this goes in two weeks.

Intel and AMD will be extremely happy.

Nvidia will be very unhappy.


Their GPUs will still be needed for training. As far as I understand, this only improves inference performance and efficiency.


