It is Llama-2-70b-chat. I quantized it to q2_K using llama.cpp's `quantize` tool.
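
For context, the invocation looks something like this (assuming the model has already been converted to a GGUF f16 file first; the paths here are illustrative):

    # quantize <input model> <output model> <type>
    ./quantize ./models/llama-2-70b-chat/ggml-model-f16.gguf \
        ./models/llama-2-70b-chat/ggml-model-q2_K.gguf q2_K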


So your experience isn’t representative of the work presented in this post? Or does llama.cpp use the same quantization technique?


I don't know; hopefully it will help set general expectations.



