Hacker News

Not per bit or per watt. LLaMA-30B is 16 GB and draws 40 watts on a 4090.
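A rough back-of-envelope check on that 16 GB figure, assuming it refers to ~4-bit quantized weights for a 30B-parameter model (the comment doesn't state the quantization, so that is an assumption):

```python
def model_size_gb(n_params: float, bits_per_param: float) -> float:
    """Approximate weight memory in GB (using 1 GB = 1e9 bytes)."""
    return n_params * bits_per_param / 8 / 1e9

# 4-bit weights: 30e9 params * 0.5 bytes = 15 GB, close to the quoted 16 GB
print(model_size_gb(30e9, 4))   # 15.0
# fp16 for comparison: would need 60 GB, far beyond a 4090's 24 GB VRAM
print(model_size_gb(30e9, 16))  # 60.0
```

The small gap between 15 GB and the quoted 16 GB is consistent with quantization overhead (scales/zero-points) and KV-cache memory on top of the raw weights.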



LLaMA isn't instruction-tuned.



