Hacker News

The easiest way is to use vLLM (https://github.com/vllm-project/vllm) to run it on a couple of A100s, and you can benchmark it using this library (https://github.com/EleutherAI/lm-evaluation-harness).
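As a minimal sketch of the vLLM side: `tensor_parallel_size=2` is what splits the model across two A100s. The model name here is only illustrative, and this assumes a machine with the GPUs and weights available.

```python
# Hedged sketch: offline inference with vLLM across two GPUs.
# "meta-llama/Llama-2-70b-hf" is an illustrative model name, not
# something specified in the thread.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-2-70b-hf",
    tensor_parallel_size=2,  # shard the model across a couple of A100s
)
params = SamplingParams(temperature=0.8, max_tokens=128)
outputs = llm.generate(["What is the capital of France?"], params)
print(outputs[0].outputs[0].text)
```

For the benchmarking half, lm-evaluation-harness has a vLLM backend, so the same setup can be evaluated directly, e.g. `lm_eval --model vllm --model_args pretrained=meta-llama/Llama-2-70b-hf,tensor_parallel_size=2 --tasks hellaswag` (task choice illustrative).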


In that regard, it’s even easier to use a single Mac Studio with sufficient RAM and llama.cpp, or even PyTorch, for inference.
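For the PyTorch route, Apple-silicon GPUs are exposed through the `"mps"` backend, so moving inference onto the Mac Studio's GPU is just a device change. A minimal sketch (a toy layer stands in for a real model; it falls back to CPU where MPS is unavailable):

```python
# Hedged sketch: PyTorch inference on Apple silicon via the MPS backend.
# A single Linear layer stands in for a real model here.
import torch

device = "mps" if torch.backends.mps.is_available() else "cpu"

model = torch.nn.Linear(16, 4).to(device)
x = torch.randn(1, 16, device=device)

with torch.no_grad():  # inference only, no autograd bookkeeping
    y = model(x)

print(tuple(y.shape))
```

The same `.to(device)` pattern applies to a full Hugging Face model; the unified memory on a Mac Studio is what lets large models fit without multi-GPU sharding.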



