Llama 3 8B is pretty much the king of its model class right now, so yeah. Meta's instruct fine-tune is also a safe choice; really the only thing you have to play with is the quantization level. Llama 3 8B at 4-bit isn't great quality-wise, but 8-bit might be pushing it on a GTX 1080's 8 GB of VRAM. I'd almost consider offloading a few layers to the CPU just to avoid dealing with the 4-bit model.
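
If it helps, here's a minimal sketch of that offloading setup using llama-cpp-python with a GGUF quant of the instruct model. The model filename, layer count, and context size are placeholders, not a recommendation; the idea is just that n_gpu_layers controls how much sits in VRAM and the remainder runs on the CPU.

    # Minimal sketch: load an 8-bit GGUF quant of Llama 3 8B Instruct and
    # offload only part of it to the GPU so it fits in a GTX 1080's 8 GB.
    # Filename and n_gpu_layers are placeholders -- tune to your setup.
    from llama_cpp import Llama

    llm = Llama(
        model_path="Meta-Llama-3-8B-Instruct.Q8_0.gguf",  # 8-bit quant; Q4_K_M would be the 4-bit option
        n_gpu_layers=24,  # layers kept on the GPU; the rest stay on the CPU
        n_ctx=4096,       # context window
    )

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Explain the tradeoff between 4-bit and 8-bit quantization."}],
        max_tokens=128,
    )
    print(out["choices"][0]["message"]["content"])

Lowering n_gpu_layers frees VRAM at the cost of speed, which is the tradeoff being suggested above: keep the 8-bit weights for quality and let the CPU pick up whatever doesn't fit.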


