
If consumer cards can run the big models, then datacenter cards will be able to efficiently run the really big models.
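The "can it fit" question behind this comment is mostly weight-memory arithmetic. A minimal sketch of that back-of-the-envelope math (an assumption of mine, not from the comment — it ignores KV cache, activations, and framework overhead, which add several more GB in practice):

```python
def weight_vram_gb(params_billions: float, bits_per_param: int) -> float:
    """Rough VRAM needed for model weights alone, in GB."""
    bytes_total = params_billions * 1e9 * bits_per_param / 8
    return bytes_total / 1e9

# A 70B model at 4-bit quantization needs ~35 GB just for weights:
# too big for one 24 GB consumer card, but fine for a datacenter card.
print(round(weight_vram_gb(70, 4), 1))   # → 35.0
# A 7B model at fp16 needs ~14 GB, within reach of consumer cards.
print(round(weight_vram_gb(7, 16), 1))   # → 14.0
```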



For some of the tasks we're using LLMs for, 7B models perform very close to GPT-4 level, so it really depends on what value you're looking to get.



