
I'm not so certain. You can already run a GPT-4-quality model locally on a decent desktop, and GPT-3-quality models on low-powered chips, and data centers will benefit from economies of scale on top of that. Many third-party services are built on paid APIs whose pricing (judging from cases like Mistral, where some model weights are public and self-hosting costs can be estimated) appears to more than cover inference costs.
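For concreteness, "run locally" can be as simple as loading a quantized open-weights model with llama-cpp-python. A minimal sketch; the model path and settings are hypothetical, and any GGUF-quantized model works:

    # Minimal local inference sketch using llama-cpp-python.
    # Model path is hypothetical; substitute any GGUF-quantized open-weights model.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/mistral-7b-instruct.Q4_K_M.gguf",  # hypothetical path
        n_ctx=4096,       # context window
        n_gpu_layers=-1,  # offload every layer to the GPU if one is present
    )

    out = llm("Translate to French: 'The weather is nice today.'", max_tokens=64)
    print(out["choices"][0]["text"])

A 4-bit-quantized 7B model like this fits comfortably in under 8 GB of memory, which is what puts it in reach of an ordinary desktop.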

There are also plenty of uses for LLMs beyond generating hopefully-accurate answers, such as writing fiction or serving as foundation models for tasks like translation. Though we are definitely still in the "throw things at the wall and see what sticks" stage.
