I think the problem is that it’s just fundamentally difficult and very expensive to scale up LLM inference to millions of users. OpenAI did not invent this type of model, and it’s not like the tech is secret; they were just the first to scale it up. It’s hard to imagine a scrappy startup even being able to afford the compute cost of such an undertaking without significant venture backing.
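
To make the cost claim concrete, here is a rough back-of-envelope sketch. All of the numbers (user count, request volume, tokens per request, cost per 1k tokens) are illustrative assumptions, not figures from the comment:

  # Back-of-envelope inference cost; every number below is an assumed, illustrative value.
  users = 10_000_000            # assumed monthly active users
  requests_per_user = 30        # assumed requests per user per month
  tokens_per_request = 1_000    # assumed prompt + completion tokens per request
  cost_per_1k_tokens = 0.002    # assumed GPU serving cost in USD per 1,000 tokens

  monthly_tokens = users * requests_per_user * tokens_per_request
  monthly_cost = monthly_tokens / 1_000 * cost_per_1k_tokens
  print(f"~${monthly_cost:,.0f} per month in compute alone")
  # ~$600,000 per month under these assumptions, before training runs, staff, or traffic spikes

Even with these deliberately modest assumptions, the serving bill alone runs into hundreds of thousands of dollars a month, which is the kind of burn rate that is hard to carry without outside funding.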


