
I don’t think you really disagree with GP. I think the argument is that we’ve peaked on “throw GPUs at it.”

We have all kinds of advancements to make training cheaper, models computationally cheaper, smaller, etc.

Once that happens (or has happened), it benefits OAI to throw up walls via legislation.

No way has training hit any kind of peak in cost, compute, or training-data efficiency.

Big tech advances, like the models of the last year or so, don't happen without a long tail of significant improvements based on fine-tuning, at a minimum.

The number of advances being announced by disparate groups, even individuals, also indicates that improvements will continue at a fast pace.


Yeah, it's a little bit RTFC to be honest.
