What data are you using to back up this belief?

Benchmark performance continues to improve (see OpenAI o1).

The claim that there is nothing left to train on is objectively false. The big guys are building synthetic training sets, moving to multimodal, and are not worried about running out of data.

o1 shows that you can also throw more inference compute at problems to improve performance, which adds another dimension along which to scale models.
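
For concreteness, here's a minimal sketch of one generic way to convert extra inference compute into better answers: sample several candidates and majority-vote (self-consistency style). This is not o1's actual mechanism, which isn't public, and sample_answer() is a hypothetical stand-in for whatever model call you use:

    from collections import Counter

    def sample_answer(prompt: str) -> str:
        """Hypothetical: one stochastic model call that returns a short final answer."""
        raise NotImplementedError("plug in your model API here")

    def answer_with_more_compute(prompt: str, n_samples: int = 16) -> str:
        """Spend roughly n_samples times the compute of a single call, then majority-vote."""
        votes = Counter(sample_answer(prompt) for _ in range(n_samples))
        answer, _count = votes.most_common(1)[0]
        return answer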




> Benchmark performance continues to improve (see OpenAI o1).

That's not evidence of a step change.

> The big guys are building synthetic training sets

Yes, that helps to pre-train models, but it's not a replacement for real data.

> not worried about running out of data.

They totally are. The more data you train on, the more expensive training becomes, and the cost climbs far faster than linearly.
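
To put rough numbers on "more expensive": a common back-of-the-envelope (an approximation, not an exact cost model) is C ≈ 6·N·D training FLOPs for N parameters and D tokens. At fixed model size, doubling the data doubles compute; scale model and data together, as compute-optimal recipes do, and it roughly quadruples:

    def train_flops(n_params: float, n_tokens: float) -> float:
        """Approximate training FLOPs for a dense transformer: C ~= 6 * N * D."""
        return 6.0 * n_params * n_tokens

    base = train_flops(70e9, 1.4e12)              # illustrative 70B params, 1.4T tokens
    more_data_only = train_flops(70e9, 2.8e12)    # double the data, same model size
    scaled_together = train_flops(140e9, 2.8e12)  # double data and parameters together
    print(f"{more_data_only / base:.1f}x, {scaled_together / base:.1f}x")  # 2.0x, 4.0x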

> o1 shows that you can also throw more inference compute

I suspect that it's not actually just compute; it's also changes to training and model design.



