Benchmark performance continues to improve (see OpenAI o1).
The claim that there is nothing left to train on is objectively false. The big guys are building synthetic training sets, moving to multimodal, and are not worried about running out of data.
o1 shows that you can also throw more inference compute at problems to improve performance, which gives another axis along which to scale models.