Exactly, let’s raise the regulatory burden on corporations to the equivalence point at which the producer of the most valuable product of the last 10 years considers dissolution. For society.
The difference between budget forcing on and off falls entirely within the (surprisingly large) confidence intervals of the evaluation datasets. Why spend more compute for no significant gain? It seems to distract from the high-value minimal reasoning fine-tuning set.
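To make "surprisingly large" concrete, here is a back-of-the-envelope sketch of how wide a 95% confidence interval on benchmark accuracy can be just from question count. The benchmark sizes below are assumptions for illustration (AIME24 has 30 problems, MATH500 has 500), not figures taken from the thread:

```python
import math

def accuracy_ci_halfwidth(p: float, n: int, z: float = 1.96) -> float:
    """Normal-approximation 95% CI half-width for accuracy p measured on n questions."""
    return z * math.sqrt(p * (1 - p) / n)

# Assumed benchmark sizes, for illustration only.
for name, n in [("AIME24", 30), ("MATH500", 500)]:
    hw = accuracy_ci_halfwidth(0.5, n)
    print(f"{name}: +/- {100 * hw:.1f} accuracy points at 50% accuracy")
```

On a 30-question benchmark the interval is roughly +/- 18 points, so a few points of difference from toggling budget forcing is easily noise.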
Also: in the main/first figure, why are r1 and o1 (the best-performing models in Table 1) omitted?
If you collect 59K examples and then pick the best 1K, is it really fair to call your approach simple? Sifting through 59K examples doesn't seem simple.
Good stuff though. Cool to see how minimal we can get when distilling good models (especially at the manageable 32B size).