> Third, in developing our reasoning models, we’ve optimized somewhat less for math and computer science competition problems, and instead shifted focus towards real-world tasks that better reflect how businesses actually use LLMs.
Company: we find that optimizing for LeetCode level programming is not a good use of resources, and we should be training AI less on competition problems.
Also Company: we hire SWEs based on how much time they trained themselves on LeetCode
My manager explained to me that LeetCode is about proving you're willing to dance the dance. Same as PhD requirements, etc. - you probably won't be doing anything related, and definitely nothing related to LeetCode, but you display dedication and ability.
I kinda agree that this is probably the reason why companies are doing it. I don't like it, but that's beside the point.
Using Claude or other models in interviews probably won't be allowed any time soon, but I do use them at work. So it does make sense.
And it's also the reality of hiring practices for most VC-backed and public companies
Some try to do something more like "real-world" tasks, but those end up being either just toy problems or long take-homes
Personally, I feel the most important things to prioritize when hiring are: is the candidate going to get along with their teammates (colleagues, boss, etc.), and do they have the basic skills to learn the job relatively quickly once they start?
> Company: we find that optimizing for LeetCode level programming is not a good use of resources, and we should be training AI less on competition problems.
> Also Company: we hire SWEs based on how much time they trained themselves on LeetCode
/joke of course