I don't think it's literally "winner takes all" - I regularly cycle between Gemini, DeepSeek and Claude for coding tasks. I'm sure any GPT model would be fine too, and I could even fall back to Qwen in a pinch (which is exactly what I did recently in China, where I couldn't reach foreign servers).
Claude does have a slight edge in quality (which is why it's my default) but infrastructure/cost/speed are all relevant too. Different providers may focus on one at the expense of the others.
One interesting place we could end up is using large hosted models for planning/logic and handing execution off to local models.
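A rough sketch of what that split could look like - all of the endpoints, model names, and response shapes here are placeholders I made up rather than any specific provider's API, with the local side loosely modeled on a local runtime like Ollama:

```python
import requests

# Hypothetical endpoints; swap in whichever hosted API and local runtime you use.
HOSTED_API_URL = "https://api.example-hosted-llm.com/v1/chat"  # placeholder, not a real service
LOCAL_API_URL = "http://localhost:11434/api/generate"          # e.g. a local runtime such as Ollama


def plan_with_hosted_model(task: str) -> list[str]:
    """Ask the large hosted model to break a task into small, concrete steps."""
    resp = requests.post(
        HOSTED_API_URL,
        json={"model": "big-hosted-model",
              "prompt": f"Break this task into numbered steps:\n{task}"},
        timeout=60,
    )
    resp.raise_for_status()
    text = resp.json()["output"]  # response shape is an assumption
    return [line.strip() for line in text.splitlines() if line.strip()]


def execute_with_local_model(step: str) -> str:
    """Hand a single step to a cheaper/faster local model for execution."""
    resp = requests.post(
        LOCAL_API_URL,
        json={"model": "small-local-model", "prompt": step, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]  # field name assumed


if __name__ == "__main__":
    # Plan once with the expensive hosted model, then loop the cheap local one.
    for step in plan_with_hosted_model("Refactor the payments module and add tests"):
        print(step, "->", execute_with_local_model(step)[:80])
```

The appeal is that the expensive hosted call happens once per task, while the many small execution calls stay local (cheap, fast, and they keep working even when you can't reach foreign servers).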