I see. Perhaps because Gemini is a larger model, it takes longer to generate completions. I would expect a larger model to perform better on larger codebases, though. It sounds like, for you, it's faster to work with a smaller model that gives shorter, more accurate completions rather than letting a bigger model guess at what you're trying to write.