Thanks for this tool, Paul. I saw you talk about it a few times here and I was totally sleeping on it (to be fair, there was a new tool every day). The file ingestion is the nuts.
Is the new, huge context size going to change how internals work? Seems like if you were being careful and efficient before, maybe that can be traded for quality somehow.
Aider is helping me write Rust, which I'm very new to. It's doing a reasonable job, despite people saying GPT-4 does a bad job at Rust. It's a very small project--LLM based, of course! It makes frequent errors, but it can fix them if I just feed it `/run cargo check`--sometimes over a few iterations.
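That check-and-fix loop generalizes nicely: run the checker, and while it reports errors, hand the output back to the model and try again. Here's a minimal sketch of the idea in Python, where `run_check` and `apply_fix` are hypothetical stand-ins (e.g. for invoking `cargo check` and prompting the LLM); this is an illustration of the workflow, not aider's actual `/run` implementation:

```python
from typing import Callable

def fix_until_clean(
    run_check: Callable[[], str],      # runs the checker, returns error text ("" if clean)
    apply_fix: Callable[[str], None],  # asks the model to patch the code, given the errors
    max_iters: int = 5,
) -> bool:
    """Feed checker output back to the fixer until the check passes or we give up."""
    for _ in range(max_iters):
        errors = run_check()
        if not errors:
            return True    # build is clean, done
        apply_fix(errors)  # let the model attempt a repair
    return not run_check()  # one last look after the final fix attempt
```

In practice you'd wire `run_check` to a `subprocess` call and `apply_fix` to your LLM of choice; the point is just that the compiler's error text is the feedback signal.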
It's a cool workflow, and I'm already a terminal nerd, having used a tmux/cloud dev environment for years. Aider's ergonomics are perfect for me. I was previously using chatblade in the terminal to pipe files into it, and so on.
I'm looking forward to seeing how good aider gets "for free" as the models get better.
Anyway, I'll be using it, and watching for any news you might have. Thanks again!
The new large context size certainly opens up some possibilities. Most LLMs start to get distracted/confused if you just put everything and the kitchen sink into their context window. I expect that would be a concern if you just dump in 128k tokens worth of code, most of which isn't relevant to the task at hand.
I imagine we still want to focus GPT on the relevant code and use the larger context window to provide more "code context" along the lines of aider's repository map. Right now the repo-map is optimized to fit within 1k tokens by default. It seems likely that the new GPT-4 Turbo model should be given a much larger map than that.
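The budget idea can be illustrated with a toy packer: rank candidate map entries by relevance and greedily keep as many as fit. This is just a sketch, not aider's actual repo-map code; the ~4-characters-per-token estimate and the ranking scheme are assumptions for illustration:

```python
def fit_to_budget(entries: list[tuple[float, str]], max_tokens: int = 1024) -> str:
    """Greedily pack ranked repo-map entries (relevance, text) into a token budget.

    Token cost is roughly estimated as len(text) / 4; a real tool would use
    the model's own tokenizer instead.
    """
    used = 0
    kept = []
    for _, text in sorted(entries, key=lambda e: -e[0]):  # most relevant first
        cost = max(1, len(text) // 4)
        if used + cost > max_tokens:
            continue  # entry doesn't fit, try the next (smaller) one
        used += cost
        kept.append(text)
    return "\n".join(kept)
```

With a 128k-token window, the same logic applies: you'd just raise `max_tokens` so far more of the map survives the cut.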
I'll be experimenting and optimizing aider for this new model over the coming days and weeks.