Thanks for the feedback! That's a really good idea and we will definitely give that a shot!
If it provides more clarity, we are trying to tackle this in stages. Can the AI produce helpful and descriptive comments for
1. Basic + Short (less than 100 lines) files?
2. Basic + Long (more than 100 lines) files?
3. Complex / Vague + Long files?
After stage 3, which I believe is what you are referring to, we then hope to explore stage 4.
4. Can AI incorporate programmers' intentions into the comments?
We actually played around with this for a bit. One idea we had was to generate a PlantUML diagram showing how the different components of a file, or even a repository, connect with one another. However, given the current limits on GPT's context window, even with GPT-4, this quickly became impractical for large files. We would need an AI with a much larger context length.
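To give a feel for the approach, here is a rough sketch of one way to derive a PlantUML component diagram from a repo's import statements. This is an illustration, not our actual pipeline; the module names are hypothetical.

```python
# Sketch: emit a PlantUML component diagram from intra-repo Python imports.
# Assumes each value in `sources` parses as valid Python.
import ast

def plantuml_from_sources(sources: dict) -> str:
    """Emit PlantUML arrows for imports between modules in the repo."""
    modules = set(sources)
    lines = ["@startuml"]
    for name, code in sorted(sources.items()):
        for node in ast.walk(ast.parse(code)):
            targets = []
            if isinstance(node, ast.Import):
                targets = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module:
                targets = [node.module]
            for target in targets:
                if target in modules:  # only draw intra-repo edges
                    lines.append(f"[{name}] --> [{target}]")
    lines.append("@enduml")
    return "\n".join(lines)

# Hypothetical repository contents
repo = {
    "app": "import routes\nimport db",
    "routes": "from auth import login",
    "auth": "import db",
    "db": "",
}
print(plantuml_from_sources(repo))
```

This works fine at file scale, but feeding every file of a large repo through an LLM to get the *semantic* (not just import-level) connections is where the context limit bites.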
That said, perhaps if the entire repository is embedded into a vector database, a high-level overview would be possible? Just thinking aloud right now; happy to collaborate with anyone interested in exploring this further!
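A toy sketch of that idea: index repository files as vectors, then retrieve the most relevant ones for a high-level question. A real system would use a proper embedding model and vector store (e.g. FAISS or similar); the bag-of-words "embedding" below is a stand-in purely for illustration, and the repo contents are hypothetical.

```python
# Toy retrieval sketch: term-frequency vectors + cosine similarity.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Stand-in embedding: a bag-of-words term-frequency vector."""
    return Counter(re.findall(r"[a-z_]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def top_files(index: dict, query: str, k: int = 2) -> list:
    """Rank indexed files by similarity to the query."""
    q = embed(query)
    return sorted(index, key=lambda path: cosine(index[path], q), reverse=True)[:k]

# Hypothetical repository contents
repo = {
    "auth.py": "def login(user, password): verify credentials and create session",
    "db.py": "def connect(): open a database connection pool",
    "routes.py": "def register_routes(app): map urls to login and logout handlers",
}
index = {path: embed(source) for path, source in repo.items()}
print(top_files(index, "how does user login work?"))
```

The retrieved files (rather than the whole repo) would then be passed to the LLM, sidestepping the context-length problem for the overview step.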
It might be… I would say 95% of code doesn't need comments. But then you encounter that one complicated function and you go "wtf?" So I think I would prefer something more targeted, like a VS Code extension that lets you summarize any code snippet.
Interesting! We are personally not the most comfortable with editing things directly from the terminal, especially when GPT hallucinates, but we can definitely see how this would provide users with more flexibility. Thanks for sharing!
You can easily extend the PR workflow to local git: just check that it's run inside a git repo and error out if there are any unstaged changes. Add a --dangerous flag for non-git repo use cases where data might be lost. You can use the git API directly and commit to a new branch without editing the active user branch on disk.
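The dirty-repo check described above can be sketched in a few lines, assuming the `git` CLI is on PATH. `git status --porcelain` prints nothing when the working tree is clean, which makes it easier to parse than the human-readable `git status`. The `--dangerous` flag here is the hypothetical one proposed above, not an existing option of any tool.

```python
# Sketch: refuse to run inside a dirty (or absent) git repo.
import subprocess
import sys

def working_tree_status(porcelain_output: str) -> list:
    """Parse `git status --porcelain` lines (2-char status, space, path)."""
    return [line[3:] for line in porcelain_output.splitlines() if line.strip()]

def require_clean_repo(allow_dangerous: bool = False) -> None:
    try:
        out = subprocess.run(
            ["git", "status", "--porcelain"],
            capture_output=True, text=True, check=True,
        ).stdout
    except (subprocess.CalledProcessError, FileNotFoundError):
        if allow_dangerous:  # hypothetical --dangerous flag: proceed anyway
            return
        sys.exit("error: not inside a git repository (use --dangerous to override)")
    changed = working_tree_status(out)
    if changed:
        sys.exit(f"error: unstaged changes in {changed}; commit or stash first")
```

Committing GPT's edits to a fresh branch via the plumbing commands (or a library like GitPython) would then keep the user's checked-out branch untouched on disk.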
It notices if your local repo is dirty and asks if you'd like to commit before proceeding with the GPT chat. It will even provide a suggestion for the commit message.
You can run aider with --no-auto-commits if you don't want it to commit to the repo. This is similar to your suggested --dangerous flag.
I have considered various magic/automatic branching strategies. But I suspect they would be too confusing. And people probably have their own preferred git workflows. I feel like it's probably better to let folks explicitly manage their branches and PRs however they like.
I agree, sometimes you need to carefully review the changes that GPT suggests.
My aider tool tries to make this easy by leveraging git. While it automatically commits the edits from GPT, it also provides in-chat commands like /diff and /undo. These commands let you quickly check exactly what edits GPT made, and undo them if they're not correct.
Aider will notify GPT if you /undo its changes, and GPT will probably ask why and then try again with your concerns in mind.
To manage a longer chat that includes a sequence of changes, you can also use your preferred standard git workflows like branches, PRs, etc.
Thanks for the feedback! We haven't figured out how to hack together something like that just yet (or whether it's even something that should, or can, be solved by a tech product), but if we do, we'll definitely share it as an update! :)
Wow, thanks for the insightful discussion and feedback! This is definitely something that we will take into consideration and ideally, provide as an option.