Your Cody page doesn't answer a very obvious question: does the LLM run locally or is this going to send all my code to Sourcegraph? I assume that is a deliberate omission and the answer is the latter.
> Although most LLMs are trained on corpora that include several programming languages, we often observe differential performance across languages, especially languages like Rust that are not well represented in popular training datasets.
Interesting! My experience with Rust and Copilot is that Rust is actually a relatively strong language for Copilot, in many ways. I suspect that this might be a mix of different things:
- Is the training set code for Rust relatively high-quality?
- Do strongly-typed languages have a better chance of catching incorrect completions quickly? (Rough sketch of what I mean below.)
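For example (a contrived sketch, not taken from any real completion), Rust's type checker rejects a lot of almost-right code before it ever runs:

```rust
fn mean(xs: &[f64]) -> f64 {
    // A plausible-looking completion would be `xs.sum() / xs.len()`, which
    // rustc rejects outright: `sum` lives on iterators and usually needs an
    // explicit target type, and `len()` is a usize, not an f64. The version
    // that compiles has to spell those details out:
    xs.iter().sum::<f64>() / xs.len() as f64
}
```

So a wrong suggestion tends to surface immediately at compile time rather than at runtime.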
My very limited experience with GPT and Rust is that the code it generates rarely satisfies the borrow checker out of the box. I’ve had much better luck with garbage collected languages.
Interesting. I find that Copilot is quite good with the borrow checker, at least if it can "see" the signature of the function that it's calling. Sometimes it inserts an extra "&" where it really shouldn't. That's a no-op, clippy catches it, and I take it out.
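A minimal made-up example of that pattern: the extra `&` compiles because the compiler dereferences it right back, and clippy's `needless_borrow` lint points it out.

```rust
fn word_count(s: &str) -> usize {
    s.split_whitespace().count()
}

fn main() {
    let line: &str = "fearless concurrency";
    // A completion sometimes suggests `&line` even though `line` is already a
    // `&str`. The extra borrow is a no-op, and clippy flags it as
    // `needless_borrow`, so it's easy to take back out.
    let n = word_count(&line);
    println!("{n} words");
}
```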
But I tend to write my code in a style that easily passes the borrow checker 99% of the time. I keep mutation very local, and rely on a more "functional" architecture. So Copilot is naturally going to write new code in the same borrow-checker-friendly style, which may keep it out of trouble.
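To illustrate what I mean by that style (a toy example, not code from a real project): mutation stays inside one function, and everything crossing a function boundary is either a shared borrow or an owned value, so there's little for the borrow checker to object to.

```rust
// Callers pass a shared slice in and get a fresh owned Vec back; no
// long-lived &mut borrows for the checker (or a completion) to trip over.
fn normalize(names: &[String]) -> Vec<String> {
    names
        .iter()
        .map(|n| n.trim().to_lowercase())
        .filter(|n| !n.is_empty())
        .collect()
}
```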
When using ChatGPT, I've actually had more luck asking it to write Python, and then to translate the Python into Rust. ChatGPT seems to be better at algorithms in Python, and it's surprisingly good (in my personal experience) at inserting the fine details of Rust types and references when translating.
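A rough illustration of what that translation step looks like (both snippets made up for this comment): the Python draft doesn't care who owns what, and the Rust version is where the ownership and trait-bound details get filled in.

```rust
use std::collections::HashSet;
use std::hash::Hash;

// The Python draft was roughly:
//     def dedupe(items):
//         seen = set()
//         return [x for x in items if not (x in seen or seen.add(x))]
// The Rust translation has to commit to borrowing the input, cloning into
// the set, and the Eq + Hash + Clone bounds that Python never mentions.
fn dedupe<T: Eq + Hash + Clone>(items: &[T]) -> Vec<T> {
    let mut seen = HashSet::new();
    let mut out = Vec::new();
    for item in items {
        if seen.insert(item.clone()) {
            out.push(item.clone());
        }
    }
    out
}
```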
For 1, set `"cody.experimental.commitMessage": true` in VS Code and then click the Cody icon above the commit message box in VS Code's source control sidebar. Let me know how that works for you (here or at https://community.sourcegraph.com).
For 2, we're working on it! And eventually we'll support cross-repository refactors in Cody using our batch changes (https://sourcegraph.com/docs/batch-changes). What kind of refactors do you want to make?
1. I use JetBrains. I don't suppose experimental flags are supported there too?
2. A reasonable near-term goal could be to make changes in one file propagate across the code base, so it remains consistent. This requires analyzing the call graph, which is something Sourcegraph already knows how to do. The next step could be to execute multi-file refactors without specifying which files to modify; e.g., "add support for WebAuthn using our auth provider's SDK".
1. Ah, this commit message feature isn't available in JetBrains yet, unfortunately. Most things carry over since they both use the same underlying code, but this is more editor-specific.
2. Your "reasonable near-term goal" is right in line with what we are thinking. Basically, "auto-edits" is f(edit that meets deterministic trigger criteria) --> multi-file diff. Here are some examples we have in mind; any others you'd want?
- update a type signature --> ripple that change across all call sites/value literals (kinda like symbolic rename does today, but for more than just the name; rough sketch below)
- rename something --> update the docs
- change code with a comment nearby --> update the comment if it's now invalid
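To make the first one concrete, here's a rough sketch (made-up function, not a real Cody transformation): change one signature and every caller needs a matching, mechanical edit.

```rust
// The edit: parse_port used to return a bare u16 and now returns a Result.
fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
    s.parse()
}

fn main() {
    // Every call site has to ripple along with the signature change, e.g.:
    //   was: let port = parse_port("8080");
    let port = parse_port("8080").unwrap_or(8080);
    println!("listening on {port}");
}
```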
I’ve been using Cody and liking it, but one thing I noticed is that there's no feedback loop that even takes the IDE's diagnostics and incorporates them into the code it generates. It’ll happily generate code the IDE draws red squiggles under and just continue on with its day. Would be nice to see a tighter integration.
Yeah, that is needed to make everything more accurate. We are working on 3 related things to make Cody better understand the structure of your code and take feedback from the compiler (and other tools).
All are experimental and not enabled by default, but you can peek at them if you're interested.
This is true, but only if you have a GPU (or other accelerator) comparable in performance to the one backing the service, or at least close enough once you account for the benefits of running locally. That's an expensive proposition, because the hardware sits idle between completions and whenever you're not coding.
BTW, Cody is open source (Apache 2): https://github.com/sourcegraph/cody.