Does anyone actually use Copilot for their work? I can't imagine it's anywhere near as reliable as OpenAI claims. I'd imagine a user would spend more time fixing mistakes or re-trying with different queries than they'd actually save.
Just wanted to offer a less glowing counterpoint to the other claims. I've used Copilot a bit, and found that the automatic completion was frequently interrupting my train of thought, making it harder to concentrate ("intrusive thoughts as a service"). I preferred only triggering completions with an explicit keystroke, so I choose when to take a shortcut and ask to have code generated. I found it very helpful for generating boilerplate code, and debug logging I shouldn't have to think too hard about. It also sometimes gave me clever ideas, better than what I'd have thought of myself (like Rust code matching on a HashMap's Entry). Nonetheless I felt uneasy, because I noticed myself getting too "lazy", not thinking about what code I wanted written before asking for help.
In the end, aside from boilerplate, I spend most of my time in Qt Creator (which doesn't have a Copilot plugin) rather than VS Code, so I mostly stopped using Copilot anyway.
I often have to turn it off. It's very useful when I know what needs to be typed but am too lazy. It's also sometimes amazingly useful at coding stuff from a comment, and, like you say, it can teach you new idioms. It would be a great way of learning a new language from examples.
When I'm problem-solving or not sure how to code something, though, its constant suggestions are just noise, and that's when I turn it off.
I use it constantly and hope I never have to code without it (or something better) again. It does a good job writing the kind of boring code I don't want to write, and it generally seems to include fewer errors than I write in my first drafts of code. More than once I ignored its suggestion and wrote my own version, only to later realize its version was more correct and efficient. It does a good (sometimes incredible) job of even handling pretty specialized subject matter, and of using the context of other code and comments you've written to suggest exactly what you need next.
Maybe once you eliminate one level of "boring" code, that just means parts of the next-higher level of code become rote and boring. It reminds me a little of Richard Gabriel's reply to Guy Steele, when Steele said something like "Lisp doesn't need design patterns; it has macros." Gabriel said "That just moves the patterns up a level of abstraction."
(I probably remember that story all wrong. But I like it anyway!)
One of the problems with (classic) Lisp macros is that they aren't first class, i.e. you can't pass them around like you can with functions (or numbers, etc.).
I agree; I always use it now when I have to code in Python. I find the completions easy to ignore and easy to accept. For some things, like generating code with embedded SQL or SPARQL queries, I pause and test the queries independently.
I use it all the time, it's significantly improved my productivity.
It's a little like pair programming with an incredibly eager junior developer who has read a lot of the documentation of every popular API in the world. I need to review the code it produces, but it's very fast, and its suggestions are usually great.
It's annoying when I know exactly what I want to write, and most helpful when I'm unsure (either because I'm trying things out, or if I'm using a new API or a language I'm rusty at).
This seems like a Google problem. Google was really helpful when it had a tonne of organic content that it could systematically steal from, plagiarise, and rip off, to the point of devaluing the entire internet. The result with Google was absolute centralization of that content, and therefore a withering of the organic content.

I wonder if we're going to see the same thing with these AI tools. It's fine to learn from 100,000 developers all writing code. But if you steal their IP, rip off their designs, and plagiarise their work to build your tool, what you end up with is a tool that is basically just learning from itself. No one really writes basic code anymore, but as a result there's no source for the AI to learn from. In other words, it centralizes knowledge but doesn't advance knowledge, and in the process it devalues anyone else advancing knowledge.
I’ve been using it every day for a few months (for Typescript/React), and it still astounds me.
I can write a comment outlining what I want a function to do, and 90% of the time it will generate the code I need (or something very close that needs a couple of small tweaks).
Coincidentally, my Stack Overflow visits have decreased by approximately 90%.
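To give a flavour of that comment-to-function flow, here's a representative sketch (groupBy is a toy example I wrote myself, not verbatim Copilot output): you type the comment and the signature, and a body along these lines is the kind of thing that gets suggested.

    // Group an array of items by the key returned from a callback
    function groupBy<T>(items: T[], getKey: (item: T) => string): Record<string, T[]> {
      const groups: Record<string, T[]> = {};
      for (const item of items) {
        const key = getKey(item);
        if (!groups[key]) groups[key] = [];
        groups[key].push(item);
      }
      return groups;
    }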
I was going to write exactly this and I'm glad you beat me to it (and that someone else is experiencing this benefit). It's obviously not a replacement for a human writing a big program (yet...), but it does a damn fine job of giving you a shell of a function based on a comment.
PS - it is also AMAZING with CSS and saves so much time on easy things that I just haven't memorized. "Style the LI so there are no bullet points" and boom...'list-style: none;'
It’s also quite impressive how Copilot will learn from any patterns you’ve typed on previous lines — so when I’m using a certain color name in a variable, it knows that I likely want to use it in subsequent code.
Or, more concretely, when using tokens for colors (e.g. blue.50 for light blue, blue.900 for dark blue) it can figure out that I probably want my background to be an x.50 colour and my text to be, say, x.700. So cool!
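In code, that looks something like this (a hypothetical snippet with Chakra-style token names; the variable names are mine):

    // After the first line, the x.50 background / x.700 text pairing
    // tends to get suggested for the next one (hypothetical example):
    const infoStyle  = { bg: 'blue.50', color: 'blue.700' };
    const errorStyle = { bg: 'red.50',  color: 'red.700' };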
At first I thought Copilot would be pretty useless, until I actually tried it. It turns out that a lot of code is boilerplate and the same simple patterns, even with abstraction. Copilot is not particularly genius, but it fills in simple patterns (e.g. do the same for the Y-axis that you did for the X-axis) and autocompletes typical utility functions (e.g. adding two 2D positions, shuffling an array, a setTimeout promise, etc., which I have to write myself because they are not in the JavaScript standard library). These may seem like odd scenarios, but there are actually a lot of them.
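For the curious, here are my own TypeScript versions of a few of those (not Copilot's verbatim output):

    // Add two 2D positions component-wise
    const addVec2 = (a: { x: number; y: number }, b: { x: number; y: number }) =>
      ({ x: a.x + b.x, y: a.y + b.y });

    // Fisher-Yates shuffle (returns a shuffled copy)
    function shuffle<T>(items: T[]): T[] {
      const copy = [...items];
      for (let i = copy.length - 1; i > 0; i--) {
        const j = Math.floor(Math.random() * (i + 1));
        [copy[i], copy[j]] = [copy[j], copy[i]];
      }
      return copy;
    }

    // Promise wrapper around setTimeout
    const sleep = (ms: number) => new Promise<void>(resolve => setTimeout(resolve, ms));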
Big supporter of Copilot. I am still amazed at how good it is, and I feel it's getting better and better. So much boilerplate code gone. Also, it really gives you a boost in confidence when the A.I. writes the same code you're thinking of. I feel so lucky to be seeing these amazing developments in A.I. and V.R. recently.
Personally I'm using Codex, not Copilot, but it's a similar engine.
It's really good for boilerplate. Things like TDD tests where I'm just modifying a few parameters. You can get it to write functions like "parse this DateTime object into a format like Tuesday, 15 May 2020".
It's a useful lookup tool too. Often I just want to extract a value from a List and would spend 15 mins looking up the docs or sifting through Stack Overflow. Codex is faster and more accurate.
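That date-formatting one would otherwise be a docs trip for me; in TypeScript it comes out to something like this (my own sketch using Intl, not Codex's verbatim output):

    // Format a Date as e.g. "Tuesday, 12 May 2020"
    function formatLongDate(date: Date): string {
      return new Intl.DateTimeFormat('en-GB', {
        weekday: 'long',
        day: 'numeric',
        month: 'long',
        year: 'numeric',
      }).format(date);
    }

    formatLongDate(new Date(2020, 4, 12)); // "Tuesday, 12 May 2020"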
With GPT-3 it's garbage in, garbage out. You have to invest a few days in learning the prompts that work.
Yeah, just copying to a preset on the playground. Whenever I'd normally reach for Stack Overflow or get stuck on documentation, I open up Codex instead.
It's possible to set up an API for it, but it's not yet at the point where it's worth doing. Though now that I think about it, I could probably get Codex to write that script.
I use Copilot, and it's much more useful than you'd expect. It's really helpful in places where you would normally need to record a small macro; Copilot can infer those completions easily.
It prompted the (joke) thought that perhaps it is making me less productive because of how often I end up sitting back and marveling at how amazing it is. I really can’t believe how good it is.
I've tried Copilot, Tabnine, and a couple of the others out there.
I find them to be an annoying Clippy-like companion that interrupts my train of thought and introduces a whole separate class of programming bugs: "autocomplete errors" which are like copy & paste errors, but written by another programmer.
I like the auto-complete functionality built in to JetBrains and Visual Studio for hinting at variable names, function parameters, classes, imports, and so forth. But the boilerplate code that "assistant AI programmers" provide is not worth the effort at this time and does not live up to the hype. I think they help less experienced developers out, but the kind of code I am writing won't usually be found in Copilot or Tabnine. I could see the appeal if all I was doing was churning out boilerplate CRUD app code all day long, which, honestly, I would outsource to Craigslist for thirty bucks an hour. Or just hand off to my assistant.
It's quick to scan and ignore things that aren't right, and it's either completely right or close enough that it definitely feels like a timesaver.
The best parts are where it's doing something long-winded but fairly straightforward (e.g. assigning variables). But it has moments of shocking ability with more complex things.
I've used it briefly in someone else's IDE (who swears by it) and it blew me away. It pretty much removed ever having to google syntax or snippets from SO in a language I wasn't totally familiar with (Python).
One thing that it’s really good at is writing boilerplate-y code. For example, web scrapers. It can even read the function’s name and deduce some proper variable names, or use variable names to deduce whether I want a list of elements or one element. Not 100% correct, of course, but good enough if you treat it like an advanced snippet manager.
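As a sketch of that name-driven inference (hypothetical functions; the names and selectors are mine, not anything Copilot produced):

    // A plural name like "getArticleLinks" nudges it toward returning a list;
    // a singular "getPageTitle" gets a single element instead.
    function getArticleLinks(doc: Document): string[] {
      return Array.from(doc.querySelectorAll<HTMLAnchorElement>('article a'))
        .map(a => a.href);
    }

    function getPageTitle(doc: Document): string {
      return doc.querySelector('h1')?.textContent?.trim() ?? '';
    }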
It’s too good. I found myself getting very lazy about coming up with solutions on my own; it felt like my problem-solving capabilities declined.
IMO you should install it, and then disable it. Only use it sparingly, when time is very limited or to write tests/docs.
How polyglot is Copilot? I see plenty of positive results in the replies for (Java|Type)Script and Python, but how does it fare with a Lisp-like language or Prolog (for examples)?