Yeah, it makes mistakes. Sometimes it shows you, e.g., the most common way to do something, even if that way has a bug in it.
Yes, sometimes it writes a complete blunder.
And yes again, sometimes there are very subtle logical mistakes in the code it proposes.
But overall? It's been *great*! Definitely worth the 10 bucks a month (especially with a developer salary). :insert shut up and take my money gif:
It's excellent for quickly writing slightly repetitive test cases; it's great as autocomplete on steroids that completes entire lines and fills in all the arguments, instead of just a single identifier; it's great for quickly writing nice contextual error messages (especially useful for Go developers and their constant errors.Wrap calls — Copilot is really good at writing meaningful error messages there); and it's also great for technical documentation, as it's able to autocomplete markdown (and it does it surprisingly well).
Overall, I definitely wouldn't want to go back to writing code without it. It just takes care of most of the mundane and obvious code for you, so you can take care of the interesting bits. It's like having the stereotypical "intern" as an associate built-in to your editor.
And sometimes, fairly rarely, but it happens, it's just surprising how good of a suggestion it can make.
It's also ridiculously flexible. When I start writing graphs in ASCII (cause I'm just quickly writing something down in a scratch file) it'll actually understand what I'm doing and start autocompleting textual nodes in that ASCII graph.
As a polyglot who works in 4-5 different languages every 3-6 months, it's been very valuable.
I forget a lot of things, simple dumb stuff like type conversions or the spelling of specific keywords. Copilot takes care of 99% of that so I can focus on my higher-level spec.
If anything, sometimes it's too aggressive. I start typing a word and it's already building up the rest of the application in a different direction...
Spaced repetition doesn't help with the use case of needing to recall lots of little details spread across a wide number of knowledge domains. You'd spend more time trying to remember everything than on actually doing useful work.
Also, that little "building confidence in yourself" rider that you added suggests that you think the OP doesn't have confidence in themselves. Careful about those assumptions; in this case it comes across as a little patronizing.
> Also, that little "building confidence in yourself" rider that you added suggests that you think the OP doesn't have confidence in themselves.
They certainly do have confidence, but it doesn't hurt to build on it. I don't think confidence is a discrete 0-or-1 variable. It's a continuous variable, from 0 to infinity.
By the way, I asked a question; I didn't make a statement. I welcome the argument over whether it's worth the time to memorize the stuff Copilot can autocomplete. That's something to measure and debate.
> Careful about those assumptions; in this case it comes across as a little patronizing.
Thanks for the heads up. This is important. But I didn't make any assumption. It seems maybe you made an assumption about me making an assumption?
If that's the case, don't worry as I did not feel patronized.
No. Spaced repetition is a good tool for learning vocabulary.
It's not a silver bullet / holy grail / MacGuffin / etc.
There's a community of people obsessed with spaced repetition. None of them seem to have accomplished spectacular feats of learning. There's a good reason for that.
(The flip side, however, is that many people who have accomplished spectacular feats of learning DO often use spaced repetition, but among a broader repertoire).
I'll give an example: Landmines are a good tool for slowing an enemy army. However, if your military consists of *just* landmines, it won't be very effective. That doesn't make landmines a bad tool. Indeed, even a super-weapon, like the first jet fighter, won't win a war if it's your *only* tool.
Learning -- even a language -- is a complex process, and you need many tools. Spaced repetition is awesome for factoids. If you want to learn a language, you need to memorize vocabulary. SR is great for that. If you add 5-15 minutes of spaced repetition to a good language program, it will help a lot. If that's where you spend a majority of your time, you'll learn very little. However, SR won't help you practice a broad range of skills around listening, speaking, understanding communication styles, or quite a few other things.
Ditto with physics and math. If you know equations, it accelerates everything else. However, the bulk of the knowledge isn't factual or procedural; it's conceptual. Simply memorizing formulas won't help you. On the other hand, in most cases, once you learn conceptual knowledge in physics, you never forget it.
"Coding vocabulary" isn't in my top 50 problems with junior developers. Naming variables, algorithms, systems design, etc. are. Most of those don't align with SR. I'd take a programmer who spends 8 hours coding over one who spends 8 hours memorizing library calls in an SR system.
Note 2:
The spacing effect is /somewhat/ broadly applicable (but far from universal), but spaced repetition specifically is only helpful for factoid-style knowledge. You can look over the different classifications of knowledge, skills, and abilities (factual, conceptual, procedural, declarative, etc.).
I find it’s less about actually forgetting and more about all the mundane details I usually bounce around a codebase to find. What’s the third argument to that function again? Oh, copilot autocompleted it correctly.
It’s not always right, but it’s right enough that it’s a big timesaver.
I agree in concept; in practice I find the relative frequency of "I can't quite remember that thing" is way higher than that of learning new things where I don't know if Copilot is right.
> what will be the consequences?
for me, usually a failed build or unit test :P low stakes stuff
> And sometimes, fairly rarely, but it happens, it's just surprising how good of a suggestion it can make.
I've had this experience too. Usually it's meh, but at one point it wrote an ENTIRE function by itself and it was correct. IT WAS CORRECT! And it wasn't some dumb boilerplate initialization either, it was actual logic with some loops. The context awareness with it is off the charts sometimes.
Regardless I find that while it's good for the generic python stuff I do in my free time, for the stuff I'm actually paid for? Basically useless since it's too niche. So not exactly worth the investment for me.
I've found that even in the specialist domains I'm working in, it does pretty well. The caveats being that you need to guide it quite a bit, but once a workflow starts to come together then it really starts to hum.
Copilot has been amazing for me too. It's gotten to the point where I want similar smart autocomplete features in other software, such as spreadsheets or doing music in my DAW. I think those will come too eventually.
I turned off auto-suggest and that made a huge difference. Now I'll use it when I know I'm doing something repetitive that it'll get easily, or if I'm not 100% sure what I want to do and I'm curious what it suggests. This way I get the help without having it interrupt my thoughts with its suggestions.
I have to toggle it on and off. I find that if I'm thinking about what to write it is absurdly distracting to see a suggestion, which is usually wrong, because the suggestion steals my focus. On the other hand when I don't really need to think that much Copilot often gives valuable suggestions that I accept.
I really like it too, but I can't help but feel like all the same people who panicked about Microsoft acquiring GitHub are suddenly quiet about the fact that Microsoft has found the ultimate way to profit off of open source.
There's a ton of developer effort that went into Copilot and those devs should be paid fairly. But the majority of what fuels Copilot is the millions of lines of open source code submitted every day.
I think I'd feel a lot better about it if they committed a good chunk of that money back into the same open source communities they depend on. Otherwise it's a parasitic (or at least not fully consensual) relationship.
Microsoft gives free copies of Office to schools. Why do you think it does that? MS benefits from people being locked into its service and suite. It provides premium subscriptions for additional features.
Hosting code "for free" is part of its business model. It's not a way of "giving back"
Oh, is Bing still a thing? Honest question, haven't heard it mentioned for years. Went from Altavista to Google (I believe, it's a long time ago...) and now to DDG a couple of years ago. Never considered Bing directly although I believe DDG results come or came from Bing.
No. They're not. They're advertising that they are.
They are providing it to a very small set of high-profile OSS maintainers some opaque algorithm picked out. Having high-profile adopters is just good business.
Didn't OSS maintainers already voluntarily approve such uses when they published their work under an OSS license?
One fundamental aspect of being open source is not limiting the purposes of use. If we now say that "code-generation AI training" is not allowed without prior approval (in addition to the license itself), then it's not open source anymore...
I approved some of my code for being reused under the terms of the AGPL. Co-pilot is very welcome to scan it and generate derivative AGPL code.
If I write AGPL code, and co-pilot scans it and makes a very similar program to it for a FAANG, who then proceeds to compete with my open-source tool by using the creative ideas generated therein, but with a proprietary tool, that's not very fair. That's why I chose the license I did.
FAANG is more than welcome (indeed, encouraged) to use my code for any purpose permitted under the license. That includes everything except making it proprietary.
I've tried running copilot with the starting lines of my code. It generated code with identical creative ideas. It was the equivalent of taking Star Trek, and generating a new movie with the same plot line, but with names changed. That's not legal.
My code was specific enough that this wasn't just chance or other similar code. I work in a pretty narrow domain.
I did use copilot for coding myself, and a lot of what it generated was unique. But it is also a good paraphrasing tool. Running a movie script backwards and forwards through Google Translate to get different phrasing, and then swapping out new names, does not a new movie make. Ditto here.
Perhaps they chose maintainers from OSS projects they scanned?
I'm not defending Microsoft's market tactics, for obvious reasons, but we do have to consider that anyone can publish whatever insignificant code as OSS and become an "OSS maintainer" out of nothing.
They have to draw the line somewhere. Nowhere they draw will make everyone happy.
One example of Copilot saving time - the other day I was trying to remember how to access a value from a map in Go. This doesn't take much time - alt tab to browser, ctrl + t for a new tab, type in "golang get value from map", click on a stackoverflow result, scroll down, glance at the result, alt tab back to IDE, and do it. With Copilot, Copilot knew what I was trying to do and suggested the right code to me and I accepted the suggestion by pressing tab.
But! I think the savings are even bigger because there is no context switch. If I have my browser open I might find myself going to hackernews, checking my email, looking at my stackoverflow notifications, browsing twitter - whatever. Copilot is not only faster, it keeps my focus on the code without giving me a chance to get distracted. In some sense, for this example, Copilot saved me 5-10 seconds by not needing to Google something. In another sense, it might have saved me an hour because I didn't decide to just check something on twitter while I had my browser open.
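For reference, the Go map lookup described above is the comma-ok idiom — exactly the kind of two-value form that is easy to half-remember (a minimal sketch; the map contents are made up):

```go
package main

import "fmt"

func main() {
	ages := map[string]int{"ada": 36}

	// Two-value form: ok reports whether the key was present, which
	// distinguishes a missing key from a stored zero value.
	age, ok := ages["ada"]
	fmt.Println(age, ok) // 36 true

	age, ok = ages["bob"]
	fmt.Println(age, ok) // 0 false
}
```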
I don’t think the average brain works like this. If you know you have a piece of code to deliver or are in flow, I don’t think you’d willingly start browsing HN just because you went to SO to c&p some boilerplate code.
I find that if my brain is willing to be distracted, it’s made some sort of calculation saying that the cost of being distracted isn’t going to have a significant bearing on deliverables…and I’m pretty sure I’m no better or worse at estimations than anyone else.
I get the desire to ask that question, but it also feels like the main thing Copilot saves me is energy, which can’t be quantified in a very useful way. You’ll just have to take our words for it.
Like, writing a really boring unit test might only take 60 seconds, but if Copilot can do it for me (even if I have to quickly scan it for correctness) that saves me… well something other than 60 seconds. It sure feels like a big deal.
Why write that boring unit test at all? If it is not only boring but also automatically generated, then what value does it add to the code? I would argue it actually _decreases_ the value, because you now have that much more code to maintain and understand. In my experience, 90% of the unit tests people write are garbage.
This take is too reductionist. Every website ever has a boring unit test called “Is it up?” where you check if the site is still working after a deployment.
Boring and repetitive but of infinite value if you can detect early that your deployment broke something.
If a boring/boilerplate unit test like this can deliver value, other b/b unit tests are probably going to have similar impacts, and hence saying they all “decrease” value is reductionist.
Goal of library/language/framework designers: limit boilerplate and unnecessary code
Goal of tool/IDE designers: make it easy to not spend time on boilerplate and unnecessary code.
Both sides have built-in limitations that will keep them from completely solving the problem. Terse, highly DRY code tends to also be highly abstracted and hard to read, with a lot of implicit behavior. On the other side, large amounts of generated code lead to tool lock-in and cruft accumulation.
No product starts with an elaborate ci/cd pipeline and automated test suite. Many evolve into needing such tooling after having had an incident or two when a deployment broke the site…and the response invariably is that “we should have written a suite of really basic tests which goes to the home page and checks if the hero image is visible.”
If co-pilot’s autogenerated test cases can help prevent this head smacking, it will have proved that basic/boilerplate code was valuable.
The site checker included in the tooling is just a more mature version of the boilerplate unit test co-pilot gives us.
Btw, I have no skin in the game -never used co-pilot…just surprised that HN commenters can be so dismissive of wanting to get the basics right - like having some test coverage.
My own experience with it is rather limited. My workplace doesn't allow the use of it (due to confidentiality), so I've only used it for school work.
It wasn't much help in designing/implementing classes in Java or .NET, but when it came to implementing unit tests it practically wrote everything after I named the class and designated it as a test. It was able to extract all the different methods from the classes being implemented, and create appropriate unit tests based on that.
Now, it was school homework, so not representative of a complex business application, but if it can just handle the basics/boilerplate, it would be worth it.
Assuming a (European) work week of 37.5 hours, $10 comes down to about $0.06 per working hour, and if it can save me just 5 minutes of work every day it will be worth it.
In some tasks, like refactoring, I would easily say that 75% of my time is saved by Copilot.
For example, I just had to convert some OCaml code to Rust. I wrote the first few conversions, and then I would just paste the OCaml code in a comment, and it would auto-complete the equivalent code in Rust. I just had to check that it was correct, which it was most of the time, rinse and repeat and wow. One would have to be blind to not find copilot impressive, really, it's the future.