Re: I Don't Use Copilot (vivekhaldar.com)
55 points by gandalfgeek on June 9, 2023 | 111 comments



I signed up for a Copilot trial a few weeks ago and so far my feelings on the tool are mixed.

When a code suggestion is correct it’s nice but it honestly feels like a very rare occurrence so far.

What often ends up happening is I get suggested a snippet that is close to, but not exactly, what I need. So if I accept the suggestion, I end up doing a fair amount of editing anyway to correct it.

Even worse, occasionally I will accept a suggestion and only later notice a subtle mistake.

The place I’ve seen the biggest benefit is with writing comments or user facing text copy.

I see people on Twitter and Hacker News talking about how much their productivity is being boosted by Copilot and ChatGPT…

But I’m just left scratching my head, because I’m not sure I’m seeing ANY boost from either of these tools so far. I feel like I’m missing something?


> Even worse, occasionally I will accept a suggestion and only later notice a subtle mistake.

Same, I got burned one time by accepting a suggestion I thought I understood.

I think the general problem is that Copilot shifts you from writing code to reading code, and sometimes reading is harder. You can't really take a "yeah that seems right" attitude because it's just throwing guesses at you and seeing what sticks. The safe way to use it is as a jumping off point for writing your own code.


Do you use it for writing tests? It’s really helpful there. And in my experience it’s bad with non-standard algo heavy code. What language and domain are you using it for?

IMO it’s a nice 10-15% speed boost for web dev, and much more than that for unit tests. I expect you could measure a massive jump in test coverage among engineers who use Copilot, because it takes work that most people hate and speeds it up significantly.
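To make that concrete, here's a hedged sketch of the kind of boilerplate it's good at (slugify and the file layout are hypothetical; Jest is assumed): you write the first case, and it tends to offer the symmetric ones.

    // slugify.test.js (hypothetical module; Jest assumed)
    const { slugify } = require('./slugify')

    // you type the first case yourself...
    test('lowercases and hyphenates', () => {
      expect(slugify('Hello World')).toBe('hello-world')
    })

    // ...and Copilot typically offers the symmetric follow-ups:
    test('strips punctuation', () => {
      expect(slugify('Hello, World!')).toBe('hello-world')
    })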


I’ve heard many people say that but I have yet to try it with that particular task. I will give it a shot.


This has been my experience as well. I cannot understand how people say it has given them order-of-magnitude speed improvements in shipping.

Most code on GitHub is, almost by definition, of average quality, which is usually a few levels below the quality of an expert programmer's. So it's not surprising that the output of code LLMs is of poor-to-average quality. For any larger block of code it outputs, a non-trivial amount of editing is required if you want to clear a higher quality bar.


Around half of developers write below average code, by definition. So it isn't surprising that raising the quality to average _for them_, and doing it faster, would be a productivity boon.

I'm a crappy marketing copy writer. A professional writer could do much better. But with ChatGPT I too can write hack-quality marketing copy that roughly conveys a message, and hand it to others to put in various marketing outlets.


> Around half of developers write below average code, by definition.

By definition, half the code out there is of below median quality. Whether or not half the code out there is of quality below the arithmetic mean depends on assumptions about the distribution of code quality. I would suspect that much more than half the code out there is "below average", so to speak.


Yep, there are assumptions all around. On the developer side, I also suspect there is a huge group of casual "low-code" type programmers out there who could be assisted by Copilot-type tools. My main point was that the argument that the code produced by LLM-type systems is low-quality is not really a barrier to adoption, and _might_ actually _raise_ the average quality of systems built in the wild. I'm less concerned with the actual distributions and more noting that a different relative value point exists.


All pedantry aside, even if I'm skeptical and concerned at the moment, your hypothesis could well end up being true in the mid to long term.


> Even worse, occasionally I will accept a suggestion and only later notice a subtle mistake.

This is what hits me hard; when I write it myself, these things are clearer.

> I see people on Twitter and Hacker News talking about how much their productivity is being boosted by Copilot and ChatGPT…

I use ChatGPT quite a lot, but not really for coding. It does increase my productivity, but only because it acts as a better fuzzy search/query machine than Google. Often I use the two together (which leads to using Bard): I can remember the concept but not the name, so GPT/Bard tells me the name, and then I can google from there. With this type of searching, hallucinations are not a big issue; they are only minor interruptions, far outweighed by the amount of searching I'd otherwise need to do just to find the thing in the first place.

I should mention that I'm a researcher, so I'm often looking for new tools, for ways things have been done before, and frequently probing for things just outside my area of knowledge. But I should also say that I won't trust them to teach me a concept, because when I've tested them on things I know, there were serious mistakes. Realistically, the accuracy depends on how common the information is. If it's something many people write about in a general field, it'll be accurate. If it's a topic, even a popular one, of a niche subfield, good luck.


Exactly the same for me. I will often get variable names autocompleted, but 90% of the time they're wrong, even when the right variables are declared in the same function or file I'm working in.

More than once it has shoved Python code into my Ruby file. Bash or YAML files will often get comments added, but in the wrong style for the language (which, since I can never freakin' remember the comment style for every language, actually makes me less productive, since I have to run the code, watch it crash, and still go look up the language spec).

It keeps trying to autocomplete large hash manipulations, and they look right but are subtly wrong in the syntax (missing commas or something), and then it's the same compile, crash, then stare at the code and have to make the same damn change across 50 lines of a hash, when I should have just copied and pasted in the first place.

So far: net negative on my productivity. Plus, it sucks on shitty wifi like a plane or most hotels.


From what I understand (and I might be totally wrong), Copilot is useless for dealing with codebases where you often need to call internal functions, which makes it pretty much useless for anything but simple projects. Also, I just checked the FAQ for Copilot, and it says that users only accept 26% of suggestions, so roughly 3/4 of suggestions are garbage.


> Also, I just checked the FAQ for Copilot, and it says that users only accept 26% of suggestions, so roughly 3/4 of suggestions are garbage.

Without knowing the specific statistic being reported, I’m not sure you can reach that conclusion. Copilot's plug-ins can be configured to suggest as you type, just like any other autocomplete. It may just be that the user has paused and doesn’t need the suggestion being offered.

And it’s not like people use 100% of the suggestions their auto complete tools provide, but that doesn’t make those suggestions garbage.


Also, ~25% seems pretty good! If I could cut out writing 25% of the code I write per day, I'd consider that good value, even if it cost me reviewing the 3/4 of other code snippets that were stinkers. Some of the latter might also help me form in my head how the manually written code should be done anyway.

I haven't tried Copilot yet, but ChatGPT code generation is probably useful to me 1/3 of the time as a Stack Overflow/reference-doc replacement: "I need a thing foo to perform bar in context baz", which I'll then write myself using the pattern. But the lookup was way faster than googling through tons of results.


    From what I understand (and I might be totally wrong), 
    Copilot is useless for dealing with codebases where 
    you often need to call internal functions
Perhaps this is the next big frontier or opportunity.

LLMs have finally gotten to the point where they are sometimes useful. And local project-level code intelligence (intellisense, static analysis, etc) has been pretty good for a while.

The first team to marry these two things is going to really have something great. A true force multiplier.

It certainly doesn't seem like it will be easy, but it also certainly doesn't seem to be impossible.


It definitely calls other functions in the same file. So some internal functions.

It's quite excellent even for large projects. It still saves a lot of mundane coding.


I personally don’t use Copilot. Sometimes I ask ChatGPT for help on small, atomic functions. It has saved me lots of time, and the fact that I have to explicitly copy+paste makes me more aware of the code I’m potentially copying over.


A good analogy is gps navigation. Imagine if instead of telling it where you want to go and having it generate a route with turn by turn directions, it reacted to every press of the accelerator or brake pedal and every turn of the steering wheel to guess what you wanted to do based on what others who braked or accelerated or changed lanes at the same point in the road did. It might be useful if you were completely new to the city and trying to go to the airport. It would guess that you're on this stretch of freeway to go to the airport and would JIT plot that you take exit 7 and get into the airport traffic. But if you're actually trying to go to a different location and just driving past the airport you'll have to reject that suggestion and then it would say, oh, you must be going to the arena, take exit 8!! And you'll have to reject that too. It's annoying as shit.

Oh, but it will learn that you're driving to work and will be able to prioritize that above the airport and the arena? Oh yay, so when I drive to someplace I drive to 5 times a week it can give me directions I don't need, but when I drive somewhere I haven't been before it can only tell me how to go to a bunch of places other people went on the way.

It's practically useless if your knowledge of your codebase and language is better than 0.

It's probably useful for languages and crap you don't use a lot, or where you'd end up copy-pasting 90% of it anyway, like crappy webdev. (Good webdev is a different story.)

I'd rather use it to explain what the fuck I was thinking when I wrote this shit 6 months ago, or what Nate was thinking when he seemingly fucked it all up (or maybe he fixed it). And maybe, maybe, playing the analogy back in reverse, it will get to where I tell it I'm trying to go somewhere specific and it gives me turn-by-turn directions, instead of arbitrarily suggesting shit that I drive past. Like I could say I'm trying to extend this to accept triangles and not just squares, and it would say, oh, use the visitor pattern and here are the places to change it, or something. But right now it would just flip its shit and start suggesting triangles, polygons, stars, circles or something like that. Do you want to reverse a linked list of triangles? No, clippy. I know better than you how to change this shitty code. Thanks anyway.


Same feeling. I use GPT sometimes to get a reminder of a usage of a certain command or part of what I am doing, but for anything beyond that it usually wastes my time.


We are still in an exploration phase where the use cases are in flux. If you find a set of use cases that work for you, these tools will be a boon; if not, they're worthless. They aren't magic black boxes that do everything. On the one hand, that can be discouraging when you don't find immediate value. But if you keep trying different things, you'll probably start to see some.

Some things I've done this week:

- I was asked to write a white paper on a topic I had previously built a slide presentation and given a talk on. I exported the outline, grabbed a few links to blog posts on the broader topic, and asked ChatGPT (with link-crawling) to write the white paper. I didn't like the output at all. So I asked ChatGPT for several options for the outline of the white paper, and several options for the narrative. I then selected an overall narrative arc for the paper, and the outline (with a few manual tweaks), and asked it to rewrite the paper with a revised prompt. It was pretty good, and then I asked it to change the style and tone. I then asked it to critique itself and suggest several improvements, several of which I asked it to take into account when revising the paper. Then I asked for several variations of the document for different audiences. I picked out a few parts I liked from each variant and ran another pass asking for improvements, including suggestions for visual aids. This all took only about 10 minutes, after which I shared the first draft with a colleague for review. That would have taken me a whole afternoon otherwise (and likely longer, since I procrastinate when starting from a blank page).

- I wanted a Chrome plugin to use the archive.ph API for gated papers, without needing to trust that sketchy extensions might steal my web history. I asked ChatGPT to generate the code. Upon review I made a couple of tweaks and had a private extension in about 3 minutes.

- I wanted to produce a menu for guests staying with us this weekend. There were several dietary restrictions and preferences that needed to be accounted for. I entered these and requested a variety of options. I selected some, then asked for recipes and a suggested shopping list. I then asked for wine and beer pairings and got some good generic responses on varieties, but the brand-specific suggestions were bad. Time: 2 minutes.

- I wanted some pro/con brainstorming on a new technology decision. I asked it to search for expert blogs and summarize their arguments. I also got it to generate hello-world-level examples for each framework to compare and contrast, then had it generate code for a more complex use case using each one. I was then able to make a much more informed decision and start evaluating a smaller set of options more deeply: 2 from the dozen+ that existed. Time: 20 minutes.

All of these follow a pattern of iteratively working with the tool to semi-automate what I want. For now it sort of hits a sweet spot: either summarizing research I would normally google, or low-key generation of documents or code that doesn't need a ton of context beyond what I researched or have on hand, and can be self-contained.


Yeah that makes sense and it's why I'm forcing myself to try and use these tools. I feel like it can be a boon if I find the right use case in my workflow.


Is nearly everyone using Copilot at this point?

Maybe I'm one of the holdouts who refuses to use it, not out of IP concerns but because I'd hate for it to cause my skills to rot or for me to become too dependent on it. I do sometimes have GPT-4 write code for me, but somehow I feel like it's different reading and reviewing its code than to punch a few characters into my IDE and have code magically appear.


For me, Copilot has had the opposite effect from skill rot: it's pretty lousy at producing big chunks of code so I tend to use it for small chunks where I have a pretty good sense for what I want to do. However, sometimes it surprises me by offering something that I wouldn't have chosen that turns out to be far better, and I learn new things from those suggestions.

For example, I gave Copilot something like this:

    const textFile = longfile
    const numLines =
I expected it to give me a regular expression that would count the number of newline characters, but instead it split the string on \n and asked for the length of that array. These are decent-sized files (300ish lines) and this code is running in a hot loop over thousands of them, but I decided to give it a chance and benchmarked both choices.

It turns out that even with all of the extra allocations, the regular expression solution was 4-5x slower than splitting and getting the length. Not something I would have guessed, but now I know.
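For reference, the two candidates looked roughly like this (a sketch; variable names are illustrative):

    // what I expected: count newline matches with a regex
    const byRegex = (textFile.match(/\n/g) || []).length + 1

    // what Copilot suggested: split on \n and take the array length
    const bySplit = textFile.split('\n').length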


If you care about performance, those are both insane ways to count newlines. Try a loop of indexOf() and watch it absolutely spank the splitting method. That was the first thing that popped into my head, and in testing it appears to be 5x the speed of splitting (and 0x the memory consumption).
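Roughly this (a sketch, with the same illustrative textFile as above):

    // let the native indexOf scan do the work; JavaScript only
    // runs once per newline found, not once per character
    let numLines = 1
    let pos = -1
    while ((pos = textFile.indexOf('\n', pos + 1)) !== -1) numLines++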


Haha, clearly this was a bad example, because you're right, both are bad. I'm recently coming back to JavaScript after years of Kotlin, and I've clearly gotten too comfortable with standard libraries that have decent one-liner functions for this kind of thing.

I'm patching in a loop now (though I feel like a regular for loop is more readable than a loop with indexOf).


indexOf() is faster than the naive character-by-character loop because (presumably) it does all the work outside of JavaScript. You only run JavaScript code for each newline that is found rather than for every character. indexOf() may even use SIMD instructions for speed.


Ah, good point.

In my case, after benchmarking inside my actual app on real data, all three perform equally well (including the String.split version). Dropping the Regex made the difference between this being the heavy part of the process to it being a rounding error, and there isn't much additional value to be gained. With that in mind, I'd rather go for legibility over speed.

I'm going to go with the for loop, though, because the extra memory usage could come back to bite me later.


I think both solutions are pretty bad vs. just counting the number of newlines directly.


Haha, yep, an iterator would definitely be faster than either!

I'm coming from Kotlin where I expect there to be a standard library function for simple tasks like this, but I clearly need to get back in the habit of writing loops by hand.


You'll never achieve takeoff if you keep thinking like that.


I’ve dabbled but never actually committed to it, and I currently don’t have an active subscription or intend to try it out again anytime soon.

I found it to just be… not how I wanted to work. I’m not sure it ever made me faster? Like, it typed things for me, so there’s some savings there, but because I wasn’t precisely sure what all of this magic code did before I read through it and renamed everything, it just felt like I was the copilot, reviewing someone else’s work.

It sure is cool but not how I want to work.


I use it every day, 100 times a day, but it is just finishing the end of my line. Sometimes it comes in clutch when, for example, I need a hex code for "Green". Based on this, I don't have any concerns about skill rot. I think we are a long way away from it being able to write entire components or files for me; the amount of time I'd spend reviewing it and finding hidden bugs later on negates the benefits.


Let me give you an example.

I need to write a cron expression maybe once every couple of years, so I don't have working memory of it. To write one, I need to:

1. Google "cron run something periodically".

2. Filter out spam such as geeksforgeeks and irrelevant Stack Overflow threads.

3. Find a good enough approximation.

4. If that didn't work, go to the cron man page and start reading it. The example I look for is the 3rd from the bottom in the examples table: https://docs.oracle.com/cd/E12058_01/doc/doc.1014/e12030/cro...

OR

I can go to ChatGPT/Copilot and directly ask for what I need:

"Tell me a cron expression to run command on each third Monday of a month and explain it"

and it will give me a good enough approximation right from the start. I won't need to fight popups that shame me for using an adblocker, and I won't need to go through tons of spam. I'll get a much better head start, which will save me time and energy.


> "Tell me a cron expression to run command on each third Monday of a month and explain it"

ChatGPT responds:

> 0 0 0 ? * MON#3 *

Regarding MON#3, it explains: "Day of the week (MON#3): This is the crucial part of the expression. We use MON to specify that we want the command to run on a Monday. Then we append #3 to indicate that we want the third occurrence of Monday. This ensures that the command will run on the third Monday of the month."

It does not mention, though, even with the smallest hint, that standard cron does not support this syntax. Or a 7-field syntax.

(By "standard cron" I mean Vixie cron derivatives like the one that ships with Debian)

The correct answer in this case would have been: "This is not possible with standard cron syntax alone. You can either use a different cron implementation that supports MON#3, or schedule for every Monday, but then in your script check if it's the third Monday."
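For illustration, a sketch of that second workaround (/path/to/command is a placeholder; note that % must be escaped as \% in a crontab command field):

    # fire at 00:00 every Monday; the guard only runs the command
    # when the day of month falls in 15-21, the only window the
    # third Monday of a month can occupy
    0 0 * * 1 [ "$(date +\%d)" -ge 15 ] && [ "$(date +\%d)" -le 21 ] && /path/to/command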


But can you trust that the expression it generated is accurate and it's not lying to you?

I'd rather use something like https://crontab.guru/ to verify my syntax, so I have it bookmarked for the 1-2 times a year it comes up.


As I said, it gives me a very good initial approximation, so I can go to the docs with an example of what I'm trying to figure out.


> Tell me a cron expression to run command on each third Monday of a month

An LLM is probably the only thing which could tell you this.


I gotta say, LLMs are an epic solution to things like cron expressions, regexes, and date-time formatting templates.


Yes, I'm sure in 2-3 years I'll be finding plenty of regexes analogous to that cron expression, like one matching balanced parentheses.


Why would only an LLM be able to tell you this?


Because it needs to be hallucinated.


I don't understand, a Stack Overflow post could tell you the same thing as the LLM. Or are you just being sarcastic? Recent LLMs are fine in terms of smaller things like cron or regex, I use them daily, but yes, for larger pieces of code, they're more likely to be incorrect.


I mean, yes, an SO answer could also be wrong, or you could ask a human pathological liar, but these are much less likely to be wrong to begin with, and it's far less likely they could "explain" their answer convincingly.

> Recent LLMs are fine in terms of smaller things like cron

This is obviously false given the contents of this thread.

Indeed, SO (well, SU) got this right: https://superuser.com/questions/428807/run-a-cron-job-on-the...

And crontab(5) explicitly calls out why this is impossible:

       Note: The day of a command's execution can be specified by two fields —
       day  of  month,  and  day  of week.  If both fields are restricted (ie,
       aren't *), the command will be run when either field matches


> This is obviously false given the contents of this thread.

Not in my experience. GPT-4 works a lot better than previous LLMs. Have you used GPT-4 yet? Typically, in my experience, the people who talk about LLMs not being that useful (or being "pathological liars") have not used them recently, or at all. So, I mean, not an opinion worth considering then.


Sorry, I'm talking about the actual claims in this thread, not your religion.


I have an "actual claim" in this thread, but because mine doesn't agree with your beliefs about LLMs being bad, you ironically claim it's my religion instead, lol.


Your claim then seems to be that galkk is just plain lying? That seems uncharitable.


Lying? No, I never said that. Their version of ChatGPT (3.5, which is older) or Copilot (the Codex model it uses is much older) might very well be hallucinating wrong answers, sure. But my claim was that the newer models such as GPT-4 work well for certain classes of problems, so no, it's not "obviously false" that they work well, which is what you claimed. Does GPT-4 get stuff wrong too sometimes? Sure, I never claimed it's perfect either. But unlike for some people in this thread, it has generally worked well for me, based on my experience. If you want to discount my experience and only listen to people who confirm your beliefs, you do you.


This is absolutely what I imagine using Copilot for and also exactly why I don't understand the rapturous praise for increasing productivity. I have no doubt it will speed up the tasks I only need to do once or twice a year. I look forward to it! But until then where are these productivity gains people are talking about?


I'm working on a product and a lot of the code was written by ChatGPT, to be honest. It sped up my work by 3x, and I could ask it very specific questions about the code as well, including giving me multiple ways to optimize it. If you're working with specific libraries and languages, it can work great.


I'm playing with it, but not for work stuff, and not for everything. I really haven't done a ton, but it has given me some useful insights. I'm normally a vim-only guy, so this has been a doubly useful experiment: using VSCode and using Copilot.

The key thing to me is it's nice for certain kinds of work. Loosely/offhand, those categories:

- Writing stuff from scratch: Sometimes useful. Particularly if it's a "write it, get it working, forget it" script.

- "Polishing" code with close attention to detail/docs/implementation: This is an area where I'm more likely to immerse myself in understanding the code, and wouldn't be likely to use Copilot, as it would break the immersion.

- Writing accessory scripts: E.g. a sqlite alembic migration script; I really am not a DB guy, so outlined some steps in a comment, read what Copilot wrote, tested it out, called it a day.

- Adding stuff in an existing codebase: In cases where there are hundreds of functions across dozens of files, the idea of Copilot knowing the functions and suggesting them for me, without my needing to look them up, sounds great. I'd definitely like to attempt this with work's codebase someday. I think a lot of precursor tools could do much of this; I just never got in the habit of needing them.

I spend more time with ChatGPT currently, but I'm pretty sure I'm nowhere near being a heavy user there, either. I think 75% of my peers are basically doing nothing with AI tools currently. (It helps that I do side projects a lot more than they do, too...)


Senior engineer here: none of the strongest developers I know bother with it. It’s fairly useless if you know exactly what you’re doing.


> Is nearly everyone using Copilot at this point?

I think the big difference is the type of programming people do. LLMs are great at handing you code that is routine, but I find them problematic when I try to do anything even a bit uncommon; I spend more time debugging than it would take to write it myself. I think a lot of the confusion comes down to treating all coding as the same, instead of looking at the specific task. In the same way, writing a fantasy novel doesn't make you good at writing poetry, or a legal document, or a technical report.


I started using it for some simple stuff at work and it was helpful to generate long annoying statements in a web backend.

I recently started learning Go, and I felt like I hated it: I couldn't make any mistakes or think about the syntax, because Copilot was constantly trying to impress me.

One thing I really hate is it will always try to auto-complete my comments. Sorry robot, I am a divine being and these comment ramblings are mine alone.


No, because my workplace bans it, so I don't find a subscription for personal use worth it. Before my workplace banned it, I used the free version last year, and I found it slightly useful, but not essential - more of a smart Autocomplete than anything truly intelligent. ChatGPT and the GPT API have been much more useful for my personal coding (mostly as an enhanced Stack Overflow).


> because I'd hate for it to cause my skills to rot or for me to become too dependent on it.

You think it's bad now, imagine 5 years from now. This version of Copilot is just the beginning. Will you care about skill rot when everyone else around you knows the skills to effectively use the new generative AI models and is producing 5x?


    I'd hate for it to cause my skills to rot
Well, good news: it's nowhere near good or useful enough for that. Currently it's mostly helpful with boilerplate stuff and coding quiz style challenges.

It doesn't know anything about which algorithm might be appropriate for a novel challenge, it doesn't know anything about your database or codebase, etc.

My personal opinion so far is that this is going to be a very useful tool. A big leap will occur if/when it can work in tandem with knowledge of your local code+data. However I don't see it ever replacing higher level reasoning or rotting one's skills. Unless the "skill" in question is memorizing boilerplate stuff.


Copilot just gets in my way by overriding my usual suggestions of class names, etc. I have it, but generally disabled.

GPT writes larger blocks with more context, on demand. It's quite slow and ignores certain commands, though, which is irritating.


I have not tried it yet, and don't feel much interest. Typing in code is not much of a bottleneck for me; I spend far more time reading, thinking, and discussing than I do writing.


I am using it; the process is writing the documentation and letting it get things done. If your API is reasonable, it will figure things out. I am not using it for things I don't know how to write; I am using it for things I know how to solve relatively fast but don't want to bother writing out completely.

There are cases though where it came in clutch and implemented a solution much better than I could by leveraging some very hidden numpy function call that I couldn't find even if my life depended on it.


I don't use Copilot, but sometimes I ask ChatGPT to write some small algorithmic function. I describe the inputs and what it should do, as I would with you, and most of the time it works. It saves me the time of checking the reference. I keep working on the code calling that function while it's generating. When it doesn't do the right thing, I attempt to explain myself better. If I get the feeling that it's hallucinating, I give up and write it myself.


> Is nearly everyone using Copilot at this point?

Nope. For what it's worth, I use Clojure(Script) these days, and working with the REPL is my preferred way to interactively "chat" with my program.

I asked a family member how they use AI for programming. He said it's mostly ChatGPT for soft skills like emails or JIRA tickets, or rubber-ducking a design problem. He doesn't use any AI tooling for development. It seems both authors recognize this as well.


I haven't used Copilot, but I have used ChatGPT to generate some demo HTML5 pages using canvas and animations. While the results didn't match what I described, it at least produced some interesting output that was (technically) functional. It helped guide me towards what I wanted to accomplish while providing some basic boilerplate code for me to run with


> it's different reading and reviewing its code than to punch a few characters into my IDE and have code magically appear.

I feel you missed something here.

ChatGPT: reading and reviewing its code

Copilot: punch a few characters into my IDE and have code magically appear... and you read and review the magically appeared code, just like you do with ChatGPT?


Or write comments, and Copilot will act like GPT.


How useful are your skills when people with fewer skills are creating at higher value and velocity, with arguably better code, more testing, etc.? I think that's where we're going to get to. Young minds will geek out on AI; they won't be nearly as good in terms of knowledge, but they'll produce at a rate that justifies the means.

I think combining or crafting your offboard brain/prompts at this point is par for the course, or maybe even a no-brainer. Take your knowledge, and the things you don't like or that don't pique your interest, and give them to the robots. That's what I've been doing, so I can focus more on the critical path. I don't trust any of these robots, but not everything they produce is bad; some of it is actually pretty good.

I see this mentality a lot, and I don't understand it, but people seem to take satisfaction from doing things the hardest way. While I can appreciate the feeling of accomplishment, and being able to overcome an obstacle while learning, if there is a solution that takes less time to build and create while providing the same outcome, that tool is going to have a home in my tool chest.


Not GP.

If Copilot/GPT produces stuff with subtle problems, and it always will, then it will not be better code than someone with skills.

More testing? Generated by the same LLM? Sure, but it might be testing the wrong thing.

I think that those with skills should lean on them. Let code monkeys use LLMs; experts should be human, not a fleshy bot feeding another bot and uncritically using the spew.


Humans produce code with subtle problems, and they always will, even with expert skills.

Sure, but humans could be testing the wrong thing too.

They should, but let's not pretend that experts, or any human for that matter, are infallible.

I wouldn't refer to people as code monkeys; it's rude. I can see on the site listed in your profile that you're having issues aligning your navigation <div> (left or center?). Like you, I too make mistakes. In this case ChatGPT recommended using a <container>; while I'm not sure if it's the right path forward, it seems to fix the issue on your page, keeping the <nav> left-aligned.


Anyone who uses an LLM to write lots of code for them is a code monkey. They're turning off their brain.

My navigation is where I want it to be. It's not a mistake. Nice try.

Humans may be fallible, but if they wrote the code, they'll have an easier time figuring out where their assumptions were wrong, because at least there were assumptions. An LLM just regurgitates statistically-massaged word lists.


Name calling people who don't think like you tells me everything I need to know about you. You're not better or smarter than anyone else. You think you are, but just poking around your wares, you're really not.

Your navigation looks like utter trash. I've seen high school graduates do considerably better. It is a mistake. Just own it.

You've taken a stance based on regurgitated talking points. Do you get upset with yourself when you learn something from stackoverflow?


I don't use Copilot because there's no plugin for it for my Vim version.



Thanks, it needs 9 and I have 8. I'll have to find a deb just for this, though, as Copilot is pretty handy.


Yea this.


I don't use it and have no interest in it myself, but I recognise the value it has for many others.


I use it and not having it feels like there's something important (helpful) missing when coding.


So you don't like autosuggest either? I could write out the rest of my search, or just type less


> So you don't like autosuggest either? I could write out the rest of my search, or just type less

I'd like to think that we should all want to formulate our questions well.

Having someone (or something) complete our questions is like having someone interrupt you halfway through a thought and finish your sentence for you.

For the one completing the thought, they think they're being helpful. For the one being interrupted, there's a decent chance it's a <wtf?> moment.

Being able to form a complete thought by yourself is something that's important. Don't throw this away....


Actually, I don't. I turn off most autosuggest, autofill, etc in practically every app I use.


Depends on the implementation. I abhor systems that constantly flash text in your face that may or may not be the word you are typing. The iOS typing-suggestions bar is the worst at this. It feels like trying to talk while listening to your own voice on a delay, or like others shouting words at you that they think you might mean.


I'm with you, but for different reasons.

I'm sometimes asked to explain a piece of my code.

If I told my boss, "I dunno. ChatGPT wrote that part, and to me it looks like it works OK" I'd probably be fired.


Perhaps you should ask ChatGPT to explain the code you are not clear on. It's a great opportunity to learn, and honestly that's part of the 'fun' of using Copilot; I've learned some new things that I wasn't aware of. Mostly they're just newer syntactic sugar, but it's still good to keep up on a language's latest syntax / features.


If your code is written by ChatGPT-n, and you can only explain it to your boss with ChatGPT-n, you will be replaced by ChatGPT-(n+1).


People said this type of thing about Visual Basic, and code-generating wizards, and using libraries instead of coding up your own binary trees, and so on.

At some level somebody with engineering knowledge is going to have to understand the problem domain, the code, the data, the use cases, ancillary data stores, etc.


The way I use it, I only accept suggestions that are what I was planning to type anyway. For me, it usually only suggests single lines, so it's not hard to understand.

So it's more like it's predicting what I already wanted to type, rather than something like pair programming and giving me snippets.


You're suggesting a false dichotomy here: that one can understand their code or use ChatGPT but not both.

Outside of Leetcode-style stuff it's never going to produce reams of correct code in non-trivial projects with so little supervision that this is a major problem.


I don't know anyone who uses it anymore


Is it bad for your skills to rot if you don't need them anymore? I also don't know how to make fire without a lighter or a matchstick.


You’ll grow dependent on a technology that might not be sustainable to run in the long term, or that will increase in price. Once the product ceases to be affordable, you’d need to switch away and relearn how you did certain things.

A similar thing happened with me: I used Adobe products at a student discount, only to realize their hefty $60-per-month price tag was not something I was willing to pay.

And if the gained productivity is only marginal at best, it’s better to not grow dependent on these technologies.


Isn't it way more likely that the technology will become more available and cheaper/for free? It doesn't really have a lock-in effect like a UI that you learn.


Your lighter or matchstick aren't going to change their algorithms; they always work in predictable ways and you know the conditions under which they won't work (wet). So you can safely forget how to find and use flint for firestarting.

This puts you closer to institutionalized domestic violence. Give up your existing career and become dependent on [service provider]; you won't ever need to be able to provide for yourself because my behavior will never change for the worse, right? No provider has ever decided to become withholding, vindictive or capricious, right?

Your lighter won't do this to you.


Will you need to demonstrate the skill in an interview during your next job search?

^ I'd say that's a pretty good litmus test of whether you should let the skill rot.


Yes, it's bad to lose any skill.


I can drive a car but I can't ride a horse.

Somewhere along my family tree we didn't pass this skill forward and it was lost.


_You_ can't lose a skill if you never had it.


It happens all the time because as humans we are limited.

Personally I see it as a rite of passage for a senior developer to take note of and embrace their own limits.


I've been programming for a living since the 90s. So many programming languages over the years.

Recently I started doing this in a language that is new to me (but old): Python. Copilot is incredibly helpful in learning its idiosyncrasies and its libraries. And the suggestions do actually keep getting better and better. I've been using it for about 3 months now.

At least for Python, I highly recommend it. It accelerates the learning of a new language.


I wholeheartedly agree with this.

If I’m using an unfamiliar library, I’ll type a comment on what I want to do then start attempting to use the library. Copilot is amazing at getting a close enough solution. I can often skip reading a lot of irrelevant documentation.


For $10 a month, all Copilot has to do is save me a handful of minutes each month for it to be worth it.

Most of my current work is creating new React components. In these tasks, Copilot probably saves me 20 or 30 minutes a day on average. Often more. It's not always right, but often close enough, and I'm not committing my code without testing it anyway.


I’ve found Copilot to be most helpful when working with unfamiliar code. Even when it’s slightly wrong, it does an amazing job of landing me in the right place.

A simple example: if I’m using a library like date-fns, I can describe what I’m looking to do, and Copilot will find the correct method for me, but might reference the wrong local variable. This is even more helpful when a library has a bunch of similar APIs (parse, parseUTC, parseISO, etc.).
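For example (a hedged sketch; response.createdAt and the wrong created_at guess are illustrative):

    import { parseISO } from 'date-fns'

    // illustrative stand-in for an API response
    const response = { createdAt: '2023-06-09T12:00:00Z' }

    // Copilot lands on parseISO (rather than parse or parseJSON),
    // but may guess a variable that doesn't exist, e.g.
    // response.created_at instead of response.createdAt
    const createdAt = parseISO(response.createdAt)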

It also really helps with the under-documented prep work you might need to do. Some libraries have really poor default configurations and setup documentation; they require non-obvious setup that can be hard to track down. Copilot does an amazing job of getting you in the ballpark. Even if it’s missing settings, it becomes incredibly clear how a library intends for me to use it.


Couple of comments:

> But the exact same argument can be made of every advance that has continously raised the level of abstraction of programming over the decades.

Languages and libraries are written (at least up until now) by humans, who _care_ about writing good working code. Generally, if you use a library, it will do its job flawlessly, within the bounds of what it's expected to do. You rely on the people writing and maintaining it to perfect the job it was designed to do. Copilot is making suggestions, each an entirely independent "guess" at what you're trying to accomplish (apologies, I don't know _that_ much about ML, but I don't think this is far-fetched). Meaning that each suggestion produces code in untested waters: it wasn't code written specifically to do this job and used over and over by others. It's not like it's part of a project where a bug report will get filed when it breaks...

Edit: Moved the top portion below, as it was sort of just repeating what was said:

I think the difference between sifting through Copilot suggestions and sifting through results on Google is that, with Google, you are generally looking for the one answer that fixes your problem, not 5 different (probably working) solutions which may each contain different bugs. With Google, I (or a developer of any level) just need to keep searching until I find a working answer. But validating a Copilot answer for any potential flaws is much more error-prone.


I don't use Copilot, but I did try a few tools lately, like Phind and ChatGPT, and tried to use them for daily coding chores. So far, without much success. I tried using them as glorified code generators, but the problem is that the generated code didn't take into account existing conventions in the private codebase.

This makes sense, since it wasn't trained on the private codebase, but not using existing helpers, naming, and abstractions makes the results largely useless, since only a minority of the output is usable. Like the author mentions, what these tools do with the input is a gray area, and nobody wants to publicize their code through chatbots. It'd be cool if there were a self-hosted alternative out there, trainable on private codebases.

I've also tried using the bots for answering questions about tech problems I've been having. Answers are mostly off point, contain a lot of unrelated info, and most importantly are generated slowly. In the time it takes to get an answer from a chatbot, I can scan through a dozen Stack Overflow pages.

As far as I'm concerned we're not there yet but never say never.


Don't just blindly accept or reject what the model gives you. On one hand, these are just tools; on the other hand, they are powerful tools. The more you know the programming language, libraries, frameworks, design patterns, etc., the more you can benefit from them: the better you know what good code should look like, the better you can tell the model to generate that code and the quicker you can check it. Conversely, if you're not a programmer and the model generates code that compiles, it's bad for you if you don't know what this code is doing...

I personally love these toys and the only downside is that they are services and not local tools (yet).


Other than the points on potential issues with intellectual property, I find the referenced article completely unconvincing. It's just modern moral anxiety split into bullet points (e.g. how much energy does it use? we don't need more code (speak for yourself)). Is there a name for this kind of self-hating west coast tech liberal yet?

As for Copilot, it's OK. GPT-4 is much, much better when you learn how to use it. Busy work is completely automated. I need to transform strings in one form into structs? Just paste a representative sample of the data and tell it to. Is the data dirty? Just tell it how it's dirty and it will handle those edge cases. Want unit tests to verify the function it wrote works for you? Tell it to write the unit tests, then read them, point out what it got wrong, then check again.


> Is there a name for this kind of self hating west coast tech liberal yet?

When I was a mouthy teenager, my mom used to call this "It's easy to shit when your butt is full".

Sounds better in Slovenian. It's the same idea as The Man in The Arena – it's easy to have opinions when all your needs are met. If you really believe in what you're saying, get in the trenches and start doing the work.


What work are we talking about here? I'm pointing out that there exists this stereotype of people in tech that hate themselves.


I'm pointing out that they're spoiled and it's very easy to moralize about all sorts of things from that position.

For example: All the people railing against capitalism while getting large parts of their comp from equity and stocks and such. And making top 5% incomes. If you really believe capitalism is bad, why don't you leave Tech, sell off all your equity/stocks, and go live in a commune somewhere? hmmmm


>> $10/month when translated to most places in Asia would be significant enough to keep this out of the reach of most young programmers.

I find this hard to believe. I know that $10/month (for anything) is prohibitive for some people in some regions, but it's hard to believe that if you already have a computer, an internet connection and a cell phone, an incremental $10/month would break the bank.


I am fairly sure that by not using AI, I learn faster than I would otherwise. That's my main motivation. It helps that I've found I learn a tier faster than any peer I've run into, too.


Neither do I...


ok.



