Hacker News

I’ve never understood the value proposition for Copilot.

In terms of difficulty, writing code is maybe on average a two out of ten.

On average, maintaining code you wrote recently is probably a three out of ten in terms of difficulty, and maintaining code somebody else wrote or code from a long time ago probably rises to around a five out of ten.

Debugging misbehaving code is probably a seven out of ten or higher.

GitHub Copilot is optimising the part of the process that was already the easiest, and makes the other parts harder because it moves you from the “I wrote this” path to the “somebody else wrote this” path.

Even during the initial write, it changes the writing process from programming (which is easy) to understanding somebody else’s code to ensure that it’s right before accepting the suggestion (which is much less easy). I just don’t understand how this is a net time/energy savings?

Did you try it? Because I've been using it for weeks and it makes me read these types of comments as "I don't understand the value of the internet" or "what's the purpose of owning a phone".

It's night and day if you have it enabled or not. There's just no question about the value proposition once you start using it.

I mean, you can tell comments here from people who have actually been using it, and people who have not tried it.

> Because I've been using it for weeks and it makes me read these types of comments as "I don't understand the value of the internet" or "what's the purpose of owning a phone".

And yet apart from making this very inaccurate comparison you haven't made any argument for why such a thing as Copilot would be useful to anyone. How do you personally find Copilot useful? And why do you think someone whose job demands more than copy/pasting boilerplate code should try Copilot? The onus is on you to convince the skeptics.

I think the current Copilot is less useful for someone who already knows the language and libraries entirely.

Right now I see more use for people who understand the language and libraries, but are frequently Googling ”how do I do xyz in P” (because they can’t recall certain things).

Bouncing around languages and ecosystems is pretty common, isn't it?

For the average career developer, less common than being stuck with the same legacy code base and language/framework for years.

Yep. A lot of the negative comments seem to come from people who haven’t worked on any new technology/language/framework in years.

Only within startups. Larger corporations have stable toolchains with lifetimes measured in years.

Based on my experience at a very large corporation, not the case. I've had to work on more languages within a year here than any other job I've had.

Almost everyone searches like that; it's not just about getting something done, but about finding the best and easiest way to do it.

If you mainly deal with 1-3 languages on a daily basis that you have mastered, you don't routinely search for "How do I do xyz in P". Maybe if you're a junior or intermediate developer, or have a poor memory. But doing that frequently is a clear indicator that mastery has yet to be achieved.

It's not wrong or bad to search for help, but it doesn't indicate mastery of the language you're using.

I would say if you work in a narrow domain with a single language then yes, you might not need much searching.

However if you routinely switch among 3-5 languages you will get confused by naming and idiomatic approaches.

Ex: 1. Was it toUpper or upper or upperCase ?

2. What was the most idiomatic way to filter some collection?

3. Was it justify-content and align-items or vice versa?

A good IDE will generally solve the first one.

Presumably Copilot should help with the second by supplanting search.

The third one I do hope Copilot helps with..
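On point 2, even within one language there are usually several workable ways to filter a collection, and remembering which one is idiomatic is exactly what slips when you hop ecosystems. A quick Python sketch (my own illustration, not from the thread):

```python
nums = [3, -1, 4, -1, 5, -9]

# Three ways to keep the non-negative values; the comprehension is the
# idiomatic Python one, but coming back from another language you may
# reach for filter() or an explicit loop first.
kept_comp = [n for n in nums if n >= 0]               # idiomatic
kept_filter = list(filter(lambda n: n >= 0, nums))    # functional style
kept_loop = []                                        # explicit loop
for n in nums:
    if n >= 0:
        kept_loop.append(n)

assert kept_comp == kept_filter == kept_loop == [3, 4, 5]
```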

I would say not remembering the name of some method is not an indication of a lack of mastery.

Even creators of popular libraries and programming languages have admitted they will use search to refresh their memory.

Knowing a language or a tool doesn't mean you will always know the best or the smartest way to do something; this is not necessarily a test of your programming ability. Best practice is ever changing: almost every language and tool keeps changing and improving, and best practice evolves along with it.

Secondly you don't necessarily need to know or master a language or tool for every kind of work. You can just choose to learn as you go along with it, in which case knowing how to search and use the most effective way to do something is very useful.

> The onus is on you to convince the skeptics.

Is it?

Look, I don't care at all if you use copilot; you can use notepad to write your code if that floats your boat; do whatever you want.

What the parent post said is: Copilot is useful; it helps you write code with autocomplete suggestions.

If you think that you don't get productivity gains from an IDE or you're in the 'no IDE makes you more hardcore and better programmer so never program with an IDE' camp, we just have to agree to disagree.

So, I have no interest in that conversation.


However, there is a more interesting conversation here we can have:

Given that you have an IDE and use autocomplete:

1) Does Copilot give suggestions that are meaningfully useful?

Yes. I honestly can't give you a better answer than that. Yes, it does, it's quite good.

If you don't believe me, try it.

2) Is it better than regular autocomplete?

Look, forget the 'Look, I typed 'process user form and display on UI' and it autocompleted a whole application for me!!' hype.

That's stupid and that's not how it works.

It's an autocomplete. It can autocomplete large chunks, but they're generally rubbish. ...but it does two very interesting things:

- It suggests things that are contextually correct.

For example: even though it's rubbish at C++ syntax, it generates valid UPROPERTY and UFUNCTION blocks for Unreal code. If I write a `for (y = 0;...` on an array, it generates the associated `for (x = 0;...` when I'm iterating over a 2D array.
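For the 2D-array case, the pattern it completes is roughly this (sketched in Python rather than the original Unreal C++, purely for illustration):

```python
grid = [[1, 2, 3], [4, 5, 6]]

# After you type the outer loop over y, Copilot tends to suggest the
# matching inner loop over x; written out in full the pattern is:
total = 0
for y in range(len(grid)):
    for x in range(len(grid[y])):
        total += grid[y][x]

assert total == 21
```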

If I have a function which takes a pointer like:

> UFvContainerWidget* UFvContainerWidget::ForEquipmentDef(UFvEquipmentDef* EquipmentDef, bool& IsValid)

I press enter and it pops up:

    if (!EquipmentDef) {
      IsValid = false; // <-- WTF!
      return nullptr;
    }

Sure, it's a similar pattern to other code in related files, but still. This surprised me. I've never encountered an autocomplete that does that before.

Sometimes the suggestions don't make sense, and the larger the chunk, the less sense they make.

...but the suggestions that do make sense, make you regret not having it when you don't have it.

Like... regular autocomplete.

It's just a tool; it works very well at small scale autocomplete tasks.

- It can suggest comments from code as well as code from comments.

Literally, I can go above a function and type "/*" and it'll suggest a comment.

These don't always make sense, but often they're pretty close, and it saves me 20 seconds typing.

You have to read these comments carefully, because they tend to get random crap in them, but once again, for short comments it's not bad.

Again... it's surprisingly good. Not perfect. It doesn't write your comments for you... but, it's easy to get into the habit of getting it to autocomplete "Returns null if the object is invalid" for you instead of typing it out.

3) Should I use it?

Look, I literally do not care if you do or don't.

What I take issue with is people saying 'it has no value'.

Does autocomplete have a value? Then this has a value.

Saying it has no value is just trolling.

Is it worth the cost?

Well, it's free to use right now.... so, well, you can't beat free right? :)

Longer term, would I pay for it?

Probably (** Personal opinion only: Maybe... you should try it and decide for yourself?)

I'm probably going to use it, but isn't it valuable to be a bit skeptical and question what the long-term effects may be?

Over the past decades we went from snail mail to email to instant messaging, each step made it easier to write text to a person. Today, we are writing so many messages to each other that people have started arguing for less instant messaging and less email. Mainly because distractions and frequent context shifts allegedly reduce productivity and happiness.

With Copilot, we have a similar evolution where writing code becomes easier. Could this result in people writing more and more code since it is a smaller effort? What would this do to the developer ecosystem in the long term? Maybe code reviews will take longer because there is more code and because it is more likely for junior developers to introduce bugs using copilot code. Maybe this results in more bugs slipping through code reviews and into production and eventually lower productivity and happiness since more time is spent stressfully fixing production errors. I can't predict the future, but I do think it's valuable to ask these questions before it's too late.

> Literally, I can go above a function and type "/*" and it'll suggest a comment.

Finally we're getting somewhere with this AI stuff....

> Well, it's free to use right now....

From Copilot's "additional telemetry":

> If you use GitHub Copilot, the GitHub Copilot extension/plugin will collect usage information about events generated by interacting with the integrated development environment (IDE). These events include GitHub Copilot performance, features used, and suggestions accepted, modified and accepted, or dismissed. This information may include personal data, including your User Personal Information

So, not really free is it?

> Should I use it?

> Look, I literally do not care if you do or don't.

> What I take issue with is people saying 'it has no value'.

It depends on semantics and your interpretation of value.

In the eyes of most people, free == 'something that I don't have to pay for through my bank account or other means', as opposed to caring about analytics, telemetry etc.

At least AFAIK that's the common usage and what almost everyone means, though it's definitely worth it to talk in more detail about what hides under that term most of the time!

> suggestions accepted, modified and accepted, or dismissed. This information may include personal data, including your User Personal Information

Seriously wtf, my legal department will have a heart attack if they read this.

Yes it’s free

That sounds about as useful as Tesla's "auto-pilot" - good when it works but you have to always pay attention that it's not trying to kill you (or your code in this case).

A bad proposition for most people, because you can't trust it.

The whole trust thing is an interesting topic. I thought the same thing but then I got Open Pilot. I would feel comfortable falling asleep with that on at this point. It takes time though. Around 400 miles for me before I "fully" trust.

Sounds like you haven’t tried it :)

First, why would I need to argue this? Just try it and see if it’s useful for you. I know people who don’t use IDEs or don’t use syntax highlighting and it doesn’t seem to bother them much, so who knows.

Second, I’ve actually answered elsewhere in this thread.

You can sometimes just tap enter inside a model or another file (in a framework like Laravel, for example), and it'll literally guess the entire function. If you just write a comment about what the function should do, or name the function something sensible, it'll get the whole function right maybe 58% of the time, and when it's wrong there are other options to choose from; one might get you 90% of the way to the end goal and just need a few modifications.
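As a concrete illustration of the comment-to-function flow (hypothetical Python; `slugify` and its behaviour are my own example, not something Copilot is guaranteed to produce):

```python
# Typing just the comment and the def line is often enough context for
# Copilot to propose a body much like the one below (which you still
# have to read and verify before accepting).

# Return the slug form of a title: lowercase, words joined by hyphens.
def slugify(title):
    return "-".join(title.lower().split())

assert slugify("Hello Copilot World") == "hello-copilot-world"
```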

I've used kite, and tabnine and loved tabnine, but this is something different and way more like magic. I can't explain how, it just feels like it's reading my brain as I type..

I've only used it a bit and it's like glorified autocompletion. It messes up a lot in some ways too, often suggesting getFoo when in Kotlin we just use foo.

It is really fun to see it sort of understand your code though, and every once in a while it does a smart suggestion that could've taken a while for me to figure out myself.

It's certainly a good autocomplete but it can't be used for anything more complex because you just can't trust it. Every now and then it produces an entire function filled with complexity and I think "This looks right but I would need to independently verify everything" and at that point it's easier to write from scratch.

Whenever it happens it’s actually a good foundation as it still does a lot of boilerplate

Are you worried about copyright issues from the code it produces?

It only spits out functions at most 10 lines long. There is very little that could actually be copyrighted in a 10-line block. And most of those suggestions are for incredibly generic problems, like finding the distance between two coords.

This is a cloud-based code suggestion platform. No corporation with a solid secrecy policy will allow you to use it. For private use it has a cost, so I prefer just learning the field. What precisely convinced you?

Plenty of corporations trust GitHub itself with their code, why should trusting copilot be any different?

Did you comment on the wrong post?

There's so many people in here saying "did you try it?" - and I signed up on the waiting list the day it was announced and still don't have access.

Was in the same boat! Just got approved literally 2 days ago. And I have tons of activity...

Could not have said it any better. Enabled it a couple of days ago and I was flabbergasted at the results it produced for me.

I don't see it either. The context switch between being in the zone/flow and writing the exact code I'm thinking of to suddenly reviewing blocks of foreign and quite possibly wrong code seems like a net negative value proposition. I can't even get autocorrect on my phone to do the right thing half the time.

Writing code is easy. Architecture, refactoring, and solving business problems are the hard parts of the job.

Writing new code is also generally the most rewarding aspect of the job. Co-pilot promises to turn that into just another unrewarding chore, like slinging 3rd party libraries together.

It's very useful for certain types of tasks that are inherently repetitive. It will construct debug strings very easily, for example.

e.g. type `pr` in a function with `x` and `t` in scope

and it may predict...

`print(f"The value of 'x' was {x} at time {t:.2f}s")`

There is some code in your life you wish someone else could write for you.

Because such code is dumb, tedious, and joyless. I sometimes have to bite my lip to convince myself that writing it is not a waste of time, because people demand it, but I hate it to my core.

Copilot is that unfortunate boy that has to do all that manual work. It is the ultimate code boilerplate mixer.

It is not going to write all your code for you; if your goal is to have it THINK for you, then you are due for disappointment. But it can help you be a more efficient and slightly happier programmer.

Copilot also optimizes for speed to a degree. It's akin to advanced auto complete. IntelliJ auto-completion is great. As much as it pains me to say this, I don't think I would be as effective writing Java in Vim as I am with IntelliJ. The key differentiator is the auto complete speed. Copilot, I feel, is just auto complete on steroids. It may not be perfect yet, but there is definitely a problem it solves.

Have you used it? My experience was quite atrocious. Copilot is not auto complete. It’s nonsense. I attempted to use it continuously for three weeks. I tried because I know someone who built it and I wanted to give them the benefit of the doubt.

It never prompted me with any code that was useful. It only ever slowed me down and caused me frustration. It’s nothing like Intellisense. It’s just trash.

I pretty much use it every day at this point and I notice when it is disabled.

> It’s just trash

I have a hard time relating to this kind of experience considering how useful it has been for me. What language are you writing in btw? When I use it for OCaml it's not that useful, perhaps because there isn't as much OCaml code to learn from :D

A lot of TS React or Node

Could you cite an actual specific example? I have a difficult time believing what you are asserting that it provided absolutely no value at any point - it just sounds like a baseless ad hominem attack.

I find that it's fantastic for TypeScript and JavaScript, letting me flesh out basic data object containers, class definitions, etc. extremely quickly.

If I don't remember the exact parameters that you have to pass into a certain NPM package's methods, it will usually prompt me and help me complete the call without me having to context switch to a browser and look it up.


TLDR; the subtleness of its wrongness destroys my ability to follow my train of thought. I always had to take myself out of my train of thought and evaluate the correctness of the suggestion instead of just writing.

For simple things, intellisense, autocomplete and snippets are far more effective.

For anything more complex, I already know what I want to write.

For exploratory stuff I RTFM

copilot was ineffective at every level for me.

It's helped me out quite a bit.

When I'm writing Angular code, it often fills in the correct boilerplate, and it's especially helpful when writing unit tests. I'm also quite surprised when it autocompletes various filter functions.

It isn't perfect, but it's been helpful filling in the mundane, simple stuff.

Sorry to tell you, but if you’re constantly, or even regularly, writing this much boilerplate code, then you probably need to change how you write code. Maybe try a different framework.

Try tabnine? It doesn’t generate so much nonsense because it’s all generated based on only your own codebase.

Tried it a bit. Not useful for me.

I pair program with a guy who completely refuses to use the keyboard unless there is no other option. He uses the mouse to cut/copy/paste everything possible. He is not handicapped. It is frustrating for me to pair program with him because of that.

He will spend an extra 3-5 seconds using his mouse in order to avoid typing.

Perhaps he is the target market.

Then making it work with Neovim seems like an odd choice.

The only drawback is....constraints. Autocomplete constrains suggestions to those that are valid or at least valid-adjacent (like it'll use something but auto-import it to make it valid, etc). Copilot fails miserably here and I don't yet see it improving anytime soon. Maybe it will, and if it does, it'll be great. But I won't hold my breath for it.

I mean, why only pick vim or IntelliJ? I get them both with IntelliJ and the plugin.

> value proposition for Copilot.

now, instead of copying off stackoverflow, it's gonna be off copilot. It will enable a lot more people to code who otherwise would not. Whether this is a good outcome or not...

This is even another example of just optimising for the easy bit.

I could hire 50 juniors that can code tomorrow if I wanted to. But even with an unlimited budget, finding good devs that can make it through a 2 year project without coming out of it with a big ball of unmaintainable shit is difficult.

The gulf from beginner to expert is already big, and the more crutches you use early on, the bigger it's going to get. There's a lot of people that wash out of the industry before they reach the point of being able to comfortably build good software (and be solely responsible for it).

I think copilot is another item in a long list of things that's good for big businesses (who optimise heavily for getting passable results with 1,000 mediocre devs instead of 50 good ones) and terrible for individuals in the long run.

Sooner than later using copilot will be an interview question and it will trigger a big red flag if the company cares about talent.

Your comment hides some truth! Imagine coding today without stackoverflow. Possible, but you'd lose so much time looking for simple answers.

The more experience I gain the less I use SO and more I just go to the sources or read the docs.

With googling for SO answers I have to parse the question, find a modern answer (because the accepted one is 10 years old and won’t work), parse that and adapt it to my problem. With documentation I just search for what I need and go straight to solving my problem and I’ve never felt more productive.

I feel like people new to programming focus too much on a specific problem at hand instead of learning the problem solving themselves. I wish I would’ve learned to figure out the issue myself from the start.

I feel like jobs where you can constantly rely on your already accrued knowledge are rare. Or maybe it's just me and I work in fields where I have to learn new technologies constantly?

Recently I’ve been doing a lot of OCaml and it’s been tough as there’s very little on stackoverflow. Every time I have a question I have to spend a lot of effort looking for the answer instead of relying on someone before me having the same problem and posting the answer online.

> programming focus too much on a specific problem at hand instead of learning the problem solving themselves

You've got the importance and focus wrong. People don't care about problem solving skills; they care about the specific problem being solved. That's why they pay someone to fix it.

They don't want to pay someone to "learn" problem solving (because the stakeholders don't care).

Stackoverflow immensely helped this sort of use-case, maybe at the detriment of quality, but I cannot deny that Copilot is going to accelerate this use-case.

Same. I was wanting to learn more about ActivityPub recently, and after reading the first two web search results, I remembered: this'll probably just be easier if I just read parts of the W3 spec (and it was)

My use of stackoverflow has vastly faded over time. I only ever go there to look for hints on ways to do something better, better practice, newer ways or a simpler way.

There are more and more wrong answers posted by what looks like the same people who have MVP on the Apple, Microsoft and Google forums who just whore for kudos points. I can't understand the motivation to dilute something of value.

And you'll get the same "works for me" results as you would from SO. I put my thoughts down a while back: https://bostik.iki.fi/aivoituksia/random/minimum-viable-copy...

>...now, instead of copying off stackoverflow, it's gonna be off copilot.

Eventually, I expect another breed of SO questions: making sense of Copilot-suggested fragments and seeking reassurance or alternatives... then possibly the copying off, just as now.

By the same logic you've presented, what's the value proposition of the plain-old auto-completion? What's the value proposition of a slick editor? All you need is the built-in notepad and a debugger.

Speaking from my personal experience, I usually write code in TDD style, in which I test the properties of the software I desire upfront, then make it pass with a minimal amount of effort. When I see there's a need for refactoring, I refactor. And I repeat this process until it is done.

The three parts take roughly the same portion of time, and when I'm writing tests, I'm thinking about the functionality and value of the software. When I'm refactoring I'm thinking about the design. When I'm writing the implementation initially, I want it to Just Work™ in the first place, and I find Copilot is great for this matter: why not delegate the boring part to the machine?
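As a minimal illustration of that red-green-refactor loop (a hypothetical Python example, not the poster's actual code):

```python
# Red: write the test first, stating the property you want.
def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(7) == "7"

# Green: the minimal implementation that passes; refactoring comes later.
def fizzbuzz(n):
    out = ("Fizz" if n % 3 == 0 else "") + ("Buzz" if n % 5 == 0 else "")
    return out or str(n)

test_fizzbuzz()
```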

You know, perhaps this is tangential to the point you're making at best, but I still couldn't help but notice:

> The three parts take roughly the same portion of time, and when I'm writing tests

that bit, and I have some strong feelings about it. At my current dayjob, writing tests (if it were even done for all code) would easily take anywhere between 50% and 75% of the total development time.

I wish things were easy enough for writing test code not to be a total slog, but sadly there are too many factors in place:

  - what should the test class be annotated with, and which bits of the Spring context (Java) will get started with it
  - I can't test the DB, because the tests don't have a local one with 100% automated migrations, nor an in-memory one because of the need to use Oracle, so I need to prevent it from ever being called
  - that said, the logic that I need to test involves at least 5 to 10 different service calls, which themselves use another 5 to 20 DB mappers (myBatis) and possibly dozens of different DB calls
  - and when I finally figure out what I want to test, the logic for mocking will definitely fail the first time due to Mockito idiosyncrasies
  - after that's been resolved, I'll probably need to stub out a whole bunch of fake DB calls that return deeply nested data structures
  - of course, I still need all of this to make sense, since the DB is full of EAV and OTLT patterns (https://tonyandrews.blogspot.com/2004/10/otlt-and-eav-two-big-design-mistakes.html) as opposed to proper foreign keys (instead you end up with something like target_table and target_table_row_id, except named way worse and not containing a table name but some enum that's stored in the app, so you can't figure out how everything works without looking through both)
  - and once I've finally mocked all of the service calls, DB calls and data initialization, there's also validation logic that does its own service calls, which may or may not be the same, thus doubling the work
  - of course, the validators are initialized based on reflection and target types, such as EntityValidator being injected but actually being one of ~100 supported subclasses, which may or may not be the ones you expect due to years of cruft; you can't just ctrl+click to open the definition, since that opens the superclass, not the subclass
  - and once all of that works, you have to hope that the 95% of the test code that vaguely corresponds to what the application would actually be doing won't fail at any number of points, just so you can do one assertion
I'm not quite sure how things can get that bad, or how people can architect systems to be this coupled in the first place, but at the conclusion of my quasi-rant I'd like to suggest that many of the systems out there definitely aren't easily testable, or testable at all.

That said, it's nice that at least your workflow works out like that!

Have you read Working Effectively with Legacy Code?

It's transformative in situations like this, it has a bunch of recipes for solving these kinds of problems.

While I don't use Java or C++, this book has probably been the most useful to me in working with larger bodies of code.

While the book is indeed good, it's pretty hard to do anything to improve that particular codebase, because there are developers who are actively introducing more and more of the problematic patterns and practices even as I write this.

To them it isn't "legacy code" but just "code". Attempting to offer alternatives either earns you blank stares, or concerns that anything new will cause inconsistencies with the old (which is a valid concern, but doesn't help when the supposedly consistent code is unusable).

To me it feels like it's also a social problem, not just a technical one. If your hands are essentially tied in that situation and you fail to get the other devs and managers onboard, then you'll simply have to either be very patient or let your legs do the work instead.

Thanks for sharing this. I can feel you because I have been working on a similar project but slightly better, however, it's painful still for me. I wrote a comment last month [0] that is more or less related to what you've said. Basically, you want to write fewer tests that really matter, while the infrastructure should be fast and parallelizable.

Sadly it's easier said than done, since it's not an easy thing to fix for an existing system. We've spent quite some time improving things to ease the pain of writing tests; it has been getting better, but it will never reach the level it would have been at had we been aware of this problem in the first place. There are tens of thousands of tests and we cannot rewrite them all.

I'm not too familiar with your tech stack. But there are two things you mentioned that are especially tricky to handle for testing: DB and service calls.

For DB, there are typically two ways to handle it: Use real DB, or mock it.

A real DB makes people more confident, and you don't need to mock as many things. The problem is it can be slow and not parallelizable, or worse, like in your case, there's no idempotent environment at all. We had automated migrations, but the tests ran against the SQL Server on the same machine, so they were not parallelizable and took more than a day to run on a single machine. On CI there are tens of machines, but it still takes hours to finish. In the end, we generalized things a little bit and used SQLite for testing in a parallel manner. (Many people advise against this because it's different from production, but the tradeoff really saved us.) A more ideal approach is to have SQL sandboxing like Ecto (written in Elixir). Another is to have an in-memory library that is close to the DB; for example, the ORM Entity Framework has an in-memory implementation, which is extremely handy because it's written in C# itself.
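The SQLite-per-test idea can be sketched with Python's stdlib `sqlite3` (the poster's stack was SQL Server/C#; this just shows the shape, with a hypothetical `users` table):

```python
import sqlite3

def make_test_db():
    # Each test builds its own in-memory database, so tests can run in
    # parallel with no shared state and no cleanup step afterwards.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    return conn

db = make_test_db()
db.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
count = db.execute("SELECT COUNT(*) FROM users").fetchone()[0]
assert count == 1
```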

If there's no way to leverage a real DB, you have to mock it. One thing that might help is to leverage the Inversion of Control pattern for DB access; there are many doctrines, like DDD repositories, Hexagonal, and Clean Architecture, but they're essentially similar on this point. That way you'll have a clean layer to mock, and you can hide patterns like EAV under those modules. As you use them enough, they will evolve, and helpers will emerge that simplify the mocking process. Based on your description, the best bet I would say is to evolve in this direction if there's no hope of using real DBs, as you can tuck as much of the domain logic as possible into the "core" without touching any of the infrastructure, so that the infrastructure tests can be very simple and generic.
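A minimal Python sketch of that inversion-of-control idea (the `UserRepository` names are hypothetical): domain code depends on a small interface, and tests hand it an in-memory fake instead of mocking SQL calls:

```python
class UserRepository:
    """The clean layer to mock: domain code only sees this interface."""
    def get(self, user_id):
        raise NotImplementedError

class InMemoryUserRepository(UserRepository):
    """Test double; a real implementation would wrap the DB mappers."""
    def __init__(self):
        self._users = {}
    def add(self, user_id, name):
        self._users[user_id] = name
    def get(self, user_id):
        return self._users.get(user_id)

def greeting(repo, user_id):
    # Domain logic under test; it never touches the DB directly.
    name = repo.get(user_id)
    return f"Hello, {name}!" if name else "Hello, stranger!"

repo = InMemoryUserRepository()
repo.add(1, "Ada")
assert greeting(repo, 1) == "Hello, Ada!"
assert greeting(repo, 2) == "Hello, stranger!"
```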

For service calls, the obvious thing is to mock those calls. The not-so-obvious thing is to have well-defined service boundaries in the first place. I cannot stress this enough. When people fail to do this, they spend a lot of time mocking services, while at the same time feeling they've tested nothing because most things are mocked. Microservices have gotten too much hype over the years, but very few people pay enough attention to how to define service boundaries. The ideal microservice should be mostly independent, only occasionally calling others. DDD strategic design is a great tool for designing good service boundaries (while DDD tactical design is yet another hype, just like how people care more about Jira than real Agile, making good things toxic). We are still struggling with this because refactoring microservices is substantially harder than refactoring code within services, but we do try to avoid more mistakes by carefully designing bounded contexts across the system.

With that said, when the service boundaries are well-defined, and if you have things like SQL sandboxing, it's a breeze to test things, because most of the data you're testing against is in the same service's DB and there are very few service calls that need to be mocked.

[0] https://news.ycombinator.com/item?id=28642506#28679372

The value prop for things like these is always the same: for the widely - and accurately, although that's irrelevant to my point - lambasted initial release to start the ten year journey down the road towards the creation of an eventual product that will make people link to comments like these the way they link to the original Dropbox Show HN: post.

There are levels of ease of which we have not yet dreamed, especially in the realm of information manipulation.

I mean, I'm obviously intensely skeptical that such a thing will happen at all, much less within ten years.

But I guess we'll find out eventually! And if mine does become the "640k ought to be enough for anybody" quote of this decade, then I suppose there are worse kinds of fame.

I think professional software devs won’t get much value from copilot.

OpenAI’s demo from a few months back showed it as a sort of bridge to convert natural language instructions into API calls. E.g. converting “make all the headings bold” into calls to a word doc API.

It can still do that. Write a small comment before writing CSS and see it go!

I feel like 10 years from now we will look at threads like this and laugh, akin to 64kb being enough.

640kb, not 64kb, is the value used in that apocryphal quote. A quick google search would have shown that. I wonder if there's a google copilot in the works for social media posts.

We do look 40 years back and laugh at people thinking that artificial intelligence will be easy.

Exactly. I'm already laughing reading the comments here, considering I've been using Copilot for weeks and it's a game changer.

>>> in terms of difficulty, writing code is maybe on average a two out of ten

Imagine what's possible when that difficulty level shrinks to 0.0001 / 10

Github Copilot is a "code synthesizer"

Xmas time takes me back to one of the most popular "toys" of all time: the Casio SK-1. Music sampler for the masses.

It's like that ;)

This is for outsourcing farms to dump even more garbage code on ignorant founders for pennies.

I guess it really depends on what you are working on.

If you are writing some non-trivial algorithms or working on projects which require delicate handling of things, then Copilot is most likely going to mess up.

But if you are working on the kind of frontend code or backend CRUD that is usually quite repetitive, then Copilot could be helpful.

It's meant to be a souped-up autocomplete. You don't quite remember how to do a common thing, and instead of having to go look it up, the IDE suggests it for you and you can keep doing what you're doing. A bunch of small instances of that can save you lots of time.

> I just don’t understand how this is a net time/energy savings?

At the end of the day it's all about trust. Do you trust code you find on SO or from Copilot to be good enough for your use case?

In my case I do not trust SO code. Whenever I use SO, if I find some snippet that seems to be the code solution I'm looking for, I copy-paste the snippet on my IDE, read through it carefully, rename variable names as needed, handle edge cases, remove unused code, etc., etc. Any code solution I find in SO gives me the "starting" kick, which is about 10% of the total effort of writing code from scratch. The remaining 90% (to understand the code that is being committed) cannot simply go away. I do not expect Copilot will make much of a difference.

> In terms of difficulty, writing code is maybe on average a two out of ten.

It's not about difficulty, but time. Not the same thing. Easy can still be time consuming.

Have you seen how much time the average developer spends on Stack Overflow and googling for answers?

I also fear that Copilot will be teaching anti-patterns.

Just tried something really simple: def is_palindrome

Copilot suggestion was

  def is_palindrome(word):
      if word == word[::-1]:
          return True
      return False

So, good for a technically correct solution, but still...

This is an anti-pattern, I think, in pretty much any language I know of, and something that about half of my beginning students try when they learn about branching.

UPDATE: more howlers along the same vein

  def haystack_contains_needle(haystack, needle):
      if needle in haystack:
          return True
      return False

UPDATE: The above howlers were in IntelliJ...

However in Visual Studio Code on a different computer, I got much better idiomatic suggestions.

such as

  def is_palindrome(word):
      return word == word[::-1]

Very puzzling.

It's not so much for the parts where you think hard and implement that perfect feature. It's insanely useful when you have to make sweeping, pesky changes.

Like pulling values from a config dict and initializing a bunch of attributes in a class? Or setting up a test case similar to one you already have, but with different values? Or cleaning values from a form? It's not a bulk edit, but it's also not thoughtful code-writing. It's monotonous and mundane. And lots of people do a lot of this on a daily basis.

Copilot makes this a cinch.
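For the skeptics, a hypothetical sketch of the monotonous config-to-attributes code meant above; after the first line or two of the pattern, the completion writes the rest:

```python
# Hypothetical example of the repetitive boilerplate the parent comment
# describes: every line follows the same shape, so an autocomplete that
# has seen the first few can predict the remainder.
class Service:
    def __init__(self, config: dict):
        self.host = config.get("host", "localhost")
        self.port = config.get("port", 8080)
        self.timeout = config.get("timeout", 30)
        self.retries = config.get("retries", 3)
        self.debug = config.get("debug", False)
```

None of these lines is hard to write; there are just a lot of them, which is exactly the claim being made.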

you're only thinking about the 'write a comment get a block of code' feature. it also has autocomplete/predictive functionality that speeds up coding quite a bit when it works

>GitHub Copilot is optimising the part of the process that was already the easiest, and makes the other parts harder because it moves you from the “I wrote this” path to the “somebody else wrote this” path.

It is worth mentioning, I suppose, that from Copilot's point of view it is the inverse. Maybe a necessary or at least desirable step towards the inevitable 'Copilot debugger'.

I've had a lot of fun having it generate weird web pages without me writing any code.

Stuff like:

    // Add 100 divs to the DOM in random places
    // Randomize the color and text of all divs every 1 second
Other than that novelty, it can be genuinely useful if you think about it as a more intelligent autocomplete.

> In terms of difficulty, writing code is maybe on average a two out of ten.

I’ve never heard talented programmers say that

Yes, if you use copilot irresponsibly, you will end up with irresponsible code.

maintaining code someone else wrote is much higher on your rating scale, probably near the top end, because it nearly always involves some debugging and it's usually not obvious what footguns exist

i've actually grown to like it, it seems like it's getting better with each use

Maintaining code that you wrote rises to a nine out of ten. Debugging your code is a ten out of ten or higher.

You never understood something that only came out this year??? Never???

To make it faster to pump out Go, which is maybe >50% noise with its poor language features and its anemic standard library that doesn't even have abs.
