GitHub Copilot (copilot.github.com)
2905 points by todsacerdoti 35 days ago | 1272 comments





I've been using the alpha for the past 2 weeks, and I'm blown away. Copilot guesses the exact code I want to write about one in ten times, and the rest of the time it suggests something rather good, or completely off. But when it guesses right, it feels like it's reading my mind.

It's really like pair programming, even though I'm coding alone. I have a better understanding of my own code, and I tend to give better names and descriptions to my methods. I write better code, documentation, and tests.

Copilot has made me a better programmer. No kidding. This is a huge achievement. Kudos to the GitHub Copilot team!


I've also been using the Alpha for around two weeks. I'm impressed by how GitHub Copilot seems to know exactly what I want to type next. Sometimes it even suggests code I was about to look up, such as a snippet to pick a random hex color or an array of all the common image MIME types.

Copilot is particularly helpful when working on React components, where it makes eerily accurate predictions. For many people, I see technology like Copilot becoming an indispensable part of the programmer's toolbelt, much like IDE autocomplete.

I also see it changing the way that programmers document their code. With Copilot if you write a really good descriptive comment before jumping into the implementation, it does a much better job of suggesting the right code, sometimes even writing the entire function for you.
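
For a flavor of that comment-driven workflow, here's a minimal hypothetical sketch (not actual Copilot output): you write the descriptive comment, and a tool like this proposes the function body.

    import random

    # Pick a random hex color string, e.g. "#a3f2c1".
    def random_hex_color():
        return "#{:06x}".format(random.randint(0, 0xFFFFFF))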


They finally did it. They finally found a way to make me write comments.


The real purpose of this tool.


Jesus. From a guy with your track record that means a lot.


Has anyone used Copilot with a more succinct language? It appears to only automate boilerplate and rudimentary patterns, which, while useful in repetitive, low signal-to-noise languages like React or Java, sounds less appealing if you're writing Clojure.


Or the converse?

If Copilot is as good as advertised, but only for some languages, won't it influence which languages devs or companies choose?


Indeed, I could imagine it becoming more difficult to adopt a language that doesn't already have a large corpus to train on.


Could there be a boom followed by a bust? Sometimes a greedy algorithm looks good until it doesn't. It's at least imaginable that AI coding helps you do stuff locally that eventually ties you in knots globally, because you were able to rush in without thinking things through.

(It's also conceivable I'm put out of a job, but I'm not worried yet. So far I can only imagine AI automates away the boring part, and I super-duper-want that.)


I really wonder sometimes whether Java would have made it this far if it weren't for Eclipse and, later, IntelliJ.


Well, there are few things many programmers enjoy more than automating away repetitive tasks. If not exactly IntelliJ or Eclipse, something that achieved the same end would certainly have arisen.

I'm sure there are at least a few relevant XKCD strips to insert here.


It would be interesting to see companies adopt languages that need significantly more code to reach the same result, just because some AI can automate a lot of it.

Think of generating ten times the code you need, just because you can generate it instead of writing (more performant?) code.


I've not used Copilot, but I've experimented with two other AI-driven autocompletion engines in Java and Kotlin. In both cases I uninstalled the plugins due to a combination of two problems:

1. The AI suggestions were often less helpful than the type driven IDE autocompletions (using IntelliJ).

2. The AI plugins were very aggressive in pushing their completions to the top of the suggestions list, even when they were strictly less helpful than the defaults.

The result was it actually slowed me down.

Looking at the marketing materials for these services, they're often focused on dynamic languages like Python or JavaScript, where there's far less information available for the IDE to help you with. If you've picked your language partly for its excellent IDE support, it's probably harder for the AI to compete with hand-written logic and type-system information.


I'd recommend TabNine; it's extremely helpful. I tried Kite once, and it is WAY overrated: so slow that by the time it provided suggestions, I was only a few characters away from finishing. TabNine has saved me hours.


Good luck using type-based autocomplete to write entire functions for you.


This is a good point, and it will be interesting to see if something like Copilot will get developers/companies to adopt languages that are better supported by AI.

Edit: You are honestly downvoting me for saying something that might actually happen. If Copilot lives up to the hype, but only for a limited number of languages, this could have a profound effect on what languages people decide to use in the future.


Boilerplate is the most annoying type of code to write or try to remember; having all of that automated away would be awesome.


This approach is kind of a hack, though. The proper way to automate boilerplate is better PL/library design that makes it unnecessary.


Wait until it's time to maintain all this autogenerated code. It's going to be a nightmare.


Don't worry, the most common bug fixes will become part of the suggested code, so when you start writing patches, Copilot will have great suggestions. And then when you have to fix the new bugs, same deal.


The real nightmares will revolve around changing requirements. That's where a statistical analyzer is not going to be smart enough to know what's going on, and you're going to have to understand all this autogenerated code yourself.


[flagged]


Is this a joke? The top 3 comments follow an almost identical format?


It's a feature, not a bug. Copilot also assists with HN comments!


Amazing, this is exactly what I was going to type!


I suppose you were both trained on the same data set.


Right? I'm getting heavy astroturf vibes from these repetitive, nearly perfectly phrased, corporate-sounding paragraphs of pure praise.


This is the HN copilot, which writes comments for you.

Basically, if you look at what's not yet available to the public, we have engines that can write an entire program that does what you want, along with a test suite, documentation, and thoughtful comments on all the forums about why the program is good. They could make about 100,000 programs an hour on a cluster, together with effusive praise for each, but that would make everyone suspicious.


Feross is a well-known figure and a monster talent. I seriously doubt he has sold out as a GitHub shill.


> it does a mucho mejor trabajo de sugerir the right code, sometimes it is even writing the entire function para ti.

Whether it's a joke or astroturfing, this alone makes it brilliant.


It's a direct reply to the other comment you mention and it parses like it's autogenerated but then swerves off into another language. I'm pretty sure it's a joke.


Yes it was a joke. I wrote it myself. Seems many people didn't get it. Oh well...


Does it sometimes switch into a different language mid-function?


I do it todo el tiempo


Is there a rule for banning users who sell their accounts for this type of comment? If not, there should be.


[flagged]


They were replaced by GPT too :)


More like GPT three :)


What is the licensing for code generated in this way? GPT-3 has memorized hundreds of texts verbatim and can be prompted to regurgitate that text. Has this model only been trained on code that doesn't require attribution as part of the license?


The landing page states the following, so hopefully it's not too much of an issue (though I guess some folks may find a 0.1% risk high).

> GitHub Copilot is a code synthesizer, not a search engine: the vast majority of the code that it suggests is uniquely generated and has never been seen before. We found that about 0.1% of the time, the suggestion may contain some snippets that are verbatim from the training set.


If you pulled a marble out of a bag ten times a day, with a 0.1% chance each time that it was red: after a day you'd have a 1% chance of seeing a red marble, the first week you'd have a 6.7% chance, the first month you'd have a 26% chance, and the first working year you'd have a 92.6% chance of having seen at least one red marble.

Probabilities are fun!
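
The arithmetic, for anyone who wants to check it (a minimal sketch, assuming ten independent suggestions a day and ~260 working days a year):

    # P(at least one verbatim snippet in n draws) = 1 - (1 - p)^n
    p = 0.001
    for label, n in [("day", 10), ("week", 70), ("month", 300), ("year", 2600)]:
        print(f"{label}: {100 * (1 - (1 - p) ** n):.2f}%")
    # -> day: 1.00%, week: 6.76%, month: 25.93%, year: 92.58%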


Well within the margin of fair use.


That's not how fair use works. It doesn't matter how unlikely it is: if Copilot one day decides to "suggest" a significant snippet of code from a GPLed app, you'd better be planning to GPL your project.


No, and it has been outlined in the past why that is not the case.

e.g. https://lwn.net/Articles/61292/ - though that is most likely only one opinion.

On the other hand, it would be interesting to learn what the copyright implications are of:

a) creating a utility like Copilot (it is a software program) that contains a corpus based on copyrighted material (the trained model/database), and

b) using it to create code based on that corpus, resulting in software that is itself a work under copyright.


And you would have a whole lot of blue marbles.


"I only murdered them once" isn't the best of legal defenses.


Automatic refactoring could be useful for avoiding a lot of dumb legal disputes.

I say dumb because I am, perhaps chauvinistically, assuming that no brilliant algorithmic insights will be transferred via the AI copilot, only work that might have been conceptually hard to a beginner but feels routine to a day laborer.

Then again that assumption suggests there'd be nothing to sue over.


True, but that definitely wouldn't stop Oracle from suing over it anyway. (See the rangeCheck chapter of Oracle v Google [0])

Also, Oracle v Google opens the possibility of a fair-use defense in the event that Copilot does regurgitate a small snippet of code.

[0] https://news.ycombinator.com/item?id=11722514


I'd be surprised if a company's legal department would be OK with that 0.1% risk.


Google already learned that one. "There's only a tiny chance we may be copying some public code from Oracle" may not be a good explanation there.


Life wouldn't be entertaining without the License Nazis. No code for you! (Seinfeld reference)


Did they have a license to use public source code as a data source for the data set, though?


God, I wish contracts were encoded semantically rather than as plain text. I just tried to look through GitHub's terms of service[1]. I'd search for "GitHub can <verb> with <adjective> code" if I could. Instead I'm giving up.

[1] https://docs.github.com/en/github/site-policy/github-terms-o...


A world in which all laws and contracts were required to be written in Lojban would be interesting.


That looks hard. More politically feasible might be a language I've unfortunately forgotten the name of: ordinary English but with some extra rules designed to eliminate ambiguity -- every noun has to carry a specifier like "some" or "all" or "the" or "a", etc.


Legalese might be similar to code, and there is lots of interest in making law machine readable. So don't give up; check back later.


Yes, it's public source code.


Public doesn’t mean it’s not encumbered by copyrights


Pretty much everything is trained on copyrighted content: machine translation software, TWDNE, DALL-E, and all the GPTs. Software people are bringing this up now because it's their ox being gored. It's the same as when furries got upset about This Fursona Does Not Exist.[1][2]

1. https://news.ycombinator.com/item?id=23093911

2. https://www.reddit.com/r/HobbyDrama/comments/gfam2y/furries_...


To expand on your argument, pretty much every person is trained on copyrighted content too. Doesn't make their generated content automatically subject to copyright either.


Yeah, except that Oracle and Google have way more lawyer power than furry artists.


You have no idea how much money they make. Some of them have payment plans for commissions.


This is an argument for why this is a bigger problem, not a smaller one.


If it's BSD-licensed, the encumbrance doesn't matter much.


Update: Nat Friedman answered this as part of this thread on twitter:

https://twitter.com/natfriedman/status/1409883713786241032

Basically they are building a system to find explicit copying and warn developers when the output is verbatim.


Not sure how this is handled in the US, but in Germany a few lines of code are generally not unique enough to be copyrightable.


So the corpus has been compiled under license and the derivative work is eligible for distribution?


Finally, a faster way to spread bugs than copy/paste.


You're using the word "memorized" in a very loose way.


His point still holds: GPT-3 can output large chunks of licensed code, verbatim.


How is it loose? It fits both in the colloquial sense and in the sense used in machine learning. https://bair.berkeley.edu/blog/2020/12/20/lmmem/ is a post demonstrating it.


Pack it all up, boys, programming's over. Hello, AI.

Anyone want to hire me to teach your grandma how to use the internet?


A few days back, Sam Altman tweeted this:

"Prediction: AI will cause the price of work that can happen in front of a computer to decrease much faster than the price of work that happens in the physical world. This is the opposite of what most people (including me) expected, and will have strange effects"

And I was like, yeah, I gotta start preparing for the next decade.


>Prediction: AI will cause the price of work that can happen in front of a computer to decrease much faster than the price of work that happens in the physical world.

I'm skeptical.

The envelope of "programming" will continue to shift as things get more and more complex. Your mother-in-law is not going to install Copilot and start knocking out web apps. Tools like this allow programmers to become more productive, which increases demand for the skills.


I strongly agree with you.

Reminds me of something I read that claimed when drum machines came out, the music industry thought it was the end of drummers. Until people realized that drummers tended to be the best people at programming cool beats on the drum machine.

Every single technological advancement meant to make technology more accessible and eliminate expertise has instead only redefined what expertise means. And the overall trend has been a lot more work opportunities created, not fewer.


I had a front row seat to the technological changes in the music industry. I opened a recording studio when you had to "edit" using a razor blade and tape to cut tape together. I remember my first time doing digital editing on a Macintosh IIfx. What happened to drummers is that technology advanced to the point where studio magic could make decent drummers sound great. But it's still cheaper and faster (and arguably better) to get a great session drummer who doesn't need it. Those pros are still in high demand.


Yeah, but fewer drummers are being hired than before drum machines came out. What you describe sounds like the work has become concentrated into fewer hands. Perhaps this will happen with software as well.


What happened is what typically happens: Concentration of expertise. The lower expertise jobs (just mechanically play what someone else wrote/arranged) went away and there was increased demand for higher expertise (be an actual expert in beats _and_ drum machines).

So the winners were those that adapted earlier and the losers were those that didn't/couldn't adapt.

This translates to: If you're mindlessly doing the same thing over and over again, then it's a low value prop and is at risk. But if you're solving actual problems that require thought/expertise then the value prop is high and probably going to get higher.


But there's also the subtext that if you find yourself at the lower-skill portion of your particular industry, then you should probably have a contingency plan to avoid being automated out of a job, such as retiring, learning more, or switching to an adjacent field.


Exactly, and AI only means that this adage now applies to programming as well.


But this was true anyway -- the lower your skill, the more competition you have. At the lowest skill levels, you'd better damn well have a contingency plan, because any slight downward shift in market demand is a sword coming straight for your neck.


I think you have another thing coming. Think about what really got abstracted away. The super hard parts like scaling and infrastructure (AWS), the rendering engines in React, all the networking stuff that's hidden in your server (I dare you to deal with TCP packets): that's the stuff that goes away.

We can automate the mundane, but what remains usually requires creativity, so the automated stuff becomes uninteresting in that realm. People will seek crafted experiences.


It would be funny if after the AI automates away "all the boring stuff" we're left with the godawful job of writing tests to make sure the AI got it right.


I think it'll be much more likely that the AI writes the tests (the boring stuff) for the buggy code I write.


I can see the AI suggesting and fixing syntax in the tests. Determining their semantics, not without true AGI.


I'm not sure that all of that has really gone away.

It's just concentrated into the hands of a very few super-specialists; it's much harder to get to their level, but their work is much, much more important.


True, and if the specialists retire, there may be some parts that no one understands properly anymore.

See: https://www.youtube.com/watch?v=ZSRHeXYDLko / Preventing the Collapse of Civilization / Jonathan Blow


Better yet -- the jobs of those specialists got better, and the people who would have done similar work did not end up unemployed, they just do some other kind of programming.


Do you have any actual data for that? Last I saw, most bands still use live drummers, and studio recordings for small to mid-sized bands still use actual drummers as well - unless it's a mostly studio band trying to save cost.

I think the analogy to programming is a bit more direct in this sense; most companies aren't going to go with something like Copilot unless it's supplemental or they're on an entirely shoestring budget. It'll be the bigger companies wanting to squeeze out that extra 10% productivity that bet hard on this - the same way larger bands would do it to get an extremely clean studio track for an album.


Source? I would actually expect there to be around the same amount of drummers, but more people making music.


Based on these very unreliable sources, the number of drummers in the US may have increased from ~1 million in 2006 to ~2.5 million in 2018. That's during a time when the population increased from 298 million to 327 million.

So, during this period, a ~10% increase in population saw a ~150% increase in drummers.

It does not appear that the drum machine killed the drummer.

Big caveats about what these surveys defined as "drummer" and that this doesn't reflect professional drummer gigs, just the number of drummers.

[1] https://m.facebook.com/Bumwrapdrums/posts/how-many-drummers-...

[2] https://www.quora.com/How-many-people-play-drums-in-the-US


Are we in a drummer bubble?


If you could get by with a drum machine, did you really need a real drummer in the first place? Maybe a lot of drummers were used for lack of any automated alternative in the early days?

By the same line of thinking, if you can get by with AI-generated code, did you really require a seasoned, experienced developer in the first place? If your product/company/service can get by with copy pasta to run your CRUD app (which has been happening for some time now, sans the AI aspect), did you ever really need a high-end dev?

I think it's like anything else: 80% is easy and 20% is not. AI will handle the 80% with increasing effectiveness, but the 20% will remain the domain of humans for the foreseeable future.

Worth considering maybe.


The counterargument comes from photography. Twenty years ago digital photography was barely a thing, and if you were a photographer it was a lot easier to make a living.

Nowadays everyone can make professional-looking photos, so the demand for photographers has shrunk as the supply has increased.


Drummers do tend to be good drum programmers, but I believe they're a small fraction of the pool of pros. The drum machine made percussion feasible for lots of people living in dense housing for whom real drums were out of the question. (Also drummers tend to dislike using drum machines, because the real thing is more expressive.)

AI will be similar -- it will not just give more tools to people already in a given field (programming, writing, whatever), but also bring new people in, and also create new fields. (I personally can't wait for the gardening of AI art to catch on. It'll be so weird[1].)

[1] https://www.youtube.com/watch?v=MwtVkPKx3RA


Those drum machines didn't use machine learning though.


Exactly. I'm currently reading The Mythical Man-Month. 90% of what the book discusses in terms of programming work that actually has to be done is completely irrelevant today. Still, the software industry is bigger than ever. The book also mentions that programmers spend about 50% of their time on non-programming tasks. In my experience this is still true today. So no matter the tools we've got, the profession has stayed essentially the same since the early '70s.


What are the notable books nowadays? It seems all the books I can cite are from 2005-2010 (Clean Code, JCIP, even The Lean Startup or Tribal Leadership…), but did the market for legendary books vanish in favor of YouTube tutorials? I'm running out of materials I can give to my interns to gobble knowledge into them in bulk.


[prioritization] The Effective Engineer - Lau

[systems] Designing Data-Intensive Applications - Kleppmann

[programming] SICP - Abelson & Sussman

The last one is an old Scheme book. No other book (that I've read) can even hold a candle to this one in terms of actually developing my thought process around abstraction & composition of ideas in code - things that library authors often need to deal with.

For example, in React: what are the right concepts that are powerful enough to represent a dynamic website, and how should they compose together?


I have the same question when it comes to a modern version of the Mythical Man Month. I know some computer history, so I can understand most examples. But still it would be great to have a comparable modern book.


The amount of "programming time" I spend actually writing code is also quite low compared to the time I spend figuring out what needs to be done, the best way to do it, how it fits together with other things, the best way to present it or make it available to the user, etc. I.e., most of my time is still spent figuring out and refining requirements, architecture, and interfaces.


Same, the actual solution is pretty darn easy once I know what needs to be done.


Tools like email, instant messenger, and online calendars made secretaries much more productive which increased demand for the skills. Wait...

Replacement of programmers will follow these lines. New tools like Copilot (haven't tried it, but will soon), new languages, libraries, better IDEs, Stack Overflow, Google, etc. will make programming easier and more productive. One programmer will do the work that ten did. That a hundred did. You'll learn to become an effective programmer from a bootcamp (already possible - I know someone who went from bootcamp to Google), then from a few tutorials.

Just like the secretary's role in the office was replaced by everyone managing their own calendars and communications, the programmer will be replaced by one or two tremendously productive folks and your average business person being able to generate enough code to get the job done.


Secretaries became admin assistants who are much more productive and valuable since they switched their job to things like helping with the content of written communications, preparing presentations, and managing relationships. I saw my mother go through this transition and it wasn't really rough (though she's a smart and hard-working person).


> Secretaries became admin assistants

That doesn't mean anything. The last 20 years have seen an absurd race toward ever more inflated job titles, making people feel they are "executive assistants" instead of secretaries, "vice presidents" instead of whatever managerial role, etc.


But secretaries are still a thing; they are just usually shared by a whole team or department these days.


I had a corporate gig as a coder reporting to at most three economists at any one time. I spent at least two hours of every day getting them to explain what they wanted, explaining the implications of what they were asking for, explaining the results of the code to them, etc. So even if I didn't need to code at all my efficiency would have expanded by at most a factor of 4.


The future as I see it is that coding will become a relatively trivial skill. The economists would each know how to code and that would remove you from the equation. They would implement whatever thing they were trying to do themselves.

This would scale to support any number of economists. This would also be a simpler model and that simplicity might lead to a better product. In your model, the economists must explain to you, then you must write the code. That adds a layer where errors could happen - you misunderstand the economists or they explain poorly or you forget or whatever. If the economists could implement things themselves - less room for "telephone" type errors. This would also allow the economists to prototype, experiment, and iterate faster.


That game of telephone is certainly an enormous pain point, and I can imagine a future where I'm out of a job -- but it's extremely hard for me to see them learning to code.


And if what the economists needed to do could be programmed trivially with the help of AI then their job is probably also replaceable by AI.


That would be harder. I shuffled data into a new form; they wrote papers. All I had to understand was what they wanted, and how to get there; they had to understand enough of the world to argue that a given natural experiment showed drug X was an effective treatment for condition Y.


That was the point, I think.


>Tools like email, instant messenger, and online calendars made secretaries much more productive which increased demand for the skills. Wait...

There are more "secretaries" than ever, and they get to do far more productive things than delivering phone messages.


It may reduce the demand for the rank-and-file grunts, though.

Why would an architect bother with sending some work overseas if tools like this would enable them to crank out the code faster than it would take to do a code review?


I think the thought process is from the perspective of the employer, if you assume these two statements are true:

1) AI tools increase developer productivity, allowing projects to get completed faster; and

2) AI tools offset a nonzero amount of skill prerequisites, allowing developers to write "better" code, regardless of their skill level

With those in mind, it seems reasonable to conclude that the price to e.g. build an app or website will decrease, because it'll require fewer man-hours to completion and/or less skill from the hired developers doing said work.

You do make a good point that "building an app" or "building a website" will likely shift in meaning to something more complex, wherein we get "better" outputs for the same amount of work/price though.


Now replace "AI" in your points 1 & 2 with "GitHub" (and the trend of open-sourcing libraries, making them available to all). Everything you said still holds, and it did not harm programmer jobs in any way (quite the opposite).

And actually, I really don't see AI in the next decade making more of a difference than GitHub did (making thousands of man-hours of work available for free). Around 2040 or 2050, maybe. But not soon; AI is still really far off.


>that the price to e.g. build an app or website will decrease

Yes, and this in turn increases demand as more people/companies/etc.. can afford it.


If there are diminishing returns to expanding a given piece of software, an employer could end up making a better product, but not that much better, and employing fewer resources (esp. people) to do so.

And even that could still be fine for programmers, as other firms will be enticed into buying the creation of software -- firms that didn't want to build software when programming was less efficient/more expensive.


Decreasing the price of programming work doesn't necessarily mean decreasing the wages of programmers, any more than decreasing the price of food implies decreasing the wages of farmers.

But on the other hand, it also can mean that.


here's the difference: the farmers are large industrial companies that lobby the government for subsidies, such that they can continue to produce food that can, by law, never be eaten.

programmers, on the other hand, are wage laborers, individually selling their labor to employers who profit by paying them less.

Industry is sitting on the opposite side of the equation here. I wonder what will replace "learn to code". Whatever it is, the irony will be almost as rich as the businesses that profit from all this.


It's hard to use history to predict the implications of weird new technologies. But history is clear on at least two happy facts. Technology generates new jobs -- no technology has led to widespread unemployment. (True general AI could be different; niche "AI" like this probably won't be.) Technology raises standards of living generally, and making a specific sector of the economy more productive increases the wealth accrued to that sector (although it can change who is in it too).

There are exceptions -- fancy weapons don't widely raise standards of living -- but the trends are strong.


What does unemployment have to do with it? If you're 22, paying off loans, and suddenly find yourself unable to work as anything but a barista, continued employment isn't exactly a perk. Meanwhile, the benefits of the increased productivity brought by new technology do accrue somewhere - it's just not with workers. The trends on that are also quite clear: productivity growth has been high over the past 40 years, but real wages have at best been stagnant. And on the wheel turns, finding souls to grind beneath its weight, anew.


On your second point I agree -- the distribution within a sector matters, and most of them at the moment are disturbingly top-heavy.

On the first though, we have little reason to think tech will systematically diminish the roles people can fill. In the broad, the opposite has tended to happen throughout history -- although the narrow exceptions, like factory workers losing their jobs to robots, are real, and deserve more of a response than almost every government in the world has provided. For political stability, let alone justice.


For every programmer who can think and reason about what they are doing, there are at least 10 who just went to a bootcamp and are not afraid to copy and paste random stuff they do not understand and cannot explain.

Initially, they will appear more productive with Copilot. Businesses will decide they do not need anybody other than those who want to work with Copilot. This will lead to adverse selection on the quality of programmers who interact with Copilot ... especially those who cannot judge the quality of the suggested code.

That can lead to various outcomes, but it is difficult to envision them being uniformly good.


>Tools like this allow programmers to become more productive, which increases demand for the skills.

Well, like everything in life, I guess it depends? The only iron rule I can always safely assume is supply and demand.

But for programming, especially on the web, it seems everyone has a tendency to make things more difficult than they should be, and that inherent system complexity isn't going to be solved by ML.

So in terms of squeezing out absolute efficiency from system cost, I think we have a very very long way to go.


This is just shortening the time it takes developers to Google it, search through Stack Overflow, and then cut, paste, and modify the code they are looking for. It will definitely speed up development time. Sounds like a nice quality-of-life improvement for developers. I don't think it will cause the price of work to decrease; if anything, a developer utilizing Copilot efficiently should be paid more.


You don't know my mother-in-law.


I think this will result in classic Jevons paradox: https://en.wikipedia.org/wiki/Jevons_paradox . As the price of writing any individual function/feature goes down, the demand for software will go up exponentially. Think of how many smallish projects are just never started these days because "software engineers are too expensive".

I don't think software engineers will get much cheaper, they'll just do a lot more.


I'm guessing low-expertise programmers whose main contribution was Googling Stack Overflow will get less valuable, while high-expertise programmers with real design skill will become even more valuable.


I'm both of those things, what happens to my value?


Your legs will have to move faster than your arms.


Sonic the Hedgehog's employment prospects are looking up.


It goes up/down


Googling Stack Overflow itself can sometimes be a high-expertise skill, simply because sometimes you need a fairly good understanding of your issue to figure out what to search for. A recent example: we had an nginx proxy set up to cache API POST requests (don't worry - they were idempotent, but too big for a query string), and nginx sometimes returned the wrong response. I'm pretty sure I found most of the explanation on Stack Overflow, but I didn't find a question that directly addressed the issue, so Googling was a challenge. You can keep your job finding answers on Stack Overflow if you are good at it.


Unfortunately, companies don't make interviewing for real design skills a priority. You'll get weeded out because you forgot how to do topological sort.


Hopefully tools like this will finally persuade companies that being able to do LeetCode from memory is not a skill they need.


Certainly, but I would argue that higher expertise isn't a requirement for most dev jobs; if you are developing custom algorithms and advanced data structures, you are probably on the fringe of what the dev world does.

Otherwise, I am struggling to explain why there is such great demand for devs that short courses (3-6 months) are successful, the same courses that fail at teaching the fundamentals of computing.


Guessing that if all you have to do is keep your metrics green, they are not selecting for the skills they are educating for.


With AI now they are on your level. It equalizes.


> Think of how many smallish projects are just never started these days because "software engineers are too expensive".

Maybe many. If the cost/benefit equation doesn't work, it makes no sense to do the project.

> I don't think software engineers will get much cheaper, they'll just do a lot more.

If they do more for the same cost, they are cheaper. You as a developer will be earning less in relation to the value you create.


> If they do more for the same cost, they are cheaper. You as a developer will be earning less in relation to the value you create.

Welcome to the definition of productivity increases, which is the only way an economy can increase standard of living without inflation.


Inflation and productivity might be correlated but neither is a function of the other. Given any hypothetical world where increased productivity leads to inflation, there's a corresponding world equal in all respects except that the money supply shrinks enough to offset that inflation.


> You as a developer will be earning less in relation to the value you create.

Doesn't matter as long as I create 5x value and earn 2x for it. I still am earning double within the same time and effort.


Oh, now I see! This is how we will enter a new era of 'code bloat' - Moore's law applied to software - where lines of code double every 18 months.


We went through the same hype cycle with self-driving cars. We are now ~15 years out from the DARPA challenges, and to date exactly 0 drivers have been replaced by AI.

It is certainly impressive to see how much the GPT models have improved. But the devil is in the last 10%. If you can create an AI that writes perfectly functional Python code, but that same AI does not know how to upgrade an EC2 instance when the application starts hitting memory limits, then you haven't really replaced engineers; you have just given them more time to browse Hacker News.


Driving is qualitatively different from coding: an AI that's pretty good but messes up sometimes is vastly more useful for coding than for driving. In neither case can you let the AI "drive", but that's ok in coding as software engineering is already set up for that. Testing, pair programming and code reviews are popular ways to productively collaborate with junior developers.

You're not replacing the engineer, but you're giving every engineer a tireless companion typing suggestions faster than you ever could, to be filled in when you feel it's going to add value. My experience with the alpha was eye opening: this was the first time I've interacted with an AI and felt like its not just a toy, but actually contributing.


Writing code is by far the easiest part of my job. I certainly welcome any tools that will increase my productivity in that domain, but until an AI can figure out how to fix obscure, intermittent, and/or silent bugs that occur somewhere in a series of daisy-chained pipelines running on a stack of a half-dozen services/applications, I am not going to get too worked up about it.


I agree. It kind of amazes me, though, that there is so much room for obscurity. I would have expected standardisation to have dealt with this a long time ago. Why are problems not more isolated and manageable in general?


It's much harder to reason about the global emergent behavior of a complex system than about the isolated behavior of a small component.


I don't think it's a function of complexity per se, but determinism. This is why Haskellers love the IO monad. Used well, it lets you quarantine IO to a thin top-level layer, below which all functions are pure and easy to unit test.
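
The same quarantining works outside Haskell, too. A minimal sketch of the "pure core, thin IO shell" idea in Python (hypothetical example; the point is that only the outermost layer touches the world):

    # Pure core: deterministic, trivial to unit test.
    def summarize(lines):
        total = sum(int(line) for line in lines)
        return f"sum = {total}"

    # Thin imperative shell: all IO is quarantined here.
    def main(path):
        with open(path) as f:
            print(summarize(f.read().splitlines()))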


Distributed systems are anything but deterministic.


What is your definition of "replace"? Waymo operates a driverless taxi service in Phoenix. Sign ups are open to the general public. IMO this counts as replacing some drivers as there is less demand for taxi service in the operating area.

https://blog.waymo.com/2020/10/waymo-is-opening-its-fully-dr...


According to this article[1], the number of ride hailing drivers has tripled in the last decade.

I think full self driving is possible in the future, but it will likely require investments in infrastructure (smarter and safer roads), regulatory changes, and more technological progress. But for the last decade or so, we had "thought leaders" and VCs going on and on about how AI was going to put millions of drivers out of work in the next decade. I think it is safe to say that we are at least another decade away from that outcome, probably longer.

[1] https://finance.yahoo.com/news/number-american-taxi-drivers-...


Self-driving is used in the mining industry, and lots of highly paid drivers have been replaced.

But you are clearly more knowledgeable, with your "0 drivers replaced" comment.


Mining as in those big trucks or mining as in trains on tracks?



Excellent, if dystopian, article. Thank you for sharing!


> AI does not know how to upgrade an EC2 instance when the application starts hitting memory limits

That's exactly the kind of thing "serverless" hosting has done for a while now.


Yeah really bad example there.


Ahh yes, the serverless revolution! I was told that serverless was going to make my job obsolete as well. Still waiting for that one to pan out. Not going to hold my breath.


This isn't self driving for programming, its more like GPS and lane assist.


15 years is no time at all.


I am blown away, but not scared for my job... yet. I suspect the AI is only as good as the training examples from GitHub. If so, then this AI will never generate novel algorithms. The AI is simply performing some really amazing pattern matching to suggest code based on other pieces of code.

But over the coming decades AI could dominate coding. I now believe in my lifetime it will be possible for an AI to win almost all coding competitions!


I guess it's worth pointing out that the human brain is just an amazing pattern matcher.

They feed you all these algorithms in college and your brain suggests new algorithms based on those patterns.


Humans are more than pattern matchers because we do not passively receive and imitate information. We learn cause and effect by perturbing our environment, which is not possible by passively examining data.

An AI agent can interact with an environment and learn from its environment by reinforcement learning. It is important to remember that pattern matching is different from higher forms of learning, like reinforcement learning.

To summarize, I think there are real limitations with this AI, but these limitations are solvable problems, and I anticipate significant future progress.


Fortunately, the environment for a coding AI is a compiler and a CPU, which are much faster and cheaper than physical robots and don't require humans for evaluation like dialogue agents and GANs do.


Well, you still have to assess validity and code quality, which is a difficult task, but not an unsolvable one.

Also, the original formulation of Generative Adversarial Networks pits neural networks against each other to train them; they don't need human intervention.


> They feed you all these algorithms in college and your brain suggests new algorithms based on those patterns.

Some come from the other end of the process.

I want to solve that problem -> Functionally, it'd mean this and that -> How would it work? -> What algorithms / patterns are out there that could help?

Usually people with less formal education and more hands-on experience, I'd wager.

They're also more prone to end up reinventing the wheel and to spend more time searching for solutions.


What?


Most people I know who've been to college, or otherwise educated in the area they work in, tend to solve problems using what they know (not implying it's a hard limit; just an apparently widespread first instinct).

Which fits the pattern matching described by the grandparent.

A few people I know, most of whom haven't been to college, or done much formal learning at all, but are used to working outside of what they know (that's an important part), tend to solve problems with things they didn't know at the time they set out to solve said problems.

Which doesn’t really fit the pattern matching mentioned by the grandparent. At least not in the way it was meant.


Reducing intelligence to pattern matching raises the question: how do you know which patterns to match against which? By some magic we can answer questions of why, of what something means. Purpose and meaning might be slippery things to pin down, but they are real, we navigate them (usually) effortlessly, and we still have no idea how to even begin to get AI to do those things.


I think those are the distance metrics, which are what produce inductive bias, which is the core essence of what we consider 'intelligence'. Consider a more complicated metric, like a graph distance with a bit of interesting topology. That metric is the unit by which the feature space is uniformly reduced. Things which are not linearized by the metric are considered noise, so this forms a heuristic that overlooks features which may in reality have been salient. This makes it an inductive bias.

(Some call me heterodox; I prefer 'original thinker'.)


To generate new, generally useful algorithms, we need a different type of "AI", i.e. one that combines learning and formal verification. Because algorithm design is a cycle: come up with an algorithm, prove what it can or can't do, and repeat until you are happy with the formal properties. Software can help, but we can't automate the math, yet.


I see a different path forward based on the success of AlphaGo.

This looks like a clever example of supervised learning. But supervised learning doesn't get you cause and effect, it is just pattern matching.

To get at cause and effect, you need reinforcement learning, like AlphaGo. You can imagine an AI writing code that is then scored on performing correctly. Over time the AI will learn to write code that performs as intended. I think coding can be used as a "playground" for AI to rapidly improve itself, like how AlphaGo could play Go over and over again.


Imparting a sense of objective to the AI is surely important, and an architecture like AlphaGo might be useful for the general problem of helping a coder. I'm not seeing it, however, for this particular autocomplete-flavored idiom.

AlphaGo learns a game with fixed, well-defined, measurable objectives, by trying it a bazillion times. In this autocomplete idiom the AI's objective is constantly shifting, and conveyed by extremely partial information.

But you could imagine a different arrangement, where the coder expresses the problem in a more structured way -- hopefully involving dependent types, probably involving tests. That deeper encoding would enable a deeper AI understanding (if I can responsibly use that word). The human-provided spec would have to be extremely good, because AlphaGo needs to run a bazillion times, so you can't go the autocomplete route of expecting the human to actually read the code and determine what works.


> we can't automate the math, yet

This exists: https://en.wikipedia.org/wiki/Automated_theorem_proving


This is more automation-assisted theorem proving. It takes a lot of human work to get a problem to the point where automation can be useful.

It's like saying that calculators can solve complex math problems; it's true in a sense, but not strictly true. We solve the complex math problems using calculators.


and there's already GPT-f [0], which is a GPT-based automated theorem prover for the Metamath language, which apparently submitted novel short proofs which were accepted into Metamath's archive.

I would very much like GPT-f for something like SMT; then it could actually make Dafny efficient to check (and probably avoid needing to help it out when it gets stuck!).

0. https://analyticsindiamag.com/what-is-gpt-f/


Someone tell Gödel


You mean like AlphaGo where the neural net is combined with MCTS?


> If so, then this AI will never generate novel algorithms.

This is true, but most programmers don't need to generate novel algorithms themselves anyway.


> I now believe in my lifetime it will be possible for an AI to win almost all coding competitions!

Then we shall have reached the singularity.


We will only reach a singularity with respect to coding. There are many important problems beyond computer coding, like engineering and biology and so on.


Coding isn't chess-playing; it's likely about as general as math or thinking. If you can write novel code, you can ultimately do biology or engineering or anything else.


Reading this thread, it seems to me that AI is a threat to "boilerplate-heavy" programming like website frontends; I can't really imagine pre-singularity AI being able to replace a programmer in the general case.

Helping devs get through "boring", repetitive code faster seems like a good way to increase our productivity and make us more valuable, not less.

Sure, if AI evolves to the point where it reaches human-level coding abilities, we're in trouble, but in that case it's going to revolutionize humanity as a whole (for better or worse), not merely our little niche.


C’mon guys, your standard backend schema with endpoints is like way easier to automate away.


I mean, we already have? Use Django Rest Framework and simple business models and you're pretty much declaratively writing API endpoints by composing behavior. Almost nothing in a DRF endpoint definition is boilerplate.

The hard part has always been writing an API that models external behavior correctly.
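
For anyone who hasn't seen DRF, a rough sketch of that declarative style (the Order model and its fields are hypothetical; the serializer, viewset, and router are standard DRF pieces):

    from rest_framework import routers, serializers, viewsets
    from myapp.models import Order  # hypothetical business model

    class OrderSerializer(serializers.ModelSerializer):
        class Meta:
            model = Order
            fields = ["id", "customer", "total"]

    # Full CRUD endpoints by composing default behavior.
    class OrderViewSet(viewsets.ModelViewSet):
        queryset = Order.objects.all()
        serializer_class = OrderSerializer

    router = routers.DefaultRouter()
    router.register("orders", OrderViewSet)
    urlpatterns = router.urls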


Generating the code with help from AI: 25 cents

Knowing what code to generate with the AI: 200k/yr


Will this tend to blur the distinction between coder and manager? In the end a manager is just a coder who commands more resources, and relies more on natural language to do it.

Or maybe I'm thinking of tech leads. I don't know, my org is flat.


The issue is not writing code. It's changing, evolving, or maintaining it.

This is the problem with things like spreadsheets, drag-and-drop programming, and code generators.

It's not easy to tell a program what to change and where to change it.


I feel like you might be moving the goalposts. Maybe they're different problems, but it's not at all clear to me that mutation is harder than creation.


"Prediction: AI will cause the price of work that can happen in front of a computer to decrease much faster than the price of work that happens in the physical world. This is the opposite of what most people (including me) expected, and will have strange effects"

I've been saying something like that for a while, but my form was "If everything you do goes in and out over a wire, you can be replaced." By a computer, a computer with AI, or some kind of outsourcing.

A question I've been asking for a few years, pre-pandemic, is, when do we reach "peak office"? Post-pandemic, we probably already have. This has huge implications for commercial real estate, and, indeed, cities.


I just don't believe it. Having experienced terrible cheap outsourced support and things like Microsoft's troubleshooting assistant (also terrible), I'm willing to pay for quality human professionals. They have a long way to go before I change my mind.


Huh... I've found that people who tend to describe their occupation as "knowledge work" are the most blind to the fact that white-collar jobs are the first to get optimized away. Lawyers are going to have a really bad time after somebody manages to bridge NLP and formal logic. No, it won't result in a Lawyerbot-2000 - it will result in software that enables lawyers to do orders of magnitude more work of higher quality. What do you think that does to a labor market? It shrinks it. That, or people fill the labor glut with new, cheaper lawsuits...


I don't think that people will ever fully trust an AI lawyer, given all the possible legal consequences of a misunderstanding between the AI and the client. You could literally go to jail because of a bug or a misunderstanding due to an ambiguous term (this might make a good sci-fi story ...).

But yes, getting some kind of legal opinion will probably be cheaper with an AI.


Nor I, which is why I said so. A SaaS will pop up called "Co-chair" and that'll be that. It would definitely be a lot easier to trust than any of the black box neural networks we are all familiar with - as the field of formal logic is thousands of years old and pretty thoroughly explored. I used a SAT solver just last night to generate an exhaustive list of input values that result in a specific failure mode for some code I'm reverse engineering - I have no doubts about the answer the SAT solver provided. That definitely isn't the case with NN based solutions - which I trust to classify cat photos, but not much else.
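
For the curious, exhaustive enumeration is just "solve, block the model you got, repeat." A sketch using Z3's Python bindings (an SMT solver standing in for whatever SAT tooling was actually used), with a made-up constraint in place of the real failure condition:

    from z3 import BitVec, Solver, sat

    x = BitVec("x", 8)       # an 8-bit input
    s = Solver()
    s.add(x & 0x0F == 3)     # stand-in for the real failure condition

    while s.check() == sat:  # enumerate every satisfying input
        m = s.model()
        print(m[x])
        s.add(x != m[x])     # block this model, ask for the next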


Legal discovery and other "menial" law tasks are already quite automated.


I wouldn't describe keyword search engines or cross-reference managers as "quite automated" - so I would expect little market change from whatever LexisNexis is currently selling.


I would -- I remember my mom as a lawyer having to schlep to the UCLA law library to photocopy stuff -- but current legal automation includes NLP at the level of individual clauses.

https://www.suls.org.au/citations-blog/2020/9/25/natural-lan...


Oof. As somebody who has studied the AI winter, that article hurt, suggesting that an unsupervised NN-centric approach is going to lead somewhere other than tool-assist... it's the 1970s all over again.

> I would

Well, you're going to have a problem describing actual automation when you encounter it. What would you call it when NLP results are fed into an inference engine that then actually executes actions, instead of just providing summarized search results? Super-duper automation?



I kinda believe this, but I still think it hugely depends on what you're doing in front of a computer. If you're just a generic developer who gets a task and codes it to spec, then you can probably be replaced by AI in a few years.

But I don't think AI will become capable of complex thought in the next one or two decades, so if you're training to be a software architect, project manager, or data analyst, I think you should be safe for some time.


People have been saying AI would end everything and anything since I was a wee baby. It still hasn't happened. How about, instead of making the same old tired, boring predictions about the impending apocalypse of things we love, we start making and listening to predictions that actually talk about how life has been observed to progress? It's not impossible; science-fiction authors get it right occasionally.


As it stands, this looks like it will actually increase the productivity of existing programmers more than it will result in "now everyone is a programmer".

Over time it will certainly do more, but it's probably quite a long time before it can be completely unsupervised, and in the meantime it's increasing the output of programmers.


Honestly, the only reason I'm still doing work in front of a computer is that it pays well. I'm really starting to think I should have followed my gut instincts when I was 17 and gone to trade school to become an electrician or a carpenter...


True self-improving general AI would put all labor out of business. Incomes would then be determined by only the ownership of capital and government redistribution. Could be heaven, could be hell.


I’m not sure why it’s unexpected when it’s essentially a reframing of Baumol's cost disease. Any work that does not see a productivity increase becomes comparatively more expensive over time.


So either his prediction or expectation will be correct.

I think I’ll side with his expectation, but then again, my salary depends on it.


I wonder whether Sam Altman also believes that you can measure programmer productivity by lines of code?


Don't hold your breath. (Coming soon right after the flying self-driving car.)


Automation has always produced an increase in jobs so far, though sometimes in a disruptive way. I consider this like the switch from instruction-level programming to compiled languages: a level of abstraction added that buys a large increase in productivity and makes projects affordable that weren't affordable before. If anything, this will probably lead to a boom in development work. But there's a bunch of low-skill programmers who can't do much more than follow project templates and copy-paste things. Those people will have to level up or get out.


I feel like the inevitable path will be:

1) AI makes really good code completion to make juniors way more productive. Senior devs benefit as well.

2) AI gets so good that it becomes increasingly hard to get a job as a junior--you just need senior devs to supervise the AI. This creates a talent pipeline shortage and screws over generations that want to become devs, but we find ways to deal with it.

3) Another major advance hits and AI becomes so good that the long promised "no code" future comes within reach. The line between BA and programmer blurs until everyone's basically a BA, telling the computer what kind of code it wants.

The thing though that many fail to recognize about technology is that while advances like this happen, sometimes technology seems to stall for DECADES. (E.g. the AI winter happened, but we're finally out of it.)


I could also see an alternative to #2 where it becomes increasingly hard to get a job as a senior dev when companies can just hire juniors to produce probably-good code and slightly more QA to ensure correctness.

You'd definitely still need some seniors in this scenario, but it feels possible that tooling like this might reduce their value-per-cost (and have the opposite effect on a larger pool of juniors).

As another comment said here, "if you can generate great python code but can't upgrade the EC2 instance when it runs out of memory, you haven't replaced developers; you've just freed up more of their time" (paraphrased).


No, programmers won't be replaced, we'll just add this to our toolbox. Every time our productivity increased we found new ways to spend it. There's no limit to our wants.


The famous 10-hour work week, right? I am orders of magnitude more productive than my peers 50 years ago with respect to programming scope and complexity, yet we work the same 40-hour week. I just produce more (sometimes buggy) code/products.


I live in a foreign country and study the language here. I frequently use machine translation to validate my own translations, read menus with the Google Translate augmented-reality camera, and chat with friends when I'm too busy to manually look up the words I don't understand in a dictionary. What I have learned is that machine translations are extremely helpful in a pinch, but often a tiny adjustment in syntax, an added adjective, or some other minor edit will produce a sentence in English with an entirely different meaning.

For context-specific questions it's even worse. The other day a shop owner who sells coffee beans insisted that we try conversing through Google Translate. I was trying to find the specific terms for the natural, honey, and washed processes. My Chinese is okay, but there's no way to know vocab like that unless you specifically look it up and learn it. Anyway, I felt pressured to go through with the Google Translate charade even though I knew how the conversation would go. I said I wanted to know if this coffee was natural process. His reply was 'of course all of our coffees are natural, with no added chemicals!' Turns out the word is 日曬, sun-exposed. AI is no replacement for learning the language.

State of the art image classification still classifies black people as gorillas [1].

I rue the day we end up with AI-generated operating systems that no one really understands how or why they do what they do, but when it gives you a weird result, you just jiggle a few things and let it try again. To me, that sounds like stage 4) in your list. We have black box devices that usually do what we want, but are completely opaque, may replicate glitchy or biased behaviors that it was trained on, and when it goes wrong it will be infuriating. But the 90% of the time that it works will be enough cost savings that it will become ubiquitous.

[1]: https://www.theverge.com/2018/1/12/16882408/google-racist-go...


> The other day a shop owner who sells coffee beans insisted that we try conversing through Google Translate. [...] I said I wanted to know if this coffee was natural process. His reply was 'of course all of our coffees are natural, with no added chemicals!' Turns out the word is 日曬, sun-exposed.

Does "natural process" have a Wikipedia page? I've found that for many concepts (especially multi-word ones), where the corresponding name in the other language isn't necessarily a literal translation of the word(s), the best way to find the actual correct term is to look it up on Wikipedia, then see if there is a link under "Other languages".


Looks like in this case it's only part of a Wikipedia page[0], and the Chinese edition is just a stub. But your suggestion is absolutely spot-on. One of the things I love about Wikipedia is that it's human-curated for human evaluation, not a "knowledge engine" that produces wonky results.

[0]https://en.wikipedia.org/wiki/Coffee_production#Dry_process


I feel like you're neglecting to mention all the people who need to build and maintain this AI. Cookie-cutter business logic will no longer need programmers, but there will be more highly skilled jobs to keep building and improving the AI.


AI will keep building and improving the AI, of course!


But you need orders of magnitude fewer people to build and maintain the AIs than you do to manually create all the software running the world. And this is the unique peril of AI: its capabilities promise to grow faster than new classes of jobs can be created.


Telling the computer what you want IS programming...

When a new language / framework / library comes around, GitHub Copilot won't have any suggestions when you write in it.


> Automation has always produced an increase in jobs so far

Do you have a source for this re the last 20 years? It seems to me automation has been shifting the demand recently towards more skilled cognitive work.


I think you are confusing correlation and causation. Automation doesn't produce jobs; more people, and more income for those people, produce jobs, because more people means more demand.


Yes, but AI isn't the same as automation.

Automation is a force multiplier. AI is a cheaper way of doing what humans do.

And the AI doesn't even need to be "true" AI. It simply needs to be able to do stuff better than what humans do.


> AI is a cheaper way of doing what humans do.

Like protein folding? /s


>Pack it all up, boys, programming's over. Hello, AI.

I don't know; cranking out a suggestion for a function is not the same as writing a complete module / application.

Take the job of a translator: you would think the job would go extinct with all the advances in machine translation, yet according to [1], 'employment of interpreters and translators is projected to grow 20 percent from 2019 to 2029, much faster than the average for all occupations'. You still need a human being to clear up all of the ambiguities of language.

Maybe the focus of the stuff we do will change, though; on the other hand, we do tend to get a lot of change in programming; it goes with the job. Maybe we will get to do more code review of what was cranked out by some model.

However, within a decade it might be harder to get an entry-level job as a programmer. I am not quite sure if I should suggest my profession to my kids; we might get a more competitive environment in the not-so-distant future.

[1] https://www.bls.gov/ooh/media-and-communication/interpreters...


> Anyone want to hire me to teach your grandma how to use the internet?

Only once, to train a model for that.


The next skill will be perfectly casting the right spell to make the AI spit out the product as spec'd.

Repurposed Google-fu. We'll always have jobs :)
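
For instance, today that "spell" is often just a precise comment above an empty function. A hypothetical Python prompt (the function name and spec are invented for illustration):

  # Return the n most common words in `text`, ignoring case and punctuation,
  # as (word, count) tuples sorted by descending count.
  def most_common_words(text, n=10):
      ...  # the body is what you'd expect the AI to fill in

The more precise the comment, the better the chance the generated body matches the spec.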


Nah, we will all just be low-code function couplers instead of coders...


On the contrary: this tool increases programmer productivity, hence you will get more salary, not less.

Your assumption is that programming demand is finite AND that all programmers are equal; both of those are false.

I must also say that actual coding is only around 10-15% of a programmer's job, so at best the tool will make you around 10-15% more productive overall.


With VSCode, GitHub, and perhaps a little help from OpenAI, Microsoft is poised to dominate the developer productivity tools market in the near future.

I wouldn't be surprised to see really good static analysis and automated code review tools coming out of these teams very soon.


And still Windows is a mess.


Windows is a mess and I hope it will stay that way.

The real strength of Windows is backwards compatibility, especially with proprietary software. Its messiness is a relic of the way things were done in the past; it is also the reason why my 20+ year old binary still runs. And it does so without containers or full VMs.

I much prefer developing on Linux (I still miss the Visual Studio debugger), but different platform, different philosophy.

Note: I don't know much about mainframes. I heard they really are the champions of backwards compatibility. But I don't think the same principles are applicable to consumer machines.


Your 20 year old statically compiled binary still works on Linux, probably.


I've been on both Windows and Ubuntu for a while. I'd say Ubuntu has a ton more issues and requires a ton more initial configuration to behave "normally".

I don't even remember the last time Windows got in my way, in fact.


I guess the difference is that you can put in a weekend of effort on an Arch Linux installation and get a machine tailored to your workflow, few bugs, fast boot times, easy to maintain for the future, etc.

But no matter how much work you put into your Windows install it will be just as slow/fast, uncustomizable, and unoptimizable as it was out of the box.


I'd bet money that the VSCode and Windows teams are basically on different planets at Microsoft.


I bet there are people that use Windows to develop VSCode and use VSCode to develop Windows, so some people probably know each other internally. I think what escapes HN is how massively successful Microsoft is. Sure, the search built into Windows sucks. There are many, many more complicated components of a platform and OS than that, and those seem to work as well as any other platform and OS.


Compared to what other operating system(s)?

wsl on windows 10 has been amazing to develop and work on.


> wsl on windows 10 has been amazing to develop and work on.

Now imagine how amazing it would be just on Ubuntu ;-)


The problem is that not many IT departments support Ubuntu. They are making lots of improvements to the UI and application management, but it can be cumbersome to get some applications working on Linux. Having Windows to install whatever GUI apps you need (or whatever other apps aren't needed in Linux), then having Linux there to develop on, has been pretty great. It's almost like a hybrid Linux+Windows operating system, and not at all like running a VM on Windows.

E.g. this is in my .bashrc in WSL; it writes whatever you pipe into it to my Windows clipboard:

  function cb () { powershell.exe -command "\$input | set-clipboard" }
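
For example (the file name is just an illustration), anything piped into it lands on the Windows clipboard:

  cat notes.txt | cb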

Windows gets tons of hate in our community, but I gave it a chance a couple of years ago after being frustrated with macOS, and it has been amazing; I think a lot of people would come around to it if they gave it a chance. I am biased towards Linux, though, since I'm an SRE, so maybe that is why I never could quite get comfortable on macOS. I really disliked having to learn how to do something once on a Mac, then do that same thing again on Linux to get it into production.


As for enterprise adoption of Ubuntu: Ubuntu 21.04 now supports AD Group Policy, thanks to Samba. I wonder how much that helps.


Every other OS. It's full of legacy APIs and scrapped new APIs. Every release is like one step forward, one step back, and one to the side. It still exists only because thousands of companies have written software and drivers for it. If it were released today it wouldn't stand a chance.


Are you talking about developing for Windows or developing on Windows? I'm talking about developing on Windows. I don't really care what the APIs look like underneath it all. WSL on Windows is a lot more intuitive to develop on when your target environment is Linux, compared to something like macOS, which is almost like Linux, but not really.


WSL was slower than dialup internet last I used it...


IMO a lot of what Windows does isn't something you can apply Copilot-style tech to. The only thing you could train it on would be Windows, really.


Have you used any intelligent code completion in the past? E.g. I'd really be interested in how it compares to TabNine[0], which already gives pretty amazing single-line suggestions (I haven't tried their experimental multi-line suggestions yet).

[0]: https://www.tabnine.com


Interestingly, the founder of TabNine (which was acquired by Codota[0]) went on to work at OpenAI (edit: commenters corrected me; he left in December 2020, according to his blog). I imagine they're livid about OpenAI creating a competing product.

TabNine at times was magical, but I stopped using it after Codota started injecting ads directly into my editor[1].

[0] https://betakit.com/waterloo-startup-tabnine-acquired-by-isr... [1] https://github.com/codota/TabNine/issues/342


Ah, thanks for the insight! It seems though that he is no longer working with OpenAI according to his personal website[0].

[0]: https://jacobjackson.com/about


I'm curious how relevant Copilot would be when autocompleting code that is specific to my codebase. TabNine, for instance, completes my most-used filters as soon as I type the db table name for the query; I'm a big TabNine fan because it provides this feature. I'm much more often looking to be suggested a line than an entire function, because I'm mostly writing business logic.

Also, TabNine is useless at multi-line completions, which is where Copilot should be strong.


Yeah, I've been very happy with Tabnine for a while, but the prospect of good multi-line completions is appealing. I might try running both Tabnine and Copilot simultaneously for a bit to A/B test.


I've been using TabNine for a couple years – constantly impresses me, especially how quickly it picks up new patterns. I wouldn't say it's doing my job for me, but definitely saves me a lot of time.


I have used IDEs with good knowledge of the types and libraries I'm using (e.g. VSCode with TypeScript). They offer good suggestions once you start typing a function name.

But nothing gets close to Copilot. It "understands" what you're trying to do, and writes the code for you. It makes type-based autocompletions useless.


TabNine works quite similarly to Copilot. It's not a thing that "knows about types and libraries"; it uses a predictive machine-learning method similar to what Copilot seems to use.


I tried out TabNine. It was a very frustrating experience. In almost all cases it gave completely useless suggestions which overrode the better ones already suggested by IntelliJ. I persevered for a few days and then uninstalled it.


Maybe it's just that humans are not as creative as they think. Whatever you do, thousands of others have done the same already. So there's no need to pay a high-level programmer; a mediocre one with the right AI assistant gives the same results.


With an AI assistant, in the best scenario, you'll get a "wisdom of crowds" effect on implementation details and program architecture. At worst, you'll get a lot of opinionated code bloat and anti-patterns as suggestions.

For most backend programming jobs, the challenge is not in writing complex code but figuring out what the business wants, needs and should have in the first place and distinguishing between them. Figuring out how to integrate with existing systems, processes, fiefdoms and code. Knowing when to say yes and no, how to make code future proof, etc. This is a task fundamentally unfit for what we currently call "AI", because it's not actually intelligent or creative yet.

On the frontend, it becomes even more nebulous. Maybe Copilot can suggest common themes like Bootstrap classes for a form, CSS properties, the basic file structure and implementations for components in an SPA, etc. As I see it, the main challenge is in UX there, not the boilerplate, which again makes it about understanding the user, figuring out how to make UI feel intuitive, etc. Again: unfit for current AI.

I cannot offer any opinion on the utility for actually complex, FANG-level code, for lack of experience.


Quite the opposite. Menial work will be automated away (e.g. CRUD) and only good programmers will be needed to do the more complicated work.


> So no need to pay a high level programmer, just a mediocre one and the right AI assistant gives the same results.

I think of it as not needing juniors for boring work, all you need as a company is seniors and AI.


So where do these seniors come from?


That’s beyond the planning horizon.


You could have a much smaller pool of juniors/journeymen that are focused on maximum learning rather than amount of output.


without juniors, how do you get more seniors?


Maybe programmers will adopt a similar model to artists and musicians. Do a lot of work for free/practically nothing hoping that some day you can make it "big" and land a full time role.


That is already the recommendation. Pick an open source project and contribute.


And most people who consider the job are driven away by the economics of it.


we already have that with unpaid interns and PhD students


I think it's more that this tool is only capable of automating the non-creative work that thousands have done already.

It's still insanely impressive (assuming the examples aren't more cherry picked than I'd expect).


Side note: I recently suffered from tennis elbow due to a suboptimal desk setup when working from home. Copilot has drastically reduced my keystrokes, and therefore the strain on my tendons.

It's good for our health, too!


I watched the same thing happen with the introduction of Intellisense. Pre-Intellisense I had tons of RSI problems and had to use funky ergonomic keyboards like the Kinesis keyboard to function as a dev. Now I just hop on whatever laptop is in front of me and code. Same reason - massive reduction in the number of keys I have to touch to produce a line of code.


Unfortunately, one in ten times is far from good enough (and that's with the good prompt engineering one starts to do after using large language models for a while).

I feel like the current generation of AI is bringing us close enough to something that works once in a while but requires constant human expertise ~50% of the time. The self-driving industry is in a similar situation of despair where millions have been spent in labelling and training but something fundamental is amiss in the ML models.


You are correct. I feel this is why the service is called Copilot, not Pilot :)


I think 1/10 is incredible. If it holds up, it means they may have found the right prior for a path of development that can actually lead to artificial general intelligence. With exponential improvement (humans learning to hack the AI, and the AI learning better suggestions), this may in theory happen very quickly.

We live in a very small corner of the space of possible universes, which is why finding a prior in program space within it is a big deal.


I keep wondering how much time it could possibly save, given that you're still obligated to read the code and make sure it makes sense. With that in mind, the testimonials here are very surprising to me.


It seems to replace/shorten the "Google for a snippet that does X; copy, paste, tweak" loop, no? Which of course is super useful for many tasks!


It's smarter than that. It suggests things that have never been written. It actually creates code based on the context, just like GPT-3 can create new text documents based on previous inputs.

Edit: Check this screencast for instance: https://twitter.com/francoisz/status/1409908666166349831


Anyone tried this on any sort of numeric computation? Numpy, Julia, Pandas, R, whatever?

I definitely see the utility in the linked screencast. But I am left to wonder whether this effectiveness is really a symptom of the extreme propensity for boilerplate code that seems to come with anything web-related.


I'm not convinced that code snippet in the screencast had never been written. It's fairly generic React, no?


Is the model similar to GPT-3? That seems extremely resource-intensive, particularly given how fast the suggestions appear.


How big/complicated are the functions Copilot is autocompleting for you? I'm thinking perhaps reading 10 potential candidates is actually slower and less instructive than trying to write the thing yourself.


It shows the suggestions line by line, and only shows the best guess. It's not more intrusive than Intellisense.

You can actually see all the code blocks Copilot is thinking about if you want to, but that is indeed a distraction.


The problem I see with that is that it's not possible for it to understand which code is best. GPT-3 is trying to mimic human writing in general, and the thing is, most human code is garbage. If this system were able to understand how to make code better, you could keep training it until you had perfect code, which is not what the current system gives you (a lot of the time, anyway).


I guess you're missing the point. It's not trying to suggest the perfect code; only you know that. It's saving you time by writing a good (sometimes perfect) first draft based on method/argument names, context, comments, and inline docs. And that is already a huge boost in productivity and coding pleasure (as you only have to focus on the smart part).


Maybe you are right. In my experience, either the code readily available to you (because another person or a computer wrote it) is perfect for your use case (to the best of your knowledge, anyway), or rewriting it from scratch is better than morphing what you have into what you need.


>if this system were able to understand how to make code better, you could keep training it until you had perfect code

Based on the FAQ, it looks like some information about how you interact with the suggestions is fed back to the Copilot service (and theoretically OpenAI) to improve the model.

So, while it may not understand "how to make code better" on its own, it can learn a bit from seeing how actual devs do make code better and theoretically improve from usage.


You're missing the problem he stated: code written by humans is usually bad, so the model is trained on garbage.


The proof is in the pudding. I was skeptical too but the testimonials here are impressive.


The animated example on https://copilot.github.com/ shows it suggesting entire blocks of code, though.


It does actually suggest entire blocks of code. I haven't quite figured out yet when it suggests blocks versus lines; if I create a new function / method and add a docstring, it definitely suggests a block for the entire implementation.


I see, I think the most useful case for me would be where I write a function signature+docstring, then get a list of suggestions that I can browse and pick from.

Do you have examples of what the line ones can do? The site doesn't really provide any of those.


Take a look at this minimal example I just created where I did just that -- created a new function and docstring. This example is of course super simple - it works for much more complex things.

https://gist.github.com/berndverst/1db9bae37f3c809e5c3f56262...
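
Roughly, the shape of that workflow is as follows; note that the function, docstring, and suggested body here are invented for illustration and are not the gist's actual content:

  import re

  def slugify(title):
      """Convert a post title to a URL slug: lowercase,
      with alphanumeric words joined by hyphens."""
      # The kind of body Copilot might plausibly suggest from the docstring:
      words = re.findall(r"[a-z0-9]+", title.lower())
      return "-".join(words)

  # slugify("GitHub Copilot: First Impressions!")
  #   -> "github-copilot-first-impressions"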


Is there actually a search.json endpoint?

https://github.com/HackerNews/API doesn't list this as a valid endpoint. Indeed, when I try to access it via curl, I get an HTTP 405 with a "Permission denied" error (same result when I try to access nonexistent-endpoint.json).

Based on the HN search on the website, I'd expect the correct autocomplete to involve hn.algolia.com [0].

[0] https://hn.algolia.com/api points at https://hn.algolia.com/api/v1/search?query=...

To me, this points at the need for human input with a system like this. There is a Firebase endpoint, yes, and Copilot found that correctly! But then it invented a new endpoint that doesn't exist.
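
For comparison, a human-corrected version against the documented Algolia endpoint would look something like this (a minimal sketch in Python; the requests library and the example query are my assumptions):

  import requests

  # Search Hacker News stories via the Algolia API, i.e. the endpoint
  # Copilot should have suggested instead of the invented search.json one.
  resp = requests.get(
      "https://hn.algolia.com/api/v1/search",
      params={"query": "github copilot", "tags": "story"},
  )
  for hit in resp.json()["hits"]:
      print(hit["title"], hit.get("url"))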


Even snippets are so bad I have to turn them off. I can't even fathom how bad the suggestions are going to be for full blocks of code. But I guess I'll see soon...


What's your estimate of the productivity boost expressed as a percentage? I.e. if it takes you 100 hours to complete a project without Copilot, how many hours will it be with Copilot?


I'm not sure I spend less time actually coding stuff (because I have to review the Copilot code). But the cost of the code I write is definitely reduced, because:

- the review from my peers is faster (the code is more correct)

- I come back to the code less (because I have thought about all the corner cases when checking the Copilot code)

- as I care more about naming & inline docs (it helps Copilot), the code is actually cheaper to maintain


Check out how it helped me write a React component: https://twitter.com/francoisz/status/1409919670803742734

I think I hit the Tab key more than any other key ;)


I can't see anything due to the variable bitrate.


Having been a TabNine user for a while I can say that it's less of a productivity boost and more of a quality of life win. It's hard to measure - not because it's small, but because it's lots of tiny wins, or wins at just the right moment. It makes me happy, and that's why I pay for it - the fact that it's probably also saving me some time is second to the fact that it saves me from annoying interrupts.


In addition, I would like to see GitHub reviewing my code and giving me suggestions on how I could improve. That would be more educational, and a tool to ensure consistency across a codebase.


I'm surprised this doesn't exist. Google, FB, and Apple (and I imagine Microsoft) have a lot of this stuff built in that is light-years better than any open source solution I'm aware of.

Given that MS owns GitHub and how valuable this is - I imagine it will be coming soon.


SonarQube and IntelliJ do this in some form for me.


+1 for SonarQube. Very easy way to add value to a project without a lot of overhead.


I tried TabNine and it wasn’t a huge improvement because what costs the most time isn’t typing stuff but thinking about what to type.


How did you manage to get early access? Is there some kind of GitHub Early Access programme (or something similar as part of Enterprise)?


Do you still go over the generated code line by line and touch it up in places where it did not do a good job?


It suggests code line by line, so yes


I guess I don't see the point if it's exactly what you want only 10% of the time, and the rest of the time you have to go back and touch up the line.

Does it train a programmer to accept less-than-ideal code because it was suggested? Similar to how some programmers blindly copy code from StackOverflow without modification.

Seems like there is a potential downside that's being ignored.


> Does it train a programmer to accept less-than-ideal code because it was suggested? Similar to how some programmers blindly copy code from StackOverflow without modification.

Maybe juniors, but I don't see this being likely for anyone else. I've been using TabNine for ages and it's closer to just a fancy autocomplete than a code assistant. Usually it writes what I would write, and if I don't see what I would write I just write it myself (until either I wrote the whole thing or it suggests the right thing). Of course, barring some variable names or whatever.

I don't have it "write code for me" - that happens in my head. It just does the typing.


> I guess I don't see the point if it's exactly what you want only 10% of the time, and the rest of the time you have to go back and touch up the line.

Think of it as tab complete. If it's wrong, don't hit tab.


This is not true, and I've been using Copilot for many months :)

It suggests entire blocks of code - but not in every context.


My bad, you're right. I remember now that it suggested entire code blocks to me from time to time.

Do you know in which "context" it suggests a block?


It usually suggests blocks within a function / method in my experience. Here's an example I created just now:

https://gist.github.com/berndverst/1db9bae37f3c809e5c3f56262...


For those of you who like this concept but (1) did not get into the alpha (or want something with lots of great reviews), (2) need to run locally (for security or connectivity reasons), or (3) want to use any IDE: please try TabNine.


It could start to replace us in 20 years, or at least reduce demand for us. For now, it is exciting.


Until it automatically knows when and how it's wrong, you'll still need a human to figure that out, and that human will need to actually know how to program, without the overgrown auto-complete.

May or may not reduce the demand for programmers, though. We'll see.


But that's exactly what you are doing as a programmer who uses it. If you autocomplete using it and then fix the code, you are literally telling it what it got wrong.


I was responding to "It could start to replace us in 20 years." I think if it can do that, it'll basically be AGI and a lot of things will change in a hurry. I don't think that a tool that can do that is even all that similar to this tool.


> and the rest of the time it suggests something rather good, or completely off

In what proportion, roughly?


Hard to say, really. When writing React components, Jest tests, and documentation, it's often not far off. I found it to be off when writing HTML markup (which is hard to describe with words).


Seems like it's best classed (for now) as an automated tool to generate and apply all the boilerplate snippets I would be creating and using manually, if I weren't too lazy (and too often switching between projects) to set those up, remember how to use them, and remember that they exist.


It sounds similar to the editor plugin called TabNine.


Do you have an invite? Very interested to check it out.

