The AI coding approach is solving the wrong problem. The problem isn't with the low level detailed work. That problem should be solved by building composable, tested, audited libraries. Legos, if you will.
If the problem is trivial enough that you can completely trust the AI coded solution, then you could have either done it yourself very easily or used a premade solution from a good library or toolkit.
If the problem is not trivial, then you have the outsourcing challenges (which apply to a lot more scenarios than using AI to help you code).
If you are not personally capable of judging the outsourced work, then whether you use AI or type it yourself, you will end up with errors or misfeatures.
If you are capable of judging, then you must pay attention and read/review. So your job shifts from defining the problem and programming a solution to defining the problem and reviewing potential solution(s). Either way, you must focus and think. But again, perhaps you would be better off building a solution composed of known good blocks. <- This should be the future of software development...
Sadly, I think that open source and freedom to (re)invent has worked against us in the long run. If instead of each of us going off and thinking, "I can make a better language/framework", we had built on existing technologies, I daresay we would be further along. To be fair of course, some level of dissatisfaction and divergence would be necessary or we would still be using assembly.
Github does have one thing right though (from a business perspective) - they are making a remedy to a symptom, and in that they can expect longer term revenue than if they actually solved the core problem.
I think you’re missing what the problem they’re trying to solve is. They’re solving the problem of “writing code faster”
With that in mind I think this problem is totally real and worth solving. If copilot can save me those mundane moments during my day where I have to figure out “how to do this common thing that I already did 100 times” then it’s a win for everyone.
It’s not trying to solve the whole of programming. It’s just a nice tool to let you actually concentrate on the non-automated parts of coding such as: actually translating requirements into code.
We don’t need more code. In general, code is a liability. A tool that helps create more configurations of the same terrible boilerplate incantations that we already have to maintain in a million codebases is just adding to the problem.
We've had sixty years of serious software development and the community still doesn't have a strong idea of how to actually write applications in the manner you describe and then evolve those applications according to new requirements. Every decade we get tons and tons of new best practices and the fact is that virtually all applications, even those written by world experts, end up as a mess.
"Just write code beautifully" is a great problem to try to solve, but we shouldn't let that problem prevent us from making minor local optimizations in engineering practice.
Of course lots of boilerplate should indicate we need new libraries or better frameworks. But we don’t.
We’re stuck writing things that often need 50 lines of boilerplate.
This is the wrong solution but that doesn’t mean it’s bad. Someone (too many..) is surely writing new frameworks and better libraries. But meanwhile thousands of people really do have to add boilerplate to million-line codebases.
Easy to say. Of course you do that when you can. What afflicts me is all the cases when the task is subtly different, with different requirements, but still feels like something I've done before.
At the code level, that's parameters in a library function. At the infrastructure level, currently my team is using Terraform templates. I feel like we will probably restructure that "soon" to be more about composing infrastructure out of common blocks rather than inheriting from an entire template, but either way, this does seem to be a solvable problem at the infrastructure layer as well.
As you imply, it is significantly easier to say than to do effectively, but I've yet to see any situation where it -can't- be done (but I'm curious what one looks like if someone's run into this).
> At the code level, that's parameters in a library function
Sure, so you keep adding more parameters to the library function and then pretty soon you can’t remember all the parameters so you have to look them up every time you want to invoke it from your code… which is exactly the perfect use case for AI like this to help you remember the correct parameters to call.
Keeping in mind I'm a hobbyist, not a professional, but it seems to me much of coding boils down to this process of refactoring, spinning out libraries and restructuring to use Terraform or whatever, and at some point the process itself is what's repetitive.
Not OP but I think the gist was more about solving the same/similar class of problem in a different context. You’re not going to build a library to abstract away certain things, for lots of good reasons. E.g. in OOP you may intentionally not use inheritance between a few similar (but distinct) business cases because of the overall maintenance burden/rigidness it would introduce. So you end up with more boilerplate, but your codebase is overall more flexible.
That to me is an indication of a need for an even higher level abstraction.
At this point we should be able to go through a questionnaire, and software (not an AI, but definitely algorithm with rich knowledge and rules) would generate a quality starting system.
$ I need an app
Q1. do you need it to be available on the web?
A1. yes
Q2. do you need users with accounts?
A2. yes
Q3. do you want local passwords, and/or third party connections such as Github, Google, Facebook, etc.?
A3. local and third party
...
Qm. Do you have a programming language preference? [Ruby, Java, Python, ...]
Am. Ruby
Qn. Do you have a database preference? [MySQL, PostgreSQL, ...]
An. PostgreSQL
...
Add some theme options and other goodies, and you essentially have an app starter kit. Granted, you still have to know the technologies and plumbing since inevitably you'll need to change something that was preconfigured, but this would save a lot of time (and result in more consistent foundations across apps).
Ultimately I hope this goes even higher level to no-code or nearly no-code solutions. It might mean fewer choices, but that can also be a good thing. I imagine a majority of all software needs can be met with just a few predefined configurations. Then you can decide if you want to change your needs to fit the easy solutions, or if it's really worth going custom.
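For illustration, here's a minimal Python sketch of what such a questionnaire-driven generator might look like. Everything in it (the questions, the option lists, what gets "scaffolded") is made up for this example; a real tool would render actual project templates, which is roughly what the starter kits mentioned in the replies (JHipster, cookiecutter) do.

import sys

# Hypothetical questionnaire: (key, question, allowed answers)
QUESTIONS = [
    ("web", "Do you need it to be available on the web?", ["yes", "no"]),
    ("auth", "Do you need users with accounts?", ["yes", "no"]),
    ("language", "Programming language preference?", ["Ruby", "Java", "Python"]),
    ("database", "Database preference?", ["MySQL", "PostgreSQL"]),
]

def ask(question, options):
    answer = input(f"{question} {options} ").strip()
    if answer not in options:
        sys.exit(f"expected one of {options}, got {answer!r}")
    return answer

def generate(answers):
    # A real generator would write files out from templates here;
    # this sketch only reports which building blocks it would assemble.
    print(f"Scaffolding a {answers['language']} app backed by {answers['database']}")
    if answers["web"] == "yes":
        print("  + web front end")
    if answers["auth"] == "yes":
        print("  + user accounts (local and third-party login)")

if __name__ == "__main__":
    generate({key: ask(question, options) for key, question, options in QUESTIONS})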
Such starter kits exist, e.g. JHipster[1]. The problem is that after you've generated the starter code, this big heap of code (that you're not familiar with the details of) is now yours to maintain and upgrade.
There is also cookiecutter [1], that attempts a crack at the same problem you are describing. There is also a nice website [2] that searches for all the cookiecutter templates and lists them.
It is often recommended not to write a function if you only call it once or twice. So most functions will be called multiple times in your code.
Copilot can be useful here as it will autocomplete all arguments, based on the context (existing variables) and knowledge about the function's arguments, or previous usages. Of course sometimes it is wrong, but this is still very useful to me!
If you've done something 100 times, wouldn't it be faster to copy / paste the implementation you wrote (and know is good) than to rely on something like copilot and have to review / check the solution every single time?
I think what's actually going to happen is that copilot will be successful, but will be a much worse version of poor quality outsourcing. There are going to be people "writing code" that don't even have the ability to evaluate the implementation.
You see the same thing in some industries where the institutional knowledge of the baby boomers is disappearing. No one stays at the same job for more than a few years and you can see people making mistakes with things that are out by a factor of 10x or 100x sometimes. Very often people don't have the ability to grasp the basic concepts of the work they're doing. I think the same thing will bleed over to software development a lot over the next decade.
I also have a huge objection to having any code I write used to develop an "AI" for the benefit of a huge corporation. Do I get a cut? I doubt it. That alone should be enough for people to quit using GitHub. They're training their own replacement and are too dumb to see it IMO.
The problem is that you need to find it. There are thousands of things you've done those hundreds of times, scattered throughout all the code you've ever written. Going to find it can take as long as deriving the information from scratch -- again.
I often end up solving it by repeating my Google, and frequently the StackOverflow page will still be marked as having been read.
If it is not something temporary, a line of code is written once and then read many times by many people (for different reasons). Over the long run, speed of reading is more important than speed of writing. If Copilot doesn’t help to write easy-to-read code, it would make the problem worse.
The thing is, there's already a powerful AI technology we can use to solve the "boilerplate code" problem.
It's called Lisp. You know, Lisp, the language for AI written way back when brand-new Cadillac Eldorados with tailfins were still on the road that is self-reflective enough to make writing "programs that write programs" an absolute doddle? :)
Good luck getting bigco to sign off on a Lisp project, though. Even if it is of demonstrably profound practical utility.
You're right -- we have a huge problem with trying to trowel the new hotness (currently, "AI/ML" aka statistics) instead of taking advantage of tools that are highly suited to purpose. Instead of letting us use those tools, bigcos instead subscribe to the "programmer-clerk" myth in which programming is a menial task of mostly rote coding undertaken by minions in legion strength. And this affects not only the tools we use but our processes and professional values.
The underlying language doesn't actually matter as long as you can deliver the toolkit. I do agree that Lisp(y) language(s) may be very well suited, but I don't care.
I just want us to stop reinventing the wheel hundreds of times each year. At least paper shuffling and fax sending took long enough that you could get a coffee and have a chat while it was happening. Now instead we toil over configuration files (which ironically Rails, my daily toolbox, aimed to solve a decade ago).
I should be able to define a few data models and relationships, processes, and some business rules. The rest absolutely should be generated for me. If cars worked like this, we would still be custom building wooden wheels.
> The underlying language doesn't actually matter as long as you can deliver the toolkit. I do agree that Lisp(y) language(s) may be very well suited, but I don't care.
Languages matter. Lisp offers second-to-none metaprogramming capability and it's metaprogramming, not "libraries", that will bring the boilerplate problem under control. Other languages may be able to do the same, just not as well; Smalltalk and Ruby can "fake it". Metaprogramming lets you put patterns into reusable libraries, not just data and functions.
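Lisp macros operate on the code itself, which nothing below illustrates fully, but even Python decorators (Python being the only language with example code elsewhere in this thread) hint at what "putting a pattern into a reusable library" means. A minimal sketch; the retry pattern is my example, not something from the thread:

import functools
import time

def retry(attempts=3, delay=0.1):
    # Package the "try it a few times before giving up" pattern once,
    # instead of rewriting the loop at every call site.
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            for i in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if i == attempts - 1:
                        raise
                    time.sleep(delay)
        return inner
    return wrap

@retry(attempts=5)
def flaky_network_call():
    ...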
> I should be able to define a few data models and relationships, processes, and some business rules. The rest absolutely should be generated for me.
So you want CASE tools from the 80s-90s? Or "autogenerate applications from UML" from the 2000s? Because uh, there are reasons why those failed. Among them:
1) Code that's generated by machine still has to be maintained by humans. I work with one of the low/no code tools that promise to do everything for you. Lately I spend my days toiling over its flowcharts rather than configuration files, trying to figure out what broke when it stops working, because it doesn't have nearly the debugging capability that JavaScript has and even crude tools like "debug by printf" are off limits.
2) Systems that run in the real world are subject to real-world constraints. You have to integrate with other, very specific systems, you are subject to performance and scalability constraints that vary by application, etc. These are constraints that require solutions to be engineered.
You're going to still have plenty of work to do after you've defined your data models and business rules, no matter what tools you use. The question really is how much friction the tools impose when completing that work.
I very recently got access to Copilot, while I was in the middle of learning and playing around with Clojure.
It’s surprisingly useful when you’re not sure about how you want to proceed. E.g. While I was trying to make a simple function for printing all palindromic numbers under 10,000, Copilot inferred from the function name what I was trying and suggested a function using threading macros (something I hadn’t yet come across in Clojure). The result was a much neater affair than what I came up with on my own. I feel it could be a fantastic way to build familiarity with a new language.
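For reference, the task itself is tiny; a plain Python version (not the Clojure threading-macro suggestion described above) would be something like:

def print_palindromic_numbers(limit=10_000):
    # A number is palindromic if its decimal digits read the same in both directions.
    for n in range(limit):
        if str(n) == str(n)[::-1]:
            print(n)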
I hear you. But I see this as a horrible thing. Now people are going to be further absolved of responsibility of thinking through the problem first and then implementing it after.
This will then fold in on itself. The AI will start learning upon its own generated code, as there is no differentiator from what it wrote before. It has got a leg up from training on lots of human code, some brilliant written, some not, but eventually it will start dog fooding its own creations.
I am honestly pleased I am not using a Microsoft IDE and won't be part of all this crap. I have already started creating new projects on GitLab from now on.
AI may improve in many ways, but it will never reliably be able to infer what the user is wanting... because many users will not even be able to accurately define the problem.
It's a catch-22. If you can accurately express your need (the problem), then you're likely to be able to implement a solution to that need without an AI helper. In fact, you may find the AI helper to be a greater time cost than if you had just done it yourself. But if you are inexperienced such that you need the AI to help, then you probably aren't going to accurately express the need. And as such, you'll end up with "a solution", but perhaps to a problem that is different enough that there will be a net loss in time. Or worse, your AI "solution" ends up in production, and the damage comes later and at higher cost.
Not necessarily horrible but still, a bit frightening :).
I fear hordes of inexperienced developers, mindlessly clicking accept on the first suggestion, and then on the second, third, and so on, until one seems to work. And the more advanced tools like this get, the harder it'll be to review the code and make sure nothing's messed up.
I guess we'll have to think long and hard about what safeguards to build against these risks.
This is the issue I see. Developers will employ logic they don't understand, create systems they don't really understand, and large investments in time and finances will realize absolute failure. The risk of junk code and misunderstood logic grows exponentially with this type of tool. In the hands of a skilled Computer Scientist, it is a godsend, in the hands of the majority of us developers, it will create unmanageable complexity, stress and failure.
IMHO it's worse because on StackOverflow, people can comment, upvote/downvote, add their own answers etc. if the random code has issues or shortcomings or is a bad idea for whatever reason. With Copilot, you have to trust it or trust that you can figure out if there are issues (which, if you're a beginner learning, you probably can't and, according to that security article[1], in many cases even experienced programmers may not notice[2]). With StackOverflow, that's also the case, except you have many more eyes vetting it.
[2] Two quotes from the article: "Oh, and they're both wrong." and "Both look plausibly correct at a glance". If the entire selling point is to write code faster, are we really going to give the code more than a glance? Personally, I find figuring out code I didn't write a lot harder than writing it from scratch, in most cases, because I understand my intent, while I don't understand the intent of others without some deep thinking.
Do people actually just copy paste? When I was much greener I never found that to work. The interface had to be thought about. And inevitably there’d be other details to modify.
If anything this just makes that easier to do wrongly. It wants to offer you code that’s not yours that you didn’t conceive of thoughtfully.
I’ve copy/pasted parts of code, but then I typically always link to the actual SO answer in the code. Sometimes it’s just the most pragmatic thing to do.
I have fixed some very broken code at work, and the person who had originally added the code to the project just commented on the PR, "I copied it from StackOverflow so it must be right". That person had an architect title and I don't, so he must have been doing something right.
It isn’t really being sold as “are you the kind of developer who copies and pastes code you don’t understand from StackOverflow? Then have we got an efficiency improvement for you!”
If anything it’ll make hiring harder. We’ll all get accused of gatekeeping for requiring candidates to really understand their output, because they don’t need to know all that junk! Just use copilot and tweak it until it seems to work!
I see co-pilot more as an intelligent suggestion engine or tab complete on steroids versus something solving all of my problems or absolving one of thought. Does pair programming result in junk code?
No, but not because there's a second person writing code, rather because there's a second person to think about it and think through how it works and interacts with the rest of the system.
As I said in another comment, actually writing code is only a small part of my work as a programmer, with pair programming, there is someone else who can do all the other things with me. Copilot cannot help with those things, certainly not in its current form.
The problem with Copilot as a fancy tab completion is that it can generate quite complex code, and, if code reviews are anything to go by, understanding other people's badly documented code is usually harder than writing your own. If it's non-trivial, there's a strong urge to just accept it without really understanding all the details, because it appears to work. Or to just get a shallow grasp of it before deciding it's good enough. Copilot will not help here; if anything, it makes it worse if it generates subtly broken code that otherwise looks correct and is too complex to really scrutinize and understand.
A better approach, I think, would be if you write unit tests and Copilot were to take them as input and write code to pass them. At least then it would be grounded in tests.
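A toy illustration of "tests as the spec": hand the tool assertions like these and ask it to produce an implementation that passes them. The slugify function here is hypothetical; a reference implementation is included only so the snippet runs on its own.

import re

def slugify(text):
    # Reference implementation; in the proposed workflow, this is what
    # the tool would be asked to generate from the tests below.
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

def test_lowercases_and_joins_words():
    assert slugify("Hello World") == "hello-world"

def test_strips_punctuation():
    assert slugify("Hello, World!") == "hello-world"

if __name__ == "__main__":
    test_lowercases_and_joins_words()
    test_strips_punctuation()
    print("spec satisfied")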
Right now, it seems Copilot shines for boilerplate code, but is possibly a net negative for anything actually complex (which it will still happily attempt to generate without warning you). A disciplined programmer will likely benefit from Copilot, but I shudder to think about what code a lazy, overworked or too-beginner-to-spot-problems one might let slip through (based on this recent article about copilot: https://gist.github.com/0xabad1dea/be18e11beb2e12433d93475d7...)
20 years ago I would download as many projects as I could fit onto the floppy disks I had on me within the time I had access to the internet. I'd scour Planet Source Code, and all the others, and get all those zips. All I'd have is the title to pick from. Then I'd go home, open them up, and pore over the code, understanding it, ripping it apart, re-using it. I learnt so much.
As an educational tool, being able to generate code and then pick it apart is as important as learning to put code together from nothing.
For me the AI part of this is all the bad parts. What would really be cool is better search, templating, and best practices at your fingertips in the IDE.
Exact same problem to solve, because it is an important an interesting one, but a totally different approach. Maybe AI assisted somehow, but human curated.
Otherwise, imagine how bad legacy codebases of the future will be when they are full of autocomplete code that nobody understands or cared enough to think through even originally.
We should have AI tools to aid software engineers understanding of logic chains, and assorted visualizations like CAD, but for logic and the code creating that logic. And not UML nonsense, but some type of AI tool that begins where Doxygen ends and simply keeps going with various means of aiding the developer's understanding as they construct hierarchical logic systems.
It's not even clear to me that writing more code faster is necessarily a good thing, or, at least, all that beneficial. I've said it in a previous Copilot discussion and I'll say it again, but actually writing code is a small part of what I do as a programmer.
I spend much more time figuring out what the requirements even are, refining them, figuring out what that even means in terms of code, figuring out the overall architecture, how it fits in with other systems or existing code, what data formats it uses, how it handles faults, persistence, scale, security. How it interfaces with the outside world (UI or API). Besides the code itself, I also spend a lot of my time on writing tests (which I wouldn't want to pawn off on an AI outside of fuzzing or generating data for property-based tests; unit tests should mirror what the spec dictates and needs to test the correct things) and on writing documentation.
Yes, the code does take up a good chunk of time, but really, it's the easy part of my day!
Also, speeding through the code means I'm not thinking about it very deeply. That's when I introduce the most bugs, design flaws or shortcomings that bite me later. I wonder if we'll end up with a situation like the old quote about code reviews: a ten line code review gets a hundred comments/suggestions/questions, a thousand line code review gets a ship it. If much of the code is written for us, will we have the attention span to scrutinize it and understand it deeply? Or will our eyes eventually just glaze over as we go yeah, it's probably fine, ship it.
This is exactly why I think Copilot (and other "AI writes code for you" solutions) are going to fail. It's harder to read code than it is to write it. That's the opposite of how English works.
> We’ll also need code reviews. Lots and lots of code reviews. Like, all the time. The algorithm will have to be kept in check.
This is a repeated theme of the article. I think it’d be simpler to write the non-boilerplaty code, no? Plus who’s going to be excited to have a job as “AI code reviewer”?
So basically shift the cognitive burden onto your coworkers? And what if they are also using Copilot, hoping that you will sanity check its output for them? Tit-for-tat prisoner’s dilemma and no one even realises they are playing the game..
Maybe a future version could even write the requirements for the application. And when the users can't figure out how to use whatever comes out, we can have an AI do that part too.
So far, I am feeling positive about Copilot. I've used it for about a week and it has been useful at times. Like the author, I've mostly found it useful in situations where I am doing something repetitive. I definitely need to look over suggestions carefully though.
I don't think it will go much further than that and I don't know what that would even look like anyway, unless I could actually start discussing architectural decisions with it like I do with a human pair programmer. I guess you could say that this is what the comments are for, so who knows.
I love the fact that it can help write tedious, repetitive, and simple logic code blocks! It'll save me time googling basic stuff that I tend to forget.
Will it also learn (i.e. feed GPT) from the code we're writing which it is also helping to write? How do you think it'll learn to deprecate bad practices or evolutions observed in a language (think writing concurrency code in Java 5 vs. Java 9, or any other relevant Programming Language evolutions)?
> Will it also learn (i.e. feed GPT) from the code we're writing which it is also helping to write?
Most likely not. And at least right now Tabnine's selling point is that you do inference locally (so no need to send code elsewhere) and can be trained on such (vide https://www.tabnine.com/tabnine-vs-github-copilot).
What kind of code? I'm having trouble thinking of anything for the kind of work I do, that isn't catered for by "intellisense". Anything beyond that seems to be impinging on why I actually enjoy programming.
I understand some of the legal implications to be regurgitating licensed code verbatim. But, what about this: what if the current Copilot is working out the kinks, and the real product is per-organization models with transfer learning using their repos' code?
At work, we store all our code in our GitHub repo, some public, some private. As-is, I think there's a lot of legal ambiguity around using Copilot, but if all that code just served to teach the model structure of programs and common syntactical constructs, but then it had another layer with our code and its idioms, modules, names, then maybe it would regurgitate our code in a way that's useful and doesn't run afoul of licenses.
I'm thinking of a fast.ai course I did where I took a base model trained on generic image data, and then did transfer learning on top where I fed it labeled images of Go games and Chess games, and with only maybe 100 of those images it learned to distinguish the two with shocking accuracy. As I understand it, the base model taught it how to look for things like lines, corners, contrast, etc, and then it could be easily specialized. Could something similar be the case here?
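The mechanism described is standard transfer learning. A minimal PyTorch-style sketch (not the fast.ai API from the course, and the data loader is assumed): freeze a network pretrained on generic images and retrain only a new final layer for the two classes.

import torch
import torch.nn as nn
from torchvision import models

# Reuse a feature extractor trained on generic images...
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False  # keep the generic "lines, corners, contrast" features frozen

# ...and learn only a new 2-class head, e.g. "go board" vs "chess board".
model.fc = nn.Linear(model.fc.in_features, 2)
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# train_loader is assumed to yield (images, labels) for the ~100 labelled photos:
# for images, labels in train_loader:
#     loss = loss_fn(model(images), labels)
#     optimizer.zero_grad(); loss.backward(); optimizer.step()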
It’s early days for CoPilot, but I find myself wondering if it will eventually reduce idiomatic use of languages and increase the stickiness of old practices.
In the article, the author included this example:
alternate_word_mapping = {words[i]: words_in_english[i] for i in range(len(words))}
That line is probably more readable than the Pythonic enumerate:
alternate_word_mapping = {word: words_in_english[i] for i, word in enumerate(words)}
But it superficially reminded me of a style that I see on Leetcode, which rarely uses Python features like enumerate or dict.items.
CoPilot was trained on GitHub repos and many repos are mini-projects for learning a new language. Does that mean CoPilot will tend to suggest more generic, less language specific, implementations? If it does, will that change perceptions of what’s idiomatic? And will the volume of old code on GitHub influence CoPilot’s suggestions, making us slow to adopt new language features?
Thumbs up, I totally missed this. Didn’t encounter this approach in the suggestions either.
That’s actually one reason why I don’t think its current incarnation is a good fit for learning new APIs: it’s been trained on a lot of code, not all of it good.
Seems like something a future version might be able to fix, perhaps by training a new layer using just demonstrably ‘good’ code?
You can complain and whine about what problems are being solved, how it'll affect human developers (making them weaker instead of stronger over time). And to some extent, that's probably true.
But it's still the future and it's coming. It's already here.
It's good to read a less dramatic take on Copilot. The initial echo chamber of outrage felt rather strange, especially considering the usual attitude when people point out risks in AI systems. I guess everyone else's reading-file-line-by-line loops in python are the epitome of creativity, and being inspired by them is its own category of crime compared to the exact same thing happening in uncreative professions such as photography, music, or writing? And not being hired because the algorithm prefers people who played lacrosse in school is, like, your problem, because waiting to release models until they do not harm anyone would seriously mess with our agile process.
As an aside, I really enjoyed the writing style. The subtle humour is better at signalling competence and friendliness than any CV ever could.
> I guess everyone else's reading-file-line-by-line loops in python are the epitome of creativity
I would certainly hope not, since there is barely any code to write:
for line in my_file: ...
For even more convenience in common cases, there is the fileinput module[1] in the standard library.
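For example, the whole "loop over lines from the files named on the command line, or stdin" program is:

import fileinput

# Iterates over lines from the files given as command-line arguments,
# or from standard input if none are given.
for line in fileinput.input():
    print(line.rstrip())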
I can see how a boilerplate-generating AI could be helpful in a more boilerplate-heavy language like Java, but a better solution is to use a language that better suits your usecase and lets you express it without the boilerplate.
I'd say using a better language is an easy decision to make when you're in a team of one and not depending on any framework-specific features and somewhat harder when you're in a team of more, depending on some framework-specific features, or both.
I’m not a denier. I believe AI will vastly alter the way we write code. But, to be honest, it’s very depressing to me. I think I am someone who selfishly enjoys writing code, not necessarily getting software built. If my job became designing something at a very fine grained level, feeding it to an AI, having the AI write it and then code reviewing the AI, I’d just switch careers. Unfortunately for me, I’m super early in my career so I hope I can make enough money in the next 20 or so years such that I can retire young.
One thing not mentioned in this article, but that's on GitHub Copilot's page, is that they imagine it'll be useful for learning how to code.
> ... or just learning to code
I've done some teaching, and my mental model for what is necessary to learn a language (and, more generally, to learn how to program) is that the rudimentary, boilerplate-y type of stuff this tool seems to excel at, mindless as it may feel to most of us, is essential for beginners as part of their learning.
Any educators here with different ideas or thoughts?
To me it seems like copilot only helps with local code, which is absolutely not the bottleneck (it’s maybe 20% of my time)
I spend much much more time figuring out how to convert requirements to code and how to structure it non-locally (as in how it fits into the whole codebase)
Also if you’re concerned with writing clean code, copy-pasting boilerplate everywhere is not the right approach, you have to actually think about interfaces and abstractions
This would be even more difficult to achieve than previous attempts (e.g. in the Linux kernel [0]) due to the fact that an attacker needs to corrupt thousands of repositories that are guaranteed to be part of the training set.
Potential attackers would have two problems: 1) getting malicious code checked into many repos and 2) making sure that these repos find their way into future deployed versions of GPT-3/Codex/CoPilot.
CoPilot generates enough vulnerable code as-is [1], so the extra effort isn't even required.
Crafting might not be necessary. You might find a vulnerability in a commonly copiloted piece of code, and now you can exploit it in many projects. Better yet, those snippets cannot be updated even if Copilot improves, and there is nothing to file a CVE against either.
The number of people who never write a vuln normally but would write a vuln if they were using machine synthesized code has got to be fewer than ten people on the planet.
Microsoft just stole all the code on github to do this. Regardless of what the minutiae of the law say, no one really expected their work to be used this way. Open source code powers a huge chunk of the industry while capturing little value for the maintainers already. Github even explicitly supports a standard format for declaring the license of a repo, which was cleverly ignored.
Here is the relevant section from GitHub's terms of service [1]
> 6. Contributions Under Repository License
> Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede.
From GPLv2, "When distributing derived works, the source code of the work must be made available under the same license."
------
This is not about technology, it is a legal endrun around using open source code without open sourcing derived work. It is using AI as a form of "license laundering".
"OpenAI" is not open at all. Truly open AI means the code, the data and the model are all open. OpenAI sold the source to GPT-3 to Microsoft, received $1 billion from them in 2019 and does not make most of their work available except behind a highly exclusive, paid API - https://beta.openai.com/pricing/. Its a joke to call that "open". I urge you to read up on OpenAI and look at what the have actually done.
Their plan in the future is to sell access to Copilot, directly monetizing work they stole from others for free:
> According to GitHub, “If the technical preview is successful, our plan is to build a commercial version of GitHub Copilot in the future.”
I've deleted all my code from GitHub and hope others do the same. Maybe if some bigger-profile project starts doing this, we can start to organize around opposing Copilot and OpenAI.
It’s a shame that copilot would not be possible without all the zillions of hours of work that went into writing that code, while the authors of that training data get zero compensation for their contribution to copilot (and zero ability to opt out).
I'm guessing that since there are hundreds of millions of repositories, the typical marginal value of someone's contributions would optimistically be on the order of a few dollars. But since the consensus on HN is that they spend very little time actually coding and there is no use-case for Copilot, perhaps it's worth a lot less.
If I stole just $0.50 from every american, the typical marginal value of their contribution is tiny, but I still stole nearly $200M. Maybe none of those people will raise much of a stink because it's just $0.50, but it's just as bad.
Practically, it's bad in that I never got the chance to dictate how they use my code. My GPL code has very little marginal value to my users, but I got to dictate that their work that uses it is also GPL (or they can pay me for a different license). I want that choice when it comes to my work being used as ML training data.
I think it will be great if they can create some mechanism to compensate people for their data, I just suspect many people conflate the value of their data as training data and say how much they might charge a client to write some similar code.