It depends on what you're developing and possibly for whom, but I think in general not that many will care about using AI or any other tool.
Nobody bats an eye when you use IntelliSense or another form of autocomplete, or a plugin with code snippets. I don't worry about the implications of using StackOverflow, or some plugins to do a bit of codegen for me (e.g. ORM mappings for my app from a live database schema), nor would I worry about getting an informal suggestion for how to write a particular method from a friend.
You'd have to be in a pretty corporate environment for AI or any of that to start mattering.
Code should support human values. It's not clear that human values are best served by private code ownership. And AI code might best provide value as a publicly owned good.
That is why I write all my code under the AGPL, and think all AI assisted code should automatically be open source.
Interesting question. If the Google v. Oracle case had played out today, and Google had been able to prove that an AI created an API that looked incredibly similar to Java's, would that have stood up as a defence? If code isn't owned by the developer or company that wrote it, then presumably someone else can't sue them for copyright infringement.
If that's true then there'll be no effective way to ever sue a software developer over the code they wrote (or generated, or prompted, or whatever the term will be.)
Well, it depends. It can be you, your company, a customer, a friend, a SaaS company that asked you to sign a CLA to contribute to their "open-source" project,…
SO commenters own their parts and you own yours, just like with books, movies, audio works, etc. Fair use, copyright, work for hire, paraphrasing, and attribution all predate SO. Content posted on SO is licensed under CC BY-SA.
I think we should think of AI more like a search engine that understands deeply what we want right now than a person helping us. And if anyone owns anything the AIs regurgitate it is the authors of the training data.
If you use paint you bought at the store, who really owns the finished painting?
If you use a saw to cut wood, who owns the wood you cut?
For now, "AI" is a tool. Maybe that will change in the future when AI is indistinguishable from a person and has rights and privileges, but for now it's just a tool, and tools do not transfer ownership.
If copyrightable creative source code is the toolchain precursor to an executable binary, can a creative LLM prompt be copyrighted as a toolchain precursor to non-creative source code? Should LLM prompts be versioned alongside generated source code?
Yes, the prompt is copyrighted, just like your post here is copyrighted. But I don't think there's any point in versioning the prompts. The responses are just too nondeterministic. If you feed it the exact same prompt again, you can get something quite different back. On the other hand, if giving the same prompt got you the same response (like a reproducible build), then absolutely yes, version those prompts.
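A minimal sketch of what "versioning the prompt" could look like in that reproducible world: store the prompt alongside a hash of the code it produced, so a later regeneration can be checked for drift. The record format and field names here are illustrative assumptions, not an established practice:

```python
import hashlib


def record_generation(prompt: str, model: str, generated_code: str) -> dict:
    """Build a sidecar record that versions a prompt next to its output.

    The output hash lets a later run detect whether regenerating from the
    same prompt produced byte-identical code (a 'reproducible build') or
    drifted due to nondeterministic sampling.
    """
    return {
        "model": model,  # which model produced the code (hypothetical field)
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(generated_code.encode()).hexdigest(),
    }


def is_reproduced(record: dict, new_code: str) -> bool:
    """True only if regenerated code matches the archived output hash exactly."""
    return hashlib.sha256(new_code.encode()).hexdigest() == record["output_sha256"]
```

With today's models `is_reproduced` would return False most of the time, which is exactly the commenter's point: until generation is deterministic, the prompt alone isn't a useful source artifact.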
Imagine a court which ruled that LLMs must have a reproducible mode, based on a session ID provided by the user, to establish a legally binding and deterministic connection between copyrighted LLM prompt and generated code.
Are LLM vendors allowed to permanently archive user-copyrighted prompts? Presumably the vendor's EULA can force the user to grant a non-exclusive license for any purpose.
> If you feed it the exact same prompt again, you can get something quite different back
If one trusted an LLM to generate code from the first prompt, it might be worthwhile to periodically evaluate new code generated from the same prompt. E.g. can the LLM generate a better implementation, or does testing of new output identify an ambiguity in the first prompt? Would a newbie developer have written better code after N months/years of experience?
I've imagined a world where you can create a long, complex prompt that results in a reproducible response of code that does exactly what you want, and does it correctly. In that world, the prompt could be your versioned source and the build process could start from that. Implementing changes could be done by tweaking the prompt. Of course you'd still want to archive the "intermediate" source code, if only to protect yourself from an LLM outage.
In that world, you actually would be getting new code from the source prompt regularly, and yes, it hopefully would improve over time.
In our world however (at least for me), usage tends to be more conversational, so there really isn't even "a prompt" to save. Not to mention that I never use the output as-is.
I think the issue here is that it's really clear who owns the liability: if your employees are coding "with AI", you need to understand that they don't know enough to recognize when they are putting you at risk.
If code needs an owner for legal purposes, then it should probably be the person or persons who commissioned the code to be created, if said ownership is not transferred to a different person or persons by agreement.
The finished product is owned by whoever publishes it first. Otherwise, a special license for AI (mostly to differentiate it from spam) could be created for autonomously created and published code.
Well, of course it's the one giving the prompts, no different from when your boss takes all the credit for a solution you wrote based on a series of his prompts.
Wikipedia:
"The Sistine Chapel is the large papal chapel built within the Vatican between 1477 and 1480 by Pope Sixtus IV, for whom the chapel is named. The ceiling was painted at the commission of Pope Julius II."
I suppose the Roman Catholic Church has always owned the chapel, and it then owned the paintings as part of the chapel.
This is a case where the artist was so famous that we know he is the author. But he was never the owner; he was paid to do the work.