It depends on what you're doing. If it's a simple task, or you're making something that won't grow into something larger, eyeballing the code and testing it is usually perfect. These types of tasks feel great with Claude Code.
If you're trying to build something larger, it's not good enough. Even with careful planning and spec building, Claude Code will still paint you into a corner when it comes to architecture. In my experience, it requires a lot of guidance to write code that can be built upon later.
The difference between the AI code and the open source libraries in this case is that you don't expect to be responsible for the third-party code later. Whether you or Claude ends up working on your code later, you'll need it to be in good shape. So, it's important to give Claude good guidance to build something that can be worked on later.
If you let it paint you into a corner, why are you doing so?
I don't know what you mean by "a lot of guidance". Maybe I just naturally do that, but to me there's not been much change in the level of guidance I need to give Claude Code or my own agent vs. what I'd give developers working for me.
Another factor is that, as long as you ensure it builds good enough tests, the cost of telling it to just throw out the code it built and redo it with additional architectural guidance keeps dropping.
> If you let it paint you into a corner, why are you doing so?
What do you mean? If it were as simple as not letting it do so, I would do as you suggest. I may as well stop letting it be incorrect in general. Lots of guidance helps avoid it.
> Maybe I just naturally do that, but to me there's not been much change in the level of guidance I need to give Claude Code or my own agent vs. what I'd give developers working for me.
Well yeah. You need to give it lots of guidance, like someone who works for you.
> the cost of telling it to just throw out the code it builds later and redo it with additional architectural guidance keeps dropping.
It's a moving target for sure. My confidence in this for more complex scenarios is much lower.
> What do you mean? If it were as simple as not letting it do so, I would do as you suggest.
I'm arguing it is as simple as that. Don't accept changes that muddle up the architecture. Take attempts to do so as evidence that you need to add direction. Same as you presumably would - at least I would - with a developer.
My concern isn't that it's messing up my architecture as I scream in protest from the other room, powerless to stop it. I agree with you and I think I'm being quite clear. Without relatively close guidance, it will paint you into a corner in terms of architecture. Guide it, direct it, whatever you want to call it.
For OpenAI: short answer is no. From what I've seen, their biggest expense is training future models. If they stop that (putting aside the obvious downsides) they'd still be in the hole for a few billion dollars a year.
edit: Well, if they shed the other expenses that only really make sense when training future models (research, data acquisition, extra headcount...), they would be pretty close to breaking even.
What is unique about graphic design that warrants such extraordinary care? Should we just ban technology that approaches "replacement" territory? What about the people, real or imagined, that earn a living making Graphviz diagrams?
I don't think most artists would be any less angry & scared if AI were trained on licensed work. The rhetoric would just shift from mostly "they're breaching copyright!" to more of the "machine art is soulless and lacks true human creativity!" line.
I have a lot of artist friends, but I still appreciate that diffusion models are already, and with further refinement will increasingly be, incredibly useful tools.
What we're seeing is just the commoditisation of an industry in the same way that we have many, many times before through the industrial era, etc.
It actually doesn't matter how they would feel. Under the currently accepted copyright framework, if the works were licensed they couldn't do much about it. But right now they can be upset because the new normal is suddenly massive copyright violation. It's very clear that without the massive amount of unlicensed work the LLMs simply wouldn't work well. The AI industry is just trying to run with it, hoping nobody will notice.
It isn’t at all clear that there’s any infringement going on, except in cases where AI output reproduces copyrighted content, or content sufficiently close to copyrighted content to constitute a derivative work. For example, if you told an LLM to write a Harry Potter fanfic, that would be infringement - fanfics are actually infringing derivative works that usually get a pass because nobody wants to sue their fanbase.
It’s very unlikely that simply training an LLM on “unlicensed” work constitutes infringement. It could be that the model itself, when published, would represent a derivative work, but it’s unlikely that most output would be unless specifically prompted to be.
I am not sure why you would think so. AFAIK we will see more of what courts think later in 2025, but judging from what was ruled in Delaware in February... it is actually very likely that LLMs' use of material is not "fair use", because besides how transformed the work is, one important part of "fair use" is that the output does not compete with the original work. LLMs not only compete... they are specifically sold as a replacement for the work they were trained on.
This is why the lobby is now pushing governments not to allow any regulation of AI, even if courts disagree.
IMHO what will happen anyway is that at some point the companies will "solve" the licensing by training models purely on older synthetic LLM output that will be "public research" (which of course will still carry the "human" weights, but they will claim it doesn't matter).
I don't follow. The artists are obviously complaining about the output that LLMs create. If you create an LLM and don't use it, then yeah, nobody would have a problem with it, because nobody would know about it…
Any LLM output created with unlicensed sources is tainted. It doesn't matter if the output does not look like anything in the dataset. If you take out the unlicensed sources, you simply won't get the same result. And since the results directly compete with the source, it's not “fair use”.
If we believe that authors should be able to decide how their work is used, then they can for sure say no to machine learning. If we don't believe in intellectual property, then anything is up for grabs. I am OK with that, but the corps are not.
I'm interpreting what you described as a derivative work to be something like:
"Create a video of a girl running through a field in the style of Studio Ghibli."
There, someone has specifically prompted the AI to create something visually similar to X.
But would you still consider it a derivative work if you replaced the words "Studio Ghibli" with a few sentences describing their style that ultimately produces the same output?
I get where you're coming from, but given that LLMs are trained on every available written word regardless of license, there's no meaningful distinction. Companies training LLMs for programming and writing show the same disregard for copyright as they do for graphic design. Therefore, graphic designers aren't owed special consideration that the author is unwilling to extend to anybody else.
Of course I think the same about text, code, sound, or any other LLM output. The author is wrong if they are unwilling to apply the same measure to everything. The fact that this is the new normal for everything does not make it right.
I like this argument, but it does somewhat apply to software development as well! The only real difference is that the bulk of the "licensed work" the LLMs are consuming to learn to generate code happened to use some open source license that didn't specifically exclude use of the code as training data for an AI.
For some of the free-er licenses this might mostly be just a lack-of-attribution issue, but in the case of some stronger licenses like GPL/AGPL, I'd argue that training a commercial AI codegen tool (which is then used to generate commercial closed-source code) on licensed code is against the spirit of the license, even if it's not against the letter of the license (probably mostly because the license authors didn't predict this future we live in).
Does it? It admits at the top that art is special for no given reason, then it claims that programmers don't care about copyright and they deserve what's coming to them, or something..
"Artificial intelligence is profoundly — and probably unfairly — threatening to visual artists"
LLMs immediately and completely displace the bread-and-butter replacement-tier illustration and design work that makes up much of that profession, and do so by effectively counterfeiting creative expression. A coding agent writes a SQL join or a tree traversal. The two things are not the same.
Far more importantly, though, artists haven't spent the last quarter century working to eliminate protections for IPR. Software developers have.
Finally, though I'm not stuck on this: I simply don't agree with the case being made for LLMs violating IPR.
I have had the pleasure, many times over the last 16 years, of expressing my discomfort with nerd piracy culture and the coercive might-makes-right arguments underpinning it. I know how the argument goes over here (like a lead balloon). You can agree with me or disagree. But I've earned my bona fides here. The search bar will avail.
How is creative expression required for such things?
Also, I believe that we're just monkey meat bags and not magical beings and so the whole human creativity thing can easily be reproduced with enough data + a sprinkle of randomness. This is why you see trends in supposedly thought provoking art across many artists.
Artists draw from imagination which is drawn from lived experience and most humans have roughly the same lives on average, cultural/country barriers probably produce more of a difference.
Many of the flourishes any artist may use in their work are also likely used by many other artists.
If I commission "draw a mad scientist, use creative license" from several human artists I'm telling you now that they'll all mostly look the same.
> Far more importantly, though, artists haven't spent the last quarter century working to eliminate protections for IPR. Software developers have.
I think the case we are making is that there is no such thing as intellectual property to begin with, and the whole thing is a scam created by duct-taping a bunch of different concepts together when they should not be grouped together at all.
That's exactly the point, it's hard to see how someone could hold that view and pillory AI companies for slurping up proprietary code.
You probably don't have those views. But I think Thomas' point is that the profession as a whole has been crying "information wants to be free" for so many years, when what they meant was "information I don't want to pay for wants to be free" - and the hostile response to AI training on private data underlines that.
Because it's rules for us and not for them. If I take Microsoft's code and "transform" it I get sued. If Microsoft takes everyone else's code and "transforms" it (and sells it back to us) well, that's just business, pal. Thomas's argument is completely missing this point.
> LLMs immediately and completely displace the bread-and-butter replacement-tier illustration and design work that makes up much of that profession, and do so by effectively counterfeiting creative expression. A coding agent writes a SQL join or a tree traversal. The two things are not the same.
In what way are these two not the same? It isn't like icons or ui panels are more original than the code that runs the app.
Or are you saying only artists are creating things of value and it is fine to steal all the work of programmers?
What about ones trained on fully licensed art, like Adobe Firefly (based on their own stock library) or F-Lite by Freepik & Fal (also claimed to be copyright safe)?
> LLMs immediately and completely displace the bread-and-butter replacement-tier illustration and design work that makes up much of that profession
And so what? Tell it to the Graphviz diagram creators, entry level Javascript programmers, horse carriage drivers, etc. What's special?
> .. and does so by effectively counterfeiting creative expression
What does this actually mean, though? ChatGPT isn't claiming to have "creative expression" in this sense. Everybody knows that it's generating an image using mathematics executed on a GPU. It's creating images. Like an LLM creates text. It creates artwork in the same sense that it creates novels.
> Far more importantly, though, artists haven't spent the last quarter century working to eliminate protections for IPR. Software developers have.
Contrary to your theory, programmers are very particular about licenses. Copyleft licensing leans heavily on enforcing copyright. Besides, I hear artists complain about the duration of copyright frequently. Pointing to some subset of programmers who are against IPR is just nutpicking in any case.
I get it, you have an axe to grind against some subset of programmers who are "nerds" in a "piracy culture". Artists don't deserve special protections. It sucks for your family members, I really mean that, but they will have to adapt with everybody else.
I disagree with you on this. Artists, writers, and programmers deserve equal protection, and this means that tptacek is right to criticize nerd piracy culture. In other words, we programmers should respect artists and writers too.
To be clear, we're not in disagreement. We should all respect each other. However, it's pretty clear that the cat's out of the bag, and trying to claw back protections for only one group of people is stupid. It really betrays the author's own biases.
I do have an axe to grind, and that part of the post is axe-grindy (though: it sincerely informs how I think about LLMs), I knew that going into it (unanimous feedback from reviewers!) and I own it.
I generally agree with your post. Many of the arguments against LLMs being thrown around are unserious, unsound, and a made-for-social-media circle jerk that don't survive any serious adversarial scrutiny.
That said, this particular argument you are advancing isn't getting so much heat here because of an unfriendly audience that just doesn't want to hear what you have to say. Or that is defensive because of hypocrisy and past copyright transgressions. It is being torn apart because this argument that artists deserve protection, but software engineers don't is unsound special pleading of the kind you criticize in your post.
Firstly, the idea that programmers are uniquely hypocritical about IPR is hyperbole unsupported by any evidence you've offered. It is little more than a vibe. As I recall, when Photoshop was sold with a perpetual license, it was widely pirated. By artists.
Secondly, the idea -- that you dance around but don't state outright -- that programmers should be singled out for punishment since "we" put others out of work is absurd and naive. "We" didn't do that. It isn't the capital owners over at Travelocity that are going to pay the price for LLM displacement of software engineers, it is the junior engineer making $140k/year with a mortgage.
Thirdly, if you don't buy into LLM usage as violating IPR, then what exactly is your argument against LLM use for the arts? Just a policy edict that thou shalt not use LLMs to create images because it puts some working artists out of business? Is there a threshold of job destruction that has to occur for you to think we should ban LLMs use case by use case? Are there any other outlaws/scarlet-letter-bearers in addition to programmers that will never receive any policy protection in this area because of real or perceived past transgressions?
Adobe is one of the most successful corporations in the history of commerce; the piracy technologists enabled wrecked most media industries.
Again, the argument I'm making regarding artists is that LLMs are counterfeiting human art. I don't accept the premise that structurally identical solutions in software counterfeit their originals.
> Adobe is one of the most successful corporations in the history of commerce; the piracy technologists enabled wrecked most media industries.
I guess that makes it ok then for artists to pirate Adobe's product. Also, I live in a music industry hub -- Nashville -- you'll have to forgive me if I don't take RIAA at their word that the music industry is in shambles, what with my lying eyes and all.
> Again, the argument I'm making regarding artists is that LLMs are counterfeiting human art. I don't accept the premise that structurally identical solutions in software counterfeit their originals.
I'm aware of the argument you are making. I imagine most of the people here understand the argument you are making. It's just a really asinine argument, propped up by all manner of special pleading (but art is different, programmers are all naughty pirates who deserve to be punished) and appeals to authority (check my post history - I've established my bona fides).
There simply is no serious argument to be made that LLMs reproducing one work product and displacing labor is better or worse than an LLM reproducing a different work product and displacing labor. Nobody is going to display some ad graphic from the local botanical garden's flyer for their spring gala at The Met. That's what is getting displaced by LLM. Banksy isn't being put out of business by stable diffusion. The person making the ad for the botanical garden's flyer has market value because they know how to draw things that people like to see in ads. A programmer has value because they know how to write software that a business is willing to pay for. It is as elitist as it is incoherent to say that one person's work product deserves to be protected but another person's does not because of "creativity."
Your argument holds no more water and deserves to be taken no more seriously than some knucklehead on Mastodon or Bluesky harping about how LLMs are going to cause global warming to triple and that no output LLMs produce has any value.
Well, I disagree with you. For the nth time, though, I also don't grant the premise that LLMs are violative of the IPR of programmers. But more importantly than anything else, I just don't want to hear any of this from developers. That's not "your arguments are wrong and I have refuted them". It's "I'm not going to hear them from you".
> For the nth time, though, I also don't grant the premise that LLMs are violative of the IPR of programmers.
I wish you all the best waiting for a future where the legislature and courts decide that LLM output is violative of copyright law only in the visual arts.
> I just don't want to hear any of this from developers.
Well, you seem to have posted about the wrong topic in the wrong forum then. But you’ve heard what you’ve wanted to hear in the discussion related to this post, so maybe that doesn’t really matter.
This is the only piece of human work left in the long run: providing training data on taste. Once we hook up A/B testing on AI creative outputs, the LLM will know how to be creative and not just duplicative. The AI will never have innate taste, but we can feed it taste.
We can also starve it of taste, but that’s impossible because humans can’t stop providing data. In other words, never tell the LLM what looks good and it will never know. A human in the most isolated part of the world can discern what creation is beautiful and what is not.
Everything is derivative, even all human work. I don't think "creativity" is that hard to replicate; for humans it's about lived experience. For a model, it would need the data that impacts its decisions. At the moment, models are trained for a neutral/overall result.
Things like this are expressions of preference. The discussion will typically devolve into restatements of the original preference and appeals to special circumstances.
I'm not trying to make a point, just curious -- what's stopping you from spending more money on AI? You could be using more API tokens, more Claude Code and whatever else.
I have a ChatGPT subscription, and work has one of those “all the models” kind of subscriptions. So I have access to pretty much most of the mainline models — don’t feel the need to pay more.
But if the business model collapsed and they had to raise prices, or work cheaped out and stopped paying for our access, then yeah, I’d step up and spend the money to keep it.
> my point is that I think after driving a tractor for a while, the kid would really struggle to go hoe by hand like he used to, if he ever needed to
That's true in the short term, but let's be real, tilling soil isn't likely to become a lost art. I mean, we use big machines right now but here we are talking about using a hoe.
If you remove the context of LLMs from the discussion, it reads like you're arguing that technological progress in general is bad because people would eventually struggle to live without it. I know you probably didn't intend that, but it's worth considering.
It's also sort of the point in an optimistic sense. I don't really know what it takes on a practical level to be a subsistence farmer. That's probably a good sign, all things considered. I go to the gym 6 times a week, try to eat pretty well, I'm probably better off compared to toiling in the fields.
> If you remove the context of LLMs from the discussion, it reads like you're arguing that technological progress in general is bad because people would eventually struggle to live without it.
I'm arguing that there are always tradeoffs, and we often do not fully understand the tradeoffs we are making or the consequences of those tradeoffs 10, 50, or 100 years down the road.
When we moved from more physical jobs to desk jobs many of us became sedentary and overweight. Now we are in an "obesity crisis". There's multiple factors to that, it's not just being in desk jobs, but being sedentary is a big factor.
What tradeoffs are we making with AI that we won't fully understand until much further along this road?
Also, what is in it for me or other working class people? We take jobs that have us driving machines, we are "more productive" but do we get paid more? Do we have more free time? Do we get any benefit from this? Maybe a fraction. Most of the benefit is reaped by employers and shareholders
Maybe it would be better if instead of hoeing for 8 hours the farmhand could drive the tractor for 2 hours, make the same money and have 6 more free hours per day?
But what really happens is that the farm buys a tractor, fires 100 of the farmhand's coworkers, then has the remaining farmhand drive the tractor for 8 hours, replacing the lost productivity with very little benefit to himself.
Now the other farmhands are unemployed and broke, and he's still working just as much without gaining anything extra from it.
In a healthy competitive market (like most of the history of the US, maybe not the last 30-40 years), if all of the farms do that, the reduction in labor needed to produce the food drives competition and brings down the price of food.
That still doesn’t directly benefit the farmhands. But if it happens gradually throughout the entire economy, it creates abundance that benefits everybody. The farmhand doesn’t benefit from their own increase in productivity, but they benefit from everyone else’s.
And those unemployed farmhands likely don’t stay unemployed - maybe farms are able to expand and grow more, now that there is more labor available. Maybe they even go into food processing. It’s not obvious at the time, though.
In tech, we currently have like 6-10 mega companies, and a bunch of little ones. I think creating an environment that allows many more medium-sized companies and allowing them to compete heavily will ease away any risk of job loss. Same applies to a bunch of fields other than tech. The US companies are far too consolidated.
> I think creating an environment that allows many more medium-sized companies and allowing them to compete heavily will ease away any risk of job loss. Same applies to a bunch of fields other than tech. The US companies are far too consolidated
How do we achieve this environment?
It's not through AI, that is still the same problem. The AI companies will be the 6-10 mega companies and anyone relying on AI will still be small fry
Every time in my lifetime that we have had a huge jump in technological progress, all we've seen is that the rich get richer and the poor get poorer and the gap gets bigger
You even call this out explicitly: "most of the history of the US, maybe not the last 30-40 years"
Do we have any realistic reason to assume the trend of the last 30-40 years will change course at this point?
> When we moved from more physical jobs to desk jobs many of us became sedentary and overweight. Now we are in an "obesity crisis". There's multiple factors to that, it's not just being in desk jobs, but being sedentary is a big factor.
Sure, although I think our lives are generally better than they were a few hundred years ago. Besides, if you care about your health you can always take steps yourself.
> The only one who benefits are the owners
Well yeah, the entity that benefits is the farm, and whoever owns whatever portions of the farm. The point of the farm isn't to give its workers jobs. It's to produce something to sell.
As long as we're in a market where we're selling our labor, we're only given money for being productive. If technology makes us redundant, then we find new jobs. Same as it ever was.
Think about it: why should hundreds of manual farmhands stay employed while they can be replaced by a single machine? That's not an efficient economy or society. Let those people re-skill and be useful in other roles.
> If technology makes us redundant, then we find new jobs. Same as it ever was.
Except, of course, it's not the same as it ever was because you do actually run out of jobs. And it's significantly sooner than you think, because people have limits.
I can't be Einstein, you can't be Einstein. If that becomes the standard, you and I will both starve.
We've been pushing people up and up the chain of complexity, and we can do that because we got all the low-hanging fruit. It's easy to get someone to read, then to write, then to do basic math, then to do programming. It gets a bit harder though with every step, no? Not everyone who reads has the capability of doing basic math, and not everyone who can do basic math has the capability of being a programmer.
So at each step, we lose a little bit of people. Those people don't go anywhere, we just toss them aside as a society and force them into a life of poverty. You and I are detached from that, because we've been lucky to not be those people. I know some of those people, and that's just life for them.
My parents got high paying jobs straight out of highschool. Now, highschool grads are destined to flip burgers. We've pushed people up - but not everyone can graduate college. Then, we have to think about what happens when we continue to push people up.
Eventually, you and I will not be able to keep up. You're smart, I'm smart, but not that smart. We will become the burger flippers or whatever futuristic equivalent. Uh... robot flippers.
What if all work is no longer necessary? Then yes, we're going to have to rethink how our society works. Fair enough.
I'm a bit confused by your read on the people who don't make it through college. The implication is that if you don't make it into a high status/white collar job, you're destined for a life of poverty. I feel like this speaks more to the insecurity of the white collar worker, and isn't actually a good reflection of reality. Most of my friends dropped out of college and did something completely different in the service industry, it's not really a "life of poverty."
> My parents got high paying jobs straight out of highschool. Now, highschool grads are destined to flip burgers.
This feels like pure luck for your parents. Take a wider look at history -- it's just a regression to the mean. We used to have _less_ complex jobs. Mathematics/science hasn't always been a job. That is to say, burger-flipping or an equivalent was more common. It was not the norm that households were held together by a single man's income, etc.
I don’t think we need to get to a point where all jobs are eliminated to start seeing cracks in the system. We already have problems. We’ve left a lot of people behind, we just don’t really care.
It's written using unsafe Rust, which means the compiler will not be able to verify that it is safe. It's not guaranteed to be safe just because it is written in Rust. Please understand this: the author of this repo is spreading incorrect information.
The difference between a zealot and an evangelist is the ability to understand when someone is making a joke. I’ll let you figure out how you’re coming across here on your own as a growth exercise.
I recommend reading through this section of The Rust Programming Language book to learn more about the existence of Unsafe Rust, which is a part of Rust and not a different language like C or C++.
The article says it right there: unsafe is used to give you unsafe superpowers. This is important for the quantum entanglement that the string uses. Unsafe brings the power (superpowers enabled by the compiler), while the Rust compiler ensures everything is safe. Rust.
>while the Rust compiler ensures everything is safe. Rust.
This is where you are mistaken. Quoting the book:
>Be warned, however, that you use unsafe Rust at your own risk: if you use unsafe code incorrectly, problems can occur due to memory unsafety, such as null pointer dereferencing.
With unsafe Rust, the compiler no longer ensures everything is safe; it is up to the programmer to ensure that it is.
You're confused by the word 'unsafe', which is a misnomer. Rust. The point of 'unsafe' is to indicate that the compiler should be extra careful when compiling the code, because you're about to do something that only a C/C++ programmer would do. Rust sees this and is extra careful. As in your quote, the compiler makes sure we're not using it incorrectly. Rust.
>is to indicate that the compiler should be extra careful when compiling the code
No, it disables some static analysis, making the compiler less careful when compiling, not more. Please reread the chapters I linked to you, because it seems like you fundamentally misunderstand what unsafe Rust is. I'm happy to answer any questions you may have to clarify it.
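To make that concrete, here is a minimal, standalone sketch (not taken from the repo being discussed) of what unsafe actually changes. The borrow checker and type checks still run everywhere; the unsafe block just lets you perform a handful of operations, like dereferencing raw pointers, that the compiler cannot verify:

    fn main() {
        let x = 42u32;
        let p = &x as *const u32;

        // In safe Rust this line would not compile:
        // let y = *p; // error[E0133]: dereference of raw pointer is unsafe

        // Inside `unsafe` the same dereference compiles. The compiler is not
        // being "extra careful" here; it stops checking and trusts the
        // programmer that `p` is valid and properly aligned.
        let y = unsafe { *p };
        println!("{y}");

        // It will just as happily accept an unsafe block around a null
        // pointer. Uncommenting the dereference still compiles, and running
        // it is undefined behavior (a null pointer dereference), which is
        // exactly what the book quote above warns about.
        let null: *const u32 = std::ptr::null();
        // let _oops = unsafe { *null };
        let _ = null;
    }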
I don't think the length you're talking about is that much of an issue. As you say, depending on how you measure it, LLMs are better at remaining accurate over a long span of text.
The issue seems to be more in the intelligence department. You can't really leave them in an agent-like loop with compiler/shell output and expect them to meaningfully progress on their tasks past some small number of steps.
Improving their initial error-free token length is solving the wrong problem. I would take less initial accuracy than a human, as long as the model is equally capable of iterating on its solution over time.
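To be concrete about what I mean by an agent-like loop, here's a rough sketch (ask_model is a hypothetical stand-in for whatever model API you'd actually call; nothing here is a real crate or endpoint):

    use std::process::Command;

    // Hypothetical stand-in for a call to an LLM; not a real API.
    fn ask_model(prompt: &str) -> String {
        format!("// model suggestion for: {}", prompt.lines().next().unwrap_or(""))
    }

    fn main() {
        let mut prompt = String::from("Make `cargo build` succeed for this crate.");

        // Feed the model the compiler output, apply its suggestion, rebuild, repeat.
        for step in 1..=10 {
            let suggestion = ask_model(&prompt);
            println!("step {step}: {suggestion}");

            // (Actually applying the suggestion to the source tree is omitted.)

            let build = Command::new("cargo")
                .arg("build")
                .output()
                .expect("failed to run cargo");

            if build.status.success() {
                println!("build succeeded after {step} step(s)");
                return;
            }

            // Hand the fresh compiler errors back as the next prompt. In my
            // experience this is where progress stalls after a handful of
            // iterations, which is the failure mode described above.
            prompt = String::from_utf8_lossy(&build.stderr).into_owned();
        }

        println!("gave up after 10 steps");
    }

The loop itself is trivial to write; the problem is that the quality of each successive suggestion degrades, not that the plumbing is hard.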