
I've been finding that the strangest part of discussions around art AI among technical people is the complete lack of identification or empathy: it seems to me that most computer programmers should be just as afraid as artists in the face of technology like this! I am a failed artist (read: I studied painting in school, tried to make a go of it as a commercial artist in animation, and couldn't make the cut), and so I decided to do something easier and became a computer programmer, working for FAANG and other large companies and making absurd (to me!) amounts of cash. In my humble estimation, making art is vastly more difficult than the huge majority of computer programming that is done. Art AI is terrifying if you want to make art for a living, and, if AI is able to do these astonishingly difficult things, why shouldn't it, with some finagling, also be able to do the dumb, simple things most programmers do for their jobs?

The lack of empathy is incredibly depressing...




Artists have all my sympathy. I'm also a hobbyist painter. But I have very little sympathy for those perpetuating this tiresome moral panic (a small number of actual artists, whatever the word "artist" means), because I think that:

a) the panic is entirely misguided and based on two wrong assumptions. The first is that textual input and treating the model as a function (command in -> result out) are sufficient for anything. No, this is a fundamentally deficient way to give artistic directions, which is further handicapped by primitive models and weak compute. Text alone is a toy; the field will just become more and more complex and technically involved, just like 3D CGI did, because if you don't use every trick available, you're missing out. The second wrong assumption is that it's going to replace anyone, instead of making many people re-learn a new tool and produce what was previously unfeasible due to the amount of mechanistic work involved. This second assumption stems from the fundamental misunderstanding of the value artists provide, which is conceptualization, even in a seemingly routine job.

b) the panic is entirely blown out of proportion by social media. Most people have neither the time nor the desire to actually dive into this tech and find out what works and what doesn't. They just believe that a magical machine steals their works to replace them, because that's what everyone reposts on Twitter endlessly.


That's exactly the lack of empathy the OP was on about: if you don't see that there is something wrong with a bunch of programmers feeding everybody's work into the meatgrinder and then spitting out stuff that they claim is original work, when they probably couldn't draw a stick figure themselves, then something isn't quite right. At least, to me.


As someone who has always had a huge gap between what I can imagine and what I can manifest, outside of text anyway, I find the whole thing amazing and massively enabling. And I think it is possible to come up with original images, even though the styles are usually derivative.

At the same time I recognise that this is a massive threat to artists, both low-visibility folks who throw out concepts and logos for companies, and people who may sell their art to the public. Because I can spend a couple of dollars and half an hour to come up with an image I’d be happy to put on my wall.

I’m not sure what the answer is here, but I don’t think a sort of “human origin art” Puritanism is going to hold back the flood, though it may secure a niche like handmade craft goods and organic food…


What will happen is exactly the same thing that happened when email made mass mailings possible: a torrent of very low quality art will begin to drown out the better stuff because there is no skill required to produce a flood of trash whereas to produce original work takes talent and time.

As the price of a bit dropped, the quality of the comms dropped. It is inevitable that the price of the creation of (crappy) art will do the same thing, if only because it will drag down the average.


> a torrent of very low quality art will begin to drown out the better stuff because there is no skill required to produce a flood of trash

Case in point: https://stackoverflow.com/help/gpt-policy

> This trust is broken when users copy and paste information into answers without validating that the answer provided by GPT is correct, ensuring that the sources used in the answer are properly cited (a service GPT does not provide), and verifying that the answer provided by GPT clearly and concisely answers the question asked.


The goal of programming as a discipline is to create tools that allow problems to be solved. Art is a problem - how do I express myself to others? The entire industry is designed for moments like this.


Unlike many other professions, I don’t think there is much critical thought in the tech community as to what tech and programming are and aren’t for.

A few people have engaged in “hand wringing”, but there is no deep, regular discourse on the evolving nature of what we want “tech” and “programming” to be going forward.

Despite tech delivering transformative social shifts, even in this last decade, where is the collective reflection?


> "The first is that textual input and treating the model as a function (command in -> result out) are sufficient for anything. No, this is a fundamentally deficient way to give artistic directions, which is further handicapped by primitive models and weak compute."

This is the first wave of half decent AI.

But more importantly, you are vastly underestimating the millions of small jobs out there that artists use as a stepping stone.

Think of the millions of managers who would happily be presented with a choice of 10 artistic interpretations, and pick one for the sake of getting a quick job done.

No way on earth this isn't going to make a major impact. Empathy absolutely required.


If small, pointless jobs that machines can do are so great, then let’s get rid of computers, power tools, and automation so we can get those unemployment numbers down… why can’t we find a solution that doesn’t hamper progress? At the end of the day, progress saves lives.


This is very black and white thinking.

One can see AI tools as progress here while also recognising that this is likely to have a huge impact on a lot of lives.


Actually, all progress will have a huge impact on a lot of lives; otherwise it is not progress. By definition it will impact many, displacing those who were doing things the old way by doing them better and faster. The trouble is when people hold back progress just to prevent the impact. No one should object to softening the impact, but that should not come at the cost of progress.


I very much agree, and I feel the campaigns to stop AI image generation in its tracks are misguided.

I do wonder what happens as the market for the “old way” dries up, because it implies that there is no career path to lead to doing things better - any fool (I include myself) can be an AI jockey, but without people that need the skills of average designers, from what pool will the greats spring?


The gun made it so that even a dainty person could kill a strong person. However, some people are better shooters than others. It will just shift the goalposts so that a new skill is required. Being strong is still a thing… just maybe not the most important one in a gun fight.


I don’t see this situation as analogous or even particularly useful - we’re not talking about gun fights, we’re talking about art and design, and whether we will see fewer great artists and designers as the market for average or beginner artists and designers dries up.

It doesn’t really matter to humanity if strong people can still win fights, but it might matter if artists and designers who do produce great, original work stop being produced. It probably even matters to the AI models because that forms part of their input.


So empathy as in being considerate that they are losing their jobs, right? Not that AI art generation is inherently a bad thing? Or that they or I can do anything about it?


Also those millions of managers will soon be redundant, what they do is often quite trivial.


You are demonstrating that lack of empathy. Artists' works are being stolen and used to train AI that then produces work that will affect those artists' careers. The advancement of this tech in the past 6 months, if it maintains this trajectory, demonstrates this.


> Artists' works are being stolen

It has been fascinating to watch “copyright infringement is not theft” morph into “actually yes it’s stealing” over the last few years.

It used to be incredibly rare to find copyright maximalists on Hacker News, but GitHub Copilot and Stable Diffusion seem to have created a new generation of them.


But it's not even copyright. Copyright does not protect general styles. It protects specific works, or specific designs (e.g. Mickey Mouse). It doesn't allow someone to claim ownership over a general concept like "a painting of a knight with a castle and a dragon in the background".

Are there any documented cases where copyright law didn't seem to offer sufficient protection against something that really did seem like copyright infringement but done using AI tooling? I started looking for some a few weeks ago because of this debate and still haven't seen anything conclusive.


The problem with "AI" here is that it copies like nothing else. It copies everything and learns everything like a master, because it is fed off of us.


"copyright infringement is not theft" is not an especially common view among artists or musicians, since copyright infringement threatens their livelihood. I don't think there's anything inconsistent about this. Yes, techies tend to hold the opposite view.

Personally, I think "copyright infringement is not theft" but I also think that using artists' work without their permission for profit is never OK, and that's what's happening here.


> I don't think there's anything inconsistent about this.

It amounts to saying that anything that benefits me is good and anything to my detriment is bad. Sure, there's a consistency to that. However, if that's the foundation of one's positions, it leads to all manner of other logical inconsistencies and hypocrisies.


Individual humans copying corporate products vs corporations copying the work of individual humans they didn't pay.

The confusion is that “copyright infringement is not theft” really was about being against corporate abuse of individuals. It's still the same situation here.


So it’s okay to infringe copyright against a group of people getting paid by a corporation, but not against individual artists, and you should definitely not break open source licenses?


"group of people getting paid by a corporation" they are not involved at all. Corporations are their own person's, remember.

It's almost like the real problem is asymmetry and abuse of power.


Who is being “abused” by you not having access to other people’s content in the way you want?


I think we miscommunicated somewhere. I was being sarcastic when I said corporations were people. If we had a model of capitalism dominated by collective employee ownership, I think your ethical argument might work. We don't.


Copyright should not exist, but artists do need support somehow, and doing away with copyright without other radical changes to the economy/society leaves them high and dry. Abolishing copyright should be paired with other forms of support such as UBI or worker councilization, instead of getting rid of it while clutching capitalist pearls and ultimately only accelerating capitalism at artists' expense.


> stolen

Is it though? What if I were to look at your art style and replicate that style manually in my own works? I see no difference whether it's done by a machine or done by hand. The reality is that all art is derivative of some other art. Interestingly, the music industry has been doing this for years. Ever since samplers became a thing, musicians have spliced and diced loops into their own tracks for donkey's years, creating an explosion of new genres and sounds. Hip-hop, techno, dark ambient, EDM, ..., all fall into the same category. Machine learning is just another new tool to create something.


It’s not stolen. If I create a work mimicking the style of whomever, I’ve not taken anything from them besides an idea. Ideas are not protected. Ideas are the point. If you don’t want to share your ideas, feel free not to.

Most people do not understand the purpose of copyright. Copyright is a bargain between society and the creator. The creator receives limited protection of the work for a limited time. Why is this the deal?

The purpose of copyright is to advance the progress of science and the useful arts. It is to benefit humanity as a whole.

AI takes nothing more than an idea. It does not take a “creative expression fixed in a tangible medium”.


I'd say it's more similar to an artist drawing influence from another artist, and there is a difference in that the machines can do it much more efficiently.

Personally, I'm all for AI training and using human artwork. I think telling it not to prevents progress/innovation, and that innovation is going to happen somewhere.

If it happens somewhere, humans who live in that somewhere will just use those tools to launder the AI-generated artwork, and companies will hire those offshore humans and reap the benefits, all the while, the effect on local artists' wages is even more negative because now they don't have access to the tools to compete in this ar(tificial intelligence)ms race.


If I take your source code, copy it and then change the variable names, did I take inspiration or copy it?


That's a false analogy. Renaming variables does not change anything; it's still an exact replica of the algorithm in question. Also, in engineering and computer science circles, cloning designs or code is often regarded as an acceptable practice, even encouraged (within the bounds of licensing). And for good reason: if there is a good solution to a problem, why reinvent the wheel?


This discussion hinges solely on whether it’s a false or a true analogy, and therefore on whether a copyright-cleared training dataset is necessary or not.


Last time this happened with a human, people were very angry; the guy who copied other artwork even got cancelled by his company. But you're right that it actually doesn't work that way in music.


As someone who's shifted careers twice because disruptive technologies made some other options impractical, I can definitely appreciate that some artists are very upset about the idea of maybe having to change their plans for the future (or maybe not, depending on the kind of art they make), but all art is built on art that came before.

How is training AI on imagery from the internet without permission different than decades of film and game artists borrowing H. R. Giger's style for alien technology?[1]

How is it different from decades of professional and amateur artists using the characteristic big-eyed manga/anime look without getting permission from Osamu Tezuka?

Copyright law doesn't cover general "style". Try to imagine the minefield that would exist if it were changed to work that way.

[1] No, I don't mean Alien, or other works that actually involved Giger himself.


> Copyright law doesn't cover general "style". Try to imagine the minefield that would exist if it were changed to work that way.

We don’t need to “try to imagine”, we just need to wait a bit and watch Walt’s reanimated corpse and army of undead lawyers come out swinging for those “mice in the general style of Mickey Mouse”.


Intellectual property and copyright are entirely different things, and Disney would come after you for making those kinds of images with or without AI. I wish people in the fight against AI would stop trotting out this argument; it muddies stronger arguments against it.


Copyright is a subset of intellectual property.

Intellectual property generally includes copyright, patents, trademark, and trade secrets, though there are broader claims such as likeness, celebrity rights, moral rights (e.g., droit d'auteur in French/EU law), and probably a few others since I began writing this comment (the scope seems to be increasing, generally).

I suspect you intended to distinguish trademark and copyright.


So who's that mythical artist that hasn't seen and learned from the works of other artists? After all, these works will have left an imprint in their neural connections, so by the same argument their works are just as derivative, or "stolen".


These are not artists being inspired by the works of other artists, these are programmers taking the work of artists and then claiming to create original works when in fact they are automatically generated derivatives.

Try telling one of the programmers to produce a work of art based on a review of all of the works that went into training the models and see how it works out.


Modern artists use Photoshop and benefit from a lot of computational tools already. There isn’t much difference between a computational or AI-assisted tool such as a “paint style digital brush” or “inpainting” and a tool such as a physical brush, paint knife, or toothbrush when used by the artist to achieve an effect. There is no universal rule that says only manually made art is real art. Collage artists who literally copy and paste other people’s photos are also making art. In fact, Photoshop already incorporates many AI-assisted tools in the artist’s repertoire, and being able to generate unique images from a statistical merging of all the art styles online is just another tool in this fashion. Automation is the foundation of all our progress: it is just the enhancement of another tool that replaces our hands and makes them (metaphorically) bigger so that we can build bigger and better things constantly.

Ok, so now many more people can generate cool-looking images in an automatic fashion. So what? It just means we’ve raised the bar… for what can be considered cool.


What distinguishes a derivative from an original work? What is it about AI-generated art which makes it so clearly derivative, in your mind?


That the process is automated. That is one of the important tests of originality, that something is not created in a mechanical fashion.


> that something is not created in a mechanical fashion.

I wonder if the nerds have shot themselves in the foot here with terminology? I suspect the nerd’s lawyers would have been much happier if the entire field was named “automated mechanical creativity” instead of “artificial intelligence”. It’d be kinda amusing to see the whole field of study lose in court because of their own persistent claims that what they’re doing is not just “creating in a mechanical fashion” but creating “intelligence” which can therefore be held to account for copyright infringement. Shades of Al Capone getting busted for taxes…


Good point, I had not thought of that, but terminology really matters with stuff like this and you may well be right.


I submit that human artists are, at the most fundamental level, no less "mechanical". They're just more complex.

Also, should a human artist creating a pastiche count as copyright infringement as well?


Your division of 'artists' and 'programmers' into separate tribes is almost too telling.


Existing art trains the neural nets in human artists as well. All art is derivative. No art is wholly unique.

Will human artists be able to compete with artificial artists commercially? If not, is that bad or is it progress, like Photoshop or Autotune?


So I employ quite a few artists, and I don't see the problem. This whole thing basically seems more like a filter in Photoshop than something that will take a person's job.

If artists I employ want to incorporate this stuff into their workflow, that sounds great. They can get more done. There won't be fewer artists on payroll; just more and better art will be produced. I don't even think it is at the point of being incorporated into a workflow yet, so this really seems like a nothing burger to me.

At least GitHub Copilot is useful. This stuff is really not useful in a professional context, and the idea that it is going to take artists' jobs really doesn't make any sense to me. I mean, if there aren't any artists, then who exactly do I have that is using these AI tools to make new designs? If you think the answer to that is just some intern, then you really don't know what you're talking about.


With respect, you need to pay more attention to how and why these networks are used. People write complex prompts containing things like "trending on artstation" or "<skilled artist's name>" then use unmodified AI output in places like blog articles, profile headers, etc where you normally would have put art made by an artist.

Yes, artists can also utilize AI as a photoshop filter, and some artists have started using it to fill in backgrounds in drawings, etc. Inpainting can also be used to do unimportant textures for 3d models. But that doesn't mean that AI art is no threat to artists' livelihoods, especially for scenarios like "I need a dozen illustrations to go with these articles" where quality isn't so important to the commissioner that they are willing to spend an extra few hundred bucks instead of spending 15 minutes in midjourney or stable diffusion.

As long as these networks continue being trained on artists' work without permission or compensation, they will continue to improve in output quality and muscle the actual artists out of work.


That's only one side of the coin. If a tool is so advanced that it takes away the easy applications, then it's also advanced enough to create novel fields.

Take for example video games. They distracted many people from movies, but also created a huge new field, hungry for talent. Or another one: quite a few genres calcified into distinctive, boring styles over the years (see anything related to manga/anime as an example) simply because those styles require less mechanical work and are cheaper to produce. They could use a deep refresh. This tech will also lead to novel applications, created by those who embraced it and are willing to learn the increasingly complex toolset. That's what's been happening over the last several decades, which have seen several tech revolutions.

> As long as these networks continue being trained on artists' work

This misses the point. The real power of those things is not in the collection of styles baked into it. It's in the ability to learn new stuff. Finetuning and style transfer is what all the wizards do. Construct your own visual style by hand, make it produce more of that. And that's not just about static 2D images; neither do 2D illustrators represent all artists in the broad sense. Everyone who types "blah blah in the style of Ilya Kuvshinov" or is using img2img or whatever is just missing out, because the same stuff is going to be everywhere real soon.


If you are looking for a bunch of low quality art there are tons of free sources for that already. If this is what you mean when you say "putting artists out of work" you are really talking about less than 1% of where artist money is spent.


OK, so your argument here is "it doesn't matter because the art being replaced by AI is cheap and/or mass-produced"? What happens once the quality of the network-generated art goes up and it's able to displace more expensive works? What is the basis for your argument that this is "less than 1%"?


Art will get better and we will have artists that use AI tools to produce a lot more of it faster and entirely new professions will emerge as an evolution in art occurs and the world gets better.

This is like saying that photoshop is going to put all the artists out of work because one artist can now do the work of a team of people drawing by hand. So far these AIs are just tools. Tools help humans to produce more and the economy keeps chugging ever upwards.

There is no upper limit of how much art we need. Marvel movies and videogames will just keep looking better and better as our artists increase their capabilities using AI tools to assist them.

Daz3d didn't put modelers and artists out of work, and what Daz and iClone can do is way, way more impressive (and useful in a professional setting) than AI art.


Is 'looking at something' equivalent to stealing it? The use by all these diffusion networks is pretty much the definition of transformative. If a person was doing this it wouldn't even be interesting enough to talk about it. When a machine does it somehow that is morally distinct?


>Artists have all my sympathy.

Humans have my sympathy. We are literally on the brink of multiple major industries being wiped out. What was only theoretical for the last 10-15 years is starting to happen right now.

In a few short years most humans will not be able to find any employment, because machines will be more efficient and cheaper. Society will transform beyond any previous transformation in history. Most likely it's going to be very rough. But we just keep arguing that of course our specific jobs are going to stay.


You literally just did what the parent just argued against.


That is the point of my comment :) I argue that the coming changes are underestimated, that there is not enough awareness, and thus not enough discussion of or preparedness for them. I would rather have a stable societal transition than hunger, riots, or civil or world war.


If we take your vision at face value, what do you think should be done?


We should start having a conversation about what the new social contract will look like and when and how it should be phased in.


Honestly, I don't know. I've spent the last few days thinking about all this more seriously than in the last 20 years.

Essentially, we are going to move away from the market economy, money, and private property. The problem is that once these things go, personal freedom goes as well. So either accept the inevitable totalitarian society, or something else? But what?


This sort of thing was thought about 20 years ago in the story “Manna” by Marshall Brain - https://marshallbrain.com/manna1

I have no idea how well it holds up to modern reading, but I found it interesting at the time.

He posits two outcomes - in the fictionalised US the ownership class owns more and more of everything, because automation and intelligence remove the need for workers and even most technicians over time. Everyone else is basically a prisoner given the minimum needed to maintain life.

Or we can become “socialist” in a sort of techno-utopian way, realising that the economy and our laws should work for us and that a post-labor society should be one in which humans are free from dependence on work rather than defined by it.

Does this latter one imply a total lack of freedom? It certainly implies dependence on the state, but for most people (more or less by definition) an equal share would be a better share than they can get now, and they would be free to pursue art or learning or just leisure.


Thank you for that fascinating read!


> But I have very little sympathy for those perpetuating this tiresome moral panic (a small number of actual artists, whatever the word "artist" means)

> A small number of actual artists

It's extremely funny that you say this, because taking a look at the Trending on Artstation page tells a different story.

https://www.artstation.com/?sort_by=trending


That's what the b) was about, yes.

And ironically, the overwhelming majority of the knowledge used by these models to produce pictures that superficially look like their work (usually not at all) is not coming from any artworks at all. It's as simple as that. They are mostly trained on photos, which constitute the bulk of the models' knowledge about the real world. Those are the main source of coherency. Artist names and keywords like "trending on artstation" are just easily discoverable and very rough handles for pieces of the models' memory.


I don't think the fact that photos make up the vast majority of the training set is of any particular significance.

Can SD create artistic renderings without actual art being incorporated? Just from photos alone? I don't believe so, unless someone shows me evidence to the contrary.

Hence, SD necessitates having artwork in its training corpus in order to emulate style, no matter how little it's represented in the training data.


SD has several separate parts. In the most simplistic sense (not entirely accurate to how it functions), one translates English into a semantic address inside the "main memory", and another one extracts the contents of the memory that the address refers to. If you prevent the first one (CLIP) from understanding artists' names by removing the correspondence between names and addresses, the data will still be there and can be addressed in any other way, for example with custom-trained embeddings. Even if you remove artworks from the dataset entirely, you can easily finetune it on anything you want using various techniques, because the bulk of the training ($$$!) has already been done for you, and the coherency, the knowledge of how things look in general, shapes, lighting, poses, etc., is already there. You only need to skew it towards your desired style a bit.

Style transfer combined with the overall coherency of pre-trained models is the real power of these. "Country house in the style of Picasso" is generally not how you use this at full power, because "Picasso" is a poor descriptor for particular memory coordinates. You type "Country house" (a generic descriptor it knows very well) and provide your own embedding or any kind of finetuned addon to precisely lean the result towards the desired style, whether constructed by you or anyone else.

So, if anyone believes that this thing would drive the artists out of their jobs, then removing their works from the training set will change very little as it will still be able to generate anything given a few examples, on a consumer GPU. And that's only the current generation of such models and tools. (which admittedly doesn't pass the quality/controllability threshold required for serious work, just yet)
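
To make the embedding route concrete, here's a minimal sketch of what that looks like with the current open tooling (the diffusers library; the model and concept names are just examples lifted from its docs, not a claim about what any particular wizard uses):

    # pip install diffusers transformers accelerate
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Load a textual-inversion embedding trained on a handful of images;
    # it adds a new pseudo-token that the text encoder (CLIP) understands.
    pipe.load_textual_inversion("sd-concepts-library/gta5-artwork")

    # A generic descriptor plus the learned token, instead of an artist's name.
    image = pipe("a country house, <gta5-artwork> style").images[0]
    image.save("country_house.png")

The point being: the style handle is something you load or train yourself; the artist-name keywords are the least of it.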


> The first is that textual input and treating the model as a function (command in -> result out) are sufficient for anything. No, this is a fundamentally deficient way to give artistic directions, which is further handicapped by primitive models and weak compute. Text alone is a toy

Some artists just do the descriptive part though, right? The name I can think of is Sol LeWitt, but I'm sure there are others. A lot of it looks like it could be programmed, but might be tricky.


I'm mostly seeing software developers looking at the textual equivalent, GPT-3, and giving a spectrum of responses from "This is fantastic! Take my money so I can use it to help me with my work!" to "Meh, buggy code, worse than dealing with a junior dev."

I think the two biggest differences between art AI and code AI are that (a) code that's only 95% right is just wrong, whereas art can be very wrong before a client even notices [0]; and (b) we've been expecting this for ages already, to the extent that many of us are cynical and jaded about what the newest AI can do.

[0] for example, I was recently in the Cambridge University Press Bookshop, and they sell gift maps of the city. The background of the poster advertising these is pixelated and has JPEG artefacts.

It's highly regarded, and the shop has existed since 1581, and yet they have what I think is an amateur-hour advert on their walls.


> code that's only 95% right is just wrong,

I know what you mean, but thinking about it critically, this is just wrong. All software has bugs in it. Small bugs, big bugs, critical bugs, security bugs, everything. No code is immune. The largest software used by millions every day has bugs. Library code that has existed and been in use for 30 years has bugs.

I don't think you were actually thinking of this in your comparison, but I think it's actually a great analogy - code, like art, can be 95% complete, and that's usually enough. (For art, "looks good and is what I wanted" is enough; for code, "does what I want right now, never mind edge cases" is enough.)


This depends entirely on _how_ the code is wrong. I asked ChatGPT the other day to write me code in Python that would calculate SHAP values for a given sklearn model. It returned code that ran, and even _looked_ like it did the right thing at a cursory glance. But I've written a SHAP package before, and there were several manipulations it got wrong. I mean completely wrong. You would never have known the code was wrong unless you knew how to write it in the first place.
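
For reference, the canonical route is only a few lines with the shap package itself (a sketch assuming a fitted sklearn tree model; the dataset here is just for illustration). It was the hand-rolled reimplementation of these manipulations that went subtly wrong:

    import shap
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor

    X, y = load_diabetes(return_X_y=True)
    model = RandomForestRegressor(n_estimators=100).fit(X, y)

    # TreeExplainer computes exact SHAP values for tree ensembles.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)  # one attribution per feature per row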

To me, code that is 95% correct will either fail catastrophically or give very wrong results. Imagine if the code you wrote was off 5% for every number it was supposed to generate. Code that is 99.99% correct will introduce subtle bugs.

* No shade to ChatGPT; writing a function that calculates SHAP values is tough lol, I just wanted to see what it could do. I do think that, given time, it'll be able to write a day's worth of high quality code in a few seconds.


The thing about ChatGPT is that it is a warning shot. And all these people I see talking about it are laughing about how the shooter missed them.

Clearly ChatGPT is going to improve, and AI development is moving at a breakneck pace and accelerating. Dinging it for totally fumbling 5% or 10% of written code is completely missing the forest for the trees.


Yeah, but people were also saying this about self-driving cars, and guess what: that long tail is super long, and it's also far fatter than we expected. 10 years ago people were saying AI was coming for taxi drivers, and as far as I can tell we're still 10 years away.

I'm unimpressed by ChatGPT because the hype around it is largely the same as it was for GitHub Copilot, and Copilot fizzled badly. (Full disclosure: I pay for Copilot because it is somewhat useful.)


I wonder if some of this is the 80/20 rule. We're seeing the easy 80% of the solution, which took 20% of the time. We still have the hard 20%, i.e. most of the work, to go for some of these new techs.


Replacing 80% of a truck driver's skill would suck, but something with 80% of our skill would be an OK programmer.


Considering the deep conv nets that melted the last AI winter happened in 2012, you are basically giving it 40 years till 100%.


Tesla makes self-driving cars that drive better than humans. The reason you have to touch the steering wheel periodically is political/social, not technical. An acquaintance of mine reads books while he commutes 90 minutes from Chattanooga to work in Atlanta once or twice a week. He's sitting in the driver's seat, but he's certainly not driving.

The political/social factors which apply to the life-and-death decisions made driving a car, don't apply to whether one of the websites I work on works perfectly.

I'm 35, and I've been paid to write code for about 15 years. To be honest, ChatGPT probably writes better code than I did at my first paid internship. It's got a ways to go to catch up with even a junior developer in my opinion, but it's only a matter of time.

And how much time? The expectation in the US is that my career will last until I'm 65ish. That's 30 years from now. Tesla has only been around 19 years and now makes self-driving cars.

So yeah, I'm not immediately worried that I'm going to lose my job to ChatGPT in the next year, but I am quite confident that my role will either cease existing or drastically change because of AI before the end of my career. The idea that we won't see AI replacing professional coders in the next 30 years strains credulity.

Luckily for me, I already have considered some career changes I'd want to do even if I weren't forced to by AI. But if folks my age were planning to finish out their careers in this field, they should come up with an alternative plan. And people starting this field are already in direct competition to stay ahead of AI.


I was under the impression that Tesla's self-driving is still not fully reliable. For example, a recent video shows a famous YouTuber having to take manual control 3 times in a 20-minute drive to work [0]. He also mentioned how stressful it was compared to normal driving.

[0] https://www.youtube.com/watch?v=9nF0K2nJ7N8


If you watch the video you linked, he admits he's taking manual control not because the car is unsafe but because he's embarrassed. It's hard to tell from the video, but it seems like the choices he makes out of embarrassment are actually riskier than what the Tesla was going to do.

It makes sense. My own experience, nearly always driving a non-Tesla at the speed limit, is that other drivers will try to pressure you into doing dangerous stuff so they can get where they're going a few seconds faster. I sometimes give in to that pressure, but the AI doesn't feel that pressure at all. So if you're paying attention and see the AI not giving in to that pressure, the tendency is to take manual control so you can. But that's not safer - quite the opposite. That's an example of the AI driving better than the human.

On the opposite end of the social anxiety spectrum, there's a genre of pornography where people are having sex in the driver's seats of Teslas while the AI is driving. They certainly aren't intervening 3 times in 20 minutes, and so far I don't know of any of these people getting in car accidents.


I'm doubtful - there's a pretty big difference between writing a basic function and even a small program, and that's all I've seen out of these kinds of AIs thus far. They still get those wrong regularly, because they don't really understand what they're doing - they're just mixing and matching their training set.

Roads are extremely regular, as things go, and as soon as you are off the beaten path those AIs start having trouble too.

It seems that in general that the long tail will be problematic for a while yet.


> [...] Copilot fizzled badly. (Full disclosure: I pay for Copilot because it is somewhat useful).

In what sense did Copilot fizzle badly? It's a tool that you incorporated into your workflow and that you pay money for.

Does it solve all of programming? No, of course not, and it's far from there. I think even if it improves a lot it will not be close to replacing a programmer.

But a tool that lets you write code 10x or 100x faster is a big deal. I don't think we're far from a world in which every programmer has to use AI to be even somewhat proficient at their job.


Sure, it will improve, but I think a lot of people think "Hey, it almost looks human quality now! Just a bit more tweaking and it will be human quality or better!". A more likely case is that the relatively simple statistical modeling tools that ChatGPT uses (which are very different from how our brains work, not that we fully understand how our brains work) have a limit to how well they can work, and they will hit a plateau (and are probably near it now). I'm not one of those people who believe strong AI is impossible, but I have a feeling that strong AI will take more than just manipulating a text corpus.


I'd be surprised if it did only take text (or even language in general), but if it does only need that, then given how few parameters even big GPT-3 models have compared to humans, it will strongly imply that PETA was right all along.


Excellent summation. The majority of software developers work on CRUD-based frontend or backend development. When this thing's attention goes beyond the 4k tokens it's limited to, far fewer developers will be needed in general. In the same way, fewer artists or illustrators will be needed for making run-of-the-mill marketing brochures.

I think the majority won't know what hit them when the time comes. My experience with ChatGPT has been highly positive, changing me from a skeptic to a believer. It takes a bit of skill to tune the prompts, but I got it to write frontend code, backend code, unit test cases, and automation test cases, and to generate test data flawlessly. I have seen and worked with much worse developers than this current iteration.


The thing is, though, it's trained on human text. And most humans are, per definition, very fallible. Unless someone made it so that it can never be trained on subtly wrong code, how will it ever improve? Imho AI can be great for suggesting which method to use (Visual Studio has this, and I think there is an extension for Visual Studio Code for a couple of languages). I think fine-grained things like this are very useful, but code snippets are just too coarse to actually be helpful.


Improve itself through experimentation with reinforcement learning. This is how humans improve too. AlphaZero does it.


The amount of work in that area of research is substantial. You will see world shattering results in a few years.

Current SOTA: https://openai.com/blog/vpt/


Anyone who has doubts has to look at the price. It’s free for now, and will be cheap enough when OpenAI starts monetizing. Price wins over quality. That has been demonstrated time and time again.


Depends on the details. Skip all the boring health and safety steps, you can make very cheap skyscrapers. They might fall down in a strong wind, but they'll be cheap.


After watching lots of videos from third-world countries where skyscrapers are built and then torn down a few years later, I think I know exactly how this is going to go.


It does depend on the details. In special fields, like medical software, regulation might alter the market—although code even there is often revealed to be of poor quality.

But of all the examples of cheap and convenient beating quality - photography, film, music, et al., the many industries that digital technology has disrupted - newspapers are more analogous than builders. Software companies are publishers, like newspapers. And newspapers had entire building floors occupied by highly skilled mechanical typesetters, who have long since been replaced. A handful of employees on a couple of computers could do the job faster, more easily, and at good enough quality.

Software has already disrupted everything else, eventually it would disrupt the process of making software.


This is magical thinking, no different than a cult.

The fundamental design of transformer architecture isn't capable of what you think it is.

There are still radical, fundamental breakthroughs needed. It's not a matter of incremental improvement over time.


I experienced ChatGPT confidently giving incorrect answers about the Schwarzschild radius of the black hole at the center of our galaxy, Sagittarius A*. Both when asked for "the Schwarzschild radius of a black hole with 4 million solar masses" (a calculation) and "the Schwarzschild radius of Sagittarius A*" (a simple lookup).

Both answers were orders of magnitude wrong, and vastly different from each other.
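
For the record, the right answer is a three-line calculation, r_s = 2GM/c^2, here as a quick Python sketch (constants rounded):

    G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30   # SI units
    r_s = 2 * G * (4.0e6 * M_sun) / c**2         # Schwarzschild radius
    print(f"{r_s:.2e} m")                        # ~1.2e10 m, about 0.08 AU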

JS code suggested for a simple database connection had glaring SQL injection vulnerabilities.
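
(The fix is the textbook one: bind parameters instead of splicing strings into the query. A minimal Python/sqlite3 sketch of the difference, since the same class of bug exists in every driver; the table and variable names are made up:)

    import sqlite3

    conn = sqlite3.connect("app.db")
    user_id = input("user id: ")

    # Vulnerable: attacker-controlled text spliced into the SQL.
    #   conn.execute(f"SELECT * FROM users WHERE id = {user_id}")

    # Safe: the driver passes the value as a bound parameter.
    rows = conn.execute(
        "SELECT * FROM users WHERE id = ?", (user_id,)
    ).fetchall()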

I think it's an ok tool for discovering new libraries and getting oriented quickly to languages and coding domains you're unfamiliar with. But it's more like a forum post from a novice who read a tutorial and otherwise has little experience.


My understanding is that ChatGPT (and similar things) are purely language models; they do not have any kind of "understanding" of anything like reality. Basically, they have a complex statistical model of how words are related.

I'm a bit surprised that it got a lookup wrong, but for any other domain, describing it as a "novice" is understating the situation a lot.


Over the weekend I tried to tease out of ChatGPT a sed command that would fix an uber-simple compiler error [0]. I gave up after 4 or 5 tries: while it got the root cause correct ("." instead of "->" because the property was a pointer), it just couldn't figure out the right sed command. That's such a simple task that its failure doesn't inspire confidence in getting more complicated things correct.
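
For reference, the kind of one-liner I was fishing for is just (identifiers made up; the real case had a specific member access to rewrite):

    sed -i 's/myConfig\.value/myConfig->value/g' main.cpp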

This is the main reason I haven't actually incorporated any AI tools into my daily programming yet - I'm mindful that I might end up spending more time tracking down issues in the auto-generated code than I saved using it in the first place.

[0] You can see the results here https://twitter.com/NickFisherAU/status/1601838829882986496


Who is going to debug this code when it is wrong?

Whether 95% or 99.9% correct, when there is a serious bug, you're still going to need people that can fix the gap between almost correct and actually correct.


Sure, but how much of the total work time in software development is writing relatively straightforward, boilerplate type code that could reasonably be copied from the top answer from stackoverflow with variable names changed? Now maybe instead of 5 FTE equivalents doing that work, you just need the 1 guy to debug the AI's shot at it. Now 4 people are out of work, or applying to be the 1 guy at some other company.


> Sure, but how much of the total work time in software development is writing relatively straightforward, boilerplate type code that could reasonably be copied from the top answer from stackoverflow with variable names changed?

It may be a significant chunk of the butt-in-seat-time under our archaic 40hour/week paradigm, but it's not a significant chunk of the programmer's actual mental effort. You're not going to be able to get people to work 5x more intensely by automating the boring stuff, that was never the limiting factor.


Does anyone remember the old maxim, "Don't write code as cleverly as you can because it's harder to debug than it is to write and you won't be clever enough"?


Or the company just delivers features when they are estimated to be done, instead of it taking 5 times longer than expected


Two issues. First, when a human gets something 5% wrong, it's more likely to be a corner case or similar "right most of the time" scenario, whereas when AI gets something 5% wrong, it's likely to look almost right but never produce correct output. Second, when a human writes something wrong they have familiarity with the code and can more easily identify the problem and fix it, whereas fixing AI code (either via human or AI) is more likely to be fraught.


You (and everyone else) seem to be making the classic "mistake" of looking at an early version and not appreciating that things improve. Ten years ago, AI-generated art was at 50%. 2 years ago, 80%. Now it's at 95% and winning competitions.

I have no idea if the AI that's getting code 80% right today will get it 95% right in two years, but given current progress, I wouldn't bet against it. I don't think there's any fundamental reason it can't produce better code than I can, at least not at the "write a function that does X" level.

Whole systems are a way harder problem that I wouldn't even think of making guesses about.


It might improve like Go AI and shock everyone by beating the world expert at everything, or it might improve like Tesla FSD which is annoyingly harder than "make creative artwork".

There's no fundamental reason it can't be the world expert at everything, but that's not a reason to assume we know how to get there from here.


What scares me is a death of progress situation. Maybe it cant be an expert, but it can be good enough, and now the supply pipeline of people who could be experts basically gets shut off, because to become an expert you needed to do the work and gain the experiences that are now completely owned by AI.


Exactly this.

The problem of a vengeful god who demands the slaughter of infidels lies not in his existence or nonexistence, but peoples' belief in such a god.

Similarly, it does not matter whether AI works or it doesn't. It's irrelevant how good it actually is. What matters is whether people "believe" in it.

AI is not a technology, it's an ideology.

Given time it will fulfil its own prophecy as "we who believe" steer the world toward that.

That's what's changing now. It's in the air.

The ruling classes (those who own capital and industry) are looking at this. The workers are looking too. Both of them see a new world approaching, and actually everyone is worried. What is under attack is not the jobs of the current generation, but the value of human skill itself, for all generations to come. And, yes, it's the tail of a trajectory we have been on for a long time.

It isn't the only way computers can be. There is IA instead of AI. But intelligence amplification goes against the principles of capital at this stage. Our trajectory has been to make people dumber in service of profit.


What's under attack is the notion that humans are special - that there's some kind of magic to them that is fundamentally impossible to replicate. No wonder there's a full-blown moral panic about this.


Agreed, but that train left the station in the late 1800s, driven by Darwin and Nietzsche. The intervening one and a half centuries haven't dislodged the "human spirit" in its secular form. We thought we'd overcome "gods". Now, out of discontent and self-loathing we're going to do what Freud warned against, and find a new external something to subjugate ourselves to. We simply refuse to shoulder the burden of being free.


Maybe AI can replicate everything humans can do. But this technology isn't that. It just mass-reads and replicates what humans have already done; actual novel implementations seem out of its grasp (for now). The art scene is freaking out because a lot of art is basically derivative already, but everyone pretended it was not. Coders already knew, and admitted they stole all the time.

The other kinds of AI that seem to be able to arrive at novel solutions basically use a brute-force approach of predicting every outcome when they have perfect information, or a brute-force process of trying everything until something "works". Both of those approaches seem problematic in the "real world". (Though I would find convincing the argument that billions of people all trying things act as a de facto brute-force approach in practice.)

For someone to be able to do a novel implementation in a field dominated by AI might be impossible, because the core foundational skills can't be developed anymore by humans to reach heights the AI hasn't already reached. We are now stuck; things can't really get "better", we just maybe get iterative improvements in how the AI implements the already-arrived-at solutions.

TLDR, lets sic the AI on making a new Javascript framework and see what happens :)


> What is under attack is not the jobs of the current generation, but the value of human skill itself, for all generations to come. And, yes, it's the tail of a trajectory we have been on for a long time.

Wow, yes. This is exactly what I've been thinking but you summed it up more eloquently.


Can't agree more! If people start to believe in it, it will work in some terrible way, even if there is only one algorithm in the black box.



But it could also make it easier to train experts, by acting as a coach and teacher.


Tesla is limited by the processing power contained in the chip of each car. That's not the case for language models; they can get arbitrarily large without much problem with latency. If Tesla could train just one huge model in a data center and deliver it by API to every car I bet self driving cars would have already been a reality.


To be fair to those assumptions, there've been a lot of cases of machine-learning (among other tech) looking very promising, and advancing so quickly that a huge revolution seems imminent—then stalling out at a local maximum for a really long time.


The architecture behind the chatGPT and the other AIs that are making the news won't ever improve so it can correctly write non-trivial code. There is a fundamental reason for that.

Other architectures exist, but you can notice from the lack of people talking about them that they don't produce any output nearly as developed as the chatGPT kind. They will get there eventually, but that's not what we are seeing here.


> The architecture behind the chatGPT and the other AIs that are making the news won't ever improve so it can correctly write non-trivial code. There is a fundamental reason for that.

What is that?


Probably because it doesn't maintain long term cohesion. Transformer models are great at producing things that look right over short distances, but as the output length increases it often becomes contradictory or nonsensical.

To get good output on larger scales we're going to need a model that is hierarchical with longer term self attention.


Whole systems from a single prompt are probably a ways away, but I was able to get further than I expected by asking it what classes would make up the task I was trying to do and then having it write those classes.


And GPT can't fix a bug; it can only generate new text that will have a different collection of bugs. The catch is that programming isn't text generation. But AI should be able to make good, actually intelligent fuzzers - that seems realistic and useful.


> GPT can't fix a bug

It can't? I could've sworn I've seen (cherry-picked) examples of it doing exactly that, when prompted. It even explains what the bug is and why the fix works.


Those are cherry picked, and most importantly, all of the examples where it can fix a bug are examples where it's working with a stack trace, or with an extremely small section of code (<200 lines). At what point will it be able to fix a bug in a 20,000 line codebase, with only "When the user does X, Y unintended consequence happens" to go off of?

It's obvious how an expert at regurgitating StackOverflow would be able to correct an NPE or an off-by-one error when given the exact line of code that error is on. Going any deeper, and actually being able to find a bug, requires understanding of the codebase as a whole and the ability to map the code to what the code actually does in real life. GPT has shown none of this.

"But it will get better over time" arguments fail for this because the thing that's needed is a fundamentally new ability, not just "the same but better." Understanding a codebase is a different thing from regurgitating StackOverflow. It's the same thing as saying in 1980, "We have bipedal robots that can hobble, so if we just improve on that enough we'll eventually have bipedal robots that beat humans at football."


Which examples, the ones where it was right or wrong? It goes back to trusting the source not to introduce new, ever-evolving bugs.


It is only a matter of time. It can understand an error stacktrace and suggest a fix. Somebody has to plug it into an IDE, and then it will start converting requirements to code.


Yes it can; I've been using it for exactly that. "This code is supposed to do X but does Y / has Z error; fix the code."

Sure you can't stick an entire project in there, but if you know the problem is in class Baz, just toss in the relevant code and it does a pretty damn good job.


Sure, but now you only need testers and one coder to fix bugs, where you used to need testers and 20 coders. AI code generators are force multipliers, maybe not strict replacements. And the level of creativity needed to fix a bug, relative to programming something wholly original, is worlds apart.


It can, in some cases. Have you tried it?


Maybe for certain domains it's okay to fail 5% of the time but a lot of code really does need to be perfect. You wouldn't be able to work with a filesystem that loses 5% of your files.


Or a filesystem that loses all of your files 5% of the time.


No need to rag on btrfs.


>> code that's only 95% right is just wrong,

> I know what you mean, but thinking about it critically, this is just wrong. All software has bugs in it. Small bugs, big bugs, critical bugs, security bugs, everything. No code is immune. The largest software used by millions every day has bugs. Library code that has existed and been in use for 30 years has bugs.

All software has bugs, but it's usually far better than "95% right." Code that's only 95% right probably wouldn't pass half-assed testing or a couple of days of actual use.


Fixing the last 5% requires that you understand 100% of the whole. And understanding is the main value added by a programmer, not typing characters into a text editor.


I agree with you. Even software that had no bugs today (if that is possible) could start having bugs tomorrow, as the environment changes (new law, new hardware, etc.)


When AI can debug its own code I’ll start looking for another career.


When it can do that, it's already too late.


> code that's only 95% right is just wrong

It's still worth it on the whole but I have already gotten caught up on subtly wrong Copilot code a few times.


EDIT: I posted this comment twice by accident! This comment has more details, but the other has more answers, so please check the other one!

> code that's only 95% right is just wrong,

I know what you mean, but thinking about it critically, this is just wrong. All software has bugs in it. Small bugs, big bugs, critical bugs, security bugs, everything. No code is immune. The largest software used by millions every day has bugs. Library code that has existed and been in use for 30 years has bugs.

I don't think you were actually thinking of this in your comparison, but I think it's actually a great analogy - code, like art, can be 95% complete, and that's usually enough. (For art, looks good and is what I wanted is enough, for code, does what I want right now, nevermind edge cases is enough.)

The reason ChatGPT isn't threatening programmers is for other reasons. Firstly, its code isn't 95% good, it's more like 80% good.

Secondly, we do a lot more than write one-off pieces of code. We write much, much larger systems, and the connections between different pieces of code, even on a function-to-function level, are very complex.


> The reason ChatGPT isn't threatening programmers is for other reasons. Firstly, its code isn't 95% good, it's more like 80% good.

The role that could plausibly be highly streamlined by a near-future ChatGPT/Copilot is the requirements-gathering business analyst, but the work of developers at Staff level on up sits closer to requiring AGI to even become 30% good. We'll likely see a bifurcation/barbell: Moravec's Paradox on one end, AGI on the other.

An LLM that can transcribe a verbal discussion with a domain expert about a particular business process with high fidelity, give a developer a precis of the domain jargon in a sidebar, extract further jargon created by the conversation, summarize the discussion into documentation, and pull out the hows and whys like a judicious editor might at 80% fidelity, then put out semi-working code at even 50% fidelity; one that works 24x7x365 and automatically incorporates everything it created for you on GitHub before and that your team polished into working code and final documentation?

I have clients who would pay for an initial deployment of that as an appliance/container head end, one that transits the processing through the vendor SaaS's GPU farm but holds the model data at rest within their own network / cloud account boundary. Being able to condense weeks or even months of a team's work into several hours, plus say a handful of developers to tighten and polish the result, would be interesting to explore as a new way to work.


>> "I think the two biggest differences between art AI and code AI are that (a) code that's only 95% right is just wrong, whereas art can be very wrong before a client even notices [0];"

Art can also be extremely wrong in a way everyone notices and still be highly successful. For example: Rob Liefeld.


And wrong in the same way, too: Liefeld has a problem drawing hands! Maybe he was actually ahead of us all and had an AI art tool before the rest of us.


>> whereas art can be very wrong before a client even notices

No actually, that's not how that works. You're demonstrating the lack of empathy that the parent comment brings up as alarming.

Regarding programming, code that's only 95% right can just be run through code assist to fix everything.


Artists are, necessarily, perfectionists about their work — it's the only way to get better than the crude line drawings and wildly wrong anatomy that most people can do.

Frustratingly, most people don't fully appreciate the art, and are quite happy for artists to put in only 20% of the effort. Heck, I'm old enough to remember people who regarded Quake as "photorealistic", some in a negative way, saying this made it a terrible threat to the minds of children who might see the violence it depicted, and others in a positive way, saying it was so good that Riven should've used that engine instead of being pre-rendered.

Bugs like this are easy to fix: `x = x – 4;` which should be `x = x - 4;` (an en dash where the minus sign belongs; any compiler will flag it immediately).

Bugs like this, much harder:

    #include <string.h>  /* for strlen() */

    /* An RC4-style stream cipher; finding the subtle bug is the exercise. */
    #define TOBYTE(x) (x) & 255
    #define SWAP(x,y) do { x^=y; y^=x; x^=y; } while (0)

    static unsigned char A[256];
    static int i=0, j=0;

    void init(char *passphrase) {            /* key schedule */
        int passlen = strlen(passphrase);
        for (i=0; i<256; i++)
            A[i] = i;
        for (i=0; i<256; i++) {
            j = TOBYTE(j + A[TOBYTE(i)] + passphrase[j % passlen]);
            SWAP(A[TOBYTE(i)], A[j]);
        }
        i = 0; j = 0;
    }

    unsigned char encrypt_one_byte(unsigned char c) {   /* one keystream step */
        int k;
        i = TOBYTE(i+1);
        j = TOBYTE(j + A[i]);
        SWAP(A[i], A[j]);
        k = TOBYTE(A[i] + A[j]);
        return c ^ A[k];
    }


I do appreciate that the way in which a piece of code "works" and the way in which a piece of art "works" are in some ways totally different; but I also think that in many cases, notably automated systems that create reports or dashboards, they aren't so far apart. In the end, the result just has to seem right. Even in computer programming, amateur-hour levels of correctness aren't so uncommon, I would say.

I would personally be astonished if any of the distributed systems I've worked on in my career were even close to 95% correct, haha.


Understanding what you are plotting and displaying in the dashboard is the complicated part, not writing the dashboard. Programmers are not very afraid of AI because it is still just a glorified frontend to StackOverflow, and SO has not destroyed the demand for programmers so far. Also, understanding the subtle logical bugs and errors introduced by such boilerplate AI tools requires no less expertise than knowing how to write the code upfront. Debugging is not a very popular activity among programmers for a reason.

It may be that one day AI will also make their creators obsolete. But at that point so many professions will be replaced by it already, that we will live in a massively changed society where talking about the "job" has no meaning anymore.


A misleading dashboard is really, really bad. This is absolutely not something I would be happy to hand to an AI just because "no one will notice". The fact that no one will notice errors until it's too late is exactly why dashboards need extra effort from their author to actually test the thing.

If you want to give programming work to an AI, give it the things where incorrect behaviour is going to be really obvious, so that it can be fixed. Don't give it the stuff where everyone will just naively trust the computer without thinking about it.


The other day I copied a question from leetcode and asked GPT to solve it. The solution had the correct structure to be interpreted by leetcode (a Solution class with the correct method name and signature, and with the same linked-list implementation that leetcode would use). It made me feel like GPT was not working out a solution to anything, just copying and pasting some code it had read on the internet.


A lot depends on what the business costs are of that wrong 5%.

If the actual business costs are less than the price of a team of developers... welp, it was fun while it lasted.


Setting aside questions of whether there is copyright infringement going on, I think this is an unprecedented case in the history of automation replacing human labor.

Jobs have been automated since the industrial revolution, but this usually takes the form of someone inventing a widget that makes human labor unnecessary. From a worker's perspective, the automation is coming from "the outside". What's novel with AI models is that the workers' own work is used to create the thing that replaces them. It's one thing to be automated away, it's another to have your own work used against you like this, and I'm sure it feels extra-shitty as a result.


> From a worker's perspective, the automation is coming from "the outside".

Not if the worker is an engineer or similar. Some engineers built tools that improved tool-building.

And this started even earlier than the industrial revolution. Think for example of Johannes Gutenberg. His real important invention was not the printing press (this already existed) and not even moveable types, but a process by which a printer could mold his own set of identical moveable types.

I see a certain analogy between what Gutenberg's invention meant for scribes then and what Stable Diffusion means for artists today.

Another thought: in engineering we do not have extremely long-lasting copyright, but much shorter protection periods via patents. I have never understood why software has to be protected for such long copyright periods and not for much shorter patent-like periods. Perhaps we should look for something similar for AI and artists: an artist has copyright as usual for close reproductions, but 20 years after publication the work may be used without his or her consent for training AI models.


>Not, if the worker is an engineer or similar. Some engineers built tools that improved building tools

Those engineers consented to creating the new tools, so that's different.


That was not what was at issue in my comment. It referred to a sentence where the Parent was not talking about Stable Diffusion in particular, but about what he claimed was a general difference from the usual conditions since the industrial revolution. My comment merely referred to the fact that this is not generally true everywhere (in most specific cases, of course, it may very well be true). In this context, the real difference, however, with regard to Stable Diffusion is not the involuntary nature of the artists' "contributions", but the fact that the artists are not usually the developers of the AI software. In this respect, the Parent is right that for them all this comes from "the outside". It is just that I wanted to point out that this does not apply equally to all professional groups.


Some did, others did not. But those who did could still use the entire engineering corpus of knowledge they have studied towards their goal, even if they learned it from those who would not approve.


I don't know why we keep framing artists like they're textile workers or machinists.

The whole point of art is human expression. The idea that artists can be "automated away" is just sad and disgusting, and the number of people who want art but don't want to pay the artist is astounding.

Why are we so eager to rid ourselves of what makes us human to save a buck? This isn't innovation, it's self-destruction.


Most art consumed today isn't about human expression, and it hasn't been for a very long time. Most art is produced for commercial reasons with the intent of making as much profit as possible.

Art-as-human-expression isn't going anywhere because it's intrinsically motivated. It's what people do because they love doing it. Just like people still do woodworking even though it's cheaper to buy a chair from Walmart, people will still paint and draw.

What is going to go away is design work for low-end advertising agencies or for publishers of cheap novels or any of the other dozens of jobs that were never bastions of human creativity to begin with.


It's an important distinction you make and hard to talk about without a vocabulary. The terms I've seen music historians use for this concept were:

- generic expression: commercial/pop/entertainment; audience makes demands on the art

- autonomous expression: artist's vision is paramount; art makes demands on the audience

Obviously these are idealized antipodes. The question of whether it is the art making demands on the audience or the audience making demands on the art is especially insightful, in my opinion. Given this rubric, I'd say AI-generated art must necessarily belong to "generic expression" simply because its output has to meet fitness criteria.


I think fine artists and others who make and sell individual art pieces for a living will probably be fine, yeah. (Or at least won't be struggling much worse than they are already.)

There are a lot of working commercial artists in between the fine art world and the "cheap novels and low-end advertising agencies" you dismiss, and there's no reason to think AI art won't eat a lot of their employment.


Of course it will. Their employment isn't sacred. They have a skill, we're teaching that skill to computers, and their skill will be worth less.

I don't pay someone to run calculations for me, either, also a difficult and sometimes creative process. I use a computer. And when the computer can't, then I either employ my creativity, or hire a creative.


Okay, but that's a different argument from your original. First you said "only bad artists will lose their jobs," now it's "good artists will lose their jobs but I don't care."


It's a different person. I'm the person you first replied to, and I don't believe good artists will lose their jobs.

This was my reply: https://news.ycombinator.com/item?id=34005604

I also agree that artist employment isn't sacred, but after extensive use of the generation tools I don't see them replacing anything but the lowest end of the industry, where they just need something to fill a space. The tools can give you something that matches a prompt, but they're only really good if you don't have strong opinions about details, which most middle tier customers will.


Just like AI can't replace programmers completely because most people are terrible at defining their own software requirements, AI won't replace middle-tier commercial artists because most people have no design sense.

Commercial art needs to be eye catching and on brand if it's going to be worth anything, and a random intern isn't going to be able to generate anything with an AI that matches the vision of stakeholders. Artists will still be needed in that middle zone to create things that are on brand, that match stakeholder expectations, and that stand out from every other AI generated piece. These artists will likely start using AI tools, but they're unlikely to be replaced completely any time soon.

That's why I only mentioned the bottom tier of commercial art as being in danger. The only jobs that can be replaced by AI with the technology that we're seeing right now are in the cases where it really doesn't matter exactly what the art looks like, there just has to be something.


Because when people discuss "art" they are really discussing two things.

The first is static 2D images that usually serve a commercial purpose: logos, clip art, game sprites, web page design and the like.

And the second is pure art whose purpose is more for the enjoyment of the creator or the viewer.

Business wants to fully automate the first case, and most people view it as having nothing to do with the essence of humanity. It's simply dollars for products; but it's also one of the very few ways that artists can actually have paying careers built on their skills.

The second will still exist, although almost nobody in the world can pay bills off of it. And I wouldn't be shocked if ML models start encroaching there as well.

So a lot of what's being referred to is more like textile work. And anyone who can type a few sentences can now make "art", significantly lowering the barriers to entry. Maybe a designer comes in and touches it up.

The short-sighted part is people thinking that this will somehow stay specific to art, and that their own cherished field is immune.

Programming will soon follow. Any PM, "soon enough", will be able to write text to generate a fully working app. And maybe a coder comes in to touch it up.


You're defining the word "art" in one sentence and then using a completely different definition in the next sentence. Where are these people who want art, as you've defined it, but don't want to pay? Most of the people you're referring to want visual representations of their fursonas, or D&D characters, or want marketing material for their product. They're not trying to get human expression.

In the sense that art is a 2D visual representation of something, or a marketing tool that evokes a biological response in the viewer, art is easy to automate away. This is no different than when the camera replaced portraitists. We've just invented a camera that shows us things that don't exist.

In the sense that art is human expression, nobody has even tried to automate that yet and I've seen no evidence that expressionary artists are threatened.


It's ironic seeing your earlier comment on ChatGPT coding and then this. If anything is easier to automate, it's programming, which can be rigorous and rule-based, while art really isn't. It's only "easy" for those who don't understand it, which is what the person is actually talking about.

You're in for a rude awakening when you get laid off and replaced with a bot that creates garbage code that is slow and buggy but works, so the boss gets to save on your salary. "But it's slow, redundant, and looks like it was made by someone who just copy-pasted endlessly from StackOverflow": your boss won't care, he just needs to make a buck.


> The whole point of art is human expression.

For someone seeking sound/imagery/etc. resulting from human expression (i.e., art), it makes sense that it can't be automated away.

For someone seeking sound/imagery/etc. without caring whether it's the result of human expression (e.g., AI artifacts that aren't art), it can be automated away.


The idea that artists can be automated away is really just kind of dumb, not because people like AI-created art and can get it cheap, but because it has no real impact on the "whole point" of the art... the creation of the art. Pure art, as human expression, has no dependency on money. Anecdotally, I very much enjoy painting and music (and coding) as art forms but have never sold a painting nor a song in my life. Just because someone won't pay you for something doesn't mean it has no value.

As far as money goes... in the long run artists will still make money fine, as people will value human-generated (artisanal) works. Just as people like hand-made stuff today, even though you can get machine-made stuff way cheaper. You may not have the generic jobs of cranking out stuff for advertisements (and such), but you'll still have artists.


The conversation isn't about you or your hobby, it's about professional artists and illustrators, who are already being automated away by AI.


Professional artists have no chance of being automated away. They need all the productivity tools they can get.

The ones at risk (and complaining the most) are semipro online artists who sell one image at a time, like fanart commissions.


I follow plenty of artists on Elon's hellsite and professional artists of all stripes are upset about it. Jobs are already disappearing, being replaced entirely by AI and "prompt engineers" or people just using AI to copy someone's style for their portfolio. Granted, it isn't endemic yet, but the big Indiana Jones stone ball of progress is definitely rolling in that direction.


That is not what the post I was responding to was about; it was about art as human expression. Nothing was said about it as a profession, and making money creating art makes zero difference to the worth of the art.


Isn't programming a type of art? What type it is has been a debated theme for many years.


I wouldn't say that coming from the inside is unique to AI art. You very much need a welder's understanding of welding in order to be able to automate it, for example.

I'd just say the scale is different. Old-school automation just required one expert to guide the development of an automation. AI art requires the expertise of thousands.


We need a better way to reward the contributing artists who make the diffusion models possible. Might we be able to come up with a royalty model, where the artist who made the original source content used in training the diffusion model gets a fractional royalty based on how heavily it is drawn on when generating the prompted art piece? We want to incentivize artists to feed their works, and original styles, into future AI models.
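
The accounting half of that is trivial to sketch; the attribution half is the open problem. Here's a minimal, hypothetical Python sketch, assuming you somehow already had per-artist influence scores for a given generated image (every name and number below is invented for illustration):

    # Hypothetical pro-rata royalty split. The attribution scores are assumed
    # inputs; deriving them from a diffusion model is the hard, unsolved part.
    def split_royalty(pool_cents: int, attribution: dict[str, float]) -> dict[str, int]:
        total = sum(attribution.values())
        if total <= 0:
            return {artist: 0 for artist in attribution}
        # Each artist gets a share proportional to their score, in whole cents.
        return {artist: int(pool_cents * score / total)
                for artist, score in attribution.items()}

    # e.g. a $5.00 generation fee split across three fictional artists:
    print(split_royalty(500, {"alice": 0.6, "bob": 0.3, "carol": 0.1}))
    # {'alice': 300, 'bob': 150, 'carol': 50}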


This doesn't seem very helpful at all. It seems impossibly difficult to execute, and it probably wouldn't benefit the artists much anyway.


Absolutely this -- and in many (maybe most cases), there was no consent for the use of the work in training the model, and quite possibly no notice or compensation at all.

That's a huge ethical issue whether or not it's explicitly addressed in copyright/ip law.


It is not a huge ethical issue. The artists have always been at risk of someone learning their style if they make their work available for public viewing.

We've just made "learning style" easier, so a thing that was always a risk is now happening.


Let's shift your risk of immediate assault and death up by a few orders of magnitude. I'm sure you'll see that as "just" something that was always a risk, pretty much status quo, right?

Oh, life & death is different? Don't be so sure; there are good reasons to believe that livelihood (not to mention social credit) and life are closely related -- and also, the fundamental point doesn't depend on the specific example: you can't point to an orders-of-magnitude change and then claim we're dealing with a situation that's qualitatively like it's "always" been.

"Easier" doesn't begin to honestly represent what's happened here: we've crossed a threshold where we have technology for production by automated imitation at scale. And where that tech works primarily because of imitation, the work of those imitated has been a crucial part of that. Where that work has a reasonable claim of ownership, those who own it deserve to be recognized & compensated.


The 'reasonable claim of ownership' extends to restricting transmission, not use after transmission.

Artists are poets, and they're railing against Trurl's electronic bard.

[https://electricliterature.com/wp-content/uploads/2017/11/Tr...]


> The 'reasonable claim of ownership' extends to restricting transmission, not use after transmission.

It's not even clear you're correct by the apparent (if limited) support of your own argument. "Transmission" of some sort is certainly occurring when the work is given as input. It's probably even tenable to argue that a copy is created in the representation of the model.

You probably mean to argue something to the effect that dissemination by the model is the key threshold by which we'd recognize something like the current copyright law might fail to apply, the transformative nature of output being a key distinction. But some people have already shown that some outputs are much less transformative than others -- and even that's not the overall point, which is that this is a qualitative change much like those that gave birth to industrial-revolution copyright itself, and calls for a similar kind of renegotiation to protect the underlying ethics.

People should have a say in how the fruits of their labor are bargained for and used. Including into how machines and models that drive them are used. That's part of intentionally creating a society that's built for humans, including artists and poets.


I wasn't speaking about dissemination by the model at all. It's possible for an AI to create an infringing work.

It's not possible for training an AI using data that was obtained legally to be copyright infringement. This is what I was talking about regarding transmission. Copyright provides a legal means for a rights holder to limit the creation of a copy of their image in order to be transmitted to me. If a rights holder has placed their image on the internet for me to view, then copyright does not provide them a means to restrict how I choose to consume that image.

The AI may or may not create outputs that can be considered derivative works, or contain characters protected by copyright.

You seem to be making an argument that we should be changing this somehow. I suppose I'll say "maybe". But it is apparent to me that many people don't know how intellectual property works.


There's a separate question of whether the AI model, once trained on a copyrighted input, constitutes a derived work of that input. In cases where the model can, with the right prompt, produce a near-identical (as far as humans are concerned) image to the input, it's hard to see how it is not just a special case of compression; and, of course, compressed images are still protected by copyright.


You mean the AI model itself, the weights?

A derivative work is a creative expression based on another work that receives its own copyright protection. It's very unlikely that AI weights would be considered a creative expression, and would thus not be considered a derivative work. At this point, you probably can't copyright your AI weights.

An AI might create work that could be considered derivative if it were the creative output of a human, but it's not a human, and thus the outputs are unlikely to be considered derivative works, though they may be infringing.


Yes, I mean the weights.

If the original is a creative expression, then recording it using some different tech is still a creative expression. I don't see the qualitative difference between a bunch of numbers that constitutes weights in a neural net, and a bunch of numbers that constitute bytes in a compressed image file, if both can be used to recreate the original with minor deviations (like compression artifacts in the latter case).
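
For what it's worth, the "weights as a compressed image" intuition is easy to demonstrate at toy scale. Below is a deliberately trivial NumPy sketch (nothing like a real diffusion model; the image and the features are made up for the demo): fit a weight vector to a single synthetic "image", then reconstruct that image from the weights alone. The fitted numbers behave exactly like a lossy encoding of the training datum.

    # Toy sketch: "train" weights on one image, then rebuild the image from
    # the weights alone. Synthetic image and model, purely for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    side = 16
    yy, xx = np.mgrid[0:side, 0:side] / side
    coords = np.stack([yy.ravel(), xx.ravel()], axis=1)       # (256, 2) pixel coords
    image = np.sin(6 * np.pi * coords[:, 0]) * np.cos(4 * np.pi * coords[:, 1])

    B = rng.normal(scale=3.0, size=(2, 128))                  # random Fourier features
    feats = np.concatenate([np.sin(coords @ B), np.cos(coords @ B)], axis=1)
    weights, *_ = np.linalg.lstsq(feats, image, rcond=None)   # 256 fitted numbers

    recon = feats @ weights                                   # decode from weights alone
    print(np.abs(recon - image).max())                        # near zero: the weights
                                                              # now encode the picture

Whether that analogy survives at the scale of billions of training images per model is, of course, the contested question.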


This is like saying that continuously surveilling people when they are outside of their private property and live-reporting it to the internet is not a huge ethical issue. For you are always at risk of being seen when in public and the rest is merely exercising freedom of speech.

Something being currently legal and possible doesn’t mean being morally right.

Technology enables things and sometimes the change is qualitatively different.


Making your code open source always carries the risk of someone copying it and distributing it in proprietary code. That doesn't make it right or ethical. Stealing an unlocked car is unethical. Raping someone who is weaker than you is unethical. Just because something isn't difficult doesn't make it ethical.


This is kind of silly.

Both personal autonomy and private property are social constructs we agree are valuable. Stealing a car and raping a person are things we've identified as unacceptable and codified into law.

And in stark contrast, intellectual property is something we've identified as being valuable to extend limited protections to in order to incentivize creative and technological development. It is not a sacred right, it's a gambit.

It's us saying, "We identify that if we have no IP protection whatsoever, many people will have no incentive to create, and nobody will ever have an incentive to share. Therefore, we will create some protection in these specific ways in order to spur on creativity and development."

There's no (or very little) ethics to it. We've created a system not out of respect for people's connections to their creations, but in order to entice them to create so we can ultimately expropriate it for society as a whole. And that system affords protection in particular ways. Any usage that is permitted by the system is not only not unethical, it is the system working.


That is a hard fight to have, since it is the same for people. An artist will have watched some Disney movie, and that could influence their art in some small way. Does Disney have a right to take a small amount from every bit of art which they produce from then on? Obviously not.

The real answer is AI are not people, and it is ok to have different rules for them, and that is where the fight would need to be.


I really think there's likely to be gigantic class action lawsuits in the near future, and I support them. People did not consent for their data and work to be used in this way. In many cases people have already demonstrated using custom tailored prompts that these models have been trained on copyrighted works that are not public domain.


Consent isn't required if they're making their work available for public viewing.


For VIEWING. This is like blatantly taking your GPL-licensed code and using it for commercial purposes.


A thing that can be viewed can be learned from.

I can't copy your GPL code. I might be able to write my own code that does the same thing.

I'm going to defend this statement in advance. A lot of software developers white knight more than they strictly have to; they claim that learning from GPL code unavoidably results in infringing reproduction of that code.

Courts, however, apply a test [1], in an attempt to determine the degree to which the idea is separable from the expression of that idea. Copyright protects particular expression, not idea, and in the case that the idea cannot be separated from the expression, the expression cannot be copyrighted. So either I'm able to produce a non-infringing expression of the idea, or the expression cannot be copyrighted, and the GPL license is redundant.

[1] https://en.wikipedia.org/wiki/Abstraction-Filtration-Compari...


It's already explicitly legal to train AI using copyrighted data in many countries. You can ignore opt-outs too, especially if you're training AI for non-commercial purposes. Search up TDM exceptions.


Making money through art is already not feasible as a career, as you yourself learned. If you want a job that millions of people do for fun in their free time, you can expect that job to be extremely hard to get and to pay very little.

The solution isn't to halt technological progress to try to defend the few jobs that are actually available in that sector, the solution is to fight forward to a future where no one has to do dull and boring things just to put food on the table. Fight for future where people can pursue what they want regardless of whether it's profitable.

Most of that fight is social and political, but progress in ML is an important precursor. We can't free everyone from the dull and repetitive until we have automated all of it.


>The solution isn't to halt technological progress

Technological progress is not a linear deterministic progression. We decide how to progress every step of the way. The problem is that we are making dogshit decisions for some reason

Maybe we lack the creativity to envision alternative futures. How does a society become so uncreative I wonder


> We decide how to progress every step of the way.

I think the wheels are turning. It's just a resultant movement from thousands of small movements, and nobody is controlling it. If you look, not even wars dent the steady progress of science and technology.


But do you know what reducing the progress of generative modeling will do? Because there seems to be this confusion that generative modeling is about art/music/text.


You'll find its nearly impossible to imagine a world without capitalism.

Capitalism is particularly good at weaponizing our own ideas against us. See large corporations co-opting anti-capitalist movements for sales and PR.

PepsiCo was probably mad that they couldn't co-opt "defund the police", "fuck 12", and "ACAB" like they could with "black lives matter".

Anything near and dear to us will be manipulated into a scientific formula to make a profit, and anything that cannot is rejected by any kind of mainstream media.

See: Capitalist Realism and Manufacturing Consent (for how advertising affects freedom of speech on any media platform).


Perhaps it would be better to say you can't imagine "the future" without capitalism, as history prior to maybe the 1600s offers a less technologically advanced illustration.


Yes thanks.

A lot used to escape the market logic. And I hope we go back to some of that. Not everything has to be profitable / a market.

Example: commons infrastructure, common grazing place for cattle, the woods.

What I wish would be pulled out of the market: schools, hospitals, energy infrastructure.


Escaping a purely profit-driven world would take us so far.

Imagine all the good things that aren't done because they just don't make any money. Instead we put resources towards things that make our lives worse because they're profitable.


This is part of the reason why I am disappointed, but not surprised, by all the flippant response to the concerns voiced here.

So AI puts artists out of a job and in some utopian vision, one day puts programmers out of a job, and nobody has jobs and that's what we should want, right, so why are you complaining about your personal suffering on the inevitable march of progress?

There is little to no worthwhile discussion from those same people about whether the Puritanical worldview of work-to-live will be addressed, or how billionaires/capitalists/holders-of-the-resources will respond to a world where no one has jobs, an income stream, and thus money to buy their products. Because Capitalist Realism has permeated, and we can no longer imagine a plausibly possible future that isn't increasingly technofeudalist. Welcome back to Dune?


It’s pretty easy to imagine a world without capitalism. It’s the one where the government declares you a counterrevolutionary hedonist for wanting to do art and forces you to work for the state owned lithium mine.

Mixed social-democratic economies are nice and better than plutocracies, but they have capitalism; they just have other economic forms alongside it.

(Needing to profit isn’t exclusive to capitalism either. Socialist societies also need productivity and profit, because they need to reinvest.)


Or a world where the government runs hospitals & schools and pays for them, no matter the cost. Effectively pulling those out of the market.

It’s not a fantasy idea. I grew up there and it’s still working.

It isn’t born of beautiful ideals either, but of sheer pragmatism.

A country will always need those things, and those are important things. We might as well invest in them for the long run.

Clearly those are not hip ideas anymore. Oh well.


If it's so important, we could at least pay the people who create the training set. Otherwise, we're relying on unpaid labor for this important progress and if the unpaid labor disappears, we're screwed. How does it seem sensible to construct a business this way?


My empathy for artists is fighting with my concern for everyone else's future, and losing.

It would be very easy to make training ML models on publicly available data illegal. I think that would be a very bad thing because it would legally enshrine a difference between human learning and machine learning in a broader sense, and I think machine learning has huge potential to improve everyone's lives.

Artists are in a similar position to grooms and farriers demanding the combustion engine be banned from the roads for spooking horses. They have a good point, but could easily screw everyone else over and halt technological progress for decades. I want to help them, but want to unblock ML progress more.


Everyone else's future?

I see this as another step toward having a smaller and smaller space in which to find our own meaning or "point" to life, which is the only option left after the march of secularization. Recording and mass media / reproduction already curtailed that really badly on the "art" side of things. Work is staring at glowing rectangles and tapping clacky plastic boards—almost nobody finds it satisfying or fulfilling or engaging, which is why so many take pills to be able to tolerate it. Work, art... if this tech fulfills its promise and makes major cuts to the role for people in those areas, what's left?

The space in which to find human meaning seems to shrink by the day, the circle in which we can provide personal value and joy to others without it becoming a question of cold economics shrinks by the day, etc.

I don't think that's great for everyone's future. Though admittedly we've already done so much harm to that, that this may hardly matter in the scheme of things.

I'm not sure the direction we're going looks like success, even if it happens to also mean medicine gets really good or whatever.

Then again I'm a bit of a technological-determinist and almost nobody agrees with this take anyway, so it's not like there's anything to be done about it. If we don't do [bad but economically-advantageous-on-a-state-level thing], someone else will, then we'll also have to, because fucking Moloch. It'll turn out how it turns out, and no meaningful part in determining that direction is whether it'll put us somewhere good, except "good" as blind-ass Moloch judges it.


What role exactly is it going to take? The role we currently have, where the vast majority of people do work not because they particularly enjoy it but because they’re forced to in order to survive?

That’s really what we’re protecting here?

I’d rather live in a future where automation does practically everything, not for the benefit of some billionaire born into wealth, but simply because that is what the automation is supposed to do. Similar to the economy in Factorio.

Then people can derive meaning from themselves rather that whatever this dystopian nightmare we’re currently living in.

It’s absurdly depressing that some people want to stifle this progress only because it’s going to remove this god awful and completely made up idea that work is freedom or work is what life is about.


I am happy to write code for a hobby. Who is going to pay for that? The oligarchs of our time pay their tax to their own 'charities'. Companies with insane profits buy their own shares.

AI powered surveillance and the ongoing destruction of public institutions will make it hard to stand up for the collective interest.

We are not in hell, but the road to it has not been closed.


The ideal situation is that nobody pays for it. Picture a scenario where the vast majority of resource gathering, manufacturing, and production are all automated. Programmers are out of a job, factory workers are out of a job, miners are out of a job, etc.

Basically the current argument of artists being out of a job but taken to its extreme.

Why would these robots get paid? They wouldn’t. They’d just mine, manufacture, and produce on request.

Imagine a world where chatgpt version 3000 is connected to that swarm of robots and you can type “produce a 7 inch phone with an OLED screen, removable battery, 5 physical buttons, a physical shutter, and removable storage” and X days later arrives that phone, delivered by automation, of course.

Same would work with food, where automation plants the seeds, waters the crops, removes pests, harvests the food, and delivers it to your home.

All of these are simply artists going out of a job, except it’s not artists it’s practically every job humans are forced to do today.

There’d be very little need to work for almost every human on earth. Then I could happily spend all day taking shitty photographs (which AI can already replicate far better than anything I could shoot in real life) without feeling like a waste of a life, because I’d be doing it for fun and not because I’m forced to in order to survive.


Look, I like the paradise you created. You only forgot about who we are.

> There’d be very little need to work for almost every human on earth.

When mankind made a pact with the devil, the burden we got was that we had to earn our bread through sweat and hard labor. This story has survived millennia; there is something to it.

Why is the bottom layer of society not automated by robots? No need to, if the people are cheaper than the robots. If you don't care about humans, you can get quite a lot of labor for a little bit of sugar. If you can work one job to pay your rent, you can possibly do two or three even. If you don't have those social hobbies like universal healthcare and public education, people will stay competitive with robots for a very long time. If people are less valuable, they will be treated as such.

Hell is nearer than paradise.


Humans have existed for close to 200,000 years. Who we ‘are’ is nothing close to what we have today. What humans actually are is an invasive species capable of subjugating nature to fit its needs. I want to just push that further and subjugate nature with automation that can feed us and manufacture worthless plastic and metal media consumption devices for us.

Your diatribe about not caring about humans is ironic. I don’t know where you got all that from, but it certainly wasn’t my previous comment.

I also don’t know what pact you’re on about. The idea of working for survival is used to exploit people for their labor. I guess people with disabilities that aren’t able to work just aren’t human? Should we let them starve to death since they can’t work a 9-5 and work for their food?


> Who we ‘are’ is nothing close to what we have today.

I am wondering why you define being in terms of having. Is that a slip, or is that related to this:

> I want to just push that further and subjugate nature with automation that can feed us and manufacture worthless plastic and metal media consumption devices for us.

Because I can hear sadness in these words. I think we can feel thankful for having the opportunity to observe beauty and the universe and feel belonging to where we are and with who we are. Those free smartphones are not going to substitute that.

I do not mean we have to work because it is our fate or something like that.

> Your diatribe about not caring about humans is ironic.

A pity you feel that way. Maybe you interpreted "If you don't care about humans" as literally you, whereas I meant it as "if one doesn't care".

What I meant was the assumption you seem to make: when a few own plenty of the means of production without needing the other 'human resources' anymore, those few will not spontaneously share their wealth with the world so that the others can have free smartphones and a life of consumption. Instead, those others will have to double down and start to compete with increasingly cheaper robots.

----

The pact in that old story I was talking about deals with the idea that we as humans know how to be evil. In the story, the consequence is that those first people had to leave paradise and from then on have to work for their survival.

I just mentioned it because we exploit not only nature, but other humans too, if we are evil enough. People that end up controlling the largest amounts of wealth are usually the most ruthless. That's why we need rules.

----

> I guess people with disabilities that aren’t able to work just aren’t human? Should we let them starve to death since they can’t work a 9-5 and work for their food?

On the contrary, I think I have been misunderstood. :)


I hear more sadness in your words that are stuck on the idea of having to compete. The idea is to escape that and make exploiting people not an option. If you feel evil and competition for survival is what defines humans, that’s truly sad.

I like my ideal world a lot better.


> The idea is to escape that and make exploiting people not an option.

I am in, but I just wanted to let you know that many have had this idea before. People in the past thought that by now we would barely work anymore. What they got wrong is that the productivity gains didn't reach the common man. They were partly lost through mass consumption, fueled by advertising, and through wealth concentration. Instead, people at the bottom of the pyramid have to work harder.

> I like my ideal world a lot better.

Me too, without being consumption-oriented though. Nonetheless, people who turn a blind eye to the weaknesses of humankind often run into unpleasant surprises. It requires work, lots of work.


IMO it’s impossible with the idea that survival=work. It’s evident here, with people desperately fighting against AI art because it’ll take away people’s jobs. It’s not even just that, though. It’s also the belief that AI art takes away from human art, as if AI chess existing makes Magnus vs. Niemann less exciting.

That same work=survival idea is what incentivizes competitiveness and of course, under that construct, some humans will put on their competitive goggles and exploit others.

There are a lot of human constructs that need to fade away before we can get to a fully automated world. But that’s okay. Humans aren’t the type to get stuck on a problem forever.


I agree with those points; competition especially is an important one. It has been the furnace of our progress too, so it is a double-edged sword.

I think people will not stop forming a social hierarchy, and so competition remains a sticky trait I think.

> work=survival idea is what incentivizes competitiveness

True, the idea that you can do better than the Joneses through hard work is alluring. Having a job is now a requirement for being worthy, and the kind of job defines your social position. Compare with the days of the nobility, though, when noblemen had everything but a job ("what is a weekend?").


>When mankind made a pact with the devil, the burden we got was that we had to earn our bread though sweat and hard labor. This story has survived millennia, there is something to it.

This sounds mystical and mysterious; it would be a mistake to project one mode of production as being the brand all humans must live with until we go extinct.


> it would be a mistake to project one mode of production as being the brand all humans must live with until we go extinct.

Indeed, you should not read it as an imperative. The other commentator was also put on the wrong foot by this.

Maybe I should not have assumed people would know Genesis, https://en.wikipedia.org/wiki/Book_of_Genesis. I should be more explicit: we are not some holy creatures. Don't assume that the few who are gonna reap the rewards will spontaneously share them with others. We are able to let others suffer to gain a personal advantage.


Every other living thing on the planet spends most of its time just fighting to survive. I think that's evidence it's not a 'made up idea', and that it may well be what life is actually about.


What’re you doing on the internet? No other living thing on this planet spends time on the internet. Or maybe we shouldn’t be copying things from nature just because.

Also kinda curious how you deal with people that have disabilities and can’t exactly fight to survive. Me, I’m practically blind without glasses/contacts, so I’ll not be taking life lessons from the local mountain lion, thanks.


Taking a break from my struggle just like a lion takes a nap. I wouldn't agree we are copying nature, rather, we are an inseparable part of it. The fact that we do some things other members don't isn't a convincing argument for me that we're not part of nature.

If you can't support yourself for whatever reason, you rely on others to do that work on your behalf. Social animals, wolves for example, try to provide for their sick and handicapped, but that's only after their own needs are met first.


This is the dictionary definition of appeal to nature fallacy.


That fallacy asserts a judgement that natural=good. I'm not claiming that.

We have physical needs just like other members of the natural world - food for example, if we can't provide food for ourselves, we'll starve to death just like an animal. Why bother judging this situation as good or bad when it's not something that can be changed.


I hope this will happen, but don't you think the result could just be that no one can get food, while the rich watch people dying?


Food will be automated. A lot of it is automated even today. Robots that will till the soil, manage nutrients, plant the seeds, water the plants, and pick the crops. It’ll even be done without pesticides, as robots with vision and plant detection can work 24/7 to remove weeds and pests. Or we’ll switch to hydroponics, still fully automated and done on a mass scale. In this world, there’s no purchasing food. You would just request it and that’s it.

Now imagine that automation in food and expand it to everything. A table factory wouldn’t purchase wood from another company. There’s automation to extract wood from trees and the table factory just requests it and automation produces a table. With robots at every step of the process, there are no labor costs. There’s no shift manager, there’s no CEO with a CEO salary, there’s no table factory worker spending 12+ hours a day drilling a table leg to a table for $3 an hour in China.

That former factory worker in China is instead pursuing their passions in life.


yes, I hope this will happen.


> The space in which to find human meaning seems to shrink by the day

I don’t understand this. It reminds me of the Go player who announced he was giving up the game after AlphaGo’s success. To me that’s exactly the same as saying you’re going to give up running, hiking, or walking because horses or cars are faster. That has nothing to do with human meaning, and thinking it does is making a really obvious category error.


A lot of human meaning comes from providing value to others.

The more computers and machines and institutions take that over, the fewer opportunities there are to do that, and the more doing that kind of thing feels forced, or even like an indulgence of the person providing the "service" and an imposition on those served.

Vonnegut wrote quite a bit about this phenomenon in the arts—how recording, broadcast, and mechanical reproduction vastly diminished the social and even economic value of small-time artistic talent. Uncle Bob's storytelling can't compete with Walt Disney Corporation. Grandma's piano playing stopped mattering much when we began turning on the radio instead of having sing-alongs around the upright. Nobody wants your cousin's quite good (but not excellent) sketches of them, or of any other subject—you're doing him a favor if you sit for him, and when you pretend to give a shit about the results. Aunt Gertrude's quilt-making is still kinda cool and you don't mind receiving a quilt from her, but you always feel kinda bad that she spent dozens of hours making something when you could have had a functional equivalent for perhaps $20. It's a nice gesture, and you may appreciate it, but she needed to give it more than you needed to receive it.

Meanwhile, social shifts shrink the set of people for whom any of this might even apply, for most of us. I dunno, maybe online spaces partially replace that, but most of that, especially the creative spaces, seem full of fake-feeling positivity and obligatory engagement, not the same thing at all as meeting another person you know's actual needs or desires.

That's the kind of thing I mean.

The areas where this isn't true are mostly ones that machines and markets are having trouble automating, so they're still expensive relative to the effort to do it yourself. Cooking's a notable one. The last part of our pre-industrial social animal to go extinct may well be meal-focused major holidays.


> a lot of human meaning comes from providing value to others

This is not intrinsic, though. It is a cultural imperative, so perhaps we need to revisit that?


I'm pretty sure that's about as fundamental as it gets. Help the tribe = feel good; the tribe values your contributions = feel good; your talents and interests genuinely help the tribe = feel very good.

I don't mean this in a "people love work, actually", hooray-capitalism sense (LOL, god no), but the sense that humans tend to be happier and more content when they're helpful to those around them. It used to be a lot easier to provide that kind of value through creative and self-expressive efforts, than it is now. Any true need for artists and creative work (and, for the most part, craftspeople) at the scale of friend & family circles or towns or whatever, is all but completely gone.


Thank you for that response, it did help me understand.

My probably perverse takeaway is that Barbra Streisand might have been wrong: people who need people (to appreciate their work) may not be the luckiest people in the world. One can enjoy one’s accomplishments without needing to have everyone else appreciate them. Or you can find other people with similar interests, and enjoy shared appreciation.

In the extreme, the need for external validation seems to lead to people like Trump and Musk. Perhaps a shift in how we view this would be beneficial for society?


I agree and think the same way. The "just make numbers go up" mentality of happiness is a fallacy. If that were the case, plugging everyone into heroin hibernation machines would be the most optimal path. But anyone with an iota of human sensitivity will see that as horrific, unhappy and a destruction of the human spirit.

Happiness needs loss, fulfillment, pain, hunger, boredom, and fear; these need to be experiences backed up by chemical feeling, lived experience, and memory, and they have to be true.

But here's the thing: the damage is already done, beyond just some art. I don't mean to diminish art, but frankly, look at how hostile, ugly and inhuman the world outside is in any regular city. Literal death worlds in 40k-style fantasy settings look more homey, comfortable, fulfilling, and human.


> My empathy for artists is fighting with my concern for everyone else's future, and losing.

My empathy for artists is aligned with my concern for everyone else's future.

> I want to help them, but want to unblock ML progress more.

But progress towards what end? The ML future looks very bleak to me, the world of "The Machine Stops," with humans perhaps reduced to organic effectors for the few remaining tasks that the machine cannot perform economically on its own: carrying packages upstairs, fixing pipes, etc.

We used to imagine that machines would take up the burden our physical labor, freeing our minds for more creative and interesting pursuits: art, science, the study of history, the study of human society, etc. Now it seems the opposite will happen.


> We used to imagine that machines would take up the burden our physical labor, freeing our minds for more creative and interesting pursuits: art, science, the study of history, the study of human society, etc.

You’re like half a step away from the realization that almost everything you do today is done better, if not by AI, then by someone who can do it better than you; and yet you still do it because you enjoy it.

Now just flip those two, almost everything you do in the future will be done better by AI if not another human.

But that doesn’t remove the fact that you enjoy it.

For example, today I want to spend my day taking photographs and trying to do stupid graphic design in After Effects. I can promise you that there are thousands of humans and even AI that can do a far better job than me at both these things. Yet I have over a terabyte of photographs and failed After Effects experiments. Do I stop enjoying it because I can’t make money from these hobbies? Do I stop enjoying it because there’s some digital artist at corporation X that can take everything I have and do it better, faster, and get paid while doing it?

No. So why would this change things if instead of a human at corporation X, it’s an AI?


I don't get this argument. Artists will not be replaced by AI. AI will become a tool like Photoshop for artists. AI will not replace creativity.


I see two realistic possibilities:

1) It'll no longer be possible to work as an artist without being incredibly productive. Output, output, output. The value of each individual thing will be so low that you have to be both excellent at what you do (which will largely be curating and tweaking AI-generated art) and extremely prolific. There will be a very few exceptions to this, but even fewer than today.

2) Art becomes another thing lots of people in the office are expected to do simply as a part of their non-artist job, like a whole bunch of other things that used to be specialized roles but become a little part of everyone's job thanks to computers. It'll be like being semi-OK at using Excel.

I expect a mix of both to happen. It's not gonna be a good thing for artists, in general.


3) The scope and scale of “art” that gets made gets bigger and we still have plenty of pro artists, designers. AKA art eats the world


Maybe. But art was already so cheap, and talent so abundant, that it was notoriously difficult to make serious money doing it, so I doubt it'll have that effect in general.

It might in a few areas, though. I think film making is poised to get really weird, for instance, possibly in some interesting and not-terrible ways, compared with what we're used to. That's mostly because automation might replace entire teams that had to spend thousands of hours before anyone could see the finished work or pay for it, not just a few hours of one or two artists' time on a more-incremental basis. And even that's not quite a revolution: we used to have very-small-crew films, including tons that were big hits, and films with credits lists like the average Summer blockbuster's these days were unheard of. So this is more a return to how things were before computer graphics entered the picture (even 70s and 80s films, after the advent of the spectacle- and FX-heavy Summer blockbuster, had crews so small that it's almost hard to believe, when you're used to seeing the list of hundreds of people who work on, say, a Marvel film).


> But art was already so cheap

Art is really not cheap. I think people see how little income artists generate and assume that means art is cheap, but non-mass-produced art is pretty much inaccessible to the vast majority of people.


It does do just that, though? Don't tell me nobody is ever surprised while prompting a diffusion model; that can only happen if a significant portion of the creation happens in a way that's non-intuitive to the user - what you could describe as 'coming up with something'.
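For the record, this is roughly what the bare "command in -> result out" loop looks like; a minimal sketch using the Hugging Face diffusers library (the checkpoint name, prompt, and seed here are just illustrative):

    import torch
    from diffusers import StableDiffusionPipeline

    # Load a Stable Diffusion checkpoint (example model id).
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Same prompt, different seed -> often a genuinely surprising image.
    prompt = "a lighthouse in a thunderstorm, oil painting"
    image = pipe(prompt,
                 generator=torch.Generator("cuda").manual_seed(42)).images[0]
    image.save("out.png")

Run that a few times with different seeds and you get wildly different compositions from the same words, which is exactly the surprise I mean.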


> AI will not replace creativity.

If people can't differentiate between computer and human generated art, wouldn't that be the definition of being replaceable?


Work like this moves us toward new approaches for the more difficult issues involved in replacing physical labor. The diffusion techniques that have recently gained popularity will surely enable machines to learn things in ways that simply weren't possible before. Art is getting a lot of the attention first because many people (including the developers making this possible) want to create their own artwork and don't have the skill to put their mental images down on paper (or tablet). You worry that this prevents us from following more creative and interesting pursuits, but I feel it enables us to follow those pursuits without the massive time investment needed to practice a skill. The future you describe is very bleak indeed, but I highly doubt those things won't be automated as well.


> I think that would be a very bad thing because it would legally enshrine a difference between human learning and machine learning in a broader sense, and I think machine learning has huge potential to improve everyone's lives.

How about we legally enshrine a difference between human learning and corporate product learning? If you want to use things others made for free, you should give back for free. Otherwise if you’re profiting off of it, you have to come to some agreement with the people whose work you’re profiting off of.


Well Stable Diffusion did give back.

This doesn’t seem to satisfy the artists.


I’m thinking about the people who use SD commercially. There’s a transitive aspect to this that upsets people. If it’s unacceptable for a company to profit off your work without compensating you or asking for your permission, then it doesn’t become suddenly acceptable if some third party hands your work to the company.

Ideally we’d see something opt-in that lets you decide exactly how much downstream users have to give back, and how much you constrain them. And in fact we do see that. We have copyleft and related licenses for tons of code and media released to the public (e.g. GPL, CC BY-SA, CC BY-NC, etc.). They let you define how someone can use your stuff without talking to you, and lay out exactly how and whether they have to give back.


"Giving back" is cute but it doesn't make up for taking without permission in the first place. Taking someone's stuff for your own use and saying "here's some compensation I decided was appropriate" is called Eminent Domain when the government does it and it's not popular.

Many people would probably happily allow use of their work for this if asked first, or would grant it for a small fee. Lots of stuff is in the public domain. But you have to actually go through the trouble of getting permission or verifying PD status, and that's apparently Too Hard.


> It would be very easy to make training ML models on publicly available data illegal

This isn't the only option though? You could restrict it to data where permission has been acquired, and many people would probably grant permission for free or for a small fee. Lots of stuff already exists in the public domain.

What ML people seem to want is the ability to just scoop up a billion images off the net with a spider and feed them into their network, using the unpaid labor of thousands-to-millions of people for free and turning it into profit. That is transparently unfair, I think. If you're going to enrich yourself, you should also enrich the people who made your success possible.


Because we know it's not going to happen any time soon, and when it does happen it won't matter only to devs: that's the singularity.

You'll find out because you're now an enlightened immortal being, or you won't find out at all because the thermonuclear blast (or the engineered plague, or the terminators...) killed you and everybody else.

Does that mean there won't be some enterprising fellas who will hook up a chat prompt to some website thing? And that you can demo something like "Add a banner. More to the right. Blue button under it" and it works? Sure. And when it's time to fiddle with the details of why the bloody button doesn't do the right thing when clicked, it's back to hiring a professional who knows how to talk to the machine so it does what you want. Not a developer! No, of course not, no, no, we don't do development here, no. We do prompts.
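And honestly, that demo is trivially buildable today. A minimal sketch of the idea, assuming the OpenAI Python client (the model name and prompt wiring are illustrative, not anyone's actual product):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    page = "<html><body></body></html>"

    def edit_page(instruction: str) -> str:
        # Apply a plain-English edit to the current page and keep the result.
        global page
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[
                {"role": "system",
                 "content": "You edit HTML pages. Reply with the full updated HTML only."},
                {"role": "user",
                 "content": f"Current page:\n{page}\n\nEdit: {instruction}"},
            ],
        )
        page = resp.choices[0].message.content
        return page

    edit_page("Add a banner. More to the right. Blue button under it.")

It works right up until the button has to actually do something, which is the point.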


> In my humble estimation, making art is vastly more difficult than the huge majority of computer programming that is done.

You're comparing apples to oranges. Digging a trench by hand is also vastly more difficult than art or programming.

There's just as much AI hype around code generation, and some programmers are also complaining (https://www.theverge.com/2022/11/8/23446821/microsoft-openai...).

Overall though the sentiment is that AI tools are useful and are a sign of progress. The fact that they are stirring so much contention and controversy is just a sign of how revolutionary they are.


I think I have a fair bit of empathy in this area and, well, like you said, I think my job (software) is likely to be displaced too. Furthermore, I think companies have these data sets regardless of whether we allow public use or not. I.e., if we ban public use, then only massive companies (Google, etc.) will have enough data to train these models. Which.. seems worse to me.

At the end of the day, though, I think I'm an oddball in this camp. I just don't think there's that much difference between ML and Human Learning (HL). I believe we are nearly infinitely more complex, but as time goes on I think the gulf between ML and HL complexity will shrink.

I recently saw some of MKBHD's critiques of ML, and my takeaway was that he believes ML cannot possibly be creative. That it's just inputs and outputs.. and, well, isn't that what I am? Would the art I create (I am also trying to get into art) not be entirely influenced by my experiences in life, the memories I retain from it, etc.? Humans also unknowingly reproduce work all the time. "Inspiration" sits in the back of our minds and we regurgitate it, thinking it's original.. but often it's not, it's derivative.

Given that all creative work is learned, though, the line between derivative and original seems to come down to how close the result is to pre-existing work. We mash together ideas and try to distance the result from other works. It doesn't matter what we take as inspiration, or so we claim, as long as the output doesn't overlap too much with pre-existing work.

ML is coming for many jobs and we need to spend a lot of time and effort thinking about how to adapt. Fighting it seems an uphill battle. One we will lose, eventually. The question is what will we do when that day comes? How will society function? Will we be able to pay rent?

What bothers me personally is just that companies get so much free rein in these scenarios. To me it isn't about ML vs HL. Rather, it's that companies get to use all our works for their profit.


> We mash together ideas and try to distance the result from other works. It doesn't matter what we take as inspiration, or so we claim, as long as the output doesn't overlap too much with pre-existing work.

I feel a big part of what makes it okay or not okay here is intention and capability. Early in an artistic journey, things can be highly derivative, but that's down to the student's capabilities. A beginner may not intend to be derivative but can't do better.

I see pages of ML applications out there being derivative on purpose (Edit: seemingly trying, with glee, to 'outperform' specific freelance artists in their own styles).


But the ML itself doesn't have intention. The author of the ML does, and that, I would think, is no different from an artist who purposefully makes copied/derivative work.

TBH, given how derivative humans tend to be, with a much deeper "Human Learning" model and years and years of experience.. I'm kinda shocked ML is even capable of appearing non-derivative. Throw a child in a room, starve it of any interaction, somehow (lol) feed it only select images, and then ask it to draw something.. I'd expect it to perform similarly. A contrived example, but I'm illustrating the depth of the experience we draw on compared to ML.

I half expect the "next generation" of ML to be fed a dataset many orders of magnitude larger, much more closely matching our own: a video feed of years' worth of data, simulating the complex inputs that Human Learning benefits from. If/when that day comes, I can't imagine we will seem that much more unique than ML.

I should be clear, though: I am in no way defending how companies are using these products. I just don't agree that we're so unique in how we think, how we create, or that we're truly unique in any way, shape, or form. (Code, Input) => Output is all I think we are, I guess.


Of course it's the intention of the user that matters here. I just see that these models make it easy to produce extremely derivative works from existing artists' work - and I feel that's an unethical use of unethically sourced models.

Anyone finding their own artistic voice with these tools, I respect that; those people are artists. But training with the explicit aim of creating derivative models, that should be called out.

