When it comes to art, people tend to put value not on the result, but on the origin and process. In a couple of years, once AI-generated art becomes ubiquitous, people will simply look down on it.
> people tend to put value not on the result, but on the origin and process
I think most people tend to value "looks pretty", and it's other artists, connoisseurs, and art critics who value origin and process. Exit Through the Gift Shop sums this up nicely.
I disagree. I definitely value modern digital art more than most historical art, because it just looks better. If AI art looks better (and in some cases it does) then I'll prefer that.
That's totally fine; everyone's definition of art is subjective. But the general value of an AI-generated piece as art will still be zero, just like any IKEA / Amazon print. You just pay for the "looks pretty", the frame, and the paper.
> You just pay for the "looks pretty", the frame, and the paper.
But you pay for that with any piece of art, though? You appreciate it because you like how it looks. Its utility is in how good it looks, not in how much effort was put into it.
If you need a ditch, you're not going to value it more because the worker dug it by hand instead of using an excavator. You value it based on the utility it provides you.
That analogy doesn't work for art, since a worker's ditch is purely result-based. There are no feelings like "I like this ditch", "the experience of a ditch", or "I'm curious how this ditch was dug".
Again, I'm not saying buying mass-made AI art is wrong. Just personally speaking, it will never evoke any feeling in me other than "looks neat". So its inherent "art value" is close to zero, since I can guess its history is basically that someone typed a prompt and sent the result to print (which I could do myself on my phone!). It's the same as looking at cool building pics on my phone (zero art value) versus actually seeing the buildings in person (non-zero), mostly because of the feelings I get from the experience. That being said, if it makes others happy, it's not my place to judge.
This is already the case. Art is a process, a form of human expression, not an end result.
I'm sure OpenAI's models can shit out an approximation of a new Terry Pratchett or Douglas Adams novel, but nobody with any level of literary appreciation would give a damn unless fraud was committed to trick readers into buying it. It's not the author's work, and there's no human message behind it.
The thing is, there are far more good books written than any single person can consume in a lifetime. As an average person reading a mixed diet of classics, obscure recommendations, and what's popular right now, I still don't feel like I'm making a dent in the pile of high-quality writing.
Given all that, the purpose of LLMs should be to create content tailor-made to everyone's tastes. However, it seems the hardcore guardrails put into GPT-4 and Claude prevent them from generating anything enjoyable. Apparently even the plot of the average Star Wars movie is too spicy for modern LLM sensibilities, never mind something like Stephen King.
That's where you spin up a local LLaMA instance. The largest models still runnable on consumer-grade hardware actually beat GPT-3.5 at this point, and there are numerous finetunes all over the "spiciness" spectrum.
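To make that concrete, here is a minimal sketch of what "spinning up a local instance" can look like, assuming the llama-cpp-python bindings and a quantized GGUF model you have already downloaded. The model file name, prompt, and sampling settings below are placeholders, not recommendations.

```python
# Minimal local text generation sketch using llama-cpp-python.
# The model path is hypothetical; point it at whatever GGUF file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/some-7b-finetune.Q4_K_M.gguf",  # placeholder file name
    n_ctx=4096,  # context window, sized for longer passages
)

prompt = "Write the opening paragraph of a comic fantasy story about a reluctant wizard."

out = llm(
    prompt,
    max_tokens=400,   # length of the generated passage
    temperature=0.9,  # higher temperature for more varied prose
)

print(out["choices"][0]["text"])
```

Swapping in a different finetune is just a matter of pointing model_path at a different file.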
Novels aren't about a message. They're entertainment. If the novel is entertaining then it's irrelevant whether there is or isn't a message in it. Besides, literature enthusiasts will invent a message for a popular story even if there never was one.
Also, I'm sure you'll eventually be able to just prompt the model with the message you want the story to carry, if you can't already.
It sounds like you don't value art as the purest form of human expression, but you'll never be able to convince others to think like you with logic. For my part, I think you fundamentally misunderstand the value of creativity, but I know I won't change your mind either.
If it were really about the message, then why waste all that time on the rest of the novel? Describe the message in a sentence or two. You could read an entire library's worth of messages in a few days.
But that wouldn't be helpful, and it wouldn't be memorable, because novels aren't just about the message.
I haven't read anything "shit out" by any LLM that comes even close to the quality of the authors you named. I would very much like to see something like that; do you have any evidence for your claims?
AFAICT, current text generation approaches bad mimicry at best and is downright abysmal in general.
I think you still need a very skilled author with a meaty brain and a story to tell to make use of an LLM for storytelling.
Sure, it's a useful tool that will make authors more effective, but we are far from the point where you can tell an LLM "write a story set in Pratchett's Discworld" and have something acceptable, or even entertaining, spit out, if such a thing can be achieved at all.
> According to Marx, value is only created with human labour. This is not just a Marxist theory, it is an observation.
And yet it's completely and absolutely wrong. Value is created by the subjective utility offered to the consumer, irrespective of what inputs created the thing conveying that utility.
You are using marginal utility theory. The parent comment is using the labor theory of value. In fact, there are other value theories in economics as well. It's mostly a philosophical choice, and like other philosophical choices, it's not possible to accuse one of them of being wrong. It's a matter of choosing your philosophy and understanding the different philosophies.
> You are using marginal utility theory. The parent comment is using the labor theory of value.
Yes, I'm aware. This is precisely why I'm calling the prior comment "absolutely wrong". Marginal utility is a substantially valid model; LTV is not.
> In fact, there are other value theories in economics as well. It's mostly a philosophical choice, and like other philosophical choices, it's not possible to accuse one of them of being wrong.
Sure it is. These aren't theories in a normative sense; they're models of causality for manifest phenomena. They're closer to scientific theories than to philosophical axioms. LTV simply doesn't bear out under observation.
The labor theory of value is quite controversial; many economists call it tautological or even metaphysical. I also don't really see what LTV has to say about AI art, if anything, except that the economic value generated by AI art should be distributed to everybody rather than funneled to a few capitalists at the top. I would agree with that. It's true that new jobs get created even as old ones are destroyed, but it's also true that, just as our ancestors fought for a 40-hour work week and a social safety net, we should be able to ask for more as computers become ever more productive.
In Marx's time, you needed humans to perform any kind of labor. Even machines needed operators. But there's nothing about LTV that would make it a hard requirement. The point of Marx's claim is that without someone performing labor using capital, there wouldn't be any value for the owner of said capital to pocket. This is just as true if you replace workers with AI.
There is a somewhat famous digital artist from Russia, Alexey Andreev. Google him: he has a very distinctive style of realistic technique applied to surrealistic situations, like a huge manta ray landing on the deck of an aircraft carrier. You can also see his older works in his LJ, which hasn't been updated in five years [1].
Now he uses generative AI as one of his tools, the same way he uses Photoshop, the various (unrealistic!) brushes in Photoshop, and other digital tools. His style is still 100% recognizable, and his works haven't become worse or more "generic". Is he still an artist? I think so.
Your definition assumes that photography is not art and/or doesn't involve conscious skill and creative imagination. That's not the consensus, to put it mildly.