I think at some point Krita will fork into two apps for this exact reason. AI-based tools are clearly the next step for painting apps (to me at least, but I can't be the only one who believes this).
I think in 3~5 years a painting app without an AI generation feature will be like a painting app without pen pressure today. It's still usable, you can make great art with it if you have the skill, but it will be so out of fashion that it starts becoming cool again.
Is it that different from how we view GitHub Copilot?
As far as I know there's a sizeable number of devs who don't intend to ever rely on Copilot, and I would expect a similar trend in the drawing community, with amateurs and pros who aren't especially anti-AI but don't want a random generator meddling with their art.
"As far as I know there's a sizeable number of devs who don't intend to ever rely on copilot"
Is that really a thing?
I mean, I also don't want to rely on Microsoft, and therefore not on Copilot either, but refusing AI tools in general out of principle is probably a very rare minority position. I would simply prefer my own local LLM.
But in the thread linked above I read "AI never had and never will have it’s place in art." That stance would seem very weird to me coming from devs.
> "As far as I know there's a sizeable number of devs who don't intend to ever rely on copilot"
I'm one of those. I've been programming for a while now and there's no way I'm going to trust a neural network with my code. Debugging is painful enough without having to deal with subtle bugs hallucinated by ML.
Some machines are really useful for reducing human suffering and augmenting our collective capabilities. Some machines are just useless, polluting gadgets. I think ML sits in the middle ground: if your job is pissing out meaningless, very repetitive code all day, it can probably do it for you... but if you have to actually do R&D to develop new tools, I don't think ML will be of any use.
So yes AI can reduce work, but arguably work that was never required nor beneficial to humanity to begin with. I would be way more interested in society reflecting on "bullshit jobs" and how to actually share the workload so that we can have 1-day work-weeks planet-wide, just as the scientists from the 19th/20th century envisioned. Instead of continuing to destroy the planet so we can run bullshitting neural nets in the cloud that produce arguably little value.
But sure, ML is fun. Let's just pretend we don't see the whole world burning outside the window.
"So yes AI can reduce work, but arguably work that was never required nor beneficial to humanity to begin with"
Hm, just a suggestion: I would be careful with such statements if you don't want to insult people's work you know nothing about.
Because LLMs enable a very broad spectrum of work. I don't use them in my current workflow (nor am I that easily insulted), but the times I did use them, they were useful. My main problem was that ChatGPT-4's training data was out of date, but it produced very useful results for me for WebGPU and PixiJS, which I had not used before, and the solutions it gave me I could not find on the internet. So for my novel work they don't help in general, but they do help when I need a new custom part, without my having to reinvent the wheel.
And then of course there are people who greatly benefit from them who did not study CS, like a friend who is an ecologist and all he wants are some custom Python scripts to modify his GIS tool. I think he is doing useful work, and with LLMs he is indeed spending less time on his (freelance) work and has more time for his children. Isn't that what you are also hoping for?
I think to a large extent this is correct, but, playing devil's advocate: perhaps if the GIS tool were better, your friend wouldn't have had quite as big a gain in free time?
Answering my own question somewhat: I think LLMs are becoming a kind of UI layer over many applications/tools for many users. Which is interesting. And in some ways they show signs of fulfilling the promise of AI.
It seems like you haven't actually tried Copilot, have you? To me the best thing about it is not full-function generation, which doesn't work very reliably; it's finishing the end of the line, when you already know what you will type, and it just types it for you faster. It feels like magic, and checking the code is extremely fast since it's just one line, much faster than writing it.
So it’s "just" providing the vim advantage to typing speed? As a vim user myself, I’d like to point to the tired argument that typing takes up the least amount of time yadda yadda
We don't have 1-day workweeks not because it isn't possible but because we don't live late-19th/early-20th-century lifestyles anymore. We have modern cities, infrastructure, transportation, manufacturing, power generation, diets, and recreation. I don't think many would want to go back to an agrarian lifestyle where you live in a one or two room brick cottage, walk everywhere, till a field with a very simple, small tractor, eat only what you grow seasonally, have two changes of clothes, and own basically nothing but the bare essentials to clean and feed yourself. If you did that, then sure, you could share the work within a little commune and maybe get by on a rotation of duties, if you had enough up-front capital to buy all the labor-saving devices and could manage to keep everyone happy enough to share it all equally, but I have a feeling it would still wind up being a hard life of poverty. There's also the question of how you keep all of the manufacturing and professional services going with so little demand for their outputs. There are very good reasons why the vast majority of the populace used to be stuck in subsistence farming for life, and why that only changed with the advent of mass production and market economies.
> I don't think many would want to go back to an agrarian lifestyle where you live in a one or two room brick cottage, walk everywhere, till a field with a very simple, small tractor, eat only what you grow seasonally, have two changes of clothes, and own basically nothing but the bare essentials to clean and feed yourself.
You could not possibly have made it sound more attractive and compelling.
You should really try it. I’ve been coding professionally since 2000, and for some 10+ years before that as a hobbyist kid. The AI takes away quite a bit of tedious stuff. When you think “I’ll just have to code out that annoying little dumb piece”, where it is obvious what you need but boring to write, the AI typically knows what you want. Sometimes it is really mind-warping: like wow, that’s pretty heavy context you dragged in there! And sometimes it comes at you from left field with something you hadn’t thought of, and even if it didn’t nail it, you got a new idea.
It’s like having an over-eager coworker to pair-program with, one you can kinda boss around as you please, who never tires or needs a break. Not senior-level (outside of knowing “deep” small pieces and snippets), but not fresh out of school either.
And it is great for fleshing out comments (if you’re into that, as I am), picking up your style and notation as you go.
While impressive, the two issues I have with Copilot and other AI tools are:
1. The code is usually the same code I'd get a few web searches away, except then it would have the appropriate copyright. As a FOSS developer (in my free time), I do not want to risk using code I don't have a license for, and thus dirtying up my entire project and putting it in danger of being taken down.
2. I really don't need it. At very few points in a project do I both think "I want to continue this" and also "I want my code written for me". I like autocomplete, I use autocomplete, and I like Visual Studio's suggestions, too, even though they're wrong about 50% of the time. I have no interest in a tool that writes my code for me, because I have learned everything I know from solving problems myself.
Edit: Clauses in the AI's ToS like "all code generated is yours" are akin to a sign in a bar saying "if you hit someone in here it's not assault" -- they don't change the facts whatsoever, and the fact is that it's still a crime to hit somebody, even if the bar's ToS says otherwise.
> The code is usually the same code I'd get a few web searches away
My impression is that people normally don't use Copilot as a substitute for finding solutions (ChatGPT is much better for that), but as a way to help with otherwise tedious tasks that are really specific to your codebase. Check out 6:05 and 6:25 in this Andreas Kling video for a good example: https://www.youtube.com/watch?v=8mxubNQC5O8
Regarding your second point, Copilot helps me when I least expect it. I think the video illustrates what I mean by that as well.
3. To be truly useful, you have to send your company's proprietary code to a 3rd-party AI, which may or may not use it for training their AI, or which may or may not have security issues and leak your proprietary code. Yes, we do this already with GitHub/GitLab, etc. but those are mature and (AFAIK) haven't had big security issues like OpenAI has had in the past year.
4. For ChatGPT at least, you have to give them your phone number to sign up. For me this is a deal-breaker, but I get others are fine with it.
I'm a Vim user with 100 WPM typing speed, and I can say with confidence that Copilot isn't that useful to me. Typing boilerplate is not an issue - understanding what I wrote is most of the work. And having an AI spew code that I have to read is more work for me than just writing it myself.
What a tragic waste of fast typing speed. If only you were using Emacs, your typing speed would be multiplied by Emacs's superior capabilities, with multiple shell windows and keyboard macros and many other powerful packages thanks to its deep and flexible extensibility, and you would be so much more productive and powerful! Typing speed isn't everything. ;)
Disclaimer: My cat is named Emacs, so if you say anything bad about Emacs, I will take it personally, because he is such a fine cat, named after such a fine text editor.
The point I was _actually_ trying to illustrate was that wpm and vim are irrelevant, and that Copilot is worth a try even if you're a fast typist that uses (your favorite editor here).
The initial question was "Is there really a sizeable number of devs who don't intend to ever rely on Copilot?", not "Are all Vim users who type fast anti-Copilot?".
I don't mean to dismiss your data point, just to put things in context. Since Copilot is generally recognized as a useful tool, there will of course be Vim users with great typing speed who still find it useful regardless.
Now, when it comes to Go, I find that there isn't much repetitive code to write (especially since generics landed in Go 1.18). Some people say that error handling is repetitive, but I find that those people just bubble up errors without adding appropriate context, which makes them less useful. But I haven't personally found a scenario in which I explicitly thought "damn, I know exactly what I need to write, but it's so long - I wish someone would write it for me".
And yet the comment I'm replying to didn't say "I won't ever rely on Copilot," it said "I can say with confidence that Copilot isn't that useful to me."
I made my comment because I hope others who are fast typists, and familiar with their tools, do give Copilot a try. I expected to hate it, didn't try it for a long time, and was quite surprised when I did.
> But I haven't personally found a scenario in which I explicitly thought "damn, I know exactly what I need to write, but it's so long - I wish someone would write it for me".
Test case setup comes to mind. Another place it's useful is for writing long function-interface signatures. Or adding a bunch of similar "case" statements to a switch.
In an ideal world I'd choose stacks with not enough boilerplate to warrant copilot.
I had my share of auto generation with enterprise Java stacks, and tried as hard as I could to move to stacks where what we write is concise and relevant (rails is the closest I came to this, not perfect but clearly going in the right direction).
I think AI has its place, but I also hope to be lucky enough to not have to use it.
Illustrators might have similar issues, where some of them need to produce a lot of boilerplate drawings, but I think they'd also prefer working on projects that aren't that.
Yeah these Copilot type tools shouldn’t help much if a language is well-designed, or a project is structured in a fashion that doesn’t require a ton of boilerplate. If we’re doing things well, we’ll only have to tell the computer something once, right?
If it is possible to guess what we’re going to write, then we aren’t transmitting much information to the computer.
Well, think of how the AI was trained. GitHub trained Copilot on _mostly_ open source data with permissive licenses.
There, giving away code and the rights to it for free is commonplace. Also, it’s not like you can use “by Ryan Dahl” to make the output from Copilot better.
But these art AIs were trained on CC, CC-BY, and closed-license pieces of art. And you can use “by Greg Rutkowski” to get art in that artist’s specific style.
I don’t think comparing the use of AI or the general attitude towards AI between artists and devs makes sense.
Very apples and oranges.
Hmm, is it possible to include a bit of prompt in copilot, something like “follow the formatting guidelines of the Linux kernel,” or something like that?
I dunno if it would help, but people do seem(?) to be improving their ChatGPT responses by telling it to answer as if it is an expert on a topic.
I’ll use ChatGPT to help surface docs and examples, or to get a very rough high-level overview of what a task might involve, but I don’t really want an LLM in my IDE.
Not only does it feel like something that might be dangerous to become reliant on (what happens when it’s not working or I don’t have access to it), I have no idea what material it was trained on which makes it ethically gray. I might be more receptive to a local LLM where I can personally vet what it was trained on (primarily, I’m concerned with if the material was obtained fully consensually or not).
My attitude towards image generators is similar. Adobe’s is totally out of the question for example, because though they claim it’s 100% ethically trained because all material came from their stock image service, I know that’s bullshit because I’ve seen stolen art put up for sale there more times than I can count (and worse, they’re unresponsive when theft is reported).
> The license issue is something I expect will be solved in the next few years (a dropdown menu to choose from, maybe).
I'm not sure it will, as the people who use it don't appear to really care about other people's licenses anyway. It's just a method of BSD-washing GPL code.
Yes, the original Copilot, but not the new Copilot with expanded features. I'm a vim person and would rather give up Copilot than move to VS Code, so I do hope they aren't going to leave vim behind and focus only on VS Code going forward.
I see a different future where people continue to write their own code rather than trust Microsoft AutoPlagiarist™. Perhaps I am wrong and this no-code solution will at last relieve us of our onerous cognitive burdens.
There is a big difference between AI generating a piece of picture which you completely observe, and AI generating a piece of code that may contain a subtle, hard-to-spot bug.
In both cases you're also risking being accused of plagiarism, when the model literally remembers, or reconstructs, a piece it has seen, and finds it perfectly matching your request.
I think "AI" tools in Krita may have their place: object detection and selection/tracing, upsampling, seamless resizing, cutting and pasting, texture generation, light adjustment, stuff like that. An integrated analog of DALL-E or Midjourney would likely be a poor fit.
>I would expect a similar trend in the drawing community with amateurs and pros not especially anti-AI, but not wanting to have a random generator meddle with their art.
Hi! This is me! I'm good enough that I can draw and paint whatever I want manually. I (generally) don't want it in my (main) workflow and I don't want telemetry training models against my work (without knowledge & consent). However, I don't have any qualms against other people using it and I think it's exciting technology.
There are a lot of people who draw and paint. Of course there will be people who reject AI; there are people who strictly use only traditional media, too. That's why I said Krita will (and should?) fork into two apps, one for people who reject AI.
But the line between "random generator" and "artistic finer control" isn't that sharp and clear. How do digital artists draw leaves and bushes in the background? If not photobashing, most experienced people will use some kind of brushes[0] with some randomness built into them, like random rotation or spray.
Randomness is even more prevalent in traditional media.
And I'm 100% sure AI will evolve to cover both ends.
[0]: Not necessarily a leaf brush. A common misconception held by digital painting newbies is that you need an X brush to paint X efficiently. Experienced artists don't want X -- they want some controllable randomness.
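To make "controllable randomness" concrete, here's a minimal sketch of how stamp-style brush jitter might work in principle (this is a hypothetical illustration, not Krita's actual brush engine code): each dab along a stroke gets a random rotation and a small positional scatter, both bounded by artist-set limits.

```python
import math
import random

def jittered_dabs(path, rotation_jitter=math.pi, scatter=3.0, seed=None):
    """For each (x, y) point on a stroke path, produce a dab with a
    random rotation and a small positional scatter, the way many
    painting apps randomize stamp brushes (e.g. for foliage).

    rotation_jitter and scatter are the artist-facing knobs: they bound
    the randomness instead of letting it run wild.
    """
    rng = random.Random(seed)  # seedable, so strokes can be reproducible
    dabs = []
    for x, y in path:
        angle = rng.uniform(-rotation_jitter, rotation_jitter)
        dx = rng.uniform(-scatter, scatter)
        dy = rng.uniform(-scatter, scatter)
        dabs.append((x + dx, y + dy, angle))
    return dabs
```

The artist still controls where the stroke goes; the randomness only perturbs each stamp within the limits they chose, which is exactly the "controllable" part.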
Has anyone found out what the "sizeable number" actually looks like? Is it sizeable as in "13% of devs, 35% in niche areas like HN, say they are against it wholesale", or sizeable as in "3% of devs can make a lot of noise, especially in niche areas, when it's something they care about"? Even 0.1% of devs would be quite sizeable in absolute numbers, but still irrelevant when comparing one group's opinion to another's.
Of those who don't ever intend to rely on something like Copilot: is it mostly "I can code better without it at its current capabilities", or a principled matter of the technology having wronged them in some way?
I think it's a bit of an apples-and-oranges comparison. There are vastly different downstream consequences and risk exposure associated with using AI to design and implement functionality for mission-critical infrastructure vs. using AI to draw pictures.
As with the artists, this won't be optional. Market pressures will force devs to use AI assistance.
For example, this recent GitHub presentation about productivity improvements: 35% acceptance rate, 50% more pull requests, etc. I believe these numbers, and even if you don't, they will be a reality soon.
That's true. However, as an adjacent point, I do want to highlight how the impact will be totally different in art than in development, because many seem to be equating them.
The main difference is that in development, more of the tedium gets removed-- e.g. interacting with some API or UI boilerplate-- and more of the more satisfying work-- how the program, generally, is going to solve a problem-- remains. In art, the more satisfying part-- conceptualization and forming those ideas into images-- is entirely removed but the tedium remains.
Commissioning a piece of art from an artist entails describing what you want, maybe supplying some inspo images, and then going through a few rounds of drafts or waypoint updates to course-correct before arriving at a final image. Sound familiar? Generative AI art isn't making art: it is commissioning art from a computer program that makes it from an amalgam of other people's art. It reduces the role of the "artist" to making up for the machine artist's shortcomings.
When you're making art, making the details is ingrained in that process-- a requisite step in forming your ideas into images. Details are critical in high-level commercial art, and despite the insistence of many developers who know far less than they realize, current generative AI isn't even close to sufficient.
Economic realities aside, when you're merely editing someone else's images, you've basically transitioned from "writer" to "spell checker" and I don't understand how so many refuse to see how a professional artist would be distraught about that.
I think you don't know a lot of people in the art community if you think this. Good to see Krita standing with the people who actually use their tools.
I worked with 2D artists for 5 years, and the actual attitude is much more mixed than it might appear from listening to the vocal folks. Eventually most will accept this as another tech-heavy field like 3D CGI, especially when these tools will start to give more usable results in the hands of skilled artists. (they mostly don't, yet)
The new tools weren't made for or by artists. They're labor alienation machines that extract value from our communities and remove nearly all creative agency from the process. It's not cool to just dismiss this as people refusing to learn a new tool when that tool is the product of one of the biggest acts of abuse directed towards creative labor in decades.
I work in game dev and I kinda witnessed how photobashing went from "cheating" to "ok if for very early concept or internal usage" to the standard process among 2D artists.
AI tools will become much more artist-oriented than they currently are. You will be able to control parameters like denoise strength with your stylus pressure. LoRAs and prompt templates will be listed in a gallery like Photoshop's Neural Filters. You will be able to preview colored thumbnails generated by ControlNet as you sketch, in near real time.
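As a rough sketch of what "denoise strength from stylus pressure" could mean in practice (a hypothetical mapping function, not an API from any real plugin): light pressure would leave your strokes mostly intact, while heavy pressure would let the model repaint more freely.

```python
def pressure_to_denoise(pressure, lo=0.2, hi=0.85, gamma=1.5):
    """Map stylus pressure (0.0-1.0) to an img2img denoising strength.

    Light pressure -> low denoise (the model barely alters your strokes),
    heavy pressure -> high denoise (the model repaints more freely).
    gamma > 1 shapes the curve so mid-range pressure stays conservative.
    lo/hi/gamma are made-up defaults for illustration.
    """
    p = min(max(pressure, 0.0), 1.0)  # clamp to the valid range
    return lo + (hi - lo) * (p ** gamma)
```

The point of a curve like this is that the stylus becomes the artistic control surface: the same gesture vocabulary painters already have (pressure, and eventually tilt or speed) steers the generator instead of a slider in a dialog.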
Which is kind of funny in a way. I am no artist but I'm using Krita with a smallish Wacom tablet to manually refine illustrations generated by Stable Diffusion.
But again, some of the Krita team have had strong ideological positions on many themes. Luckily you can keep using the software whether you agree or not (and you can contribute, too).
I am a firm believer that the creator should choose the way their work is licensed, but man, it is weird seeing an open source project have such a commitment to intellectual property. It seems like it is mostly developed by volunteers, so the expectation is that artists take these devs' hard work and use it to create proprietary paintings.
I'm on board for this on the basis that the creator should ultimately get to choose how their work is released. Open Source has to be a choice. But dang, I would hope that one of the outcomes of a project like this would be more public domain digital art.
The AI hate is suuuch a meme in the art community, it's very frustrating/alienating. (Though understandably, neoliberal capitalism is also extremely frustrating, so I see why artists are mad, I just wish they'd be mad at the root cause.)
((the root cause is that an economic system fundamentally based on scarcity == value doesn't make sense when applied to things that are essentially infinite, and kludgeing in artificial scarcity to make things work is not a good take))
I have a suspicion that art is to humans as fancy tails are to peacocks: the difficulty is the point.
I believe this is why we have art galleries proudly displaying oil paintings of fruit bowls, but don't do this for random food snapshots.
It's also why photographs as a category were initially dismissed (in an era that had come to praise extreme realism in paintings), but when photographers went on long trips to visit unusual places, people, and events, those photographs suddenly did count as art.
There's a bit of overlap between arts and knowledge shown by the Wiktionary entry for the Latin "ars", so this can be extended to the way Socrates disliked writing, and to the desire for hand-made foods and durable goods over mass-produced ones.
> I believe this is why we have art galleries proudly displaying oil paintings of fruit bowls, but don't do this for random food snapshots.
We also have them for social/historical reasons. A museum usually isn't built around "best stuff humanity has to offer", but has some sort of more complicated angle.
Eg, a reason why you may have a fruit bowl hanging on the wall is that this particular artist has been influential, and they just happened to paint a fruit bowl. Maybe thousands of artists of the era painted fruit bowls, and maybe a dozen of those are technically more impressive, but this is the guy that got talked about a lot, or started a movement, or such, so it's this guy's bowl we're going to go with.
Museums can have many themes. They may showcase a particular artist, a particular movement, a particular theme, a particular period in time. You can build a museum of nothing but paintings of cats if you wanted to.
We have art galleries displaying things like empty canvases or toilet bowls that make a statement, it's definitely not about difficulty. The fact is that the debate about what art is is part of what makes art art, it escapes definition because part of the spirit of art is rebelling against definition.
I'm 50-50 split on thinking I regard that kind of art as a vehicle for tax evasion, vs. thinking the "difficulty" is the money wasted on it (which is still Veblen "look at me I'm rich I can waste money on something pointless").
We can't judge the whole art sphere by its worst examples. Think of people who judge software by our worst examples (crypto scams, say).
> art galleries displaying things like empty canvases or toilet bowls
This is so far from the norm and feels like a television-lens version of artists and art galleries. Yes, Duchamp used a toilet bowl as art in 1917, over 100 years ago. It's famous because no one had done it before: presenting an everyday object as art. It's 106 years old and still referred to, so you can guess it was a big deal.
My suggestion is to visit a modern art museum in a larger city and you'll see this kind of "easy bullshit art as a statement" doesn't really exist.
I have visited modern art museums. I have seen similar "low effort" pieces among a wide catalogue of high-effort ones. It is evident that real artists are very prolific and not lazy or anything like that; I was not implying that, and I don't even think of it as bullshit.
My example was more about how the appearance of effort is not necessary for the viewer. Many people can look at abstract paintings and say "this is bullshit, my kid could do that!", and yet many people can also look at it and recognize beauty, and it's not because the latter group sees the piece as more effort, they're just able to parse the language of the piece better (imo).
That view of art ended over a century ago with the modernist (mediocrist) movements.
Nowadays art is whatever one wants it to be (or not to be). It's just a word people use to enhance the social perception of whatever manmade creation they like.
But AI has nothing like that. Some things that look like an awful lot of work it spits out with ease, some things that sound simple can take a whole lot of fiddling.
Like the other day I was playing with DALLE3, and for whatever reason it didn't want to place things on a table.
In the same way that using a limited medium like oil paints, paintbrushes, and canvas to create images constitutes the art of painting, there will emerge an art of hacking / abusing / advanced prompt engineering / pushing AI to do things that are close to or at the limits of its capabilities.
A: Oh so your LLM generated an image of a spaceship cockpit, so what?
B: So what? This LLM was trained on nothing but tax records from 1929!
A: :o amazing!
So AI artists do not necessarily equal 'creatives who render images using AI tooling', they may instead be 'creatives who tease out novel outputs from AIs' or something like that.
Then again, this is suspiciously close to a 'what is art' conversation, so i'll stop here.
Mm. I’ve spoken to a number of artists who have expressed similar feelings of despair, frustration and anger.
There are many upset people over this technology, and calling it a meme diminishes them to meaningless copycat haters.
I don’t think that’s true; and saying it really reallllly makes them angry.
Consider: this attitude is part of the reason why that attitude exists.
:|
If the stable diffusion folk hadn’t gone crazy cloning every art style they could and laughing about it, we could all have had a very different AI art future.
…but apparently we can’t have nice things because (some) people suck.
That's an interesting point. Retouching a photo after taking it used to be "manipulating reality", frowned upon by "real" photographers. Nowadays, postprocessing digital negatives and adding your own style to it is part of a normal photographer's workflow.
(However, I think the negative feelings don't come from a discussion of "real vs fake" or "classical vs new", but mostly from the point of view that using artwork as training data is stealing. I don't agree with that view, but I think it's at the core of the argument.)
Yes, but - at least in landscape photography - there was a divide between people using retouching to "fix" things (i.e. removing spots, making the colors more realistic, etc) and the people altering the style of the photo (e.g. by boosting or even shifting certain colors to achieve a certain look). The latter was viewed as fake by some people I knew. Today this is just part of your photography style.
I'm an artist and look at generative AI as a tool that's almost always going to produce content but not art.
I refuse to use it from a moral standpoint but I also don't use any digital tools at all in the creation of my work. Even if I worked digitally I don't create art to produce pretty pictures as fast as possible. Typing in a prompt and fiddling with some things back and forth is just that.
This comment hurts me to my very core; I actually screamed when I read it. I'm in one of the most artistically productive periods of my life right now. I've been doing multiple notebook pages of gouache and India ink a day. I also have a homebuilt plotter that I've written an entire suite of software for. I've been doing 3D graphics and photomanipulation, and I've been dabbling in video editing.
AND I've been pushing what can be done with Stable Diffusion, and it's absolutely a tool to create art. The idea that "typing in a prompt and fiddling with some things back and forth" is all there is to AI art is so fucking absurd. This is the "meme" I'm talking about. There's SO much more to AI art from an artistic-control perspective than the prompt, and not only that, there's so much we haven't even fucking invented yet, which is clear from the rate of progress in the field. These reductive "it's not real art" takes are even worse than the "theft" moralizing. It's akin to saying photography isn't art because you just click a button.
> I don't create art to produce pretty pictures as fast as possible.
I create art because I like to make art, sometimes that means laboring over the placement of every line. Sometimes I need three hundred frames and there's only me and my GPU against the world. AI opens up possibilities that were completely unreachable before, just like everything else I'm able to do artistically with my computer.
I'm happy you get something out of it and wish you the best. I do think it's built on theft and I do think it's a cold, lifeless medium. I think art is truth and the farther you get away from a human making something the farther you are away from the artist and feeling. A digital print will never feel as nice to me as a painting or a drawing. A handmade sculpture will always feel better to me than something produced with a mold or a 3D printer.
We're all different and again, it's great if you like making art with AI. The world needs differences, otherwise it would be quite boring.
typing some text and pressing 'generate', iterating, or doing layering and photobashing, just isn't gonna be 'painting', or 'drawing', like, ever. on a fundamental level. you'll need to get over yourself asap if you're "screaming" over this
Man I know it's pointless for me to argue but it's just like... it's wild to me that people make these comments. Do my comments give the impression that I am unaware of what drawing and painting are? I have spent thousands of hours painting and drawing.
Photography isn't gonna be painting or drawing either, it's still art that affords the artist an enormous amount of control. This is the "meme" I'm talking about. The way you talk about AI art is what's making me go AAAAAAAAAAAAAAAaaaaaaaaaaaaaaa
> typing some text and pressing 'generate', iterating, or doing layering and photobashing
It's such a self report, you have no idea what is going on, you are quick to discount it without any understanding. We are still in the infancy of the space and people are fixated with these idiotic reductive arguments. Prompt -> image is the tiniest fraction of what's possible with this technology. I wish I was better at communicating how fucking epic the set of possibilities that this opens up is, I'm sure we will see it eventually. It frustrates me to no end that people are so blind to it.
it's not the "unaware" part, it's the 'endless possibilities' (snake oil) part. ai ppl looooove to draw clouds and expect everyone to buy into that shit, yet it's not that hot. it'll always be just an app, unfortunately. it's reductive but that's just its limitation and confines.
This sort of factually incorrect dogmatic screed from a position of complete ignorance is why I find the discourse so frustrating and why I dismissively call it a meme. Look into ControlNets, look into Deforum, look into programmatic loopback, LoRA training, model alchemy; and that's just some of what's available TODAY, in a field that's moving faster than almost anything else.
You're someone looking at Amiga DPaint in 1986 and claiming to understand the limitations and confines of digital art. It's absurd.
> It's such a self report, you have no idea what is going on, you are quick to discount it without any understanding. We are still in the infancy of the space and people are fixated with these idiotic reductive arguments.
> I'm sure we will see it eventually
If the basis for your art is the hot new tool, or some tool yet to come, then it's just novelty. Generative AI is incredible from a technological perspective, as was Photoshop before it, but neither of them is ushering in some profound new wave of art.
> fucking epic the set of possibilities that this opens up is
Most great art and great design is built on constraints. My suggestion in using these magical tools is to reduce your possibilities to find something that speaks true to yourself.
Most professional artists will be unemployed, and hobbyist artists using AI seems contrary to the point of creating art for the sake of creation.
But for one-click self-expression, AI tools will certainly come in handy.
It kind of depends on the type of artist. I use it to illustrate my stories, for example, and I'd be upset if someone claimed my writing doesn't count as art.
But I've spent well over a decade learning to write. I don't have any skill in drawing, and I don't earn any money from my writing. (...and last time I tried to hire an artist, they bit my head off when I offered an example of what I was after.)
The dawn of the post-work age! C :
.. nearly everyone will become unemployed five years from now (when AGI / humanoid robots hit mainstream adoption). The economical paradigm will have to change inevitably. Now it's on us to nudge it towards a nice and chill open source economy with open access infrastructure and cybersyn-like global federated resource stream coordination instead of the competitive vortex of death, madness & despair we have now.
Mainstream media said the same thing about nuclear energy, steam engines, integrated circuits, and a dozen other technologies since the dawn of civilization.
It never manifested though, and people still work and produce like they did for thousands of years.
Humans are surprisingly adept at coming up with new kinds of work.
IMO the bigger "risk" is that we will drive ourselves extinct in the next few hundred years, but that is a different discussion.
This is absolutely ridiculous and short-sighted. AI is, and always will be, a tool that makes the creation of art less a matter of the expression of the human soul. What techies NEVER understand about art is this: art is not just the end product for the artist but something they use to express THEMSELVES.
UNLIKE other tools, AI makes creative decisions. No other tool has done this, and moreover, its primary purpose is to take away the reliance on artists. The ultimate aim of BIG TECH is to take away this reliance so that they can be the ultimate source of cheap art, just like cheap slave labour is the ultimate source of cheap and unsustainable clothing for most people.
Therefore, AI will NEVER be a tool to create art like other tools. It is a tool that will outcompete humans on a massive scale, so that even if "normal human art" exists, it will never gain much traction or commercial viability.
To be honest, AI is absolutely sickening and companies like Microsoft and OpenAI make me sick.
That sounds like a very Luddite view. Why wouldn't artists be able to use AI selectively to automate "boring" tasks (such as filling the sky of an image with clouds) while still retaining overall artistic control?
Because that is not what's happening. My friends that work as illustrators for PC and mobile games say it's the exact opposite. AI is used for the bulk of the creative work - composition, posing, even the general artstyle. Illustrators are then tasked with "fixing" visual artefacts, stitching together generated images and giving the final polish. They describe it as being reduced from a creative writer to a grammar checker.
It's tempting to just say that creative work that can be automated this quickly should be automated so that artists can focus on more creative challenges, but this is not how it plays out in practice. Rather, this only allows companies to cut down costs. It is already extremely difficult to find work which will pay a livable wage as a creative. AI has already caused layoffs and negative wage pressure on remaining employees. The only thing that AI has done (at least among my circle of friends) is reduce corporate costs and increase antidepressant prescriptions.
When I watch a video like the demo-video for the Krita plugin we're discussing (https://www.youtube.com/watch?v=-QDPEcVmdLI), I do see a lot of creativity happening. The person is using stable diffusion as a tool to achieve the look, style and composition they want. The skill to be able to use such a model for creating art is definitely an acquired skill, and I would definitely consider it a form of art.
Of course there will be people just clicking "generate" on a website. But isn't that the difference between consumer and artist? Everyone can press the shutter button on a digital camera to take a snapshot. But the artist knows how to use light, angle and technology to create a photograph with the looks and composition that they intend. (If you compare snapshots from amateur photographers and from professionals, the differences are astounding. And it's not just about the cost of the equipment.)
Certainly, there will be jobs – especially the rather repetitive jobs – that will be replaced by the use of AI, just like stock photos replaced jobs of certain photographers, or just like industrialization and automation replaced the jobs of a lot of craftsmen and artisans. But craftsmen and artisans are still around, and they are paid a lot more than they used to be paid, as long as they provide added value on top of the generic products available on the market!
I would never argue that you CAN'T do something creative with it. The problem is not even this single tool itself, but the greater amalgamation of all AI tools that arise from the general societal phenomenon of using AI.
The problem with many technophiles is that they have a very distorted view of what they create. They often think it's going to do good because it's so cool but once that tech is out in the real world, it just mostly causes damage.
If you're interested, feel free to reach out to me because I am starting an anti-AI coalition.
Technology is just what it is. Good and bad are human categories that don't apply to technology per se (and are very subjective categories that change dramatically across time, space, and culture)
What humans use it for is another discussion.
One example:
- You can use nuclear fission to provide light and warmth to millions or blow up millions.
Is nuclear fission good or bad?
I would argue it depends what humans make of it.
Same with what you call "AI".
I wish you luck with your coalition, but once a technology is "out there", you cant take it back. I don't think there is an example in history where that happened, would be curious if you know one.
In a certain light smartphones resemble the moral equivalent of violating the Prime Directive.
"Here, rural areas and undeveloped nations. Take this crippled, distorted window into the greater internet. It happens to be much better at viewing content than creating it, and it will surveil you more closely than you could ever watch it. Removing the preinstalled software is forbidden. Don't view it more than ten minutes a day, or the content recommended by social media algorithms may cause malaise. Like and subscribe for more content."
I think you'd be better served making moral arguments rooted in ethical principles that people adhere to in real life, not science fiction.
This is especially important when you consider how unethical the Prime Directive itself is as a principle, and how often Star Trek portrays violating it as the morally superior choice.
The position you're advancing here seems to infantilize people in rural areas and undeveloped nations, and aims to deny them the agency to make their own choices about how to fit modern technology into their lives and communities. It sounds like a modern variation on "noble savage" and "white man's burden" notions -- not exactly a good look.
> The position you're advancing here seems to infantilize people in rural areas and undeveloped nations
I believe it seems that way to you.
Many people (in particular unemancipated minors) might likewise consider it infantilizing to place a minimum age requirement on drivers' licenses, firearms, alcohol, etc. yet the consensus is that doing so is for the greater good.
> Many people (in particular unemancipated minors) might likewise consider it infantilizing to place a minimum age requirement on drivers' licenses, firearms, alcohol, etc.
It seems unremarkable that we tend to treat actual children like children, but it's far less mundane to propose treating mature adults like children on the presupposition that due to their cultural or ethnic origins, they must exist in an immutable childlike state. The latter is an extremely dangerous notion, and we ought to be wary of anyone who advances it.
> yet the consensus is that doing so is for the greater good.
I'm not sure that any 'greater good' calculus is part of any consensus whatsoever.
(1) AI voice cloning was used to make a mother think her daughter had been kidnapped.
(2) People such as illustrators are getting fired from their jobs because AI can now do parts of their work. Also, people are NOT getting hired when they otherwise would have been.
(3) I am a professional writer, and I know of websites that are using generative AI for articles and hiring fewer writers (or even firing them).
(4) AI removes what remaining reliance we have on each other and makes it less likely for people to talk to each other when needing some basic information. The societal effects of destroying communities where people need each other are pretty clear.
Ok but that can be said of any technology. Chemistry is bad because someone used it to poison their friend. Phones are bad because they can be used for bomb threats, cars are bad because they put the whole horse industry out of work, and you can go on and on forever.
Every single technology can be abused but it doesn't mean that they mainly cause damage.
(1) You are right, and that is why we should be much more cautious with technology.
(2) AI is unique in the sense that it has a much wider range and acts much faster. Therefore, it is much more dangerous, similar to how both salt and sodium cyanide are dangerous but the latter is much worse. You need to think in terms of the magnitude of the effect, not just its qualitative nature.
That's actually a problem for the business model of mobile games. A consumer can - or very soon will be able to - pick up AI tools and cut out the middleman org churning out these illustrations, just like they cut out the professionals. It won't be too long before games are made that advertise "put your original characters in the game", and it won't be some complicated character creation tool - it'll be generative stuff.
There's a lot of "but wait, there's more" in what's happening around AI.
> I think AI use in art tools is inevitable, but replacing artists at any level is not a good thing.
Everything in the computing space has been shifting labor from one skillset to another skillset and maximizing the output per hour worked so that fewer workers are needed for the same output (but also more tasks are worth doing, because the costs are lower for any given benefit.) Why is displacing people manually building the visual component of video games any worse than, say, displacing typists, secretaries, people delivering interoffice mail -- all of whom also had salaries, dependents, and livelihoods -- while increasing the value of work in the field automating all those things?
I am a luddite and I agree with most luddite sentiments.
Most of this generative AI is NOT about using AI for boring tasks, and have you ever even tried to draw clouds? Not easy. Everyone draws clouds differently, which you would know if you ever tried to draw anything.
Moreover, AI as a societal phenomenon goes way beyond AI drawing clouds.
> which you would know if you ever tried to draw anything
I know exactly how hard it is to draw anything because I tried a bunch of times, and failed. I for one am happy that I can now express my creative ideas, which I couldn't do before due to missing talent / practice.
You're free to personally be happy that you can express your creative ideas, but it is a bit absurd to expect people who did put in the effort of practicing not to see you in a negative light, as someone who wants the 'benefits' without putting in the hard work of self-improvement.
This is a uniquely AI-related issue, as artists of all mediums can relate to each other about their struggles in learning and improving their skills and their ability to express themselves.
That's trying to put words in my mouth. We were talking about creative expression being taken away by AI, and I argued that artists can still retain creative expression, and that these AI tools make it possible for more people to express themselves creatively.
I never said that artists should have no reason to feel unhappy about that. That's criticising a position I didn't argue.
“I for one am happy that I can now express my creative ideas, which I couldn't do before due to missing talent / practice.”
The problem here is we need to look beyond our own self interest to how this will impact other people.
We don’t make a career out of art. This technology is just a novelty to us, but many others rely on their art to provide for themselves and their families, and they had no way of foreseeing the technology coming. They need it more than we do.
> Most of this generative AI is NOT about using AI for boring tasks, and have you ever even tried to draw clouds? Not easy. Everyone draws clouds differently, which you would know if you ever tried to draw anything.
Perlin noise on a plane, either in line with the camera or off at an angle. Nice effect. Very easy. I don't even count myself as a proper artist.
Clouds can obviously be hard when you have a specific cloud formation in mind — but "just" a random cloud, to the standards of most who will observe it, is much easier.
And of course, there are plenty of free photographs of clouds, and Photoshop has had plenty of filters — even from the days before people had broadband, let alone what people now call AI — to turn those photographs into different styles.
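For what it's worth, the "noise on a plane" idea really is only a few lines of code. Here is a rough Python/NumPy sketch using value noise (a simpler cousin of Perlin's gradient noise) summed over octaves; the function names, octave counts, and grid sizes are all illustrative choices, not anyone's production cloud shader:

```python
import numpy as np

def octave(rng, size, grid):
    """One octave of value noise: random values on a coarse lattice,
    bilinearly interpolated (with smoothstep easing) to full resolution."""
    coarse = rng.random((grid + 1, grid + 1))
    xs = np.linspace(0, grid, size, endpoint=False)
    i = xs.astype(int)
    t = xs - i
    t = t * t * (3 - 2 * t)                  # smoothstep easing softens the grid
    c00 = coarse[np.ix_(i, i)]
    c10 = coarse[np.ix_(i + 1, i)]
    c01 = coarse[np.ix_(i, i + 1)]
    c11 = coarse[np.ix_(i + 1, i + 1)]
    ty, tx = t[None, :], t[:, None]
    top = c00 * (1 - ty) + c01 * ty
    bottom = c10 * (1 - ty) + c11 * ty
    return top * (1 - tx) + bottom * tx

def clouds(size=256, octaves=5, seed=0):
    """Sum octaves at doubling frequencies and halving amplitudes,
    giving the soft fractal texture people read as 'clouds'."""
    rng = np.random.default_rng(seed)
    img = np.zeros((size, size))
    amp, total = 1.0, 0.0
    for o in range(octaves):
        img += amp * octave(rng, size, grid=2 ** (o + 2))
        total += amp
        amp *= 0.5
    return img / total                       # normalised to [0, 1]
```

Map the result to white-on-blue and you get a generic puffy sky; the rebuttal below is right that this covers "a random cloud", not a specific illustrated cloud formation.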
> Perlin noise on a plane, can be either in line with the camera or off at an angle.
This looks like trash and doesn't look like clouds. Even if you're doing procedural clouds, everyone does them differently. And a lot better than just slapping Perlin noise on a plane. Photoshop filters cannot change the bones of a cloud, and when people are illustrating clouds they're taking entirely different approaches. They're not just "this cloud, but flat" or "this cloud, but with a fuzzy diffused look." All you're doing is showcasing your own lack of knowledge on the subject while filling the arrogant techbro stereotype perfectly.
You're almost there. Few recognize that Art is human communication. Most just want a pretty or awe inspiring image, or an illustration to supply the consumer's lack of imagination.
Perhaps, with commercial art (the pragmatic, bread-and-butter kind) automated and pooped out by noncomprehending, non-communicating consumers, real Art, the kind that communicates the frontiers of human experience through rich metaphor at the edge of language and reason, can carry on without also having to deliver hallmark nonsense.
Yeah, the economics to allow this are all fucked. But if you're an artist communicating your human experience, that does not matter, it's a part of your work.
The reduction of "techies" to emotionless robots is an unfair generalization. The inappropriate and wildly inaccurate comparison to slave labor is out of line.
Even taking your arguments at face value, it doesn't really make sense: Let's agree and say that AI can never make "real art." How does AI art existing prevent human-made art with the intent of self expression from existing? You say it will make human art less commercially viable, but that's hardly related to the expression behind the art. Human art has just as much expression whether or not it is a commercial success. Is your argument about financial viability or expression? Would you agree that having deep human expression enhances the value of a piece of art? Are you aware that artists who focus wholly on expression were already stereotypically "starving," even before AI art was a thing?
I use any tool available to create art and express myself, so it does not really matter what you think of it. Art is a very subjective thing anyway, and we can argue for hours about what "art" really is. But here are some of my thoughts regarding your comment; I think they apply to many others in this thread as well.
>UNLIKE other tools, AI makes creative decisions. No other tool has done this
If you use any other tool or technology (for example certain oil paint, canvases, sculpting material, motif, etc.) that also implies a creative decision.
>its primary purpose is to take away the reliance on artists
That is not true. It might be the purpose that some people ascribe to it, but it is a technology & a tool.
Humans decide what its purpose is, its purpose is not inherent to the technology itself.
You can use Imagen to create very personal and individual self-expression. With open source models that you can tweak and train yourself there are many possibilities to get an individual result.
I guess you only looked at things like Dall-E and came to the conclusion that this would replace artists.
It's the same thing people said about photography: "This is just a technology to replace portrait painters, landscape painters, etc."
If you learn more about how Imagen works you will understand that there are many possibilities to make it your own and create meaningful self-expression.
>even if "normal human art" exists, it will never gain much traction or commercial viability.
There is 0 indication of that. Why do you think that?
FWIW the current art "market" is what makes me sick, so I am happy for any tech or idea that demolishes it and makes room for something new and creative.
> What techies NEVER understand about art is this: that art is not just the end product for the artist but something they use to express THEMSELVES.
And yet artists have always used tools and adapted themselves to the qualities of those tools. Paints, paintbrushes, canvases, chemicals, instruments of all kinds inform the end result just as much as the artist's initial intention.
Jackson Pollock's most famous works were produced by splattering paints on a canvas. Sure, he selected the paints, the canvas, and the trajectory and velocity of the splatters, but his works are as much the expression of stochastic fluid dynamics as they are of his vision.
Nothing is stopping people who see handcrafting every intricate detail of their work as an expression of their innermost sense of self from continuing to do so just as they always have.
If that's what they're getting out of it, why should it bother them that people who do just want to obtain the end product for their own purposes are getting it from someplace else?
> AI is a tool that is and always will make the creation of art less a matter of the expression of the human soul
I firmly disagree. I have a very strong imagination, but I never had time (and still don't have the time between full-time work and needing to learn German) to develop the skills to turn what I can imagine into artefacts that others can enjoy by my own hand. AI gives me the means to turn some of what I imagine into things I can share — not everything! (SD is so terrible at dragons, even the basic body plan is all over the place) — but it can help with many things.
> its primary purpose is to take away the reliance on artists. The ultimate aim of BIG TECH is to take away this reliance so that they can be the ultimate source of cheap art, just like cheap slave labour is the ultimate source of cheap and unsustainable clothing for most people.
IMO the purpose is fully automated luxury communism.
Stable Diffusion is free, so "Big Tech" (which would here have to include a small German academic spin-off) can't reap huge rewards from this, just like there's no huge business case for yet more video call services or social networks — too much competition for the money.
Finally, just yesterday I was watching a year-old video from a German robot supplier that's undercutting "cheap slave labour" for clothing.
It's actually "fully automated luxury gay space communism", all words are important there. Also, in the Culture series, arguably the seminal series about the "fully automated etc.", there's a whole scene about AI producing art (i.e. nobody cares or think it's interesting)
> there's a whole scene about AI producing art (i.e. nobody cares or think it's interesting)
As I recall, this was a conversation between an organic person and a Mind, a specific category of AI that's quite capable of running several billion uploaded human consciousnesses simultaneously in real time, and it was the Mind who was saying that yes, they could make far better music, but they didn't want to.
So I suppose your need to create AI diversions is more important than others' need to feel a sense of purpose through work built on actual talent, talent that you don't have? Good thing you have a full-time job...
Big Tech will benefit immensely from AI. Even if Stable Diffusion is free, it will spur the development of new computers and new technology to run models like Stable Diffusion, so even the non-obvious things benefit Big Tech.
Fully automated luxury communism is a rather bleak future, and it will take us away from being stewards of the environment, and instead consume as many resources as possible.
Finally, even if you are doing something relatively harmless with Stable Diffusion, many other people will use AI for malicious purposes.
> So I suppose your need to create AI diversions is more important than the need of others to feel a sense of purpose through their work through their actual talent that you don't have?
Not so. Sense of purpose is important.
Your sense of purpose conflicts with the opportunity of everyone else to express themselves as they wish.
For now. The LLMs will come/are coming for mine, just as diffusion models and GANs eat at the jobs of graphical artists.
> Fully automated luxury communism is a rather bleak future, and it will take us away from being stewards of the environment, and instead consume as many resources as possible.
It's (currently) pure Utopianism, taking on whatever hopes the proponents want it to have no matter how unrealistic. I therefore think you're arguing on the basis of which team it's associated with, without understanding the details of what it is you hate.
> Finally, even if you are doing something relatively harmless with Stable Diffisuion, many other people will use AI for malicious purposes.
Oh absolutely. Whole can of worms there.
Can say the same about basically every tech way back to the wheel, fire, and pointy stick, though unlike most using this analogy I am well aware of the problem of induction (in particular the turkey and the farmer), and don't claim that it will all work out just because it has so far.
AI in general could make us immortal, with lives of leisure and free from all suffering… or it could turn us all into paperclips to maximise shareholder value.
> Your sense of purpose conflicts with the opportunity of everyone else to express themselves as they wish.
I do not believe people should be able to express themselves as they wish unconditionally. They should not be able to express themselves to the point of destroying the environment, they should not be able to express themselves by creating nuclear weapons or something EXTREMELY dangerous (AI), they should not be able to express themselves by disrupting society.
AI is just as disrupting as creating biological or chemical weapons, and perhaps even worse.
And it would be horrible if AI could make us immortal. We should die...
> UNLIKE other tools, AI makes creative decisions. No other tool has done this
That's just an anthropomorphization of “produces deterministic results of its inputs that are practically intractable to compute by other means”. But Stable Diffusion doesn’t “make creative decisions” any more than POV-Ray or (pre-Generative fill) Photoshop.
I agree, but I'll be more precise: AI may not make creative decisions, but it generates something that approximates a creative decision to a sufficient level that people are willing to use it as a replacement for real creative decision.
There. My argument works without modification but with this more precise definition.
Focusing that will into something actionable seems like an important task. When artists talk about their rights it would be prudent if we also talked about our duties.
The meaning of art, the art of intelligence, is being recreated after it was gruesomely vivisected by postmodernism... We better not let the narrative come with a price tag.
> I don’t think that’s true; and doing it, really reallllly makes them angry.
It is in some cases. Obviously not all of them, but there is definitely bandwagoning going on. Go on any social platform where this is up for debate and ask why they have this position, and you will be mocked or ignored while they fail to formulate any actual reasoning for their belief.
It really just highlights the massive gap between artists' impressions and the 'techbros' talking about how it's just another tool. It's clear that many techbros really just don't understand art. A few weeks ago there was a post here about how the majority of art involved in a game is never seen by the public, where someone asked what could be done to make the process more efficient so that less 'waste' art needed to be produced. This approach to AI-generated 'art' has a similar disconnect between artist and techie.
Techbros see the tool and talk on and on about how it'll optimize workflows and how it's just another tool, cloning artstyles and patting each other on the back for automating another task, while completely ignoring the concern from artists about having basic ethical concerns trampled over in the name of disruption and progress.
It’s more just ridiculous because the same community is completely fine with photobashing and “paint overs” (aka tracing) and “fan art” (aka profiting from IP you don’t own).
I don't know what sort of artists you know, but of all the artists I know, nobody is "completely fine" with tracing. It is okay but still frowned upon when people do it for "practice" without publishing the result, but anything that even looks remotely like it is traced gets called out and further investigated incredibly quickly.
And as for fan art, a lot of companies explicitly allow art based on their IP, as long as it's used/published by the artists themselves and the commercial rights to the work aren't sold to some other company. In Japan, there is a whole industry based around derivative works, Doujin - self-published works, that works off of what is essentially a code of honor. Companies don't go against the artists, as long as said artists adhere to certain guidelines on what they're allowed to depict (eg. no NSFW content.) Many franchises have become a lot more popular due to fan art/derivative works alone (ie. Touhou Project, Fate Series.)
Tracing is particularly unacceptable in professional settings. There’s been several cases, some even somewhat high profile, where manga and comic artists have found themselves in hot water as a result of engaging in the practice.
>Tracing is particularly unacceptable in professional settings.
Tracing is a fundamental skill in professional settings, for consistency, speed, and quality reasons. In torepaku it is the paku (pakuri, i.e. ripping off someone else's work) part that is not acceptable.
I've had the situation where long term friends contact me with a child/teen either entering or graduating art/animation/film school and want me to give advice to their kid. My background is 3D graphics, animation, film VFX, video games, and AI - from the software developer and digital artist sides.
Every one of those conversations has been their kid telling me they will never touch AI, AI is evil, AI is the death of art and artists, and they refuse to see it any other way. One is graduating this year and wants to be a concept designer for high-concept film and games: a role that is leaning heavily into generative art simply for the variations it generates. They refuse to discuss how their intended industry already uses and is adopting AI generative art en masse.
Times when I wish I had the eloquent voice of another.
I don't get AI hate. It's nothing other than "technology hate".
As a web developer who started out in 2001/2002, I watched as custom web design jobs dried up, and more and more people (and ahem artists) started using online tools to create a templatised website on the cheap.
Did I throw a tantrum? Nope! I learned to do backend dev so I could make my own automation tools.
Seriously, just embrace these new superpowers already.
Who decided that they should have the power to rule over this? Not only does it appear to be legal, but deciding that dataset compilation is inherently immoral and consent-violating would represent a draconian expansion of IP law that almost no individual should agree to. If I perform statistical research and count the most common English words based on many books, should I be liable to go and ask every author if they're okay with me analyzing their works? If I see a thousand character designs and then pick up some cues and ideas to create my own, should I pay a licensing fee?
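The word-counting analogy is easy to make concrete. A minimal sketch (the mini-corpus and the `top_words` helper are both hypothetical, standing in for "many books"):

```python
import re
from collections import Counter

def top_words(texts, n=5):
    """Aggregate word frequencies across a corpus: the kind of plain
    statistical analysis the comment argues should not need per-author
    permission."""
    counts = Counter()
    for text in texts:
        # lowercase and split on letters/apostrophes; crude but sufficient
        counts.update(re.findall(r"[a-z']+", text.lower()))
    return counts.most_common(n)

books = ["the cat sat on the mat", "the cat ate the cream"]
print(top_words(books, 2))  # → [('the', 4), ('cat', 2)]
```

The analysis only ever sees counts in aggregate; no individual book is reproduced, which is the crux of the analogy to dataset compilation.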
They are only superpowers until you have made enough AI art to see how pathetically limited and un-creative midjourney is.
This is all a lot of nothing. A year ago at this time I thought we had reached the art singularity. Now after thousands of images, seeing any AI art makes me want to puke. It is just the same shit over and over and over because it is so limited in what it can actually make.
In the long run all AI art will do is make people appreciate human art more. Any human artist against AI art is an idiot.
It is not like web design or anything else. AI Art is a giant decentralized PR and unintentional advertising campaign that will show how awesome human artists are and how overhyped AI is.
In a few years people will view AI art as useless, shit, scriblings of non-artists. Why? Because that is exactly what it is.
I don't agree. I take photographs as a hobby, and share them, just for showing them off.
They're generally licensed with CC-NC-BY plus no derivatives (akin to GPL), but I don't want my images to be taken to a training set to feed a generative model without my consent, because you're violating the license terms I put on it.
Same is valid for my code. I stopped using GitHub because it devours any and all open repositories regardless of their licenses and without asking for consent.
This is not about scarcity, but respect and ethics mostly. At least, from my perspective.
For what it's worth, CC-NC is not akin to the GPL, at all.
The GPL says “you can sell this if you want, but whoever you sell it to can still do whatever they want with it, subject to the same terms.”
CC-NC says “you can't do whatever you want with this”
Which isn't to say that you're wrong to use whatever license you like, just that it's very much not similar in spirit to the GPL. Protecting the right of others to make derivatives of the thing being licensed is, in fact, the entire point of the license.
By akin, I didn't mean it's functionally equivalent.
GPLv3 is one of the strongest copyleft licenses for software out there, forcing re-sharing of changes while preventing any source closing. CC BY-NC-ND is the strictest CC license that still allows sharing: credit required, no commercial use, no derivatives.
Hence I was trying to say: "I select one of the strictest licenses for sharing my photos, as I do for my software, yet AI systems disregard my license at every occasion and just rip what I put out without my consent or any consideration of the license, hence I refuse to use or support AI models which are fed like this."
You can apply CC-BY-NC to software, people have done it. And it's recognized as _not_ being an open license, it's at best an “open-access” or “source-visible” license, because of the usage restrictions.
The entire point of an open-source license is that you're _preventing_ people from restricting modification and derivative use. The whole point of the AGPL was to close a loophole companies had found that let them comply with the software's license while withholding derivative use of their changes.
I'm not saying you're morally in the wrong for choosing a license that doesn't permit commercial use, but I _am_ saying that it's contrary to the spirit of the license you're claiming it's akin to.
Surely, there exists no other economic system where a percentage of the world's population is allowed access to food, rent and education for their children.
And there certainly doesn't exist any where the entirety of the world population can be given access to these things.
You're talking about hypotheticals, I'm talking about reality.
Saying "these silly people are obsessed with their economic system, which is pointless!" makes no sense if those same people live or die by that system.
They aren’t CS graduates so they can’t really be human or have thought processes. Literally stealing their work to pad a VC linesheet must honestly be the most altruistic of options.
Your snark would be slightly harder hitting if not for the fact that freshly minted CS grads demonstrate approximately the same skill as LLMs when it comes to turning feature requests into code.
The "G" in "AGI" is "general", as in "can do all things" — Ask not for whose job Bell Labs tolls, it tolls for them all.
(And now I'm reminded of someone, ages ago now, who was boasting about how good he was at matching all the `new`s and `delete`s in his C++ code, or possibly even `malloc`s and `free`s, being completely oblivious to the existence of STL smart pointers…)
> mad about models being trained on their work without informed prior consent
This is the "your mind has been poisoned by a corporate view of intellectual property" take. The idea that copyright should extend this far is horrifying, it is another step toward stifling and controlling creative expression. The problem is capital being rewarded way more by IP law than labor, something that isn't going to be fixed by giving people more and broader ways to own things.
You'll be surprised how recent the idea of owning ideas is.
Well I should be more clear. I don't mean by informed consent "they put it in the TOS and it's software I have to use so I'm boned". I mean more like "I use Blender, Krita, and Procreate and none of them do that bullshit. As long as there's a notification on software that scans my work so I can avoid it, I'm happy. If I want to opt-in to contributing to a model at some point, that might be cool."
I guess I should worry a bit that important software without worthwhile alternatives might start to do this, but I don't think the blender foundation will as long as Ton is alive, and I bet the same for Krita. Procreate I'm a little less confident about, but only a little, as they know who their main userbase* is and they know how those people feel about AI.
*Arguably there are more people with Procreate that barely know how to draw than people who do. BUT part of Procreate's appeal is that it's software that 'the pros' prefer to use. If the talented artists start publicly dumping on Procreate, the people who can't draw will slowly but surely follow those artists to whatever their preferred software becomes.
Have you ever actually looked into people who do that? They put in a lot of effort into understanding the original creator, their intent, process, materials etc. The copies are still very much art as they are expressions of the imitator's feelings towards the work. Performing studies of popular art by trying to replicate it is a powerful tool for learning precisely because it allows you, as an artist, to understand how the original artist thought, helping you learn to think like an artist and leading you to develop your own ideas and means of expression.
How do we know? We can do an art Turing Test - have 3 real artists and a text/image generation AI train on somebody's art. Have each of them generate images and answer questions about the art.
If you can't tell which one of them is the AI - would you concede they are "all of that"?
The claim that people don't copy art styles, or even that it's rare, is ridiculous. Open Tumblr and you will immediately be presented with a counterexample.
I mean if you're talking about throwaway 4chan/twitter-anime, that's a different tier of art. I'm talking artists more like Jeremy Lipking or Jose Lopez Vegara.
It is not just the art community: check the current top post and see how many devs are saying they would never use AI in their work. They are as resentful and alienated as any artist.
I have tried to explain "you're not mad at generative AI, you're mad at late stage capitalism" before.
Most people aren't really willing to smash the state though (I understand, that's where all my stuff is) so look for less drastic ways to protect themselves.
Agreed that trying to create artificial scarcity is not good (and isn't really compatible with any ethical system, least of all "neoliberal capitalism"), but it should be pointed out that natural scarcity is an a priori fact of nature that economics itself is our method of dealing with, and is not a normative contrivance of any "economic system".
I'm not entirely sure how any of this relates to artists agitating against AI, unless they are themselves seeking to create artificial scarcity to prop up the market value of their services, now that the supply of art ability is no longer as constrained as it previously was.
it's quite simple. artists offer to make their art for a price, as a 'service'. then something comes in that pirated their previous works and offers to make imagery in that art style for free (or at a low low price), undercutting and displacing that artist.
really it's 'yet another spin on piracy'. that's just the 'services' part; the 'selling works/artwork' side has long been rife with piracy, but 'pirating what's been offered as a service' is new - piracy expanding into the services field as well. services (particularly those that rely on someone specifically taking their time to do something, and not just 'press button, service gets performed unattended') are scarce: there's only so many hours and so much time in one's life.
Huh, I wasn't aware Krita was a thing. As a software engineer who rarely needs image editing, GIMP was my go-to software. Why is there a second open source image editor now?
I think it's totally fine and normal for multiple OSS tools existing in the same space.
Krita is almost 20 years old and is more focused on painting than on image editing – but personally, I use it for both, liking the UI much more than that of GIMP.
Back in those days, GIMP was just barely usable for painting, and Krita was mainly good for crashing. Both have come a long way since then. GIMP is still mainly an "image manipulation" program. It got better at painting, too, but you probably want to give Krita a try for that.
As a software engineer and artist, Krita feels more focused on drawing/painting with a pen, while GIMP has always felt to me as more focused on photography/editing.
Obviously they both manipulate images so there's lots of overlap in features, but the idea of painting or drawing in GIMP seems really alien to me. I'm sure the interface and pen support was even worse when the Krita project was started.
Gimp has a notoriously unusable UI. I think that's honestly probably the main reason.
I'm actually more confused by the converse: why do people keep using and recommending Gimp when Krita has existed for decades and is so much easier to use?
It took me a while to switch to Krita, because I used to think it was mainly for painting/illustrating. It took me being unbearably frustrated with Gimp's UI to give Krita a go for basic editing. Never looked back.
For simple image editing with an easy UI, I use Pinta. If I need more advanced features, I need GIMP. I've never really found a use case for Krita personally.
> Why is there a second open source image editor now?
Because it's possible and someone wants to. Same reason why we have multiple Linux distros, multiple databases, multiple browsers, multiple text editors.
Are you surprised that open source software in general is duplicated? Or is this specific just to image editing software. If yes, what makes image editing software special so that having a second option is surprising?
Yes, open source promotes collaboration. It also promotes forking and starting new projects.
It's much closer to how Photoshop generally works, with an extra focus on drawing. GIMP is not a good replacement for Photoshop for artists. Krita is quite popular for this.
Krita is actually quite old. The reason you haven't heard of it is probably that it's more focused on digital painting than on general image manipulation.
The video is mind-blowing because on one hand Adobe announced this as Photoshop's "next big thing", and here we have open source software replicating the same thing. So cool.
edit:
This also means Photoshop doesn't have the "moat" they seemed to have built around generative AI and their software.
What Krita and the KDE project in general have achieved is nothing short of phenomenal, and I don't believe the power of libre software is recognized enough even in dev communities like Hacker News.
I think one could generate the background first and then the characters separately on a different layer, so any editing only affects them and not the background.
Edit: just tried it and it mostly works. Generate the background, then add a pose, add a new layer and paint on top of the pose, select the area around the character and click generate. The caveat is that it also generates a bit of background around the character, but it does not change the rest too dramatically.
Krita support for generative inpainting has been around since the beginning of the Stable Diffusion craze. It was one of the first AI projects I saved. It definitely predates Photoshop adding it.
Stable Diffusion heralded an explosion in generative AI that predated ChatGPT. Weird how OpenAI got all the credit when it was Stable Diffusion that first opened the gates.
Nah, that is an early version of a plugin that uses A1111 as the backend instead of ComfyUI (it does have a newer and maintained replacement, but it's not the one in OP, which uses a ComfyUI backend).
While watching the video I was also thinking "just like Adobe's stuff". Many of the Photoshop users will ask themselves why they should continue to pay them, if the evolution continues this way. Nice to see.
Sure, Krita is not Photoshop, but for the tasks certain creators will be doing in the next decade, they won't have a need for Photoshop anymore.
Interesting to see that the video is 2 months old.
That is already true not just for Photoshop, but for almost any kind of proprietary software. If you are willing to embrace the caveats and DIY nature of FOSS, it is good enough for almost every task (and sometimes better than proprietary software).
I think one of the major reasons for the popularity of proprietary software vs. FOSS is marketing.
If you look at the very right of the screenshots, there is a "history" of generations with unused alternatives. Me and my visual cortex managed to synthesize "before" images from this information.
At this point, selecting good screenshots for Git READMEs should be a profession of its own; it's baffling how many projects' appeal could be really enhanced by simple informative screenshots.
I saw a person using this. The system had a 4090, which can pull about 20-30 iter/sec. That roughly translates to 3-4 images/sec at 8 iter/image, which allows interactive AI drawing (though a bit quirky). Once the desired image is reached, the user can re-run with 30-50 iterations to finalize the image. This is really cool.
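That throughput estimate can be sketched as a quick sanity check (the iteration rates above are the commenter's rough numbers, not measured benchmarks):

```python
def images_per_second(iters_per_sec: float, iters_per_image: int) -> float:
    # Diffusion throughput: sampler iterations per second divided by
    # the number of denoising steps each image needs.
    return iters_per_sec / iters_per_image

# ~20-30 it/s on a 4090 at 8 steps/image -> roughly 2.5-3.75 images/sec,
# fast enough for interactive feedback.
preview = images_per_second(30, 8)    # 3.75
# Finalizing with 30-50 steps drops well below one image per second.
final = images_per_second(30, 40)     # 0.75
```

The same arithmetic explains why the LCM-style low step counts mentioned below matter so much: cutting steps from ~25 to 8 triples the effective frame rate on identical hardware.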
Latent consistency models are a pretty radical game changer that came up recently. There are LoRAs [0] that you can just use alongside any SD or SDXL that just cut the number of inference steps you need to 2-8, rather than the usual ~25+. It's as close to magic as one could expect, and on ComfyUI my modest RX 5700XT spits out 512x512 images in probably around a second each, or a couple of seconds for a 4x batch. A more beefy GPU could certainly enable high res, very low latency interactive use.
For even better latency perception, you could hook into the generation steps and have TAESD [1] decoding intermediate latents.
ComfyUI works perfectly fine with ROCm on Linux. Using it with this krita plugin also works flawlessly. The docs are simply incorrect in saying that it's Windows-only.
I assume it's the case because the automatic ComfyUI installer that comes with this project doesn't know how to install/configure ROCm. Using your own ComfyUI installation works perfectly. I'll open a ticket with the author of the project to discuss this.
Source: I installed this yesterday on my Ubuntu computer with a 7900 XTX and ROCm in ComfyUI
It should not be. Torch on AMD has been a thing on Linux forever, even before Windows support existed. The underlying ComfyUI supports it. In fact, someone replied here that it might have been a mistake.
Why is that the case? Tools like OpenCL do exist, but I assume CUDA is simply better suited for these tasks, is that true?
(With the dominance of CUDA, choice of a GPU on Linux gets even harder. It used to be a clear "fuck you Nvidia" if you wanted to use Wayland, but Nvidia definitely has the lead when it's about video editing and machine learning.)
AMD's OpenCL implementation and tooling on Windows is terrible, even worse than NVIDIA's OpenCL tooling, and on Linux their ROCm stuff has been so unreliable in terms of its hardware support that it isn't worth the investment.
I'm on mobile and haven't looked at this project, but usually DirectML support is added as a torch backend. Instead of
    device = "cuda" if torch.cuda.is_available() else "cpu"

you add

    import torch_directml
    device = torch_directml.device()
How else are you supposed to support AMD in Pytorch on Windows?
My point was that it's not a matter of DirectML vs. torch; it's simply a choice of backend for torch. It's an easy way of adding AMD support to torch-based projects on Windows, and there's probably an equally easy way of adding ROCm support on Linux. It's just that CPU and CUDA are built in and usually the two default options when writing torch code, and somebody has to care enough to explicitly add AMD support.
It's not exactly as easy as just changing one line, by the way: not all operations are implemented, so some testing and maybe some rewrites are needed. Hopefully the GPU backend mess gets solved in the general case soon.
> Thankfully there is an option for using a ComfyUI
It's not an option for using ComfyUI; it's an option to use an external ComfyUI instance instead of one embedded in the plugin. This uses ComfyUI one way or the other.
It's always been a trend and always will be until AMD gets their shit together. NV spends a lot to make sure CUDA has the market share it does (marketing, establishing a foothold in academia, partnerships etc), AMD is working on it but progress is slow.
It’s not that NV spends a lot to get market share. It’s that for over a decade NV provided the actual tools to build all this and AMD didn’t and then when they finally did they fumbled it, then when it finally paid off big for Nvidia they had to start from scratch again.
People might not like it, but Nvidia's dominance is completely deserved from the actions, or should I say inactions, of the now disbanded OpenCL crowd.
At this point it seems pointless to even bother to try given that AI will generate all possible artwork within a couple years.
I mean, say you get "good" at using this. What's the life expectancy of any kind of creative outlet you could have that would support you? If we're talking about this being fun as a toy, yeah, OK, I could see that. But as a job? When everyone can paint, no one is paid for it.
I suppose that we could all go back to paying people who can physically lift things or wait on tables, but that's about it.
I want to use this, but then I just think "Holy shit, what if I get good at this and then get my hopes up like I did with React? What am I going to do, sell artwork that anyone can make for next to nothing on the internet?" I believe I could probably come up with some cool paintings, but the question is "why"? Everyone else on the internet will generate all the possible content it's possible for me to come up with anyway, so why does it matter?
And if that makes me care about "money" then yeah, I care about money. So what?
All of that being said, I'm now going to draw a latex-clad ninja being molested by a demon. Also, I'm broke and living in a homeless shelter. But I can get a supercomputer to draw sexy girls for me, so I have that going for me.
Seems pointless to learn to make singular highly detailed visual art pieces? Maybe. Maybe it always was pointless.
But most visual art is not just single pictures in a vacuum. Say you want to make a game with 2d still-art, or say a comic. You will need dozens or hundreds of images and they will have to be tied together by a common design — characters and style that look similar in the different images, and most of all you have to have a story to back it up. This is not something AIs can do well, not for a long while, but a human artist now may do significantly better than before with help of "dumb AI", such as the featured Krita plugin.
Finally, most artists don't think like you. It's not "pointless" to do something that can be technically repeated by other humans or AI. You do art because you want to express yourself.
> Finally, most artists don't think like you. It's not "pointless" to do something that can be technically repeated by other humans or AI. You do art because you want to express yourself.
I've seen this sentiment a bunch of times, but I don't agree. Most people practice skills and make art in order to demonstrate their value to society. Art (and media) doesn't exist in a vacuum, it surely exists for societal reasons.
A person may want to make a game or a comic, but the reason they want to make those things, instead of just consuming existing media, is also to demonstrate their value to society. But they won't have any value either when everyone else can easily make games and comics.
I don't think you are disagreeing with me. I also mean by "expressing yourself" that the artist is trying to communicate with the community and be of value to them.
I'm saying AI does not allow anyone to easily make games and comics, at least not for some while. Currently AI allows you to easily make still pictures, maybe a written chapter of a story. It does not yet compete with artists who do larger pieces of work like a book. And I'm not sure AI will ever(?) make "complete" works, because it doesn't have the full human background required to have "something to say". It only "mimics", in a manner that many artists focused on technical ability find threatening. So yeah, some "artists" will be out of work because of AI, but it will not be a big loss for the community if they are merely replaced.
The surface area of "art with message or meaning" within "all art AI can randomly generate" is so vanishingly small that it doesn't matter. Humans will be in control of the message, and thus in control of art for the foreseeable future.
When the AI finally is smart enough to have something to say, it will be an AGI and humanity will quickly be enslaved to it. No point thinking that far.
Okay, I see. I'm less worried about AI running the whole thing, and more about the diminishing quality of life for creators. Also I'm less sure than you about AI being unable to compete with larger pieces of work in the near future, especially books and comics seem pretty doable. Humans might stay in control, but over time it moves from creation towards curation, which is pretty different and has different implications over who gets to experience artistic fulfillment, and who gets to make a living. But hopefully you are right and it takes long enough that we can make some sort of societal adjustment.
To me it seems like a fundamental motivation - we do art because we want to impress those around us, gain respect, help to attract a mate, make money... Those reasons don't exist without other people to show our art to, and are much less effective if art is too easy to make and too common.
I'm sure there are people who don't have ambition to ever show others the skills they've been building, even in indirect ways, but I doubt it's common... What do you think the motivation is?
To make oneself happy, to look at something and say i did this. To pass time and gain skills. Even if you want to show it to people you care about, it's not about the rarity it's about sharing. I may have doubts of the existence of pure altruism but gaining the respect of others is a very A_type personality selfish type view. I don't think everyone is really that concerned with others.
I don't mean "gaining respect" like that's some explicit goal, but I do feel like people are pretty concerned with others, and most of us do really care about the opinions of others. It might be selfish on some level but it's a big part of being human. I can't speak to everyone's relationship with creating, I imagine if your social needs are being met already then you might just use art to pass the time and not care much about it. But usually the outcome we want when sharing our work is to feel appreciated, noticed or special in some way that scales with the amount of effort, time and skill we put in.
Being a painter, a photographer, a musician, an actor, a freelance artist in any medium, has never been a viable career for any significant fraction of the people that want to do so. It has always been a hobby that some very small percentage of people manage to make enough money from to scrape by, and some infinitesimal percentage make enough money from to be wealthy. AI is unlikely to change that, because there will very likely still be a demand for celebrities that some infinitesimal proportion of lucky aspirants will fill, and the vast majority of the industry by numbers will be hobbyist or hobbyists-in-denial who think their small business drawing commissions for some normies & wealthy furries on Twitter will be an economically sustainable career for all the people that want to do it. The most likely outcome of AI in the long run is that a lot of these people produce significantly more work of equivalent quality without being paid any more because demand won't rise (there is already massive oversupply of art, demand is the limiter for financial feasibility), a lot more hobbyists are making art because of the lower barrier to entry, and animators + VFX artists have their productivity go up by a lot and can maybe trade that into real gains in conditions if they're willing to unionise.
A theoretical nice thing about Krita and art in these past decades was that you could be an 18 year old with some ok drawing skills, a thinkpad, a secondhand wacom tablet and a version of krita, and the internet, this wonderful innovation, could enable you to make some money as an artist.
If the future expectation is that artists all have 2000 euro graphics cards, I think that will really make art a lot less democratic.
That's not the expectation at all; a lot of work is being done to make it run on underpowered hardware. SD in particular runs on an 8-year-old potato, albeit slowly and with limitations, despite originally barely fitting into 10GB VRAM.
>A theoretical nice thing about Krita and art in these past decades was that you could be an 18 year old with some ok drawing skills, a thinkpad, a secondhand wacom tablet and a version of krita
You never needed a computer for that, just a pen/pencil and paper.
For digital painting in particular, though, that only became possible in recent years. Free digital painting software sucked until recently, so 20 years ago every 18-year-old just pirated commercial software. And drawing tablets only became cheap and good after Wacom's battery-less-pen patents expired (alternatively, with the advent of iPads with pens that a lot of parents bought for their kids, and cheap drawing software in the App Store).
I'm not even starting on 3D, which always required beefy hardware. Tinkering with Maya/3DSMax/Lightwave in early 2000s required a really powerful gaming PC. These days you can at least rent a powerful GPU for peanuts to run the AI model.
Sure, the part about everyone just pirating Photoshop is absolutely true (it comes out to the same thing though; you can't pirate hardware). My point is that the gap in potential quality and art output between Photoshop on a powerful PC and a pirated copy on a ThinkPad is pretty small: you need a lot of RAM to produce 4K art, but a ThinkPad is fine for most commissions. The gap is obviously a lot larger with AI: you yourself mention that SD (just one of the models people are currently using) runs slowly and with limitations. If the expectation becomes that you deliver 100 4K permutations on a certain theme, the time it takes from a human-labor standpoint will be similar, but the render time will vary by orders of magnitude based on your resources. Not to mention that a workflow with a realtime refresh rate is qualitatively different from one that runs at 0.1 fps.
Commissions are professional work. If you need speed to get things done, you can pay to rent the GPU and the cost will be negligible to what you earn, even if you're just starting. This is really not that much of a barrier compared to the hoops hobbyist 3D artists had to jump through 20 years ago.
Regardless, you can run SD on a several years old laptop just fine, this is entirely within the reach of most; yes you won't be getting realtime updates but that's not really necessary.
And that's only the beginning. SD was trained using really poor data; everybody is doing that on semi-synthetic datasets with much higher quality labeling now; high quality data and new advancements (see the Beyond U paper [1], for example) allow fitting more into several times less weights with much faster inference. In a year or two, this will be available to practically everyone.
Trying it now and will update later (as a comment), takes a little while to download and install.
One note about the installation on Ubuntu is that you need to install Krita first, run it, and then copy the plug-in to the desired folder - otherwise there is nowhere to copy it to.
Tested on a NVIDIA GeForce RTX 3050 under Ubuntu with 4GB VRAM. Initially tried with a 4Kx4K canvas size, but it seems too much and fails. I lowered the canvas to 2Kx2K and it seems to just about be okay.
My test prompt (to compare against other models):
> (masterpiece, best quality), a giant made of rock, highly detailed, rock texture
For Cinematic Photo XL this produces a picture of rocks. For Digital Artwork XL I get a nice scene with a complex rock structure. Both take about two minutes.
It seems to work well and the integration into Krita seems quite nice. The settings are suitably simple, but would be nice if more was exposed in an advanced window or something.
They list under hardware requirements "a powerful graphics card with at least 6 GB VRAM is recommended. Otherwise generating images will take very long"
Does anyone have any idea what would very long mean on a 4GB VRAM card?
"Tested on a NVIDIA GeForce RTX 3050 under Ubuntu with 4GB VRAM. (...) lowered the canvas to 2Kx2K and it seems to just about be okay. My test prompt (...) produces a picture of rocks. (...) I get a nice scene (...) Both take about two minutes."
My very-rough feeling about it from playing around with Stable Diffusion is that it takes about 4x as long if it runs out of GPU memory and needs to shuttle data back and forth from system memory. There are a lot of variables though - on my 3070 with 8GB of RAM, I can get very impressive 512x512 images in about 10 seconds with somewhat low sample counts, or I can set it to a higher resolution and sample count with 2x upscaling and get a really sharp image in around 2 minutes.
Too bad I don't have the hardware to run it. Has anyone had success with Stable Diffusion on the Steam Deck? The only thing that works for me is https://github.com/rupeshs/fastsdcpu , but it takes 1m per 512x512 image and is LCM-based.
It says macOS support is untested, but wouldn't macOS be a great test bed, with many graphics pro users, and Apple Silicon running Stable Diffusion out of the box? DiffusionBee already does in-/outpainting and basically all the other things this integration is promising; you only have to copy/paste image data and resolution/context parameters, I guess. But then this brings in the Python ML stack, which seems like a no-go for an end-user product AFAICS, unless you want to generate endless support tickets.
(I am part of a group that builds UI on top of open models, but we stopped working on our Krita version for that reason.)