Artist banned, told to “find a different style”- AI-made art (thetechdeviant.com)
164 points by kamban on Jan 6, 2023 | 331 comments



The only real news here is that Reddit mods are power mad tyrants, which is nothing new at all. AI generated art has just given them newer, funnier ways to be in the wrong.


"Power" moderators, that is people who moderate a large number of subreddits, are untenable. It is not possible to effectively or fairly moderate dozens of communities, even if you were to spend all your waking moments doing so. This is, in part, why it's so common for popular submissions to be locked or deleted because "y'all can't behave".

The people that do so are largely doing it for their own self-gain (e.g., self-promotion) or because it makes them feel important. I had a very low stress job for a few years and ended up as a moderator for over a dozen large subreddits, including a few defaults. Socializing with Reddit's prominent moderators was enlightening.


Why do Reddit moderators do their work? They are paid in power. They get to decide what viewpoints are seen or not by others. That is a very compelling wage. And of course they all think they're doing a service by advancing their ideology because of course their ideology is the right one.


That may be true of the big subreddits. I moderate a small, niche one simply because I was visiting it every day for years and then they needed a mod. I saw it as a chance to improve an online space that I liked.


This is a common pattern with all organizations, but especially charities, non-profits, or volunteer roles. At first, the work is done by those who love the cause, but then, commonly, some combination of the following happens:

- the cause becomes prominent enough that there is now influence or prestige associated with the role, which attracts power-seeking personalities

- the original founders are too burnt out to care, or clueless/trusting such that they get outmaneuvered by savvier entrants

Personally, I see it as the entropic drift of an organization away from its original cause or mission (order) towards a vehicle for the pure exercise of power (chaos).



I think this is spot-on.


This doesn't seem to contradict the parent, presuming that you are improving it along a certain direction rather than in a random walk back and forth.


That's an extremely reductive claim. Can you really not think of more mundane reasons someone might find themselves moderating reddit?

It's hard to imagine "ideology" being relevant to the vast majority of reddit... Do you really think the moderators of ELI5 or PeopleFuckingDying or some obscure porn reddit or whatever are primarily concerned with "ideology"?

I used to help moderate a poker forum. I was a professional poker player, and an extremely active user of the forums. I don't recall pushing an ideology beyond "keep discussions constructive and topical."

The person you just replied to was a mod. Are you implying that their work was somehow about pushing an ideology?


> It's hard to imagine "ideology" being relevant to the vast majority of reddit...

This is egregiously incorrect.


In what sense? Would tipping that scale entail that "all reddit moderators are only and exclusively motivated by ideology?"


"keep discussions constructive and topical."

is subjective


Of course it is. That does not entail that forum moderators (myself included) are exclusively motivated by a desire to push an ideology, which was the topic at hand.

Nor does "subjective" entail "ideological" unless you're going to torture the term ideology being having a useful meaning.


We're not talking about forum moderators, really, but power moderators. Someone who mods a random topic they like is probably decent at it and not trying to push something. Someone who's gotten themselves into a position to mod 10+ of the largest subreddits on reddit is probably not doing very much actual modding and is very much trying to push something.


The person above stated "Why do Reddit moderators do their work? They are paid in power." They appear to be talking about moderators in general, not just power mods. The root comment here also appears to be talking about mods in general (with little idea what they're talking about, AFAICT).


> Nor does "subjective" entail "ideological" unless you're going to torture the term ideology being having a useful meaning.

What is your useful definition of "ideology"? Why isn't "subjective" included in your definition? Why would including "subjective" in your definition make it less useful?

Before hearing your response, I'm going to guess that you're thinking ideologies need to be "significant" for them to be an ideology. I'm guessing you don't think that subjective opinions are ideological because you don't think they're important enough to get that label.


Reddit mods have a habit of blocking/censoring views they don't agree with (mainly all on one side, consistent with their ideology). That doesn't mean it applies to every subreddit, but if it weren't a widespread problem do you really think anybody would be talking about it?


Reddit's larger subs, particularly their political ones, are 100% content farming and ideological cults. Subs with hundreds of thousands or millions of subscribers that regurgitate twitter screenshots with timestamps removed, and where dissent is often banned. Antiwork, Latestagecapitalism, WhitePeopleTwitter, and many others.

Related, I got banned from entertainment for saying an exchange between JK Rowling and a trans person wasn't "mocking". I didn't defend her, I just called out a shitty title.

When I messaged the mods saying, in essence, "y'all are dumb and need to distinguish fact from opinion" they flagged me for harassment, which is one demerit away from a sitewide ban.

I know some mods are decent, and it's better in smaller subs with some actual purpose (city, hobby) that isn't memes, violence, porn or politics. Take any of those categories, in a sub of any large size, and it gets really scummy really fast.

(Shout out to r/Texas mods for not sucking).


It's common knowledge now that abstaining from politics is taking the side of the oppressor. This has the effect of giving many subreddits and other topical communities a political orientation even if politics is nominally offtopic. To give an example, it is increasingly hard to find an online knitting community that tolerates conservative viewpoints; most have followed the lead of Ravelry.


This is absurd beyond belief. Knitting has precisely nothing to do with politics, so I struggle to imagine what "conservative knitting viewpoints" even are. Or liberal ones, for that matter. Cross-stitching is a tool of systemic racism? Purl stitching is inviting the illegals to tukk-urr-jurrbs?

To me phrases like "abstaining from politics is taking the side of the oppressor" are just so damn American. You guys, more than any other nationality I've met, tend to dive head first into whatever ideology or sect or even hobby you happen to get into. There are, of course, people who are "extra" in every viewpoint or occupation. But more so for Americans.


>This is absurd beyond belief. Knitting has precisely nothing to do with politics, so I struggle to imagine what "conservative knitting viewpoints" even are. Or liberal ones, for that matter. Cross-stitching is a tool of systemic racism? Purl stitching is inviting the illegals to tukk-urr-jurrbs?

Apparently politics definitely leaked into that community a few years back. I recall reading stories about it back then.

https://www.vox.com/the-goods/2019/2/25/18234950/knitting-ra...

I am not part of that community but if it behaves like almost any other online community, any accusations of racism seem to always create a backlash that lumps the conservative-leaning folks within the group in with racism, whether the conservative has outright committed any racism or not. There tends to be a guilt by association that seems to happen often—where if you have opinion “A” (some standard conservative opinion on some subject not directly tied to racism) you must also have opinion “B” (some fringe race-oriented opinion sometimes found in conservative circles).

So folks just stay silent and try to just knit (or focus on whatever the interest of the group is), afraid to disclose any political opinion in a non-political interest group for fear of the label. Then they get called out anyway, because there isn’t an overt acknowledgement of the “correct” political ideology by the concerned members. That results in the “abstaining is oppression” attitude. You then find these kinds of communities creating rules that don’t just discourage political conversations but attempt to exclude people of a particular political viewpoint altogether.

I don’t know if it’s distinctly American, but it definitely seems to happen here a lot. To be honest, I find it all ridiculous.


> There tends to be a guilt by association that seems to happen often—where if you have opinion “A” (some standard conservative opinion on some subject not directly tied to racism) you must also have opinion “B” (some fringe race-oriented opinion sometimes found in conservative circles).

To be fair, U.S. conservatives are only reaping what they sow. The overtly racist wing of conservatism received such a drubbing after civil rights went through that they had to scale back the racist rhetoric and talk about social and economic policies that disadvantaged certain races, but appealed to traditional ideas about federalism and small government. So now whenever anyone talks about federalism and small government, it is assumed that there is a racist agenda lurking behind those appeals because historically, there was.


So it’s ok then to tell some 80 year old lady who just wants to share knitting things with other knitters that she is no longer welcome because some knitting activist asked her if she voted for Ronald Reagan in 1980 and she said “yes”?

Sorry, but that is just hateful and completely unnecessary.


One example off the top of my head: knitting is probably one of the most heavily gendered hobbies, which carries a ton of political baggage with it wrt gender politics.


No, it's not "common knowledge" that abstaining from politics is siding with the oppressor.

That's an unfalsifiable ideological assertion that has been well-socialized, but that doesn't make it fact and lots of people disagree with it, because it's an opinion, and it's one that presupposes a Foucaultian worldview of human dynamics as being able to be distilled down to pure power struggles.

It's absurd to see that bandied about as truth just because it's "common knowledge." I bet in Communist China it was "common knowledge" right before the famine that killing the sparrows would bolster the harvest, too.


It does not "presuppose a Foucaultian worldview". You have centuries of this sentiment permeating culture at the very least.

One example of many:

> Bad men need nothing more to compass their ends, than that good men should look on and do nothing.

~ John Stuart Mill


So the implication you're drawing from this... which was the topic at hand... is that all knitting forum moderators are motivated exclusively by the desire to espouse an ideology?


Please don't troll or make bad-faith arguments on HN. This post contains two instances of moving the goalposts and using extreme language for strawman attacks.

People are pointing out that moderation is often biased and that the power of controlling the narrative and topics & viewpoints that are allowed is a motivator for many moderators. Your strawman argument that "all moderators" being "exclusively" motivated is just rhetoric to try to win against a claim no one is making.


No, but preventing the Nazi camel from getting its nose in the tent is now an important motivator for moderators.


Sure, yeah. I doubt most knitting forum moderators wake up thinking "gosh I'd better get to that knitting forum to prevent the Nazi camel from getting its nose into the knitting tent," but I don't know any knitting forum moderators.

The comment I was replying to - "all individuals of class X are motivated exclusively by vicious desire Y" - isn't a truth-seeking comment, and I think we can do better.


Please tell us more about what you learned from socialising with these "power mods", I think it would be enlightening to us too.


> This is, in part, why it's so common for popular submissions to be locked or deleted because "y'all can't behave".

But they have no problem digging into downvoted comments and deleting them, even if the system already did the job for them (put the downvoted stuff at bottom and hidden).


Maybe this is already well-known, but I saw another behavior recently on some subreddits where a lot of new posts are seemingly getting a single downvote to a score of zero. I suspected there was some troll doing it, so I went down the line and gave about 20 or 30 of them a single upvote back to one, refreshed the page, and most of them had immediately gone back to zero. I think the mods must have a button for squashing a post without actually deleting it, and on some subs they use it for a huge number of the posts.


There is also just a massive number of bots on reddit. Unidan, an old famous redditor, was involved in a controversy where he would use bots to downvote all the posts made at the same time as his own posts so that his posts would be more visible. He's not the only one doing things like that.


There are a LOT of bots, yes. Pretty much any submission that makes it to /r/all has a high likelihood of:

- being submitted by a bot reposting popular posts

- having comments from bots that repost popular comments from prior submissions

Though I'm pretty sure Unidan just logged into alternate accounts and downvoted things manually.

https://www.reddit.com/r/SubredditDrama/comments/2c9ida/reca...


Those numbers are fuzzed. You don't see the actual exact number of votes.


/r/3Dprinting has no such button for what it's worth. If there is they haven't given it to me.


They've added custom CSS to hide it, it isn't actually removed. You can easily bypass this by disabling custom CSS or using a mobile app.


13of40 was talking about a single button that mass downvotes multiple comments. None of the default (nor popular) clients support this functionality, but you could script your own version using the reddit API.


Oh, I guess I misinterpreted that because it's a bizarre speculation.

Shwartzworld is correct, the votes are fuzzed and deliberately fluctuate so that it's difficult for bots to tell whether they've been detected.


I was talking about a button that sets the score for a single submission to zero and makes it stick. I know the numbers are fuzzed, but I don't think fuzzing will take something with a score of 1 and show it as 0.


I highly doubt such a button exists.

> I don't think fuzzing will take something with a score of 1 and show it as 0.

If one or more people decided to mass downvote new submissions (which isn't exactly uncommon), or the submissions are controversial, they will show as 0 even if they're actually -/+ 2 (for example).

If you decided to mass upvote them you will initially see the total increment by one, however, subsequent refreshes will show you fuzzed numbers. I believe the more active you are (e.g., upvoting 30 posts in the span of a minute), the more aggressive the fuzziness will be.


> I think the mods must have a button for squashing a post without actually deleting it

No, Reddit does not provide such a feature. And setting up an outside bot farm to hide posts makes no sense for mods who can already just delete a post outright.


I wonder if/how a hard limit on number of communities moderated would work? Make it so one person could only mod 2-3 subreddits. Unfortunately, this would require some work from Reddit the company to keep those same powermods from just making new accounts so probably won't happen.


They already have rules about making secondary accounts to evade bans, upvote yourself, etc. If anything it'd probably be easier to enforce a "no moderating a bunch of subreddits across multiple accounts" rule since it's mainly bigger subreddits that matter, and "mod teams of big subreddits" is a smaller group of people to monitor.

And if there was really some big conspiracy to skirt around this system they'd have to organize on a platform outside of reddit, ensure everyone is always accessing through VPNs so reddit doesn't notice multiple accounts modding from the same IP, and hope no one ever defects and exposes the underground moderation ring.


It'd be logistically easier, I just doubt Reddit is going to put forth the money and staff time required to enforce it. Current enforcement is based on reporting, I believe, and don't reports go to mods first?

And they definitely will organize off site. Discord is huge for this. I also bet they would use VPNs since a lot of them have the barest hint of tech knowledge and a burning ideological conviction they're doing something important.

> hope no one ever defects and exposes the underground moderation ring.

Man, whoever had the balls or ovaries to do that would be immediately smeared and mobbed.


>They already have rules about making secondary accounts to evade bans, upvote yourself, etc.

If that rule was enforced, huge swathes of the so called power mods (and admins) would be removed. That's probably why, much like Twitter, they declined to hire me on to work on anti-abuse technology.

The so called humans in the loop are evil and replaceable.


Reddit just doesn't have any incentive to do this - you're talking about people who are doing free labor for them. Maybe it's got problems, but if you get rid of the power mods (and don't change the structure to add any incentives like pay), you probably just end up with a bunch of unmoderated communities that then die off.


Yes. This is why I doubt it will happen.


I would suggest that they follow the US system of government where users can vote out corrupt or useless moderators via referendum.


Reddit allowing moderators to lock threads was a mistake. It just enables lazy mods to be even lazier.


The laziest thing to do is to not do anything and let someone else moderate. While disagreements over moderation are normal, complaints about volunteers being "lazy" for not doing the impossible are absurd.


Hardly. Volunteer positions come with responsibilities. No one is forcing them to waste their time on moderating an internet forum, but they chose to do it for whatever reason and then chose to be lazy and actively harmful.


The complaint here is about a moderation tactic that is not actively harmful or abusive to people and eases the mods' own load when threads become too much. Complaining about that is absurd.


No. The purpose of a subreddit is to let users engage. Excessive locking decreases user engagement and is harmful, not to mention that it punishes users (who are no longer able to continue an already-started conversation in the comments) for the violations of others.


>"Power" moderators, that is people who moderate a large number of subreddits, are untenable.

At least you know it's just one person, spread across multiple contexts.

I had a string of unusual behaviors when I ran /u/dontbenebby, culminating in involuntarily being made the moderator of several Snapchat-related subreddits around the time that Reddit let you view analytics and things I was posting were getting six or seven figure views as I dodged literal assassination attempts every time I tried to take a peaceful walk in the woods.

For context, I was (in)famous for not logging IPs, or even numbers of pageviews, as far back as when I dropped that Facebook zero day on my blog and virtually planted myself in the middle of the protests against Mahmoud Ahmadinejad, and then went on to lecture a class full of CMU students that they should use strong anonymity tools and careful opsec if organizing protests in oppressive regimes like Tehran or Times Square as I threw up an image of a dead protester on the screen.

I meant what I said then, and I mean it now.

And maybe I spoke offline with whoever made me the moderator of a subreddit I never visited, for an application I have never used? In that case, let's share with the whole class the three core points my art was intended to drive home:

1.) They are going to nuke Penn Quarter, not Pittsburgh.

2.) It is not my problem if you drop dead of a heart attack because you fucked around and found out.

3.) I am an alumni -- that means I can do whatever I want.

Anyways, I'm off to read a book and "do email".

Cheers!!

- Greg.


Reddit is a worse echo chamber than Twitter ever was.

I gave up on it when I got banned from certain subreddits for posting quotes from congressional testimony. If you post anything that deviates in the slightest from the moderator's viewpoint, you get banned.

The end result is an echo chamber that's getting tighter and smaller, excluding any diversity of opinion. It's no way to run a business.


Reddit's business is to house all of the echo-chambers, though, isn't it? That seems like a great business to be in, during this Heyday of Echo Chamber Construction™.


>"all of the echo-chambers"

All of the like-minded echo-chambers. They seem to have no appetite for certain heresies and have walked back Aaron Swartz' original emphasis on free speech as a virtue.


> All of the like-minded echo-chambers. They seem to have no appetite for certain heresies and have walked back Aaron Swartz' original emphasis on free speech as a virtue.

Reddit is ultimately amoral, despite the sensibilities of its moderators, imo. They want to sell ads and IPO, thus they've been increasingly purging communities, posts, and individuals that are either not advertiser-friendly or create trouble.

Even as a casual user of the site, I have noticed a sharp increase in the number of submissions and comments that get Removed by Reddit (i.e., administrators) for no reason. I think they just went completely 'mask-off' after the debacle with Aimee Challenor.


It's not just the mods, the users on Reddit are equally awful and contribute more to the echo chamber imo.

The UK politics subreddit used to be one of my favourite subreddits back in the early 2010s. Back then it was quite a small community and while we had differences of opinions I think it's fair to say we enjoyed each other's company. But around the time of the Brexit vote, then Trump shortly after that, the subreddit started getting flooded with reactionary, low-effort comments and anyone who tried to provide a nuanced opinion or alternative view point was typically downvoted and insulted.

I, along with a few other long-time commenters, was mostly in favour of Brexit at the time, so we would constantly be downvoted and insulted whenever we wrote anything in favour of Brexit. And the worst was when a post made it to /r/all, because then you'd get an even larger flood of low-effort commenters just downvoting and insulting everyone with a different opinion.

And this wasn't even just minor insults, this was people telling me to kill myself and that I'm a horrible person, literally every day. I'm not sure how much this was a political subreddit thing vs Reddit generally, but it was honestly ridiculous the stuff people would say to me there.

Needless to say, I obviously left the community shortly after 2016, but I've seen similar things play out across the site since. There seems to be no room for a difference of opinion there anymore. The mods, if anything, are just an amalgamation of the average Redditor.


The users being a horrible part of the echo chamber stems from echo-chamber-promoting moderation. Mods instaban (shadow ban) anyone and everyone in an extremely automated fashion based on a long list of rules and filters. You're only left with people that perfectly toe the line.


Fun fact: there's a popular car sub that will ban you for mentioning dealer markups in a disparaging way. That's right: if you say that Joe's Toyota tried to upcharge you $15k for a Camry, you'll be banned for life!


If it is the one I think it is, they also automatically block comments containing "stealership".


>you'll be banned for life!

Or for the 30 seconds it will take you to make a new account.


is it /r/cars?


Reddit mods are ineffective or harmful a lot of the time, but so is Reddit itself in how it incentivizes thankless moderation and oversized and noisy communities for the purpose of ads and their upcoming IPO. Most non-niche subreddits could be replaced by ChatGPT at this point.


> Most non-niche subreddits could be replaced by ChatGPT at this point.

According to the "dead internet theory" they already are. I'm inclined to believe that a lot of the political discourse on there is bot driven.


It feels like it. It is so single minded compared to any real group of people I know.


Many just look like an RSS feed of news outlets, except moderated by morons. And then there's the obligatory wide-reaching TOS, where a discussion about, say, how fighting games could be more accessible to new players gets removed under a "no content for fun or entertainment allowed" TOS point...


For real. From the headline I thought this was going to be a "ban" from an art department or marketplace or something of actual value, which would actually be news. Being banned from a subreddit for an arbitrary/idiotic reason is just reddit as usual.


HN is really lucky to have dang. Although he is a paid mod, so it's not 100% comparable.


Reddit mods fully convinced me that most of what we read on the internet is written by insane people. And I include most Reddit mods in that.

See previous discussion: https://news.ycombinator.com/item?id=18881827


The amount of great discussions I've seen shut down because a moderator removed the post on some stupid minor infraction is infuriating.


It is still surprising - at least for me; I've been using Reddit for years but mostly niche subs, nothing popular - how such petty power warps people's minds. I dread to think what real power does, then.


But in this case, the mods can't win. If they let AI art take over, HN will be condemning them for putting artists out of business. If they refuse to allow AI art, HN condemns them anyway.


On the contrary, AI will put a lot more artists in business by drastically expanding the range of artistic works that can be created and the kinds of people who can create great works. Hanging a rectangular still painting on the wall is of limited appeal in the age of smartphones and VR. Imagine being able to paint the walls and ceiling of an entire house with aesthetically appealing, one-of-a-kind art, for the same effort as is currently required for a small still painting. A lot more people will be interested in paying for that, and they will be willing to pay a lot more.


I wonder if the simple solution is to prevent mod accounts from interacting on the subreddits they moderate.


This would probably make the situation worse. You'd remove the incentive for people who moderate for the good of their own community, leaving only those who do so for the power they get (which are probably worse mods, though I don't have a citation for that).


Not sure whether that would create more problems, but there should be some accountability when they behave like a-holes. The mod in question could have asked for proof (draw live on webcam? - not an expert, just wondering) or peer analysis instead of just silencing the artist. Then another user contacted the mod to complain, and got this reply: https://nitter.bird.froth.zone/MeaririForever/status/1607826...

That mod has become toxic and imho should either apologize or be removed asap.


I'm almost in favor, but I would worry that for really small communities that deprives the sub of its biggest contributor


Maybe implement it once the community reaches a certain size/certain volume of posts?


That's the opposite of what you want. Good mods are those that are invested in the community and are well-known and respected participants.


That's not news and applies to pretty much any moderators, especially the non-paid kind.


proof: I was a mod of a small hobby subreddit for a while, and I'm an idiot.


Small hobby subreddits are the best subreddits, and among the only ones I visit. I don't usually have problems with those mods. Anything front-page, or remotely popular with a wide audience, is where the worst mods (and posts / commenters) are.


Came here to say this.


Speaking on the topic of Reddit and mods and its power structure:

In the UK Reddit has pushed a new subreddit called "HeyUK". It has turned up in the subscription feeds for some (all?) UK users automatically without the user asking for it or adding it. If you remove it from your list of subreddits the posts will still show up in your feed as "sponsored". As far as I can see this new subreddit is seeded with just cross posts from other UK subreddits and is created/pushed by Reddit itself.

The big issue I have is that this is just another subreddit with 15-odd random people who are the mods. These people have the unilateral power to shape discourse and be the arbiter of what is "UK" and what isn't.

Reddit is getting a bit too big, this feels very strange. On the swing-back we then have Reddit not banning the "jailbait" subreddit until it made major US news.

I have no idea what's going on with social media anymore, I'm just left with the overwhelming feeling that the people with the voice and the power are not the best of us.


Reddit has definitely started a massive push in an authoritarian direction.

I actually was permanently banned from reddit last night, for "spreading hatred/violence", after saying "I didn't know shooting a guy in the nuts would kill him" in a video game subreddit. It kind of caught me off guard.


If it doesn’t kill him, he might wish he was dead.


From all of reddit or just that subreddit?


The beauty of reddit is that you can create your own subreddit with your own mods and have the discourse that you want. You're not held to just sticking with the subreddits that are given to you.


Until you can't. /r/star_trek was created by people unhappy with /r/startrek and was recently banned.


Banned for what? I've seen communities splinter off into separate subreddits and none were banned.


Sounds like there is more to it; Daystrom Institute is fine and a great alternative.


Let me guess, they were brigading the main Star Trek sub in some "anti-woke" crusade, right?


Likely for not thinking Burnham was the bestest ever.


>"The beauty of reddit is that you can create your own subreddit with your own mods and have the discourse that you want."

The "just start your own" strategy rarely works, and at the end of the day you're still on a platform that can ban your little upstart for any reason it wants. That's assuming the people in the main subreddit even learn that your alternative exists, as the mods don't want people leaving their little fiefdom and can ban you for mentioning it. This mindset is also counterproductive because it advocates for completely and totally giving up any effort to address the problem in the main subreddit. It lets the troublesome moderators run unchallenged and can make things even worse.


It can be 'sponsored' as the parent noted. (By whom? Why?)


Not only sponsored - but also automatically added to your list of subscribed subreddits! I only saw the "sponsored" posts when I went out of my way to unsub from this new subreddit I was unceremoniously subscribed to.


What the parent is pointing out is that Reddit is choosing subreddits for people to subscribe to. Sure, I can "create the subreddit with [my] own mods and have the discourse that [I] want". But nobody will see it because everybody is automatically added to the subreddits that Reddit chooses.


> everybody is automatically added to the subreddits that Reddit chooses.

No? That’s not how Reddit works, you join the subreddits you want to join


It's called "the defaults".


The concept of default subreddits stopped existing in 2017.


Oh wow. I've effectively been off reddit for years now, didn't realize they got rid of the defaults entirely, now I'm curious.


AFAIK, these days the content shown when visiting the home page at https://reddit.com is exclusively comprised of subreddits that you're subscribed to. You can visit https://reddit.com/r/popular to see an algorithmic page from subreddits that have opted in to it, which is mostly dominated by like 50 mega-subreddits, but the algorithm is still capable of surfacing small subreddits once in a while (it's not just based on the absolute number of votes, it also has a votes-per-number-of-views component or something). And if you're not logged in, you just get /r/popular when visiting https://reddit.com . (However, I rarely use either of those features, I only have the one subreddit I care about, so I could be inaccurate.)


Looking at one of the mods there, they've only had an account for 3 months. 0 comments, 1 post karma.


That isn't necessarily an indicator of something nefarious. I've been active in a community using an account that has identifying information about me. When I joined the mod team, I did it under an alt to avoid disgruntled users becoming a personal safety issue.


Reddit's weird setup where essentially random people who were first "own" entire categories (= subreddits) of discussion is totally absurd.


I was a moderator for r/robots for a while. I even spent a few hundred dollars hiring an artist to create a theme.

I begged them to help me pin down what the subreddit was about since the submissions were all over the place, and some people seemed to think it was for a certain type of robot content and others a different type. Most ignored the question.

I tried to share articles and videos of actual leading-edge robots that I thought were awesome. Generally these were ignored, along with most such things. Occasionally a video of a real robot would randomly become popular for some reason. The worst, most-repeated robot sketches would often receive many votes. Anything even remotely erotic went straight to the top.

They seemed to like art quite a bit, but often the voting was the opposite of what it should have been. Like artwork that was clearly derivative or low quality was top billing for the day, and amazing work was ignored.

Then there was someone who really wanted to use it for some channel that was obviously kind of a stealth marketing system. I repeatedly warned everyone about it and tried to discourage it, but the only feedback from anyone was that they liked the content and I was overreacting.

Due to the incredibly poor judgement of the people voting in the sub, I got fed up and left.


> Like artwork that was clearly derivative or low quality was top billing for the day, and amazing work was ignored.

With all Reddit-like forums, there are big secondary factors in what bubbles to the top. Just as an example, the timing can be critical [0], as can the current top posts [1], the current trends, and even just whether the people who like that particular style are browsing new at just the right time to give the post some starting traction. It's a bit like the difference between weather and climate: you really need to sample a lot to get an accurate picture.

[0] If a lot of users are online when the post is new it will earn a lot more votes. Someone on a data subreddit actually modeled which times are the best to post content for maximum visibility. EDIT: There's even a website for this now: https://dashboard.laterforreddit.com/analysis/

[1] "Blocking" the top spots can easily happen. Just look at what stays at the top of HN on a Sunday vs. when Elon stirs something up at Twitter yet again.


With art specifically, people and especially non artists don't judge it on art-school kind of criteria. They don't care that someone somewhere else did something similar. They don't care about formal quality.

They care about how it makes them feel.


Good for them. You obviously wanted something different from what the actual users wanted.


Eh, the problem with this idea is that everything appeals to the lowest common denominator eventually. All subs (and even HN) would fall to Sturgeon's law if led to its logical conclusion.

Smaller forums in the past tended to deal with this by having some groups that were strongly moderated, with a strict set of rules and no/low tolerance of them being broken. Then more 'general' discussions where the rules were lessened.


What he wanted was a community with a specific theme and purpose. What he got was a bazaar of randos who showed up to share softcore pornography.

If a bunch of people showed up tomorrow on Hacker News, who mostly wanted to discuss Pokemon, they'd probably be shown the door. I don't see why a person running a subreddit shouldn't be able to do the same thing.


They wanted the clickbait and trash.


What do you think would be a better model? I have a hard time imagining one.

It doesn't seem all that different to me than whoever is first to claim a company name, a domain name, or when we go back further in time, land.


IRC Network administrators tend to enforce "official" names (owned by a particular group) vs "unofficial" names. For example, "#Ubuntu" is owned by the Ubuntu devs. While the unofficial ##Ubuntu is just fan owned.

This already is a huge step forward compared to Reddit, where you aren't even sure which subreddits are owned by the corporation / official groups, or if they're "fan run". Are you sure /r/Ubuntu is a fan-run subreddit, or does it have official Ubuntu communications?


Which companies do actually own and moderate their own subreddits? I have always treated all of them as unofficial, even if there are employees of the company that post periodically.


Many streamers are root mods on their own subreddits. I doubt they do most of the moderation themselves, but they appoint and manage other mods. Most aren't companies, but some of the big ones have LLCs to handle their streaming businesses.


I'm pretty sure /r/Wargroove was company-run (and well run), mind you. It's not necessarily a bad thing to have company-run subreddits. But it should be clear when it's an "official" vs "unofficial / fan-run" subreddit.


/r/Hololive is like that


Some are pretty much modded semi-officially by corporate cucks who remove anything that might hurt their favourite corporation, even if the community clearly wants to discuss the topic.


For a large-ish one, /r/Stadia (rip) was run by Googlers that also tagged themselves officially on the subreddit.


> IRC Network administrators tend to enforce "official" names (owned by a particular group) vs "unofficial" names. For example, "#Ubuntu" is owned by the Ubuntu devs. While the unofficial ##Ubuntu is just fan owned.

I didn't realize this was a thing outside of Libera, and maybe Freenode still. But then, I haven't been on other networks in quite some time.


I don't think it is. That said, the Freenode extended universe is the only corner of IRC that's really designed for official project communications.


The problem with this is that historically a person will create a subreddit, add other moderators, and then vanish. The other moderators will build up the community and culture of the subreddit, and then one day, years down the road, the top moderator returns and decides they want to make unilateral changes that the actual moderators are powerless to contest.


I'd say that's a feature, not a bug. I've done it. In my particular case shortly after the beginning of covid some mods wanted to ban all discussion of hobbyist medical 3D printing, things like 3D printed masks and even those Prusa face shields.

I didn't think that we had the expertise to be the judges of truth in that area (it was during those few weeks when the government was saying "don't use masks", and these moderators agreed with that), and the relationship quickly deteriorated.


> I'd say that's a feature, not a bug. I've done it.

Arguably, what right do you have to decide the direction of a subreddit you have never actually contributed to? It's like an absentee parent returning after a decade and immediately trying to enact major changes in your life because "I'm your father".

There have been countless controversies of useless "head mods" returning and doing just that. WSB comes to mind, but there have been many worse ones.

https://www.reddit.com/r/wallstreetbetsOGs/comments/vd4gn7/w...

(I am not saying that's applicable for you specifically, as given your example it sounds like you actually had some level of care and activity.)


I find that mods who are heavily invested in a community tend to be worse, doing things like getting into arguments with users and then banning them. Obviously your mileage may vary, but most of the times I've seen this happen it's been used to kneecap mods I'd consider to be "power tripping". /r/linux is another example.


I wonder if it's possible to replace mods completely with a bridging algorithm such as the one used for Community Notes. The algorithm is very clever in that, so far, there doesn't seem to be a way to game it. It aligns user incentive with a goal of quality. As a result, it tends to make high quality decisions.

It might be possible to use this method to crowdsource things like creating subreddit rules and removing comments that break those rules.
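For concreteness, here's a minimal sketch of the bridging idea as I understand it (a toy matrix factorization in the spirit of Community Notes, not the production model, and nothing Reddit actually offers; the ratings and hyperparameters below are all made up):

    # Toy "bridging" score: each rating is explained by a global mean, a per-rater
    # intercept, a per-item intercept, and a 1-D "viewpoint" factor. An item keeps a
    # high intercept only if raters with opposing viewpoint factors both liked it.
    import numpy as np

    rng = np.random.default_rng(0)

    # ratings[(rater, item)] = 1.0 (helpful) or 0.0 (not helpful) -- invented data
    ratings = {
        (0, 0): 1.0, (1, 0): 1.0, (2, 0): 1.0, (3, 0): 1.0,  # item 0: liked by everyone
        (0, 1): 1.0, (1, 1): 1.0, (2, 1): 0.0, (3, 1): 0.0,  # item 1: liked by one "side" only
    }
    n_raters, n_items = 4, 2

    mu = 0.0
    rater_bias = np.zeros(n_raters)
    item_bias = np.zeros(n_items)          # the "bridged helpfulness" score we care about
    rater_factor = rng.normal(0, 0.1, n_raters)
    item_factor = rng.normal(0, 0.1, n_items)

    lr, reg, reg_item = 0.05, 0.03, 0.3    # item intercepts are regularized hardest

    for _ in range(2000):                  # plain SGD over the observed ratings
        for (u, i), r in ratings.items():
            pred = mu + rater_bias[u] + item_bias[i] + rater_factor[u] * item_factor[i]
            err = r - pred
            mu += lr * err
            rater_bias[u] += lr * (err - reg * rater_bias[u])
            item_bias[i] += lr * (err - reg_item * item_bias[i])
            rater_factor[u], item_factor[i] = (
                rater_factor[u] + lr * (err * item_factor[i] - reg * rater_factor[u]),
                item_factor[i] + lr * (err * rater_factor[u] - reg * item_factor[i]),
            )

    # The intercept is the bridged score. The viewpoint factor soaks up the partisan
    # split on item 1, so the consensus item should come out well above it; a real
    # system would only act on items that clear some tuned threshold.
    for i in range(n_items):
        print(f"item {i}: bridged score {item_bias[i]:+.2f}")

The moderation version would score rule proposals and removal reports the same way, acting only on the ones endorsed by users whose viewpoint factors sit on opposite sides.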


The first person rules forever is almost the worst possible model. Assigning modship randomly would be better.


Erm. I can't agree. How is this different from being the first person to invent something, or the first person to organize people around some new idea, or the first person to create a company?


I do miss the slashdot style meta moderation in a lot of places … I like the idea of running stuff past a screening group of users to temper the ability to wildly push around the moderation system by newly created accounts or low engagement users or bots. It seemed to work pretty well. But no such system is perfect so I’m sure there were flaws I just never noticed.


After a subreddit reaches a certain size voting tokens should be used to elect officials to run the subreddit.

Quadratic voting, with your vote budget derived from your activity on the subreddit to mitigate bot voting, et cetera.
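For what it's worth, the mechanics are simple: in quadratic voting, casting v votes for one candidate costs v^2 tokens, so voters can express intensity but dominating a race gets expensive fast. A tiny sketch; the activity-based token budget is a made-up heuristic, not an existing Reddit feature:

    import math

    def token_budget(comments: int, account_age_days: int) -> int:
        """Made-up heuristic: tokens grow sub-linearly with activity, so bot or
        farmed accounts can't buy outsized influence just by posting a lot."""
        return int(10 * math.sqrt(comments + 1)) + min(account_age_days, 365) // 30

    def max_votes_for_one_candidate(tokens: int) -> int:
        """Quadratic voting: v votes cost v**2 tokens, so the most votes a user
        can put behind a single candidate is floor(sqrt(tokens))."""
        return math.isqrt(tokens)

    # Even a very active, long-standing account gets only a modest say in one race.
    budget = token_budget(comments=400, account_age_days=900)
    print(budget, max_votes_for_one_candidate(budget))  # 212 tokens -> 14 votes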


No. Reddit has too many bot accounts. There's no way to enforce one-vote per real life person.


Make them pay a small amount of reddit gold and enforce a minimum time subscribed to the subreddit.


That's just turning votes into a convoluted reddit-gold / money value. If I cared enough, I'd make myself moderator by just buying a ton of reddit gold for myself.

If I were too poor to make a big difference in reddit gold accounts, then I'd be forced to move to a different webpage without any power to make a change on Reddit.

Lose/lose from all sides.


I had enough of R.gold to give out meaningless awards to random people. I never paid R. any money and don't have a popular account by any means.


I see subreddits the same as opensource projects in this regard - users have all the tools and rights to fork. And the mods/maintainers can run the original to the ground if they so wish


Not exactly that easy to tell users that is happening tho


True, but the mandatory scheme to hand over your project to a vote also has consequences. The point is what's better overall, and I don't think there is a clear and universal answer that works in all cases, so we are left with the safer option.


Namespaced subreddits


this model is effectively e-perestroika


It's not absurd, it's reddit's killer feature and the primary reason that reddit is as big as it is.


I've had my fair share of terrible moderators on Reddit. Even on major / big name subreddits.

There's not much to do aside from start a rival subreddit (with a less popular name), or just give up on Reddit entirely and go elsewhere.


> I've had my fair share of terrible moderators on Reddit

Welcome to the internet, the same thing could be said about every web forum since the 90s.

> There's not much to do aside from start a rival subreddit

Yes, that's the system working as intended. You don't have any right to the subreddit any more than the people who got there first. There have been many successful offshoot subreddits that go on to eclipse their predecessors in size if the new subreddit is actually better.


> Welcome to the internet, the same thing could be said about every web forum since the 90s.

Not USENET, not IRC. There's other systems, older than Reddit, that have other rules.


Ah yes, usenet and irc, two platforms where moderators never ban people for capricious reasons.


I'm less interested in bad moderators per se, as much as I am more concerned about "permanently owning a major, important name on Reddit".

On IRC, if the channel goes dead, the channel name is back up for grabs. This means that the admins can kick everyone out of a room and "Reset the ownership" of a name, if necessary. (Ex: #Ubuntu suffers from a glitch and gets taken over by griefers. Admins come in, kick everyone off, and gives official #Ubuntu devs control of the channel again).

Reddit can ban redditors who behave poorly (even moderators). But the name of the channel remains in their control. The subreddit name can be lost forever. I don't think it's a good system of name management.

---------

USENET was way more ad-hoc. The alt.* distribution lists were fully unmoderated and fully ad-hoc from what I remember. I don't think it was even possible to ban anyone, and as such, a lot of those alt.* channels became filled with spam, porn, and other unfavorable material.

The comp.* moderated lists were done better. I don't fully recall the administrative structure. But I know it wasn't done like how Reddit did things.

Reddit's decisions can be discussed in light of its competitors. Be it Discord (today), or USENET / IRC (from the past).


> The subreddit name can be lost forever. I don't think it's a good system of name management.

This is mistaken. Subreddits without active moderators get replaced with a landing page that instructs interested parties to put in an application to take over.


In fairness if a subreddit goes dead (unmoderated) you can request ownership of it from the admins in /r/redditrequest


/r/redditrequest ?


I would argue that “killer feature” killed its potential to be much bigger and more profitable than it is.


Killed its potential? It's in the top 20 biggest sites in the world with no competitors anywhere near it.


It’s not nearly as big or profitable as the other social media companies, by a long shot. I think it has the potential to be though.


How much bigger could it possibly be?


Reddit is one of the smaller social media sites at 430m yearly users.

Yes absurd number but TikTok, FB, Instagram etc all have more.


Counterpoint: 430 million users is actually a lot. I just checked, and most sites don't have that many.


The first result on google (not verified) says it's the fifth most visited website in the US. There is much more room upwards if you are honest.


Sorry, I am not familiar with reddit’s model.

What is “ownership” in this case? Total private tyranny, with delegation of authority? Does it transition to collectively and more democratically managing the channel, as StackExchange does?


There is no collective or democratic management, unless the tyrannical overlords choose to allow it. "Total private tyranny, with delegation of authority" is correct. What it leaves out is that anybody else can make a competing subreddit, dedicated to the same issue, where they have control.

The only thing getting there first does is give you prime real estate in the namespace and the network effects of having an existing community.


Ah, so like Valve Software then.

On a related note, the only reason the United States' citizens directly vote for the President is at the pleasure of the State Legislatures. The Constitution allows them to choose the manner in which electors are elected, and they in theory could elect them directly (by specifying this before election day), usurping the power from the public.

https://www.justsecurity.org/73274/no-state-legislatures-can...


Almost no countries have a directly-elected chief executive. Certainly that's not the case in the bulk of countries with parliamentary systems. For instance, outside of his own district ("riding", I think they call it there), no Canadian even saw Justin Trudeau on the ballot, and the few who did only saw him as a candidate for Member of Parliament, not Prime Minister.

What typically happens in those countries is that the party hacks of whichever party (or coalition of parties) has a majority get together and decide who's going to be the Big Cheese. The average voter has little or nothing to say about it. Voters who are registered members of the majority party or parties may have a (small) degree of influence, but the ones who belong to other parties don't get a say.

Worse, most Canadian parties actually charge you dues to belong to them (to its credit, Trudeau's party is not one of these). No pay, no say.


It's a web forum for shitposting, not a government for a country. Using hyperbolic language like "private tyranny" to talk about unpaid moderators curating a stream of news articles and posts typifies the terminally online madness of the current zeitgeist.


It's true, there is no violence involved. Private tyranny is a technical description.


I don't agree that it's a technical description, I'd say it's a political description meant to inflate the perceived harmfulness of what is a completely typical and mundane event on the internet.


Yes, it's total private tyranny if the person who started the subreddit chooses to keep it that way. But if Reddit (the company) doesn't like the way you're running your sub, they can and will replace the people in charge.


That's kind of bizarre to me. They act like they're firing employees, and replacing someone for that role, when they're just random volunteers to a community set up by a person unaffiliated with Reddit.


The issue here is stuff like brigading and stuff against the terms of service, like straight-up nazism, etc. And even those subs often stay up for a long time.

Brigading is when members of a sub coordinate and go harass people on a different sub, or mass-downvote their posts, etc. The subs that have been closed or taken over were notorious for that.


Sounds like the perfect activity for botswarms controlled by a couple guys


If someone founds a company they get to own it. Why shouldn't someone get to own a community they found?


Nah, it's pretty amazing getting those people to work for free.


The weirder thing is that redditors make it possible to own categories.

The name of the subreddit shouldn't matter much at all. For each category there are several subreddits but people don't actively move to the subreddits with the best moderators.

For aggregators as a whole, it's the same. Places like https://tildes.net/ don't have many visitors even though Reddit's flaws should incentivize significant amounts of users to try other aggregators.


What does it even matter how art is created?

It's the same with banning ChatGPT from StackOverflow: Who cares and who notices? Art is either evoking some feeling or not and it's different for everybody. An answer on SO is either helpful or not. Who cares how it was written? ChatGPT can easily say something more helpful than me, stable diffusion can easily make something I'd rather have on my wall than Da Vinci's Mona Lisa (or anything more along my preferences). Why do we care so much? What's "real art" anyway?

I always like a colleague's mousepad, it said: "Is this art or can we throw this out?" Always makes me smile.


> It's the same with banning ChatGPT from StackOverflow: Who cares and who notices?

People were posting low-quality rambling bullshit, sometimes completely off-base, without even bothering with the most basic of smell-tests. People occasionally post low-quality rambling bullshit too, or things that are off-base, but with ChatGPT you can post 100 answers in an hour.

It's a matter of scale. The ban wasn't pre-emptive, it was reactive in response to a real observed problem with people lazily Ctrl+C/Ctrl+V spamming poor quality nonsense from ChatGPT.

While technically not allowed, you can still use ChatGPT on Stack Overflow: just make sure it's correct, copy-edit things a bit to remove some of the waffling and repetition that ChatGPT tends to generate, and no one will even notice.

I'm less involved in the art community, but I would imagine that most communities are at least in part about people who create things for the joy of creating things, and then share that in the community for the joy of sharing. I don't have anything against AI art, but if lots of people start lazily spamming that kind of stuff then you've kind of lost your community. It's not so much about what is or isn't art, it's about having a community.

That said, this mod is clearly being an ass about it.


Well, the temporary ban at SO is entirely justified. They already saw the influx of massive amounts of new answers, generated by ChatGPT, that are essentially spam and make everybody worse off. Even if 90% of the answers are perfectly correct, that leaves the 10% that are wrong, sometimes in subtle ways. And because the submitter of those answers is only parroting what ChatGPT wrote, they probably won't be available for further discourse, or to amend the answer they submitted.

The biggest problem with ChatGPT is that when it's wrong, it's confidently wrong and cannot quantify its uncertainty in any way (maybe it's too human-like in that respect...) Furthermore, the whole idea of a reputation economy collapses if reputation becomes "too cheap to meter".


> Even if 90% of the answers are perfectly correct, that leaves the 10% that are wrong, sometimes in subtle ways.

I wish 90% of real people answers were correct...


If the goal of a community is to share work, critique each others efforts, and enjoy a hobby together, then someone coming along and pretending that they're doing it while they're not is obviously going to annoy people who are genuine.

I see AI-generated art as being similar to taking performance enhancing drugs in sport or using something that's against the rules in motorsports. Outsiders don't really care because they just see someone performing at the same level of the others by using clever tech, but if you're part of the group then you will care much more.


Ok, yeah, I can imagine if it's about competition, which it is when humans want to create the most beautiful thing purely with their bodies, that it's "unfair". There are (written and unwritten) rules in any competition after all. Imagine a formula 1 driver installing an AI :)


I think it's more than just about competition. If the whole point of a subreddit is to critique each other's work and discuss process etc., someone coming in with AI-generated art isn't going to further those goals, and the risk is that the subreddit gets overrun by AI art and is no longer focused on its original subject matter.


> Imagine a formula 1 driver installing an AI :)

As an unpopular opinion: I don't watch races, but maybe I would watch that. I mean, Roborace is pretty cool (but it's mostly about the tech rather than racing, and IIRC it's all strictly driverless), I wonder how a man-machine pair would do.

Similarly, I find most sports boring and hardly worth watching (unless in a good company, but then it's not really about the sports), but if some day there would be Olympic Games NG+ without any doping or gear restrictions, I'd most likely check those out with great interest. I do realize there would be tons of drawbacks and nuances involved (and more than a lot of people screaming hate how this is wrong to them), but that'd be a) interesting on a personal level - watching some folks who trained for their whole life isn't really resonating with me - I'm happy for them being beyond good but I can't say I really care; and b) would produce enormous effects on society, in terms of medical and technological advances, just like the space races of the past (and hopefully the future) - I'd surely cheer for that.

Then, I'm a proponent of machine assistance in computer games, too. In my opinion, human bodies and minds are inherent sources of unfair advantage and machine assistance - if equally available to everyone - is the greatest equalizer. Though, of course, I acknowledge that a lot of games are designed solely or mostly around imperfections of human performance (mechanical or perception).


Have you tried AI-generated art yet? It's more like protein powder, not steroids.


I've used DALL-E, Midjourney, and Stable Diffusion, and given my total inability to draw anything properly, it's like competing in a 100m sprint in a drag racer.


Well, in this case it's the artists who care, i.e. the people at risk of being "replaced". Yes, ML is not there yet to actually replace artists, but we all see the writing on the wall.

I'm a software dev attempting to learn art. I recently joined Mastodon for this purpose and it's quite the hot topic there. Many, many artists are pissed off about how their work is being used to train models for corporate profit, as well as potentially undermining their living/passion/etc. I've actually seen some cool art in protest of "AI"... usually involving malformed hands, which the artist community has gravitated towards as the representation of current AI capabilities.

I think it matters how it is created, personally. Not because the author of an individual piece of art is important to me, but rather because once AI moves into a problem space and can effectively and accurately "solve" that problem space the displacement of humans will be surreal. How it affects people is the important thing to me. I'll be interested to see how we manage to recognize this reality as AI improves.


Seriously? Shouldn’t the SO example be very obvious?

If you submit an answer yourself and it’s wrong, if someone begins the process of critiquing it or editing it, they can engage in a dialogue with you in order to make this happen. You can explain how you came up with your answer, and they can help you debug your thinking. Seeing this process unfold over a couple comments is often one of the most enlightening things on SO.

How is this supposed to happen if you submit a ChatGPT answer which you have just accepted on blind faith and maybe don’t even understand?


It matters to me because, to me, art is a reflection of emotional processes specific to human beings. There is meaning conveyed by the difficulty of technique, refined over hundreds of hours at great opportunity cost. It says to me that the human being has sacrificed a lot to produce this piece, and so I should give it my attention. For human-made work, complexity is something like proof of work, which is itself proof of conviction. None of this applies to generated work. While I am impressed by the analogous sacrifices of the human inventors of AI, the work produced by the AI itself has not yet surpassed the level of significance of a party trick, even though I would be impressed if a human had produced the work.


> An answer on SO is either helpful or not. Who cares how it was written?

"Who cares if the diagnosis is done by a medical expert or someone pulling out random drugs they tried before? As long as I feel better immediately after taking them, who cares how they were prescribed?"

The case on SO is clearly different, as ChatGPT might answer incorrectly or answer with something containing subtle bugs. There's also a good chance that you won't be immediately able to spot those bugs, as, if you were sufficiently knowledgeable in the topic yourself, you would have most likely not asked that question.

The case for art is a bit different, as there is no technically correct way to do it, but there is still value in the way it is created. Would you think the first picture drawn by your child is worth the same as any other bad painting? Would you agree that a perfect copy of the Mona Lisa has equal value to the actual object? If not, it should be pretty easy to see why a painting generated by an AI is different from one created by a human.


"ChatGPT might answer incorrectly or answer with something containing subtle bugs. There's also a good chance that you won't be immediately able to spot those bugs, as, if you were sufficiently knowledgeable in the topic yourself, you would have most likely not asked that question."

Playing devil's advocate here, but this also clearly applies to human answers.


It's a matter of scale and frequency; people were posting a significant amount of low-quality nonsense generated by ChatGPT. The average quality of a ChatGPT answer was (is) significantly lower than the average human answer, and there were thousands of those answers.

As a test I ran some questions through ChatGPT a few weeks ago; both popular ones and just some random ones from the homepage (including some not-very-good questions). In only one case would I say that it was "good enough" to post. All the rest ranged from "mostly correct but with huge omissions" to "used 5 paragraphs to explain what could be one or two sentences" to "this is not even remotely correct in any way, but at a glance it actually looks kinda correct". The "looks kinda correct" can be pretty misleading, because there have been a few answers where I went "wait, is this actually a thing? I didn't know about that!" and when investigating further it turned out it wasn't a thing at all and ChatGPT was just trolling me.


SO's answer quality was never perfect, but there are still a few differences:

- Humans actually try to answer the question at hand, whereas ChatGPT is only auto-completing text. This might lead to similar results in a lot of cases, but the goal of ChatGPT is not to produce correct SO answers.

- Related, humans will (usually) back off or correct answers if errors are pointed out to them. ChatGPT (at the moment) doesn't.

- Humans will (usually) not attempt to answer questions they don't feel like they are qualified to answer. ChatGPT will.

I'm not saying that SO is/was perfect as-is, but ChatGPT is - in my opinion - overall not likely to improve SO's quality as of now. Since it is currently banned on SO, it seems like this is not only my opinion.


It shouldn't matter how it's created. But some artists are pissed off that AI can generate art that people like, so they want to ban it.

I don't think the average person cares at all.


Some artists are pissed off that the AI can only generate art because it stole the reference pictures they spent countless hours creating.


We should at least have transparency in the origin of the work. I care about the human experience, not the AI experience.


This is /r/art by the way and based on the blurb and about section it seems less about art and more about fulfilling the deranged power fantasies of its moderators. What an unfriendly place.

Nice to hear the artist has gotten a more positive response in /r/drawing.


I'm not too surprised, /r/art has been famous for completely arbitrary rules for years by now.


/r/art mods are swivel-eyed loons


Those hands have the right number of fingers.

So either 1) It's not AI-generated art, or 2) It is AI-generated art and the artist is a master at prompting.

Either way they should be celebrated.


I've been keeping up with Stable Diffusion and all the tools around that for months now and it wasn't until a few weeks ago that I learned that you can just tell it to draw things accurately if you want to avoid weirdness.

For example, if I include "anatomically correct fingers" it significantly decreases the number of images with wildly creative ideas for how human fingers should be drawn.

Negative prompting works too. "deformed fingers" or "inaccurately drawn anatomy" can go a long way.
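For anyone who'd rather script this than click through a UI, here's a minimal sketch using the Hugging Face diffusers library; the model id and prompt text are just illustrative placeholders, not a recommendation:

    # pip install diffusers transformers accelerate
    import torch
    from diffusers import StableDiffusionPipeline
    # Any Stable Diffusion checkpoint works here; this one is just an example.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    prompt = "portrait of a pianist, detailed hands, anatomically correct fingers"
    negative = "deformed fingers, extra fingers, inaccurately drawn anatomy"
    # negative_prompt steers sampling away from the listed concepts,
    # which is the negative-prompting trick described above.
    image = pipe(prompt, negative_prompt=negative, num_inference_steps=30).images[0]
    image.save("pianist.png")

Most SD front ends expose the same negative-prompt field, so the trick isn't tied to any particular tool.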


This person knows what they are talking about. Although I have been pretty successful in obtaining realistic hands by prompting for photographs and including the make of a camera, lens and film [1][2]. Somehow this seems to narrow down the model to mostly produce realistic output. Still, some variants will occasionally feature Chernobyl levels of fingers.

[1] https://cdn.midjourney.com/6f52a6e9-b3f2-4830-81b1-84c8f8ca4...

[2] https://cdn.midjourney.com/361e143f-5121-4bff-ada9-069c2e400...


And that's the argument I've been making. Once you can't tell the difference between AI-made art and human-made art, the demand for human-made art will dramatically decrease, especially in the commercial areas.

If it takes a human a month to paint something beautiful, and 1 minute for AI, it's really hard to compete with AI.

The best we have is Midjourney V4, and it's getting quite close.


This is why I believe we’ll soon see a huge increase in the popularity of ‘physical’ art. Theatre, sculpture, dance… in a world overflowing with computer made images and sounds, things which can only be human-made will be all the more special.


There are AI approaches that generate images one brush stroke at a time. I can imagine going from that to a robot arm laying strokes down on a physical medium, if it isn't already a widespread thing.

Disney uses a robot Spider-Man stunt double so it can launch it into the air to do aerial acrobatics, edging into the dance example.


You're assuming robots won't be able to sculpt, dance, or perform theater.


We've had painting robots for quite a long time now.


I'm not saying I agree with the actions of the mods, but there is a grain of truth in the "way of the world" remark.

Human artists who are just highly skilled executors of bad taste are going to be decimated by AI.


Which is why, despite my distaste for DALL-E and its ilk, I’m not at all moved by the anguish over it rendering artists obsolete.

Stuff like this never had any artistic value in the first place, so it makes perfect sense to me that a bot would create it rather than a person.


The hilarious irony is that the AI is quite possibly copying the author's older work, if it was published on an image-sharing site like Reddit or DeviantArt.

Lawsuits need to destroy these models built on work stolen from the public.


While we're at it we should publicly flog anyone who's ever done any art in the style of anyone else before them, because they also stole ideas from the public.


Why is the most common counter-argument that we should treat AI models the same way we treat human learning? The two are not the same - neither mechanically, nor morally.


I mean, if you fully clone the style of another artist and promote it as your own you will get derided by most art communities.


I guess we’re back to crude cave paintings. Tear down all the statues and burn the museums.


> Tear down all the statues and burn the museums.

No need. Soon there will be so much AI generated "art" that human-made art will largely be pushed out of popular consciousness.


Looking at recent online discussion, this is an easy conclusion to come to, but it's really just a huge bubble.

People won't stop creating just because there's a cheaper alternative.

The analogy with manual labor automation simply doesn't hold, first because people actually like creating stuff, but perhaps most importantly because there is no supply problem to solve.

Taking the analogy with automation in the car industry, automation drastically lowered prices, making each individual copy of a product accessible to anyone.

It costs virtually $0 to copy a movie, a picture, or a song and display it on every device on the planet.

The cost of creating an audiovisual product is amortized across all users, dividing the cost by the number of users instead of multiplying it.

Sure, one day we may have models capable of generating full 3D animated movies from just a prompt, paying only for the electricity used to run the GPUs, but how much cheaper than $9/month can you get when competing with a traditional streaming subscription full of human-made content?

Assuming the world even comes to the point where AI streaming services contain virtually infinite amounts of meaningless autogenerated content, and assuming people actually like that content (which may be the case for the chunk of people already watching statistics-driven garbage on Netflix), nothing will stop the creatives fired from previously-human studios from founding their own studios and producing hand-crafted, human works at the same price as always ($9/month).

An analogy can be made with handcrafted cars: Lamborghinis may cost millions. The difference is that with audiovisual content, a work can be shared by (and its price amortized across) billions.


We are already seeing something similar in the western animation industry vs the anime industry.

Given the alternative of cheaper, semi-automated 2D animation, anime studios prefer handcrafting every single frame instead, because some people really like drawing, and a large number of people also like watching handcrafted, high-quality animation with wonderful stories and visuals.


Hate to break it to you … but the "industry" that is the "anime industry" has been quietly adopting the latest technology for years. There are notable exceptions like Ghibli, but for the most part Japanese animation studios have been pioneering the state of the art in cel shading, lighting and post-processing, adjustable "cartoon physics" style automated physics tweening, and basically everything else in 3D computer graphics that lets them make the most of their artistic efforts. They've still got a vibrant storyboarding process, helped by the obvious intersection of anime with the vibrant drawing culture that produces the manga so many shows are based on… but beyond the storyboards they have been relentlessly trying to eliminate the need to tediously draw anything unnecessarily. There are entire 2D (not 3D!) animation tools serving the professional 2D "cartoon" market that have widespread adoption. The 3D market is no different; the first cel-shaded graphics code I read was all written with Japanese comments. "Anime studios" (except Ghibli) don't "prefer handcrafting every single frame"; they use whatever tools are available and usable by them that are capable of executing the artistic vision of the art director of that project/show/movie.

They care about the outcome, not the process. They care that it looks how they want it to look, that it conveys what they want it to convey. This is artistic pragmatism at its finest. They will composite hand-drawn graphics/animation, 2D computer graphics/animation, and 3D computer graphics/animation… it's the final result that matters. I absolutely expect that once the tooling gets better at frame-to-frame consistency and is good enough, AI/ML-based animation tools will be widely adopted by the "anime industry", and they will be praised because they will do things like let small manga creators produce short OVAs, movies, or YouTube-style content directly, whenever they want, without having to spend ages working with an animation studio. The fans will love it because they love the content, and for the most part that love does not depend on how the content was made.


Hmm, Japanese studios may be pioneering and making use of time-saving 3D/2D hybrid techniques like cel shading, but looking at the modern anime landscape, I see hand-drawn animation in the majority of cases.

Not that I have anything against cel-shaded 3D, it looks very nice when done properly, but I still see no major shift away from hand-drawn animation in the anime industry.


I'm not trying to be glib, but I wonder if people had similar speculations in the past when photography was first invented. I could certainly see people thinking drawing and painting were doomed because a photograph can produce an exact capture in a fraction of the time it would take an artist to merely depict the likeness of the subject.


This stupid argument gets thrown around so much it's become really annoying. What really matters in art is the process, not the result; intent, not content.

A model randomly remixing human works to reach an approximated result that matches a few words inserted by a human cannot be put on the same plane as a human expressing a concept or inner feelings using knowledge he acquired by looking at other works.

When I look at art (be it paintings, stories, animation, or any other medium) I always think about the people behind it, what stories are they trying to tell, what life lessons and ideals are they trying to bring across the screen.

A text prompt absolutely cannot convey the same amount of meaning as a work of art that takes actual time to make and refine, and however amazing the output may look, it is inherently meaningless, with no human intent behind it to make something great and original.


> what really matters in art is the process

You don't get to define what is art. Art can and does move people who have absolutely no idea of what the process to create it was.

You have your way to experience art, it isn't the only way. In fact you have a limited ability to learn about the process, how much of it are you making up in your head?


When the flush of a newborn sun fell first on Eden's green and gold,

Our father Adam sat under the Tree and scratched with a stick in the mold;

And the first rude sketch that the world had seen was joy to his mighty heart,

Till the Devil whispered behind the leaves: "It's pretty, but is it Art?"

- Rudyard Kipling


"Hey, anything's art if you do it in a black sweater". - Mr. Wiggles

This said, according to the renowned authorities on the matter, it's definitely about the process. :-)


I can very well get to define what art means to me.

Process is actually very easy to infer if the art is made by a human, since the emotions that are being conveyed by the humans behind the screen are displayed directly on screen.

I don't deny that I might get moved by a piece of AI art, but even so, my first thought goes to what human art was used to generate the specific piece that inspired this emotion.

An AI can't feel emotions, but the humans that made the art powering its model can, and that's the only way AI art can make you feel anything, when it isn't just a garbled mess.

As usual, it's always about the humans behind the screen, not about what's on the screen itself. The people working on these models fail to understand that this is what people are really looking for: human interaction via creativity, not an instant "product" to be displayed on screen. And that's why AI art will always be relegated to the niche of 4channers generating AI porn and enthusiast devs generating nightmare-fuel logos for their projects.

For one, I would absolutely refuse to consume any AI-generated content, and I'm happy in the knowledge that I'm not the only one taking this stance.

I also trust that legislation will eventually catch up with the insult to human creativity that AI art models represent.


> I don't deny that I might get moved by a piece of AI art, but even so, my first thought goes to what human art was used to generated this specific piece that inspired this emotion.

Yes, exactly, behind AI art there are countless human artists and their experiences. You should not dismiss AI so easily. What it offers you is a new way to see your favourite artists, to interact with their works, to continue dreaming the same dream, and add your own story to their own.

AI art is a deeper way to engage with art than simply viewing. For the first time we can ask questions; we can explore vast landscapes of style and content. Only imagination has similar characteristics: goal-directedness, freedom of movement, extreme creativity, one-time use (imagine and throw away most of your ideas), and a space of privacy and freedom. Assuming you run your own models, AI art is more private than using the internet.


Sure, it's a fun toy to play with, I don't deny it.

Like a kaleidoscope, it can be fun to recognize familiar patterns and beautiful colors, but it's nothing more than that: a toy.

What I look for in art is intent and human passion; for me, the excitement of suddenly getting a nice number out of a PRNG or a cool story out of GPT is very different from experiencing a story passionately crafted by a human.

When I read an amazing story like the Iron Widow, I feel respect towards the skills of the author, excitement for her future plans for the sequel; I follow the author on twitter and maybe interact a bit, because I appreciate what that human made me feel.

I can't do the same for a GPT story, because even if it was generated from multiple human stories, there is no actual human intent behind it; no one to thank, and no one to blame in case of messups: it feels like an empty shell.


GPT-3 is a synthesis of written human culture. Whatever it makes, it comes from us. I see it like a magical mirror into the soul of humanity. When it makes a good story I am very happy because it means I found a path into its deeper levels.

I'm kind of disappointed with the strict training they put on ChatGPT; it always follows the same canned formula and almost always uses phrases like "in conclusion" and "however, ...". The stories are all 5-6 sentences and have no details. The original Davinci model from 2 years ago was more creative, but harder to direct.


> I see it like a magical mirror into the soul of humanity

Cool, next time I read a GPT story with an ending I don't like I'll know to blame the soul of humanity :P

> When it makes a good story I am very happy because it means I found a path into its deeper levels.

I understand the excitement, but again, it's the same excitement of say, finding and correctly executing a cool chess move.

It's not the same excitement of a book ending on an epic cliffhanger, wondering how and if the author will manage to save our brave heroes.


> I can very well get to define what art means to me.

Never meant to say otherwise.

> Process is actually very easy to infer if the art is made by a human, since the emotions that are being conveyed by the humans behind the screen are displayed directly on screen.

I couldn't disagree more; unless you have some method, some mechanism I know not of, you are only guessing. It is good guessing, because humans are more alike than not, but still only guessing.

Often not even the artist himself knows the process.

I have grown to abhor legislation; if you want to aggress against the users of AI tools, then do it yourself without the help of the state.


By inferring process I do not mean guessing exactly what the person was thinking while making that piece of art, but inferring what concepts and feelings they are trying to transmit with it; good art and good stories are good precisely when their creator is good at communicating feelings and concepts.

For example, looking at some drawn art, I can pinpoint exactly what emotions the artist was trying to transmit, not necessarily by the expression of the characters but even just by how the piece itself is drawn: that's the whole point of human artistic intent.

Another reason why human intent is important for me, from my comments below: when I read an amazing story like the Iron Widow, I feel respect towards the skills of the author, excitement for her future plans for the sequel; I follow the author on twitter and maybe interact a bit, because I appreciate what that human made me feel.

This is all part of the experience for me: I can't do the same for a GPT story, because even if it was generated from multiple human stories, there is no actual human intent behind it; no one to thank, and no one to blame in case of messups: it feels like an empty shell.

It's this true emptiness that quickly made me stop playing around with GPT when I first discovered it: anything made with it feels like an empty shell to me.

Regarding legislation, I see I have struck some chords here ;). Nothing personal, but I really think the people supporting AI models are naive in thinking any democratic government will be willing to let entire creative industries suffocate (not to mention the potential degradation of society as a whole) just because AI models are very cool technology and human creativity is supposedly irrelevant. Just look at the anti-delocalization trends in politics and legislation anywhere in the world, and draw some conclusions yourself.

Not all people are blinded by the awesomeness of a new technology, and even those who created it will realize that they too will be eventually replaced by it.


> For example, looking at some drawn art, I can pinpoint exactly what emotions the artist was trying to transmit

You can't test that ability, and if you can't test it, how are you so sure of it? You're guessing; you're making up a narrative in your head; you cannot know if it matches reality.

You care about the author, great. Some don't give a rat's ass about the author. Are those people unable to appreciate art? If they are capable, why do you insist that knowing intent is necessary for art?

> Not all people are blinded by the awesomeness of a new technology

Ah, well we are lucky that you, the one that sees, is here to tell us the TRUTH.

> Regarding legislation, I see I have struck some chords here

Yes, of course, it is violence. If you support legislation you support violence. There is no effectual legislation without violence. So if you think AI should be legislated, get up and go punch some researchers yourself. Don't send others to do the dirty work for you.


>You can't test that ability and if you can't test it how are you so sure of it?

This isn't some kind of superpower; anyone can look at Munch's Scream and recognize a feeling of anxiety.

My example was a very specific one, to provide a direct example of how, in some specific pieces of human artwork, the emotions an artist is trying to transmit will come through the screen.

Not all drawn art speaks with the same clarity as Munch's Scream, but when it does, it's impressive.

Other mediums like storytelling also make a lot more explicit the emotions the authors are trying to evoke in us.

> You care about the author, great. Some don't give a rat's ass about the author. Are those people unable to appreciate art? If they are capable why do you insist on knowing intent being necessary for art?

I don't quite understand your points here; I actually don't think you're trying to make any. But I think the proper answer would be to treat the result of human creativity not as a product to consume and forget, but as an experience of interaction with a set of passionate human beings.

This is how I experience art, and I'm sure a lot of other people experience it the same way: not as a product, but as art.

At the very least, I know for a fact that every member of my family also feels the same, and that gives me confidence that the majority outside of the HN/tech bubble also feels the same.

> Ah, well we are lucky that you, the one that sees, is here to tell us the TRUTH.

I stand by my words, I believe they're an accurate representation of reality.

People can be blinded by the technical awesomeness of something they've created, without thinking of the potentially disastrous consequences for society: it has happened many times throughout history (especially in war-related scenarios).

> Yes, of course, it is violence

I fail to see how majority-driven legislative action is an act of violence.

Mine was an (I believe correct) prediction of how things will go in a normal society that values human creativity: legislative action will rightfully limit the competitiveness of AI in certain creative sectors, also to prevent an overall societal degradation.

I really don't want to punch you or any AI researcher as it would be quite pointless, I'd much rather vote for politicians against creative AI, as that is the only proper way to trigger change in a democratic society.

From your messages, I infer you do not come from a truly democratic country, but rather a country where the government is just a corrupt and violent mouthpiece for corporations and criminals. I understand how this might shape your political and world view, but please also understand that most developed countries apart from the US (i.e. pretty much just the EU) are multi-party democracies, with actually democratic elections and democratically elected governments that do a pretty good job of doing the right thing for the people in terms of legislation and market regulation. This is why I'm confident that at least the EU will make the sensible choice in regards to AI (once it catches up with the times, hopefully soon enough).

That's assuming AI even comes close to being a threat to the human creative industry, which isn't a given due to the lack of a supply problem to solve (unlike automation in the physical world, https://news.ycombinator.com/item?id=34277750).


> I really don't want to punch you or any AI researcher as it would be quite pointless

But you will HAVE TO. There is no other way to enforce legislation; the threat of violence is necessary. If there is no violence, refusal to comply with your legislation follows.

> From your messages, I infer you do not come from a truly democratic country[...]

You are completely and entirely incorrect. I am from the EU and I wish my country would stop abiding by the inane regulations set by the EU that you seem to love.

> At the very least, I know for a fact that every member of my family also feels the same, and that gives me confidence that the majority outside of the HN/tech bubble also feels the same.

> I don't quite understand your points here,

> This is how I experience art, and I'm sure a lot of other people experience it the same way: not as a product, but as art.

My point is you ARE defining what art is; you are saying it is not art unless the viewer goes through the same experience as you. Just... why.


>There is no other way to enforce legislation, threat of violence is necessary.

Well, multi-million dollar fines are also very effective ;)

> I wish my country would stop abiding by the inane regulations

I'm also from the EU, and my wish is actually the opposite: I believe a more federated EU with less nationalism will only do good for every EU country.

GDPR, USB-C standardization, and soon right-to-repair (mandatory removable batteries :3) regulation are proof of how effective good legislation can be when enacted by a powerful entity like the EU.

> Just... why.

Because I feel this way, I'm not the only one to feel this way, and I strongly believe my view is morally correct :)


> Well, multi-million dollar fines are also very effective ;)

You cannot enforce fines without violence or control of the banks, which you would get through violence.

Most people think their view is morally correct, and many think that gives them the right to shove it down others' throats. It doesn't have to come to blows; let people be.


> what really matters in art is the process, not the result; intent, not content

In the fine art world, for now, that's what has driven value. And when I am in that world to move cash around reliably, that's what I consider; I almost don't consider the aesthetics at all.

Outside of the fine art world, there basically is no art market, and it is purely aesthetics. The process and intent are irrelevant; only the result and content matter. I just want cool-looking things.

Since it is not possible for you to invalidate my view, it's also what really matters.


Sure, different people like different things.

Netflix turns the billions it earns from really good productions like Arcane into hundreds of garbage statistics-driven shows canceled after their first seasons, because some people like watching meaningful, intent-filled stories made by passionate humans, and others just want anything to play in the background while doing other stuff.

And this is precisely why AI art will not be the end of human creativity: humans will keep creating, and other people will like watching the passionate work of other humans, not randomly generated garbage.


> And this is precisely why AI art will not be the end of human creativity: humans will keep creating, and other people will like watching the passionate work of other humans, not randomly generated garbage.

The problem is going to be if you hollow out the artist ecosystem.

If all low-skill SWE work were to be done by AI, and the only humans in the loop were the ones at the top of their field (The twenty-years-of-experience folks), there would be nobody to replace them after they retire. Because you aren't going to get a lot of new twenty-years-of-experience people, when there are no jobs for zero-to-nineteen-years-of-experience people.

If you're interested in how this has worked out in the physical product space, you can always look at the American rust belt. As it turns out, when you offshore all the low-level, low-margin, low-skill work, you lose a lot of the high-level expertise in the industry.


I see the logic; however, I don't think an analogy with physical-product industries applies here (https://news.ycombinator.com/item?id=34277750).

Even if AI were to disrupt entire industries the way delocalization did, until local lawmakers catch up, I think creatives will keep creating regardless: they will organize, form new streaming services, and keep producing, because a creative job is something you want to do, not just have to do, unlike an assembly-line job.

I still think passion will make a difference in a post-AI world, even in the worst-case scenario of no legislative action against AI (which is unlikely, looking even just at the current anti-delocalization movement in global politics and lawmaking).

More exclusively commercial and product-driven creative jobs like programming may be impacted more by AI competition than purely creative jobs, though (even so, I still would've gotten into programming as a kid even with AI competition, simply because it's fun to come up with solutions to problems).


> what really matters in art is the process

AI art has its own process. It can be quite therapeutic.


This is a common misconception. AI is not copying anything; it is studying the images in a similar way to how humans do. The entire Stable Diffusion model is around 6 GB, and with pruning it goes down to 3 GB, while the training set is in the terabytes. So it is common sense that no copying is done.

I think artists need to be fair about AI: is there any artist who created their style without ever studying other artists? That is highly improbable, because humans need to observe to create art. There is even a saying: "Good artists copy; great artists steal."


Still, people did not consent to their art being trained on.

Just as it is a bad idea to train GitHub Copilot on copyrighted code, and just as the company behind Stable Diffusion promised not to use copyrighted music for training (because they're scared of the music industry), copyrighted art should not be used in training sets without permission.

This is going to cause a lawsuit somewhere down the line; even if each image only contributes a few bits, signatures are still visible, and that could make an argument in a court case.

It would be fair to everyone to only use public domain material for training sets.


Do people need to consent for other artists to use their work as references when painting? That seems pretty analogous to training an AI on a corpus of artwork.


It has generally not gone over well when it's discovered that an artist has been copying compositions, poses, etc. from pre-existing work of the same type. Manga and comic artists, for example, have had their careers derailed over it.

Stylistic mimicry is more murky, partially because humans aren't as good at cloning styles — inevitably the clone will take on some influence from the one doing the copying, which isn't true of AI. If you ask the AI to draw X in the style of Y artist, it's going to be a dead-on copy of Y artist's style.


Copying entire compositions is different from using references. In case you're unfamiliar, artists usually have a few images off to the side of their workpiece to look at for inspiration and to reference colors and image elements.


That's the thing though: it's not uncommon for ML-generated images to have major elements that are almost 1:1 copies of existing pieces, especially if the prompt includes an artist's name or a style that's closely tied to a particular artist, which goes well beyond an artist using existing pieces for reference/inspiration.


Why do we make analogies to people? Machines have no rights of expression or inherent freedoms. There is no "learning"; we're not "teaching" the machine anything. It boils down to heuristics and statistics. Imagine we're in the 1930s with no computers: if I were to study every Agatha Christie book and write my own based on statistical likelihoods of words, characters, plot elements etc., I would be seen as a copy-cat hack-fraud author, or worse. If I were to take inspiration from the crime/thriller/detective genre and write my own story in my own universe, then I would simply be an author within the same genre.

ChatGPT/"ML" copy the fine details. We're getting artists signatures turn up in generated work... That's not inspiration, and I would argue not even transformative.


That is because the AI doesn't have enough contextual knowledge to distinguish prominent signatures from stylistic flourishes or things being represented in the painting. The generated signatures themselves are usually very different, though in some cases they can come very close to the original artist's signature when properly prompted, for artists with "big" signatures. Better image-part tagging will solve this.


If you want to get pedantic, the human brain ultimately functions off heuristics and inputs. Again, references are not copied; they are used to inspire and guide new artwork. Even artists producing totally original artwork use references.


The current ML approaches are not "studying", and have no thought process whatsoever, let alone "in a similar way that humans do". They build models that allow them to reproduce existing works in whole or in part—usually in many parts, put together to form a new work formed wholly out of those elements.

Humans learn actual techniques; they understand what the elements in their art actually mean; when they copy, they do so with intention (whether malicious or not). ML approaches to content creation are incapable of intention, because intention is the product of a conscious mind, and regardless of how similar some of the data structures involved in them are to certain models of the human brain, not one of the existing ML projects even remotely approaches anything we could term consciousness. (Nor is that even their purpose.)


Trained on 5 billion images to create a 6 GB model. Every image contributed no more than a few bits of information.


Billion and giga are both 10^9, so that's about 6/5 of a byte, i.e. roughly 1.2 bytes or 10 bits per image, and somehow that's enough to regurgitate examples from the training set [0]. Oh, how convincing the "AI is just learning as humans do" arguments are.

0: https://twitter.com/kortizart/status/1588915427018559490
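Spelled out, the back-of-the-envelope math is just (numbers taken from the thread, not exact figures):

    # Rough information budget per training image: a ~6 GB Stable Diffusion
    # checkpoint trained on ~5 billion LAION images.
    model_bytes = 6e9
    training_images = 5e9
    bytes_per_image = model_bytes / training_images   # 1.2 bytes
    bits_per_image = bytes_per_image * 8              # 9.6, i.e. roughly 10 bits
    print(bytes_per_image, bits_per_image)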


Maybe this incredibly famous photo from a National Geographic cover is overrepresented in the training set?


Well, TIL humans are incapable of copying another piece of art even when asked to do so.


It's never too late to learn to draw!


I could learn to mine pigments and create paints too but we have better options now


Pick up a pencil and start by drawing a circle, a really circular circle... A caveman in front of a VT100 is still a caveman. A caveman with a charred willow branch in hand is something a bit more than that.


Your statement implies the source dataset's file sizes represent the amount of visual information in them. This is almost certainly not the case.


Once again, artists trying to stake a claim of personal ownership of a section of human expression standing in the way of technological and cultural progress.


I would argue we're on the precipice of a cultural regress - where "content" on the internet goes from 95% junk to 99.5% junk since junk is getting so much easier to generate. Consider "Elsa-gate" on YouTube a few years ago for an example of this in action.


I don't think limiting and gate-keeping are good for cultural progress. The way to deal with the amount of content is building better systems (not necessarily technological in nature) for curation. Building artificial barriers to what and how things can be created will only harm "cultural progress" (put in quotes because I am not sure I like all the implications of the concept).


I am horrified by the number of people on the internet who place so little value on creative intent, the most important quality a human being has to offer society.


My comment is not an expression of a low valuation of creative intent. It is an expression of my deep concern for creative intent due to the limits placed upon it by a rigid system of ownership.


Lots of people in here are making arguments about the fact that the way these image models learn is roughly analogous to how people learn, the fact that these relatively tiny models simply don't have enough bits to grok anything except the most popular (and therefore recurrent in the training data) images, etc.

What about the fact that these models aren't just randomly spitting out and taking credit for random images? This seems the most salient point to me: if I used a paintbrush to create a copyright-violating clone of some notable artwork or IP and tried to pass it off as my own, I'd be breaking the law. We wouldn't try to ban paint and canvas and the human arm because they have the potential to create something that infringes on copyright; we'd enforce against the actual act.

If these models make this kind of infringement easy, then they are bad products and their users will run the risk of going to court. The whole thing seems like a non-issue.


Stable Diffusion 2.0 has already made changes to make infringement harder by removing names of artists and celebrities from the dataset, which is probably the right move -- now if you want to emulate e.g. Greg Rutkowski you actually have to figure out how to describe his style in the prompt, teaching you a little about what makes his art special and making it easier to create your own style along the way.


Celebrities and artists were not removed from the data used to train SD 2.0.

2.0 uses a new text encoder trained from scratch and it just did not capture the same famous names as the OpenAI CLIP used in 1.x.


Or get a bunch of Greg Rutkowski paintings and perform textual inversion to get an embedding of his style, which is what people are actually doing, and we're back to square one.
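For a sense of how low that bar is, this is roughly what using such an embedding looks like with the diffusers library; the repo name and placeholder token below are made up for illustration, and load_textual_inversion assumes a reasonably recent diffusers release:

    # pip install diffusers transformers accelerate
    import torch
    from diffusers import StableDiffusionPipeline
    pipe = StableDiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
    ).to("cuda")
    # Load a textual-inversion embedding someone trained on a handful of an
    # artist's images; "<that-style>" is whatever token the embedding maps to.
    pipe.load_textual_inversion("someone/style-embedding", token="<that-style>")
    image = pipe("castle on a cliff at dawn, in the style of <that-style>").images[0]
    image.save("castle.png")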


If the result of this is an image that infringes on his copyright (I'd imagine it would have to be a pretty 1:1 copy, since his style appears to be fairly generic), then the burden should still be on the person who produced the image and published it.


It is curious how different the sentiment is in this thread compared to the other thread currently on the front page regarding GitHub Copilot. https://news.ycombinator.com/item?id=34274326


It's just a sidenote, but HN is not a single person and doesn't hold a singular opinion. We have both sides argued for in both threads. Not that I disagree that there might also be some bias going on ("our" vs. "their" field), but some inconsistency is normal.


I very much know that, but it is still interesting to see the differences in how the threads are. Even if it is not the same people arguing in both.


I think the big difference is this is a tech forum. Of course it's ok for tech to put artists out of work, but not our precious selves.


I think the big difference is that artists seem to freak out about AI because it will supposedly replace them, while programmers freak out because it's allegedly violating their copyrights. Obviously this is a generalization and not true in all cases, but those really are different concerns.


Artists are freaking out out of fear that AI will replace them (it has already begun doing so), because their copyrights are being violated, and out of general disgust at the quality of the content.

Programmers are only freaking out about copyright because they naively believe AI can never automate them out of a job, whereas everyone else (artists included) deserves what's coming to them.

It's the same concern in both cases, but different interpretations of the stakes involved.


> Programmers are only freaking out about copyright because they naively believe AI can never automate them out of a job

That's the crazy thing: programmers have already automated their job away, but no one has realized it yet! Behind our backs they just input a description of what they want into their "compilers" and the friggin' thing spits out a ready-to-go program that does exactly what they asked! The prompt usually needs a bit of fiddling to make it work correctly, but anyone could do that, right?


"A good artist copy, a better artist steal". AI is influenced by artists work like artists are influenced by artists.


Nothing is being stolen, get a grip.


Nothing is being copied. If you understood how these AI models work, you'd realize that's impossible considering the model is a few GB. How could every piece of artwork be in there? It's not a database lookup.


But can you seriously deny that everything the model generates is a derivative of the inputs? And if it's a derived work, it might not constitute "fair use" of the input materials. That depends on... well, I'm not a copyright lawyer, so I won't attempt to specify, but I don't think we have any clear answers yet.


Everything is a derivative of everything. What is art school? Students derive their skills from other well-known artists.


That's just Reddit. I don't think we should take any Reddit community seriously, because they are run by Reddit mods, who have proven countless times to be absolutely terrible at their job.


The problem with Reddit, SO, and pretty much any system with moderators is that a heavy-handed moderator can claim that he's "doing something", while a lenient moderator who only deletes genuinely egregious content looks like he's "not doing anything".

The same is true of other institutions, such as congresses and parliaments. Note that politicians run on the basis of what new laws they've gotten passed far more often than what laws they've blocked.

There's something to be said for the idea of a branch of government whose function is limited to repealing laws.


Wow, that's like banning a photorealist because their work looks too much like a photograph.


Too much effort for the article to mention r/art was the subreddit in question?


Err, banned from a subreddit by a mod who didn’t believe him.

Pretty sure that happens to about 10% of Reddit users every year.


AI is replacing artists like CAD replaced architects.


Or like InDesign replaced people who set type with hot lead.

https://en.wikipedia.org/wiki/Hot_metal_typesetting#/media/F...


Even if true, AI is just another tool, and the combined work of artists and AI can produce greater art than the artist alone. For example, the natural world has a lot of repeated elements, and hand-painting each one detracts from time that could be spent on more expressive aspects of the work. Airplanes let us fly further and faster than birds; should that be avoided, or considered not to be real flight, just because we are getting technological assistance?


> AI can produce greater art than the artist alone..

Citation needed.

> natural world has a lot of repeated elements and hand painting each one detracts from time that could be spent on more expressive aspects of the work.

Can't agree. Using these repeated elements, an artist can add an additional level of impression for the viewer, while AI will probably just use a random distribution in this situation.


No citation needed since he clearly said:

> combined work of artists and AI can produce greater art than the artist alone


Fundamental misunderstanding of what art is.


On the topic of AI vs "real" art, I visited /r/artcommissions. I was surprised at how little people are asking; many are in the range of $5-$50 for original work.


Supply and demand.


Quite. Then I realized that most art has always been cheap (c.f. "starving artist"), so maybe AI doesn't make _that_ much difference in the end?


It’ll probably allow more people to be starving artists.


The difference has maybe always been in the interaction between the artist and the market. Even very competent and painstaking replicas of "great works" that are indistinguishable to most command a small fraction of the price of the original.


Notice how this is an unmanaged misuse of power. The artist has no legal tools to defend against the actions of this moderator.

An internet court is needed for these cases, like courts in the real world, and supported by them. And an internet police, which makes sure the court rulings are obeyed. Also supported by real world police, if necessary.


Safari is telling me the SSL cert for this site is not trusted. Perhaps only appropriate for "the tech deviant dot com".


Seems like a normal Let's Encrypt cert to me.


Seems like the parent is having their TLS connection to thetechdeviant.com intercepted, and the browser is having none of that.
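One way to check what certificate you're actually being served (and whether a proxy is in the middle) is to pull it down yourself and look at the issuer. A quick sketch in Python:

    import socket, ssl
    host = "thetechdeviant.com"
    ctx = ssl.create_default_context()
    # Don't abort on an untrusted chain; the point is to inspect whatever
    # certificate is actually presented, trusted or not.
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    # Decode the printed PEM (e.g. `openssl x509 -noout -issuer`): a normal
    # Let's Encrypt chain shows an ISRG/R3-style issuer, while an interception
    # proxy shows a corporate or vendor CA instead.
    print(ssl.DER_cert_to_PEM_cert(der))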


All digital artists need to do to prove that their art is not AI-generated is record their screen and show a start-to-finish recording of themselves painting the digital image from scratch.

This is not new; it is similar to speed painting, and all these prompters using Stable Diffusion cannot do such a thing.

Problem solved and job done.


The artist in question offered to show the raw Photoshop files and the progression of work done on the piece to prove it was not AI-generated, but the mod did not take him up on that offer and simply banned him.


Wait until AI can generate such a video in 1min?


Reminds me of the type of stuff r/Seattle mod u/careless would do

https://www.seattleweekly.com/news/seattles-reddit-community...



If AI content is already this indistinguishable from normal content, how much normal content is "real content" anymore?

We have no metric or insight into this. The percentage will keep increasing as it's cheap and very economically beneficial for companies to use.


Surprised no one is talking about how this is terrible art and derivative of GoT and LoTR.


Because the art is actually good and better than most hobbyists in terms of technical skill in digital art.


I’m glad. If your “art” can be mistaken for AI art, it’s likely just content, not art. The example picture is meaningless, just technical work. Makes no sense upon scrutiny.

True art is something that can’t be replicated by AI. You will have no doubt once you see it. It still exists even with the proliferation of AI art.

It’s like the difference between a random picture and a meme. The meme looks like a picture, but it captures an emotion or essential human truth that you connect with upon looking at it, where as a picture is just a random picture that could look like a meme but has no real meaning to it. You will know what I’m talking about.


I looked at the image and it feels like art in the terms that you put it.

What now. Is my inner experience of art not valid? How do we reconcile this?


> True art is something that can’t be replicated by AI. You will have no doubt once you see it.

Ignoring your assertion about what art is, I have to ask: what happens when you can't tell the difference?


Then our search for human connection through art only deepens.


It does look like AI generated art, tbh.


Only at first glance. It doesn't have any dodgy areas or incongruences like you pretty much always find in AI pictures.


The new hotness is letting the AI do the first pass then the artist cleans it up before posting.


Reddit = brainwash center


That's a truly interesting thing to learn on a site such as HN.


There is something inherently wrong there. The up/downvotes do not make sense; there is a seriously lopsided (non-human?) feedback that goes against common sense. And certain very popular subreddits are just puzzling. There are things you wouldn't do in real life if you knew a bit more about the context... and yet people on those subreddits get feedback that what they are doing is good, every critical thought is downvoted in an instant, and the list goes on. It's like an experiment on groups of people where their feedback is totally distorted.

I can compare it to a relationship where a parent praises their child when they do something wrong, instead of scolding them and explaining what is right. So in the end the child is doing totally batshit crazy things and thinks that is normal.

Or imagine that you get into a bubble where your sense of excess heat is distorted and you can touch red-hot iron without any severe pain, yet your hands are slowly crumbling away like charcoal.

The patterns there are very very strange.


They aren’t wrong. It looks like AI art because it’s meaningless, derivative bullshit.


I think this may be overreach by the moderator, but I basically agree with their points. This looks just like a Midjourney output. It is a hodgepodge of different cultural influences: an Asian character, with green eyes, in Greco-Roman garb, with Eye of Sauron orbs floating around her. It would once have been technically impressive, and now it is just a kind of cultural diarrhea.


So what? I guess the artist in question hasn't even been playing with AI art, or seen that much of it, so they don't have that context. And why do we need to look at AI art to know what is right for us to paint?


I have a passing familiarity with the webfiction "Beneath Dragon Eye Moons" that this artwork was designed for. All of the things you are complaining about make sense in the context of that writing.


You've got a point, but I feel like I've seen more egregious examples of what you're describing. The Sauron eyes are unnecessary, but without them, it could be an interesting "Muse in the Warzone" piece. The character doesn't really look Asian to me, though, or drawn in an anime style, but the expression could stand to be more subtle. We're all critics.


I agree with the moderator. AI will push artists to their limits and force them to become creative, which is the most important skill of an artist. Technicality isn't enough nowadays; it doesn't matter if you can paint like Raphael, you need to be creative like Picasso or Dali.


> Technicality isn't enough nowadays

When was it ever like that for artists? The most complex music to perform isn't necessarily the "best", and it's no different for art.

As an example from visual art, abstract art is sometimes very simple, yet has a profound impact on people, and it was never about being "technical", "complex" or "hard to reproduce".


AI has reached a point where aesthetics and technique are no longer hurdles. You can literally prompt gibberish and get back a highly aesthetic image that is completely boring and irrelevant.

So, the challenge is still to dig deep and make something interesting and relevant. The hurdles are removed. The gates are wide open. You can make anything. So… what will you actually make?


Technicality was everything in painting before the advent of the photograph.


The impressionists wholeheartedly disagree with your comment.


The Impressionists came after photography, and in fact Impressionism was something of a reaction to its development. "Since we have the camera to do realistic images now, we should concentrate on other things.", more or less.


> When was it ever like that for artists?

For centuries. Everything you just said about abstract art encompasses the entire reason it was important and prompted backlash that continues to this day.


What is creative? How many artworks like this, with something like Sauron's eye flying in the clouds, have you seen? Is that too unoriginal?

I don't think it's possible to judge these questions you present at a glance anyway.


Just like how AI is forcing coders to make better design decisions and focus on the bigger picture, since eventually it will be able to write any program. The implementation will be up to us.



