The Downfall of DeviantArt (slate.com)
286 points by jfryusef 15 days ago | 237 comments



I worked at deviantArt from 2009 to 2013. It was my dream job. At the time deviantArt made money a few ways.

In no particular order, because I don't know which were profitable or which represented a larger portion of revenue:

- Subscriptions (users could pay for a few extra features and to disable ads on the site)
- DeviantArt-branded merch
- Prints and products with users' art printed on them
- Sponsored contests. These promoted movies or other media properties, or software of interest to artists. Often the prizes included Wacom tablets and Adobe Photoshop licenses.

During my time there, a significant problem we were dealing with stemmed from deviantArt's stance on adult content. Anything was allowed as long as it wasn't outright pornography. In practice that meant that nudes were allowed but sexual acts were not. This had consequences for deviantArt's revenue. It meant that we could not run ads from the "reputable" ad networks and were forced to deal with seedier outfits that often (read: constantly) included malware in the display ads, exposing users to all sorts of nasty stuff. One of my projects was to detect and/or prevent the malware ads, which proved challenging and, at least given the amount of resources devoted to it, not very fruitful.
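To give a flavor of what "detect and/or prevent the malware ads" can mean in practice, here is a deliberately crude sketch of one piece of it: checking the hosts an ad creative references against a blocklist of domains previously seen serving malware. This is a hypothetical illustration, not what deviantArt actually ran, and the domain names are made up.

    from urllib.parse import urlparse

    # Hosts previously seen serving malicious creatives (made-up examples)
    KNOWN_BAD_DOMAINS = {"malvertiser.example", "fake-flash-update.example"}

    def referenced_hosts(creative_urls):
        """Hostnames a submitted ad tag pulls scripts or landing pages from."""
        hosts = set()
        for url in creative_urls:
            host = urlparse(url).hostname
            if host:
                hosts.add(host)
        return hosts

    def is_suspicious(creative_urls):
        """Reject the creative if any referenced host is on the blocklist."""
        return bool(referenced_hosts(creative_urls) & KNOWN_BAD_DOMAINS)

    print(is_suspicious([
        "https://cdn.adnetwork.example/banner.js",
        "https://fake-flash-update.example/payload.js",
    ]))  # True

In reality malvertisers rotate domains and hide payloads behind redirect chains, which is part of why simple approaches like this tend not to be very fruitful.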

It really is sad for me to see what deviantArt has devolved into. Once the original founders sold out a few years ago I really didn't hold out much hope for the site's future.


> This had consequences for deviantArt's revenue. It meant that we could not run ads from the "reputable" ad networks

Advertising: perhaps the best and most effective involuntary ally of the Puritan movement.

It is a shame that the Internet that we have created depends pretty much solely on that form of revenue.


What doesn't make sense to me is why the major ad networks don't just have a setting for advertising on NSFW sites, or for that matter on NSFW pages.

They're just leaving money on the table. Ads are sold at auction, so the clearing price on NSFW pages would be lower, but precisely because it would be lower, some advertisers would want the discount. The alternative is that the ad network bans them and gets nothing, the advertiser loses out on cheap impressions, and the site goes bankrupt.

Sites like Deviant Art could have a policy to the effect of "NSFW content is allowed but you have to mark it" and then advertisers who don't want to be seen next to NSFW content, aren't. Hypothetically some user could post something NSFW without marking it, but the same is true on sites that ban NSFW content entirely.

Then the site itself gets the full payment on the majority of pages that are safe for work, still gets something from the ones that aren't, and doesn't have to deal with shady malware-laden ad networks.
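To put toy numbers on that argument: in a second-price auction, brand-safety-sensitive bidders sitting out NSFW inventory just means the slot clears lower, not that it clears at zero. The bidder names and bid values below are entirely made up.

    def second_price(bids):
        """Winner pays the second-highest bid; returns (winner, price)."""
        ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
        winner, _ = ranked[0]
        price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
        return winner, price

    bids = {"insurance_brand": 4.00, "cpg_brand": 3.50,
            "discount_buyer": 1.20, "game_studio": 0.90}

    print(second_price(bids))  # ('insurance_brand', 3.5) -- SFW page, everyone competes
    nsfw_bids = {k: v for k, v in bids.items() if k in {"discount_buyer", "game_studio"}}
    print(second_price(nsfw_bids))  # ('discount_buyer', 0.9) -- cheaper, but not zero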


> What doesn't make sense to me is why the major ad networks don't just have a setting for advertising on NSFW sites, or for that matter on NSFW pages.

Two main reasons:

- Websites cannot accurately classify content a lot of the time without human review. Ads are served dynamically, so they could appear on an NSFW post faster than a human can review it. It's less work to simply optimize for as little NSFW as possible.

- NSFW content consumers aren't valuable to advertisers or ad network operators because they don't sell products that could be placed as a 'contextual' ad. Ad platforms need insurance brokers, banks and CPG brands to make up the fat tail of revenue. That stuff can be placed next to posts about pretty much anything. NSFW content on the other hand doesn't work the same way, which is why ads on porn sites are for scam dating sites and unregulated erection pills.


> Websites cannot accurately classify content a lot of the time without human review.

This is the same whether NSFW content is banned or not. "It's banned but somebody posted it anyway" has no apparent solutions different than the problem of "the user is required to mark it but they didn't."

As an obvious example, adult content is nominally banned on sites like TikTok, and yet there are thousands of TikTok accounts that serve as the funnel for OnlyFans, with the creators dancing on the line of the site's policy (and often crossing it), only to create a new account and carry on if they get banned.

> NSFW content consumers aren't valuable to advertisers or ad network operators because they don't sell products that could be placed as a 'contextual' ad.

Which is fine because the ad slots are sold at auction. The ad network's choices are to get something or get nothing.


The first "kosher" brand that advertises on NSFW contexts can take my money.


It still wouldn't work in the way you think it might. Advertising is about keeping something "top of mind", companies won't want to link their brand in someone's mind to when they're pounding off.

This is doubly so considering the use of remarketing cookies. While you're watching something that's bound to fill you with regret moments after the fact, you won't want to see an ad showing the Amazon products you were browsing previously.


They are leaving money on the table. More than that, they have funded deliberate efforts to find non-conforming sites and kick them out of their networks, and that's not free.

It's a deliberate, active decision with a cost attached. That's policy, prioritized over cost and profit.

In the US, there is often concern that the more valuable clients or end-users might otherwise boycott their product, or get them regulated even further. A concern that they are under pressure to do that, or else face even more costly consequences. (And they are. It's a very vocal part of society that's puritan.)


Allowing adult content puts your site in the same category as every porn site on the internet. Maybe it doesn't make much sense but that's just how they play it. It's not literally about seeing your ad directly next to an NSFW image; brands don't even want to be on the same domain as homoerotic Sonic the Hedgehog fan art.

edit: sibling comment has a better explanation which I think is probably 100% on the nose for this one.


It's not just ads, it's a lot of B2B services that ban you if you don't have a strict rule on sexual content (or even nudity).


Unless it's a conservative religious service, why would they care? Money is money, and porn isn't a crime.


In some places it is.


How silly.


> Advertising: perhaps the best and most effective involuntary ally of the Puritan movement.

Which is weird, given that "sex sells."


It needs to be teased not given away.

> If music be the food of love, play on;
> Give me excess of it, that, surfeiting,
> The appetite may sicken, and so die.

The point is actually to suggest sex. Keep you aroused, but not let you finish.

Imagine if Budweiser ran a brothel and buying their beer actually got you laid. You'd get laid, forget about sex for a while, and the association between the beer and sex would not entice you to buy the product.


> Advertising: perhaps the best and most effective involuntary ally of the Puritan movement.

The usual defense by advertisers is that their clients do not want their businesses to be associated with adult material, so they insist that advertisers do not place ads on adult-content-friendly websites to reduce the risk of that association happening due to ads adjacent to adult content.

I'm curious as to how valid that is. Are many/most businesses (or many/most businesses that are highly valuable to advertisers) materially concerned enough about association-with-adult-content-via-ad-placement that they would switch advertisers over it? Or are advertisers manufacturing family-friendly ad placement as a competitive selling point that their customers never really asked for?

I'm also curious about the next level down the stack, as it were: if a significant subset of businesses who hire advertisers are concerned about adult-content-association-via-nearby-placement, why is that? Is it merely disgust/discomfort on the part of business leadership and prevailing culture? Or does data indicate that there's a material risk to enough businesses' bottom lines that it's fiscally prudent to avoid that association?

(Reasoning from first principles without data, so probably wrong) I'm skeptical about the validity of both claims. Adult content is really popular across many demographics whose behavior is otherwise quite different. Given that broad base of popularity, is it really that risky for a business to have its brand appear in an ad next to pornography compared to, say, appearing on a politically extreme news site, or next to algorithmically-prioritized ragebait content on social media? Unlike adult content, I feel like the associations formed by seeing a company mentioned next to something anger inducing are more likely to be negative than the associations formed by seeing it next to adult content that the viewer presumably sought out. In both cases, the chance of behaviorally-significant associations being formed at all seems quite low, so I'd imagine that effects here (if there are any) would only be visible in the very very large.


https://hbr.org/2024/03/lessons-from-the-bud-light-boycott-o...

Advertisers don’t have to believe the content is actually objectionable to fear reprisal. Outrage machine types on all sides will make an issue out of anything they can use to score points for themselves.

The marginal value of putting a Johnson & Johnson ad (say) in front of some furries is nothing compared to dread of the headline “J&J wants your children to be furries!”


This.

The advertisers care about getting bad press and having to do the rounds in the "news cycle".

By now everyone knows that advertisers don't choose which posts, tweets, images, etc. their ads are shown next to, but that still won't stop the CNNs of the world from writing a "<Brand> advertises next to Nazi content on <Platform>!" article.


While many media outfits are very good at manufacturing outrage, I’m not sure whether a big ad farm putting (say) Walmart ads on PornHub would materially move the needle on bad-press-risk.

Like, if a media outlet or politico wants to make hay about Walmart being advertised next to nazis, I’m sure that’s trivially easy to find today (and you only need to find one instance of that adjacency to make hay for a news cycle).

So why isn’t this already an issue in the status quo? Why aren’t brands being pressured into changing their ad placement habits right and left?

Sure, some businesses are being called out for things they explicitly endorse (e.g. Budweiser pride ads), but that’s not the same thing as running an ordinary ad in a questionable place.

Perhaps advertisers have written off adult content sites as places where there isn’t enough money to be made to risk it? If so, I’m curious what consumer behavioral data backs up that conclusion.


>So why isn’t this already an issue in the status quo?

Didn’t this just happen to Xhitter? Their ad revenue has dropped like a brick.


> putting a Johnson & Johnson ad (say) in front of some furries

A very real possibility when advertising on DeviantArt. But a no-adult-content policy would (probably) have satisfied AdSense terms while not eliminating (most of) the furry content on DeviantArt.


> The usual defense by advertisers is that their clients do not want their businesses to be associated with adult material

That's the biggest joke in the world: the largest distributors of explicit content are by far the advertisers, who use it to sell anything from cars to shampoo.

The advertising industry is the very last industry which can claim a moral high ground on anything.


> the largest distributors of explicit content are by far the advertisers, who use it to sell anything from cars to shampoo

Relevant Bill Hicks: https://www.youtube.com/watch?v=G4NyMJHWVHw


Reread what you quoted. The advertising industry isn't claiming the moral high ground on anything; they claim their clients are. And their clients will tell you: no, it's the customers who will throw a fit at the slightest hint of nipple.

It's buck-passing all the way down.


They lie outright. The other day I saw an ad for a body lotion that “modifies your skin DNA”, according to the ad.


> I'm curious as to how valid that is. Are many/most businesses (or many/most businesses that are highly valuable to advertisers) materially concerned enough about association-with-adult-content-via-ad-placement that they would switch advertisers over it? Or are advertisers manufacturing family-friendly ad placement as a competitive selling point that their customers never really asked for?

I've worked in a couple of digital advertising roles (mea culpa) and brand safety is a very big deal. No-one wants their product associated with porn, extremism, or even politics. eXtwitter's revenue implosion due to advertisers pulling out (pun not entirely intended) is one recent example of the importance placed on brand safety.


That's what I'm curious about. Are advertisers insisting that brand safety is hyper-important and businesses are going along (or starting to believe it after enough repetition)? Or are businesses independently bringing that constraint to advertisers? If the latter, why do businesses believe that--is it based in evidence of bottom-line risk to the business (customers will leave in proportion to brand-safety-compromising ad misplacement), or is it a more subjective combination of risk aversion and distaste for brand-unsafe content?

People who hate nudity will organise protests and get news coverage, giving more voice to a minority. Same with anything: the ones massively for/against something will spend far more time/money on complaining than the passive majority will. See gun control in the US for an example.


It ultimately all happens due to news media. While there are many journalists who work hard to earn the high status that was accorded to the profession in the past, the news industry as it stands today, by and large, exists mostly to foment outrage and spread strife.

If the media stopped writing shrieking news articles about "X brand promoting Y evil", we could all move on collectively as a society.


Religion is the source of that nonsense. Help bring it down any legal way possible. Ridicule it, expose it, help people realise it's just a bunch of fairytales, most often useless and sometimes actually damaging.


There are plenty of feminists that are as militantly atheistic as they are militantly anti-pornography/prostitution/female nudity. The modern Moral Majority isn't just composed of religionists anymore.


I would have to pick my jaw up off the floor if militant feminists composed even 2% of the general population.


Just to calibrate your sense of scale: 1% of the United States population is still 3 million people.


You are right in calling out that these people are a tiny portion of the population, yet here in 2024 their numbers don't seem to matter, as they have been wildly successful in moving the Overton window.

In summary, they are a tiny %, but by no means insignificant in terms of impact.


It's not a head poll, it's who has influence. One NYT columnist has more influence on the politics of the society than 1000 people in rural Appalachia. Influence is not distributed equally, it's very heavily skewed.


Moral majority or vocal majority?


My phrasing was made in reference to the political movement largely associated with Jerry Falwell. Despite the name, the movement never lived up to either aspect of its name.

https://en.wikipedia.org/wiki/Moral_Majority


There are valid social reasons for rejecting the cheap manipulation of people using addictive visual patterns that have nothing to do with religion but do relate to ethics. Even if no humans are hurt in the creation of the content, humans can be harmed by its addictive consumption.


Okay but would you want your ads on a porn site if the business was yours? That makes it seem like you approve of that behavior (e.g. drug use, gore, etc).


What does porn have to do with drug use and gore?


It doesn't mean you approve of anything, but for sure in the US it makes you a target for the vocal moralists. Leading to boycotts (which might kill you), dubious free publicity (which you might feel is worth it or not), and extra regulation (which might kill you).


Where did pharmaceuticals and gore come into this?


Sounds like a very religious way to handle your issues


You are correct. Religion taught me this. So why not let it backfire on them.


Yes, when they famously declared "there's no sex in USSR" it was because USSR was a theocracy.


Communism was a religion.


So, you extend the definition of a religion to "everything a lot of people support" and then declare that it needs to be "brought down". How does it work for you - you plan to ban people from agreeing on anything, and thus enshrine world peace?


As much as science is a religion.


Yet somehow being a nuisance streamer on platforms like twitch or kik still yields insane profits. Just don't show your ass and you'll never get banned for anything. Bonkers.


For what it's worth, I really think DeviantArt should have focused much more on the other revenue sources and abandoned display ads. I'm not sure how that would have turned out, but I suspect it might have been good for the company. The sponsorships were far more aligned with the interests of deviantArt's users, and I think they were a significant source of revenue that could have been expanded with the right focus and execution. I'm pretty sure sponsorships amounted to $60k to $150k per deal, and each deal usually ran for 1 or 2 weeks with a sponsored art contest; between judging and awarding prizes, it wasn't a lot of administrative work for deviantArt AFAIK.
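Back-of-the-envelope, using those per-deal numbers and assuming, purely hypothetically, that contests ran back to back all year:

    low_per_deal, high_per_deal = 60_000, 150_000
    deals_low = 52 // 2    # if every deal ran two weeks
    deals_high = 52 // 1   # if every deal ran one week
    print(low_per_deal * deals_low, high_per_deal * deals_high)
    # 1560000 7800000 -- roughly $1.5M to $7.8M a year at full utilization

That range rests on an assumption the comment doesn't make, but it shows why the sponsorship line looked worth expanding.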


For what it's worth, a very, very long time ago, when 500px was just getting going, they had me design a go-to-market for them, and that is what I designed.


Interesting!


It's still an issue for most publishing on the internet: how to fund it. Advertising is still the obvious, easy way. Not much else comes to that level of available funding. Subscriptions are next, but then again mostly only if you can accept credit cards: same problem again.

How does 4chan fund itself?


Apparently investment from a Japanese toy company, Good Smile[0,1], and the pockets of its current wealthy owner Hiroyuki Nishimura[2].

[0]https://www.polygon.com/23662953/good-smile-company-4chan-se...

[1]https://news.ycombinator.com/item?id=31553474

[2]https://en.wikipedia.org/wiki/Hiroyuki_Nishimura


It has ads and you can pay for a subscription to remove captchas and allow VPN/Tor access (from memory).

Then credit card payment processing/PayPal would have been weaponized earlier, and the result would be the same.

It's the App Store. Centralized censorship through the App Store destroyed the Internet.


> It meant that we could not run ads from the "reputable" ad networks [...]

It's funny how a significant percentage of the Facebook ads I see are outright scams, phishing, etc.

It's been like that for years.


Back in 2012 there was a very large gap between the quality of ads on Google AdSense vs. all of the other options. DeviantArt was banned from using Google AdSense and thus had to run ads much more similar to the crap you see on Facebook these days.


imo once the SuicideGirls and the suzi9mm stuff started to be "the thing" and we allowed a bunch of people from that crew into GD, things started to change really quickly. Honestly, I hated it so much, it made me really upset. I have no issue with that style of art, but it really took over the narrative for a while, and that was silly, and I'm still surprised some 20+ years later it was allowed to happen.


Who or what do you think it "was allowed" by? Nothing kills an edgy contemporary art platform faster than censorship, plus they would've got bogged down endlessly in fights of "you let someone get away with x, so why are you blocking y". The place is called DeviantArt FFS.


> Nothing kills an edgy contemporary art platform faster than censorship

I don't know the solution but isn't 4chan an example of what happens with no censorship? That would also kill an edgy contemporary art platform as the "4 teh lulz" crowd drowns out the "for the art" crowd.

As another semi-related example. Imgur used to be interesting "images" but now 25-75% are posts of text (like a screenshot of a tweet for example, or of a blog post, etc...). That might be good for Imgur's bottom line, I don't know, but it's not the site it started as.

To be the site it started as would require somehow disallowing images of text but magically allow meme images (some text). That might end up starting an arms race as users skew their messages onto billboards, tvs, signs, and other things to try to make them not look like just an image.

Like I said above, I have no idea if they want the site to be mostly images instead of mostly text. Only that it's another site that changed character over time and I'm sure didn't start as a site expecting mostly images of text.


4chan has censorship and extensive, active moderation, but the bar for what is acceptable is lower than on most other sites. Unlike Reddit, moderators are hidden; there's no way to know if a user is a mod or not. I like this way of removing ego and moderator abuse, but this makes the place seem more chaotic, unaccountable and lawless.

In theory it's possible to have the same kind of site with the regular family friendly content and with strict (i.e. normal) levels of moderation. In practice we don't see this.

I think it's because the level of active moderation by humans needs to be high. So 4chan can get away with spending less on moderation because its bar is lower.


> but this makes the place seem more chaotic, unaccountable and lawless

That's exactly how reddit mods operate, no?


Reddit admins, employed by the site, serve as a line of (corporate) enforcement.

Otherwise, reddit isn't a democracy, the mods can do what they want (within limits set by the admins), just like they can do what they want here.


Unfortunately, Imgur also largely ceased being memes and/or image storage for discussion boards (Reddit) and started being a lot of TikTok / Vine / YouTube Shorts cross-posts. "Here's my dog / cat, my dog / cat's cute." "Here's us in our car, we're very clever." "Here's us in softcore dress-up, look at how attractive we are." There are memes, just not that many unless you mostly subscribe to subdomains (#meme, etc...).


Perhaps ironically (or perhaps just tellingly), there was a very large overlap between the 4chan and DeviantArt user bases back when I worked there.


I'm happy to say it: Daniel and Richard. But especially Daniel.


Enough time has passed, I guess I can speak my mind a bit. I'm sorry, but in my opinion, The downfall of DeviantART was when Angelo fucked with Scott and Eric. Scott and Eric should have run it with Chris and Heidi. Kicking a co-founder out unceremoniously, and especially the co-founder who was responsible for the customers, that was really ill guided. I have nothing against Angelo personally, I'd still consider him a friend, but yeah, I truly believe dA would have been amazing today if that hadn't happened, everything got really fucked up after that imo.

Building DigitalOcean was fun, but building dA was 1000 times more fun, best times of my life for sure, very very grateful.

(I worked on gallery, help, IRC, dAmn and a bunch of other stuff, 2001-2008; my emoji still exists :neom:)


I remember that. I was part of the group vocal about how messed up it was. I was at the first summit and it was a blast, took home a couple of those posters. So sad to see what’s happened to it.


What was so interesting to work on at dA? Multiple people in this thread have mentioned how fun it was. I'm wondering what was so fun about it. Seems like a pretty simple application from the outside, but maybe there were some fun technical challenges?



Hey, just picking this comment to say thank you for DeviantArt. It was a major part of my pre-teen and early teen years. It got me into a bit of art. While I am not an artist, I loved the community. At the time, being young and not caring about things like plagiarism, I also copied a large chunk of Scott's website design for my own page, which I used for a long time. And yet, Scott never mentioned anything, even though I think he was the one who approved my invite to `neopages`, where I moved it to.

Anyways, even though dA of the past is gone, it was definitely an important part to many people, so thanks again.


I tried looking up :neom: but couldn't find any hits on image search. Got a link to share? I'm curious now :D


People think it's the f'ing monkey emoji tho: https://e.deviantart.net/emoticons/n/neom.gif :| :| :| :| :| :| :| :| :| :|


So much of the DeviantArt story reads like Tumblr. Two platforms appealing to amateur and small artists grow to great relevance among a patchwork of subcultures. Then, they start trying to turn a profit and end up alienating the entire userbase that carried them to that point. DeviantArt is much further down that road than Tumblr is, though. It's sad to see. Both platforms were key to the WWW of my childhood.

I wish the artists well in their AI copyright legal pursuits.


Apple mandated the changes to Tumblr that pissed everyone off. I'd almost call this a forced error, but Apple's rationale for bringing the hammer down on Tumblr was App Review finding CSAM on the front page. Which is itself a failure of their moderation team.

In contrast, DeviantArt saw dollar signs from AI art and rugpulled themselves. Their business model relies on art remaining scarce enough to not exhaust the demand for art. A machine that lets you create unlimited art for the cost of some GPU time completely destroys the economic underpinnings of most artistic endeavor. While not all artists are solely economically motivated, the ones that are economically successful are the ones paying for dA subscriptions - the things that keep the site alive.


To my understanding, what happened with Tumblr was more of a compounding situation; the Apple App Store reviewer managed to find NSFW content in general, forcing Tumblr to change the app to remove it.

Before, under Yahoo, they'd just put some new hoop in the app to prevent users from accessing adult content, but by the time this particular Apple review rolled around, the sale of Tumblr to Verizon had already been finalized. That created a situation where the outgoing management pretty much ordered staff to not bother fixing it and just ban it all, hoping that Verizon's "family friendly" policies meant it wouldn't jeopardize the sale (and, as it turned out, the sale wasn't jeopardized).

You can still kinda see it in how broken the actual removal was; they just excised the NSFW from the frontend by marking the posts as sensitive on the API and then preventing the frontend from viewing anything sensitive. For years (and maybe even today) you could just scrape the API to find NSFW posts, although that's on the decline since most NSFW Tumblr accounts have been deleted entirely by the actual people behind them.
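In other words, the filtering lived entirely in the presentation layer. A caricature of that pattern (the field names here are invented; this is not Tumblr's actual API):

    # The API still returns flagged posts; only the UI layer hides them, so
    # anyone reading the API directly sees everything.
    def visible_posts(api_posts, frontend=True):
        if not frontend:
            return api_posts  # raw API consumer
        return [p for p in api_posts if not p.get("is_sensitive")]

    posts = [
        {"id": 1, "is_sensitive": False, "body": "landscape photo"},
        {"id": 2, "is_sensitive": True,  "body": "flagged post"},
    ]
    print(len(visible_posts(posts)))                  # 1 -- what the web UI shows
    print(len(visible_posts(posts, frontend=False)))  # 2 -- what the API exposes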


> Apple mandated the changes to Tumblr that pissed everyone off.

That's not true - Tumblr already had their "no adult content" plans in place well before the CSAM problem caused Apple to temporarily suspend them from the app store. That suspension just brought the plans forward by 6 months in a panic rush.


Apple will find the sexual content and push for its removal. Not just the stuff on the front page, but anything that gets surfaced within a reasonable browsing session (that includes popular items, keywords etc.)

And it's not just Apple, payment processors also have strong opinions.

In general, sexual/erotic stuff has become a really hard thing to keep allowing in mainstream platforms.


Then how is there a Reddit iOS app?


It's harder to accidentally reach porn on Reddit than it was on Tumblr ca. 2018. You could follow tags like #cute and #fox because you liked a cute fox picture and the next day get somebody's Sonic and Tails furry futa fanart on your dash because they used those tags.


Wait, they don’t have an nsfw search flag like Reddit??


Tumblr does have a self-reporting NSFW feature now, IIRC.

I don't have the app anymore, but I'd assume you don't end up in r/GoneWild with 3 random clicks from opening the app as a new user, nor that porn is prominently in the default subreddits.


When photography first began to proliferate in the 19th century, people feared that it would replace the art of painting [1]. Photography is used for some practical applications where painters were once employed, but we still paint and draw things. Ultimately, the two mediums are distinct in their aesthetics and expressive abilities.

Using that situation as a reference point, I'm not totally convinced that AI art will supplant digital art/drawing. AI art can be used to produce some types of drawings, sure, but not all types of drawings can be convincingly produced using AI. AI's stylistic and compositional abilities are constrained to its training set, for example.

Another thought: when the DSLR became affordable in the early 2000s, it wiped out an entire segment of low-level professional photographers. Suddenly, any schmuck with a DSLR could take "good enough" photos with ease, and without the friction of film. I'd expect AI art to have a similar effect in the realm of digital illustrators. But, professional photographers still exist, and so will professional illustrators.

[1]: https://daily.jstor.org/when-photography-was-not-art/


Part of Tumblr's downfall also comes from their change of stance on NSFW content.


Tumblr's change in policy on NSFW content was bad enough, but what made it a complete disaster was outsourcing the enforcement of that policy to a crude image classifier. A lot of non-pornographic content got removed when that happened, and a lot of users never bothered contesting those removals (either because it was too much effort, or because they were no longer maintaining their account). So a lot of content on older Tumblr accounts is just gone.


Quite an understatement.


It's possible that artists posting their art aren't a large enough demographic to produce enough economic value that can be harvested to feed a cadre of SWEs and PMs and SREs and executives and moderators and, and...


I think it's just not possible for a centralized social media service to avoid enshittification long-term, at least not if it has to make money directly. It remains to be seen whether decentralized options can provide a long-term alternative at scale.


But why does a social network have to make money directly?

Why not start a centralized social-network service as a non-profit / benefit corporation, paid for by donation?


For the same reason FOSS projects aren't funded in proportion to their criticality, and why taxes aren't voluntary: given the option to use the service for free, most people will do so and choose not to donate. Any such project, to stay afloat, will likely end up depending upon a small number of donors who can then exercise political control over the platform.

A social network has to make money somehow because it has bills to pay. Hosts aren't free. Servers aren't free. The cloud isn't free. Staff isn't free. Moderators are usually free but shouldn't be.


> A social network has to make money somehow because it has bills to pay. Hosts aren't free. Servers aren't free.

Sure, but probably a FOSS social network would need far fewer of these than a paid one, because 99% of the server costs of something like Facebook or Twitter, go toward the backends, analytical DBs, and graphical / ML models used to power "features" that no user wants, but which make Facebook themselves money.

And a FOSS social network would just... not build those kinds of features.

Instead of, say, a feed constantly rebuilt to drive engagement and rage-bait, you'd just get a simple chronological feed of what everyone you're following is posting; with maybe the ability to dial down the number of posts you see from any given person you're following ("show me only the most-liked Nth percentile of posts from this user") without unfollowing. But even that kind of filtering — in fact, even the merging of followed users' feeds! — could all be done client-side. The whole "feeds" part could be as simple as a post-triggered static-site-generator pushing JSON into an object-storage bucket hiding behind an edge cache.
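A minimal sketch of that "post-triggered static feed" idea, assuming S3-style object storage via boto3; the bucket name, key layout, and rebuild trigger are all made up for illustration:

    import json
    import boto3

    s3 = boto3.client("s3")
    BUCKET = "example-feeds"  # hypothetical bucket behind a CDN/edge cache

    def rebuild_feed(user_id, followed_posts):
        """Regenerate one user's chronological feed as a static JSON object.

        followed_posts: list of dicts like {"author": ..., "ts": ..., "body": ...}
        gathered from the people this user follows.
        """
        feed = sorted(followed_posts, key=lambda p: p["ts"], reverse=True)[:500]
        s3.put_object(
            Bucket=BUCKET,
            Key=f"feeds/{user_id}.json",
            Body=json.dumps(feed).encode("utf-8"),
            ContentType="application/json",
            CacheControl="max-age=60",  # let the edge cache absorb most reads
        )

    # Call this whenever someone the user follows publishes; the client just
    # GETs feeds/<user_id>.json and does any further filtering locally.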


> 99% of the server costs of something like Facebook or Twitter, go toward the backends, analytical DBs, and graphical / ML models used to power "features" that no user wants

This is incorrect. From direct personal experience, I strongly believe the backend infrastructure and staff required just to operate the core product functionality of a successful large-scale social network massively exceeds what could be provided by donations.

Just in terms of core product OLTP data, Tumblr hit 100 billion distinct rows on MySQL masters back in Oct 2012. At the time, after accounting for HA/replication, that required over 200 expensive beefy database servers. This db server number grew by ~10 servers per month, because Tumblr was getting 60-75 million posts/day at this time.

Then add in a couple hundred cache and async queue servers, and over a thousand web servers. And employees to operate all this, although we kept it quite lean compared to other social networks.

Again, this was all just the core product, not analytics or ML or anything like that. These numbers also don't include image/media storage or serving, which was a substantial additional cost.

Although Tumblr had some mainstream success at that time, it was still more niche than some of the larger social networks. At that time, Facebook was more than 2 orders of magnitude larger than Tumblr.


> From direct personal experience, I strongly believe the backend infrastructure and staff required just to operate the core product functionality of a successful large-scale social network massively exceeds what could be provided by donations.

Because these social networks are designed for analytics. It's in their blood. It permeates everything they do, and causes immense overhead.

Check out mailing lists or usenet.


No, nothing about this is related to analytics. I was strictly describing storage, caching, and compute for core product functionality in my previous comment, which is written from direct first-hand experience working on infrastructure for social networks for a decade, including the two social networks I referenced in my previous comment.

Social networks store a lot of OLTP data just to function. Every user, post, comment, follow/friendship relation, like/favorite/interaction, media metadata -- that all gets stored in sharded relational databases and retrieved in order for the product to operate at all. For successful social networks, it adds up to trillions of rows of data (on the smaller end, for something like Tumblr) and that requires a lot of expensive infrastructure to operate. Again, none of this has any relation to analytics.
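For readers who haven't worked with this kind of system: "sharded relational databases" here just means each user's rows live on one of many database servers, chosen deterministically from the user id. A toy sketch (shard count and host naming are invented; real deployments also deal with replication, re-sharding, and cross-shard queries):

    N_SHARDS = 256

    def shard_for(user_id: int) -> int:
        """A user's posts/follows/likes all live on the shard derived from their id."""
        return user_id % N_SHARDS

    def host_for(user_id: int) -> str:
        return f"db-shard-{shard_for(user_id):03d}.internal"  # hypothetical host naming

    print(host_for(123456789))  # db-shard-021.internal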

As for usenet, what? It's basically dead, after becoming an unmanageable cesspool of spam (or worse) more than two decades ago. It was great in the 90s, but the internet population was substantially smaller then.


> As for usenet, what? It's basically dead, after becoming an unmanageable cesspool of spam (or worse) more than two decades ago. It was great in the 90s, but the internet population was substantially smaller then.

Yeah, because no one is interested in promoting it: it doesn't have analytics baked in, so you can't make money from doing so. Of course it deteriorated over the years. It's also cheap to run and can handle a massive number of users.

> store a lot of OLTP data just to function

Right, so they can run analytics. You could reduce your tracking data to aggregates, but then you can't go back and run analytics on your users. You don't need to keep that data forever.

Especially with modern social media where content older than a day is effectively dead and ignored.

> it adds up to trillions of rows of data

This was a lot of data a decade ago. Nowadays a single postgres instance will handle billions of rows without breaking a sweat, and social media content is exceptionally shardable.


> Right, so they can run analytics.

Stop gaslighting me, it's not OK! I'm describing first-hand experience of things that were not related to analytics IN ANY WAY, SHAPE, OR FORM.

Try running OLAP queries on a massively sharded MySQL 5.1 deployment, or any aggregation at all on a Memcached cluster. These technologies were designed for OLTP data, and were woefully incapable of useful analytics over massive data sets.

I was Tumblr's fourth full-time software engineering hire. When I joined (nearly 4 years after the company was founded) the only thing remotely related to analytics was a tiny Hadoop cluster, where logs were dumped and largely ignored. Nothing about analytics is "in their blood". All you needed to sign up for Tumblr was an email address. WTF do you even think they are "analyzing"? Your comments are completely fabricated BS.

> You could reduce your tracking data to aggregates

Once again, I'm not describing "tracking data"! I'm talking about things like content that users have posted, comments they have written, content they have favorited, users they are following. These are core data models of a social network. It has nothing to do with tracking or analytics.

> You don't need to keep that data forever.

The OLTP product data I'm describing does need to be kept forever. Users don't like it when content they have written on their blog suddenly disappears.

> Nowadays a single postgres instance will handle billions of rows without breaking a sweat, and social media content is exceptionally shardable.

Yes, but running a massive cluster of hundreds or thousands of sharded database servers is still very expensive.


I think that what they're getting at is that in the Usenet days half of what you've mentioned would be local data.

There is no central concept of "content they have favourited" or "users they are following"; that's all handled locally in that model.


> half of what you've mentioned would be local data

Not by volume. Posts and comments make up the vast majority of the storage requirements, and none of that can be purely client-side.

> There is no central concept of "content they have favourited" or "users they are following"

I'm aware, I used Usenet quite a bit in the 90s, as well as dial-up BBSs.

Usenet is a distributed forum / discussion board, which is related but not equivalent to the core functionality of social media applications being discussed here.

With Usenet's model, there's no concept of a profile aggregating content from a single user. This means you simply cannot replicate the primary experience of Facebook, Twitter, Instagram, Tumblr, Pinterest, DeviantArt, MySpace, Friendster, or any other social media site/app with Usenet's approach. Nor can it reproduce the experience of even modern forums like HN or Reddit.

Usenet also didn't actually scale massively. Every estimate I've seen of the peak Usenet userbase puts it at a tiny fraction of modern social media.

In any case, Usenet essentially failed. We already have empirical evidence about how these ideas play out! Why are we even seriously discussing this?


> Check out mailing lists or usenet.

These aren't really good "social media" examples. Both mailing lists and Usenet have limited retention; with mailing lists there may be almost no retention beyond the amount required to deliver a message.

While low retention might be a desirable feature and something you might actually want in a FOSS social network, it means old content will disappear from the central server. If it's not archived by clients it can easily disappear or end up locked away only in private backups. Google's buyout of Deja News should be a cautionary tale of retention and the locking up of public data behind a private gate.

Usenet history today is largely only available because someone at Google hasn't noticed Google Groups still exists and terminated it yet. If that happens tomorrow there's not any good complete archive of historical Usenet content. There's no guarantee Google won't kill those Usenet archives in the next year let alone the next five years.


Also speaking from direct personal experience. I have an API SaaS (which, yes, I realize, as a B2B product, has a much smaller audience than a B2C platform, and so probably much lower reads per second... but I want to just talk about writes here, since reads and aggregates-of-reads get cached on so many other levels.)

We have 1.5 trillion distinct rows living on a PG cluster right now. Growing at a higher rate. And that requires... four DB servers (four shards). And no replicas — all our prod query traffic hits these servers directly, as a mixed workload with the writes from our queue-consumer ETL agents.

(The servers are "beefy", sure — but they're not that beefy. They've got 128 cores and 1TB of RAM each. That's not even top-of-the-line these days.)

Despite serving a whole ton of QPS, these DB servers are not CPU-bound or memory-bound or even IO-bound. Our primary horizontal-sharding factor, in fact, is PCIe lanes — we're constrained mainly by the number of NVMe drives we can keep connected performantly per CPU [1] — and thereby, generationally, by "fast storage capacity" per server platform.

One thing that's perhaps "special" about our use-case, though, is that our data is inherently append-only. Which is pretty great for OLTP performance: there's very little write contention, as we just partition the data by insert time (which is also a natural partitioning key[2] for most queries our business layer does) and then only write to the newest partition, with all previous partitions becoming effectively immutable.

Most workloads aren't like this... but they could be, if you model your data carefully, with temporal tables holding versioned records and so forth. You trade off more storage growth for far less worry about write contention. ("Event streaming" is not the silver-bullet solution it would seem here — as you would still need to create an aggregate to query from; and writing into that aggregate would still be an OLTP write bottleneck. No, the true solution here is data-driven [schema] design — i.e. forcing your API engineers to invite the DBA to feature design meetings, and ensuring the DBA has the prerogative and incentives to say "no" to a bad design. Or you could just hire database systems engineers to code your business layer, like we did.)
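A minimal sketch of that insert-time partitioning, using plain PostgreSQL declarative partitioning driven from psycopg2. The table, columns, and DSN are invented for illustration; this is not the commenter's actual schema.

    import psycopg2
    from datetime import datetime, timezone

    DDL = """
    -- Append-only fact table, partitioned by insert time; older partitions are
    -- never written to again, so they see essentially no write contention.
    CREATE TABLE IF NOT EXISTS events (
        id          bigserial,
        tenant_id   bigint      NOT NULL,
        payload     jsonb       NOT NULL,
        inserted_at timestamptz NOT NULL DEFAULT now()
    ) PARTITION BY RANGE (inserted_at);

    CREATE TABLE IF NOT EXISTS events_2024_06 PARTITION OF events
        FOR VALUES FROM ('2024-06-01') TO ('2024-07-01');
    CREATE TABLE IF NOT EXISTS events_2024_07 PARTITION OF events
        FOR VALUES FROM ('2024-07-01') TO ('2024-08-01');
    """

    with psycopg2.connect("dbname=example") as conn:  # hypothetical DSN
        with conn.cursor() as cur:
            cur.execute(DDL)
            # Writes only ever land in the newest partition; queries that
            # filter on inserted_at prune old partitions automatically.
            cur.execute(
                "INSERT INTO events (tenant_id, payload, inserted_at) VALUES (%s, %s, %s)",
                (42, '{"kind": "api_call"}', datetime(2024, 6, 15, tzinfo=timezone.utc)),
            )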

My point here is that if you're very careful in designing for not just "scalability" but economies of scale — and if your org is engineering-driven rather than product-driven (as a non-profit social network would likely be!), and so has "mechanical sympathy" — then you can achieve things with just the budget of an average B2B SaaS that would rival a (smaller) social network.

But with the budget of a benefit corp that nevertheless charges even $1 for each install of its social-network mobile app? Who knows what you could achieve!

(See also: WhatsApp, pre-acquisition. They were just charging $1 for each install. With just 50 employees, that got them a lot of operating budget.)

---

[1] And actually, that's part of why the DBs are not IO-bound. Put enough NVMe disks in RAID0 together, and you get something nearly functionally indistinguishable from memory. (Yes, RAID0 — because our DBs aren't the store-of-record; our data lake is.)

And even where NVMe reads are not indistinguishable from memory reads, they're still complementary. Postgres, in relying on mmap(2)ing heap files and thereby on the OS page-cache as a buffer pool, makes per-query serial page faults expensive, and causes multiple threads wanting the same cold data to bottleneck by repeatedly becoming coalesced awaiters on the same sequence of pages faulting in. But when you've got highly-concurrent queries [= many different OS threads] that are all page-faulting for different cold data, then you get deep IO queuing, and everything works out optimally. So the expensive cases don't really come up — the page-cache serves the head of the request distribution, while the deep IO queues of the RAID0-NVMe-pool are a perfect match for the long tail.

This makes it almost irrelevant, at runtime, whether data is hot or cold. As long as you have sufficient memory to host the very hottest data (e.g. relatively-low-cardinality fact tables like users/blogs that get joined to everything, and esp. their indices), the rest can be read straight from disk with nearly no penalty.

---

[2] Funny enough, given that we're doing heavy joins for some queries, the one thing we do sometimes run out of at runtime, is timeslices of the Postgres postmaster to coordinate lightweight locking of the global locks table.

We not only have time-partitioned tables, but also something akin to tenant-partitioned schemas. This results in a lot of relations. (It takes hours for us to run vacuumdb --analyze-only.)

And some OLAP-ier queries (that we nevertheless have to run, synchronously, in response to client requests!) need to touch many of the partitions and many of the schemas. That's O(MN) locks they need to take — which take time to write into the global locks table, even though those are only memory writes.

Every once in a while, we have to "consolidate" our table partitions — not by rolling them up with aggregations to reduce row-count, but rather by just copying all the data into fewer, larger partitions — so that we can take fewer access-shared locks per query, so that transactions can spend less time waiting on a handle into the global locks table at startup just to write these O(MN) read locks into it. (In other words, we consolidate data to reduce O(MN) read locks per transaction, to O(N log M) read locks per transaction.)

But that's a PG problem, not a resource problem per se. We've been considering writing a patch for PG to allow relations to be marked as "immutable", where an "immutable" relation doesn't have locks tracked for it (or its indices) at all — but also can't be written to, re-indexed, or even dropped. We'd then just apply this setting to all of our historical partitions. (If you're curious: given its semantics, this property could be enabled for a relation at runtime; but would need an instance restart for disabling it to take effect, as the instance would have no idea what read-locks "would be being held if the table would have been tracking locks", and so couldn't safely do any writes to the table without a barrier that drops all existing MVCC read-states — i.e. an instance restart.) As a bonus, such relations could also have "perfect statistics" calculated for them by ANALYZE; and those stats could then be marked as never needing to be re-calculated — allowing both VACUUM and ANALYZE to be skipped for that relation forevermore.


I don't think this access pattern is generally applicable to social networks, and ditto for your operational considerations.

"Four shards" and "no replicas" is a complete non-starter for a popular social network. That low shard count means a hardware failure on a shard master results in downtime for 25% of your users. Putting ~400 billion rows on a single shard also means backups, restores, schema changes, db cloning, etc all can take a tremendous amount of time. No read replicas means you can't geographically distribute your reads to be closer to users.

I'm not sure why you're assuming that social networks aren't 'very careful in designing for not just "scalability" but economies of scale'. I can assure you, from first-hand experience as both a former member of Facebook's database team and the founding member of Tumblr's database team, that you have a lot of incorrect assumptions here about how social networks design and plan their databases.

I can't comment on any of the PG-specific stuff as I don't work with PG.

Append-only design is not usable today for social networks / user-generated content. That approach is literally illegal in the EU. In years past, afaik Twitter operated this way but I was always skeptical of the cost/benefit math for their approach.

The "200 expensive beefy database servers" I was describing were relative to 2012. By modern standards, they were each a tiny fraction of the size of the hardware you just described. That server count also included replicas, both read-replicas and HA/standby ones (the latter because we did not use online cloning at the time).

I haven't said anything about event sourcing as it generally is not used by social networks, except maybe LinkedIn.


> Instead of, say, a feed constantly rebuilt to drive engagement and rage-bait, you'd just get a simple chronological feed of what everyone you're following is posting; with maybe the ability to dial down the number of posts you see from any given person you're following ("show me only the most-liked Nth percentile of posts from this user") without unfollowing. But even that kind of filtering — in fact, even the merging of followed users' feeds! — could all be done client-side. The whole "feeds" part could be as simple as a post-triggered static-site-generator pushing JSON into an object-storage bucket hiding behind an edge cache.

Modern Usenet :-)


Ah Usenet, that bastion of civil, intellectual discourse where the most brilliant minds of the day mingled in the rarefied air of their own fart clouds.


That's basically Mastodon, which isn't free. Plenty of small instances that try the donation model or that are just funded out of pocket go under.

And I may be wrong but I don't think the recommendation algorithms and other such features take up as much of the cost as you're claiming. I think a lot of the cost of something like Facebook is probably taken up by infrastructure and storage. Recommendation algorithms probably aren't that expensive.


The algorithms themselves aren’t expensive, no.

Having entire extra copies of the relationship-plus-posts graph, denormalized in various ways (incl. in ways that inherently prevent use of easily map-reducible algorithms, and so require heavy vertical scaling) such that you can run the algorithms, is what’s expensive.

And constantly feeding the data into those denormalized models, using specially-tuned realtime ETL technologies that themselves do distributed scaling to ensure no infinite queue backlogging from activity bursts, is also expensive.


In general I would argue that it is bloat that makes it not feasible to fund as a nonprofit, although people may have different ideas about where / what that bloat is. Recommendation algorithms, or unnecessary product changes, etc.

I don’t think it’s true that it’s intrinsically impossible for a public service to be self funding, and I think that not everything has to grow / change forever to remain relevant.

We need to figure this kind of stuff out. I mean, Wikipedia is nice and all, but it's really bad that humanity in general has to rely on megacorps for things as basic as maps while we say we're living in the Information Age.


Do keep in mind that Mastodon is built on a tech stack that's mainly known for not being very efficient, since it solved the scaling question with the answer "throw more computers at it". (In other words, it's a Rails app.) It's not very suitable for a free social media network since it's designed in a way that encourages large silos, because that's the only way Rails scales from a financial perspective; you need more from a smaller core of users as opposed to having every user pay a tiny amount.

There's other AP implementations that aren't a constant server hog like Mastodon is and can run on much weaker hardware (some of it can run on a raspberry pi). You don't need a full rails stack if your user count never exceeds 100 (which y'know, is the ideal state of AP - small communities who can remotely interact with each other).


Someone ought to rewrite it in something less embarrassing given you only need to be protocol compatible.


Multiple people have written multiple compatible alternatives that are lighter weight. Pleroma and its forks (Akkoma looked good last I checked) are popular for single-user servers.


The second system will always be playing catchup to Mastodon features or they will fork in features, meaning clients will have to support both or take sides.


I think this reflects very much the Silicon Valley way of thinking about internet governance, in which there seem to exist only two imaginable forms of it: Either anarchy, in which there are no rules at all and the only limits are technological, with full-on tragedy of the commons unfolding - or oligarchy/corpocracy in which whoever is the most powerful private actor gets to make the rules and then of course gets to make them in their own interest, not the common interest.

Didn't we have some more forms of government available for discussion?

E.g., after the xz backdoor, I read a call of OSS maintainers that critical OSS projects should be state-funded as they literally comprise critical infrastructure. Why couldn't we do the same with social networks?


>I read a call of OSS maintainers that critical OSS projects should be state-funded as they literally comprise critical infrastructure. Why couldn't we do the same with social networks?

If the goal is to avoid "enshittification" I don't think the solution is to have governments control social media and then have those platforms be subject to bureaucracy, decency and profanity laws, surveillance and propaganda (more so than currently) and the fickle wrath of the taxpaying voter. Realize how little critical infrastructure actually gets funded, and then add the psychotic dog-water paranoia of half the US thinking that infrastructure is a psyop by communists and "groomers" to turn their kids gay, and making that an issue in the polls.

PBS is probably the closest analogue to government-run social media I can think of. Half the government and their constituents consider it "liberal propaganda" and want to defund it entirely, and it constantly has to go begging hat in hand just to stay afloat, and then pursue commercialism to make up for the deficit.

You think social media is bad now? Imagine if you're required by law to sign up with proof of citizenship and your SSN is your password. And every platform is constantly putting Wikipedia style donation popups. And it's a misdemeanor to post swear words or material deemed "inappropriate."


Another drain on resources is tackling adversarial bots. It's difficult to strike a balance between enough good features to make your platform likeable and useful for users while maintaining security against bad actors who'll find clever ways to exploit vulnerabilities.


Depends on how you want to define social network, but Signal has a stories feature and is paid for thusly.


the problem isn't making money, the problem is chasing continuous growth and ever increasing profits


Which is why I expect to see a government-run social media platform in the next few years.

Functionally, the “public good” part of social media networks is almost certainly better served by a single organization.

However, the "freedom of speech and ideas" part runs in horror at this idea (rightfully so).

The best middle ground concept I’ve heard is to contrast the current state of the web with libraries.

NB: Enshittification is going to become a term like “Fake news”, completely divorced from its original roots.


> Which is why I expect to see a government-run social media platform in the next few years

I would expect government run social media to be especially enshittified, honestly

Just for different reasons


Yup :D.

There are countries which CAN make it work, but man, a central nervous system co-opted by oligarchs, tyrants or other worst case scenarios, would be the outcome for most of the world.


I don't think decentralisation is the solution, because the problem is as much the lack of central authority as the presence of it. Enshittification happens fastest on platforms that are run by committees, who know they need revenue so take the path of least resistance, without a single clear owner who can resist it. Look at Google's decline since its founders left, and compare it to Facebook, which - say what you like about it - is much the same user experience it always was. (Hell, look at MySpace for an even more dramatic downturn than Google.)

Social media sites that are still founder-owned or have strong individual leaders can continue fine (consider e.g. Dreamwidth). Though I guess whether you can sustain that past one person's lifetime is another question.


This comment discusses only centralized services. Decentralized services run on protocols such that no one service provider or software project can dictate the experience for all users.

ActivityPub, used by Mastodon, Misskey, Lemmy, and Pixelfed among others, is an example of such a protocol. BlueSky's ATProto is another, though it's at an earlier stage without mature third-party implementations and service providers. Email, too, is decentralized, though it may serve as a cautionary tale; spam, attempts to block spam, and feature stagnation have all degraded the user experience considerably.


> Decentralized services run on protocols such that no one service provider or software project can dictate the experience for all users.

In theory, certainly.

In practice, one instance dominates and all the other instances have to censor accordingly or die.


> Decentralized services run on protocols such that no one service provider or software project can dictate the experience for all users.

And? I guess that somewhat hinders enshittification just by making it hard for the platform to ever evolve at all. But the cure is worse than the disease, you can't ever build something new that way nor can you really improve something that has any level of traction. Look at how IRC users revolt when you try to fix even the most glaring problems.


Mastodon, Misskey, Lemmy, and Pixelfed are all very different, with different feature sets, supporting the same basic protocols, and they're evidence that you can build something new on distributed protocols.

ActivityPub dictates how to mediate relationships between activities on collections of objects, plus a few default object types. It's specifically designed to let you "subclass" objects (not really inheritance, more composition, since you give a list of types with no enforced hierarchy), so you can create new object types nobody else understands but still give them another type that lets other implementations carry out basic operations on them.
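
For illustration, here's a minimal sketch of what that composition can look like on the wire. The "Example:Recipe" type and its extra field are made up for this comment (not from any real vocabulary), and a real extension would also declare its terms in @context; a Python dict stands in for the JSON:

    # Hypothetical ActivityStreams-style object with a composite type list.
    # Servers that don't understand "Example:Recipe" can still fall back
    # to treating it as a plain Note.
    recipe_note = {
        "@context": "https://www.w3.org/ns/activitystreams",
        "id": "https://social.example/objects/123",
        "type": ["Note", "Example:Recipe"],
        "attributedTo": "https://social.example/users/alice",
        "content": "Grandma's bread, four ingredients.",
        "Example:ingredients": ["flour", "water", "salt", "yeast"],
    }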

It's not perfect but it's far from as dire as you make out.


So far the situation with ActivityPub is that the protocol is flexible enough to allow very different feature sets and user experiences. The most popular so far are twitter-like and reddit-like, with multiple implementations of each. I don't think ActivityPub was designed with the reddit-like use case in mind, yet it works well for that. There's no user revolt because the creation of new software with new experiences doesn't have much impact on users of existing software.

Enshittification is hindered not because nobody could create a Mastodon fork (or green-field project speaking the same protocol) that's riddled with ads, but because people can select a different service provider and still access the same network.


decentralization just multiplies the problem. instead of a gargantuan overlord, a million fiefdoms.

it's no wonder that the fediverse is most active on the fringes, especially outside of tech bubbles who use it because they like the idea


I think this is part of what ATProto is trying to solve by decoupling identity provider, data provider, and labeling.

That's not to say a million fiefdoms is necessarily a bad approach. A small server where all the members know each other is much more likely to be run in a way that's satisfactory to all its members than a big one. Furthermore, users have the option to maintain multiple accounts.


Compare to Craigslist, or for a period of five or ten years after its inception, Google Search.


It's inevitable as long as the model is "don't charge the users directly, figure out some other way to make money instead."

Enshittification is not a necessary consequence of a sustainable business. But I think social media platforms are susceptible to forces that pull them beyond that.


define sustainable - I mean, do non-growing businesses even seem to be allowed to exist these days?


Yes, they are. Not all forms of incorporation are like the publicly traded joint-stock corporation.

Just a regular story of capitalism and platform enshittification.

Whenever a platform is owned by shareholders who then need to extract rents from the ecosystem, this will happen. Whether it’s couchsurfing or twitter.

Expect it to happen to Reddit etc.

There is a direct line from the profit motive to platforms becoming enshittified, promoting the most outrageous content and making people emotional and angry.

The AI is just another level of appropriating human work. Whether it’s google’s disruption of publishers through AI-generated answers, or OpenAI training on artists’ work.


Is "profit motive" that different than "survival motive"?

These platforms need money to survive. Automattic, the latest owner of Tumblr, wrote a great post on all the things they've tried and how Tumblr is still losing $20MM a year, IIRC.


Even if they make money, the next quarter must always be better than the previous one.

At a certain point, people seem to start looking at self destructive options to make that happen.


It is called inflation. If the next quarter isn't looking better then you are doing worse.


Inflation is one thing. Rent extraction from the ecosystem isn't done only because of inflation, but because investors want profits from capital appreciation. That's one of the failure modes of capitalism. Sure, it works well in the early stages (providing capital to promising startups), but there are diminishing returns and ultimately huge negative externalities to this model of perpetual stock ownership. Whether by pension funds or the public, the incentives are just toxic.


Automattic is not publicly traded.


A lot of social media platforms lose money because they decide they want to grow above and beyond, even after being well established. Same thing for other software products that are perfectly fine, even loved.


That’s only in the capitalist system.

Wordpress by that same Automattic doesn’t need money to survive, in the sense of money going to one large corporation. Anyone can self-host their own copy of wordpress, buy plugins etc.

If you want to know more about how to monetize digital content without a trusted central actor, we are working on a Web2 version of that ecosystem btw: https://qbix.com/ecosystem

Also, science and wikipedia and openstreetmap are examples of open gift economies.


I persevered with your website because I am very interested in the ideas, but I have to say it was very hard going. Thousands of words about abstract concepts could easily be reduced to a few hundred or spread across multiple pages. The only comprehensive list of services actually on offer is a PDF? The videos describing the merits of these decentralised services are accessed through image links to YouTube! There's lots of low hanging fruit to improve usability for those less patient than me in my humble opinion.


That is one page on the website, the rest of it is a lot more friendly (https://qbix.com/communities or https://qbix.com/invest for instance).

Sorry the experimental stuff is not slick enough for you yet, we don’t have the resources of Facebook or even Automattic. We worked very hard for 12 years on the foundations at https://github.com/Qbix/Platform but I am sure you can find many faults there. (I’d like to hear about them btw.)

On the other hand, many other projects like the E programming language, Cap'n Proto, Linux, etc. are also very complex and did not have fantastic and slick documentation; first adopters also had to read some words in order to get it.

This is an open source project. You are welcome to reduce the words and make a summary. Perhaps when we start marketing to a broad audience, we’ll reduce it to 10 word slides and sound bites, or jingles.

Until then you can try it yourself, the documentation is at https://community.qbix.com and technical documentation is at https://qbix.com/platform/guide


I see a lot of information about how to distribute content and revenues, but I don't see how you ensure that enough revenue comes in.

Wordpress' Tumblr problem is that costs exceed revenue, not distribution.

Note that "decentralized" doesn't reduce total costs - it just spreads them out. Server costs do not go down with the number of owners.


Tumblr: bought for $1.1B in 2013, sold for $3M in 2019. Now losing $30M a year.


DeviantArt was never perfect but it really is a wasteland now. Regardless of where you stand on the ethics of genAI the results in practice are just boring, the feeds are an endless stream of the same handful of prompt templates, and the volume of AI posts is so enormous that it drowns out anything else a hundred to one. Manual curation of good posts eventually hits a breaking point when the volume of white noise posts becomes so unbearable that the curators just give up and leave.

Even categories that are supposed to be for specific mediums where AI shouldn't be applicable are full of it regardless - just now I scanned the Photography section and almost immediately spotted a conspicuously three-fingered woman. Posts made using AI are supposed to be tagged as such so users can opt out of seeing them, but that "photo" isn't tagged, nor is anything else on the uploader's profile, despite all of it being blatant AI.

You could almost turn it into a game - pick a random category and see how far you have to scroll before you see anything at all that doesn't scream "babbies first copy-pasted MidJourney prompt".


It's everywhere. Google image results are already becoming heavily polluted with AI art (try searching something like "unicorn" for example). Someone posted a cool site here the other day that was a sort of automatically-generated encyclopedia, except that since it was automatically grabbing images, most of the examples of historical art styles had ended up being modern AI art instead. That wouldn't have been the case at all even two years ago, it's a bit scary.


Pro tip: You can add "before:2023" to your image search, e.g. "unicorn before:2023".


Woah, never knew about that one. It works wonders!


Thanks, that actually does an extremely good job on the "unicorn" search of filtering out all the AI unicorns.


It really needs to be said that AI "artists" have confused productivity with quality. I actually don't go to DeviantArt to see your AI-generated garbage. I care more about people who are willing to do interesting things with their medium, even if it takes a lot longer.


Yeah, it's almost comical the degree to which quantity has become emphasized over quality. More than a few times I've clicked through to an AI poster's profile out of morbid curiosity and seen that they have thousands or even tens of thousands of uploads despite being active for less than a year or so. Even with the supposed productivity boosts that AI brings, you can't convince me that someone posting 20+ pieces every single day like clockwork is putting any real consideration into them, but the magic of AI is that something with little time and zero thought put into it can still be superficially passable.


I am a bad artist but I do make art, and have been trying to make art over the past year or so. I've made less than 200 pieces over that time but I can still go back to work that I've spent hours on and remember the decisions I've made and the specific works that have helped me grow or that I'm proud of. Do you think AI artists remember the work they've produced?


They will not remember the prompt that created them, but by default AI art encodes that information into the image metadata.
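
If you're curious, here's a minimal sketch of reading that back out with Pillow, assuming the generator actually wrote its prompt/settings into the PNG's text chunks (many Stable Diffusion front ends reportedly use a "parameters" key; not all tools do this, and re-saving the image usually strips it):

    # Assumes Pillow is installed; "generation.png" is a hypothetical
    # AI-generated file. For PNGs, .info exposes the text chunks where
    # prompt/settings metadata is typically embedded.
    from PIL import Image

    img = Image.open("generation.png")
    for key, value in img.info.items():
        print(f"{key}: {value}")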


Not the same thing, though. Reproduction isn't the purpose of considering artistic decisions you've made -- it's to reflect on and refine your eye, your ability to communicate things visually, and your trajectory as an artist. That capability is entirely separate from the medium you use, or the source of your work. Indeed, nothing in AI image generation makes that less possible than in physical media, but metadata has a very different use case. The entire point is that you need to pay attention to your output enough to parse the extreme subtleties, and posting dozens of pieces per day negates that.


> Posts made using AI are supposed to be tagged as such so users can opt-out of seeing them

While this is the official line I'm fairly certain the real reason for stuff like this is to prevent AI models consuming their own output.


Probably. Either way, they are doing a piss-poor job of enforcing the rule, so everyone loses. The people who don't want to see AI posts see them anyway, and the AI models will end up Habsburging themselves.


There was always uneven application of the rules based on certain staff members' whims, and that, of course, carries its own law of unintended consequences.

Now, with Wix in charge and a handful of the roachier staff left (I'll name names: Realitysquared), the site has negative bupkes chance of content moderation/curation worth a wet damn.


I worked at DA in a volunteer capacity (Gallery Director mostly) from 2004-2008 and have been on the site since 2002.

I actually started writing a long screed about this on my own blog last year during the AI debacle, but shelved it (now I feel compelled to return to it).

DA had plenty of small problems and a few big ones, exacerbated by incredibly green leadership at the top. It was pretty much run off the whims of the CEO, who even when he had good ideas, had no execution focus, and often outright ignored capable feedback from his lieutenants.

Things like having an app, an API, or a GTM strategy that was remotely baked for feature releases were simply non-existent at BEST, and handled in the most haphazard way more typically.

DA eventually shredded its own community, exhausting most of its most dedicated members, and ignored acquisition offers that made much more fiscal and practical sense than the pittance it eventually went for to that trashfire place called Wix, who in all likelihood will eventually sell off bits for scrap.

DA was a true gem for a decade, and then fell in spite of that due to categorical poor management and vision (or the execution thereof).


The engineering team was really great. Perhaps that helped the site persevere for as long as it did. I credit $randomduck for a lot of the site's technical success. He is a top-notch engineer and a great tech manager.


The tech side, while having (IMHO) some quirks, did really well under the poor leadership. Chris Bolt was walking glue and baling wire, and I was always impressed at what he could keep going under a shoestring budget. Ryan Ford was a great UX guy.

So yeah, I have no disagreements with you there, but imagine what they could have done with a real product roadmap that didn't change on a whim?


A little off-topic. As an artist, I have a big problem. Look at 17th-18th century art. Nudity is not a cardinal sin. Today, in the era of OnlyFans and porn normalization, posting nude art gets you censured publicly. I cannot post anatomy tutorials on YouTube because some puritan soul will see things that fire the eternal rage. How is this democracy? How is this an advanced society? We have a serious issue with the alignment of society's values.


You are free to purchase a domain from a registrar, set up a server or purchase hosting and bandwidth, and display any legal content, including nudity, on the internet for all to see.

>How is this democracy?

If you are in a democratic country, you can vote for representatives, or run for office yourself. Has nothing to do with you being able to host something on someone else's computer.

>How is this an advanced society?

Because you can upload functionally limitless digital content to someone else's computer and someone else halfway around the world can almost instantly view it on a handheld device. Obviously restricted to the rules of the entity paying to host said content.


> You are free to purchase a domain from a registrar, set up a server or purchase hosting and bandwidth, and display any legal content, including nudity, on the internet for all to see.

This retort always seems kind of trite and disingenuous.

First of all, no, they are not free to do these things unless they are billionaires with capital to spend. "Just start your own business" is a silly dismissal because the vast majority of people cannot start such a business. Second, it doesn't even address the original complaint. "I think Company X should do ABC." Suggesting they start a different company to do ABC doesn't really address what OP calls for: Company X changing their ways.


It doesn't even take $100/mo like the other commenter says. It takes about $10/mo to have a website that would be completely capable of displaying "art" to however many people you can convince to look at it. It's simple too.

It's just that the network effect of megacorp social media means you won't actually get many random arbitrary visitors and you won't have a way to monetize easily.

So if you just care about art and not about monetizing your art, there's no problem. Should be good. If your goal is only money, then yes, you have to stick to the marketplaces.


They do not have to start a company to share their nude art or whatever they want. And they don’t have to be billionaires. $100 per month is plenty to cover bandwidth and hosting expenses.

There has never been a cheaper and quicker way to share one’s art in the history of humanity.


Let's clarify some things. I am fully capable technically: VPS, Next.js site, Stripe, and so on. My complaint is cultural first, megacorp censorship second. Read the part about 17th-18th century nude art. Think about society in those times. Compare it with today.

Yes, I agree with the cultural changes.

So you solved the issue? Right?


What happened is American puritanism. Some places are a lot more "chill" about this stuff. I remember visiting Hungary in the late 90s and seeing a huge billboard ad for toothpaste featuring a fully topless woman. Unfortunately the US is the center of power for the internet, so much of the internet reflects American puritanical norms. Just take a look at Apple's restrictive policies on the subject.


Yeah, this is a very American value.

I remember being at the gym in England in the middle of the day with my American wife and the TVs were showing a BBC show about body positivity which basically involved full frontal nudity of all the participants. She was very surprised. To me, it was just Tuesday.

Another time in Switzerland I was watching an evening game show and the host inexplicably undressed. Here's a pic seconds before the real action: https://imgur.com/a/n3iLhps

Europe isn't for beginners lol


Europeans were still having uproars over paintings of fully clothed women in the 19th century: https://en.m.wikipedia.org/wiki/Portrait_of_Madame_X

The reason you can't post an anatomy tutorial is unrelated though. That's because YouTube is cheap. They don't want to spend the effort to try to differentiate an art tutorial from more problematic content.


I'm pretty sure DeviantArt stopped being about actual "art" long before AI. Years ago I deleted my account because I didn't want to see my posts next to furry scribbles, softcore porn, and other low-skill fetish adjacent crap.

Professional digital artists post at Artstation now from what I can tell.


ArtStation is also filling up with AI crap, though not to the same extent as dA quite yet. It's most obvious in the marketplace which is now full of paid "reference packs" that consist entirely of AI generated images, which rather defeats the point of using a reference.

There's still people on there selling good reference material but now they're buried between a few dozen variants of 6000+ WIZARD CHARACTERS IN 4K!! that someone churned out with Stable Diffusion in an afternoon.


I use several sites like ArtStation, Pixiv and aggregators to download new wallpapers for my devices.

It used to be an entertaining experience, seeing all the kinds of beautiful art people create... But as you said, it is getting filled with AI crap. I noticed the same thing with Pixiv: even if I opt out of AI art, new results for several popular tags are filled with badly tagged AI images.

Now the experience feels tiresome; in some categories I waste too much time filtering between real and AI art. I've started searching for things produced before 2022-2023 to avoid AI.

This "art" should be uploaded to its own subdomain and never get mixed up with real art.


Ugh.

You sound like someone who missed the word deviant in that website's name.

Care to share a link to the art you make?

Really curious about what kind of sophistication you're producing that would be tarnished by being seen next to furry scribbles.


"Deviant" and "good" aren't mutually exclusive. Just wanted to clear that up for you.


>"Deviant" and "good" aren't mutually exclusive. Just wanted to clear that up for you.

Great. Give yourself a pat on the back for that one.

Now that you've had your smug moment, note that the parent comment wasn't concerned with the quality of content, but the category.

"Furry smut isn't art", brought to you by "Rap isn't music" people.


I encourage you to go read the comment again and consider that maybe you've misunderstood the intent, possibly due to confirmation bias (maybe you expected to encounter comments about furry art not being art, and you saw a comment containing the word "furry" in a negative context and mentally replaced it with a generic cut-out of an anti-furry sentiment). Makes me wonder if you're a chatbot, actually.

Yes, it's weird to get outraged about furry porn being AI-generated. Who gives a hoot?


[flagged]


Adult content isn’t necessarily low skill. It’s just that DA was full of low skilled adult content.

And fringe stuff, like pregnancy fetish art or inflation fetish art.


That's my experience with the site from a decade ago. I remember seeing really tasteful and well-done sexy photography, but the amount of very terrible cartoon porn was overwhelming, weird fetishes aside. It all had the characteristic look of drawings by beginners with no taste whatsoever: overuse of gradients, uncanny faces, colored hair or facial piercings. I don't know how to explain it, but if you have spent 5 minutes on that site, you have certainly seen what terrible cartoon porn drawn by horny teenagers looks like.


I watch porn but my observation is that most hentai is barely/badly disguised child porn / pedophilia.

This is something that put me off on some Mastodon servers: there are a lot of people posting not hentai but drawings of underage-looking girls, either in creepy sexual postures or with exaggerated female attributes (chest sizes or waist-to-ass ratios).


I don't think you understand where he's coming from; DA literally became filled with My Little Pony porn. I'm as open-minded as can be, but that's too much. I also left for the same reasons.


I watch porn but I don't want porn on every site I use. Skill has nothing to do with it; it's that adult content is generally meant to provoke a different response than "awe".

Some fringe content is downright disgusting and so far away from art that some people should put their pencils down and go to therapy.


Wanting to avoid pornographic content on platforms which do not cater exclusively to porn is a valid goal, but often that is just a checkbox away.

Something can be both disgusting and artistic at the same time (plenty of lauded examples of modern art by established artists can be quite disgusting), and dismissing erotic content as unartistic just because it is erotic content seems silly.


Isn't the real problem here not DeviantArt so much as people making low-effort botnets to trick DeviantArt into paying them?

It's a spam problem, only worse, because they're actively paying the spammers.

Even Spotify has this problem. All too often I'm getting recommended crappy remixes "slowed and reverbed" or "sped up". Just recently I got some guy's crappy techno with the artist field spammed with completely unrelated bands I follow. Of course, when I tried to report this, Spotify only cared if the guy was selling bootleg merchandise.

The whole thing made me click, "hide artist" and "hide song" for the very first time.


It's not Spotify's fault that bots exist, but it is their fault that they don't care.

DeviantArt built a system that pretends bots are not a problem on the Internet...and gave them a profit motive.


For the nostalgic readers of this thread, here's the website in 2000:

https://www.webdesignmuseum.org/web-design-history/deviantar...

And 2012, which I consider peak DeviantArt:

https://www.webdesignmuseum.org/gallery/deviantart-in-2012


Wow, thank you for sharing this. I miss the old 00's era of maximalist web design.


the 2012 DA is so comfy.

oh well, it's not 2012 anymore, let the past die.


The bot buying and selling network reminds me of the time KeyBase tried doing a giveaway of their crypto coin.

You needed a GitHub account and a KeyBase account. So people created as many accounts as their bot networks were capable of, and tried to get the crypto.

Thankfully KeyBase changed the requirements to include "account must be X weeks old".

Edited to add: I'm not sure if there's a way to prevent bots these days. Feels to me that we're lucky (more?) economic systems haven't been bled dry by bot networks.

I miss the promise of KeyBase. It felt like a real digital identity, but for whatever reasons it wasn't good enough to succeed.


the only thing I remember about keybase was when they did this crypto ‘air drop’ thing, and then a while later (months? years?) I realised I had this coin in my account and I sold it for like 70 Euros on some sketchy crypto marketplace. Can’t complain to be honest, no other startup so far has just handed me 70 Euros without asking to at least harvest my eye data..


> for whatever reasons it wasn't good enough to succeed

The reason, imo, was the acquisition by Zoom and apparent total abandonment of the project.


It’s just that bots haven’t been good enough yet. With the new LLM tech they can pretty much pass every hurdle you’ll throw at them. Even if you require people to show up in person, they’ll do that but then run a bot the rest of the time in their account.

I am sure that LLMs and bots will be able to fool many people on HN and run “rings” around dang’s ring detection software, in about 5 years. It’s a gameable metric, after all.

They were already able to do it on 4chan in 2020 with just GPT3! And the most impactful thing is users started accusing each other of being bots! It literally enshittified the whole forum overnight:

https://finance.yahoo.com/news/breaches-every-principle-huma...


And to be more exact, GPT-4chan is based on GPT-J (same architecture as GPT-3 whose weights were never released) which only had 6B params and that was back in 2021-2022.


There's a straightforward but costly way: tie it to something that costs money over the long term, e.g. utility bills, bank account statements, etc., for x number of years.

And manually confirm with the companies at random.


Tie what?

They'll just check in and then run bots in their account. Like a chess bot, for example.


How can they fake the bank or utility companies internal records?

To me KeyBase always felt like grifters trying to co-opt grassroots identity stuff. IIRC they were sort-of-but-not-really OpenPGP at the start, pushing people heavily towards a not-your-keys not-your-crypto setup, and then at some point they completely removed the ability to actually control your signing keys yourself.


Keybase always performed crypto on-device using their open source client written in Go. What not-your-keys not-your-crypto setup are you referring to?


I'll drop a bit of deviantArt history that you're unlikely to find in an article like this, but which probably contributed to dA's initial failure to sustain its place as the preeminent platform for sharing art online as social media rose to prominence: its banning of sexualized nudity in 2006, which led to the exodus of adult/adult-adjacent artists - and, particularly, furries.

That was the stumble that gave room for other platforms to grab pieces of its then-current and future user-base. Anyone can tell you that a very large portion of the money changing hands online for art (adult and not) is actually changing paws, so dA missed out on having a slice of that, whether through advertising or facilitating transactions. Worse, its reputation was tarnished among adjacent subcultures.

There have also been regular ToS panics every 2 or 3 years, where someone's (mis)interpretation of the licensing rights dA claimed for being able to modify and distribute artwork (i.e., make thumbnails and send images in daily update emails) caused users to swear off the site for fear of having their work "stolen". Add to that, quite a backlash against the recent site redesign (and the ones before it).

That is all to say, this really has been a long time coming. My account is nearing the 2-decade mark, but I haven't logged on more than a couple dozen times in the last half of that. There's just almost nothing there you can't find more easily or comfortably elsewhere.


This sounds like about the right time for DA’s major drop in relevance given that my Furaffinity account was made at the end of 2005.


It may have been sometime in 2005, now that you mention it. Recall's a little fuzzy (it HAS been 18+ years)... I was going off a particular memory of sneaking access to it and a certain specific queer-focused art platform during the summer of '06.


- alternative culture is born or agglomerates around an internet hub

- hub tries to monetize its audience, rightly so for the sustainability of the platform: server costs, management, moderation, etc.

- it takes just a few years for hub to become a corporation full of clueless corpos who have no idea about the initial culture and core audience.

- hub is bleached beyond recognition because corpos are scared of anything that is slightly controversial - including the original culture hub was about.

- "That's not family friendly! We also need those ESG and DEI labels to attract investors! The advertisers won't approve their brand associated with this!”

The current term for this is "enshittification", right?

- hub dies, culture might disperse, corpos get their golden parachutes and latch onto the next big project.

It seems to be a very common story. Reddit, DA, you can think of a lot of examples. It WILL happen with big Mastodon and BlueSky instances.

If you think about it, you gotta give 4chan credit for keeping a big chunk of its soul, whether you like the site's contents and its users or not.


I'm not sure it's happened to Reddit so far? They've managed to avoid the desexualization that has destroyed other platforms, and now their size might mean they can ride it out.

In the same way that Playboy can get away with posting sneaky things and Meta can only shadowban them and doesn't dare terminate them: https://www.instagram.com/p/C3-nnn9RM82/ (NSFW)


To be fair...

1) ESG/DEI stuff wasn't even a whisper on the wind when this stuff went down. It was still very much DADT, Jack Thompson lawsuits, elected-Bush-twice America. In other words, the complaints were mostly coming from the center/right (and Joe Lieberman, if you still considered him a liberal).

2) 4chan was in the "advantageous" position of never being able to attract major advertisers in the first place. And while a good bit of the old culture is still extant, the post-Trayvon-murder/GG/Trump-meme-magic era did a number on its userbase's ability to focus on the lulz instead of descending into conservative (if not just straight-up Nazi) rhetoric (and not for laughs).

But, yeah, the rest of this tracks. It's basically inevitable unless the site admins get to a point where they're happy with the userbase size/culture/whatever and decide there's no need for any more changes. Examples: Craigslist, SA (to a degree), FA (despite controversies and the recent UI change, which users can mercifully opt out of). In fact, I would say that unnecessary or large-scale UI changes are a good heuristic for determining when things are about to go downhill.


DA has been “declining, dying, or already dead” for the past decade.

Not saying it doesn’t suck but ALL websites have an AI spam problem. Twitter, Pixiv, ArtStation etc are no exception.


Ah, I remember deviantART. Besides being a gallery, it was my JavaScript (userJS) playground. I found ways to automate giving llamas (for free points and for receiving 10000 llamas in return), and discovered API paths to read hidden/deleted comments and journals (fixed now).

Since the aptly named "Eclipse" redesign it became terrible to use, so I stopped.

The lean into AI will just let it rot for a few more years than otherwise.


They embraced AI slop and now their site is a wasteland, the best they can hope for now is that like NFTs before them AI art becomes a useful route for money laundering.

It's a shame to see what it's become; the ascension of AI slop means a site like it will never be possible again unless some incredible filtering capabilities become available.


There's a brick-and-mortar retail problem with these dinosaur sites that no one wants to address.

In DeviantArt's glory days, 50% of the content was derivative fan art. Machines are pretty damn good at making things that already exist.

That's not a direct contributor to the demise of an image-sharing site, no matter how much DeviantArt dresses itself up as a Web 2.0-era hub. "It's like a virtual silk road specifically for artists all over the world!" - I wonder how long that can stand in the age of monolithic social media platforms, sites which are more than capable of serving what DeviantArt specializes in.


Deviantart was full of crooks 20 years ago. I was looking for a good logo artist. They all had these beautiful, well made logo portfolios that immediately made you salivate. One day I sent a few letters to some of these very good logo artists.

Sent over what I wanted, plus some initial payment... and then around a week later, they showed me some hideous, horrible, amateurish logos that had nothing in common with the ones promoted on their pages. Of course my money was taken; some even told me I couldn't do anything with them. And that was true.

Well, it took them a few decades to implode...


I suspect we’ll be seeing a lot more of this stuff, but DeviantArt has been a difficult place to navigate, for some time.

A number of years ago, someone posted an outstanding wireframe that I wanted to license. I was willing to pay quite well. I had just started a company, and the wireframe would have been quite useful in branding. I probably would have contracted the artist to do the rendering, as well.

I found it pretty much impossible to initiate contact with the artist. I think I got through, but I have no idea if I was successful, as they never got back to me.


It's easy to blame AI (and yes, AI images spammers are a problem on every platform). But DeviantArt was in a decline way before Stable Diffusion became a thing.

It was mostly social media, really.


I struggle to understand how you can mark work as "not for AI training"? Might as well mark it as "do not get inspired by this".

If I'm a painter and I look at a picture of a drawing, I'm going to catch some of the style, some of the color palette, some of everything. My next painting will inevitably have some influences from that drawing... and the other thousands I've seen. Why, just because it's now done in binary, is it any different?


The first issue I have with AI art is that it is self-destructive. It outcompetes and devalues artists to a point where it is virtually impossible to survive by having art as a job, but learning from artists is the only reason why it can exist in the first place. It is a machine that destroys the very thing that gives it life.

The second issue is that AI does not learn like humans do. Two different people looking at the same image will not be inspired the same way, because humans are inspired subjectively. There is a layer between what you see and what you learn, because everyone is wired differently, there is creativity involved in this process. AIs are not like this, they are not creative and there is no subjectiveness, it is built on plagiarism.

So yes, I agree on not treating binary differently. When a human learns from other artists and applies it without any creative twist, we call it plagiarism. When an AI does it, we give it a pat on the back and say it's just learning from what it sees. Why should AI be treated any different?


I remember reading an article about how DA built a lot of features that were later taken up by other social media like Facebook. Even the platform owner (named spyed, if I remember correctly) was a very humble and nice person; one time they made a feature for paid users and I made a sarcastic joke about how this wasn't fair, and he promised to bring it to free users and delivered. I think DA failed, unfortunately, because they are not greedy or evil the way other platforms are, and they focused on a niche audience of artists, and artists are mostly broke.


I have fond memories of the old dA and was a frequent visitor, but I haven't been there in probably 15 years now. You can either build a great online community that shares cool stuff or you can try to make a $$$-load of money, not both, because the latter destroys the former. Flickr is another example (another of my fondly remembered sites), Tumblr too. Once they sold, their new corporate masters were interested in $$$, and it was the beginning of the end.

I wasn't on dA in 2006, but the Eclipse redesign of the site to look more like Wix is what dropped my usage to 0, even before the NFT and GenAI shit. The old design looked "dated" - so what? It was more intuitive, feature-rich, customizable, and distinct. dA was Xing its brand recognition before it was cool.


You've just helped me realize that when it comes to ui these days "dated" basically means "useful".


Welcome to Google Universal Analytics going to Google Analytics 4.


Art teacher speaking. The influence of DeviantArt (and now ArtStation) on young artists is staggering, and not always good.

When I was a student, I got my eye candy from the school library, which provided context and history for the art. On these sites, most images are presented without much context: what was this artist influenced by? How did they develop? Etc.


You can see who they follow and what they've written right there on the site, and you can move freely between works via tags and collections. If anything I'd say it's much easier to get more context on DA than in a physical gallery.


There really aren't many major platforms left for artists at this point - DeviantArt and ArtStation have both dove down the AI hole, Twitter needs an account to view now, that leaves what, Pixiv and Tumblr?


The site in question to judge yourself https://www.deviantart.com/


I posted a lot of my Disco Diffusion AI work on DeviantArt back in 2022 (2 centuries ago, in AI years) (https://www.deviantart.com/holosomnia/art/Sea-of-Color-92572...) as an example. At that time the AI hate machine wasn't as pervasive, and it was very well received.

Only when the crowd effect of "AI=bad" came along, the pitchforks came out. As someone to whom AI art has brought incredible joy, it's very disheartening to see artists and the public straight up refuse to understand both the technology and the artistic potential -- the human side of AI art.


In the same way that I would hope you'd understand the problem with submitting 10,000 drop-shipped, factory-made bowls or sweaters to a handmade competition or exhibition, I hope that even if you appreciate AI art on its merits, you can understand the frustration and challenges of trying to keep it out of spaces intended for art created without AI.


The link links to 46 images. Not 10000.


Oh I wasn't talking about their work, I was talking about other AI "artists" that are poisoning the well for people like OP. I was trying to find some common ground between people like me, who are sick and tired of AI spam, and OP, who is sick and tired of reactionary responses to any attempts to incorporate AI into the creation of art.


One of the reasons I like computers is that they let me do things I couldn't do before even if others could without computers. I mean, think about the revolution in desktop publishing in the 1980s -- a person could use software to make a nice looking brochure or even a full book without any typesetting knowledge. And people were excited by it. They didn't say "You horrible person! You are trying to destroy the livelihoods of professional typesetters!", which I'd imagine would be the response if desktop publishing was invented now.


> a person could use software to make a nice looking brochure or even a full book without any typesetting knowledge

By using a template? Because based on my experience with InDesign and Affinity Publisher, you're still required to have knowledge about design and typesetting. They reduce the cost of getting started and working in the domain, but the knowledge requirement is still there. Same with digital drawing and photo retouching: you're no longer gate-kept by material costs. But AI is the equivalent of pressing X in a fighting game and then saying you can do MMA and are ready to go against UFC champions.


It ... really isn't. The one thing that an AI tool does do is create an image that looks like something right out of the gate. That might fool you into thinking it's easy.

However, getting the picture you want, consistently, is a little bit more work. You might need to involve many more tools, including some old ones (Blender for setting up, Photoshop or GIMP for post-processing), and some weird new ones (like, what the heck is a LoRA?).

It's like the one time I wrote an essay using LaTeX. Even when I was halfway done, it looked really well typeset and professional from the get-go, but of course half of the text was missing and still needed to be added.


You're describing art. Most people who make "AI art" are not making art: they're using the computer as their own personal content mill.

Well, "most people" might be inaccurate. By volume, the people causing AI-generated content to come into existence are overwhelmingly just cranking the handle to churn out content, and that volume overwhelms everything else to the extent that it appears to be "most people", but might only be a few hundred. In the time it takes you to produce one artwork, they've got ten thousand 4096×4096 squares.


> Only when the crowd effect of "AI=bad" came along

It only came along after a crowd of "AI artists" who are just spammers came to the platform. I mean every platform.


> I posted a lot of my Disco Diffusion AI work on DeviantArt back in 2022 [...] it was very well received.

> Only when the crowd effect of "AI=bad" came along, the pitchforks came out.

You don't see how you spamming "a lot of" work onto a site reduces the value of that site? What you call the "AI=bad crowd", others were calling the "anti-spam" crowd.


There are 46 images at the link provided. There are some recent ones, and some examples of early AI art that must have taken quite a bit of work to get right. Surprisingly, the style has remained fairly consistent over the two years.

I'm not sure how you would characterize this as spam.


You don't understand why the actual artists from which your AI "art" tools stole from to become viable, dislike these tools?

Really?



