I'm 53 and love Spotify and Reddit's 80s subreddits for music discovery.
I love finding new stuff (IDLES, Shame, and High Vis are recent faves) and finding old stuff that's new to me. I flew across the country to see The Cult play in LA under their old band name, Death Cult, and I never knew there was a terrific live album version of Dreamtime from back in the day.
Authors, artists, and others are suing projects whose models just inhaled their work and now generate derivative works, and I'm surprised the studios aren't getting more involved in going after AI models that included their properties.
I can see this going the route of us having a Disney AI, Sony AI, Discovery AI, Amazon AI, etc. where you can generate stuff using models owned by the studios, but only those studios and any public domain stuff they suck in too.
> I can see this going the route of us having a Disney AI, Sony AI, Discovery AI, Amazon AI, etc.
Exactly. These companies are probably hoping the artists win as many legal battles as they can, since the result will be that only big companies will be able to create useful AI models.
That’s for a fine-tune of an existing SD model that has already been trained on a mountain of data. Training from scratch requires a mountain of image data and a lot more compute, so you would need a 100% clean base model as well -- but then yes, it’s totally doable.
There are a couple existing pretrained SD models that use all CC0/public domain data. I think at this point they're still significantly lower quality than other popular models, but I'm sure that will improve over time.
What Getty's done I think is the most "artist" friendly version of this.
Presumably when photographers/artists submit their images to Getty, they hand over full ownership, or at least a pretty broad license. If Getty is the rights holder for these images, it can use them to train its own models.
In my mind, OpenAI/Midjourney/Stable Diffusion are the Napsters of generative AI. Adobe Firefly, and now Getty, are coming up with the iTunes Store/Spotify. For better or worse.
It's legal but not particularly friendly to hold someone to a prior agreement like "you can mostly do what you want with X" when what it's possible to do with X significantly changes.
Hey, at least Getty handed over some money as the basis for that claim. OpenAI et al. are leaning on "our scraper brought it back" to make their version of that exact claim.
I don't think there is any "quibble" involved in whether record labels are good or bad for artists. And claiming that the alternative is "torrenting" is really throwing in a huge strawman.
The alternatives to Spotify and the traditional record labels are places like Bandcamp, where artists get a far more significant portion of the money earned. Any industry where 99% of the money goes to the middleman or service provider rather than the creator is an industry in dire need of disruption.
My limited understanding is that Getty is at least as bad as the music industry, if not much worse. No doubt Getty will make millions of dollars from their AI image generator, while the artists may get a few cents a year if they are lucky.
edit: also, re: Spotify, as far as I'm aware they do carry small and independent labels. Getty is comparable to the big music labels, not to Spotify.
Anyone can, through a service such as DistroKid, publish their music on Spotify without a label. You can even enter your own label name, so Spotify very much seems label agnostic. Not sure how much the big labels end up paying them at the end of the day, though.
Very different -- Spotify significantly increased the amount of music people consume and the number of artists who can easily get compensated (listing on Spotify is much easier than releasing a CD).
In this case: Getty is going to be paying less overall (otherwise, why do it at all?), and they will pocket the extra margin.
> I can see this going the route of us having a Disney AI, Sony AI, Discovery AI, Amazon AI, etc. where you can generate stuff using models owned by the studios, but only those studios and any public domain stuff they suck in too.
As entertaining as it would be to have a model where you type "photo of an astronaut riding a horse" and it defaults to Buzz Lightyear riding the horse from Tangled, that's not really what people use these models for in the main.
You don't actually want derivative works. The interesting training data isn't Hollywood movies, it's all the junk people post on social media. What you want is thousands of generic pictures of astronauts and thousands of generic pictures of horses. Pictures of Tom Hanks as Jim Lovell aren't any better (and may actually be worse) than actual public domain photos from NASA.
> I'm surprised the studios aren't getting more involved with going after AI models that included their properties in them.
If you're the first group to sue you have to spend millions extra on lawyers to establish the precedent. I don't see much reason for Disney and friends to rush to be first.
The current state of AI isn't really a threat to Disney and friends, just suggestive of a future threat, so there's no need to rush on that account either. Especially since, if they win the lawsuit, they'll still benefit from all the R&D on neural networks that is happening on other people's dime right now.
And all in, do studios stand to lose more or gain more to AI? Drastically cutting their costs might be worth some extra competition.
It's a lose-lose for the studios. If they win on the IP side, all of the generative content of the future will exclude their IP. If they lose, then they don't get paid for it, or worse they have to pay for it.
Most people seem to have a difficult time grasping that if the model is trained in such a way that Disney's IP is excluded, the model doesn't know Disney even exists. Consider if every Disney website was excluded from Google and every Disney related trademark was blacklisted.
As of mid-2023 it is very clear personal assistants are going to replace traditional web search. In my use case, they've replaced it 100%, excluding searches for a single website or product name. Will these generative assistants -- which must be capable of both processing and generating images -- know the rights holder's IP even exists? The idea that a useful LLM is going to be trained on 100% public domain, copyright-free data is absurd.
IP owners are suggesting that future personal assistants would need to pay them for the knowledge of even the mere existence of their IP. That would be like if every website indexed by Google and every trademarked keyword required Google to pay the copyright owner per search. That isn't possible. To the contrary, the reverse occurs.
What if the future is in fact the opposite of what IP owners are now suggesting? Nike pays the generative AI company for Nike shoes to appear in the one-off movie generated for a single viewer based on their personal preferences?
The world is drowning in IP: text, video, music. The future will be a deluge.
The other angle on ‘Corporate AI’ is when we’ll start to see product placement and adverts inside generated content. Create an image of coffee, and you’ll find Starbucks logos everywhere. Ask an LLM about a topic and see it work in an advert about a particular brand of beer. I’m sure people are working on this already, but I really hope it never happens.
Oh god, yeah, that sounds horrible. If this happens, people will just hold onto the endless Stable Diffusion base models we've got now and never update anything.
Just like how every time streaming gets more convoluted more people turn to torrenting. These companies manage to find ways to hurt themselves while thinking they're forcing people into paying them more.
Doesn't seem like it's hurting Netflix/Disney all that much. If I had 100 million subscribers paying me $100/year, I'm pretty sure I'd be crying into my caviar.
Admittedly, they're both nonsensically incompetent at times, and if you believe Disney's financials (do you believe any Hollywood financials?), they somehow manage to lose money on $10,000,000,000/year. I'm still not sure how you make the equivalent of tens of thousands of TV episodes' worth of costs, with Disney's back catalogue and vault, and then lose money.
+1 - large scale owners of IP should all be working towards this, it just makes sense. This is also a big point of contention with the Writers and Actors guilds, they understand this dynamic very clearly.
I've been saying all year that LexisNexis absolutely HAS to be building a training corpus. It was a big deal when that dumb lawyer in TX filed a brief with bogus citations, but what if you had an LLM that was specifically trained on legal filings, understood rules like "If you cite a case, you MUST include a reference to that case and it must be valid in our system"?
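As a toy illustration of a "cited cases must exist in our system" rule, a post-generation validator could cross-check every citation a model emits against a reference database. Everything here is hypothetical (the regex covers only a few US reporter formats, and `known_citations` stands in for a real citation index):

```python
import re

# Toy pattern for US case citations like "410 U.S. 113" (volume, reporter, page).
CITATION_RE = re.compile(r"\b(\d{1,4})\s+(U\.S\.|F\.2d|F\.3d|S\.\s?Ct\.)\s+(\d{1,4})\b")

def validate_citations(brief_text, known_citations):
    """Return citations in the draft that are NOT present in the reference system."""
    found = {" ".join(m.groups()) for m in CITATION_RE.finditer(brief_text)}
    return sorted(found - known_citations)

draft = "As held in 410 U.S. 113 and 999 F.3d 1, the motion should be denied."
known = {"410 U.S. 113"}
print(validate_citations(draft, known))  # ['999 F.3d 1'] -- a bogus citation to flag
```

A real system would resolve citations against the actual case database rather than string-match, but the gating idea is the same: refuse to emit a brief until this list is empty.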
Yeah... but the manufacturing processes for those chips are long, long gone. Nobody's building fabs for CPUs like the MOS 6507, which has only 3,218 transistors in total (seriously). That CPU hasn't been in production since 1992, so any replacement would be a complete redesign, or an FPGA packaged inside to simulate that part, which would no doubt cause its own griping. It would also be pointless, because emulating a 6507 cycle-accurately is pretty easy nowadays. Same goes for the other chips. If you can emulate with perfect accuracy because the tech was so primitive back then, what's the point of making what will still be a redesign?
At that point, just build a MiSTer (FPGA-based reimplementation), then get or build an arcade cabinet adapter to have your MiSTer provide the brains.
> that does not actually work with original hardware
Well, if you were to populate it with the chips from your old board, it would work. So if it was just your PCB that was cooked, it's not a terrible option as a replacement part. Just having the option to get such a brand-new ancient part is valuable, regardless of cost.
On the bright side, I could spend $500 more on Mac Pro wheels.
I definitely wouldn't trust it if they're not certifying that they work and providing the exact revision of the board. Even boards that appear to have the same layout may be different - it was common to screw up a trace layout on a first version and need to manually solder bodge wires during assembly.
They are providing the specific revisions:
Lunar Lander, Warlords (Rev D), Black Widow (Rev A), Gravitar (Rev C), and Major Havoc (Rev D).
And their site also states "These boards use the original bill of materials, follow the original schematics, and can be used to replace damaged original boards by using the original parts from these boards."
> Nobody's building fabs for CPUs like the MOS 6507 which only have 3,218 transistors in total (seriously).
But you can still buy 6502 compatibles built on other technologies and processes, right? (Not sure if you can get them in DIP but still.) Differences in voltage levels are bound to cause changes, but jumping straight to emulation seems excessive (even if it’s probably actually easier).
That exists, but for the 6502. The 6507 wasn't used on arcade boards, just in the 2600 home console. (Of course, a 6502 could easily be adapted to serve as a 6507.)
This is a link to some cycle-accurate emulated 6502 units, implemented on both ARM and FPGA.
The 6507 is just a cut-down 6502 without all the address pins broken out. You could fit a 40-pin chip on an adaptor board hovering above the PCB on an Atari 2600, for example.
You may need to do some Grievous Bodily Metalwork to make the screening can fit.
The WDC 65C02 is a current product which you can buy for a couple of dollars direct from the foundry.
Most of the TTL stuff is current manufacture, although you may need to adjust passive components used around "straight" TTL if you use modern 74HCT-family chips.
It's a piece of piss to build 1970s computers with cheap modern jellybean parts that Farnell keep in stock by the million.
Agreed. They could have the whole decorative thing, and in a little corner put an STM32 and breakout components, with the whole emulated game... I'd pay good money for something like that.
... until they just take over the moderation of their largest communities and this whole thing ends. I mean what's so magical about the moderation of something like r/videos other than the mods being free labor for Reddit?
I think you underestimate the value of that free labor.
Reddit isn't even profitable. Hiring mods for thousands of subreddits would cost them a ton of money, not only in wages but in the cost of finding and training those workers.
Beyond that, the minute the mods are official employees of Reddit, Reddit is fully responsible for all the content on all those subs. Social media companies are already under a lot of scrutiny for the kinds of content they allow on their platform. I doubt they'd want to go there.
> Beyond that, the minute the mods are official employees of Reddit, Reddit is fully responsible for all the content on all those subs. Social media companies are already under a lot of scrutiny for the kinds of content they allow on their platform. I doubt they'd want to go there.
"No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." (47 U.S.C. § 230(c)(1)).
That explicitly applies to providers and doesn't say they aren't allowed to moderate at all.
> "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." (47 U.S.C. § 230(c)(1))
While Reddit started as a link aggregator, the vast majority of useful content on the site is not external links. It is content generated for reddit, on reddit.
It's content generated by users, and it's the source of the content that matters, not the source of the moderation.
There's no generally accepted interpretation of 230 that restricts site owners from moderating user comments. https://www.eff.org/issues/cda230 "Section 230 allows for web operators, large and small, to moderate user speech and content as they see fit." See again the FB/Twitter examples I linked - those sites are also full of content generated for them, on them.
> Reddit isn't even profitable. Hiring mods for thousands of subreddits would cost them a ton of money, not only in wages but in the cost of finding and training those workers.
They don't need to. r/aww (36M), r/music (32M), r/videos (26M), r/Futurology (18M), r/me_irl, and r/AbruptChaos are some of the biggest ones; if they seize just those six, they've reclaimed most of r/popular anyway.
Reddit can't afford to pay the employees they already have out of the revenue they make. That's ostensibly the reason why they raised prices on the API in the first place. Losing all that free labour is absolutely the last thing they want at this juncture.
But even ignoring that, suppose all >15M subreddits suddenly get salaried mods. Don't you think the mods of, e.g., a 13M subreddit would want a piece of the cake too, and would strike to get it? Or would they just wait around and keep working until their subreddit is also "seized" by Reddit Inc.?
Is there any example of a community which successfully runs on mixed salaried/volunteer moderator labour in such a way?
> Reddit can't afford to pay the employees they already have out of the revenue they make, that's ostensibly the reason why they raised prices on the API in the first place.
I'm sure there'd be plenty of volunteers who already manage several of the big 20M+ subs that didn't go dark. We're only talking about 6 subs, and they aren't that complex.
> Don't you think the mods of e.g. a 13M subreddit would want a piece of the cake too, and would strike to get it?
They don't matter nearly as much. The top few are what make up the home page with a few others just sprinkled in. And again I highly doubt reddit would need to pay mods.
Again we're only talking about 6 subs. Not even double digits. There's already groups of subs that mod many, many large subreddits.
You're assuming that if Reddit decides to "fire" all mods of the top subreddits, there would be dozens of (suitable) people happily waiting to take their place. I don't think that's remotely the case.
> The top few are what make up the home page with a few others just sprinkled in. And again I highly doubt reddit would need to pay mods.
Reddit wouldn't exactly be Reddit if you reduce it to only /r/aww and five of the other most mainstream subreddits. If that's what you think will happen... Well I don't exactly disagree with you, but Reddit would be a very different product at that point.
But then they’d get banned and lose all that Reddit karma and their very important badges
And oops all the subs that aren’t total shit have min-karma requirements these days… hope you didn’t enjoy posting in those?
There's just this disconnect between the Redditors who imagine that all the users are behind them and the actual masses who will keep scrolling r/aww and r/mildlyinteresting and so on. If 5-10% of users want to self-immolate and be banned, that's fine. People will get bored of the harassment campaign in a couple of weeks, and the world will generally keep turning.
Mod labor is not irreplaceable either. The six people running 60 of the top 100 subs aren't doing any personal work either; they're just setting up scripts, etc. And at a lower level, there is an infinite supply of people willing to be petty tyrants for a modicum of personal power.
The users of the TikTok-Shaped Reddit that spez is trying to pivot to don’t care about any of this and in a year they will have stabilized around a new user base. And that won’t include a lot of the current powerusers/powermods and they clearly know that and it’s fine for them.
Probably only 10% of users even comment so if you’re that engaged you’re not the users they care about. And sure, those are the people posting content etc, but the Reddit calculation clearly is that they will be able to repost tiktok videos and memes onto the subs perfectly well without the users who want to leave. Which does include me, most likely.
Gallowboob alone is responsible for a large fraction of the top-scoring content on Reddit. The labor of reposting TikTok and tumblr onto a third platform is just not as valuable as aggrieved Redditors imagine. It can be replaced by a very small shell script, and that’s all you need to do, is to keep content flowing and there’ll be a large retention of users endlessly scrolling, that’s all it takes.
Content spam is way easier to manage especially automatically and I don't mean with LLMs, just current tools.
The only reason it's working right now is that the mods in charge of these subs are upset, and a few mods can shut down a 30M+ sub. I suspect the number of users on r/aww who care _that much_ -- enough to get their account banned for spam -- is small.
If it was just users being upset (which would be the case if reddit reclaims them) I just don't think there's enough upset people for there to be an impact like that.
Just earlier this month Reddit started having issues with follower spam -- something that was specifically a problem because it bypassed volunteer mod control and Reddit was doing a shit job taking care of it. There were tons of threads about how to turn off those notifications.
Just yesterday Musk was lamenting about how bad the bot situation on Twitter has gotten recently.
And none of that even gets into keeping content on topic for a community, just straight up spam.
r/aww is literally just cute animals. Most of the others are just memes/reposts of their respective topic.
There's plenty of mods from subreddits that didn't go dark and have 20m+, reddit can just put them in charge.
I highly doubt the specific mods matter that much on these "fluff" subreddits. Subreddits like r/apple I could see it mattering a lot more, but most of the top ones aren't exactly complex topics.
There is a social contract and moderator effort to make that happen. When that contract is broken and moderators are stretched thin, how long is that subreddit still going to be just cute animals?
I'm betting all subreddits soon turn into /r/eyeblech (gore/death/dismemberment gifs). Not necessarily illegal, but definitely not what you're looking for when you go to /r/awww
But I think there's enough active, and willing mods who already manage several large subreddits (10m+) capable of managing the half a dozen or so subs that I mentioned.
And I'm only talking about 6 subs, not even 10's of subs.
I just don't think there's enough ambiguity/complexity in moderating those subs when it comes to choosing what content is or isn't appropriate.
But either way I truly hope we can get old reddit back, api, third party clients, and all :(
If one of Reddit's big goals in all this is to sell actually useful information on a broad variety of topics for training LLMs (I think they've basically said as much), then I'm not sure that only being able to prop up a few meme and cute-animal subreddits is much of a win.
It’s the long tail of niche subreddits where almost all the content that keeps Reddit at the top of SEO rankings and makes them interesting for training AI lives.
Can you name a single mod from r/aww? Could you even tell if it reopened with new mods? People are not that passionate about the mods of a sub about cute animals.
Even in subs where people are very passionate about moderation, they just give up and forget about it very quickly because it's not worth it. r/Animemes/ still has orders of magnitude more visitors than the alternative people moved to after their big mod drama.
Extremely few people interact with mods or care about them.
I suspect that if Reddit starts replacing those mods outright, you could see other mods leaving. It might work out for Reddit, but it's a dangerous game. Also, which mods get removed, and would they be banned completely? The big subreddits have lots of mods, many of whom just help with small stuff, often across many of the large subreddits, so the actual details of who stays and who leaves which subreddits probably get murky pretty fast. (And that's besides the fact that the new mods would have little idea how things were currently run, unless some of the existing mods helped them.)
The selling point of Reddit, though, is not that r/popular is the greatest list of links ever created. The selling point is that you can tailor your links perfectly to your tastes, right? Good luck to them.
I had no idea futurology was that big. I remember years ago when it used to be a singularity/life-extension/transhumanist subreddit, and then it got brigaded by climate change activists, to the point that most of the articles on the front page were about clean energy, rising sea levels, and recycling. Also a lot of doomer types actually bemoaning life extension outright, because of the usual nonsense about "where will we put ALL THE PEOPLE" and "Drump will live FOREVER." I stopped checking it out years ago, but it serves as a case study of how entryists co-opt and destroy online communities.
lol good luck getting to the singularity when food scarcity hits riot levels and the coasts are flooding. The future we get is the future we deserve, not the one we fantasize about.
> Beyond that, the minute the mods are official employees of Reddit, Reddit is fully responsible for all the content on all those subs. Social media companies are already under a lot of scrutiny for the kinds of content they allow on their platform. I doubt they'd want to go there.
Oh. Wow. I never thought about that. Does anyone know if Section 230 of the Communications Decency Act shields Reddit from liability as-is?
>Reddit isn't even profitable. Hiring mods for thousands of subreddits would cost them a ton of money, not only in wages but in the cost of finding and training those workers.
And it would be a policy nightmare. Right now, most sub-reddits are flexible in terms of what they'll tolerate. At best I think they could have a single moderation policy applied across all sub-reddits they pay to moderate. Would that work?
Let me know if my math is wrong: it means that 5 people are on the moderation team of 18.4 subs. The 500 top subs often have mod teams from 20 to 50 people.
Bringing up that stat doesn't seem really relevant.
I'm saying it probably doesn't take as much manpower to moderate these subreddits as you might think. Bringing in a team of 5-10 full-time workers would probably be enough, tbh.
I don't see how they'd have difficulties replacing the mods. Even if it's hard to find replacements within the Reddit community there are lots of blue checks on Twitter who'd probably be thrilled to take over Reddit moderation and might even pay Reddit for the privilege.
I don’t know why you are downvoted. Reddit did announce to some moderators that AI moderation was coming this year, and I heard that some already have access for testing. I also heard that it works pretty well for automating many actions, though it still requires some humans.
Paying mods for the large subreddits won't cost that much.
However, doing that makes the company directly liable for everything. Right now they can at least try the legal argument "we just provide infrastructure, and we didn't know what was uploaded there"; as soon as they moderate themselves, that blind-eye strategy can't work at all.
I think you underestimate the ability of AI to literally solve this problem for almost nothing. The large subs can absolutely be automated moderation, if they're not already. Enough flags on a post? Just take it down, there will be 15 million new posts in an hour.
> Just take it down, there will be 15 million new posts in an hour.
I'm a mod at a 150k+ user subreddit. AI can probably handle removing objectionable content, but I spend most of my time on other areas that are much less susceptible to automation.
• Removing requests when there are recent similar posts. We've found that users disengage when the same requests are posted every week.
• Hosting AMAs. An AI isn't going to know who we should invite or whether they'll be a good fit for our subreddit.
• Detecting and banning people deceptively promoting their own products. This is incredibly common and often quite difficult.
Exact duplicates are already automatically removed by bots.
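A minimal sketch of what such a dedupe bot might do (hypothetical, not Reddit's actual AutoModerator logic): hash each normalized title and flag anything seen before.

```python
import hashlib

def normalize(title):
    # Lowercase and collapse whitespace so trivial edits don't evade the check.
    return " ".join(title.lower().split())

class DuplicateFilter:
    """Flags exact reposts by remembering a hash of each normalized title."""
    def __init__(self):
        self.seen = set()

    def is_duplicate(self, title):
        digest = hashlib.sha256(normalize(title).encode()).hexdigest()
        if digest in self.seen:
            return True
        self.seen.add(digest)
        return False

f = DuplicateFilter()
print(f.is_duplicate("Request: best intro book?"))    # False -- first occurrence
print(f.is_duplicate("request:  best intro BOOK?"))   # True -- same title after normalization
```

This only catches exact duplicates, which is the point: the "similar request within the year" judgment described below is exactly the part a hash can't do.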
We remove request posts that lack detail or when there have been several similar active posts within the year. This strikes a balance between allowing each request only once and swamping the subreddit with nearly identical requests. We removed about 75% of the posts reported in this category last month; the remaining 25% either had no previous similar requests or had enough detail to stay up. Many cases are ambiguous and resolved by majority vote.
AMAs don't happen spontaneously in major subs. As with everything else, there's a lot of admin work that goes on behind the scenes. We mostly invite people for AMAs; under 10% of this year's 10+ AMAs were requested by the participant. Someone has to email them, answer their questions, schedule a time, explain how to post, join them in a chat channel, etc.
Deceptive promotion is rarely flagged on our sub. We got ~450 reports last month and none of them were for promotion. Promotion posts are quite rare; they're against our rules and most people don't even try. What we regularly see are comments from users recommending their own products. Some cases are obvious, e.g. new accounts frequently recommending a book without ratings that was released yesterday. Many are much harder to identify; we regularly search user profiles and the Internet to confirm an identity. One person we banned last year had been covertly recommending their book for months. It took numerous searches and several weeks to remove all their promotion comments.
Well, that means bots can take anything down unless you become really good at detecting bots. It's not like it's hard for bots to accumulate karma; you can do it with reposts and GPT-generated comments.
I'm sure they have metrics at this point on how often real humans take a particular action. If an account is reporting posts higher than some amount X, then shadow ban that account.
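A rough sketch of that heuristic, with a sliding window (the window size and the threshold "X" are made-up numbers, and the action on exceeding it is left to the caller):

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600
MAX_REPORTS_PER_WINDOW = 20  # the hypothetical threshold "X"

class ReportRateLimiter:
    """Flags accounts whose report rate exceeds a per-hour threshold."""
    def __init__(self):
        self.reports = defaultdict(deque)

    def record_report(self, account, now=None):
        now = time.time() if now is None else now
        q = self.reports[account]
        q.append(now)
        # Drop reports that have aged out of the sliding window.
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()
        return len(q) > MAX_REPORTS_PER_WINDOW  # True => candidate for shadow ban

limiter = ReportRateLimiter()
flags = [limiter.record_report("suspicious_bot", now=float(i)) for i in range(25)]
print(flags[-1])  # True once the account crosses the threshold
```

As the replies below note, the hard part isn't the counter; it's picking a threshold that catches report-bombing bots without punishing genuinely diligent reporters.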
Do you think users will continue to report posts if they see that reporting too often will cause them to be shadowbanned?
Conversely, if you are in the business of publishing spam to Reddit, wouldn't it be useful if you could overwhelm the site with so much spam that the people who care enough to report all the spam they see get banned?
For how much I see people here saying AI moderation can solve this, I frankly think AI tools are likely to be an order of magnitude better at playing offense than defense - producing huge volumes of AI-generated spam/marketing/offensive content that could fool an AI content moderation algorithm on the other side.
I think you underestimate the ability of AI to literally solve this problem for almost nothing. The large subs can absolutely be botted, if they're not already. Enough flags on an account? Just take it down, there will be 15 million new accounts in an hour.
The big subs have figured out an extensive setup of bots and moderation guidelines, investing their own time, all for free. Reddit’s mod tooling is abysmal; that’s part of the protest. (Only 3P devs have bothered building more advanced tools.) A new mod will not have the expertise or the careful setup of the previous mods.
Most of the mods are debatably not power hungry, particularly on a sub like Music or Videos. Finding a replacement mod team without specific agendas is going to be hard. Only the most power tripping or ideologically driven will want the position as paying mods seems out of the question given Reddit’s stance.
Inexperienced mods will also be unable to handle the torrent of spam and low effort posts and comments the big sub attract.
It’s not impossible to punish the dissident mods but having a functional replacement on short notice is impossible.
Agreed. To me the most interesting part is what's going to happen to the large community-specific subreddits, e.g. r/soccer, r/Music, r/movies, r/anime, etc. Those mod teams are generally very good and dedicate a lot of time to making the subreddit feel like a home for people interested in that community. They will be the hardest mod teams to replace, as opposed to large front-page subreddits like r/pics or the smaller subreddits. They're the middle ground: large enough to attract constant abuse/spam, but requiring domain-specific knowledge, so not just anyone can be a good mod.
People underestimate how much goes into moderating a community. You can train the bots to remove posts based on age of account, or karma. But to respond and remove racist trash in comments or just spam takes a lot of time. Especially for large communities, the time spent can become a full time job (in terms of hours).
And when you have to insert paid moderators, the cost to moderate all of the huge communities will blow up Reddit’s internal budget.
It’s easier said than done. Just ask FB or Google (with YouTube), they have difficulty moderating their own platforms and they have billions of dollars at their disposal.
Hmm I wonder if one could just run comments through an LLM and have it classify for things they want to block, then moderate automatically based on that. Probably more accurate and less expensive than hiring people for it.
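A sketch of that pipeline, with the actual LLM call stubbed out as a keyword check so the example is self-contained (a real version would send each comment to a model with a prompt like "Label this comment as spam/harassment/self_promotion/ok"):

```python
BLOCK_LABELS = {"spam", "harassment", "self_promotion"}

def llm_classify(comment: str) -> str:
    # Stand-in for a real model call; trivially keyword-based so this runs offline.
    if "buy my" in comment.lower():
        return "self_promotion"
    return "ok"

def moderate(comments):
    """Partition comments into (kept, removed) based on the classifier's label."""
    kept, removed = [], []
    for c in comments:
        (removed if llm_classify(c) in BLOCK_LABELS else kept).append(c)
    return kept, removed

kept, removed = moderate(["Great article!", "Buy my new book today!"])
print(removed)  # ['Buy my new book today!']
```

Of course, the mod comment above about covert self-promotion cuts against this: the cases that consume human mod time are precisely the ones where the label isn't recoverable from the comment text alone.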
If they do, whatever thrown-together group of Reddit-appointed substitutes gets the task of trying to moderate all of Reddit at once is likely going to be drowning very badly.
And you can expect people engaged with the protest to take full advantage of this if it happens, making the problem that much worse
If you simply reopen without any of the current mods, the site will be overrun
I’ve got my entire neighborhood teed up to go and spread rumors that spez likes to do weird things to porcupines the second the moderation walls go down.
Reddit is already complaining about being unprofitable; having to actually pay moderators to do the work community members are doing for free would make their situation even worse.
Trying to milk 3rd-party developers with excessively high API fees while expecting the community to provide all the value for their site free of charge was a very short-sighted move.
Impassioned people with domain experience, in many cases.
/r/videos, sure, that's got its own culture, history, known reposts, moderation-style needs, etc. It will take some work for a group of communications majors to figure out how to moderate it.
But /r/PLC, where I hung out frequently? You need reasonably intelligent electrical engineers to discern spammy press releases from interesting news! We don't work cheap, except when we work for free. Elite athletics, weird hobby niches, professional forums, on and on... Reddit's long tail, where much of the value was, relies on domain experts who were also good communicators. Those are hard people to find.
Stackexchange, love it or hate it, has the same thing going. Are you going to hire a pilot to mod the aviation stack, or a post-doc to moderate mathematics?
Shouldn't voting by itself sort the good content from the bad? If the community knows the subject, spam should naturally fall and good content rise to the top.
Is that the case? Wouldn't the mods, by definition, be the most passionate people about Reddit, and therefore the sort of people most likely to be upset about the recent changes? And therefore wouldn't anyone who might potentially do the job of mod also likely be upset by the changes?
If I were working for Reddit, I wouldn't take for granted the assumption that these people can be easily replaced.
Moderation of the subreddit matters a lot less than sitting on a common keyword. It’s not like r/real_trueToyotaFans2 is going to see a ton of traffic even if it’s the best moderated sub in the world.
Domain-squatting on a couple dozen important keywords and brand names in 2005 is the powermod secret to success, not that they’re just that great at their job that they rose to the top on merit.
There are quite a few subs that prove this: really, really badly moderated but sitting on super, super valuable real estate. And Reddit doesn't really have a mechanism for handling this unless the mod is outright posting gore or something; they got there first in 2005, and that's the end of it.
I miss 3dfiles.com . I wish there were people churning out cool stuff like back in the late 90s/early 2000s that would support goofy stuff like that. :)