Pretty cool to see my project hit the front page of HN, but definitely a bit of a /shrug moment on the subject itself. "Facebook gonna Facebook" I think is approximately how we feel about this.
I know here on HN we're used to hearing stories about scrappy startups trying to carve a piece of the pie big enough to exit on, but that is pretty much the exact opposite of what Dreamwidth is. Our motivations are very different, so this FB block is mostly a curiosity to us.
Dreamwidth is a small, neighborhood corner store kind of site. We're run by a couple of dedicated part-time staff (who have other jobs/responsibilities in life -- I personally work for Discord!) and a cadre of amazing volunteers who donate their time and energy to make a nice little corner of the Internet that isn't driven by the cycle of VC and growth and user monetization.
We do not have any goals around growth, we don't advertise, and we ultimately don't care that much what the other platforms do. Our goal is to give people a stable home where they don't have to worry about their data being sold, their writing being monetized. Users choose to pay us for a few more advanced features (like full text search), and we support ourselves entirely off of that.
We are home to a large group of online roleplayers, Hugo Award winning fiction writers, Linux kernel developers, parents, security researchers, artists, activists, recipe bloggers, educators, and everything in between and around the edges who would rather work with a service owned and run by people who are motivated by something other than get-big-and-exit. Large communities of online roleplayers get together on Dreamwidth to build whole worlds and tell stories together. I'm constantly impressed by the creativity of our community.
Anyway, it's super cool to see Dreamwidth on the home page here. It's been my side project for over a decade now, and I'm quite proud of it. Even if modernizing a 20+ year old Perl project is a hellish undertaking at the best of times... but we keep going. :)
My wife and I tried to set up a simple business page for the local store we opened less than a year ago; they flagged us as a fake/fraudulent account multiple times when we tried to create one. Neither of us has a personal/active FB account, so I guess that's the reason (and this behavior, yeah, makes me double down on NEVER getting a FB account now). I even emailed them 'proof' as they requested, because my wife was worried it would really hurt us; nothing ever came of it. We finally decided it wasn't worth our effort, forgot about them, and our store has thrived since. I'm happy to grow our business without having to deal with them. We've been using local and other ad platforms such as NextDoor.com, which I'd never heard of but one of our older customers brought to our attention. People talk about getting rid of Facebook; to me it starts with the actions you guys take, and that's how my wife and I are going about it.
Don't support Facebook at all, they don't deserve it.
I had a quick look through Dreamwidth's "latest" page (https://www.dreamwidth.org/latest) earlier today, and a major portion of the posts on there were blatant spam for things like credit card scams, "Work from home and make $1000/day!", and so on.
You seem to be hosting a lot of spam, and those spam posts are also far more likely to be getting linked externally on sites like Facebook, since that's the reason they're being created.
Because Dreamwidth is effectively free website hosting along with a free new subdomain for each account, blocking individual subdomains is futile, and it's difficult for external sites to distinguish between spam and legitimate blogs.
I'm sure Facebook will unblock you fairly soon, but unless you get the spam on Dreamwidth under control, this will probably keep happening, with different sites blocking it. It would be easy for other sites to come away with the impression that Dreamwidth is a spam-hosting site and decide to block it (either manually or automatically).
Blogspot has always been in a similar situation and would get blocked from a lot of sites due to the sheer amount of spam it hosts.
We have a very manual anti-spam process right now that relies on humans to detect it and action it. We have a couple of very dedicated folks who end up looking every few hours, but it's not automated, and we don't have full timezone coverage.
It's definitely something I'd like to see us improve, but we've been focused on other projects (like switching from mid-90s HTML to a responsive design, which is a slow rewrite of the entire site). That said, if you have any advice on reasonably scalable ways of doing this in-house that don't involve sending our user content to a third party, I'd love to take any recommendations!
Feel free to email me, firstname.lastname@example.org, if you would rather do that. And if not, don't worry about it, I appreciate the comment anyway :)
The nice thing about this is it's pretty computationally light and straightforward to implement in any language. I have no clue as to your stack, but if you have Python on your backend then sklearn is a good library with a naive Bayes classifier (plus a lot of other, better options). Any post with a high probability of being spam, I'd automatically flag and by default just remove, with the option for the user to ask for manual review. The main thing you'd need for this or any fancier approach is a dataset of spam/non-spam posts. If you have an easy way of retrieving past posts that were labelled spam, that should let you build a fine dataset. If you don't want to train on your own users' posts (although the only information kept here is word counts), you can look online for spam datasets and train your classifier on one of those.
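To make the suggestion concrete, here's a minimal sketch assuming scikit-learn is available. The training posts and labels are invented placeholders; in practice you'd train on posts that moderators have already labelled.

```python
# Toy naive Bayes spam filter using scikit-learn, as suggested above.
# The training posts are made up; in practice you'd train on past
# posts that moderators already labelled as spam or not spam.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

posts = [
    "Work from home and make $1000/day, click now!!!",         # spam
    "Get a credit card with no checks, limited time offer",    # spam
    "Chapter 3 of my fantasy serial is up, feedback welcome",  # ham
    "Notes from this week's kernel debugging session",         # ham
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = ham

# CountVectorizer keeps only word counts, so the model itself
# retains no raw user text.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(posts, labels)

new_post = "Make $1000/day working from home!"
spam_prob = model.predict_proba([new_post])[0][1]
# In production you'd only auto-remove above a high threshold
# (say 0.95) and queue everything near the line for manual review.
print(f"spam probability: {spam_prob:.2f}")
```

The nice design property is that the whole thing is just word counts and multiplication, so it runs fine on modest hardware and the data never leaves your servers.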
The nice part is that SpamBayes gives you two numbers, the spam "probability" and the ham "probability". When one of them is very close to 1 (like > .99) and the other is very close to 0 (like < .01), there is a good chance that the message is really spam or ham. And this classifies almost all the messages. But from time to time you get a message where the numbers are not so clear, or both are big or both are small, and this means the classifier is confused and you really must take a look at the message.
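That three-way decision is easy to sketch. The cutoffs below are illustrative placeholders, not SpamBayes' actual defaults:

```python
# Toy triage following the logic above: trust the classifier only when
# the spam and ham scores clearly disagree; everything else goes to a
# human. The cutoffs are made up for illustration.
def triage(spam_score, ham_score, hi=0.99, lo=0.01):
    if spam_score > hi and ham_score < lo:
        return "spam"    # confident: safe to auto-remove
    if ham_score > hi and spam_score < lo:
        return "ham"     # confident: safe to let through
    return "unsure"      # classifier is confused: needs a human look

print(triage(0.999, 0.002))  # spam
print(triage(0.003, 0.998))  # ham
print(triage(0.700, 0.650))  # unsure (both scores high)
```

The "unsure" bucket is the whole point: it turns a fully manual process into one where humans only look at the small fraction of posts the classifier can't call.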
Then google started doing that or something similar at scale and effectively eliminated spam in my mailbox ever since. (With the curious recent exception of some highly similar bitcoins spams)
Not sure what you mean here. The problem Deimorz was bringing up wasn't just about users writing something, and spammers linking to it. It was that this site was being used to host the spam payloads. By spammers, not by actual users.
And this is how a lot of the early spam fighting worked: by finding hosts that allowed sending spam and publishing their IPs on blocklists. All mail traffic from those IPs, even if legit, would then be rejected by a large proportion of mail servers that subscribed to these blocklists.
That's where the spamming is happening.
Calling these "spam payloads" is incorrect. The spam payloads are on Facebook's servers. These are sites that are linked to by the spam, ostensibly for the purpose of funneling to whatever the spam is trying to market. Trying to police generic web pages, rather than the spam itself, seems like an exercise in futility given the basic philosophy of the Internet.
> And this is how a lot of the early spam fighting worked: by finding hosts that allowed sending spam and publishing their IPs on blocklists
The situation has a similar shape, but there is a distinction as Dreamwidth is not actively sending spam but rather responding to requests from viewers. Still, we can look at the outcome of what happened to the email ecosystem - increased centralization of providers - for a warning of what's to come.
A typical way to deal with this is to consider domain reputation somehow, if the content contains a link. E.g. trust links to old domains more than to young ones. Or trust sites with lots of backlinks more than ones with none.
So an old domain with user-created content, a good reputation, but little moderation or abuse protection turns into a great place to host this data. Eventually links to the domain get flagged one too many times, and it gets blocked.
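A hedged sketch of that kind of heuristic; every weight and cutoff here is invented for the example, not anyone's real scoring:

```python
# Hypothetical domain-reputation score of the sort described above:
# age and backlinks add trust, accumulated abuse flags drain it.
# All the numbers are made up for illustration.
def domain_trust(age_days, backlinks, flags):
    score = 0.0
    score += min(age_days / 365, 5)    # up to 5 points for domain age
    score += min(backlinks, 100) / 20  # up to 5 points for backlinks
    score -= flags * 2.0               # each abuse flag costs 2 points
    return score

BLOCK_THRESHOLD = 0.0

# An old, well-linked domain looks very trustworthy at first...
print(domain_trust(age_days=7300, backlinks=500, flags=0))  # 10.0
# ...but repeated spam flags eventually push it below the threshold,
# and the whole domain gets blocked, legitimate users and all.
print(domain_trust(age_days=7300, backlinks=500, flags=6))  # -2.0
```

Note how the scoring operates on the whole domain: once the flags pile up, every subdomain and every legitimate blog goes down with it.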
I agree that they are not sending spam in this scenario. But neither were the open smtp relays of old. They just passed it through, while allowing the spammers to leech off of the relay’s reputation.
(Just to be clear, I have no knowledge of what happened here in reality. So I don’t know that DW is hosting spam, nor that it was linked to from Facebook. This is just an example of why a domain blocklist might be a totally reasonable option.)
These scam sites are like that - do you really think you can make $30,000 a week working 30 minutes a day from your home computer if you just send these idiots $25?
There's already a call to control political information when it has harmful effects on society. Next up is "your website was blacklisted because you allowed a user to link to Plandemic". I agree Plandemic has no redeeming purpose, but censorship is not the answer.
I've got no problem with their operation, but YOU are going down a VERY dangerous and slippery slope by saying I can't block domains that clearly host trash because they might host something else.
On my network I can block child porn, malware sites, scam sites and even entertainment sites like youtube. If you are running a service that mixes the content together, then you may be blocked by folks (like me) who don't have time to chase down every (free) subdomain you allow scammers to create.
That is my right. Period. Full stop. That is not censorship.
Folks here get censorship confused. The govt does virtually nothing to stop these scam sites - so they are certainly not being censored. I'm fine if govt does nothing, as long as communities of people can block these places.
And yes, if you run a site on the internet and don't make it slightly difficult for scammers to use your site to host crap, then other folks in the neighborhood will move the heck away from you.
It seems like you're getting confused on what censorship is. https://en.wikipedia.org/wiki/Censorship . Censorship can be done by the government, and it also can be done by sufficiently powerful private entities.
Also, nowhere have I argued that anyone shouldn't block whatever they'd like on their personal infrastructure. Although if you do it to your kids, then you are indeed censoring.
And that is what we are talking about. An obligation to link to others.
Usually people follow the network -- find one person that's interesting, see who they talk to, and go from there.
Another approach is to see what "interests" are popular and click through to see who shares those/is active: https://www.dreamwidth.org/interests?view=popular
But, TBH, there's a lot of happenstance and serendipity (or are those the same?).
I finally got it unblocked, but only thanks to a former colleague (engineer) who now works at Facebook. If that didn't work, my next step was going to be filing a lawsuit, just to try to get their attention.
A summary from spending a month trying to get this fixed:
1. The "Report" link doesn't actually do anything, or if it does, you have to get loads of people reporting to surface it to a human.
2. The best resolution may be through Ads billing, by trying to post an ad and reporting a problem displaying an ad.
3. This affects a lot more than just blocking links. It also affects messages in private DMs (including from a Facebook for Business inbox), links on Instagram, any Facebook APIs you may be using, and even getting password reset emails to that domain.
For more context, here's my post: https://watilo.com/facebooks-community-standards-censorship-...
Something kind of like that worked at the job I had a few years ago.
A large provider of free email that also runs an ad network (not Google or Microsoft) once kept flagging my employer's emails as spam. The only emails we sent were receipts to people who had purchased stuff from us, an email with installation and activation instructions to new purchasers, and replies to support tickets.
We'd go through their procedure to report false positives, and anywhere from a few days to a week later our mails would start going through...and then a few weeks later they'd start getting blocked again.
This finally ended permanently when our guy who managed ad spending called up our rep at the email provider, and asked something along the lines of "Could you explain to me why the fuck I'm spending $X thousand dollars a week on ads there to acquire new customers, and then you block us from sending those customers receipts and instructions!?".
The ad rep put him on hold for a moment, and then returned with their head of IT conferenced in. The ad rep explained the problem to the IT head. The IT head then conferenced in the leader of the anti-spam team and told him to add our domain to an exception list so that nothing we sent could ever be classified as spam. The anti-spam team guy said it would be done in 10 minutes.
We never had a problem after that.
I think the origin of this wasn't even originally sinister. It's just that "free" services aren't willing to provide human support, so youtube, gmail, etc. rely on automated systems. Ultimately, only paying customers get human support, and that simple system (even assuming all actors behave well) progressively skews to the advantage of paying customers. The house edge is small, but cumulative.
1) Spammers using one domain and multiple sub-domains
2) A poorly calibrated ML model for spam.
It is worth noting that FB probably gets the world's largest number of spammers, fraudsters, and general bad actors, due to the fact that they have an absurd number of users.
Honestly, though, if this stays on HN it will get fixed. What's deeply concerning is that this is probably happening to a bunch of other sites that never make it to HN, and for them it will not get fixed.
It's also possible that someone has weaponised the FB spam system against Dreamwidth (which seems less likely to me, to be fair).
Yes, and they make a ton of money from those users as well. So the question is why the Facebooks and Googles of this world don't have proper procedures in place to deal with these things in a sensible way.
If a decision gets reversed only after exploding on Hacker News or Reddit it should be considered a bug.
The problem is that we only see the cases that go viral. We don't see the cases that didn't, and thus can't actually judge how many cases get reversed via the official escalation channels.
We should collectively start holding platforms to higher expectations; they certainly have the resources to do a better job.
On what grounds? They are free to remove stuff from their website
> Dreamwidth Studios is an Open Source social networking, content management, and personal publishing platform. Our mission in life is to make it easy for you to share the things you make, and easy to find the people who are making the things you want to enjoy.
> We have all the features you've come to love in social networking sites, including privacy and security features, community interaction, content aggregation, multimedia support, and more. We're committed to adding features that you'll find useful and relevant, as well as working to integrate our site with the other Internet services you regularly use.
> Dreamwidth Studios is based upon the LiveJournal codebase offered by LiveJournal, Inc. We've taken the LiveJournal server code and updated, modernized, and streamlined it -- and we make all of our changes available under an Open Source license.
I feel like it's one of the last pure corners of the internet, technology wise, so I guess I shouldn't be surprised that FB is blocking it.
The only reason I have ever heard of dreamwidth is because Matthew Garrett (mjg59) has his excellent blog there:
... otherwise I don't think I ever see that domain anywhere ...
Most DW users post under friends-lock. You're not going to see most DW posts unless someone has given you access.
Main thing it has against it is the lack of an app.
(And that Facebook has just blocked it)
Curious about why? I rarely use apps if there's a web page available that doesn't suck.
This looks more like "Main thing it has against it is the website predates responsive, mobile friendly web design"
EDIT - actually aside from a few glitches it's not awful on mobile - and a random community I tried seemed to work fairly well.
There's no good Hacker News client for mobile though, at least one that's better than the web page. Not that I'm saying someone should make one either, the site is responsive as heck.
It's perfectly possible to write a good Reddit web client; they chose not to
Good for whom? "Good for the user" does not mean "good for Reddit".
as my Hacker News app, it does me. I like that I can vote on comments and stories, but it's irritating that I can't post via it. On a medium sized phone screen, the mobile web experience of HN is not optimum in my opinion.
That's a development I often see.
I think the only improvement needed might be a different style on "code" blocks, but otherwise it works great
There's literally no need for an app
A bit kludgy, but still a much better user interface than any other discussion board I've used lately.
Also possibly a feature, depending on who you ask.
(You can download the source and host it yourself if you like though. But Dreamwidth itself is hosted at Dreamwidth.org)
If the hypothetical app is to be used with the main instance only, then there’s no challenge (unless there is risky/restricted content on the main instance, so the app is not allowed by Apple/Google and others).
(Unrelated: whenever I read something about Tumblr, like its Wikipedia page just now, I'm always surprised to find it in the present tense, not "Tumblr was" but "Tumblr is". To me, it's just dead with the censorship, I never see it being linked to anymore, and I wasn't even regularly on the nsfw part. Every time I'm reminded it wasn't actually shut down.)
My memory for internet dramas is fading a bit, but I think it was both the sale to SUP (the Russian company) and the content policy changes (and the fears were only stronger under a Russian owner) that led to the massive LiveJournal diasporas to Tumblr (which was new, launching in early 2007), and the various LJ clones/forks like Dreamwidth, JournalFen, DeadJournal, InsaneJournal, etc. As far as I know, Dreamwidth is the only one of the clones/forks that not only still exists, but is maintained, which is really incredible.
Livejournal became associated with CP thinly disguised as fanfic. Think Snape and an underage Harry Potter kind of stuff. It was sold to the Russians after its value imploded because all the English-speaking normal users left. The Russian content on there now is as far as I know innocuous; that kind of fanfic is mainly a Western thing.
Don't get me wrong, I understand 100% the people that find the idea repulsive, would never want to accidentally see it, etc. But that's the same with gore / gruesome scenes: I really don't want to see that, either. That's not what makes something illegal.
It might also depend on research into whether it makes pedophiles more likely to act on their feelings. If it's shown that pedophiles viewing drawn child porn has adverse effects, it should indeed not be allowed; but I never read about it having such an effect. So far as I know it doesn't hurt anyone.
There's ethnographic evidence (see Patrick Galbraith, Mark McLelland, Suzanne Ost) of such fans from Japan that the assumption they are pedophiles, or that they carry over their desires from "2D" to "3D" is dubious at best. The English government, when banning virtual depictions of fictional characters, admitted they had no evidence on its effects, mode of usage, or popularity among any particular group.
I would suggest going to your local university library and looking it up there as access to most of these studies will require expensive journal subscriptions.
For your second question: there is a fair amount of research into the effects of visual and audiovisual stimuli on pedophiles and other sex offenders. This research was and AFAIK still is the basis for indefinite confinement of sex offenders generally and pedophiles specifically.
For your third question: there is some research on textual material, specifically as it relates to pedophiles, showing that it does stimulate demand but to a far lesser extent than visual, audio, or audiovisual media. AFAIK, nobody has researched the effects of Harry Potter porn.
For your final point: there is research showing that non-realistic visual media (i.e., manga) can, like textual material, stimulate demand, but to a far lesser extent than action or simulated visual images.
I'm more just curious because I haven't seen anything (other than from the authors I've cited) on the effects of fictional (and in particular highly stylized) material on regular people, or even pedophiles specifically. Gary Young's book on The Gamer's Dilemma from 2010 or so also concludes there is no evidence for fictional material having these effects.
What do you mean by "stimulate demand"? For example, it would not surprise me at all that pedophiles find certain depictions arousing, but that still wouldn't be any reason to illegalize it, unless we are to go about banning everything that pedophiles also find arousing (which is a bridge I suspect many would not want to cross). Of course pedophiles are aroused, but are non-pedophilic individuals also aroused? The study on Japanese fans of lolicon manga does not so neatly indicate pedophilia as an appropriate category for their attractions, nor does it explain why they are so defensive against real depictions.
With fictional material, from what I can gather, there's something else at play which does not fit into the commonly held notion that fiction is always (or even most of the time) a substitute for the real deal.
To my mind, the difficult questions are: if the material is arousing to pedophiles, what does that arousal indicate about the risk of an offence (obtaining real CP or otherwise)? Do the effects persist? What is the persistent effect post-orgasm? Are stylized or fictional depictions sufficient to arouse an interest in the real thing? Is it appropriate or desirable to prosecute or produce policy based on the tastes of pedophiles, especially given that the majority, or a high percentage, of CSA crimes are not perpetrated by pedophiles?
I've tried quite hard to find material specifically on lolicon manga or fanfiction, and a study among either pedophiles or others. If it does exist, I'd be very interested to see it, and I would have expected it to crop up in the arguments I've had so far. Especially a study among those who are not already convicted of a crime (contact offence or otherwise): those results would only, at best, tell us about criminals, and pedophile criminals with NC/C offences are almost notorious for showing lower impulse control.
To put it another way: access to regular porn may inflame the desires of a rapist, and may even make him more likely to commit an offence; is this sufficient to illegalize porn for everybody? Alcohol may have the same effect, and the question still applies.
In relation to risk of re-offense (aka recidivism), access to stimulating images was found to increase the risk of recidivism dramatically, by double-digit %, and pedophiles are already one of the groups with the highest rates of recidivism (in this context, we're talking more than 50% rate of re-offending).
If you get off to sexually abusing a child, that is pedophilia, so essentially all CSA crimes are committed by pedophiles. Generally the only exceptions I can think of are where both of the "offenders" are teens engaged in consensual acts before they are old enough to legally consent.
> To put it another way: access to regular porn may inflame the desires of a rapist, and may even make him more likely to commit an offence; is this sufficient to illegalize porn for everybody? Alcohol may have the same effect, and the question still applies.
This hasn't been shown to be true. But more importantly, the converse has been shown to be true: the overwhelming majority of people can watch porn, or drink alcohol, without becoming rapists, so it is not the porn or alcohol that creates the urge to rape. In contrast, the consumption of CP has been shown to create the urge to engage in pedophilic acts in pedophiles. (Additionally, CP is also illegal because it results in actual physical harm to the child.)
Again, AFAIK nobody has done a study specifically on manga, since that's an incredibly niche market in the U.S. Studies using fictional characters in the U.S. have generally used Disney characters or similar, so if you really insist on following up on this, you should start your search with that.
The data seems to contradict this. "Estimates of preferential attraction for children among those who offend are often in the ballpark of 25% to 50%". Granted, this is talking about preferential attraction, but estimates on the percentage of men with non-preferential attraction factor in at a much higher percentage of the population than pedophiles (and others we assume are preferentially attracted). Other than this, it still seems to leave a segment of child abusers who do not enjoy their actions sexually, but for other reasons (say, power).
>This hasn't been shown to be true.
To some degree (and I suspect, to a similar degree for pedophiles) it has; see my comment here. Exposure to pornography is usually shown to trigger or at least intensify feelings of sexual aggression, whether physical or verbal, even in non-experimental studies.
> the consumption of CP has been shown to create the urge to engage in pedophilic acts in pedophiles.
Is this true of all pedophiles generally based on a sample of the pedophile population, or those recruited to take part in such a study after being convicted of child abuse (contact or non-contact) offences? I can very easily see the fact they've been convicted as being a huge confounder here.
Just to be clear, I'm not arguing that CP should be legal; I just think that on the basis of the available evidence, I can't see any reason to illegalize fictional representations. I think the alcohol and regular porn comparison holds quite well. The longitudinal data showing that increased availability of porn does not lead to higher rates of rape is simply unavailable for child porn, though, perhaps with one exception. The link between inebriation and likelihood of sex crimes is, to my knowledge, undisputed.
 References here: https://www.b4uact.org/know-the-facts/behavior/
Do they climax or simply look at the images in an experimental lab setting?
Do they have a secondary condition? Do they have POCD?
> Is this true of all pedophiles generally based on a sample of the pedophile population, or those recruited to take part in such a study after being convicted of child abuse (contact or non-contact) offences? I can very easily see the fact they've been convicted as being a huge confounder here.
Many studies are done in a clinical setting. Someone who has self-control issues is more likely to go to a shrink for those issues than someone who does not. Other studies are done in a criminal justice setting which is similar.
It is questionable whether possession should be illegal. It incentivizes people to cover up crimes as they themselves may be held complicit otherwise. It incentivizes law enforcement to disrupt support / prevention efforts as anything someone could say could be taken in evidence.
This won't be a popular notion, admittedly. But it is an important elephant to address.
I've never been on livejournal, no idea what that looks like. I saw elsewhere in the thread it's a fork from Livejournal's code; it just reminded me of Tumblr.
Tumblr introduced and pioneered the concept of the reblog, which was similar to a trackback/pingback in blog parlance, except turned on its head.
I had to deal with the Facebook developer platform support recently, and it was (still is) a nightmare... We only have 100 or so Facebook users, but dealing with their app approval process, review process, and repetitive re-review process has probably cost me 4 weeks of "maintenance" work in the last year just to keep OAuth working for them. And here's the best part: we request no additional scopes! We get first name, last name, email address, profile picture: the minimum to OAuth. No post read/write, no photo permissions (aside from profile picture), no friends access, nothing. We are literally the lowest-risk app to end users that you can make on Facebook's platform...
They started a review on us a few weeks ago; we responded saying we needed more information (their wording was unclear)... They acknowledged the request for information, then a month later banned us without ever providing it. We had an outage for Facebook users for 2-3 days while we waited for their support to read our appeal, which again asked for information. It's now been 7 weeks, and both requests for the same information have gone unanswered. I'm afraid they are going to ban us again in a week or two, still without ever working with us...
Facebook is a nightmare to work with; we have no developers who actually want to keep working with them. It's offensive to most of our developers that we have to link personal accounts to our work Facebook application.
"About six months after opening, PayPal -- our payment processor at the time -- demanded that we censor some of our users' content (mostly involving people talking about sex, usually fictionally, in explicit terms) that was legal and protected speech but that they felt violated their terms for using PayPal."
HN discussion on it: https://news.ycombinator.com/item?id=15099761
Tell that to people who live in China, N. Korea, or any country controlled by a theocracy.
Even in the US some states ban doctors from talking about abortion and/or force doctors to give women seeking an abortion false information.
Facebook has over 3 billion users.
Gmail has over 2 billion active users. All their products combined probably have more than 3 billion.
US has 350 million people.
By those numbers alone, aren't tech companies the more powerful arbiters?
But that's a bad metric; we should talk about enforcement rather than raw numbers. A government's interest and design won't allow it to censor expression in the same way that private companies do. Private companies can filter anything, and since tech companies are digital, the enforcement itself can scale far beyond what government can do. Government maintains some sort of appeal system, while tech companies don't have to.
The algorithms behind Gmail probably filter billions of mails daily. Obviously, not all of that can be called censorship, as the majority of it is spam, but anything wrongfully filtered is censorship, no? That's going to be a huge number, and that kind of enforcement couldn't scale offline.
You don't have to tell 1 billion people in China not to say "F%%% China". You just punish a few and the rest fall in line.
Once you punish a few, the rest self censor.
"Oh, you're friends with someone who says X? Welp, you must be a Bad User and your posts will receive a lot less visibility and the validation that you desire. But if you were to stop interacting with your Bad Friend, you could become a Good User again."
And oops, "Bad Friend" no longer gets invited to real-life parties because most people still consider FB the most convenient way to organize them and people only rarely remember to forward the invites.
The enforcement works better when you aren’t aware of it. Being covert means you don’t protest so loudly.
It's less severe, sure.
Bam, your reach has just expanded to 3+ billion people, most of whom aren't even your citizens.
If a link can’t be shared on Facebook, Google, or Twitter, it may as well not exist in most of the world.
Heck, when you can literally close off entire markets, private corporations censor themselves.
It cheapens speech and denigrates those ACTUALLY living without freedoms.
Facebook, Google, Twitter, or any other online service for that matter controlling what is and is not said on their platform is not "gatekeeping" anything.
Posting something on Facebook is not now, has never been, and will never be "free speech".
No matter how much you want it to be.
Facebook has almost certainly already removed what would be considered free speech according to the UN Declaration of Human Rights; it's unlikely that none of the thousands of government-critical posts it removed in India alone, at the behest of the government, could be considered free speech in the human-rights sense.
If you're referring to free speech in the US, while it is generally agreed that companies acting of their own volition cannot be seen as suppressing free speech, it is not impossible that they could be considered a state actor through several means, including having censorship rules influenced by government policy.
Since it can't be ruled out that some speech on Facebook might at some point be free speech in the US legal sense, and much of it is free speech in the human-rights sense, it is not entirely incorrect to talk about free speech on Facebook; but it is entirely incorrect, as far as I can tell, to claim it never can be.
Personally I think it's ethically problematic to require companies to give voice to everyone equally, but it's also somewhat problematic to allow complete freedom of what or who to reject when companies and their services become large enough to be an integral part of society's fabric, and perhaps the political debate.
In some ways companies like Facebook have a function similar to asphalt: they simplify communication greatly. But it's not really whether a piece of land is covered in asphalt made by a private or government entity that determines what is public or private. It's not who made the asphalt, and in several countries it's not even who owns the area under the asphalt, but rather how the covered place is used, as determined by interpretation of law.
Twitter is probably more similar to a public place as it is, but Facebook's marketplaces, events, and groups nevertheless give it a character that can be interpreted similarly.
But they are misusing it. There are articles besides the 19th.
Tweeting content disagreeable with Twitter, Inc. cannot be a right under the UDHR because of Article 20.
1. Everyone has the right to freedom of peaceful assembly and association.
2. No one may be compelled to belong to an association.
"An association" doesn't mean an organized body like a club, political party, or union. It means any tangible or intangible connection of any type between two parties.
The twin freedoms of speech and association means a newspaper can't be forced (in civilized countries) to publish content against their will. The owners of the paper can't be forced into an association with ideas or speech that is not their own.
Same goes for Twitter. If Twitter wants to delete or prevent the dissemination of any tweet for any reason at any time-- that's their right.
If this was not true, then my local newspaper would be violating my rights for not publishing my letters to the editor. Are they?
You can sue anyone for any reason. Facebook and other online services enjoy broad immunity from civil suits IF AND ONLY IF they make a good faith effort to remove defamatory content.
I understand why conservative bigots are obsessed with Section 230: its repeal is being used as a threat against platforms that remove bigoted content. Conservative bigots know that Section 230 has no bearing on Facebook's ability to delete their bigoted comments, but they also know that repealing it would cost Facebook money, so they pressure Facebook, saying "If you allow our bigoted content, we will drop efforts to repeal Section 230".
I will never understand why people believe conservative bigots who say that it has anything to do with free speech.
Section 230 is literally two sentences, easily understandable by the average person. It has nothing to do with speech.
The point is: do you want to live in a world where a handful of authoritarian corporations decide what you can and cannot say publicly?
Livejournal was a hotbed of dubious fanfic (shall we say) in which one or more characters was underage. Maybe such a thing is technically not illegal, but even so, many left the platform once it started to develop a reputation for it. Then it went bust and was sold to a Russian company; it is still used for normal blogging in Russia.
Sounds like Dreamwidth is an entirely different site from Livejournal, it just uses the same open source code. How does dubious fanfic on Livejournal relate to the banning of Dreamwidth?
If that fanfic migrated to Dreamwidth, why not just say there was dubious fanfic on Dreamwidth?
"We can't review this website because the content doesn't meet our Community Standards. If you think this is a mistake, please let us know"
So an unhelpful start, but a start to understanding what may have happened.
My guess is that FB should have blocked a sub-domain but instead blocked the whole domain because of a small set of users.
Alternatively, it was a fat-finger.
You can still manually post links to any site that isn't banned though.
This is a blogging site with tens of thousands of users, around for years.
2. Are other Facebook users with posts linking to Dreamwidth also experiencing this deletion of posts?
Edit: Yes and Yes.
I bet it was added to some spam list automatically because some people really did use it for spam.
That is, anyone can pick who gets to censor / curate their stream. And Facebook just provides the platform. Maybe, for legally required censorship, the platform can still take action. But for anything else, let people pick their censor.
At first blush, it looks like LiveJournal 2.0. There's probably some pretty...interesting stuff, but it's not aggregated and promoted, like FB. It looks like each account is pretty insular.
1. They limited themselves to blocking content that was decidedly harmful (phishing sites, malware, etc.), AND
2. They were transparent about what links they blocked and why.
It seems that neither was the case here.
Transparency would be nice indeed. The argument I keep hearing against it is that they believe it would help the spammers.
So, whilst anti-trust might help sometimes, it is not the best tool here. What we need is legislation.
People don't notice this automation exists, except when it doesn't work. Then they either complain "Why didn't XYZ get caught! This is obviously spam! I could write better automation." or "OMG they banned dreamwidth!".
It's possible they extended this (or something similar) to filter out sites with hate speech/"community guideline violations". And in that case some sub-domain of dreamwidth posted something horrible. And then the automation slammed the hammer on the whole domain.
So while explicit imagery is not the norm, nor is it normative, it’s not unusual and is to be expected.
But I very rarely go to "latest". I tend to read "friends" or a subset thereof.
These gatekeepers flexing should be a huge red flag, akin to the BP oil spill in the tech world.
This might be the last straw for me to delete my Facebook. I would be a bad capitalist if I didn't.
I wouldn't be surprised if Facebook just blocks an entire domain when abuse report incidents hit some threshold.
I'm not saying it's ideal, but I understand how it got to this point. DW should be on:
And maybe FB moderation should consider the public suffix list.
Another solution is to moderate content on your domain. Or require users have separate domains.
If only there were some universally supported distributed cached database of domain properties.
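To make the public suffix list idea above concrete: if moderation treated dreamwidth.org the way browsers treat co.uk (a suffix under which unrelated parties register names), the blockable unit would be each user's subdomain, not the whole site. Here's a toy Python sketch; the hardcoded suffix set is an illustrative stand-in for the real list maintained at publicsuffix.org, and "dreamwidth.org" appearing in it is an assumption, not its actual status on the list.

```python
# Toy public suffix list. A real implementation would load the full list
# from publicsuffix.org; these entries are illustrative assumptions.
PUBLIC_SUFFIXES = {"com", "org", "co.uk", "dreamwidth.org"}

def registrable_domain(host: str) -> str:
    """Return the registrable domain: one label above the longest
    matching public suffix. This is the unit a block should target."""
    labels = host.lower().split(".")
    for i in range(len(labels)):
        candidate = ".".join(labels[i:])
        if candidate in PUBLIC_SUFFIXES:
            if i == 0:
                return candidate  # the host itself is a public suffix
            return ".".join(labels[i - 1:])
    return host  # no matching suffix; fall back to the full host

# With "dreamwidth.org" treated as a public suffix, each user's blog is
# its own "site" for moderation purposes, so blocking one spammer's
# subdomain doesn't take down everyone else's:
print(registrable_domain("spammer.dreamwidth.org"))  # spammer.dreamwidth.org
print(registrable_domain("blog.example.com"))        # example.com
```

The design point is that the suffix list encodes where the "owner boundary" sits in a hostname, which is exactly the information a domain-level block needs and apparently didn't use here.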
One can argue that governments should have the right to censor when it's a matter of urgency, emergency, or national security (though this is debatable), but does the same power lie with Facebook (or any other platform) to censor its users? It goes back to the big debate over whether they are accountable for what gets shared on their platform.
This is a weird question. Of course a private business can choose what content they want on their platform. ESPN, for instance, censors tech news. If an ESPN journalist put together a story about Facebook blocking Dreamwidth, they'd censor it.
Having said that, I know there are some on HN that react emotionally anytime someone engages in "censorship". Oh well. No company is obligated to post content just because someone else wants them to.
Dreamwidth has no such potential pitfalls as a whole platform.
The only point of Facebook's censorship is to control what the willing can see.
The same reason any other forum or platform moderates its content and usually removes sexual content: you want the default experience to be good for the average user of the site.