I very often type out comments on HN and then delete them. As Snowden alludes to, everything we submit to public forums is logged and stored. There is no doubt in my mind that these comments can be associated with my real identity with very little effort.
Another side of censorship I consider often is signal to noise. There is no reason to prevent people from saying whatever they want if no one will ever see it. I recall a stat that ~400 hours of content are uploaded to YouTube every minute. Or the long tail of Twitch streamers with 0 viewers.
Finally, there is a threat of violence. We all know what happens to high-profile journalists because of their high profile. I often wonder how many nobodies disappear for some string of comments on some no-traffic forums/blogs.
I do the same. I write tweets, or facebook notes to family or friends -- argumentative or loving. Or hacker news comments, or blog posts, or whatever. I often spend a surprising amount of time editing, re-reading, and honing them.
Then, I tap `CTRL+A, delete, CTRL+W`.
I have an old draft post where I mused about a coworker's hamfisted and awkward attempts to magnify the voices of those around him whom he felt were under-represented. He, a cisgender cissexual 30s-ish white tech guy, would often single out visible-minority individuals and then loudly try to cajole them into giving opinions on topics he felt they should have opinions about, or whenever he felt they were being unheard. It was surely coming from a good place, but without fail, it made his targets and everyone else uncomfortable. It was interesting, so I wrote about it.
Then I didn't post it, because I really don't know if my own take as another 30s-ish blah blah white tech guy on some hamfisted attempt to be an ally would ring true or read well at the time. Or especially years later. But I wrote it and then stashed it. Part of that 'chilling effect,' right?
But, I also wrote a big 'intro to contributing to open source' post, which I felt was too long and rambling, maybe had its own share of bad takes, and I honestly worry that I'm not qualified to post it. I'm just some guy! So I stashed that, too.
Was I self-censoring in the most dangerous way in the first instance, but not in the second? What brings about the distinction? I felt in both cases that the post might reflect poorly on me, so I didn't post it.
Snowden's post reads to me a bit like a writer writing about having writer's block. I imagine him sitting down to write something, because he has this big new writing project he has to do, the newsletter. He comes up with 10 ideas, but then in thinking about really seriously grappling with each of them, he crosses each one out. Who is he, after all? Just some guy! He did a great thing that one time, that doesn't make him an expert on every info-sec/cultural/political topic du jour.
So... what's left? Well, he can write about what he's going through! And 'how bad it is to censor yourself' is a pretty unique and elevated take on writing about 'how hard it is to figure out what to write', so, voila.
(And I'd love to just delete this rather than post it.)
> Was I self-censoring in the most dangerous way in the first instance, but not in the second? What brings about the distinction? I felt in both cases that the post might reflect poorly on me, so I didn't post it.
If you as an individual self-censor on a particular topic because of a lack of expertise, that's not particularly dangerous to society - different people have different areas of knowledge, it'll all average out.
If the environment around a particular topic leads every moderate to self-censor on that topic, that is a lot more dangerous because it applies systematically; the societal discourse on that topic will be dominated by zealots who may steer us ever further astray.
No “perhaps” necessary—Facebook employees have even published academic papers based on this.
> Social media also affords users the ability to type out and review their thoughts prior to sharing them. This feature adds an additional phase of filtering that is not available in face-to-face communication: filtering after a thought has been formed and expressed, but before it has been shared. … Last-minute self-censorship is of particular interest to [social networking sites]…
> In this paper, we shed some light on which Facebook users self-censor under what conditions, by reporting results from a large-scale exploratory analysis: the behavior of 3.9 million Facebook users.
Thanks for fighting the urge to delete and posting this. I appreciate the nuance of your perspective, and I think it brings a lot of value to this discussion. I don’t particularly want to express an opinion about the broader topic under discussion, but I do think that regardless of the cause, introspection of our “hot takes” as to whether they’re worth taking public or not is a valuable thing, and something it seems like not enough people do.
The story in its original incarnation may be a bit outlandish. However, dial it in a few notches and you get a powerful AI that can associate you with anything you have ever left a data trail for, in the hands of an unknown future bad actor. In fact I wouldn't be surprised if that was the thrust behind its initial conception.
Given that, the Basilisk may already be in its infant stage.
Tangent: You have nothing to fear from Roko's Basilisk. I analysed it from the perspective of four different decision theories, and in every one:
• It doesn't make sense to build the evil AI agent; and
• the evil AI agent has no incentive to torture people who decided not to build it (unless its utility function relates to such torture, but it doesn't make sense to build that AI agent unless you want to torture people – in which case, you should be scared of the mad scientist, not the AI).
I didn't publish because I find my essays embarrassing, but if you have specific worries I can assuage them.
The Basilisk's tangible threat to the person in the past also relies upon the notion that a perfect simulation of you is indistinguishable from you (or can be used as a bargaining chip to regulate your actions in the present), which is a hypothesis that rests on very shaky ground.
The easiest way to escape the Basilisk's control is to say "Future simulation of me? Screw that guy; he sucks and gets whatever's coming to him."
It also relies on the fact that you can simulate the Basilisk well enough to know that it'll definitely hurt you (or the simulated you), such that your observation of its (conditional) decision to hurt you affects your actions.
However, we're not good enough at simulating the Basilisk; if it would decide to do something else, we wouldn't know, so it has no reason to waste resources on torturing us, so we have no reason to believe the threat credible, so nobody will make the Basilisk in the first place.
No, that would be impossible. The idea is that a future AI is built with the goal of [something good] and discovers self preservation and then does the torture stuff.
> a future AI is built with the goal of [something good]
Er, no, the idea is that someone hypothesizes the (malicious) AI, and then is compelled to (intentionally) build it by the threat of being tortured if anyone else builds it and they did not. The AI is working as designed.
See also prisoner's dilemma and tragedy of the commons; Roko's Basilisk is only concerning because of the reasoning that someone else will ruin things for everyone, so you had better ruin things first.
No, that's a version of the Basilisk that makes sense (almost – you don't need an AI for that). The original formulation was that the AI, built with the goal of [something good], would decide to torture people who didn't help build it so the threat of torture encouraged people-in-the-past to build it. (Yes, this is as nonsensical as it sounds; such acausal threats only work in specific scenarios and this isn't one of them.)
But yes, even if the Basilisk could make the threat credible (perhaps with a time machine), your strategy would still work. You can't be blackmailed by something that doesn't exist yet unless you want to be.
I think you've got an interesting idea there, but I'm not sure why you'd associate it with Roko's Basilisk, given that people who are aware of it tend not to take it very seriously. It seems like you'd be better off just presenting your own idea, and maybe gesturing that it was "inspired by other ideas from LessWrong" if you really feel the need.
> I often wonder how many nobodies disappear for some string of comments on some no-traffic forums/blogs.
Maybe a bit conspiratorial. Even dictators need to prioritize their actions, and they'd go after speech that has impact (like that of said high-profile journalists) rather than random noise on no-traffic forums from anonymous authors.
A lot gets lost in the internet noise. Nobody cares about it.
> Even dictators need to prioritize their actions.
A bastardization of Andy Grove's famous words: "Only the paranoid [dictators] survive".
You'd be surprised at how paranoid dictators get. Every little sign of disobedience is magnified into a "threat". This is the reason why almost all autocratic systems end up devolving into police states with (usually) multiple large intelligence agencies keeping each other in check.
I try not to be paranoid about it, but there are several governments that I just don't talk about online. They are known for aggressive Internet task forces and have histories of taking action. I don't see any benefit in publicly voicing my opinion on them.
All it takes is for them to indefinitely store all content posted to certain sites (and you can bet reddit and hacker news are on that list), then run algorithms to de-anonymize it. Then they can score you.
Maybe in the future you get a promotion. Maybe you're crossing a border. Maybe a YouTube video you post goes viral. Suddenly that scored record of you sets off an alarm.
The digital history you are creating today isn't going away for the rest of your life.
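To make the "run algorithms" part concrete, here is a minimal, purely illustrative sketch of what such a pipeline could look like: match an anonymous comment to a known account by writing-style similarity, then attach a crude keyword score to the matched identity. Every detail (the trigram profiles, the 0.8 threshold, the flagged terms) is a made-up assumption for illustration, not a description of any real system:

    # Illustrative only: naive stylometric matching plus a toy keyword "score".
    # All thresholds and terms below are hypothetical assumptions.
    from collections import Counter
    from math import sqrt

    def char_trigrams(text):
        # Profile a text by its character-trigram frequencies.
        t = text.lower()
        return Counter(t[i:i + 3] for i in range(len(t) - 2))

    def cosine(a, b):
        # Cosine similarity between two frequency profiles.
        dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
        norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def risk_score(text, flagged_terms):
        # Toy "score": count occurrences of flagged terms.
        words = (w.strip(".,!?") for w in text.lower().split())
        return sum(w in flagged_terms for w in words)

    known_author_posts = "posts collected from an identified account"   # placeholder corpus
    anonymous_comment = "a comment scraped from a pseudonymous forum"   # placeholder comment
    similarity = cosine(char_trigrams(known_author_posts), char_trigrams(anonymous_comment))
    score = risk_score(anonymous_comment, {"protest", "leak"})          # hypothetical term list
    if similarity > 0.8:  # hypothetical matching threshold
        print(f"possible identity match (sim={similarity:.2f}), score={score}")

Real stylometry is far more sophisticated than this toy, which is sort of the point: if a few lines of code can rank candidate matches, a well-funded actor holding your full posting history can do much better.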
You seemed fairly reasonable up until you revealed that you believe in some paranoid conspiracy to disappear internet nobodies for their social media comments...
> I often wonder how many nobodies disappear for some string of comments on some no-traffic forums/blogs.