> Which - keeping in mind that at the time I had absolutely no idea this would all blow up the way it did - caused me to yell quite loudly at Roko for violating ethics given his own premises, I mean really, WTF? You're going to get everyone who reads your article tortured so that you can argue against an AI proposal? In the twisted alternate reality of RationalWiki, this became proof that I believed in Roko's Basilisk, since I yelled at the person who invented it without including twenty lines of disclaimers about what I didn't necessarily believe.
I don't understand what Yudkowsky's trying to argue there. "Even though I don't actually believe in it, I yelled at him for invoking the wrath of the nonexistent basilisk that I don't believe in"?
I believe that a secret government agency whose purpose is to control the entire world will come about.
I believe that this agency will control everything, and will punish everyone it determines learned about its possibility and did not either work to make it a reality or immediately turn themselves in to the authorities.
In light of this, it's important to keep knowledge of this future event from people so they aren't targeted.
Whoops, I just posted this to a public forum.
Regardless of whether the belief is true, if the poster believes it, then they are acting in a reprehensible manner. If you incorrectly believe you have a button that sets off a nuclear warhead in a heavily populated area and choose to press it, you still shoulder some responsibility for your intended actions.
If you think you've discovered a serious info-hazard and your first action is to post it publicly, you are not someone I want frequenting the same online spaces I frequent. Whether that particular info-hazard is, upon examination, remotely hazardous is beside the point.
(If you discover something that scares the crap out of you and you need someone to talk you down, point out why you're wrong, or help you sleep at night, you find one very smart, very stable friend, ask permission, and talk through it; you do not just publish.)
More like, "he believed that he'd accidentally invented an eternal torture machine and then he tried to turn it on, and I yelled at him even though the eternal torture machine didn't work."
It should have been obvious to Roko that either it was a worthless idea (in which case he was safe to post it... but shouldn't have, because it's worthless), or it was a serious, valid idea, in which case he obviously should not have posted it.
Well, yes? If Roko's basilisk were true, it'd be unethical to spread. It turned out not to be true, but if Roko was reasoning from the premise that it was true, or even possibly true, then the action was still unethical.
In attempting to debunk this sort of statement, he has literally written the following: "In the hands of RationalWiki generally, and RationalWiki leader David Gerard particularly ..., [the fictional version of the Roko's Basilisk event] somehow metamorphosed into a Singularity cult that tried to get people to believe a Pascal's Wager argument to donate to their AI god on pain of torture. This cult that has literally never existed anywhere except in the imagination of David Gerard."[1]
"This sentence is a lie." He doesn't seem to actually debunk the idea, just yells about David Gerard for a bit. Can you explain how it's different from Pascal's Wager?
At least the part where it's implied that "Someone from organization X used the basilisk to convince innocent bystanders to donate to X" is false, for any value of X that I know of. Pascal's Wager is/was (I imagine?) used to get people to subscribe to various churches and give them money.