> This then causes young men to decide they should be in open relationships because it's "more logical"
Yes, which is 100% because of "LessWrong" and 0% because groups of young nerds do that every time, so much so that there's actually an XKCD about it (https://xkcd.com/592/).
The actual message regarding Bayes' Theorem is that there is a correct way to respond to evidence in the first place. LessWrong does not mandate that you manually calculate these updates (nor would that be a good idea: humans are very bad at it).
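For concreteness, the "correct way" here is just Bayes' rule, which is a one-line calculation. A toy sketch with made-up numbers (again, nobody is suggesting you grind through this by hand for everyday beliefs):

```python
# One Bayesian update: P(H | E) = P(E | H) * P(H) / P(E).
# All numbers below are made up purely for illustration.
prior = 0.01            # P(H): how likely the hypothesis seemed before the evidence
p_e_given_h = 0.9       # P(E | H): how likely the evidence is if H is true
p_e_given_not_h = 0.05  # P(E | ~H): how likely the evidence is if H is false

p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / p_e
print(round(posterior, 3))  # 0.154: the evidence should move you, but not to certainty
```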
> Also they invent effective altruism, which works until the math tells them it's ethical to steal a bunch of investor money as long as you use it on charity.
Given that this didn't happen with anyone else, and most other EAs will tell you that it's morally correct to uphold the law, and in any case nearly all EAs will act like it's morally correct, I'm inclined to think this was an SBF thing, not an EA thing. Every belief system will have antisocial adherents.
> The actual message regarding Bayes' Theorem is that there is a correct way to respond to evidence in the first place.
No, there isn't a correct way to do anything in the real world, only in logic problems.
This would be well known if anyone had read philosophy; it's the failed program of logical positivism. (Also the failed 70s-ish AI programs of GOFAI.)
The main reason it doesn't work is that you don't know what all the counterfactuals are, so you'll miss one. Aka what Rumsfeld once called "unknown unknowns".
They're instead buying castles, deciding scientific racism is real (though still buying mosquito nets for the people they're racist about), and getting tripped up reinventing Jainism when they realize drinking water causes infinite harm to microscopic shrimp.
And of course, they think evil computer gods are going to kill them.
> No, there isn't a correct way to do anything in the real world, only in logic problems.
Agree to disagree? If there's one thing physics teaches us, it's that the real world is just math. I mean, re GOFAI, it's not like Transformers and DL are any less "logic problem" than Eurisko or Eliza were. Re counterfactuals, yes, the problem is uncomputable at the limit. That's not "unknown unknowns", that's just the problem of induction. However, it's not like there's any alternative system of knowledge that can do better. The point isn't to be right all the time, the point is to make optimal use of available evidence.
> buying castles
They make the case that the castle was good value for money, and given the insane overhead for renting meeting spaces, I'm inclined to believe them.
> scientific racism is real (though still buying mosquito nets for the people they're racist about)
Honestly, give me scientific racists who buy mosquito nets over antiracists who don't any day.
> getting tripped up reinventing Jainism when they realize drinking water causes infinite harm to microscopic shrimp.
As far as I can tell, that's one guy.
> And of course, they think evil computer gods are going to kill them.
I mean, I do think that, yes. Got any argument against it other than "lol sci-fi"?
> I mean, re GOFAI, it's not like Transformers and DL are any less "logic problem" than Eurisko or Eliza were.
Hmm, they're not a complete answer to anything, but they're pretty different in that they're not discrete. That's how we can teach them things we can't define, like writing styles. It seems like a good ingredient.
Personally I don't think you can create anything humanlike without it being embodied in the world; the world is mostly there to keep you honest and prevent you from mixing up your models (whatever they're made of) with reality. So that really limits how much "better" you can be.
> That's not "unknown unknowns", that's just the problem of induction.
This is the exact argument the page I linked discusses. (Or at least the whole book does.)
> However, it's not like there's any alternative system of knowledge that can do better.
So's this. It's true; no system of rationalism can be correct because the real world isn't discrete, and none are better than this one, but also this one isn't correct. So you should not start a religion based on it. (A religion meaning a principle you orient your life around that gives it unrealistically excessive meaning, aka the opposite of nihilism.)
> I mean, I do think that, yes. Got any argument against it other than "lol sci-fi"?
That's a great argument. The book I linked calls it "reasonableness". It's not a rational one though, so it's hard to use.
Example: if someone comes to you and tries to make you believe in Russell's teapot, you should ignore them even though they might be right.
Main "logical" issue with it though is that it seems to ignore that things cost money, like where the evil AI is going to get the compute credits/GPUs/power bills to run itself.
But a reasonable real world analog would be industrial equipment, which definitely can kill you but we more or less have under control. Or cars, which we don't really have under control and just ignore it when they kill people because we like them so much, but they don't self-replicate and do run out of gas. Or human babies, which are self-replicating intelligences that can't be aligned but so far don't end the world.
Iunno, quantized networks are pretty discrete. It seems a lot of the continuity only really has value during training. (If that!)
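To make "pretty discrete" concrete, here's a minimal sketch of symmetric int8 quantization in plain NumPy (the shape and scale choice are just illustrative): after training, the continuous weights get snapped to a finite integer grid, and inference runs on that grid.

```python
import numpy as np

# Illustrative only: quantize a small weight matrix to int8 and see what inference keeps.
weights = np.random.randn(4, 4).astype(np.float32)    # continuous, trained weights
scale = np.abs(weights).max() / 127                    # symmetric per-tensor scale
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
dequantized = q.astype(np.float32) * scale             # what the quantized model actually uses
print(np.abs(weights - dequantized).max())             # small error, but the value grid is finite
```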
> So's this. It's true; no system of rationalism can be correct because the real world isn't discrete, and none are better than this one, but also this one isn't correct. So you should not start a religion based on it.
I mean, nobody's actually done this. Honestly I hear more about Bayes' Theorem from rationality critics than rationalists. Do some people take it too far? Sure.
But also
> the real world isn't discrete
That's a strange objection. Our data channels are certainly discrete: a photon either hits your retina or it doesn't. Neurons firing or not is pretty discrete, physics is maybe discrete... I'd say reality being continuous is as much speculation as it being discrete is. At any rate, the problem of induction arises just as much in a discrete system as in a continuous one.
> Example: if someone comes to you and tries to make you believe in Russell's teapot, you should ignore them even though they might be right.
Sure, but you should do that because you have no evidence for Russell's Teapot. The history of human evolution and current AI revolution are at least evidence for the possibility of superhuman intelligence.
"A teapot in orbit around Jupiter? Don't be ridiculous!" is maybe the worst possible argument against Russell's Teapot. There are strong reasons why there cannot be a teapot there, and this argument touches upon none of them.
If somebody comes to you with an argument that the British have started a secret space mission to Jupiter, and being British they'd probably taken a teapot along, then you will need to employ different arguments than if somebody asserted that the teapot just arose in orbit spontaneously. The catch-all argument about ridiculousness no longer works the same way. And hey, maybe you discover that the British did have a secret space program and a Jupiter cult in government. Proposing a logical argument creates points at which interacting with reality may change your mind. Scoffing and referring to science fiction gives you no such avenue.
> But a reasonable real world analog would be industrial equipment, which definitely can kill you but we more or less have under control. Or cars, which we don't really have under control and just ignore it when they kill people because we like them so much, but they don't self-replicate and do run out of gas. Or human babies, which are self-replicating intelligences that can't be aligned but so far don't end the world.
The thing is that reality really has no obligation to limit itself to what you consider reasonable threats. Was the asteroid that killed the dinosaurs a reasonable threat? It would have had zero precedents in their experience. Our notion of reasonableness is a heuristic built from experience, it's not a law. There's a famous term, "black swan", about failures of heuristics. But black swans are not "unknown unknowns"! No biologist would ever have said that black swans were impossible, even if they'd never seen nor heard of one. The problem of induction is not an excuse to give up on making predictions. If you know how animals work, the idea of a black swan is hardly out of context, and finding a black swan in the wild does not pose a problem for the field of biology. It is only common sense that is embarrassed by exceptions.