> I'm seriously surprised at how many people here are fans of Yudkowsky
In my (admittedly extremely limited) exposure to him as a person through videos of his talks and some of his transhumanist writing, I didn't really get the impression that he's anything but a very nice and gifted person. So I think this practice of judging or dismissing entire lives by applying a simplistic theme is an antipattern, and that holds here, too. There's a reason why ad hominem attacks are generally frowned upon.
Yes, I personally found the certainty with which several conclusions are asserted as universal truths on LessWrong off-putting at times, especially when used in conjunction with the Rationality label, which insidiously implies that any other analysis of the subject matter would be inherently irrational.
However, some subjectively bogus tenets notwithstanding, I still think it's a valiant and intellectually stimulating attempt at building a new philosophical framework which could potentially keep up with science and future human development. At the very least LessWrong Rationality is a good basis for an ongoing discourse on the subject, and at its best it demonstrates a unique exploration of ethics and, indeed, rationality.
You may argue most or even all of this framework is lifted from earlier philosophical and scientific achievements, but in some areas standing on the shoulders of giants is actually a good sign you're in the right place.
>You may argue most or even all of this framework is lifted from earlier philosophical and scientific achievements, but in some areas standing on the shoulders of giants is actually a good sign you're in the right place.
The good bits are not original and the original bits are not good.
The problem is that LessWrong has a habit of coining neologisms: EY will use his own term for something (the "fallacy of gray", for example, known for 2000 years as the "continuum fallacy"), and then his young readers, who are meeting the idea for the first time ever, conclude that his work is much more original and significant than it is, because searching for his term turns up nothing older. This cuts them off from 2000 years of thinking on the topic and inflates LW's halo effect.
What about people like me that would never have learned about the "continuum fallacy" if it weren't for Eliezer's willingness to stoop to my level and explain things like I'm 5 (or, more accurately, like I'm a fan of Harry Potter)?
I personally don't care one bit if the good bits aren't original. They are approachable, and nobody else has done that for me. So I applaud Eliezer and his efforts, regardless of whether or not he has broken ground philosophically.
Would you have known that everyone else had been calling it the continuum fallacy all that time? No, you wouldn't - you'd think Yudkowsky was uniquely insightful.
Furthermore, you wouldn't learn anything beyond the limits of Yudkowsky's knowledge, or - more importantly - that there was anything beyond those limits.
The habit of neologism makes stuff impossible to look up, and creates the illusion that this is new ground, not old, and that there isn't already a world out there.
su3su2u1, debating this matter with Scott Alexander (Yvain), sums up a lot of the problems with the LW world view, and I largely agree with that summary. (I'm as familiar with the world view as anyone who doesn't actually drink the Kool-Aid can be: I've been on LW around four years, read the Sequences through twice, and read literally all of LessWrong through from the beginning twice.) https://storify.com/lacusaestatis/sssp-su3su2u1-debate
I'll quote one telling bit, which points out the level after Bayes:
> Heck, there are well defined problems where using subjective probability isn’t the best way to handle the idea of “belief”- when faced with sensor data problems that have unquantified (or unquantifiable) uncertainty the CS community overwhelmingly chooses Dempster-Shafer theory, not Bayes/subjective probabilities.
Do you remember the Sequences post mentioning the words "Dempster-Shafer"? Me neither.
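For the curious, here's a minimal sketch of what Dempster-Shafer combination looks like; the sensor-fusion example and all the names in it are mine, not anything from the thread. The point of contrast with a Bayesian posterior is that mass can sit on a *set* of hypotheses, leaving uncertainty uncommitted rather than forcing you to spread a prior over singletons:

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two basic mass assignments.

    m1, m2: dicts mapping frozenset-of-hypotheses -> mass (each sums to 1).
    Mass on a multi-element set is belief left uncommitted among its members,
    which is how the theory represents unquantified uncertainty.
    """
    combined, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc  # mass the two sources flatly contradict on
    if conflict >= 1.0:
        raise ValueError("sources are in total conflict")
    # Renormalize by the non-conflicting mass
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

# Hypothetical fusion: sensor 1 says 'a' with mass 0.6 and stays agnostic
# with 0.4; sensor 2 says 'b' with 0.5 and stays agnostic with 0.5.
m1 = {frozenset({'a'}): 0.6, frozenset({'a', 'b'}): 0.4}
m2 = {frozenset({'b'}): 0.5, frozenset({'a', 'b'}): 0.5}
print(combine(m1, m2))  # masses for {'a'}, {'b'}, and the still-agnostic {'a','b'}
```

Note the combined result still carries mass on the whole frame {'a','b'}: neither source is forced to pretend it has a full probability distribution.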
(And then there's the use of "Bayesian" to mean things that nobody else uses the term for. As su3su2u1 puts it: "I suspect I’d be hard-pressed to write about probability theory in a way that wouldn’t fit some idea you cover by the word 'Bayesian.'")
Yudkowsky definitely gets credit as a good pop science writer. The habit of neologism, not so much, and he'd deserve more credit still if he weren't building the encapsulated, self-referential world that LW is. In philosophy, Yudkowsky is the quintessential Expert Beginner: http://www.daedtech.com/tag/expert-beginner
untiltheseashallfreethem notes in http://untiltheseashallfreethem.tumblr.com/post/107159098431... : "I think Eliezer did a great service in writing these ideas up. But they are not his ideas, and I’m really worried that a lot of people read LessWrong, see that Eliezer is right about this stuff, assume he came up with it all, and then go on to believe everything else he says." And that's a serious problem when the good stuff is not original, and the original stuff is not good.
I haven't read everything on LessWrong, nor do I have time to keep up with the meta-discussion of Eliezer's neologism habits, but I can say that I've never thought he invented any of the concepts I learned through HPMOR or the Sequences I have read.
On the contrary, he seems very intent on citing the books and people from whom he learned these things, at least in my more limited experience. You definitely seem to have studied the issue much more than I have.
It took way too much reading to realise there wasn't actually a "there" there, that none of the pointers-to-pointers-to-explanations actually resolved in the end. The evidence is pretty clear that I have way too much time on my hands.