"Logic is the beginning of wisdom, not the end" - Spock

Anecdotally I found that the Less Wrong community tends to be decidedly more full of crap than average. In the same vein as spiritual materialism[1], many people who engage in a bias witch-hunt seem to fall prey to "logical materialism", where the whole exercise turns into people deluding themselves into thinking they're somehow "better" than others because they're less full of crap than average.

It's good to know thyself, but it's no use if your knowledge isn't tempered by wisdom, and you're not going to get that by reading blog posts about cognitive biases online, no matter how good the posts.

[1] http://en.wikipedia.org/wiki/Spiritual_materialism




It's nice to know I'm not the only one who thinks this. I think it's really telling that a community which is ostensibly concerned with the science of achieving desires spends so much time focused on the "problem" of akrasia.

It's also heartbreaking to see intelligent people getting so excited about ideas like cryonics and personality uploading. I mean, they're interesting things to talk about, but a lot of people on LW seem to actually think they might get to live forever. It's kinda sad.


> It's also heartbreaking to see intelligent people getting so excited about ideas like cryonics and personality uploading. I mean, they're interesting things to talk about, but a lot of people on LW seem to actually think they might get to live forever.

I wouldn't totally dismiss sci-fi concepts like brain uploading. But then I'm not totally sold on their viability either.

One of the things I always thought was interesting about cryonics is data integrity. It could be a while before you hit the singularity (or whatever it is you think will wake you up), and even with liquid nitrogen I doubt your brain can be 100% preserved. So let's say hypothetically you get yourself some proper Ray Kurzweil recursively improving AI, and as a common courtesy it decides to revive the Alcor people. If you have 99% of someone's brain image and use statistical methods like Bayes' theorem to fill in the rest, is it still the same person when you wake them up? How about 99.99% of their brain? 99.999999%? (Which brings us back to the semantics of labels and reductionism vs. holism.)
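To make the "fill in the rest" idea concrete, here's a toy sketch (entirely my own construction, nothing Alcor or Kurzweil actually proposes, and names like observed_fraction are made up for illustration): treat the brain image as a big array of numbers, destroy a random fraction, and impute the losses with an empirical-Bayes prior fitted to what survives. The point is just that the reconstruction error shrinks as preservation improves but never reaches zero, which is the "same person?" question in numeric form.

    # Toy model only: a "brain image" as a flat array of real values.
    import numpy as np

    rng = np.random.default_rng(0)
    true_state = rng.normal(size=1_000_000)   # the original "person"

    observed_fraction = 0.99                  # 99% survives freezing
    mask = rng.random(true_state.size) < observed_fraction

    # Empirical Bayes: fit a Gaussian prior to the surviving values,
    # then impute each lost value with the prior mean (which is the
    # posterior mean when nothing else is known about that unit).
    prior_mean = true_state[mask].mean()
    reconstruction = np.where(mask, true_state, prior_mean)

    rms_error = np.sqrt(np.mean((reconstruction - true_state) ** 2))
    print(f"RMS error at {observed_fraction:.0%} preserved: {rms_error:.4f}")
    # Try 0.9999, 0.99999999: the error shrinks but never hits zero.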

People who think they'll live forever have a huge logic fail on their hands, the heat death of the universe being an obvious one. [0] In fact, every time I think of the whole business, Isaac Asimov's The Last Question comes to mind. [1]

[0]: http://en.wikipedia.org/wiki/Heat_death_of_the_universe

[1]: http://www.multivax.com/last_question.html


> ...is it still the same person when you wake them up?

This is pretty much where I land on this. Given any empirical theory of consciousness, it's never going to be "I get to live in the computer," just "I die, but a copy of my brain lives in the computer." And it's pretty hard to draw a bright line between that and "I die, my brain slowly decays for thousands of years, then it's surgically reconstructed and awoken." Still feels like death to me.

Of course, to your transhumanist theorist, there's probably just as much connection between me and the computer as there is between me in the night and me in the morning; it's all just an illusion created by a persistent brain state. But that doesn't help either, because now you're describing a kind of immortality -- "This exact stream of continuous consciousness is a dead end, but something very like it will continue to be and think of itself as part of me and that gives me and it some comfort about the whole thing" -- which the general public has been achieving quite successfully for some time in the form of procreation.

In fact, since every living organism represents a terminal link in a chain of unbroken life dating back to the first self-replicating molecule, it might be quite reasonable to say that we have all been alive for billions of years at least, we just don't remember most of it. But this is changing: I can go on Wikipedia today and recover the memories of our culture dating back for most of its existence.

Naturally, they grow vaguer the further they go back in time, as memories do; but for events that take place today, we have a record which far exceeds human memory in accuracy and exactness of fact, and which will very soon be competitive with it in emotional effect. It is quite realistic for me to expect to create a record of my life which has as much effect on my descendants in a hundred years as my own memories of today would have on me were I to live that long.

So I think it's possible the transhumanists missed the boat. Or rather, they're on the boat already and just don't realize it. The human macro-organism does, starting from now, seem to stand a decent chance of living to see the heat death of the universe.


transhumanists == a bunch of people who are unhappy with their bodies.

I don't mind that as much, though -- I am much more disturbed by the prospect of a tyrannical or even "Friendly" AI that some of them seem to be fond of.

Ignoring the implications of an AI that may be required to support a transhuman, I believe one could draw up a very pragmatic argument for why transhumanism could be useful even to us embodied souls. What you described as inter-generational memory via things like history books and Wikipedia is good but not perfect (the classic example is that history gets written by the victors). A transhuman living for thousands of years would potentially bring a fresh perspective to the table, even if that perspective were imperfect too. Just as both a free market and an ecosystem benefit from diversity in their pool of ideas/genes, so can humans.


>I am much more disturbed by the prospect of a tyrannical or even "Friendly" AI some of them seem to be fond of.

Same. Even if such a thing is possible, the probability that it's done right the first time is close to nil. And since a recursively improving singularity AI can be assumed to irreversibly take control of the balance of power, it's not really something you can afford to screw up.

Of course, Yudkowsky (to the extent that he's doing anything) seems to be working off the assumption that if he doesn't do it, someone else will.


I don't have a big problem with their ideas about AI, simply because they're so disconnected from reality. I mean, if we go another thousand years without an apocalypse, it seems more or less bound to happen, so in the abstract it is something that should be investigated; however, it's so far off now that I'd lay serious money that all of their theorizing will be pretty much useless when the first implementations come around. In fact, since I've read and understood the Basilisk and said so publicly, you could fairly say I'm willing to stake my life on that :) I suppose I can only have faith that, when it comes about, people will do their best to make it all work out OK.

The argument I'm making about memory is that when communication has become advanced enough, you can't make a clear distinction between inter-generational memory and meat-memory over a significantly long lifespan. People change over time; give it enough time and you're as different from yourself now as your great-grandchildren will be.

I don't think we need long-lived human beings or human personality constructs to gain the societal advantages of "I remember when..." It's feasible today for a person to record and archive audiovisual, geospatial and limited haptic data of their entire life experience, beginning to end. We can't record your thoughts, but if they're important you can write them down. I'd also wager that we'll see almost fully convincing sensory recording, which is a plain prerequisite for uploading, well before any life-extension technology that deserves the title of immortality. It would then hardly be unrealistic for your descendants to say that they remember the things that happened to you, because these recordings would be far superior to memory.

Of course, the only issue is that we've had this sort of thing for a good while now, and it turns out we just aren't that interested in the things that happened a long time ago, just like, aside from the highlights, I don't care that much about what happened to me ten years ago.

Ten years ago, of course, I thought that everything that was happening to me was quite important.[0] That's why I label this idea of immortality "greedy"; it represents the whim of a brain state at the present moment to continue to influence the world long after it has become irrelevant. Just look at the current state of US politics to see where that gets us. (I've never seen a transhumanist argue that every transient state should be preserved in perpetuum, but I'd be curious to know what they tend to think about it.)

The point being that if uploading constitutes a form of immortality, so does having kids; the same theory of consciousness underlies both.

[0] This is a bit of a tangent, but I think this is (most of) the reason that burial rituals are one of the cornerstones of human society. Obviously it doesn't matter to the dead person what happens to them, but it is crucially important for us to have a say in what happens to us; we hope that, if we respect our parents' wishes after they die, our children will respect ours. And we take this so seriously that, in fact, they do.

I suppose I consider transhumanism, especially cryonics and uploading, to be a very highly developed burial practice. If it is the wish of a dying man to have his brain frozen in nitrogen, I will respect his wish, and even humor his beliefs about what that might mean. But I don't believe it means any more in reality than if we stuck him in the ground with everyone else.

And yes, I recognize the irony in writing this much about something I think is silly to spend time thinking about :)


I completely agree with you that having kids is the best way of achieving immortality, and patently the one designed for us, but I'd still love to talk to someone who grew up among the Ancient Greeks. Yes, people change with time, but some memories stay, and I would be plain curious to find out which ones do. Reading (and perhaps, to a lesser extent, watching recordings) doesn't quite give you that information, as evidenced by the fact that there are plenty of scholars out there who read a lot of material yet utterly disagree with each other on what the Ancient Greeks (or Hebrews) were really like.


> actually think they might get to live forever

And they aren't horrified at the prospect of that being possible?

Not much thinking going on there, I suppose.


Eliezer Yudkowsky's younger brother died in 2004 and it inspired a lengthy email thread about the subject.[0]

I expect it is not so much about wishing not to die, which even a transhumanist must admit is at least no worse than living, but wishing not to have loved ones die. Truly tragic.

[0] http://yudkowsky.net/other/yehuda


Why would you be horrified at the prospect of living?

I suspect it is you that needs to put a bit more thought into this matter.


Oh wow! A downvote! So, the message I'm getting is that a) it is appropriate to be horrified at the prospect of living; and furthermore b) asking why is inappropriate...?

This is seriously weirding me out.


The message I've taken from this entire subthread is "try to laugh rather than shake my head." The behavior makes a lot of sense if I frame it in terms of status signaling (which helps make sense of so much that I suspect it of being too broad a framework). My own comment on the topic of living: sure, it may be physically impossible to live forever, but I think shooting for even a "mere" 200 years is doable and would be fucking awesome. At least we're not dogs; they get less than two decades.


"... more full of crap than average..."

Than average what?

I often lurk on LessWrong, and post there very occasionally. I find it to be a very rich source of original ideas, some of which are truly profound. YMMV, of course.


As a frequent LW contributor, I agree that Less Wrong users can be arrogant in a way that's counterproductive. I'm actually planning to write a series of posts presenting a sophisticated argument for this at some point.

I'm curious if you think there is anything else we could stand to work on. I interpreted your use of the word "wisdom" to mean a lack of arrogance, but if you were using it to mean other stuff as well I'd love to know.

(I'm a fan of yours, BTW.)


Anecdotally I've seen some pretty good articles on lesswrong, and some not so good ones.

But the idea "Hey, want to be more rational? Join our community" gives me the willies. If you want to be more rational, then joining a groupthink-ish community is the last thing you should want to do.


Reminds me of my Mensa membership (hey, at 16 you're young and impressionable). Quit that ghetto a week into reading their mailing lists. Imagine the complete opposite of HN.


Given that everyone is wrong some of the time, wisdom suggests that being involved in both communities will lead to being more right than either of them.



