Hacker News
How to Be Happy (lesswrong.com)
282 points by adamzerner on July 20, 2014 | 213 comments



At the time in my life when I stopped reading LessWrong I became remarkably happier within a week. I attribute this to two reasons: 1. LessWrong's focus on optimizing your life is so intense that whenever I failed at something, or didn't follow through on something I said I would do, I felt bad about it. 2. One thing that makes me happy (and I think most others would agree they're happiest when this occurs as well) is accomplishment. I like the feeling of working on a goal and having that lead to tangible results, whether that means something I've created or something like lifting more weight in the gym.

By actually making a change to how I act and seeing the results of that change in the world I developed a lot of confidence (I was formerly very shy and uncomfortable with myself despite having a lot of things going for me).

I want to come back to the first point briefly as well. At the time I was reading LW it often felt like I was being productive. I'd think to myself, "Oh, I just need to read this article on motivation and akrasia (the LW term for procrastination), and I'll never fail to do what I planned to do again." This caused me to spend my time preparing to do things that I never ended up doing, and it also set me up to feel angry with myself when I inevitably failed.

One thing that helped me a lot was accepting that on top of everything we humans have going for us we're really just animals, and we get upset, or jealous, or sad, or angry, just like any other animal. So rather than try to eliminate these negative emotions from my life I came to accept them as normal and try to structure my life so that they occur as infrequently as possible. I've found a few ways of doing this that work for me, and I can post some of what's worked if anyone's interested. I still have bad days or bad moments, but when I do I accept it as normal and try to see what I can learn from it to lessen the impact next time, rather than trying to over-optimize the life of an unpredictable, fallible human making most of the important decisions in his life based on emotion (hint: that's all of us).

The allure of LW is that we're perfectly rational, or that we can at least make ourselves that way through sheer force of will. For better or for worse (I think for better), that's not the case.


> One thing that makes me happy is accomplishment.

What you describe sounds suspiciously similar to what Muehlhauser calls creating "success spirals" [1], which is coincidentally in the same sequence "The Science of Winning at Life".

> The allure of LW is that we're perfectly rational or that we can at least make ourselves that way through sheer force of will.

And to be fair, LW: does make a distinction between epistemic (ideal) rationality and instrumental (pragmatic) rationality [2]; is aware of its propensity for insight porn [3]; and has made progress towards bringing things in the stratosphere [4] back down to the object level (e.g. [5][6]).

[1] http://lesswrong.com/lw/3w3/how_to_beat_procrastination/

[2] http://wiki.lesswrong.com/wiki/Rationality

[3] http://lesswrong.com/lw/9p/extreme_rationality_its_not_that_...

[4] http://lesswrong.com/lw/58g/levels_of_action/

[5] http://lesswrong.com/lw/58m/build_small_skills_in_the_right_...

[6] http://lesswrong.com/lw/k4n/a_brief_summary_of_effective_stu...


I don't think it's fair to blame Less Wrong for over-optimizing. Less Wrong is just a resource and can be used in bits and pieces. Just because it can be overused doesn't mean that it is a bad resource.

Thinking about it some more... perhaps they don't warn you enough against over-optimizing. Like you said, there's a ton of advice about how to optimize a plethora of things. If you're going to do that, perhaps it's your responsibility to sufficiently warn against over-optimizing. But still, despite the fact that it could be overused, I think it's a really useful resource if used appropriately.


I think another key factor in a lot of modern unhappiness is that, as the article notes, money does not correlate with happiness as long as you are above the poverty line.

We have lost the art of teaching people how to have a philosophy of life. We teach people how to have a craft. We teach basic morals. In religious households you may even be taught a pretty comprehensive "life philosophy", but it is often underpinned by many practices that defy rationality, and I think, lead many to unhappiness.

A few years ago I stumbled on some pretty interesting reading about Stoicism, and I have since read much of ancient Greek philosophy and a bit of more modern stuff like Kant. It still sort of blows my mind just how insightful the ancient Greek philosophers were about human nature. One of the key principles of Stoicism, which has an analogue or similar set of features in many other philosophies, is that it is essentially not the things that happen to us that cause us pain/grief/sadness/unhappiness, but our judgment of and reaction to those things. It seems a little trite in a short post like this, and I can't sum up something as complex as an entire school of philosophical thought, but it has worked well for me.

I have been in conversations where life philosophies like Stoicism and other related ancient Greek philosophies get dismissed out of hand, only for the conversation to wind its way around to our current empirical approach to happiness. This Less Wrong article is /exactly/ what I call the empirical approach to happiness: study all of the correlates, take all the best modern science about which behaviors yield the most happiness and which yield the least, and optimize accordingly.

Yet 1800+ years ago, ancient philosophers identified many of the same principles articles like this are encouraging us to apply to our lives. What is missing from this article, and what I think can often cause many people to fail at sticking with a plan, is a coherent and binding strategy for all of these disparate "strategies for happiness". The ancient philosophies are not perfect; we moderns have learned a thing or two since Socrates and Epictetus, but their insights on the human condition are too keen to ignore. We know a lot more about genetics, and I have witnessed the ravages of deep, nearly incurable depression on my family. Even the ancient Stoics knew of depression and how it could destroy the rationality of the most adept philosophers. Outside of these parameters, I would argue, strongly, that we could all live with a little more Socrates and Epictetus in our lives and a little less religious extremism and modern consumerism.

And this brings me to my main point: without a rational, modern, and scientifically palatable life philosophy for how to think and feel about /everything/, it is very easy to put yourself into negative emotional states. I have found that with a system to judge and evaluate emotions, and a keen understanding of the basic tendencies of my nature, I don't need a lot of empirical scientific knowledge to stay happy, though it does inform my approach.

When I read your post it made me think of myself and my old strategy of decreasing pain/bad moments and increasing good moments where good feelings are more likely. Now I proceed in a manner as I see fit, and I simply accept that much of my feelings are up to me and that jealousy, sadness, and anger are just my judgments or feelings about something. I have also often found that, when properly analyzed, I am not truly sad about a situation, merely that my judgments were wrong. I have definitely incorporated many Stoic principles into my life, and when I continue to work at maintaining and improving my understanding and ability to apply the principles I have adopted, I stay very even-keeled, neither greatly happy nor greatly sad. Content. I try to stick close to my baseline and take joy in everyday life (since most of life is everyday life) :)


Thank you for writing this. Can you say more about the system you found, this binding strategy for "strategies for happiness"?

My judgements being wrong used to be a source of frustration for me too. Now I take them as a given. Being wrong is part of the process.

The biggest problem I have now is forgetting what strategy I had decided to follow. Mostly because there are too many things to decide. The process of me deciding doesn't ingrain each decision in me hard enough.

How do you deal with that?


Here is the thing. Stoicism as it was taught 2000 years ago was extremely bound up in its "providential" and deity focus. Not as unpalatable as some of the more direct monotheistic religions, but depending on your teacher, Stoicism was still somewhere on a scale from pantheism to direct theism. (Marcus was very pantheistic and deemphasized religion; Epictetus was much more vocal about the theistic elements.) As a modern reader, a lot of ancient philosophy is like that. It has elements that you have to excise. So ancient philosophies are not really plug-and-play for a modern rationalist.

Basically, I think you have to start by writing things down and learning about the elements of these ancient, and modern, philosophies to see the sort of systems they put in place to deal with everyday life events and exceptional events. How should I think about X? And some very obvious patterns will emerge. Some elements of Stoicism may be difficult to employ, and Stoicism in general is a hard philosophy to follow (fully). The key point Epictetus made was that very few of us have the ability to follow Stoicism (or any coherent but challenging philosophy) to its fullest. However, we should still endeavor down this path of "progress". This is why Epictetus has stayed so popular. He didn't advocate perfection of philosophy, merely that you work at it.

Which brings me to my final thought, to answer your question directly: write it all down. For example, Stoicism has a lot of great elements that resemble modern CBT (Cognitive Behavioral Therapy). So you pull the elements of ancient philosophies that work, you augment them with snippets like the various things in this Less Wrong article, and you write your own coherent philosophy of life. And then you work at following it. It is pretty easy to lay down ideals, and pretty hard to stick to them. So you have to work at it, almost every day. You have to work on mindfulness and staying present, and ensure you keep your philosophy in mind in everyday situations and exceptional situations. And, and this is where it gets tricky if you have trouble with non-empirical elements, you just have to believe your philosophy works. You have to believe it enough to follow it, even if you can't explain all of it. Science has spent the last 2000+ years catching up to things the ancient Greeks found to work.

Much like a running coach can't really tell you WHY, exactly, everything he is having you do is working; it just does. Physiology has slowly been providing more and more detail; in the meantime, running coaches have learned through trial and error how to train very efficiently. Some day we will probably have an exact explanation for the mechanisms of physiological improvement. Until then we can use our very good heuristics. I would argue it is the same from a psychological perspective. It is close enough. Trust in your system and intuition, and work to make it better and progress as a human being. There is no "perfect system" for everyone, but I think we could all agree on some pretty broad ideals that count as forward progress as humans :)


> The allure of LW is that we're perfectly rational …

Isn’t this allure extremely irrational then? (Since it’s a known fact that we are controlled in large part by relics of our primate ancestry.)


Let's not forget that optimizing only a few variables in a system as complicated as happiness can easily result in a net loss.

If you optimize a race car's engine for power, the engine will explode and you will lose the race. That's because you optimized for only one variable, and did not consider longevity. With complex systems this is an important concept to remember, as you may not even be aware of some of the variables, and inadvertently sacrifice them!
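The race-car point, that maximizing a single variable can lose the overall race, can be sketched in a toy script (my own illustration with made-up functions, not anything from the thread):

```python
# Toy model: "power" rises with boost pressure, but "longevity" (the
# hidden variable) collapses as boost climbs. Maximizing power alone
# picks a setting that performs worse on the objective that matters.

def power(boost):
    """Engine power grows linearly with boost (made-up relationship)."""
    return 10 * boost

def longevity(boost):
    """Reliability drops off quadratically; hits zero around boost ~2.24."""
    return max(0.0, 1.0 - 0.2 * boost ** 2)

def race_result(boost):
    """What actually matters: power you can sustain to the finish line."""
    return power(boost) * longevity(boost)

candidates = [b / 10 for b in range(0, 23)]  # boost settings 0.0 .. 2.2

best_power = max(candidates, key=power)          # optimize power alone
best_overall = max(candidates, key=race_result)  # optimize the real objective

print(best_power, race_result(best_power))       # max boost: engine "explodes"
print(best_overall, race_result(best_overall))   # moderate boost wins the race
```

The power-only optimum is strictly worse on `race_result` than the joint optimum, even though it "wins" on the one variable it was told to care about; the failure mode is the same whether the hidden variable is engine longevity or some unmeasured component of happiness.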


Thank you - been looking for that explanation for a while. It's the same if you substitute "professional" for "happy".


> I've found a few ways of doing this that work for me, and I can post some of what's worked if anyone's interested.

If you have time, please do :)


I've read many articles and books on this subject and not a single one of them had any effect on my life. It's usually just a list of things that - according to the author (and every author of course has a different opinion) - will make you happy. It's bullshit; there is no "golden solution" for everyone. Reading such articles will make you even more unhappy, because now you have a whole list of things and you feel like if you don't achieve them all, you'll be unhappy for the rest of your life.


What one can do to live a happy life is a very hard question for anyone to answer. Life is fundamentally absurd; we are yet to find what this whole madness is all about. Most people choose (or fall into) an ideology to give meaning and structure to life. To cite an example of an ideology, take the family man: fulfillment in raising a child, having a family, a bunch of friends to hang out with, a job to sponsor all this. This is the default ideology of most people, and societies are built around it. There are many others: 'the religious man', 'the scholar/intellectual', 'the social worker', 'the rational man who votes', etc. You have to find an ideology that you find comfortable brainwashing yourself with; it doesn't matter which one, as long as the delusion is strong enough that you can convince yourself that you are living your life to reach a certain goal or purpose.


Maybe the closest writing to fit your view is Daniel Gilbert[1]. Basically, humans have a biological "set point" for happiness and it's different for everyone. It is not affected by winning the lottery, or getting paralyzed from a car accident, or doing meta-analysis on what happiness is (such as reading self-help books about achieving happiness.) Those events may affect happiness in the short term but not in the long term. People will eventually gravitate back to their predisposed set point of happiness.

If I agree with the above, I can set aside the quest for an ultimate measurement of absolute happiness (e.g. Solon's "Count no man happy until he be dead."[2]). However, I can still do things that affect relative happiness. When I stopped consulting for boring ERP software, my quality of life definitely improved. Again, I won't know if I'm ultimately "happy" until I'm lying on my deathbed. Nevertheless, it feels like I got a little victory from changes like that.

[1]http://www.amazon.com/Stumbling-Happiness-Daniel-Gilbert/dp/...

http://www.ted.com/talks/dan_gilbert_asks_why_are_we_happy

[2]http://en.wikipedia.org/wiki/Solon


I agree you can do things that affect your relative happiness, but the key here is that it's YOU who must know what those things are - everyone has different needs and you can't fit everyone in one model. Some prefer an interesting but lower-paying job; some don't care as long as it pays for their hobbies - and both can be happy, but switch their lives and both will be miserable. So if one were to give advice to the other on how to be happy, it wouldn't help a bit, because they have different perspectives on life.


I'm not saying that the author is right or wrong, but your anecdotal evidence doesn't prove that reading about how to be happy will make you unhappy.

Science has consistently shown that good health and relationships correlate with happiness. It has even been shown that there is a causal effect (i.e., if you start exercising you start feeling better).

Someone unaware of that fact may read a post like this and decide to exercise for 30 mins everyday.


> It's bullshit, there is no "golden solution" for everyone.

The science of happiness (as opposed to its philosophy) is new and in a prehistoric stage. Think about the history of chemistry and the alchemists' adventures!

The post doesn't invalidate your opinion; just reading an article doesn't make you happy, except for a few enlightened ones. But knowing the correlations, backed by a lot of data, can help you in this search.


I've found that the main reason why so many of these articles and books don't have an effect on my life is that I read them and then forget to apply what they suggest.

Sure, it's possible that many of them suggest approaches to 'happiness' that do not apply to me, but I really don't know because I never took the time to actually spend, say, 30 days committing to their advice. And my impression is that this seems to be the case for most people who bother with 'self-improvement'.

(That said, I'm not disagreeing with you. Finding what makes specifically you happy is important.)


It's really not bullshit, I think. The stuff about fear rings totally true to me, it aligns perfectly with my experiences. But, I know that being on the other side of it, it can sound like a bunch of hokum. (That's part of the reason why people keep living that way - the logic is pretty watertight.)

I've read books about it too, and not been helped that much beyond perhaps gaining some insight, or setting expectations of what happiness is like. But that doesn't invalidate the information in the books, it tells you that merely reading about happiness does not make it so. You can read all the books about how to be a great basketball player, but you won't improve until you actually get on a court and shoot some shots.


The author himself identifies this problem in the concluding paragraph: seeking happiness as an end might be counterproductive.


Did you merely read them or did you also apply what was said in those readings? Have you also consulted a specialist?


The posted article acknowledges that happiness is in large part a genetic trait, and in other large part the result of life history (parenting, health, etc).


>The posted article acknowledges that happiness is in large part a genetic trait

Citation needed. Preferably not a citation that can be countered by another, opposite, citation.


Can someone tell me how this 'cult of happiness' started in Western culture? The obsession with happiness is so pervasive and absurd that people are depressed because they are not happy 24x7, not because of a personal tragedy. Why is some vaguely defined happiness sold as a holy grail that must be achieved at all costs?

There are billion-dollar industries built around selling happiness (the fashion industry, the fitness industry, the education industry, and a million others), and yet, and yet, most of the happiness we are sold is a distant dream for most of us. Why? Why do we live in constant conflict, both within ourselves and with the outer world? Do you want to live your life in this constant conflict?

Most happiness peddling misses the important, intricate relationship between happiness and pain/anxiety: how they are the same feeling, and how pursuing one means pursuing the other. And yet most people are fooled into believing that we can somehow chase happiness while avoiding pain/anxiety. Even people who are very logical in every other part of life buy into this absurdity.

Does this have anything to do with the rise of Western industrial power, when marketing changed from 'buy this because you need it' to 'buy this because it will make you happy' [1]?

1. http://topdocumentaryfilms.com/the-century-of-the-self/


In the article, the writer mentioned he is rarely unhappy for more than 20 minutes. I don't understand this. Happiness is a result of something that is good and sadness is a result of something being bad. Emotions have a purpose. Emotions are not some sort of primitive remnant from prehistoric times we need to overcome. It's as if because of science we can now see what is good and what is bad and remove the bad emotions like they had no reason or purpose.


There is a difference between happiness and joy. Some have the definitions flipped, so I'll not try to define them. It's not important which is which.

One is a result of circumstances. If something bad happens, like you get a flat tire, then you're in some negative state of emotion. Most people are ruled by this, and since most circumstances are out of our control, we spend a large portion of our time in some state of unhappiness or frustration.

The other is trying to be at peace. If you're basically at peace then the flat tire doesn't ruin your day. It's just something to deal with. Things happen, you deal with them, life goes on, and you're mostly thankful for what you have.

The author's 20 minute comment, I suspect, is that he practices mindfulness meditation, and is therefore never longer than 20 minutes away from resetting his mind to a more balanced state.

Plus, there is nothing new here. This is the ancient philosophy of Stoicism. See "A Guide To The Good Life: The Ancient Art of Stoic Joy".


If you don't score yourself by how much better you're doing than peers (or yourself yesterday), it shouldn't be too hard to be content+optimistic, provided you're lucky enough to be healthy + intelligent + free of bad relationships.

What seems natural for most of us, though, is to become desensitized to what's good and feel only the surprises, which eventually end up negative (mean reversion, and things over-enthusiastically taken for granted - I'll always win, they'll always love me, etc.).

I think knowing that helps a little. Just like it helps knowing that your blue mood when you're sick will pass on its own as you recover.

Some people are naturally more or less happy, I'm sure, in their baseline no-surprise state. I wouldn't worry about it until someone figures out how to change it. As long as you have energy to want+do things, you're ok.


Emotions have a purpose, but that doesn't mean you need to wallow in your unhappiness.


Tell that to someone with clinical depression.


"The pursuit of happiness" is right there in the US Declaration of Independence. The country holds the pursuit of happiness to be an unalienable right.


That's a straw man though. The "pursuit of happiness" in the Declaration is just fancy phrasing for what was originally the "right to own property". It was changed by Jefferson before the Declaration was finished because of some philosophical points from John Locke [1].

[1] http://hnn.us/article/46460


That's not even close to what "straw man" means. And even if it was, the mechanics of how and why the language ended up as it did is largely irrelevant compared to the fact of what the language actually is.

The GP's hypothesis is perfectly reasonable; indeed, in the very article you link to:

> The “pursuit of happiness” has led its own life in popular culture.

Now, it's difficult to say if there would be the same focus on happiness in the culture if the phrase had been "property", but that certainly doesn't invalidate the hypothesis.


Sorry, I didn't mean to use a logical fallacy... I just got it from Wikipedia: http://en.wikipedia.org/wiki/Life,_Liberty_and_the_pursuit_o...

Edit: Actually, reading the article more fully, this makes it even MORE interesting. Buying things, protecting products and property has always been tied up with happiness and the Declaration. Fascinating!

"life, liberty ... and the possession of outward things"


I think you're attacking 2 different things here. One is chasing happiness, the other is the commercialisation of chasing happiness.

I have no problem with the former. I think it's healthy to try to lead an enjoyable and happy life. And I don't think that's mutually exclusive with leading a productive and meaningful life either.


>And yet most people are fooled into believing that we can somehow chase happiness ...

Well, it has a long history going back to before Christ, e.g. http://buddhism.about.com/od/basicbuddhistteachings/a/The-Bu...

And more recently it has become established as a fairly respectable field of study, kicked off when "Positive psychology began as a new area of psychology in 1998 when Martin Seligman chose it as the theme for his term as president of the American Psychological Association", amongst other things. (from Wikipedia)

Personally I'd rather be happy than depressed, and as an atheist would rather turn to science than religion for how-to advice. Dunno if that's foolish.


I'm happiest when I'm seeking out and feeding off all sorts of misery: schadenfreude at others' failures, absurdity, ennui, existential guilt, jealousy of others' success, fear of missing out, fear of growing old, sexual frustration, impotence of the metaphorical and of the limp kind, the haze of alcohol/drug-induced dependency, anxiety about status.

I get angry and anxious reading my Facebook updates about weekend adventures that I wasn't part of, LinkedIn updates about undeserving acquaintances climbing the ladder higher than where I am. I then proceed to practice mindful meditation to become fully conscious of my human nature, to practice the dual arts of sour grapes (e.g., "Going to law school, given all of the articles I read, is not worth it nowadays") and rationalization (e.g., "They may be richer, but I'm happier... as a maker."). Then I realize what a worthless person I am, but self-worthlessness transforms into the joy of masochism.

I overeat, overdrink, stay up late, sleep in, skip out of work, act out, make inappropriate propositions to friends or strangers when inebriated. Nursing the bad hangover the morning after, I pat myself on the back for getting free therapy sessions from unwilling acquaintances and for my courage to let myself go and practice authenticity.


I am not a psychiatrist, but you may be suffering from some form of mental illness. Please consider talking to a mental wellness professional about how you feel emotions. Nobody is going to try and make you change who you are, but almost anyone would benefit from understanding how their own mind works better.


Behavior like the type you describe could easily be self destructive and hugely detrimental.

> make inappropriate propositions to friends or strangers when inebriated

this is really really not good


  8. Find your purpose and live it. One benefit of religion may be that it gives people a sense of meaning and purpose. Without a magical deity to give you purpose, though, you'll have to find out for yourself what drives you. It may take a while to find it though, and you may have to dip your hands and mind into many fields. But once you find a path that strongly motivates you and fulfills you, take it. (Of course, you might not find one purpose but many.) Having a strong sense of meaning and purpose has a wide range of positive effects. The 'find a purpose' recommendation also offers an illustration of how methods may differ in importance for people. 'Find a purpose' is not always emphasized in happiness literature, but for my own brain chemistry I suspect that finding motivating purposes has made more difference in my life than anything else on this list.
I think that's the biggest factor of my happiness. I have a long-term open source project that I believe in and I work on it in all my spare time. Little by little, it's getting closer to one day being very useful in many ways.


That's exciting, very happy for you. Can you share more about this?


Thanks. :)

It's an experimental software development tool, largely inspired by Bret Victor's talks. It's similar to projects like Light Table, Zed editor, but it's nowhere near as complete and focuses on a single programming language (Go).

I worked on it full time for a year after finishing my master's degree, culminating in a first place winning demo submission to the LIVE 2013 contest [1].

By then I had run out of money, so I got a full-time job and continue to work on Conception in my limited free time. By now, I'm working on a pure Go implementation of Conception [2], which is a lot more advanced in some ways, but still needs a lot more work before it's actually usable and helpful to other people. Someday, when the time is right, I want to get back into it full time, because it really requires a lot of work.

[1] http://liveprogramming.github.io/liveblog/2013/04/live-progr...

[2] https://github.com/shurcooL/Conception-go/commits/master


I was a lot happier when I stopped reading LessWrong. I'm not even trying to be (overly) snarky - I really did use to read LW, and I'm very glad I stopped, as it was one of the many small things dragging down my well-being.


Spending a lot of time commenting, and especially reading comments, is usually sad-making, since you're likely avoiding a better alternative, much as you would with a video game.

As well, there are unhappiness-producing currents in many "we are all so smart" commenting fora, but I expect most people can have a positive or at least enlightening experience if they're wise enough to avoid ego-tussles.

If you find you're getting less out of a group than you used to, likely you've grown more than the group. You wouldn't necessarily have done better to avoid it in the first place.


Would you care to elaborate on this? I have my own criticisms of LW and without mentioning what they are, I want to see if other people's match up with mine.


Edit/Disclaimer: I do feel like I benefited intellectually from reading the original Sequences on LessWrong. The general idea that strong AI might represent a risk to humanity seems plausible, although I'm not sure how credible the specifics are.

Well, I don't feel intelligent or well educated enough to critique most of what they (EY and supporters) say point by point. However, one thing I recall seems indicative of some of the systemic problems with how he/they think about the world:

EY once wrote that he tried "exercise" (unspecified, but presumably steady cardio) and found that it "didn't work". His conclusion was that he... inherently was unable to improve his physical fitness due to some genetic trait that a minority of the population is cursed by.

That is so breathtakingly arrogant and foolish that I was taken aback and it was one of the many small things that led me to question his 'rationality'.

Presumably, EY was a baby, and like all other babies, gradually developed increased muscular strength and coordination during the process of learning to walk. Thus, his muscles are capable of responding to stress and adapting to that stress by getting stronger. If EY had the grit to actually try rigorous training, such as progressively loaded barbell squats, I'm quite sure he would experience at least a modest, but measurable, increase in physical fitness. Instead, he rationalized his physical weakness and chose the easy road. Plenty of people do this, but they're not so 'rational' as to try and intellectually justify it publicly on their own website!

* I don't feel like searching LW to try and find a citation for this - but does anyone really doubt it? Just look at a picture of the guy.


IIRC his attempted exercise was walking around the Bay Area, which left him winded and sore. Agree it sounded woefully ignorant of the science of fitness. He would have done a lot better to hire a physical trainer who could kick his ass a little bit, boot-camp style.


Woefully lacking in the sort of common sense most people acquire just by living.


I see LW as part of the cult of reason - at the risk of strawmanning, it's the idea that pretty much everything is subject to perfect logical deduction from first principles.

It's important to study and understand human biases, and it can be helpful in overcoming many struggles, since most of the time you're your own worst enemy; but the philosophy that you're inherently flawed and should put in a constant effort to be "less wrong" is a recipe for disaster, IMO.

Absolutely, take time every now and then to reflect on life, on whether any biases and assumptions about the world are impairing your wellbeing and happiness, and on whether it might be worth changing that -- but in everyday life, listen to your impulses, intuition and feelings. Don't be blind, don't be stupid, but also don't constantly second-guess yourself.

I should totally write a self help book. Or at least make some inspirational Facebook cover photos.


> but in everyday life, listen to your impulses, intuition and feelings

The idea, if I understand it correctly, is that those are the things that are supposed to end up "less wrong." You're not supposed to be consciously thinking all the time about how your thinking is broken; you're supposed to practice a few tricks for a while, internalize them, and then your impulses/intuition/feelings will be (less) broken.


I think that's right, and it's probably even pretty compatible with what I suggest.

What I think is dangerous is adopting the underlying philosophy that your intuition is inherently wrong and in need of salvation from reason.

In this, as with every other area of human improvement, there's a balance to be struck between recognising your current state as "good enough" (even that has a derogatory ring to it) and not losing sight of the fact that there's almost always nearly infinite room for improvement. And I think the cult of reason, and LW in particular, is bad at recognising and respecting a "good enough" state.

Or put another way, imagine if the most popular software engineering website was "YoureNotAsGoodAsJohnCarmack.com".


I think they have taken this idea of being logical and internalized it as a status symbol.

The use of the word "rational" apparently only applies to them so by criticizing LW, I assume that makes you irrational.

Is it useful for every human to overcome the biases on their list? Or are these just criticisms that one group can use to distinguish themselves from other people? [1] I don't even believe all of these biases exist; some could instead be attributed to discrepancies between formal and colloquial definitions and usage of language. Further, although possibly mentioned elsewhere, I think it is possible to suffer from a "cognitive bias" cognitive bias, where belief about overcoming cognitive bias causes a new cognitive bias.

Check out this link [2], I think Harry Potter is associated with Eliezer himself.

In short, I don't think a bunch of people who claim to be rational are completely ego free.

[1]http://wiki.lesswrong.com/wiki/Bias#Blog_posts_about_known_c... [2]http://lesswrong.com/lw/k9r/cognitive_biases_due_to_a_narcis...

edit: I don't mean this to sound as if I am arguing there is no such thing as cognitive bias or that nobody can really be rational but that ego can and has gotten in the way of discussion about it.


>The use of the word "rational" apparently only applies to them so by criticizing LW, I assume that makes you irrational.

This is most of why LW has been pissed off about RationalWiki since it dared have an article about LW. They own that word, dammit.

(Now of course, it's because of an enormously popular article on one piece of LW's history. But it started really early.)


I've been on it for years, but I firmly classified it as internet television and a fun place to argue amateur philosophy. Productivity impact, pfeh. YMMV of course.


"The only correct view is the absence of all views." - thich nhat hanh

We are unhappy when we assign reasons to people's actions or words. When we think that others say or do things because they are "out to get me". That is a view and that is an incorrect view that will lead to all sorts of unhappiness and discontent in your life. Get rid of all of your views. Some things just happen. There is no grand plan.

"Do you want to be right, or happy?" - thich nhat hanh

Don't correct people or argue with them. It leads to unhappiness. Especially don't do this in front of others. It will only cause confrontation that hurts you just as much as the people you are correcting. Even if you are being wrongly accused, just be silent. There will be a better time later to correct the situation and set the record straight. This is much needed in corporations in America that have contentious meetings. People interrupt each other, openly demean and correct each other. Sit silently in these meetings and stay above it all. Everyone will notice and you'll be much happier and content.

There is much more we can learn from Eastern philosophy. When things cause stress and unhappiness in your life, examine them, be mindful of your words and deeds toward others, live only in the present and you'll forever be happy.


>>"The only correct view is the absence of all views." - thich nhat hanh

That is a pretty strong view! ;D

>>live only in the present and you'll forever be happy

Live only in the present and you will be happy, sad, angry, in love, sleepy, hungry etc, etc. Happiness is just a thing that happens, like all things. All things happen in the present. Can you experience a "past event" in the present?


Things I would qualify as "experiencing a 'past event' in the present" (which breed unhappiness) include regret, remorse, PTSD, etc. E.g. emotions like regret are always experienced in the present, but the emotion may be about a past event.


You're making a lot of absolute statements. I sense that a lot of these things depend on the situation, rather than are always true.

Reversing stupidity isn't intelligence.


I doubt he will reply to your comment, given that the philosophy makes sense only through practice. I do think keeping out of mud is a good idea, because mud sticks.


> Extroversion is among the best predictors of happiness

Well of course it is! No one ever meets the shy happy people because they never go out!


There is also, as the author mentions, a correlation between religiosity and happiness. Still, "start going to church" is not on the list of things to do. I'll hazard a guess that the author is not religious and thus doesn't consider it relevant. While I don't personally believe this a worthy pursuit, it does show the arbitrariness of the selection.

EDIT: Point 8 does touch on religion but does not (directly) endorse it.


This is on Less Wrong, a website devoted to rationality. You'll notice several times in the article he is clearly addressing a rationality-focused audience. For instance:

> Without a magical deity to give you purpose, though, you'll have to find out for yourself what drives you.

He's writing for atheists. Telling an atheist "in order to be happy you need to get religion" is like telling a recovering alcoholic "in order to fall asleep you need to have a few drinks."


Instead of a recovering alcoholic, perhaps a teetotaler or other person sober on principle would be a better comparison. I've never believed gods could exist or had religion, and thoughts otherwise wouldn't tempt me.


You may be right. I chose alcoholic because suggesting they try drinking to solve one problem will cause other problems for them. If I were to try attending church in order to make myself happy, I would find myself 1) tortured by the inanity of everything that goes on there and 2) pained by the experience of behaving outwardly in a way that conflicts with my values.

However, there are also plenty of people who were raised religious who are now atheists. I've met many of them who have the same toxic relationship with religion that an alcoholic has with alcohol.


I share your views of religion. I also feel pretty much the same way if compelled to fake extroversion and attend, say, a fancy dinner reception. In the eyes of the article author, it is fine to dismiss religion while extroversion is to be coveted. I assume this to be because he is himself an extrovert atheist and can't imagine other people being happy in another way.


That's one of the entire problems with atheism: without a higher power to give your life purpose, it can be very difficult to suffer through yet another day on this earth full of such suffering and sorrow.

Nihilism has a similar issue.

It's icky.


And is it better to worship a deity that, at the very least, created "this earth full of such suffering and sorrow" (if it isn't actively controlling it, depending on your view of the deity's role in the world)?

As for purpose, it isn't that hard to have one; it can be as simple as having a little happiness and trying to make the world around you just a tiny bit better rather than worse.

There is also no reason why, for most people, days should in general be things to "suffer through".


That's one of the problems with religiosity: you can't imagine a way to find meaning in the world without fooling yourself.


Or like telling a diabetic he needs an insulin pump.


Within a given country religious believers seem to be happier (based on many surveys) but I don't think you can suddenly choose to believe when you don't. On the other hand basically secular countries seem to have higher average happiness levels than fundamentalist ones.


Just a minor correction: introversion and shyness are not the same thing. You can be extroverted and shy, or introverted and gregarious[0].

The rest of your comment is still basically correct if you substitute the word introverted for shy, but this is a very common misconception about introversion I wanted to point out.

[0] Susan Cain did an op-ed in the WSJ or NYT explaining this distinction in more detail a while back. I'm on my phone but it should be easy to find online if you're interested.


I think that there could exist an introverted happy person who only seeks out happy things on the Internet. That wasn't possible before now.


I think the point npsimons was trying to make is there is selection bias in that statement, so it really says nothing about how many happy introverts there are.

I'm an introvert, for example, and generally very happy (but that's anecdotal evidence, which is also not helpful).


The assumption is that the researchers accounted for any sort of selection bias in who participates in the study. But that's an interesting point; maybe that assumption needs to be questioned further.


Maybe extrovert people just don't want to admit they're unhappy, because there's such a high social pressure to be happy all the time - people don't want to spend time with unhappy people, so everyone pretends they're happy. Introverts on the other hand just don't care :)


I tend to agree with you. I find that certain people I know who are extroverts always seem happy, but if you spend enough time with them and get to know them, it's more or less a facade, and they're not any more or less happy than others. Just like anything else, I don't believe this to be universal, but it may be more common than advertised.


I can hardly agree with "Develop the skills and habits associated with extroversion". Why do I need to be comfortable in social company if my predisposed nature and comfort is in being alone?


I am an introvert who masquerades as an extrovert fairly well most of the time. There is, of course, nothing at all wrong with taking joy and comfort in being alone. However, it turns out that a lot of the factors which are correlated with happiness (the article points at love/relationship satisfaction and work satisfaction, for example) are easier to improve when adopting some of the skills and habits associated with extroversion. That doesn't mean learning to enjoy going to the club every night; it means learning how to make very good first impressions and getting along well with co-workers. If an introverted person is already satisfied with their life, then of course they don't "need" to be comfortable with social company, but this article seems to be written for people who are not as satisfied as they could be.


If you are already an introvert, developing the skills to be an extrovert will let you have the choice & flexibility to jump between the two whenever you want to.

There's nothing wrong with feeling comfortable being alone (indeed that's a good skill to have, and not everyone has that). But there may come a time when you would like to be more extroverted occasionally & make more friends. If you don't build that skill now, you'll find it more difficult to be extroverted at the times when you need it.


Agreed. I'd go further to say that if you are naturally extroverted, it would be worthwhile to develop some ability to feel comfortable while being alone.

I know some people that go stir crazy when there is no one to interact with. That seems sub-optimal if you want to be truly happy.


Because you have a lot to gain by doing so. Humans are capable of deriving a lot of happiness from social matters, so there's a large opportunity cost to being alone.

That said, it obviously depends on the person's situation. I don't think the advice is good for everyone. But it's probably good for most.

Edit: I found http://www.succeedsocially.com/allarticles to be really insightful and useful.


There is also quite a difference between not choosing to socialize, vs. feeling depressed because you are not able to socialize successfully.


Not choosing to socialize may seem like a fine choice and not a source of depression in itself. But I think what the parent was trying to say is that there is an opportunity cost associated with such a decision. I need good amounts of solitude each week, which makes me an introvert I guess, but if there are no good interactions with other humans at all, my happiness dips noticeably. Having the option to meet with another human (and having a good interaction every so often) is a boost to the feeling of security, which I believe can help the feeling of happiness. For example, if I do something awesome (or really dumb), I need to tell someone, and it needs to be someone that actually cares. Otherwise the experience is not as good / might as well not have happened.


>, so there's a large opportunity cost to being alone.

But for some personalities, there's a large opportunity cost when hanging out with others. Some people simply interface their optimal best with the world when they don't have to force themselves to socialize at whatever "normal" standards of frequent interaction expected by extroverts.


True. Definitely depends on the person.


If you are a man, there is one thing that will solve the extroversion problem and that is to be able to talk about sports. It is the default subject of conversation for men who don't know each other, and if you can't talk about sports then you are awkwardly trying to find something to talk about 80% of the time.


You don't. This is another example of an extrovert assuming that introversion is just a circumstance that is both undesirable and fixable.


Consider it may be an extrovert making an observation regarding the different states of joy, the emotion. While I certainly bring myself joy at times, I find it more easily found in the company of others, or in the presence of something new to me.


If you're alone and happy, you don't. If you're alone and unhappy, it's probably worth at least considering whether improving your social life might help.


"Happiness is subjective and relative"

True story: When I was a kid, I was very happy with the rattling performance of my old Schneider PC. The sun was shining bright that day during the lunch break on the schoolyard, when a notoriously rich kid approached me with a grin on his face: "So you have a PC at home, huh? Want to see a real one?". A couple hours later we walked to his mansion. I took a seat in front of his PC. It looked new, and great. The monitor was very large. Very impressive. And then he turned it on. "It's a Pentium!". And he went on bragging about it. When I came back home and turned my PC on to continue working on my website, I was more aware than ever that my PC was slow. I knew it sucked, and from that moment on it depressed me. A couple years later I got a fast Dell. I was happy for a while. Until I got used to it.

Moral: Sometimes, not knowing what could make you happy means you stay happy.


I've reduced it to a series of equations:

  Happiness = Perceived Actual Life / Expected Life  --- (1)
where,

  Perceived Actual Life = Perception * Actual Life ----- (2)
and in most cases,

  Expected Life ∝ Perceived Life of Others ------------- (3)
where,

  Perceived Life of Others = Perception * Others' Actual Life -- (4)
and finally,

  Actual Life ∝ (1 / Happiness) * Skill * Circumstance -- (5)

The most success I had was in breaking equation (3), and then in upward adjusting Perception in (2). But (5) is the kicker -- it implies that your lack of happiness feeds back into other people's lack of happiness, creating a loop. As long as it holds, everyone will try to one-up each other in trying to improve their lives.

EDIT: corrected error in (1)


Hmm, equation 5 bugs me. I like the model that being less happy can drag down other people, but this directly implies that being more happy drags your own life down, which I feel is antithetical to the definition of happiness.

I think this stuff is really cool though, and I really like where you're going!

Do you have proposed units for the quantities in your equations?


I think it's easier to see the relations once you substitute everything in. I combined the perception terms instead of cancelling since presumably they're different, but here you go:

  Happiness = Perception * (1 / Happiness) * Skill * Circumstance / Others' Actual Life
  Happiness^2 = Perception * Skill * Circumstance / Others' Actual Life
  Happiness = +-sqrt(Perception * Skill * Circumstance / Others' Actual Life)
I'm not certain about the negative root there; maybe that's depression? Otherwise, looking at http://en.wikipedia.org/wiki/File:Square_root_0_25.svg, it seems reasonable; improving one of perception/skill/circumstance if it's near-zero will improve your happiness drastically, but it's a relatively slow-growing function so improving happiness after that has diminishing returns.

Also, as you mention we never defined the terms. I believe all of these are generally measured on self-administered surveys with "rate X" scales, and so are just percentages with no units.

Finally, the terms don't have any obvious correlations with the post, so I just stuck in the things that were mentioned that seemed relevant:

* Happiness - see eqn., you can't affect this directly

* Skill - conscientiousness, self-esteem, optimism, feelings of purpose and fulfillment

* Others' Actual Life - income and wealth

* Circumstance - genes (health?), extroversion, "flow"

* Perception - paying attention to your situation/actions/feelings and in particular adjusting your estimates of success/failure chance to match reality (humans deal very badly with probabilities) - includes "agreeableness" (understanding of others), "memory" (perceptions of self being happy), and "mindfulness" (perceptions of world)
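For fun, the positive root can be turned into a toy function. This is just a playful sketch of the thread's model, not validated science; the variable names are mine, and all inputs are assumed to be survey-style scores in (0, 1]:

```python
import math

def happiness(perception, skill, circumstance, others_actual_life):
    """Positive root of: Happiness^2 = Perception * Skill * Circumstance / Others' Actual Life."""
    if others_actual_life <= 0:
        raise ValueError("others_actual_life must be positive")
    return math.sqrt(perception * skill * circumstance / others_actual_life)

# sqrt is concave: fixing a near-zero factor helps a lot,
# but further per-unit gains show diminishing returns.
print(happiness(0.01, 0.5, 0.5, 0.5))  # ~0.07
print(happiness(0.25, 0.5, 0.5, 0.5))  # ~0.35
print(happiness(1.00, 0.5, 0.5, 0.5))  # ~0.71
```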


Feels like you should invert equation 1.


I believe you're right!


My dog is probably one of the happiest creatures I know. But he can't get his own dinner and I have to take him out to poop.

If everyone was happy all the time we would probably still live in caves because there wouldn't have been the struggle that brings us to where we are.

This in my opinion may be why religion correlates with happiness. For the very same reason lobotomized people might be happy. Do you want to be happy or do you want to know how the world really works and what is certainly going to happen?

But... the dark cynicism of reality aside, I think happiness probably results very simply from good health, physical activity, strong social connections, and a feeling of being important or succeeding. It's good to be happy sometimes. But I still think being happy all the time might not be the best thing for us personally, nor for us as a species. IMHO.


Cynicism is not the driving force of innovation, optimism is.

It is possible to be happy and rational.


I don't disagree with either statement.

But if there isn't a problem in the first place there can't be a solution. And if you don't see the problem because you are busy "being positive" then you can't work on it.

I look at it like this... motion comes from polarity. Yin and Yang. Male and Female. Happiness and Misery. Just my opinion.

And yes... I find the cult of constant happiness a bit annoying and superficial. I do think it is good for people to be happy *most* of the time... I'm happy most of the time, but I'm not happy all the time, and neither is anyone else. Nor should they be, unless they are mental patients or animals. That's my point.


Maybe not so relevant, but I imagine that most crazy scientists are happy, and that leads to optimism about their ideas despite outside attacks on their research. I suppose happiness can act somewhat like armor against slander.


> 1. If you suffer from serious illness, depression, anxiety, paranoia, schizophrenia, or other serious problems, seek professional help first. Here's how.

Link is broken. :-(

Also nowhere in the list of happiness suggestions are volunteering and helping others to be found. Says to practice gratitude by appreciating all the good things you have, but never to actually help those less fortunate than you.

The message is happiness is easy, as long as you aren't living below the poverty line or dealing with major problems in your life. Well plenty of people are dealing with both and could really use your help. Helping others might fulfill the gratitude and 'meaningful job' portions of this vanity list too.



I actually found the Be Happier [0] article to be a better source of actionable and researched-backed advice. I put a bunch of the In A Nutshell sentences into my general Anki [1] deck, and it's had a measurable effect on my overall happiness.

[0]: http://lesswrong.com/lw/bq0/be_happier/

[1]: http://ankisrs.net/


Re: the anki deck, what do you put on the front of the card vs. the back of the card?


I know it's not recommended, but I made a single page Note Type, and just put them on there:

<b>Be Happier</b><div> </div>Interpersonal: Give the people around you opportunities to be generous. Ask them for favors.

When it comes up, I quietly read it out loud to myself. I've found that seeing these cards often enough keeps their concepts on my mind, so when I go out with friends and someone offers to buy me a beer, I accept it and thank them, instead of refusing even to the point of awkwardness (as is my natural inclination).


The author mostly described the ancient philosophy of Stoicism. See "A Guide To The Good Life: The Ancient Art of Stoic Joy"

http://www.mrmoneymustache.com/2011/10/02/what-is-stoicism-a...


I recommend giving The Conquest of Happiness by Bertrand Russell (yes, he wrote what amounts to a self-help book) a read. Despite being outdated by a long while, it's still pretty relevant and useful imo.

https://archive.org/details/TheConquestOfHappiness


I tried reading that and found it fairly bad. Even the opening sentence "Animals are happy so long as they have health and enough to eat" is bollocks. I rather liked the "Happiness Hypothesis" by Haidt although it's more of an academic look at the area than a self help book. Also "Philosophy for Life: And other dangerous situations" is good on the philosophical side.


As someone was currently suffering of depression, this is going to help me. Nice article and linked resources.


Interesting use of varying tenses in your grammar. Is English your first language?


No, it's Spanish. I just spotted one mistake; would you care to correct what I wrote? BTW, there's no way to edit my comment here on HN, right?


Great article. I think that the main point is always being busy doing what you like, namely, in the "flow" the author describes.


Flow has high highs, but it doesn't produce the lasting contentedness the author is talking about.


I tried actively increasing my happiness by reading up on the self-help / academic stuff, with moderate success. I probably managed to go from well below average (based on some Seligman tests) to a bit above. I kind of had a goal to get into the top 10%, which I don't think I've really hit, but onwards anyway...


Most self-help advice boils down to the author saying "be more like me." This is no exception.


Your point? That's true of most advice in general. It's common sense that if something worked for one person, then perhaps it could work for another.


The difference is that this person actually backs his advice with some studies.


This is the author saying be like me and become aware of the science of happiness. He provides a huge number of references to peer-reviewed studies. This is decidedly NOT like most self-help advice.


I think more or less everything boils down to this:

1. Good health: when you are sick, nothing else matters

2. Money: it solves almost any solvable problem, and in case it doesn't, it helps

3. Family/Friends/Relationships/Love/Intimacy


I felt happy when I decided not to read the article :)


I felt happy when I downvoted your casual dismissal :)

[Edit: that's not true, I felt happy before I downvoted, when I imagined downvoting.]


My problem may be laziness. I couldn't bring myself to read more than half of such a long article, so I guess I'll just stay moderately unhappy.


The bit about procrastination was in the first half of the article :)


I'd procrastinate more if I wasn't so lethargic.


Who said I read the first half?


(LPT: Always jump straight to the conclusions part of the article. Then, maybe, read the parts that may interest you. The Internet is an infinite sea of data; if you stop swimming or dive too deep, you'll drown in it.)


Pretty sure the last thing the world needs is Less Wrong cross-pollinating with Hacker News.


You've got it backwards. In the Glorious Hacker News Golden Age that people sometimes speak of, Hacker News tended to gravitate towards pure-market solutions, deny the existence or relevance of social issues that don't affect white middle class men (i.e. everything but ageism in tech), and gave weight to Eliezer Yudkowsky's Bayesian equation-driven morality, and everything else that looked and sounded scientific (imagine how many old-HN readers would be guzzling Soylent now).

The HN readership had lots in common with LessWrong, but they were diluted and displaced by people who don't share their same, ahem, qualities.


Really? Why? What exactly is wrong with lesswrong? Please be specific.


From this thread [0], they are looked at as a sort of cult. Funnily enough, they look at HN the same way.

[0] https://news.ycombinator.com/item?id=8053606


> Funnily enough, they look at HN the same way.

[citation needed]. This would be somewhat... surprising.


Singularity idiots?


Is there an argument to be made why artificial intelligence that surpasses human intelligence is not a problem for the human race?


Is there an argument to be made why $HELL is not a problem for the human race?


$HELL? Is that SHELL with a $? Is that Shell Oil and Gas? Is Shell Oil a problem for the human race? I suppose it could be, but your point escapes me.

Or maybe the $ is a typo and you mean the Christian notion of Hell? Is Hell a problem for the human race? Not likely as, even if it exists, it's only a problem in the after life. So again, your point escapes me.


null hypothesis.


We already have a well-defined, rigorous AGI[0] that would have god-like intelligence. There are some approximations that played Pac-Man [1] without knowing anything specific about Pac-Man itself, i.e. they learned how to play by themselves.

[0] http://wiki.lesswrong.com/wiki/AIXI [1] http://www.youtube.com/watch?v=yfsMHtmGDKE


So? You can have a neural network that "learns" tic-tac-toe by itself in 100 lines of code. Does that prove anything?


AIXI is an artificial 'general' intelligence. It is not hard-coded to do a specific task. That neural network is useless at doing anything other than playing tic-tac-toe.


Actually no -- it's the neural network plus its biases and starting values that's good at playing tic-tac-toe. The neural network code itself is not "hard-coded" for any specific task.

You could use the same neural network with different inputs to do other things, like recognizing characters, and all kinds of pattern matching / classification.
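To make the point concrete, here's a minimal numpy sketch (toy code, purely illustrative): the forward-pass code is identical for both tasks; only the weight shapes and inputs differ.

```python
import numpy as np

def forward(weights, x):
    """A generic two-layer net; nothing in this code is task-specific."""
    w1, w2 = weights
    return np.tanh(x @ w1) @ w2

rng = np.random.default_rng(0)

# The same code scores the 9 cells of a tic-tac-toe board...
ttt_weights = (rng.standard_normal((9, 16)), rng.standard_normal((16, 9)))
moves = forward(ttt_weights, rng.standard_normal(9))     # shape (9,)

# ...or classifies an 8x8-pixel character into 10 classes; only the
# (trained) weights differ.
ocr_weights = (rng.standard_normal((64, 16)), rng.standard_normal((16, 10)))
classes = forward(ocr_weights, rng.standard_normal(64))  # shape (10,)
```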


If by "null hypothesis", you mean artificial intelligence that exceeds human intelligence is impossible, that is a bold claim. Since human intelligence is nothing more than a successful evolutionary algorithm implemented in 3-dimensional cellular networks, it is inevitable that we'll be able to copy and improve the algorithm at some point in the future. Then we'll have artificial intelligence that exceeds human intelligence. Then what?


Probably the same argument that human intelligence that surpasses feline intelligence is not a problem for domestic housecats.


Seriously? Would you want to be neutered without your consent?


If it came to that I doubt I would have a choice. And if I didn't want to, I'd probably be wrong.


That's just because we happen to like housecats. The whales will have some legitimate gripes. I mean, they aren't even competing with us for anything.


To be happy

1. Be like Water. Keep your mind like Water. Stateless. Formless. Keep flowing like water. Don't settle & Stop.

http://soopara.com/being-water-be-water-my-friend-be-water-b...

2. Fall in love with yourself more than anyone else.

Fall in love with your Passion, it already knows everything about you.

3. Try to know everything about your previous life (or lives).

Thus, know the purpose of your life & connect those dots. Get closer to your niche & Life purpose.

4. Surround yourself with Right people.

5. Be in a place(Create that place) for yourself, where you got more freedom & autonomy to think and live.

6. Finally to be happy, be happy, create Good, great & positive thoughts always.

What we think, we become. It's that simple!

Our thoughts = Key Life Formula.


Just for those of you who aren't aware, LessWrong is a site devoted to the cult of the "singularity." The site founder, Eliezer Yudkowsky, is a secular humanist who is probably most well-known for his Harry Potter fanfiction, "Harry Potter and the Methods of Rationality." His core beliefs include that the obvious goal of the human race is immortality, that it is justifiable to kill a man if it would remove a speck of dust from the eye of every other person on Earth, and that the most important thing we could possibly be doing right now is devoting all of our time to developing a mathematical model of a "safe AI." The site frequently dallies with discredited ideas like Drexlerian nanobots, and some on the site take absurd concepts like "Roko's Basilisk" [1] seriously.

All of this is not to discredit this specific article. And there are lots of very intelligent posters there. But I tend to take everything I read there with a massive grain of salt.

[1] http://rationalwiki.org/wiki/Roko%27s_basilisk


>LessWrong is a site devoted to the cult of the "singularity."

LessWrong is a discussion forum mostly pertaining to psychology, philosophy, cognitive biases, etc. A frequent topic of discussion is artificial intelligence, but it is hardly central. Saying that LessWrong is a hangout spot for "singularity cult members" (as you call them) is simply incorrect on multiple levels. The technological singularity is no more than a scientific hypothesis, and it's slightly dramatic to say it has cult members worshiping it and willing it into reality. In actuality the technological singularity just has scientists and researchers observing and theorizing about its stepping stones and outcomes. Maybe you meant transhumanists rather than "singularity cult members", which I suppose makes more sense given your other statements.

>Eliezer Yudkowsky, is a secular humanist who is probably most well-known for his Harry Potter fanfiction

Yudkowsky is also a prominent researcher on a variety of artificial intelligence topics whose work enhances the field. Primarily he focuses not on developing a Strong AI (AGI), but on the safety issues that such a technology would pose.

>the most important thing we could possibly be doing right now is devoting all of our time developing a mathematical model of a "safe AI."

"Friendly AI"*, and I'm not sure what you're talking about when you say "mathematical model"; you should do more research, as it's mostly hypotheses and ideas for system transparency.

>But I tend to take everything I read there with a massive grain of salt.

Maybe you should visit LessWrong and read some articles about cognitive biases so you understand why someone saying "massive grain of salt" makes me want to kill innocent puppies.


> Primarily he focuses not on developing a Strong AI (AGI), but rather focusing on safety issues that such a technology would pose.

That's absurd at worst, science fiction at best, akin to worrying about manned flight safety in the 1500's.


Are you really trying to deny that google cars and other automated systems at least partially based on AI have safety issues? Even if we're talking autonomous, "life-like" AI, there is a long list of interesting philosophical and legal questions to be asked. I can't say I find any of the statements here or in the article very appealing, but you shouldn't dismiss real safety/security issues just because you don't like the guy.


Are you really trying to assert that MIRI is addressing systems on the level of Google cars, in any serious technical manner? If so, can you point to examples?


No, I'm saying that AI has wider applications, and I was responding to the manned flight safety example. Also, I'm arguing that we shouldn't dismiss the guy's arguments just because he's an ass. Especially with regards to this article, we really don't need to resort to a straw man to refute what he wrote.


AI in the sense implied does not exist. Otherwise "would pose" would be "poses" in the sentence I quote.


> Yudkowsky is also a prominent researcher of a variety of artificial intelligence topics which is enhancing the field. Primarily he focuses not on developing a Strong AI (AGI), but rather focusing on safety issues that such a technology would pose.

Nice defense on the other points. But no, Eliezer Yudkowsky has no peer reviewed publications, open source code, or really anything else to point to which provides any independent assessment of his contribution to the field of AI.

He has a couple of blog posts and self-published white papers. Forgive me for being skeptical.


There is the abandoned Flare language project from long ago: http://flarelang.sourceforge.net/


Play fair.

> His core beliefs include that the obvious goal of the human race is immortality

"We should be allowed to live as long as we want to." Emphasis on the want. I don't find that very controversial.

> that it is justifiable to kill a man if it would remove a speck of dust from the eye of every other person on Earth

No, the setup was one person's torture for 50 years (not death), or specks of dust in the eyes of 3^^^3 people. How much is 3^^^3 people?

  * 3^3 = 27.
  * 3^^3 = (3^(3^3)) = 3^27 = 7625597484987.
  * 3^^^3 = (3^^(3^^3)) = 3^^7625597484987 = (3^(3^(3^(... 7625597484987 times ...)))).
"3^^^3 is an exponential tower of 3s which is 7,625,597,484,987 layers tall. You start with 1; raise 3 to the power of 1 to get 3; raise 3 to the power of 3 to get 27; raise 3 to the power of 27 to get 7625597484987; raise 3 to the power of 7625597484987 to get a number much larger than the number of atoms in the universe, but which could still be written down in base 10, on 100 square kilometers of paper; then raise 3 to that power; and continue until you've exponentiated 7625597484987 times. That's 3^^^3. It's the smallest simple inconceivably huge number I know." [1]
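For anyone unfamiliar with the up-arrow notation, the tower above can be sketched recursively. This is a hypothetical helper written just to illustrate the definition, not anything from the linked post:

```python
def knuth_arrow(a, n, b):
    """Compute a ^^...^ b with n arrows (Knuth's up-arrow notation).

    One arrow is plain exponentiation; each extra arrow iterates the
    previous operation, which is what makes the numbers explode.
    """
    if n == 1:
        return a ** b
    if b == 0:
        return 1  # by convention, the zero-length tower is 1
    return knuth_arrow(a, n - 1, knuth_arrow(a, n, b - 1))

print(knuth_arrow(3, 1, 3))  # 3^3  = 27
print(knuth_arrow(3, 2, 3))  # 3^^3 = 3^27 = 7625597484987
# knuth_arrow(3, 3, 3) would be 3^^7625597484987, a tower of 3s roughly
# 7.6 trillion layers tall: hopelessly beyond any computation.
```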

That's a monumentally different setup than a mere 7 billion humans on Earth. And it's making a point about the cold cruel calculus of moral utilitarianism, a very common and popular ethical position. Your disgust is precisely his rhetorical point...

(Edit: ...about human heuristics and biases getting in the way of the moral calculus of utilitarianism. If you are a utilitarian, there is some cutoff of X independent people -- for some very large X -- for which it is better to save that many people a slight inconvenience at the expense of 50 years of torture for one individual. This follows straight from the math of finite utilitarianism. Yudkowsky's position of trusting the math over intuition may not be intuitive for most people, but I'd be surprised if a HN reader did not agree at least with the methodology.)

[1] http://lesswrong.com/lw/kn/torture_vs_dust_specks/


So annoying 3^^^3 people "for a fraction of a second, barely enough to make them notice" is worse than horribly torturing a single person for 50 years?

Keep in mind that the 3^^^3 dust specks are spread across 3^^^3 people and their annoyance can't be simply added together.


> and their annoyance can't be simply added together

Why not?

EDIT: Serious question. If I save two lives that is twice as good as saving one, right? Why is this situation different?


It depends on how you measure "good". :/

Most people measure goodness and badness by how they feel about it, and for anyone who feels a significant amount of badness (grief, anger, whatever) at the death of a single person, it is physiologically impossible for them to feel a million times worse about the death of a million people. It's intuitively obvious to most people, therefore, that suffering, annoyance, life-saving, and so on, are not additive: they just have to check how they feel about the situation to know that.

In order to suggest that they are additive, and that N "people annoyed" can outweigh M "people suffering", you have to first convince someone that their own internal measurement of goodness (how they feel about it) is not as accurate as some external measurement.


It's analogous to saying that "most people believe the world is flat. the burden is really on you to show that the world is really round."

Which is true. But requires a willingness to counter one's own intuition when encountering contradictory evidence. Unfortunately the type of person that does that is uncommon.


I don't disagree with what you actually said, but your choice of analogy suggests that you believe that questions of morality are settled and have obvious, objective answers.


In some cases, yes. If you accept utilitarianism as the reductive explanation of morality, and assume some non-controversial terminal values, then all of morality is reduced to straightforward calculations.

"The only way to rectify our reasonings is to make them as tangible as those of the Mathematicians, so that we can find our error at a glance, and when there are disputes among persons, we can simply say: Let us calculate, without further ado, to see who is right." -Leibniz

Unfortunately we retain some ignorance on the correct nature of utility functions (finite? time-preference adjusted? etc.), and terminal values for humans are demonstrably arbitrary.
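The "straightforward calculation" finite utilitarianism commits to can be sketched in a few lines. Every number below is invented purely for illustration; none of these utility values come from Yudkowsky or the linked post:

```python
# Invented disutility values, in arbitrary units.
DUST_SPECK = 1e-9      # one momentary dust speck in one eye
TORTURE_50Y = 1e12     # 50 years of torture for one person

def total_disutility(per_person_harm, n_people):
    # Finite additive utilitarianism: harms sum linearly across people.
    return per_person_harm * n_people

# For any positive speck value there is a crossover population X beyond
# which the specks outweigh the torture:
crossover = TORTURE_50Y / DUST_SPECK   # 1e21 people here, far more than Earth holds
```

The entire dust-specks debate is then about whether the linear sum in `total_disutility` is the right aggregation at all, not about the arithmetic.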


>If you accept utilitarianism as the reductive explanation of morality

... then LW ends up with Roko's Basilisk.

Really, you're using that as your answer to "I don't disagree with what you actually said, but your choice of analogy suggests that you believe that questions of morality are settled and have obvious, objective answers." You can prove anything if you first make it an axiom.

You can't seriously claim that utilitarianism accurately captures human moral intuitions. Variations on the Repugnant Conclusion occur immediately to anyone told about utilitarianism, and are discussed in first-year philosophy right there when utilitarianism is introduced.

LessWrong routinely has discussion articles showing some ridiculous or horrible consequence of utilitarianism. The usual failure mode is to go "look, this circumstance leads to a weird conclusion and that's very important!" and not "gosh, perhaps naive utilitarianism taken to an extreme misses something important."


And why, even while being an atheist, would I accept utilitarianism over Jesus Christ in this case?


For more or less exactly the same reason you accept general relativity over Aristotelian motion - it is derived from first principles using maths, can be shown to match experience even if somewhat intuitive to people, and works pretty well in practice.


> can be shown to match experience even if somewhat [un]intuitive to people, and works pretty well in practice.

I think these are the two points that those skeptical of utilitarianism have trouble with: it's exactly that it doesn't seem to match experience that started this thread. Additionally, it doesn't actually seem to work well in practice: http://econlog.econlib.org/archives/2014/07/the_argument_fr_...


It's easier for two people to cope with one bad experience each than for one guy to cope with two bad experiences.


Assumed independence. You are talking about a strictly different setup. Assume that these individuals don't know each other.


I don't think he is making the assumption that they know each other.

Each individual has a tolerance of what they can comfortably cope with. If 3^^^3 people were all experiencing a pain that is below that tolerance, nobody would be prevented from happiness. However in the other situation, the tortured individual clearly would be.


That's an example of infinite or unbounded utility functions: no matter how many specks of dust in the eye, it will never add up to a single person being tortured for 50 years. Even 3^^^^^^^^^^^3 specks of dust. Unfortunately the mathematics of infinite and/or unbounded utility functions doesn't work out well. It leads to some seriously messed up edge cases. (So does finite utilitarianism, to be fair -- [Pascal's mugging](http://www.nickbostrom.com/papers/pascal.pdf) -- but these are fully dealt with by decision theory, whereas the infinite or unbounded cases are not). It's not very strong, but it is evidence that we should be accepting of the calculations of finite utilitarianism since the formalization works out better in cases which are within the realm of our experience.


Talk to an urban planner or someone working in disaster relief.

Empathic "strangers" help others, in many contexts, often at personal risk.

That's what separates humans from singularitarian quantum computing devices.


I'd say if each of those two people had the "bad experience" of having their only child killed in a car accident, that's worse than someone else having his uncle and grandmother killed in a car accident. "Bad experience" is hugely oversimplified. And let's not even start on a trillion specks of dust in the eyes of a trillion people.


To quote Heinlein: "Men are not potatoes."


To put it differently, since we are rationalists, try the scientific way. Do an experiment.

First day, let a speck of dust enter your eye, at noon. Before sleeping, write down how you feel about that event.

Next day, rip your balls off at noon. Before sleeping, write down how you feel about that.


Let me know how it goes.


Because at the end of the day, no one gives a fuck about a speck of sand in their eyes. Having your balls ripped off might be different. YMMV. Human feeling is a bit more complicated than just adding.


What's the probability that having speck in the eye at any given time has some terrible consequences (like, for instance, causing traffic accident, or making a mistake when doing a brain surgery)? If we assume that on the whole Earth, at any time there's at least one person who would suffer greatly from speck in the eye, the probability is at least 1/8e9.

Now notice that 1/8e9 of 3^^^3 is more people than have ever lived.


I assumed there to be no side effects to the dust specks. Otherwise millions would die in accidents and the thought experiment wouldn't make much sense.


It doesn't assume that we can simply add their annoyance together. It assumes that, whatever function computes the total badness of some bad thing applied to multiple people, it diverges and does so fast enough (where "fast enough" will probably include "exceptionally slowly", but in principle there are sufficiently slow divergent functions).

I don't think this is well enough established, though it seems plausible.
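To make "sufficiently slow divergent functions" concrete, here is a toy sketch (all unit values invented, nothing from the thread) of an aggregator that grows only logarithmically in the number of people yet still diverges:

```python
import math

# Invented badness values, in arbitrary units.
SPECK = 1.0        # one dust speck
TORTURE = 1e15     # 50 years of torture

def aggregate(per_person_badness, n_people):
    # Badness across people adds far slower than linearly...
    return per_person_badness * math.log(1 + n_people)

# ...yet log(1 + n) is unbounded, so aggregate(SPECK, n) exceeds TORTURE
# once n > e**1e15 - 1. That population is unimaginably large, but 3^^^3
# dwarfs it, so even this very slow aggregator eventually flips the answer.
```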


I'm inclined to think that there is some threshold below which that function is constant. "Dust in eye" is definitely below this threshold.


I think that's illusory, at best. Does time play a role? If not, this would lead to "Someone has ever got a dust speck in their eye. It is now no longer a bad thing when someone else gets a dust speck in their eye." Which seems absurd. In fact, given that we live in a universe where someone has ever got a dust speck in their eye, it would mean that getting a dust speck in one's eye is no longer the "least bad bad thing" that could happen, which violates the assumption of the thought experiment and you should substitute something more severe.

If time does play a role, you can spread out the 3^^^3 dust specks. (Obviously, we don't realistically have enough time, but then we don't realistically have enough space either).


If your friend has dust in their eye, do you feel the urge to help or comfort them? Is that good, bad, utilitarian?

If you personally torture someone, would that have any effect on you? Is that good, bad, utilitarian?

Is there a recursive relationship between observers and observed systems? If so, does the intention of an observation matter?


What do you mean "good, bad, utilitarian"? Good and bad are descriptions we attach to causes which lead to outcomes with higher (good) or lower (bad) utility.

Having either my friend have dust in his eye or some stranger tortured are outcomes with less utility in my book than the counterfactual default. Actions which lead to these outcomes are bad, actions which prevent or provide restitution are good.

I don't know what you are getting at with a recursive relationship between observers and observed systems.


> What do you mean "good, bad, utilitarian"?

I'm asking if emotion (e.g. desire to help friend) has higher or lower utility.

> recursive relationship between observers and observed systems

See page 2 onward in this cybernetics paper

http://www.nomads.usp.br/pesquisas/design/objetos_interativo...


It's funny but this article is actually the strongest evidence of cult behaviour. Teaching you how to be "happy" is pretty much the initial selling point of almost every cult out there. Just walk by any Scientology building.

Also from the wiki you linked [1]

> Some people familiar with the LessWrong memeplex have suffered serious psychological distress after contemplating basilisk-like ideas — even when they're fairly sure intellectually that it's a silly problem.[5] The notion is taken sufficiently seriously by some LessWrong posters that they try to work out how to erase evidence of themselves so a future AI can't reconstruct a copy of them to torture

Wow either that article is hyperbolizing this or... wow.

[1] http://rationalwiki.org/wiki/Roko%27s_basilisk#The_LessWrong...

[5] (reference from the quote) http://lesswrong.com/lw/fq3/open_thread_december_115_2012/80...


> It's funny but this article is actually the strongest evidence of cult behaviour. Teaching you how to be "happy" is pretty much the initial selling point of almost every cult out there.

This is nonsense. It's an article posted to a curated community blog by some guy who happens to like reading tons of psychology papers and wanted to do some research into the state of scientific understanding of happiness. Is Wikipedia selling a cult too now[0]?

[0] - http://en.wikipedia.org/wiki/Happiness


> This is nonsense. It's an article posted to a curated community blog by some guy who happens to like reading tons of psychology papers and wanted to do some research into the state of scientific understanding of happiness...

...and is executive director of MIRI, the organization hosting and providing most of the content to Less Wrong.


Which would be an equivalent of pg posting his essay to HN, and then people accusing HNers of cultishness because of that essay.

Hmm... actually, that happens every now and then and is exactly as fair as the accusations against lukeprog and LessWrong.

Also, MIRI was created years after LessWrong.


MIRI is the continuation of SIAI which predates (and started) LessWrong.


Though Luke's involvement does not, not that that really means much.


Luke didn't walk by Scientology, he walked in:

http://lesswrong.com/lw/58m/build_small_skills_in_the_right_...

You'll see me in the comments going "WHAT THE HELL, HERO."


You should take Slate[1] with an even bigger grain of salt. While Less Wrong is a canonical illustration of all the problems with high IQ people, at least they try to be accurate in their beliefs.

Which is the actual purpose of the site. Singularity, consequentialism/utilitarianism, and so on are side effects.

[1] I'm guessing http://www.slate.com/articles/technology/bitwise/2014/07/rok... right?


I'm not sure what you mean. I've been following LessWrong for a while now, but I've never seen that Slate article before. I should be clear that most posters do not agree with the idea itself, as it is fairly absurd. What's more interesting are the thought processes that might lead one to consider such an idea, which are in full evidence throughout the site.


I do not understand why there are so many haters of LessWrong. It's not a cult. No one on that site has ever bought into the Roko idea. Also, Roko's post is poorly explained there. However, everyone thought it was nonsense at the time and still does. Imagine if the most inane nonsense posted to hacker news was used to dismiss everyone who posts here.

As for nanotechnology: it is possible (biology is proof of that), though some of the early visions may have been naive. It's likely that artificial systems can be created using a broader palette than natural selection does, and that these technologies will be powerful.

>That it is justifiable to kill a man if it would remove a speck of dust from the eye of every other person on Earth

No, he wrote that it was justifiable to torture a man to prevent 3^^^3 dust specks from floating into 3^^^3 eyes. 3^^^3 is a mind-bogglingly large number - far far far far far more than everyone on earth.


> I do not understand why there are so many haters of LessWrong.

A lot of us have ended up in places where they have tried to force their beliefs on us (a huge one being "debate is always the correct way to understand things, and is never inappropriate in any situation ever" - the concept of, say, cooperative discussion, or even caring about people's emotions and sense of self, seems to have never occurred to any of them that I have met).

It's rather understandable that many would see them as a cult, given the religious fervour that many of them have in forcing their beliefs on others.


>Or even caring about people's emotions and sense of self, seems to have never occurred to any of them that I have met.

But that's not relevant to a community focusing on rational discussion. And it's the exact opposite of what cults do - they try to shower their followers with love. Not caring too much about people's emotional attachments to their beliefs is appropriate on LessWrong. It is inappropriate outside of that context.


> It is inappropriate outside of that context.

Yes. The issue is that many seem to have the complete inability to understand this. I don't care what they do in their own IRC channel or community; it's entirely inappropriate to attack somebody without their consent anywhere else, and yet they do.


>A lot of us have ended up in places where they have tried to force their beliefs on us

Really? I guess 'they' being less wrong enthusiasts. I'm honestly curious where as I have not come across this stuff much.


Various IRC channels across freenode; generally social spaces.


You forgot to take a jab at cryonics. Eliezer sincerely believes that people who do not sign-up their kids with a cryonics package are "lousy parents".

http://lesswrong.com/lw/1mc/normal_cryonics/


And there are people who sincerely believe cryonics is some sort of weird pipe dream. Can you believe it?


Where have Drexler's nanobots been credibly discredited?


Every real-world chemist who looks into the idea.


And yet, surprisingly, you don't have a reference.


Maybe you disagree with some of the conclusions, but do you disagree with the methods it preaches? Ie. reductionism, awareness of our biases, bayesian inference.

There do seem to be a lot of extreme viewpoints on LessWrong, so I think it is justified to take those extreme viewpoints with a grain of salt. But I also think that the core beliefs/approaches are valid, and so that should be factored in to how big a grain of salt you take things with.


I think they're probably a little too focused on the Bayesian interpretation, but yes, the site has plenty of good content. In particular, it is a really excellent way of finding effective charities. Where the site goes astray is that it has its own preconceived biases--e.g. its priors for the eventual development of a transhuman AI.


I agree with your remarks. One reason, however, why I still enjoy reading articles on the site from time to time is that they almost always make me think. Their perspectives, however flawed, have the merit of being strongly coherent, and trying to find what's wrong with an article turns out to be a very good exercise in judgment.


I agree with the grain of salt, but you're being overly dismissive. Calling them a cult surely is over the top.

It's just a bunch of grumpy nerds with a somewhat hardline "rational" fanaticism, who love to spend late dark nights on the internet reasoning about stuff.


[deleted]


I'm a member of their IRC channel and have no idea what the channel invasions thing is about. What have I been missing out on?


> that it is justifiable to kill a man if it would remove a speck of dust from the eye of every other person on Earth

Maybe Eliezer has argued that too, but I doubt it. He has said it about a much, much, much larger number of people than everyone on Earth.


Sure, it's probable that I'm misremembering the exact numbers. Though I realize that the numbers are critical to the proposition from his perspective, I think the fact that he would consider it at all probably differentiates his worldview from that of a lot of people.


"Sure, it's probable that I'm misremembering the exact numbers."

The exact number makes "the population of Earth" a tiny rounding error away from zero. Eliezer could be entirely right in his conclusion here ("core belief" is a gross mischaracterization - it follows from other things, not the other way around) and it could still be wrong to torture a man for a minute to spare dust specks in the eyes of a billion Earths (and that's not even getting us much closer to the number in question).

That he would consider it at all differentiates his thinking only from those who don't think about these things. "What happens in the extreme?" is a useful question, when trying to pin down how systems work.


>His core beliefs include that the obvious goal of the human race is immortality, that it is justifiable to kill a man if it would remove a speck of dust from the eye of every other person on Earth, and that the most important thing we could possibly be doing right now is devoting all of our time to developing a mathematical model of a "safe AI."

Positive proof that IQ doesn't equal smartness, much less wisdom.


Positive proof that you can make everything sound ridiculous if you half-assedly read about it, cherry-pick some sentences, and misinterpret them as it suits your preconceived opinions. Saying that GP's criticism is just inaccurate would be extremely charitable.


>Positive proof that you can make everything sound ridiculous if you half-assedly read about it, cherry-pick some sentences, and misinterpret them as it suits your preconceived opinions.

Sure. But I can't conceive ANY possible argumentation and further discussion that accepts the above points that is not also silly.


I would have trouble doing that as well; fortunately, you should not accept the points above because they are either non-issues or seriously misrepresenting what Eliezer believes and ever wrote.

It's fair to evaluate and criticize opinions, but as maaku said upthread, one should play fair.



