The Happiness Code: Cold, hard rationality (nytimes.com)
128 points by applecore on Jan 15, 2016 | 61 comments



Their method boils down to "think about what you're doing." It sounds simple enough, but it's something that takes practice, and as someone who does practice it, the lack of practice in others becomes apparent. We tend to shoot ourselves in the foot, and the control and focus we need to get things done is often a simple matter of not doing that, rather than of gaining some new or special insight.

The reason many of us end up on Facebook, addicted to cigarettes, buried in credit card debt, or subscribed to Time Warner is not that we suck. It's because there are seriously effective forces actively trying to get us to do these things. Cigarettes maybe not so much as before, but it still comes down to them being physically addictive and their ads being effective. The engineers at Facebook aren't trying to help you not waste time; they're actively seeking to keep you there. So in the end, we're not underestimating our own inadequacy. We're underestimating our collective competence at getting people to buy and do the things we want. We just find ourselves on the wrong end of it too often.

And regarding happiness... At the end of the article there is a guy talking about how he gets up an hour early to do the things he wants... That's pretty much it. Happiness is also a skill because it takes practice, and the best way to practice it is by making other people happy. Once we get good at recognizing the simple causes of happiness, we can then do the same with ourselves, almost as if we were just another person. And it does boil down to the simple pleasures such as "making coffee and listening to Moby-Dick". There really isn't much more to it.

So practice thinking about what you're doing, and practice making everyone happy. Include "yourself" in "everyone". Brilliant!


> It's because there are seriously effective forces actively trying to get us to do these things.

You will probably enjoy "The Shallows" by Nicholas Carr if you're on the receiving end of this or "Hooked" by Nir Eyal if you're on the production end of this.


Your post made me happy :) So thanks!


Jesus, this sounds so complicated. Mental models of self-behavior are effective ways to change those behaviors, but it sounds like these guys are piling way too much on at once. My most effective behavior change models have always been simple and almost singular. The one I'm exploring now is "Dopamine in your brain correlates with motivation." Others have been "Plan ahead all at once so you aren't tempted to procrastinate", "Measure it and visualize it to drive yourself forward", etc. I've effectively changed my behavior in massive ways by playing with simple models of myself. Constantly thinking of all those cognitive biases and mental errors seems like it would exhaust me to the point of paralysis.


I agree with the sentiment. I don't think there's a solution in focusing your rumination inwards - in my case it's a good way to feel more anxious.

But I do have a bit of learned rationality in me, in the form of "I know I will commit error x in the future, therefore..." So rather than try to become a master of self-control, I make moves to arrange the world around me so that there are safeguards, the error is insubstantial, and I end up on the happy path automatically 90% of the time.

Similarly, I focus personal development on going "from strength to strength" rather than on shoring up weak areas. If I start from the models I know and keep extending them out more and more, I will eventually get to the things I was overlooking. One thing I am doing now is using Streak Club [0] to develop, over a long period of time, a single habit that I would ordinarily excuse myself from.

[0] https://streak.club/


Awesome. I've managed to get a consistent meditation habit built using streaks. My longest chain was a little over 100 days straight!


As someone who's been to a CFAR workshop, I think their problem is actually the opposite.

The article's description makes it sound complicated, but the fact is that a lot of the techniques are pretty simple when it comes down to it. And that's the problem: they're so simple that it's hard to get people to realize how useful they are. Most people will hear a description, shrug, go "makes sense I guess", and then forget all about it.

Take TAPs (Trigger-Action Plans), which are mentioned in the article. They're pretty much what they sound like: plans of the form "when [trigger], then [action]". "When I see stairs, I'll take them (rather than using the elevator)." Not something that sounds very revolutionary. But if you dig into the psychological literature, when people are instructed to set goals and use TAPs (the psych literature calls them "implementation intentions") to plan out exactly how they'll achieve those goals, they get better success rates than people who don't. (If you want a reference, see e.g. [1].)

There's a bunch of stuff about what makes for a good TAP and how you should use them to get the best results, but even if you include all of that, it's still not very complicated. The guidelines are stuff like "make your trigger a concrete, specific thing such as 'when I see the stairs', not a vague one such as 'before dinner'". Not rocket surgery.
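If it helps to see just how mechanical the idea is, here's a toy sketch of TAPs as nothing more than trigger-action pairs you rehearse (purely my own illustration - the pairs and names below are made up, and this isn't anything CFAR or the literature prescribes):

    # Toy illustration only: a TAP is just a concrete trigger paired with an action.
    taps = [
        ("I see the stairs", "take them instead of the elevator"),
        ("I sit down at my desk", "write down the day's single top priority"),  # hypothetical example
    ]

    def rehearse(taps):
        # Most of the technique is mentally rehearsing "when X, then Y".
        for trigger, action in taps:
            print(f"When {trigger}, I will {action}.")

    rehearse(taps)

The point isn't the code; it's that the entire technique fits in a couple of lines of plain English, which is exactly why it's so easy to shrug off.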

So here we have a simple, straightforward technique that has research support behind it - and getting people to use it is hard. It just doesn't sound exciting enough, or obviously useful enough. Heck, I too often just forget that I could use TAPs for something.

Most of their techniques are kinda like this. Simple, useful, and not terribly exciting by themselves.

So - I don't know if this is their explicit motivation for having a three-day workshop, but I think this is a big part of why the format works: instead of trying to make the individual techniques exciting, they throw everything and the kitchen sink at you within a short time period. Rather than relying on any single one of them making you go "wow", they rely on the sheer number of them to make you go "wow" - and then, hopefully, actually adopt at least some of them, as well as internalize the mode of thought that lets you come up with your own.

It seems to work okay.

[1] Gollwitzer, P. M., & Sheeran, P. (2006). Implementation intentions and goal achievement: A meta-analysis of effects and processes. Advances in Experimental Social Psychology, 38, 69-119.


I suspect that the higher one's level of (intelligence + arrogance), the more sophisticated the models one requires to even be able to buy into a system or philosophy that engages one in growthful introspection.

For most people, traditional self-help methods, time with friends & family, or religious prescriptions work wonders. For a certain type of mind, though, I suspect it helps to have a more intricate (more scientific?) springboard from which to explore the intricacies of yourself and how to arrange the pieces together in a positive, productive way.


I think you hit the nail on the head.

Isn't it irrational to expend so much mental energy and time on trying to be "rational" in the first place? Seems counterproductive to me.


As someone once said to me when I proposed running a diet-related experiment on myself, "You can't live your life in a research lab!"

To which the answer is, of course, "Maybe not, but I can certainly spend two weeks."


To continue the metaphor from the article, once you have steered the elephant in the right direction, it won't take a lot of input to keep it on course, just the occasional nudge.

You don't have to analyze every choice, but rather do a periodic (say, daily) check-in on the sum of your choices and make any necessary course corrections.


I think "value neutral" is an important phrase. Getting many people to spend time in bureaucracies, creating uninspiring things, requires a lot of psychological infrastructure to tolerate the psychological damage over long periods of time.


Is self-evaluation of one's "modus operandi" really a recipe for happiness? Not to be a smart-ass, but throwing around confident statements about rationality and charging people $3,900 just smells like "Garage Scientology."


In general, it's a bad idea to form opinions of people from a news story about them -- sometimes accurate news doesn't make for a good story. The CFAR folks are... what's the opposite of cocksure? Epistemically cautious?

Here's an illustrative example. The New York Times article says this:

"[...] Afterward, participants are required to fill out an elaborate self-report, in which they’re asked to assess their own personality traits and behaviors. (A friend or family member is given a similar questionnaire to confirm the accuracy of the applicant’s self-assessment.)"

What the article doesn't mention is that the reason they give such a questionnaire to a friend or family member is to cut down on self-report bias in their one-year longitudinal study of whether or not they're getting real results [1]. People can trick themselves into thinking that their lives have changed for the better after spending time and money on something, and the ask-a-friend thing is an attempt by the CFAR people to avoid getting a rosy-tinted picture of their own efforts.

They could get away with being a lot less honest.

[1] http://lesswrong.com/lw/n2g/results_of_a_oneyear_longitudina...


Scientology was the ramblings of a narcissist. It emerged from many dubious sources: pop psychology, stage hypnotism, and a wide variety of other bullshit, all piled together into a reckless mass of bad ideas.

What CFAR is doing is based on a reasonably solid foundation of current science. Expensive? Yes. I'm unlikely to ever pay $3,900 for the program (though I can definitely see why a company might send its teams to CFAR workshops), but it isn't in any way cult-like. There's ample evidence that they're genuinely trying to evolve their program based on what works and what doesn't (the longitudinal study, the ongoing research, the frequent sharing of what they do and how across a wide variety of media). I see no sign that they want to build a cult, and lots of evidence that they genuinely want to help people make more rational decisions and be happier because of it.


I befriended one of the founders of CFAR a few years ago when visiting NYC, before she moved to Berkeley and started the organization. She genuinely practices what she preaches, and was doing so long before CFAR. It's interestingly disarming to meet someone who is so smart but has no fear of seeming dumb by asking lots of questions (including seemingly silly ones). It has stuck with me for years, in fact, and it's been one of my personal goals ever since to stop being such a damned know-it-all (mostly to myself), ask more questions, and engage more genuinely in conversation even when I think I'm smarter and know more than the people I'm talking to.

I'm sure their approach to rationality would have a different effect on different people, and the article covers the different ways people step out of their comfort zones. But my personal weakness, one that slows my own growth and development and impacts my productivity and general happiness more than most, is the "I am smart" armor that I built up as a kid (because I wasn't all that good at other stuff, being smart was the identity I embraced).

It has all sorts of negative consequences. I don't ask questions when I should, for fear of looking like I'm not smart and on top of everything. When complex things are hard, it is more frustrating than it needs to be, because most of the time I expect thinky things to not be hard (or at least, when in school and comparing my own performance vs. effort to others in the class, nothing was hard). A lot of it boils down to that "I am smart" belief, rather than taking an approach that accepts that being smart is a process and not a permanent condition.

Anyway, if hanging out casually for a couple of weeks with someone practicing this stuff has had a years-long and mostly positive effect on me, I would guess a formal program would be awesome.


What I don't quite get is why discussions of rationality so often mention threats from superhuman AIs. Is it just because that happens to be the interest of the key figures of the movement?


The answer to your question is basically yes. The "rationality" movement usually found on HN and in similar groups is the LessWrong crowd, which was founded by Eliezer Yudkowsky specifically to teach rationality in order to convince people of the threat of superhuman AIs.

While CFAR is technically disconnected from this, it's still part of the same crowd, so naturally a lot of the people involved with it (including a lot of the participants) hold the same beliefs.

You could make the case that, if you learn to think rationally about the world, you realize that superhuman AI actually is a threat. I'd even make this case myself. But that doesn't change the fact that the historically/causally correct reason rationality and AI are connected is that both started with Eliezer.


Ugh, there's a lot of argument about that in the cfar alumni community. Some folks take it for granted (why??? take things for granted about such an uncertain subject???) that we're just doomed unless you Give Miri Money(tm). Those folks tend to be pretty good about actually carrying out what would be reasonable behaviour in most other ways, if only they were right about that one thing - if it really were such a big deal, you'd want not to ignore it.

Meanwhile, another part of the alumni community actually understands the theory behind ai and machine learning, and those folks frequently end up in arguments with the first category about the topic.

The reason you hear about it is that the first category is a pretty panicked and hopeless group - the people who actually believe yudkowsky's "recursive algorithmic improvement" could give large improvements generally think that humanity "loses by default" if they do nothing. So they tend to be very, very into recruiting. Thankfully they're not so nuts about it that they'll never change their minds; the problem is that it takes a lot of explaining to convey the theoretical basis for why the recursive self-improvement thing isn't actually as scary as they think it is. No, it's not going to take an hour as soon as an ai is built; learning is hard, and humans are freakishly good at it.

Computers will beat us at data efficiency eventually, but it's gonna take a while, and current machine learning gets its good results by being data-inefficient and throwing large amounts of data at the problem. And the best you can do isn't good enough to make miri's monster - unbiased, maximally data-efficient Bayesian inference doesn't actually fit in the universe in either a time or memory sense if you try to build a full ai out of just that one thing. And approximating it is, you guessed it, less data-efficient.


Okay so: 1.) preaches about new destroyer god 2.) saviors who possess the secret knowledge of salvation 3.) give them money 4.) think correctly, not incorrectly

Yep, it's a cult.


Eh... Sort of? The two organisations together would form a cult, but the community split I mention makes it a bit confusing. Overall, I agree that miri's level-of-cult is too damn high.


Interestingly, it's an ancient form of cult called Gnosticism. Gnostics teach that the material world is evil ("gives rise to the AI"), and that only through hidden knowledge ("correct thinking") can one find the true path to spiritual salvation ("become an immortal human").

https://en.wikipedia.org/wiki/Gnosticism


> Ugh, there's a lot of argument about that in the cfar alumni community.

I take it you're involved with parts of the community outside the CFAR mailing list? I haven't seen it on there. There's certainly argument about it within the broader LW community, but the CFAR alumni community is distinct (but overlapping).


primarily in person, yes. presumably primarily from people who frequent less wrong. I don't, so I don't really know.


You seem to assume that the belief that humanity "loses by default" is wrong. I'm curious to hear why, since you seem to know about this topic. (And because I disagree with your view, hearing your reasons might educate me.)

"Some folks take it for granted (why??? Take things for granted about such an uncertain subject???) that we're just doomed unless you Give Miri Money(tm)."

Are there really people who would put it that way? I know plenty of people (including myself) who think it's a good idea to donate to MIRI, but I certainly wouldn't put it in the terms you did.


So as I was writing a reply, I found that my opinion doesn't actually differ on "do we need to do a bunch of stuff"; it's more that I don't think miri is taking a useful approach. After talking about this with a friend who is also in ml, I think the key point I should make is that it's more a matter of engineering control tools that let us spy on its thoughts. If we can build an agi, we can also build safeguards that it doesn't realize are there until too late.

Anyway, here is the original comment I was going to write. You can read it and extract your thoughts; I'm fairly confident I got the theory right, but I've reduced my confidence in the point/counterpoint. Regardless, I definitely think miri's position is unreasonably extreme, and this is a pretty ok explanation of why.

https://gist.github.com/lahwran/99d84c3f8461ece9153f


Thanks for the detailed reply!

So firstly, I'm glad we agree on the core idea - some effort/funding should be put into this problem. You don't think MIRI is the best vehicle for such funding - that's understandable, although I'm semi-biased in their favor as they probably brought the most attention to this issue, and have been working on it for a long time. I assume that they have good reasons for the approaches they're taking right now. But again, not believing in MIRI or in a foom event is totally compatible with still believing we should be doing something about this problem, so you and I are pretty closely in sync.

Having said that, I read through your notes, and I do have a major objection. (You seem to understand more of the technical details than me, so take this with a grain of salt.)

I think your argument boils down to one main thing - that humans are already pretty much "efficient" with respect to data usage, because of evolution. If we take away that belief, then your reasons for thinking we can't build a superintelligent AI go away - we would be able to squeeze more efficiency out of it.

So here's the problem - I don't understand why you believe that humans are data-efficient. I understand the idea behind it, but it seemed to me like the only explanation for data efficiency is that evolution would've taken care of it. You even mention that the default argument against this point is that evolution might get stuck in local optima, but that it's very hard to do better.

But there are 2 counterarguments to this:

1. I don't think it's all that difficult to do better than evolution, since we do it regularly. We defeat diseases. We build machines that are faster/stronger/etc. than anything evolution has "come up with". Etc.

2. I don't think evolution was "optimizing" for intelligence anyway. Our intelligence, however limited, was enough to make us the most powerful species on the planet. Biology faces very hard trade-offs in squeezing more intelligence into us, but this isn't true of digital equipment, which doesn't have to be based on hardware that, e.g., has to be born and therefore has to squeeze through the birth canal. These are limits that evolution was "working under", but that we don't have.

Note: All anthropomorphizing of evolution done figuratively.


No, there are more dimensions than just data efficiency; what I would say is that humans are just about as data efficient as you could possibly hope for _at that wattage_. It's easy to do better than evolution at something it wasn't trying to do, I agree - and this is the point where, as I thought about it, I realized I do actually think there's some concern. But I don't think we'll be able to do it on one gpu, because:

- human evolution has spent quite a while in an adversarial environment - the smarter you are, the more you win

- a recent finding of neural network research is that local minima are kind of not a problem in very highly dimensional spaces, as long as you have a problem that is smooth and has optima. If it has any optima, then in very high dimensions there's probably always something you can change that will keep you moving towards the optimum. While evolution may have gotten stuck in a general class of architectures - neural ones - it seems very much like there are many dimensions along which it can change the brain, and that changing the genes for it slightly will change its performance slightly (fsvo slight).

- evolution, in species that have learning systems in the first place, optimizes for intelligence per watt. Energy is very costly in the wild, so finding algorithms that work well with low power is very important. It so happens that algorithms that minimize power usage are theoretically tied to algorithms that compress well, but the key thing is that evolution has had a crapload of optimization time for tuning the brains of mammals in general, and then humans got into this runaway optimization process - which seems to have made us smarter primarily by making our brains use more power for the relevant parts.

I definitely think you could do better, I just don't think you're going to do it with a paradigm that looks vaguely like the brain, because if it looks vaguely like the brain, evolution probably passed it up on the way to the general architecture that mammals use, and the specific one humans use. Possible exception for gradient descent and weight sharing, because those would be difficult to implement in the brain, but that doesn't give you results hundreds of times better, and it's not even clear the brain doesn't do that - hinton has made the argument that it could.

the key thing here: if we make an agi with neural networks (which at this point is almost a for-sure thing), then going beyond human level on one gpu will be a very difficult research task, and it will take a lot of learning to figure out how to do. Which means we'll get a chance to control it using less formal mechanisms than miri demands of their work.

(I don't think miri's stuff will be done in time to be useful to anyone.)


Thanks for the reply!

I'm also curious about your answer to Eliezer's question about why you assume one GPU.

But a few other questions:

1. You say: "human evolution has spent quite a while in an adversarial environment - the smarter you are, the more you win". Again, maybe I'm missing something in modern evolutionary thinking, but why that assumption? I always thought the consensus was that we were only as smart as minimally required.

2. You seem to be under the assumption that today's most popular algorithms (namely neural networks) are definitely the thing that's going to become an AGI, but why that assumption? The broader idea of some algorithm/method bringing us an AGI is more probable than specifically neural networks.

You also write: "I definitely think you could do better, I just don't think you're going to do it with a paradigm that looks vaguely like the brain". Again, why the assumption that whatever will be built will have to even resemble the human brain?


Why on Earth would you assume that MIRI assumes one GPU rather than 10,000 GPUs?


Initially I was interested in reading a bit more about these exercises they do (sounds like a form of mindfulness, to be honest), but seeing a fee of $3,900 set off all kinds of alarm bells in my head.

Now I'm more interested in their marketing strategy with its obvious market segmentation of premium + suggestiveness. I'm also curious as to how they handle sales objections and how they approach up-selling.


> I'm also curious as to how they handle sales objections

A friend of mine told them he was interested, but hesitant because it was a lot of money. They asked him whether he knew anyone else who'd attended who he could ask. As it happened, he had me. I told him something like: it's really hard to know if this has had long term effects on me, I do think I'm happier than before, I'm definitely glad I went. My friend decided to attend.

> and how they approach up-selling

I'm not sure what you mean by this? I can't think of anything they'd need to upsell. I suppose, for the yearly reunion, they say "it's free to attend, but it costs us $x per participant, and if you could pay that much, we'd greatly appreciate it". You could count that as a form of upselling, and their approach to it is "please".


> I'm not sure what you mean by this? I can't think of anything they'd need to upsell.

Ah, ok. So it's a one-time sale only? No other products or services that they recommended to you afterwards?

May I ask you some other questions regarding their service, if you don't mind?


Yeah, once only.

Feel free to ask. I attended in... April 2013, I think, and then helped out at another of their workshops in I think November 2014. So I might not remember things in much detail, and I don't know what changes have happened since, but I'll answer the best I can.

Also, someone linked to this thread on the CFAR alumni mailing list, so others might see your questions and be able to answer better.

(Actually, another caveat: for at least some of the questions you could plausibly ask, the true-and-complete answer includes uncomfortable personal details that I'd rather not share. So still feel free to ask, but I may elide those details.)


Yeah, no, I won't ask anything too personal.

1. Why the high cost? Was this addressed?

2. Do you personally feel the cost was justified?

3. What benefits/improvements specifically did you notice afterwards?

4. And your friend? Did their life or attitude observably improve?

5. How much of what you learned is freely available on the net? In other words, is the course simply a curated set of information that is otherwise available elsewhere?

6. Do you still receive regular communications from CFAR, and if so, what do these communications consist of?

7.(last one!) Regarding the uncomfortable personal questions, was this information that CFAR recorded and archived?

Thanks!


1. It costs a lot to run. They have to rent space and provide meals, snacks, and stationery. The staff:attendee ratio is pretty high. And they need to meet their general operating costs - office space and staff - not just for the workshops but as an organization. They've written about their finances: http://lesswrong.com/lw/n39/why_cfar_the_view_from_2015/#fin... (you might also find the rest of that interesting).

2. Justified for them to charge, certainly. Worthwhile for me to pay, I think so.

3. Difficult to say, partly because I don't have strong memories of what I was like before attending. I think that post-CFAR I'm generally happier, more likely to try new things, and better at making phone calls. I expect other changes too. But I can't say that these are definitely attributable to CFAR.

It's also difficult because I don't often explicitly use the techniques they teach, but that doesn't mean I don't use them. E.g. about six months ago I went mostly-vegetarian. I realized that for the most part, I'm just as happy without meat as with it, but there are exceptions. So I let myself have those exceptions, and get almost all the benefits of vegetarianism with few of the costs. This was a form of goal factoring, and I don't know if I would have done it without CFAR. But I didn't sit down and think "okay, I'm going to goal factor this".

4. I'm not sure. This was too long ago, and I don't think I saw him super-often around that time period.

5. I suspect most of the specific techniques can be found elsewhere in one form or another. They run tests to try to find the best way to teach the techniques. There's also value in curation, and in having someone there to help debug and adapt to circumstances. ("You suggest committing to a specific time to do this, but I'm reluctant because my social life is pretty unpredictable." "Sure, maybe try picking a time, but giving yourself permission to change it, maybe up to three times.")

But what they're trying to teach isn't really techniques. This won't do it justice, but it's closer to: they're trying to teach the mindset that lets you generate the techniques. That lets you say, "okay, here's a problem. Let's try to solve it. Is this solution going to work? No. Okay, what could I do that would actually work?" and to come up with answers.

I'm not aware of any other resources trying to teach that.

6. They have an alumni mailing list, but CFAR doesn't post to it much as an organization. They sometimes ask for volunteers to help out at workshops, or if someone can help them scope out venues, or for beta testers for classes they're working on. Outside that, I don't receive anything from them (except automated thank-you emails when I donate).

7. Not at all. Some of it came up during conversations, either naturally or deliberately as something I chose for comfort zone expansion. There was also a session of "againstness training", which was even more optional than everything else, in which Valentine deliberately asked me uncomfortable questions so I could practice being in a super-uncomfortable state.

When I mentioned personal stuff, I just meant that - for example, if one thing I'd got from CFAR had been "I realized I could come out to my closest friends, but probably not my family", then I wasn't going to make that public knowledge. It wasn't to do with CFAR itself.


Cool, thanks for the reply


Cold hard rationality also means being honest with ourselves. The stories we tell ourselves may not be true, and this incongruity can cause a lot of harm: https://medium.com/@hypnobuddha/be-honest-are-you-lying-to-y...


Their videos are worth checking out. http://rationality.org/videos/

That said, I am not sure it is rational to pay $3,900 for four days of training. At least give me a celebrity like Tony Robbins.


As someone who went, I agree. I think most people who do it right now consider the high price to be a donation to help them scale, rather than an actual product-for-money trade. I'd pay $500, maybe, if I thought the marginal benefit they'd get from my going was negligible. That's how much conferences of that length usually cost, anyway. Also, it's totally right that the techniques aren't exactly the point - you don't go to a tech conference because you can't read about the things presented there elsewhere, you go because you won't focus enough on them. But it's definitely not 4k of value from the workshop, it's 3.5k of giving them momentum to refine and scale the org, and .5k of actual experience.


If it is effective, that is absolutely a reasonable price to pay. Whether it is reasonable given the uncertainty of the effectiveness is debatable.

I also imagine that the high price might be a conscious tactic to increase effectiveness: People will put more effort into something they paid a lot of money for.


There was an earlier discussion of the Center for Applied Rationality (the organization discussed, rather than the NYT article) here:

https://news.ycombinator.com/item?id=4751584


How about children's warm, openhearted instinctivity?)

Also, scientific psychology (opposite of popular meme jogging) tells us that pure rationality is a myth - a fictional concept of the mind. We are driven and motivated by hardwired, non-verbal heuristics, such as looks, status, health and beauty, which could be defined, at least for living beings, as youth + health/good-genes markers (lack of any age- or sickness-related deformities).

Is there any estimate of what percent of GDP and personal wealth is spent annually on booze, hookers and status items? Including all the money spent to impress potential mates?)

The more appropriate meme instead of "rationality" would be "understanding" (and joy instead of happiness) - I do, more or less, understand "how it works", so I can enjoy it occasionally, but not so often that I become a mere numb consumer. Read about Dorian Gray also.)


HPMOR was just referenced in a NY Times article. Awesome.


The tl;dr version of this is that "Applied Rationality" is a New Age cult whose God is "Rationality" and whose Devil is human nature, rather than watered-down Eastern mysticism.

The creepy part is how they expect you to live together and unquestioningly repeat the mantras ("techniques"). It reeks of Dianetics and other bullshit that claims to be a cure-all for all of your problems.


This is a bit disingenuous but not completely. I think a comparable service would be outdoor wilderness survival training, like BOSS in Boulder. Part of what makes it effective is that you are dislodged from an environment in which you can comfortably cling to the heuristics you already use to get by. In the new environment, you have to adjust all your norms and it provides more of a blank slate, cognitively, on which to imprint the lessons.

The CFAR stuff is like this too, but rather than being an outdoor wilderness survival school, it's just a survival school. It's even weirder than what it would take to survive in the wilderness, because the space of mental tools is so much more vast than the space of physical tools tailored to one type of environment.

I would guess that many CFAR employees would like their service to feel more like a "boot camp" sort of thing -- a transformative experience in which the intensity of learning and the bandwidth demanded are extremely high compared with what that intensity and bandwidth will be back in regular life. But I also think they don't want it to feel like an indoctrination, and would want to preserve and even enhance someone's ability to be skeptical, even about CFAR itself.

In that sense, promoting self-skepticism, CFAR is very different from a cult, and just because it shares some superficial aspects of a cult doesn't mean it's fair to make that comparison.

But, but, I still do agree with you that CFAR has work to do to prove that they are not just a marketing engine fleecing bored rich people who fancy themselves seeming like philosopher savants or some shit. Merely having verifiably good, open content, like the LessWrong sequences, is not enough. They further have to show that they are willing to change, and verify that they aren't just a certain kind of boutique fraternity.

I for one would really welcome hearing ideas about alternative ways to teach rationality. For example, I recently read the science fiction book The Black Cloud by Fred Hoyle, and I was particularly interested in a part of the book where humans communicate with a far more intelligent being. Hoyle's writing is fun and all, but what I really thought was cool was the idea of a human (Hoyle himself) trying to emulate a being far smarter than him, and how believably he did this. But of course, on closer inspection, we should expect that Hoyle's portrayal would not be good enough, or else such superior intellect would be in our grasp merely by imagining how it should sound.

I think I got more out of reading that fiction book, in terms of thinking about how to think better, than I did out of vast swaths of LessWrong. Maybe that says more about me than anything else, but it is a data point that maybe there are all kinds of ways to elucidate the useful tools of rationalism, and the format of CFAR might not even be close to optimal unless your goal is to vend a status merit badge to a certain set of semi-wealthy people.


The classic cult tell is the fact that if you strip out all the claims to spiritual and moral wisdom, cults exist to service the leadership with money, narcissistic strokes and a sense of authority, sexual opportunities, and free labour. (I have a non-scientific theory that this is how religions propagate. They're such an effective way to provide all of the above that whatever the dogma, the social dynamics are just too attractive for weaker individuals to ignore.)

Aside from money, it's hard to see how that applies here. (If Yudkowsky was running this personally I'd definitely be concerned.)

But while there are obvious culty elements here, there doesn't seem to be a funnel which uses introductory bootcamps/workshops to find the most suggestible converts so it can sell them more and more expensive follow-ons. There also isn't any sense that there's a "reward" scheme where loyal followers are allowed into an inner circle - from which they can be publicly purged if they misbehave.

It looks more like there are some interesting brain hacks on offer, packaged into a format that's maybe too intense to be ideal.


By this definition I would argue that most SF start-ups are more similar to cults than CFAR is -- though as I said in my comment above, I do agree that CFAR hasn't conclusively proved yet that this is more than just a sort of Space Camp for bored rich people of a particular variety.


oh my god yes. I'm frequently creeped out by how much my various employers have wanted me to be TOTALLY AND ENTIRELY on board with their mission. like, I like making awesome software, and I like customers enjoying it, but plz no I do not want to devote my life to x thing just because it's both fun and gives me money. If I'm going to devote my life to anything, it's going to be doing something like building computational models of the genome or something.


> maybe there are all kinds of ways to elucidate the useful tools of rationalism, and the format of CFAR might not even be close to optimal unless your goal is to vend a status merit badge to a certain set of semi-wealthy people.

I've been keeping an eye on CFAR for a while, and they'd likely agree that there are a bunch of ways to teach this stuff. They're just personnel- and money-constrained, and pedagogical-methods R&D already takes up enough of their budget.


Related blog posts, by a past instructor (possibly co-founder? I forget): http://acritch.com/cfar-scaling/ http://acritch.com/cfar-altruism-and-core-vs-labs/


You're being down-voted, but I was about to say the same thing (minus the bullshit part). They do seem like an evolved version of the same old New Age philosophy, which is an evolved version of age-old Eastern mysticism, which is technically all religion in the traditional sense. Whatever.


I'm confused. There are a lot of things I might associate things like "behavioral economics" with, but New Age wouldn't have been what I'd have expected. Where do you get that from?


Actually, the proper definition (for the sake of understanding) is to treat this phenomenon as psychological therapy sessions. Based on my reading of the source, this group and the people involved are engaged in group therapy sessions. Now, one can draw associations with religion, fairy tales, science, etc., but the essence stays the same. It is all about human beings trying to find meaning in their lives.


You do what you want to do. Ask yourself what you want to do, and why you want to do it.


What other approach do you expect from a place/culture like that?


Not cool. Please post civilly and substantively, or not at all.


Judging by the user name I doubt we'll be seeing this person again.

edit: Just kidding, they have 201 karma.


I'm not great at naming.


Too many drugs from Haight-Ashbury...


Generalizations about Silicon Valley are a tedious media game; please let's not make it worse.




