This tone is what makes it difficult for me to read LessWrong, and Bayesian material in general. I'm sure it's unintended, but when I wanted to look into Bayesianism, the only thing I found was an air of superiority – and I'm supposed to be the target audience (I feel strongly about assumptions). In the end I concluded it's mostly an inner-platform effect: you include a configuration system and then claim your software is better because it can meet the client's needs perfectly; all they have to do is tell it exactly how they want it to behave.
We don't need to hear about what everyone does wrong unless it's accompanied by how to do it right. I hardly think it comes as a surprise to most people that they make unfair assumptions in order to get on with life. A method for detecting false assumptions – now that would be nice. We could call it "science".