The kind of outlier-culling technique suggested by civilized is not recommended these days because it adds unprincipled choice points to what Andrew Gelman calls the 'Garden of Forking Paths' [1, 2]. It is therefore bad for hypothesis testing, which tends to be what most statisticians care about.
Additionally, the technique obscures the relationship between the sample variance and the population variance if we do not have reliable knowledge of the population distribution; likewise for the mean if the mean is not close to the mode. These problems can be quite dramatic for long-tailed distributions.
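To make the variance point concrete, here is a small Python sketch (my own illustration, not from the original comment) using an Exponential(1) distribution, whose long right tail makes the effect visible: symmetrically culling the extreme 5% from each side substantially shrinks the sample variance relative to the true population variance of 1.

```python
import random
from statistics import variance

random.seed(1)
# Exponential(1) has population mean 1 and population variance 1,
# with a long right tail.
sample = [random.expovariate(1.0) for _ in range(100_000)]

def cull_extremes(xs, frac=0.05):
    """Drop the lowest and highest `frac` of values (illustrative only)."""
    xs = sorted(xs)
    k = int(len(xs) * frac)
    return xs[k:len(xs) - k]

print(variance(sample))                 # close to the true variance of 1
print(variance(cull_extremes(sample)))  # substantially smaller
```

The culled sample's variance lands around half the true value here, and no amount of extra data fixes it: the shrinkage is systematic, not sampling noise.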
This post seems to be conflating a few different things:
1. Trimming for mean estimation, which removes extreme values in an algorithmic fashion
2. Subjective removal of outliers based on researcher judgment (this is the garden of forking paths Gelman talks about)
3. Estimating other distributional properties, such as the variance, with trimmed estimators
These come with different theoretical and practical risks and benefits. Trimmed means are perfectly good statistical tools, although, like anything else, they have their limitations.
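For concreteness, trimming in the algorithmic sense of point 1 can be sketched in a few lines of Python (my illustration; the `proportion` parameter, the fraction cut from each tail, is my naming choice):

```python
from statistics import mean

def trimmed_mean(xs, proportion=0.1):
    """Sort, drop the lowest and highest `proportion` of values, average.
    Illustrative sketch; scipy.stats.trim_mean does the same job."""
    xs = sorted(xs)
    k = int(len(xs) * proportion)  # points dropped from each tail
    return mean(xs[k:len(xs) - k] if k > 0 else xs)

data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 100]  # one large outlier
print(mean(data))               # 14.5 -- pulled up by the outlier
print(trimmed_mean(data, 0.1))  # 5.5 -- outlier (and the minimum) dropped
```

Note that the procedure is entirely mechanical once `proportion` is fixed; the researcher judgment of point 2 enters only in choosing that parameter.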
The choice of N used in cutting out the N% most extreme results is not determined by widely accepted statistical best practice, so it is a source of forks. The algorithm may be deterministic, but the choice of this parameter isn't.
My discussion of distributional properties raised a separate issue with this technique. You seem to have missed the point that dropping extreme points can also lead to biased estimates of the mean.
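As a sketch of that bias point (my own illustration, not from the thread): symmetrically trimming a skewed, long-tailed distribution such as Exponential(1) pulls the estimate below the true mean of 1, and the gap persists no matter how large the sample gets.

```python
import random
from statistics import mean

random.seed(0)
# Exponential(1): true mean is 1.0, with a long right tail.
sample = [random.expovariate(1.0) for _ in range(100_000)]

def trimmed_mean(xs, proportion=0.05):
    """Drop the lowest and highest `proportion` of values, then average."""
    xs = sorted(xs)
    k = int(len(xs) * proportion)
    return mean(xs[k:len(xs) - k])

print(mean(sample))          # close to the true mean, 1.0
print(trimmed_mean(sample))  # systematically below 1.0 (roughly 0.89 in theory)
```

The trimmed estimator converges to the mean of the *truncated* distribution, which only coincides with the population mean when the tails happen to cancel, as in a symmetric distribution.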
Ten years ago, dropping outliers was considered good practice in the social sciences. Today, it has become a reason for rejection in peer review. There are better techniques for dealing with noisy data, such as recording measurements of each data point's "badness" that can then be adjusted for in a multilevel model.
All but the simplest statistical estimators have researcher degrees of freedom (certainly including multilevel models) so it seems arbitrary to criticize the trimmed mean in particular for that "fault".
Similarly, any estimator can be biased if its assumptions are violated, so I'm not sure why the potential bias of the trimmed mean in particular is an interesting point.
I'm sure that social science peer reviewers have their reasons for their methodological preferences, but trimmed means are great workhorses in other areas of science, like signal processing.
The critique strikes me as potentially valid in its subfield but a bit parochial if it is attempting generality.
I don't deny the technique has its uses. The point is it is a poor technique to use if your goal is hypothesis testing, which, as I said, is what most statisticians care about.
I didn't reply to you, but to goodsector, who claimed that statisticians focus on efficiency at the expense of reliability. I dispute this.
[1]: http://www.stat.columbia.edu/~gelman/research/unpublished/p_...
[2]: https://statmodeling.stat.columbia.edu/2016/09/30/why-the-ga...