> the fundamental approach to statistical analysis is flawed
This is a very strong assertion. What changes do you think are needed?
> With enough data, good enough filters, and a wide selection of adjustments for background processes, any model can be made to work.
Sure. Which is why no scientist will care that a particular signal model "can be made to work". You try everything you can to explain your data with only background processes, and only if this fails do you consider alternatives. The more adjustments you allow, the harder it becomes to favor signal over background.
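To make that last point concrete, here is a minimal toy sketch (my own illustration, not any experiment's actual analysis; all bin counts, shapes, and yields are invented). It fits the same pseudo-data with a rigid background and with a background that has extra free "adjustments", then asks how much adding a signal term improves each fit. The more the background is allowed to flex, the smaller that improvement tends to be:

```python
# Toy sketch: background-only vs background+signal fits with different
# amounts of background flexibility. Everything here is invented for
# illustration; it is not any experiment's real statistical procedure.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(42)
x = np.linspace(0.0, 1.0, 40)                    # bin centres (toy spectrum)
bkg = 200.0 * np.exp(-3.0 * x)                   # smooth falling background
sig = np.exp(-0.5 * ((x - 0.5) / 0.12) ** 2)     # broad bump at x = 0.5
data = rng.poisson(bkg + 25.0 * sig)             # pseudo-data: background + small signal

def nll(pred):
    """Binned Poisson negative log-likelihood (constant terms dropped)."""
    pred = np.clip(pred, 1e-9, None)
    return float(np.sum(pred - data * np.log(pred)))

def fit(model, x0):
    """Fit model(params) to the pseudo-data; return the best NLL found."""
    res = minimize(lambda p: nll(model(p)), x0=np.asarray(x0, float),
                   method="Nelder-Mead",
                   options={"maxiter": 50000, "maxfev": 50000,
                            "fatol": 1e-9, "xatol": 1e-9})
    return res.fun

# Rigid background: a single normalisation, with and without a signal term.
rigid_b  = fit(lambda p: p[0] * bkg, [1.0])
rigid_sb = fit(lambda p: p[0] * bkg + p[1] * sig, [1.0, 10.0])

# Flexible background: extra polynomial "adjustments" that can soak up
# part of any excess, again with and without a signal term.
flex = lambda p: p[0] * bkg * (1.0 + p[1] * x + p[2] * x ** 2 + p[3] * x ** 3)
flex_b  = fit(flex, [1.0, 0.0, 0.0, 0.0])
flex_sb = fit(lambda p: flex(p[:4]) + p[4] * sig, [1.0, 0.0, 0.0, 0.0, 10.0])

# Likelihood-ratio statistic for adding the signal on top of each background.
# A background flexible enough to absorb part of the excess leaves less for
# the signal term to explain, so q (and the claimed significance) shrinks.
for name, b, sb in [("rigid", rigid_b, rigid_sb), ("flexible", flex_b, flex_sb)]:
    q = max(0.0, 2.0 * (b - sb))
    print(f"{name:9s} background-only vs +signal: q = {q:6.2f}, "
          f"p ~ {chi2.sf(q, df=1):.2e}")
```

The point of the toy is only the direction of the effect: every extra background adjustment gives the background-only hypothesis another way to fit the data, which raises the bar for claiming a signal rather than lowering it.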
> Putting the blocks together is fundamentally an exercise in bias, and truly limiting this bias requires significant discipline that is highly disincentivized and therefore uncommon
Another wild accusation without evidence. Why do you believe this is true? And if it is, where are all the false discoveries? In a small team with less oversight, I'm sure cutting corners happens. In a large experiment like those discussed here? No way. The embarrassment of having to retract a false discovery is a pretty strong incentive to ensure the integrity of results, and they have enough internal controls to enforce it.
It's not a "wild accusation" or personal attack, it's a fundamental truth about the nature of modeling. You can guard against it better if you start by recognizing that it is there. Nothing is unbiased.
The embarrassment of retraction is also a strong incentive not to challenge the status quo. If everyone shares the same biases and assumptions, no one has to worry about retraction. The more small teams you have, the easier it is for different biases to coexist; the more centralized the effort, the easier it is to succumb to groupthink.