The proper role of data is to update our existing beliefs about the world. It is not to specify what our beliefs should be.
The question that we really want to answer is, "What is the probability that X is true?" What p-values do is replace that with the seemingly similar but very different, "What is the probability that I'd see evidence at least as extreme as what I have, by chance alone, if X were true?" Bayes factors try to capture the idea of how much belief should shift.
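To make the contrast concrete, here's a quick sketch of the two quantities on the same data. The numbers (60 heads in 100 flips, and the 0.6 alternative) are invented for illustration:

```python
from math import comb

# Hypothetical data: 60 heads in 100 flips of a possibly biased coin.
n, k = 100, 60

def binom_pmf(i, n, p):
    return comb(n, i) * p**i * (1 - p)**(n - i)

# p-value: chance of evidence at least this extreme *if the null
# (fair coin) were true*. It says nothing about P(fair | data).
p_value = sum(binom_pmf(i, n, 0.5) for i in range(n + 1)
              if abs(i - n / 2) >= abs(k - n / 2))

# Bayes factor: how far the data should shift belief between two
# specific hypotheses (here, fair vs. an assumed bias of 0.6).
bayes_factor = binom_pmf(k, n, 0.6) / binom_pmf(k, n, 0.5)
```

The p-value conditions on the null being true; the Bayes factor compares two hypotheses and leaves your prior to you.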
The conclusion at the end is that replication is better than either approach. I agree. We know that there are a lot of ways to hack p-values. Bayes factors haven't caught on because they don't match how people want to think. However, if we keep consistent research standards and replicate routinely, the replication rate gives us a sense of how much confidence we should have in a new result that we hear about.
(Spoiler. A lot less confidence than most breathless science reporting would have you believe.)
This is like functional programming, and people have a very hard time with it. Instead of passing around numbers like "95% true" or whatever, we're passing around a function: "It's 2x as likely as you thought it was; please insert your own prior and update." But it's even worse than that: it's "please apply this complicated curve function at whatever value you chose for your prior." It's just too hard for people to manage. Computers can do it (though it's hard for them too, since it's very computationally intensive), and you have to really trust your computer program to be working properly (and you have to put your ego in the incinerator!) to hand over your decision-making to the computer.
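The "pass around a function, not a number" idea can be sketched like this. The Bayes factor of 2 and the priors below are illustrative, not from any real study:

```python
# A Bayes factor packaged as an updater that each reader applies
# to their own prior, rather than a single "95% true" number.

def make_updater(bayes_factor):
    """Return a function mapping a prior probability to a posterior."""
    def update(prior):
        prior_odds = prior / (1 - prior)
        posterior_odds = prior_odds * bayes_factor
        return posterior_odds / (1 + posterior_odds)
    return update

# "It's 2x as likely as you thought it was; insert your own prior."
twice_as_likely = make_updater(2.0)

skeptic = twice_as_likely(0.10)   # low prior  -> posterior ~0.18
believer = twice_as_likely(0.50)  # even prior -> posterior  2/3
```

Same evidence, different readers, different posteriors; the evidence itself is the function, which is exactly what people find hard to handle.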
I question whether computers can do it at all in useful practice.
Take a look at the results quoted in https://en.wikipedia.org/wiki/Bayesian_network#Inference_com... about how updating a Bayesian net is an NP-hard problem, and even finding an approximation that gets the probability right to within 0.5 more than half the time is NP-hard.
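A toy illustration of why exact inference scales badly: computing a marginal from a joint by brute force visits every assignment. The joint here is a dummy (independent fair coins, so the answer is trivially 0.5), but the 2**n enumeration is the point; real networks exploit structure, yet in the worst case, per those NP-hardness results, nothing saves you:

```python
from itertools import product

def marginal(joint, n, var):
    """P(var = 1), summing the joint over every assignment of n binary
    variables -- 2**n terms, which is what blows up in general."""
    return sum(joint(a) for a in product([0, 1], repeat=n) if a[var] == 1)

# Dummy joint: n independent fair coins. The answer must be 0.5, but
# the enumeration still touches 2**10 = 1024 assignments.
n = 10
p = marginal(lambda a: 0.5 ** n, n, var=0)
```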
> The proper role of data is to update our existing beliefs about the world. It is not to specify what our beliefs should be.
Creating the schema beforehand, I get that. But feature extraction does work: it produces models from the data. It just sometimes takes a long time to analyze and understand those models.
If you have enough data and a strong enough signal, then all reasonable belief systems should converge on the same answer. Do not let that fact fool you into believing that raw data is the only thing necessary to make good decisions when faced with realistic situations.
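A small sketch of that convergence, assuming a simple Beta-Binomial coin model (all numbers invented):

```python
# Beta(a, b) prior on a coin's bias; after h heads in n flips the
# posterior mean is (a + h) / (a + b + n). Two wildly different
# priors end up in nearly the same place once the data dominate.

def posterior_mean(a, b, heads, flips):
    return (a + heads) / (a + b + flips)

heads, flips = 7000, 10000  # hypothetical strong signal

optimist = posterior_mean(50, 1, heads, flips)   # prior mean ~0.98
pessimist = posterior_mean(1, 50, heads, flips)  # prior mean ~0.02
# Both land within half a percent of the empirical rate of 0.7.
```

The convergence is real, but notice that it needed a strong signal and honestly collected data, which is exactly the caveat above.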
The way that you hack Bayes factors is by selectively including data. And the selection process can be as simple as publication bias causing some results to be published and others not.
This is the fundamental weakness of meta-analysis.
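A simulation of that selection hack, with every parameter made up for illustration: each lab flips a genuinely fair coin, but only eye-catching results get published, and a Bayes factor computed on the published record alone then "supports" a bias that does not exist:

```python
import random
from math import log

random.seed(0)
n_labs, flips = 200, 20

# Every lab studies a true null effect: a fair coin.
results = [sum(random.random() < 0.5 for _ in range(flips))
           for _ in range(n_labs)]

# Selective inclusion: only striking results (>= 14 of 20 heads)
# make it into the literature.
published = [h for h in results if h >= 14]

def log_lr(heads):
    """Log likelihood ratio of 'biased, p=0.7' vs. 'fair' for one lab."""
    return heads * log(0.7 / 0.5) + (flips - heads) * log(0.3 / 0.5)

# Naive meta-analytic "evidence" over the published subset only.
naive_evidence = sum(log_lr(h) for h in published)
# naive_evidence > 0: the published record favors the nonexistent bias,
# because the selection process was never modeled.
```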