The problem here arises in thinking this has anything to do with Bayes' rule, probability, or any formalism of this kind.
The rule `P(B|A) = P(A|B)P(B)/P(A)` is only a model of ratios that follow a certain sort of set logic (and measure to [0, 1]).
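For concreteness, here's a minimal sketch of that "ratios over a set logic" reading: on a finite sample space with a uniform measure, the rule is just an arithmetic identity between ratios of subset sizes. (The particular sets and numbers here are mine, purely illustrative.)

```python
# Bayes' rule as a statement about ratios of measures,
# checked on a small finite sample space with a uniform measure.
from fractions import Fraction

# A toy sample space of 12 equally weighted outcomes; A and B are subsets (events).
omega = set(range(12))
A = {0, 1, 2, 3, 4, 5}   # P(A) = 6/12
B = {4, 5, 6, 7}         # P(B) = 4/12

def P(event):
    """Uniform measure: |event| / |omega|, valued in [0, 1]."""
    return Fraction(len(event), len(omega))

def P_given(x, y):
    """Conditional probability as a ratio of measures: P(x & y) / P(y)."""
    return P(x & y) / P(y)

lhs = P_given(B, A)                  # P(B|A)
rhs = P_given(A, B) * P(B) / P(A)    # P(A|B) P(B) / P(A)
assert lhs == rhs == Fraction(1, 3)  # the identity holds; both sides are 1/3
```

Nothing in this computation says what `A` and `B` *refer to* -- which is exactly the point at issue below.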
It says nothing about what this model refers to. Does `B` refer to a proposition? A belief? An event? Bayesians say "belief", but then what do these beliefs model?
Do I believe "the sky is blue"? What would it mean to have this as a belief? Do I have some model of the sky, of blue? etc. Why should we suppose that beliefs should operate according to Bayes' rule?
Yes, for any given class of beliefs, relative to some model of the world, you can construct an argument that Bayes' rule must apply. But these aren't beliefs, they're truth-apt propositions which model a world. Beliefs are just mental states, and little constrains their consistency.
Since we do not know what the world is like, we do not know that believing "frozen water could fall in a glass of water in ordinary conditions" contradicts "water is largely comprised of H2O" -- so we can believe both at the same time.
So what's the issue here? The issue is that the formalism is useless for generating descriptions of the world (events, beliefs, propositions, ... whatever you like). And it's only when we have a set of such descriptions which actually model features of reality, and which therefore have some consistency between them... that we can actually construct any kind of formal model of reasoning. At this point, most of the work is done.
Thus much of the supposed normative force, and epistemic weight, of all this machinery is an illusion. Since we have no idea which beliefs affect the probabilities of which others, or which beliefs correspond to features of the world, and so on... no problems are resolved.
I'm always irritated by professors who will stand in front of a room and demonstrate people's irrationality against some experiment they perform, asserting that the model of the experiment people have in their heads is "the one on the slide", and then conclude that they are wrong. The issue, in my experience without exception, is that the audience is not so dumb as the professor, and has not made the very bizarre assumptions scrawled on the board; hence the entire formal model is invalid.
I have a hard time grokking your wording, but I'd like to point out that Bayes' Rule is typically "typed" to events (i.e. B is of type "event").
That being said, I don't think this needs to be the case. The syntax could "compile" differently depending on context.
From my understanding, Bayes' Rule is more about updates to pre-existing probabilities, which we label with the suggestive name 'beliefs', which aren't necessarily tied to a human's mental state.
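To illustrate that reading: an "update" in this sense is just a mechanical recomputation of a number, with no commitment to anyone's mental state. A minimal sketch (the function name and the example numbers are mine):

```python
# Bayesian updating as mere arithmetic on a prior probability.
# No claim here about human mental states -- "belief" is just a label
# for the number being updated.

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H|E) from P(H), P(E|H), and P(E|~H)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Illustrative numbers: 1% base rate, a test with 99% sensitivity
# and a 5% false-positive rate.
posterior = bayes_update(0.01, 0.99, 0.05)
# The "belief" moves from 1% to exactly 1/6 (~16.7%) after one positive result.
```

Whether that number deserves the name "belief" is, of course, the whole dispute in the parent comment.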
I do agree with your last paragraph, but I also accept that those profs likely do so because some majority or plurality of students do hold that belief. That being said, there's definitely room for different wording.
In Bayesian epistemology, the arguments to probability "functions" are beliefs.
To simplify my phrasing: to model any aspect of reality requires inventing a formalism; that formalism itself largely fails to correspond to reality. Insofar as it does, the formalism is useless.
Bayesian epistemology pretends to greater insights than it has, because it assumes that the modelling relation is simple; whereas, really, it's where the whole problem of epistemology lies.