According to my tour guide at the Bodleian Library at Oxford University, book preservation in the olden days was all about reducing humidity, and the best way to achieve this before proper insulation was invented was to store the books as high up as possible. Rich people who collected books therefore always kept them upstairs.
The curators of such collections were also often scholars: old men climbing stairs to read their old books.
A hypothesis on my part, of course, but I can see how, over generations, this got morphed into "wizards in towers."
Secondly, is there a reason why subjective Bayesian theory isn't mentioned? It seems obvious to me that expert elicitation and assigning probabilities to logical uncertainty are perfectly fine.
Speaking very roughly, fuzzy logic applies in cases where the degree of truth of a statement is in question. Logical uncertainty (about decidable sentences) applies in cases where the sentences are definitely true or false, but we lack the resources to figure out which.
So, for example, fuzzy logic might help you quantify to what extent someone is "tall," where tallness admits of degrees rather than being binary. Or it might help you quantify to what extent a proof is "long." But it won't tell you how to calculate the subjective probability that there exists a proof of some theorem that is no more than 500 characters long in some fixed language. For that, you either need to find a proof, exhaustively demonstrate that no proof exists, or find a way to reason under logical uncertainty; and we haven't found any ways to use fuzzy logic to make progress on formalizing inference under logical uncertainty.
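To make the "tall" example concrete, here is a minimal sketch of a fuzzy membership function. The thresholds (160 cm and 190 cm) and the linear ramp are illustrative assumptions on my part, not part of any standard; the point is only that the output is a degree of truth, not a probability of a binary fact.

```python
def tall_membership(height_cm: float) -> float:
    """Fuzzy degree (0.0 to 1.0) to which a given height counts as 'tall'.

    Thresholds are arbitrary illustrative choices: below 160 cm is
    'not tall at all', above 190 cm is 'fully tall', with a linear
    ramp in between.
    """
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30  # linear ramp between the thresholds

# Degrees of truth, not probabilities: a 175 cm person IS tall to degree 0.5,
# rather than having a 50% chance of being tall.
print(tall_membership(150))  # 0.0
print(tall_membership(175))  # 0.5
print(tall_membership(195))  # 1.0
```

Contrast this with the theorem-proving case in the paragraph above: "there exists a proof of at most 500 characters" is simply true or false, so a degree-of-truth value is the wrong type of object there.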
I agree that fuzzy logic wouldn't work for that purpose. But it does offer a formalism for the foundations of what probabilities are, which, as far as I could see, is something you were working on as well. Just a thought.
As for actually addressing logical uncertainty and asymptotic convergence, I think subjective Bayesianism can be used in both cases. For example, you write "the axioms of probability theory force you to put probability either 0 or 1 on those statements", which I think is simply not true. If I as an "expert" claimed that "in my experience there is a 70% chance of the conjecture being correct", I can set Prior(conjecture) = 0.7.
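The subjective-Bayesian move I have in mind can be sketched in a few lines: start from an elicited prior on the conjecture and update it on evidence via Bayes' rule. The prior of 0.7 matches my example above; the likelihoods (a numerical check that always passes if the conjecture is true, and passes half the time if it is false) are illustrative assumptions of mine, not anything from the original post.

```python
def posterior(prior: float,
              p_evidence_if_true: float,
              p_evidence_if_false: float) -> float:
    """Bayes' rule: P(conjecture | evidence observed)."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Elicited expert prior: "70% chance the conjecture is correct".
prior = 0.7

# Assumed likelihoods for an illustrative numerical check:
# it always passes if the conjecture is true, and passes
# half the time even if the conjecture is false.
p = posterior(prior, p_evidence_if_true=1.0, p_evidence_if_false=0.5)
print(round(p, 3))  # 0.824
```

Nothing in the probability axioms forced the prior to 0 or 1 here; they only constrain how the 0.7 must be updated once evidence arrives.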