Do "stochastic ontologies" exist? You define probabilities for certain attributes and category assignments, then you do some max likelihood estimate over all unknowns, which yields the most likely, internally consistent world model.
Isn't an LLM essentially a stochastic ontology? Maybe that's why LLMs generalize so well to problems you wouldn't expect to be amenable to next-word prediction over text.