
Dempster–Shafer theory - luu
http://en.wikipedia.org/wiki/Dempster%E2%80%93Shafer_theory
======
kleiba
In practice, one of the main issues with DST is the computational complexity
inherent in its reasoning approach. Here are two classic meta-studies devoted
to approximation algorithms that try to tackle this problem:

[1] Tessem, B., Approximations for efficient computation in the theory of
evidence, Artificial Intelligence 61,315-329, 1993.

http://www.sciencedirect.com/science/article/pii/000437029390072J
(behind Elsevier paywall)

[2] Bauer, M., Approximation Algorithms and Decision Making in the Dempster-
Shafer Theory of Evidence--An Empirical study. International Journal of
Approximate Reasoning, Volume 17, Issues 2–3, August–October 1997, Pages
217–237

http://www.sciencedirect.com/science/article/pii/S0888613X97000133

The first paper focuses on quantitative deviations, while the second
considers the quality of the resulting decisions.
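For concreteness, here is a minimal Python sketch (my own, not from either paper) of Dempster's rule of combination; the pairwise loop over focal sets is exactly the cost the approximation literature tries to reduce, since the number of focal sets can grow exponentially with the size of the frame of discernment:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions by Dempster's rule.

    Each mass function maps frozenset focal elements to masses summing
    to 1. The pairwise intersection loop is O(|m1| * |m2|), and the
    number of focal sets can be exponential in the frame size -- the
    source of DST's computational cost.
    """
    combined, conflict = {}, 0.0
    for (a, p), (b, q) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + p * q
        else:
            conflict += p * q  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    # normalize away the conflicting mass
    return {s: v / (1.0 - conflict) for s, v in combined.items()}
```

Chaining many such combinations multiplies the focal-set counts, which is why the approximation schemes in [1] and [2] prune or summarize the mass functions between steps.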

~~~
eli_gottlieb
Of course, probabilistic reasoning has similar computational problems,
especially related to numerical evaluation of integrals in the denominator of
Bayes' Rule. This is why there's a lot of research into how to make
variational and Markov-Chain Monte Carlo methods run faster.
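As a toy illustration (my own, with a deliberately simple model): even for a one-dimensional coin-bias problem the normalizing constant in Bayes' rule is an integral, here brute-forced on a grid. In higher dimensions the grid size grows exponentially, which is exactly what variational and MCMC methods are designed to work around.

```python
def posterior_density(theta, k, n, grid=10_000):
    """Posterior density of a coin's bias theta after k heads in n
    flips, under a uniform prior. The evidence integral in the
    denominator of Bayes' rule is approximated by a midpoint-rule
    grid sum."""
    def likelihood(t):
        return t ** k * (1 - t) ** (n - k)
    step = 1.0 / grid
    # evidence: integral of the likelihood over theta in [0, 1]
    evidence = sum(likelihood((i + 0.5) * step) for i in range(grid)) * step
    return likelihood(theta) / evidence
```

With 7 heads in 10 flips this agrees with the exact Beta(8, 4) posterior to several decimal places; the trouble starts when the single grid sum becomes a high-dimensional integral.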

------
bstamour
For anyone interested in applying Dempster-Shafer Theory, I suggest checking
out Subjective Logic [1] as well. One of the issues that pops up in DS-Theory
is that the traditional method of belief combination tends to assign lots of
mass to uncertainty, and thus highly opposed beliefs end up with sometimes
counter-intuitive results. Subjective Logic offers an extensive battery of
operators for combining beliefs: from cumulative and averaging fusion, to
belief constraining, to consensus. I did my Master's thesis on implementing
and analyzing the various SL operators, and for many situations, working with
SL is very pleasant.

[1] [http://folk.uio.no/josang/sl/](http://folk.uio.no/josang/sl/)
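As a rough sketch of what one of those operators looks like (my own simplification; real SL opinions also carry a base rate, which is omitted here), cumulative fusion of two binomial opinions weights each party's belief by the other party's uncertainty, so agreeing evidence drives uncertainty down rather than piling mass onto it:

```python
def cumulative_fuse(o1, o2):
    """Cumulative fusion of two binomial opinions (belief, disbelief,
    uncertainty), each summing to 1.

    Simplified sketch of Subjective Logic's cumulative fusion operator:
    base rates are left out for brevity, and at least one opinion is
    assumed to have nonzero uncertainty.
    """
    b1, d1, u1 = o1
    b2, d2, u2 = o2
    k = u1 + u2 - u1 * u2  # normalization term
    return ((b1 * u2 + b2 * u1) / k,
            (d1 * u2 + d2 * u1) / k,
            (u1 * u2) / k)
```

Fusing two confident, agreeing opinions yields an even more confident one with less uncertainty, which is the intuitive behavior the comment above contrasts with plain Dempster combination.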

------
bitL
DST's obvious problem is precise computation with imprecise numbers - I've
seen cases where people pulled belief/plausibility numbers out of thin air to
get whichever outcome they favored (all you need to do is find a (preferably
single) value at which the solution flips to the one you selected). A perfect
rational framework for politics...

~~~
sago
Very much the same thing happens with Bayesian probabilities, with their
notorious problems of reference class and prior. Witness the theologians who
claim Bayes's Theorem proves the resurrection, or the competing group who
claim that BT proves Jesus didn't exist at all. When theologians start using
your math, you know it is well tuned for GIGO.

~~~
amalcon
A lot of people seem to be confused about the problem here, so I'll try to
illustrate your example in a very simple form.

Let's say that you want to figure out the probability of the resurrection
happening, based on the evidence that a period book (the Bible) says it
happened, via a straightforward application of Bayes' Theorem. Let's
temporarily ignore any uncertainty about when the Bible was actually written;
I'm not actually trying to argue either of these cases here, so it's
immaterial.

On the one hand, you could decide that the prior is 50% (we don't have any
other evidence, and people come back from what at the time would have been
thought "dead" with some frequency in the modern world). You could sample the
frequency of books purporting to be historical that mention resurrections
(low) to get the background frequency, and assume that if its central figure
did rise from the dead, the Bible would almost certainly mention it. From
this, you get a rather high probability that the resurrection occurred.

On the other hand, you could set a very low prior (because people don't seem
to come back from death), sample the proportion of magical events in period
literature (substantial) to get the background frequency, and agree that the
Bible would mention a resurrection if it happened, and conclude a rather low
probability.

The problem is the same as in the Drake equation: all of these probabilities
are basically just guesses, as there is no good way to measure them in this
case. The result is therefore based more on how you guess than on the math.
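The sensitivity is easy to see numerically. The figures below are deliberately made up - they are not estimates of anything - but they show how the same Bayes'-theorem machinery yields near-certainty or near-impossibility depending on the prior and background rate you pick:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) for a binary hypothesis via Bayes' theorem."""
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1 - prior))

# First reading (invented numbers): 50% prior, resurrection reports
# rare in purportedly historical books (background rate 1%).
high = posterior(0.5, 0.99, 0.01)

# Second reading (also invented): tiny prior, magical events common
# in period literature (background rate 30%).
low = posterior(1e-6, 0.99, 0.30)
```

Same theorem, same evidence, and the two answers differ by roughly five orders of magnitude - the guesses do all the work.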
Any framework of reasoning that is based on probability will have this problem
when dealing with real world events.

~~~
sago
I agree.

There are at least six intractable problems that I can see.

* Defining the thing you're looking for (in your example you say 'what at the time would be thought "dead"' - that opens up the whole problem of excluding the middle: are the options only resurrection vs. non-resurrection?)

* Calculating any particular probability without a direct frequentist correlate (the probability that the bible would mention a genuine resurrection, in your example).

* Choosing a frequentist correlate when they are available (the reference class problem).

* Determining the prior.

* Determining which posterior probabilities to include (to be accurate you have to include all relevant probabilities, but that's impossible in practice: there could be an almost unlimited number of contributing factors, so the choice of what to include begs the question).

* For probabilities after the first, removing the correlation with previous probabilities (BT assumes that the influence of previous information is excluded from the new information, because of the conditional probability; in practice this is almost impossible to do).

Then there's the problem that, for small inputs, the error in BT is very
large. If your calculations ever drop to low probabilities, then unless you
can put tight bounds on the error of everything you've done, you effectively
lose all information in the calculation.
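A quick numerical sketch of that last point (numbers invented purely for illustration): in a low-probability regime, a modest absolute error in the likelihoods swings the posterior by a large factor.

```python
def posterior(prior, like_h, like_not_h):
    """P(H | E) for a binary hypothesis via Bayes' theorem."""
    num = like_h * prior
    return num / (num + like_not_h * (1 - prior))

# Nominal (made-up) inputs in a low-probability regime:
nominal = posterior(0.001, 0.010, 0.020)   # about 5e-4

# The same calculation with the likelihoods perturbed by +/- 0.005:
low_end = posterior(0.001, 0.005, 0.025)   # about 2e-4
high_end = posterior(0.001, 0.015, 0.015)  # about 1e-3
```

A mere +/- 0.005 input error produces roughly a 5x spread in the output, so without tight error bounds the small-probability result carries very little information.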

------
kriro
I think the biggest problem is the conflation of true and provable that DST
tends to lead to (Gödel says hi). [This is also mentioned under "Criticism" in
the Wiki article]

