

Project Idea: Rating our 'experts' and their predictions - DanI-S

On the radio this morning, I heard that a group of politicians are meeting with the President in the White House. They are bringing with them a statement, signed by 125 economists, asserting a position on a particular economic issue.

Our society has a singular fascination with the opinions of 'experts'. They are rolled out, opinions in hand, during any significant politicking, reporting or discussion. The criteria for being an expert, however, are rather loose.

Economists, for instance, make a living by attempting to predict the future. Their predictions are notoriously unreliable. You would have no trouble finding an economist who agrees with either side of any particularly contentious topic; obviously, at least one of them is often wrong. Some of them are wrong more frequently than others.

I suggest that it would be beneficial for discourse to keep track of the performance of these 'experts', in any field. There will no doubt be some whose predictions are uncannily accurate. There will be others who consistently perform worse than if they had made assertions at random. There is currently no way for the thoughtful but uneducated observer to discern between the two.

Figuring out a fair way to do this is not a trivial issue. It is, however, a fascinating one. I think it would make a great - and invaluable - project for someone with an interest in data, statistics and community design. How do you allow user submission of information without being vulnerable to manipulation? What kind of hilarious correlations can you draw between the predictions of those with opposing viewpoints? How can we use historical predictive accuracy to weight future predictions?

The core of the idea - If I'm walking into the White House with the signatures of 125 idiots, the President should know.
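One standard way to score a track record of probabilistic predictions is the Brier score; a minimal sketch, with entirely made-up expert data, might look like this (lower scores are better, and 0.25 is what pure coin-flipping achieves on 50/50 questions):

```python
# Brier score: mean squared difference between the stated probability
# and the actual outcome (1 = it happened, 0 = it did not).

def brier_score(predictions):
    """predictions: list of (stated_probability, outcome) pairs."""
    return sum((p - o) ** 2 for p, o in predictions) / len(predictions)

# Hypothetical track records for two hypothetical pundits:
confident_and_right = [(0.9, 1), (0.8, 1), (0.9, 1), (0.1, 0)]
confident_and_wrong = [(0.9, 0), (0.8, 0), (0.9, 0), (0.1, 1)]

print(brier_score(confident_and_right))  # 0.0175 - well calibrated
print(brier_score(confident_and_wrong))  # 0.7675 - worse than random
```

The hard parts the post raises (who records the predictions, how you resolve what "came true" means, how you prevent gaming) are exactly what the scoring function doesn't solve.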
======
curt
Can't really do this: the economy and weather are chaotic systems, which by
definition means you can't make reliable predictions. Too many unknown
quantities. In economics, though, we can go back and isolate variables within
multiple datasets, but even then you can't extrapolate a prediction, since you
can't control for every other variable.

For example, in economics/policy, economists compared all the right-to-work
states against the forced-unionization states and found far higher growth in
the right-to-work states. Even with this fact you can't make a prediction, due
to other uncontrolled factors. But you can make generalizations. Just look at
California vs. Texas: they are too similar to be so economically different.

~~~
DanI-S
The idea isn't to extrapolate predictions; it's to provide a reference to the
accuracy of a person's prior assertions.

For example, Gordon E. Moore would be rated pretty highly:

<http://en.wikipedia.org/wiki/Moore%27s_law>

Whereas the (Ex-)Iraqi Information Minister would not:

<http://en.wikipedia.org/wiki/Muhammad_Saeed_al-Sahhaf>

It gets interesting when you look at the intersections of predictions. Perhaps
Anne and Bob often take incorrect positions, but when they both make the same
statement, it is likely to be true.
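A hedged sketch of that intersection idea: treat each expert's historical accuracy as a likelihood ratio and combine agreeing assertions with a naive-Bayes update (all accuracies and the prior below are made-up numbers, and the experts' errors are assumed independent given the truth):

```python
# Combine agreeing experts by multiplying odds with each expert's
# accuracy-derived likelihood ratio. Purely illustrative numbers.

def combined_probability(prior, accuracies):
    """Posterior P(statement true) when every listed expert asserts it,
    assuming their errors are independent given the truth."""
    odds = prior / (1 - prior)
    for acc in accuracies:
        odds *= acc / (1 - acc)  # each agreeing assertion scales the odds
    return odds / (1 + odds)

# Two experts who are each right 70% of the time, agreeing on one claim:
print(combined_probability(0.5, [0.7, 0.7]))  # ~0.845

# Two experts who are each right only 40% of the time, agreeing:
print(combined_probability(0.5, [0.4, 0.4]))  # ~0.308
```

Note that under this simple independence model, agreement between two usually-wrong experts is actually evidence *against* the claim; the Anne-and-Bob scenario only works if their errors are anti-correlated, which is itself something a tracking site could measure.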

------
pbreit
Do it! I would love to know how accurate guys like Umair Haque, Niall
Ferguson, Paul Krugman and Charles Krauthammer are.

~~~
curt
Charles Krauthammer isn't an economist; he's a doctor.

