
This idea always struck me as fairly misguided. It of course makes sense to try to muster evidence and data when making decisions, but it's not some sort of panacea. The book The Tyranny of Metrics (https://www.amazon.com/Tyranny-Metrics-Jerry-Z-Muller/dp/069...) goes into the successes and failures of various attempts to use data for decision making in some detail. Found it to be an interesting read.

In terms of public policy you have to decide what you're optimizing for and that decision can't be made with data alone because it does not help resolve questions of value and fairness.




The underlying problem is politicization. You might assume that if some political party wants something and another political party wants the opposite, you could find a set of impartial experts who would provide hard data and solve the question. In the real world, there are two sets of experts, holding opposite views and providing contradictory data. Everybody will make a big noise and eventually nobody knows what happened, just that the question got muddied and you aren't so sure about anything anymore.


The problem underlying politicisation is confidence. Science isn't binary; it's a set of circles of decreasing confidence that spreads out from a core of propositions that we're very confident about - more or less what you'll learn on an undergrad physics course - to a set of increasingly tentative hypotheses.

A lot of arguments about science are really arguments about confidence. E.g. most climate change scientists are fairly sure about their models, but the lack of absolute certainty makes it possible for deniers to cherry-pick a tiny collection of outlier scientists who will argue in public that it's all nonsense.

Policy makers and the media are some combination of corrupt and clueless, so they're happy to go with the false equivalence this creates.

One way to depoliticise science would be to have an international science foundation, funded independently of any individual government.

Of course there would be squeals of disapproval from vested interests, but that would simply highlight the problem - the vested interests don't want independent criticism or oversight. Their entire MO is based on regulatory capture, which gives them the freedom (for themselves only) to operate as they want with no personal or financial consequences.

Scientific accountability would set them on the path to democratic accountability, which is the last thing they'll accept.


> A lot of arguments about science are really arguments about confidence. E.g. most climate change scientists are fairly sure about their models, but the lack of absolute certainty makes it possible for deniers to cherry-pick a tiny collection of outlier scientists who will argue in public that it's all nonsense.

I think scale/proportion is also a problem. Humans seem to place a lot of value in narratives/stories, but we aren't so good with quantities (e.g. https://en.wikipedia.org/wiki/Conjunction_fallacy ). Pretty much everything (economics, climate, etc.) has factors pushing it in different directions, so we can always find a counterargument to any position (e.g. we can rebut climate change arguments by pointing to solar cycles, CO2 causing extra plant growth, etc.); that's fine, but some factors are overwhelmingly more important than others, whilst we seem to cling to these stories/narratives and weight them far more equally than we should.

As a concrete example, a family member used to leave their lights on overnight, claiming that "they use more energy than normal when they're first switched on". Whilst true, that extra startup energy is paid back within seconds of the light being off (e.g. https://www.energy.gov/energysaver/when-turn-your-lights )
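To put rough numbers on it, here's a back-of-the-envelope check (the wattages and surge duration are my own illustrative assumptions, not measured values):

    # How long must a light stay off before the startup surge is repaid?
    bulb_watts = 60.0      # assumed steady-state draw of an incandescent bulb
    surge_watts = 600.0    # assumed inrush draw, roughly 10x steady state
    surge_seconds = 0.1    # assumed duration of the inrush

    extra_joules = (surge_watts - bulb_watts) * surge_seconds  # energy beyond normal draw
    break_even_seconds = extra_joules / bulb_watts             # off-time that repays it

    print(f"Extra startup energy: {extra_joules:.1f} J")       # 54.0 J
    print(f"Break-even off-time: {break_even_seconds:.2f} s")  # 0.90 s

So even under generous assumptions about the surge, leaving the light off for a single second already comes out ahead.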


There is also the issue of getting what you measure for, since humans game systems to their benefit. Look at standardized tests - they ended up guiding education from an early age, rather than actual educational outcomes doing so. I remember vividly being in elementary school with multiple workbooks full of pages of analogies, with occasional ambiguous answers. There wasn't any real learning, just a bunch of drilling that depended on existing knowledge. Then the SAT dropped analogies for a writing section, and they practically disappeared off the face of the earth. They showed up four times a year at most - literally - usually because each quarterly state test had one question with them.


There's a middle ground.

Practically no systemic analysis is done within government - at least in Chicago. Government systems are compartmentalized in ways that make interfacing with them impossible for any worthwhile analysis. Example: the only analysis that Chicago's finance department had done on its parking tickets was a single, very high-level spreadsheet.

Some analysis is better than no analysis.


> Practically no systemic analysis is done within government

This is so far from being true, it undermines your point and your post.

The Federal Reserve does no systematic analysis? The U.S. Treasury does no systematic analysis? The Bureau of Labor Statistics does no systematic analysis? The Congressional Budget Office does no systematic analysis? The Centers for Disease Control do no systematic analysis?


Systemic, not systematic.


I don't even understand what you mean. I have read quite a few government papers (mainly federal) and they were usually well researched. I don't know how things are at the local level, but the politicians in Congress have a lot of well-researched data available if they want to listen (which they often don't).


Sorry - should have been more clear. I'm mostly talking about local policy, which is where most of my experience with government comes from, through many FOIA requests. Federal is much more calculated (read: slow) in comparison to local government. I've found local government to be very "we've checked the box, let's move on", which doesn't leave room for analysis, let alone the acknowledgement that analysis is even possible.


Makes sense. The way local governments deal with things like pensions is truly horrifying. Even the simplest analysis would quickly show that they are setting themselves up for disaster.


"We could add more money to the pension fund. Or we could assume a 10% market rate of return forever, and spend that money on new office chairs and computers instead. We can't get a tax increase just for those, but we could for the prospect of homeless old people eating cat food, and the next guy will get blamed for it."

They're actually setting other people up for disaster, hoping that they will already be gone when it hits.
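A minimal sketch of that dynamic, with invented numbers (the liability, horizon, and rates are all hypothetical):

    # How an optimistic return assumption hides a pension shortfall.
    liability = 1_000_000_000   # dollars promised in payouts, due in 30 years
    years = 30

    def required_contribution(rate: float) -> float:
        """Lump sum needed today so it grows to the liability at `rate`."""
        return liability / (1 + rate) ** years

    rosy = required_contribution(0.10)    # the 10% assumption
    sober = required_contribution(0.06)   # a more conservative assumption

    print(f"Set aside at 10%: ${rosy:,.0f}")    # ~$57,000,000
    print(f"Set aside at  6%: ${sober:,.0f}")   # ~$174,000,000
    print(f"Hidden shortfall: ${sober - rosy:,.0f}")

The gap only becomes visible decades later, conveniently on someone else's watch.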


If it is that easy to predict, then it is probably career suicide to be the person who produces the analysis that proves a disaster is going to happen.


> Some analysis is better than no analysis.

I don't agree with that. Doing some analysis on a limited and possibly skewed data set can lull you into a false sense of understanding. It makes your ideas seem objective when they can in fact be completely baseless.


How is no analysis just as good?

It's hard to believe that the percentage of successes with no analysis is greater than or equal to the percentage of successes with some analysis.


I presume because faulty analysis or bad data can actually get you further off track than a "back of the napkin" guess or hunch.

And additionally, you now take pride/confidence in your even worse results because you did "analysis"...


Often the pride is constant.


As always, you need to take the limitations of the available information into account. That ought to be part of the analysis, though of course it often, or perhaps usually, isn't.


Not at all. Analysis of limited data gives you wide credible intervals, the exact thing that guards against unwarranted confidence.
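For instance, in a simple Beta-Binomial model (my example, not anything from the thread), the same observed rate yields a much wider interval when the sample is small:

    # Credible intervals shrink as data accumulates (uniform Beta(1,1) prior).
    from scipy.stats import beta

    def credible_interval(k: int, n: int, level: float = 0.95):
        """Equal-tailed interval for a rate, given k successes in n trials."""
        posterior = beta(1 + k, 1 + n - k)
        return posterior.ppf((1 - level) / 2), posterior.ppf(1 - (1 - level) / 2)

    for k, n in [(3, 4), (30, 40), (300, 400)]:   # same 75% rate, more data
        lo, hi = credible_interval(k, n)
        print(f"{k}/{n}: ({lo:.2f}, {hi:.2f}), width {hi - lo:.2f}")

With 3 successes in 4 trials, the 95% interval spans roughly (0.28, 0.95) - far too wide to justify confident claims, which is exactly the guardrail described above.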


Are you saying that zero middle ground exists?


Obviously a middle ground exists, but it's possible that in part of the scale you get worse results as you head toward the middle.


I didn't downvote you :(


Sorry - edited. Still didn't answer the question, though :p


Once you resolve questions of values and fairness, you should be able to leverage data and metrics to achieve outcomes consistent with those values. Say you’re designing a jet engine. You decide what output parameters you care about optimising (maximum thrust versus fuel efficiency, for example). You know that you can manipulate certain input parameters to influence those outputs. And you can use data to verify and iterate on your design. But all you can do is change inputs to a very complex system. The unbending rules of the system itself decide how those inputs affect the outputs.
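As a toy illustration of that division of labour (a made-up model, not real engine physics): the value judgment lives in the weights, and the optimizer just follows them.

    # Different value weights -> different "optimal" designs from the same model.
    from scipy.optimize import minimize_scalar

    def thrust(fuel_flow):
        return 100 * fuel_flow - 4 * fuel_flow**2   # invented diminishing returns

    def fuel_cost(fuel_flow):
        return 10 * fuel_flow                       # invented linear cost

    def objective(x, w_thrust, w_fuel):
        # The weights encode the value judgment: thrust vs. efficiency.
        return -(w_thrust * thrust(x) - w_fuel * fuel_cost(x))

    for w_thrust, w_fuel in [(1.0, 1.0), (1.0, 5.0)]:
        res = minimize_scalar(lambda x: objective(x, w_thrust, w_fuel),
                              bounds=(0, 12), method="bounded")
        print(f"weights ({w_thrust}, {w_fuel}) -> best fuel flow {res.x:.2f}")

Weighting efficiency five times as heavily moves the "optimal" fuel flow from about 11.25 down to 6.25; the data and the model never changed, only the values did.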

The problem is that most in government simply are not systems thinkers. They are focused on values and fairness, and believe that once you’ve identified those values, you can directly legislate them into outcomes. This thinking leads to spectacular failures (the war on drugs, the war on poverty, tough on crime, etc.).


> The problem is that most in government simply are not systems thinkers.

I don't think the main problem is that "people in the government are stupid". Real life and societies are much more complex than a jet engine. There are thousands of value goals, too many to be able to put numerical targets on each one.

What is the optimal ratio of potholes to unsolved murders?

And when you forget to include one of the value goals in your model, you get a paperclip-AI scenario with terrible effects somewhere else.


I didn’t say they were stupid. I said they are not systems thinkers. There are lots of very smart people who don’t view the world in terms of cause and effect, cost and benefit, action and reaction. I might even call that way of thinking irrational and harmful...


> In terms of public policy you have to decide what you're optimizing for and that decision can't be made with data alone because it does not help resolve questions of value and fairness.

Sure, but once you decide, then you should use data to optimize it. Seems pretty straightforward, and I don't think anything in this article would disagree with that.


Even after I know what I'm optimizing for, I cannot use data to optimize for it. I know from experience that unintended results will happen. Some of them will be very bad, which will force me to come up with a whole new set of things to optimize for.

You are also assuming we can agree on what to optimize for. In fact we do not, and will not.


> Even after I know what I'm optimizing for, I cannot use data to optimize for it. I know from experience that unintended results will happen. Some of them will be very bad, which will force me to come up with a whole new set of things to optimize for.

You can't use data to optimize anything? Google and Facebook must be wasting their time collecting all that user data, then.

> You are also assuming we can agree on what to optimize for. In fact we do not, and will not.

We agree on lots of things to optimize for. There are cases of disagreement, but very very broad agreement as well.


> You can't use data to optimize anything?

No, I can use data; the problem is I don't know what all the effects of that optimization will be. So I constantly have to change what I'm optimizing for.

> We agree on lots of things to optimize for.

Broadly, but there are limited resources and each thing affects the others. So even though we agree that A and B are worth optimizing for, we will disagree on which is more important. Worse, in many cases we will agree on A and B, but the data shows you cannot optimize for one without pessimizing the other.

That is, the broad agreement isn't really enough to do anything with; we need the details, and there we disagree.


> No, I can use data; the problem is I don't know what all the effects of that optimization will be. So I constantly have to change what I'm optimizing for.

What are you trying to say here? Optimizing things with data is hard?

> Broadly, but there are limited resources and each thing affects the others. So even though we agree that A and B are worth optimizing for, we will disagree on which is more important. Worse, in many cases we will agree on A and B, but the data shows you cannot optimize for one without pessimizing the other.

I think we even agree broadly enough on the relative weights of many things to optimize for them. And I think we at the very least agree enough on some things to partially optimize them, or Pareto-optimize them. In many cases there is low-hanging fruit to be picked that can improve a metric we all agree is good without sacrificing another.
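Pareto optimization makes that concrete: discard any option that another option beats (or ties) on every metric, since rejecting it requires no agreement about relative weights. A small sketch with hypothetical policies scored on two metrics:

    # Keep only options that can't be improved on one metric without
    # sacrificing the other (higher is better on both).
    def pareto_front(options):
        def dominates(b, a):
            return b[0] >= a[0] and b[1] >= a[1] and (b[0] > a[0] or b[1] > a[1])
        return [a for a in options if not any(dominates(b, a) for b in options)]

    policies = [(3, 9), (5, 5), (9, 3), (4, 4), (2, 2)]   # (metric A, metric B)
    print(pareto_front(policies))   # -> [(3, 9), (5, 5), (9, 3)]

Here (4, 4) and (2, 2) get dropped without anyone having to decide whether A matters more than B; choosing among the survivors is where values come back in.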


> In terms of public policy you have to decide what you're optimizing for and that decision can't be made with data alone because it does not help resolve questions of value and fairness.

Not to mention that matters of value and fairness also change the availability of, and interest in, research itself. For example, if some people don't like the results of your research (despite presenting no challenge to the facts or methodology), they can now apparently replace it without notice after publishing it in a journal (and do so in a manner which is intended to make it difficult to publish it in other journals).

The academy is the wrong place to direct politics, because politics already direct the academy.

From my perspective, a more likely overall improvement in the ongoing quality of policy would be a requirement that all policies sunset reasonably soon by default, even if they don't seem divisive at the time they're passed. As it regards controlled substances, this would lower the bar for repeal to "nobody particularly cares to renew it" from "nobody particularly cares to do the work to repeal it".


This is a great point. I wish more people with an interest in policy/politics also had an interest in intellectual history.

A lot of the most important debates of the last 100+ years have been about exactly this question.

Rationalists like Ayn Rand (sort of), Rosa Luxemburg (Red Rosa) and others on one side, and the likes of Karl Popper & Hayek on the other.

When people talk about the "testability" of a theory to decide whether it's a scientific theory... they are borrowing from Popper's criticism of Freudianism, Marxism & the concept of metric-based "government science."



