
No, You Can’t Feel Sorry for Everyone - dnetesn
http://nautil.us/issue/51/limits/no-you-cant-feel-sorry-for-everyone-rp
======
schoen
There are lots of interesting things here, but I'm going to quarrel with this
policy-oriented claim:

> We could be more systematic. We could use a standardized score to examine
> how violations of personal security and national security affect happiness.
> This could allow us to state that certain values are more universal than
> others, and are therefore more central to human well-being. Such an effort
> could tell us, perhaps, that on average the anxiety people feel regarding
> the possibility of the government reading their text messages is greater
> than the distress caused by experiencing or anticipating a terrorist attack.

In order to do this, you also need some kind of estimate of how a particular
policy affects your risk (or the risk of other people you care about) of
experiencing each event. For example, the government might say that your text
messages are unlikely to be read: messages are only read when the government
can convince a court that there's probable cause to believe the people
involved are engaged in terrorist activity, which is unlikely to happen to the
typical person, who isn't involved in such activity and doesn't appear to be,
and in any case the most intrusive surveillance
powers are used quite rarely. Someone opposing the government might say that
terrorist attacks are incredibly rare and usually ineffective, that there's no
reason to believe that giving the government more access to private
communications will significantly reduce their frequency, especially since the
government already has incredible powers, massive amounts of personal data,
and lots of avenues for investigation (and many of the remaining would-be
terrorists are lone wolves who don't rely on communication with other suspects
to plan their attacks).
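
To make that concrete, here's a toy sketch (all distress scores and
probabilities below are invented, not real estimates): even if everyone agreed
on a standardized distress score for each kind of event, the policy verdict
flips depending on whose risk estimates you plug in.

    # Toy illustration only: distress scores and probabilities are made up,
    # not drawn from any real survey or threat data.

    def expected_distress(p_event, distress_score):
        # Expected distress per person = probability of the event * its distress score.
        return p_event * distress_score

    # Hypothetical standardized distress scores (the kind the article proposes):
    DISTRESS = {"messages_read": 30, "terror_attack": 100}

    # Two contested views of how one surveillance policy changes per-person risks:
    gov_view    = {"p_messages_read": 1e-6, "p_attack_prevented": 1e-5}
    critic_view = {"p_messages_read": 1e-2, "p_attack_prevented": 1e-8}

    for label, view in [("government", gov_view), ("critic", critic_view)]:
        cost = expected_distress(view["p_messages_read"], DISTRESS["messages_read"])
        benefit = expected_distress(view["p_attack_prevented"], DISTRESS["terror_attack"])
        verdict = "net positive" if benefit > cost else "net negative"
        print(f"{label}'s risk estimates: cost={cost:.1e}, benefit={benefit:.1e} -> {verdict}")

The happiness measurements are identical in both branches; only the risk model
differs, and that's where the actual disagreement lives.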

You might also need some way of measuring second-order effects: the risk of
government abuse, the risk of totalitarianism, the risk of self-censorship
from the sense that one can't hide one's thoughts and associations from the
government, the risk of new terrorist groups forming, and the risk of more
criminal activity from the perception that the government is failing to deter
crime.

The other closely related challenge is that even if you find a good scientific
way to make trade-offs at the margin, it's still tricky to say what you've
learned from that unless you also find ways to measure things in practically
comparable units. Like maybe you can get a strong scientific result that says
Americans generally do want to trade off privacy to get more security, but
still, does that mean they want to trade off 1 privacy for 5 security, or 12
privacy for 4 security, or 30 privacy for 5 security, or 96 privacy for 8
security? And if there are 7 proposed policy trade-offs that all point in the
same direction in some conceptual sense, which of those can you say were
sanctioned by the scientific research on people's preferences?
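
As a toy illustration of that last point (the policy names, unit costs, and
exchange rates below are all invented), the same directional finding endorses
very different sets of policies depending on the exchange rate you read into it:

    # Toy illustration only: shows that "people will trade privacy for security"
    # underdetermines which concrete proposals are actually sanctioned.

    policies = {
        # name: (privacy given up, security gained), in some standardized units
        "hypothetical policy A": (1, 5),
        "hypothetical policy B": (12, 4),
        "hypothetical policy C": (30, 5),
        "hypothetical policy D": (96, 8),
    }

    # How many units of privacy people are willing to give up per unit of security:
    for exchange_rate in [0.5, 4.0, 8.0, 15.0]:
        sanctioned = [name for name, (privacy, security) in policies.items()
                      if privacy <= exchange_rate * security]
        print(f"at {exchange_rate} privacy per security, sanctioned: {sanctioned}")

Each assumed exchange rate sanctions a different subset of the proposals, even
though all of them "trade privacy for security" in the same conceptual direction.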

