Of course this happens, and it's an obvious technique. I'm sure that not having a Facebook account adds to your score, using AdBlock adds to your score, mentioning the NSA online adds to your score, refusing cookies adds to your score, using Linux adds to your score, etc.
That's how a police state works. (XKeyScore += 5)
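A minimal sketch of the kind of additive scoring the parent describes - every signal and weight here is invented for illustration, not an actual XKeyscore selector:

```python
# Hypothetical suspicion signals and weights (invented for illustration).
SIGNALS = {
    "no_facebook_account": 5,
    "uses_adblock": 3,
    "mentions_nsa_online": 10,
    "refuses_cookies": 2,
    "uses_linux": 4,
}

def suspicion_score(profile):
    """Sum the weights of every signal present in the profile."""
    return sum(w for signal, w in SIGNALS.items() if profile.get(signal))

print(suspicion_score({"uses_linux": True, "uses_adblock": True}))  # 7
```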
My mother was involved in civil rights, so she has a file. It's fine that I have a file too. Hopefully I'll be gone before they start going door to door.
That's how any sort of metric works - your credit score, your Acxiom consumer category score, whatever metrics Facebook and Google use to score user engagement.
This is just how life is when databases are ubiquitous. After I bought a house and my name began appearing in property tax databases I started getting lots of (paper-based) commercial spam for things homeowners are more likely to buy, like different sorts of insurance, refinancing, satellite TV service yadda yadda.
When it emerged after 9-11 that various government agencies had failed to 'join the dots' by not sharing intelligence information effectively, there was a lot of public support for better-coordinated and more proactive intelligence gathering, notwithstanding warnings about the risk to civil liberties. So collectively we got what we asked for. The lack of public outrage or mass demonstrations against the NSA strongly suggests that a large majority are OK with this state of affairs, especially since they're used to data collection in a commercial context.
I think people are too busy, too desperate, and too badly educated to do anything about it. Most people were only being kept solvent by credit cards, borrowing against the equity in their houses, and taking massive loans for major purchases - and when credit got tight, people started setting up tents on Wall Street. The next crash is going to be a terrifying experience.
A little bit of war, however, will probably dull any of that.
Someone should build a Check-My-XKeyScore site. Answer several questions (do you have a VPN? use Linux? etc.) and show the submitter how high he ranks in the NSA's eyes, with a 'share via Twitter' button. Could be a fun and easy way to show people just how crazy this is...
I don't. If you want to fight, I'll support you. I donate money to causes that I know raise my XKeyscore, and I subscribe to magazines that I know raise my XKeyscore. My long term plan is to find somewhere where this is not happening, and move there, though.
If I were planning to stay here for the rest of my life, I wouldn't risk political posting on the internet, donating, or subscribing. You'll get more help from me as long as I can rationalize it as only worsening a temporary situation.
... you get arrested for having a "very high score".
Considering this is PRECISELY how spam filtering works, it doesn't seem entirely irrational.
Much like spam filtering, it would all come down to dialing in your filters and picking a good threshold.
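As a toy illustration of 'dialing in' a threshold (scores and labels invented): moving the cutoff trades false alarms against misses, and that tradeoff is the whole game.

```python
# Each item: (filter score, actually bad?). All values invented.
scored = [(0.1, False), (0.3, False), (0.55, True), (0.6, False), (0.9, True)]

def confusion(threshold):
    """Return (false positives, false negatives) at a given cutoff."""
    fp = sum(1 for s, bad in scored if s >= threshold and not bad)
    fn = sum(1 for s, bad in scored if s < threshold and bad)
    return fp, fn

print(confusion(0.5))  # (1, 0): one false alarm, no misses
print(confusion(0.7))  # (0, 1): no false alarms, one miss
```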
Hell, if the system was good enough, it could actually improve freedoms. We currently arrest many innocent people as part of the legal process (who are later exonerated). What if our "arrest filter" outperformed the current system, in terms of percentage of innocent people arrested? It doesn't have to be perfect to be better.
Spam filtering works using actual data that someone used to train it - if we take this at face value, it means that the NSA is not just targeting people who might be terrorists and collecting data on other people as a side effect, but that it actively targets people with no ties to terrorism whatsoever, for reasons that both the US and the international public might find unsavory if they found out about them.
Spam filtering also works in a very different context - the spam-to-nonspam ratio is something like 90% spam and 10% nonspam, which means that there is lots and lots of spam to filter out; if an important email slips through, people are bound to notice and either adjust their spam filter or do something about it.
In the other setting, you have 99.99% or more of people who have nothing to do with terrorism or criminal activities, and maybe one or two dozen (among tens of millions) who you are actually targeting. First, erroneously targeting a substantial chunk of your non-interesting population ties up resources - you're spending your time investigating people who are not terrorists - but since it's difficult anyway, at least you seem like you're doing something with all the money you receive, and never mind if some of the data is used for industrial espionage or for hunting people that only poultry farmers and fracking magnates would call terrorists. And if you miss one of the two dozen actual targets, well, they won't do anything harmful this year or the next because they also have to fear regular law enforcement, and when they do, it'll probably be at a moment that's suitable for you to ask for more money.
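To put rough numbers on that base-rate problem (every figure below is invented for illustration): even a filter that wrongly flags only 0.01% of the innocent population buries its two dozen real targets under thousands of false alarms.

```python
population = 50_000_000
actual_targets = 25
tpr = 0.99     # assume the filter flags 99% of real targets
fpr = 0.0001   # and wrongly flags 0.01% of everyone else

true_hits = actual_targets * tpr
false_alarms = (population - actual_targets) * fpr

precision = true_hits / (true_hits + false_alarms)
print(f"~{false_alarms:.0f} innocents flagged; "
      f"only {precision:.2%} of flags are real targets")
```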
tl;dr: Because we don't have a large sample of actual terrorists on hand, it's hard to evaluate activities like the NSA's, which would however be desirable since we're giving large chunks of money to them that could be fruitfully used in making everyone safer if used to fight actual crime and not some fuzzy notion of terrorism.
The problem with such a filter is it will be perceived as inefficient and broken by those evaluating it if it denies an officer the authority to arrest someone he's already detained or determined to be worthy of arrest.
It will be another system which grants permission at the whims of the department, one that absolves individual officers of blame, punishment and consequences for bias and abuse they'll continue to revel in.
There is no reason such a filter need be an authoritative source. Arrest someone it didn't suggest, if you like. Choose not to arrest someone it did suggest. Human discretion would still be applied. But if it proved accurate, officers would trust it, and the population would get upset if it identified a criminal an officer did not arrest.
No different than a virus or spam scanner, really. I trust my scanners. Sometimes they are wrong, and I know they are, and I bypass them. But I know they are right most of the time.
You seem to have the notion though that officers do not arrest people to "catch the bad guy". It sounds like you are saying you believe most officers do not actually care if the arrestee is guilty (and will thus ignore the filter always), and merely arrest for kicks/pleasure/vengeance? I do not believe the majority are like that.
At least in the US, officers have quotas of arrests to fulfill, which are most easily fulfilled with petty crime that happens all the time and is easy to find. Attorneys get promoted based on successful cases, meaning those where they could convince a jury that the person is guilty; any foresight that goes above or beyond what they can present to a jury of laymen is lost time and will get them recognized as being ineffective.
>You seem to have the notion though that officers do not arrest people to "catch the bad guy". It sounds like you are saying you believe most officers do not actually care if the arrestee is guilty (and will thus ignore the filter always), and merely arrest for kicks/pleasure/vengeance? I do not believe the majority are like that.
You seem to have the notion that I believe what you think I believe. No, I do not believe that officers are arresting people for kicks. I do believe that human bias, power, career building/justification and 'gut feelings' about who might be a bad guy perpetuate unequal application of the law and abuse in a policed society.
I do believe that if a filter sought to mitigate those very real problems it would be fought by the system it was meant to augment. I am making the very forgiving assumptions that those motivations and skews in perception aren't unknowingly built into the filter by its human creators, and that biased information isn't fed into it. If either of those things did happen, the filter would be welcomed with open arms by police departments nationwide.
I do believe that such a filter will be perceived as broken or inefficient if it doesn't confirm the officers' preconceived notions about who is a criminal or worthy of arrest.
Unfortunately, I do not believe that the problem is 'the bad guys are getting away, and we need a system to find them'. It's rather the opposite, 'innocent people are having their lives interrupted and sometimes ruined because officers think they look suspicious for reasons and need to justify their paychecks; we need a system to mitigate that'.
I do not think a system that blinks a little arrest light if a suspect is Muslim and uses Google has any place in society, no matter how many times you cry 'b-but machine learning!! Spam filters!! Bayes!!'
Your filter would cement this period's problems into, in the eyes of the public, an infallible machine's instructions and will enable abuse without accountability, because you can always point to the machine and say you followed your best judgment.
The 'arrest filter' is only as good as the inputs. As it currently stands, I'm sure that things like "uses drugs recreationally," "is black," "is Muslim," and "is not Christian" would end up counting towards your 'arrest score.' And because your arrest score is computed by a machine rather than a human, that will be used as an excuse to call it unimpeachable. E.g. "Machines can't be racist, so the arrest score going after lots of poor, Black men must mean that there's something to it."
Only as good as the inputs, yes. But if it's a halfway decent filter, it will include machine learning, e.g. a Bayesian filter, and if "is Muslim" turns out to have low correspondence with actual criminal activity that input will quickly be deweighted. Or perhaps paired with other aspects- e.g., perhaps "is Muslim" is of no consequence and "Googles Jihad" is also of no consequence, but "is Muslim" && "Googles Jihad" gives you a point. Just as one example of the patterns a good filter could recognize.
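A sketch of that deweighting, assuming a naive-Bayes-style per-feature likelihood ratio (all counts invented): a feature seen at the same rate in both classes contributes a ratio near 1 and so carries almost no weight, while a feature concentrated in one class shifts the odds strongly.

```python
def likelihood_ratio(feat_pos, total_pos, feat_neg, total_neg):
    """Laplace-smoothed P(feature|positive) / P(feature|negative)."""
    p_pos = (feat_pos + 1) / (total_pos + 2)
    p_neg = (feat_neg + 1) / (total_neg + 2)
    return p_pos / p_neg

# A feature present at the base rate in both classes: ratio ~ 1 (deweighted).
print(likelihood_ratio(10, 100, 1000, 10000))
# A conjunction feature concentrated in the positive class: large ratio.
print(likelihood_ratio(40, 100, 10, 10000))
```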
> Machines can't be racist, so the arrest score going after lots of poor, Black men must mean that there's something to it.
If a learning Bayesian filter targets a certain demographic, there probably IS something to it.
That really would be amusing/pleasing, if all this work we've spent developing spam filters became the lead-up to an accurate, learning crime filter. Perhaps the fork to spamassassin will be known as crimeassassin?
> "it doesn't seem entirely irrational." (Yes it does...)
> "It doesn't have to be perfect to be better." (Yes it does...)
The problem used to be approached by presuming innocence (demanding perfection), rather than with a willingness to accept false positives (20 years ago spam filters weren't available as an analogy...). It is always possible to wrongfully judge someone, but it was never a valid or acceptable outcome ("It is better that ten guilty persons escape than that one innocent suffer" - Blackstone). We accept that spam filters give false positives (not to mention that one person's spam is another person's opportunity), so I think comparing the justice system to detecting spam is a mistake, and moreover that a goal of "prevention" itself is a red herring.
The goal of prevention encourages us to accept lower thresholds of guilt probability, and that is wrong. In other words, if prevention is an end, then it is worth deliberately (rather than accidentally) restricting innocent people on the basis of virtually any nonzero probability of guilt. 80% "guilty" by association (for using Tor for example), 45%, etc, would all be enough to justify legal action - and the thresholds would certainly depend on whoever is in power and has access to the database that week. This is a very different model than presuming innocence, and having not only a goal of 0 false-positives, but also providing satisfaction when the justice system is in error.
I think today we are mostly talking around the fact that a crime has to have been committed in order for it to deserve to be punished, and that, for that reason, prevention cannot be a valid goal in itself (but it's nice when it happens).
Rationalizing surveillance as a tool to "prevent" rather than to justly punish wrongdoers (which centralized surveillance does not do because it is centrally operated, due to the conflict of interest; everyone owning a camcorder on the other hand...) implies that the central database needs to go IMHO (and that individuals need to be empowered instead).
Hold on there, friend. I was not suggesting we replace the judicial system with a filter. Rather, arrests.
I.e., make the arrest based on the filter, then run the trial in the same old jury-of-your-peers.
Convictions should be false-positive-free. But our system would not work if arrests also needed to be 100% false-positive-free.
I'm also not advocating punishment for crimes that have not yet been committed. Rather, think of it as looking for flags for crimes that have already been committed or are in progress. For example, there are all sorts of small flags thrown by embezzlement or salami-slicing that, put together, identify the operation.
> make the arrest based on the filter, then run the trial in the same old jury-of-your-peers.
LOL, jury of peers. You mean the jury that is left after the prosecutors and defenders screen out the most competent jurors. The same jurors that typically believe you are guilty because you've been arrested. Have you been in the typical criminal courtroom lately? Any public defender will tell you that going to trial in cuffs and jailhouse orange will almost certainly get you a conviction.
There are lots of things that need to be fixed in the justice system. Let's not give them more tools to make it worse.
Granted, arrests are held to a different standard than convictions in that they merely require "probable cause" rather than proof of guilt, and this lower standard does make it look like the spam-filtering analogy may fit. But in calculating this new "guilt probability" our spam filter is relying increasingly on the "testimony" and "facts" presented by the surveillance database itself, and it is the objectivity of this database in practice - or rather of the ones accessing it - that I am directly calling into question (though I didn't elaborate above).
Unfortunately, the database cannot be trusted by virtue of its centralized nature and administration (even if that centralization is justifiable, for example to protect everyone's privacy). The hardware may be objective, but people are not - people lie, cheat and steal when they can get away with it - and there are simply too few separate and competing interests to hold the small number of people with access to the database and tools accountable for their inevitably selective use of them, or to ensure their objective application. We have seen centralized data collected and used for private interests (and books censored, and guns regulated, and...) in the past, be it by fascist governments or police protectionism (lying under oath; evidence tampering; racial "profiling"), economic fraud, etc. It is human nature to use one's control to one's advantage, and it is simply too tempting for police to shoot first (detain, seize, etc.), especially when it is in their interest, and ask questions later (check the database for cause; use "parallel reconstruction"; take incriminating speech out of context).
It would be worse if that extended all the way to conviction, but it presents the same kind of problem for arrests, detainments, and searches, etc, since it is effectively the word of the administrators (who we trust not to abuse the data and tools) against the person arrested. The more centralized the data and tools become, the less we can trust them to be applied objectively without accountability.
Unfortunately, there are no checks and balances on absolute power (centralization), and so we cannot allow centralization to continue indefinitely. Absolute power corrupts, absolutely, and it is my "thesis" that arrests are not a suitable application of these tools. The risk is too great. Police already have a high level of responsibility (the authority, training, and tools/weapons to control use by force) and what feels like decreasing accountability (because the kids, because the drugs, because I said so, because I can, because of cronyism, and because wealthy people don't like hearing criticism), and since they are none the less "only human" - I don't recommend giving them more.
Granted, you are merely describing a potentially objective algorithm, but my point is that the objectivity of any given tool is moot given the human element. Guns don't kill people, people do, and will continue to do so even with checks and balances (like laws against murder; if prevention were the goal, we fail daily). It is only the distribution of accountability (peer juries, private key sharing, democratic voting, citizen groups, etc.) that keeps such roles in check.
Anyways, thanks for the opportunity to flesh my thoughts out more.
I guess my theory partly depends on the filter being too sophisticated for any one person to co-opt. We can design machine learning, but there can't be many people who are capable of wrapping their head around a running machine learning system, and be able to reach in right here and peek/poke some weight and bam your nephew is arrested in Texas. On the bright side, most of those people are probably not officers, whom you seem to be most afraid of.
As for the objectivity of feeding the filter data, I envision something completely automatic. No selective entry for this or that suspicious person- the filter is fed a database of all people, and perhaps monitors the internet's traffic on its own. Maybe ACH traffic too. Financial crime could be this system's biggest win- computers are way more suited to uncovering financial crime relative to humans.
Basically, when it's big enough and sophisticated enough and automated enough that no one person can fully understand it, it becomes significantly harder to pervert. And, as I mentioned before, it needn't be perfect- our current system is pervertable too (see: papers please, racial profiling, etc), so this one would just need to be less pervertable...
Then why does the system not look out for corrupt politicians, or black military budgets? Because the filters are not tuned well enough yet, as if that will ever be an objective? I'd say it's because it's not a spam filter that filters for spam.. more like a spam filter that filters out the spam of the competition, lets yours through, and kills emails warning about this. Call me paranoid, but until the big guns are primarily used to catch the big villains, this is what I see.
There is no state of no data. You are known to exist, you are known for not participating in something that is common for your group. That, in combination with the thousands of other data points about you will determine whether you are of interest. That may determine whether your car gets searched during a traffic stop, or whether you're put on a no-fly list.
This is not complicated to build, it is simple to build, and the only logical way of accomplishing what the government claims that they're attempting to accomplish.
I think you're overestimating the NSA's capability to cross-reference your actions and compare them to "what people like me should typically do".
My point is, you should not change your behaviour to be a lesser target for the NSA. You'd just quickly become super paranoid. Instead you should live your life exactly the same, and if the NSA tries to make your life bad, that's the moment when you call them out on it - after all, it's the NSA that is behaving out of line. So they should change, not you.
I think you're overestimating the difficulty of the problem. The difficult part is access to the channels of information. After that, it's a matter of applying well-known algorithms while filtering and processing streams.
The only reason that I suspect that the government is still terrible at this is because they have to rely on government contractors to implement it. If they're intentionally funding startups that happen to be developing tools in the spaces they need, though, it's only a matter of a (short) time until the systems they have are settled and dependable, and they can concentrate on innovation.
>it's the NSA that is behaving out of line. So they should change, not you.
This is also silly. That's like people who walk into speeding traffic because they have the right-of-way. You won't get to hear about how the trial turns out from your grave.
I'm not worried (edit: about anything immediate), I'm not failing to partake in anything that I ever would, and I don't live any differently than I ever have. You're projecting something onto me, and that's not a great way to have a productive discussion.
Tangential, but I think it's high time for someone to raise awareness about the discrimination of white straight males by all those social justice warriors everywhere. Seriously, no matter what we do or say, we're overprivileged and the source of all evil. sigh
> I think you're overestimating the NSA's capability to cross-reference your actions
They don't need to do any cross-referencing. Your parents, cousins, friends, former colleagues and classmates, etc., will sell you out for a "like" in a heartbeat.
> and compare them to "what people like me should typically do".
Ditto. I hate every time people in my acquaintance network email me with "since you cannot be found like everybody else, I'm sending you this thing that you probably don't care about in the first place. After all we are still friends, right?"
Because anyone trying to keep anything private or secure must be hiding something bad... That's just wonderful.
After 10 years of pervasive surveillance and not being able to catch a single terrorist I can't believe the NSA is trying to rationalize it as being a good thing. It's too bad the bill to defund the NSA didn't pass: http://defundthensa.com/
A person is not a government agency ostensibly accountable to a democratic government and, therefore, public oversight. The NSA is. To the extent a democracy keeps secrets from voters, it is not a democracy. The US government keeps a lot of secrets.
>Because anyone trying to keep anything private or secure must be hiding something bad...
No, that's not the reasoning. People that do bad things try to hide them. Therefore, a good first filter to catch bad people is to target those who hide things. They can narrow the search field afterwards.
It is a vastly smaller space than "all people that use the internet". Also I think it makes sense to assume that the "privacy seekers" set would contain a higher proportion of "bad actors". I am not in any way supporting them, but I understand why they do it - even if I hate the idea.
The fraction of terrorists in "all people that use the internet" is approximately equal to the fraction of terrorists in "privacy seekers" -- both are roughly zero.
The NSA was only given such pervasive power to catch terrorists. The fact that privacy seekers are more likely to be "bad actors" is moot unless that means terrorists, because regular "bad actors" are supposed to be innocent until proven guilty, handled by the police/FBI, etc. The NSA is only allowed to work essentially with no due process because it's for catching "terrorists". Interestingly, now that terrorists know about the NSA, they will no doubt simply not use the internet (or phones) at all, thus making it so the NSA can't catch a terrorist by any of its methods.
I definitely expect all terrorists are extremely careful about internet activity now that they know the NSA is so invasive, thus making the NSA's actions even less defensible (not that I thought they ever were).
..aaand the ones who do get "caught" get away with a slap on the wrist before you can say "this is a joke, I just cannot believe the hypocrisy of this, double standards much?" (because let's be realistic, it's not instant)
I wonder if the sentiment here is that the NSA used to be known for being a highly intelligent group of people trying to solve hard problems but now they seem like a bunch of D-level bureaucrats snooping through everything hoping they'll get lucky.
That's it exactly. There are some brilliant people working at the NSA, and our country would be better off if they were working to improve the security of utilities, local governments, and critical businesses.
And maybe they are a highly intelligent group of bureaucrats trying to solve hard problems so they can snoop through everything and get lucky. A highly intelligent person solving hard problems is not necessarily a "good guy" nor an "ethical guy" nor a "law-abiding guy".
You can literally replace "seek privacy and security" with almost anything in that statement. I mean, out of all the people that prefer Coke over Pepsi, there are definitely some bad guys.
While your statement is true, it doesn't really give any justification on targeted spying. That is, unless of course, we consider the desire to hide things to be a signal of being a bad actor. Thinking along those lines is a very slippery slope though. The erosion of freedoms is just another side-effect of policies and actions shaped by such thinking.
That's nonsense. If a credible actor poses a threat, they will attempt to do a competent job hiding their communications and data. Those that don't are by definition less competent and less likely to pose a threat.
Pardon the bluntness, but it seems like an a priori conclusion that only people taking pains to hide their data/communications should be targeted. That is not because the vast majority of those people are innocent (they are) but because that is the only group that contains a subset that poses a real & present threat.
So, that ignores your slippery slope argument about personal liberties, which are totally valid. How do you balance national security and personal liberty in this case? That's the million dollar question.
Please let me know if you question my reasoning. I'm purely looking at it as a 2x2 matrix of (highly encrypts personal data, does not ...) x (seeks to harm people/nation interests, does not ...)
So only the people hiding their tracks that seek to harm are the ones to worry about. Those that don't hide their tracks are a lot less likely to be operationally successful</euphemism>.
However, I assume that the 99.95% of people who highly encrypt personal data do not seek to harm anyone, and are collateral damage here.
Constitutional tradeoffs happen all over. Fire in a crowded theater, felons rights to vote, personal rights to own certain weapons, etc. This is another one that needs to be decided very carefully. But I think both sides have very valid concerns.
When we accept that "shouting fire in a crowded theater" isn't protected speech, we're backing up an argument that was used to put someone in prison for non-violent anti-state speech. Not token prison either; 6 months. That is not a good thing, and not an acceptable baseline to guide us in the examination of other issues.
You are mischaracterizing the case and unfairly tarring the reputation of Oliver Wendell Holmes, a truly great Supreme Court justice. Perhaps you would like this better:
"we should be eternally vigilant against attempts to check the expression of opinions that we loathe and believe to be fraught with death"
Back to the other quote: it is a classic and excellent demonstration of one of the greatest tensions in the US Constitution, the balance between individual rights and societal good. It doesn't require any context.
Perhaps I should have used the blunter formulation, also from Justice Holmes:
"The right to swing my fist ends where the other man's nose begins"
Oliver Holmes already has one of the worst reputations; nothing any of us write can compare with the man's own words regarding his reputation. Here's part of his decision upholding the forced sterilization of Carrie Buck:
"It is better for all the world, if instead of waiting to execute degenerate offspring for crime, or to let them starve for their imbecility, society can prevent those who are manifestly unfit from continuing their kind. The principle that sustains compulsory vaccination is broad enough to cover cutting the Fallopian tubes. [...] Three generations of imbeciles are enough."
Of course this has nothing to do with the NSA/GCHQ/CSEC collecting IP addresses of users of privacy software. Only a police state casts a wide net and then spies on its own citizens to determine their innocence, presuming they are already guilty for seeking out this software in the first place. It's the exact guilt-by-association kind of nonsense every police state throughout history has done.
He does not have one of the worst reputations. That's utterly ridiculous, you don't know a thing about the history of constitutional law if you think so.
Buck v. Bell was one of his worst moments, as well as the rest of the country's. Holmes didn't create the eugenics law in VA, and he didn't hold a gun to the head of the other 7 members of the court who voted with him, though. Eugenics is disgusting and is rightfully in the dustbin of history along with debtors prisons, lobotomization, slavery, and a number of other common practices we currently & correctly view as backwards and evil.
He stands head and shoulders above the idiot "originalists" polluting the bench right now.
edit: You might as well say that Richard Feynman was a reclusive and socially awkward man. It is not a matter of opinion, it's just false - plainly incorrect. OWH is routinely in the top 10 most influential justices of all time. He remains one of the most widely cited in other SC decisions. Pinning him down on a couple of specific cases while overlooking his enormous influence on contemporary judicial philosophy is ignorant. Don't believe me? Just spend a minute of your time to research it and you'll see how silly it is.
edit 2: Most of what I found to make sure I hadn't lost my mind is even kinder, considering him the 2nd or 3rd most influential justice, behind Marshall and closely tied with Warren.
> "So, that ignores your slippery slope argument about personal liberties, which are totally valid. How do you balance national security and personal liberty in this case? That's the million dollar question."
"National Security" is a fucking joke; there is nothing that needs to be balanced. Anyone interested in terrorizing others could (after driving across state lines if necessary) walk into a Walmart and walk out with a semi-automatic rifle, walk into the nearest mall, yell their grievances with the country and start shooting people.
We know that this sort of attack is possible in the US because plenty of lone-nutters have more or less done it already in the US. We know that terrorist organizations are receptive to this style of attack because they have carried out this style of attack in other countries (Mumbai in 2008 and Kenya in 2013 are obvious examples). Yet the two have yet to be combined in the US.
The only reason why this hasn't happened in the US is that, contrary to popular belief, there just simply are not many people interested in doing this sort of thing in the US. The notion that terrorists yearning to attack America are around every corner is a myth. Those people are a rounding error.
> How do you balance national security and personal liberty in this case? That's the million dollar question.
That would be valid if they could point to any noteworthy success. The fact they can't ["because national security"] pretty much guarantees the program contributes very little real value.
If they had anything to do with something important, say Osama's death, you think they wouldn't trumpet it out as "proof" it works?
I'd say their lack of evidence that it functions is more damning than anything. They can't exactly hide they are doing it post-Snowden. Their one chance to justify funding for new programs that aren't compromised is to say "LOOK HOW SUCCESSFUL WE ARE!!!!" in broad terms.
The fact they cannot do this, and thereby justify larger budgets to Congress, convinces me they know the benefits are negligible.
Do you feel there might be information that they can't publicize? I'm skeptical and sympathetic to that at the same time.
I really would say that it's a question of risk aversion & utility. Even targeting a whole class of people, the odds are probably astronomical against finding someone planning harm (1M:1? more than that?). To me it comes down to the negative utility of privacy invasion * the number of people targeted ??? the probability of detecting & thwarting the one malactor, where ??? is an inequality.
Maybe it's good we have risk averse and non risk averse groups, that the balance of power between those groups can change over time as necessary.
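The inequality sketched two comments up can be made concrete with a quick back-of-envelope calculation. Every number below is a made-up assumption for illustration only (arbitrary utility units, a rough population figure, and the "1M:1" odds from the comment), not real data:

```python
# Hypothetical comparison of the two sides of the inequality:
#   (privacy cost per person) * (people surveilled)
#     vs. (value of thwarting one attack) * (probability of detection)

def surveillance_net_utility(privacy_cost_per_person, people_surveilled,
                             value_of_thwarted_attack, p_detect):
    """Return (total privacy cost, expected benefit) of a dragnet."""
    cost = privacy_cost_per_person * people_surveilled
    benefit = value_of_thwarted_attack * p_detect
    return cost, benefit

cost, benefit = surveillance_net_utility(
    privacy_cost_per_person=1.0,      # arbitrary utility units (assumption)
    people_surveilled=300_000_000,    # rough US population (assumption)
    value_of_thwarted_attack=1e9,     # arbitrary utility units (assumption)
    p_detect=1e-6,                    # the "1M:1" odds from the comment
)
print(cost > benefit)  # True: under these assumptions the privacy cost dominates
```

The point isn't the particular numbers; it's that when the base rate of attackers is tiny, even a very small per-person privacy cost multiplied across an entire population can swamp the expected benefit.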
> Do you feel there might be information that they can't publicize? I'm skeptical and sympathetic to that at the same time.
XKeyscore is (at a minimum) 6 years old.
You are telling me in 6 years they can't provide the broad strokes of a reasonable number of success stories?
The fact that they can't stand up and say "Terrorist plot X was stopped by XKeyscore" from 4-5 years ago pretty much proves it's a failure as far as I am concerned.
> I really would say that it's a question of risk aversion & utility. Even targeting a whole class of people, the odds are probably astronomical against finding someone planning harm (1M:1? more than that?). To me it comes down to the negative utility of privacy invasion * the number of people targeted ??? the probability of detecting & thwarting the one malactor, where ??? is an inequality.
Given that, with sufficient information, a person is able to force another person to do quite a few things via blackmail, it's simply too dangerous to trust human beings with this tool. Especially an organization like the NSA, whose only knowledge of Snowden's actions came after he released all of the information publicly.
No, because anyone trying to hide something is highly likely to be trying to keep things private and secure. Ignoring the moral, ethical, and legal concerns, it is a perfectly reasonable course of action. The moral, ethical & legal concerns are of course terrible.
The only way out of this, as I see it, is making privacy the default. But this requires some cooperation and motivation from the big guys in Silicon Valley.
Imagine if Chrome, Firefox, Safari, all of them had, just like the incognito mode, the private mode. Of course, as anonymity also depends on the behavior of the user online, other actions are needed to really ensure security and privacy. But making it the default will educate more people about the importance of privacy and, more importantly, make the point that privacy isn't only for criminals, terrorists and wrong-doers, but that "normal", law abiding citizens also should have the right to be private. And that is paramount for a democracy to work.
I think the cooperation necessary would be for the "big guys" to not have a vested interest in selling out privacy, which has been the prevailing business model for a long time. And, since the big guys only listen to their bottom line, that means not using them until they support privacy. It may mean not using the Internet substantially at all. (It's more than a little ironic to be saying this on the preeminent "business hacker" (or "startup") community, which has a visible subset who sympathize with some of the NSA's programs, or at least have been able to rationalize them...)
> But this requires some cooperation and motivation from the big guys in Silicon Valley
Unfortunately, this is key to making strong encryption commonplace. A social graph and real-time communication could be used to make key exchange easy and secure. Open client software is needed to make security verifiable. And the storage and email infrastructure and clients need to make using encryption the default.
All the pieces of a "trust nobody" environment are there, and so are the pieces for making it an easy to use default.
Hopefully, doing this will be required for American service and technology companies to regain trust.
One of the biggest difficulties for "easy and secure" key exchange is that so many people want to be able to access private communications on many different devices.
How do you authorize a new device in an "easy and secure" way without simply outsourcing the problem to an intermediary who is then in a position to attack you by authorizing its own devices?
This issue has quite concrete implications for the security and convenience of lots of existing security tools, from GPG to iMessage to Skype to Firefox. They've chosen different approaches but the underlying problem and associated tradeoffs apply to all of them.
On the bright side, there are now a lot of people exploring the space of possibilities for dealing with these tradeoffs.
Just authorize. If you have perfect-forward secrecy, as long as you aren't being man-in-the-middled right now, you're safe.
It's better to have all people doing everything encrypted by default than not.
The goal isn't for one individual to be safe against a targeted NSA attack. That's insane--if the NSA wants you, specifically you are screwed; it simply has far too many resources to bring to bear.
The goal is to make it expensive for the big agencies to do pervasive surveillance. If everybody is encrypting all the time, random peon at Three Letter Agency has to get up from his chair and actually authorize a wiretap, get a warrant, etc. At that point, it's not going to happen unless you've actually done something very wrong.
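"Just authorize" is essentially trust-on-first-use (TOFU): pin a device's key the first time you see it, and only raise an alarm if a known device's key later changes. The toy sketch below illustrates that idea; the class name and structure are my own invention, not any real messenger's implementation:

```python
# Minimal illustrative sketch of trust-on-first-use key pinning.
import hashlib

class TofuKeyStore:
    def __init__(self):
        self._pinned = {}  # device_id -> pinned key fingerprint

    @staticmethod
    def fingerprint(public_key_bytes):
        return hashlib.sha256(public_key_bytes).hexdigest()

    def authorize(self, device_id, public_key_bytes):
        """Return True if the device is trusted under TOFU rules."""
        fp = self.fingerprint(public_key_bytes)
        if device_id not in self._pinned:
            self._pinned[device_id] = fp  # first use: pin the key, trust it
            return True
        return self._pinned[device_id] == fp  # later: must match the pin

store = TofuKeyStore()
print(store.authorize("alice-laptop", b"key-A"))  # True: first use, now pinned
print(store.authorize("alice-laptop", b"key-A"))  # True: matches the pin
print(store.authorize("alice-laptop", b"key-B"))  # False: key changed (possible MITM)
```

TOFU accepts a window of vulnerability at first contact in exchange for catching any later key substitution, which matches the comment's point: combined with forward secrecy, an attacker has to be in the middle at the moment of authorization, which makes pervasive passive surveillance expensive.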
Fully agreed up until your last sentence: It's not going to happen unless they have reason to believe it will lead to evidence of someone doing something wrong, and that it will be wrong enough to justify the effort.
Merely searching the web for the privacy-enhancing software tools outlined in the XKeyscore rules causes the NSA to mark and track the IP address of the person doing the search.
Again the media makes it sound like there exists a dragnet on (Google) searches. But this time one of the authors is J. Appelbaum.
So which is it? Terrorist scores based on search-engine searches sound fantastically insane to me. But unencrypted searches can be intercepted. So perhaps it is something in between: all accessible searches are monitored, and search engines do not cooperate with this directly unless they are legally compelled to comply with a request?
One of the earlier Snowden disclosures was that the NSA had tapped private internal Google fiber lines carrying traffic between data centers. Same with Yahoo, Microsoft, and other major Internet destinations. Google has since started encrypting all internal traffic, but for a while pretty much anything was available to the NSA dragnet.
I think an even more significant thing in the XKeyScore code (in terms of the idea that "NSA targets the privacy-conscious") is the existence of a "documents/comsec/" hierarchy of fingerprints. I may have written some of the documentation that's targeted elsewhere within that hierarchy.
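For readers who haven't seen the leaked rules: the published XKeyscore snippets are written in a domain-specific language that tags ("fingerprints") traffic matching regular expressions. The sketch below imitates that idea in Python; the rule names and patterns are invented for illustration and this is not the actual rule syntax:

```python
# Illustrative sketch of regex-based traffic "fingerprinting" in the
# spirit of the published XKeyscore rules. Rule names and patterns are
# hypothetical examples, not the real ruleset.
import re

FINGERPRINTS = {
    "anonymizer/tor/bridge": re.compile(r"bridges\.torproject\.org"),
    "documents/comsec/tails": re.compile(r"\btails\b.*\blinux\b", re.I),
}

def fingerprint_session(payload: str) -> list[str]:
    """Return the names of all fingerprints whose patterns match the payload."""
    return [name for name, pattern in FINGERPRINTS.items()
            if pattern.search(payload)]

hits = fingerprint_session("GET https://bridges.torproject.org/ HTTP/1.1")
print(hits)  # ['anonymizer/tor/bridge']
```

The disquieting part the comment points at is exactly this mechanism: once a `documents/comsec/` hierarchy exists, merely *reading documentation* about security tools is enough to match a fingerprint and get tagged.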
When the NSA collects evidence on someone and uses that evidence to prosecute a criminal case, they can and should file a motion to suppress that evidence.
The NSA data is collected under search issued by a FISA court. So, during a suppression hearing, defense counsel can challenge the validity of the warrant. If their challenge is denied, they can appeal. If their appeal fails, they can petition the Supreme Court. In all these courts, the proceedings are public record and the standard for a warrant can be debated by lawyers and the public alike. We have an open process for checking the work of the humans issuing FISA court warrants; Use it.
Even if the warrant was valid, the NSA might have overstepped its bounds. This can also be challenged when the NSA defends the admissibility of its criminal evidence in a suppression hearing. An independent judiciary can decide if the executive branch has acted outside its bounds. No, an investigator isn't punished for the overbroad evidence collection, but they are embarrassed by having a criminal get off due to their sloppiness. We have an open process for checking the work of human investigators in this country; Use it.
It isn't as if the government just takes that evidence and unilaterally decides to blow people up. We have due process in this country; Use it.
The EFF calls it Intelligence Laundering. The DEA calls it parallel construction. Either way it is sinister and immoral and a court hasn't had a chance to rule on it precisely because it is very difficult for defendants to prove that both the prosecutor and judge were lied to.
I don't actually hold the view expressed. I've just been trying (and failing) to find a compelling way to articulate the following point: "The FISA court's warrant system is flawed because the validity of its warrants or their execution are never checked because the warrants aren't actually used to bring criminal cases."
So, I thought I'd try sarcasm. But I couldn't come up with a concise way to address the fact that parallel construction means that their info actually is used in criminal cases. Oh well, back to the drawing board...
I believe it has been reported that the FBI lies about the sources used in investigations so that defendants never find out about the true sources that led to their prosecution. Your legal right to challenge sources of evidence is useless in the face of a corrupt government whose primary goal is to hide those sources from the public.
And meanwhile, years of your life will be wasted sitting in prison awaiting challenges, all while you're threatened with decades in prison for evidence that the government should never have had access to. Not to mention the legal fees if you can't get pro-bono coverage on your case.
It's no wonder so many plea out to a lesser (but certain) sentence when given the choice.
I feel quite ambivalent about these discriminating techniques. For example, it is okay for us to give females / older people lower insurance rates because that's what the statistics say. Likewise, it's likely that people who search for privacy-enhancing software are more likely to engage in "subversive" activity. So it's hard for me to determine which kind of discrimination is justified and which is not.
"It also records details about visits to a popular internet journal for Linux operating system users called "the Linux Journal - the Original Magazine of the Linux Community", and calls it an "extremist forum"."
WTF? I guess I am on a list. Who knew being an extremist was so easy?
No, it doesn't say that Linux Journal itself is an ‘extremist forum’. It says that TAILs is “advocated by extremists on extremist forums”, and includes Linux Journal as a source of information about TAILs, neither of which seem surprising.
I'm certainly proud to be associated with LJ, and to be writing for them.
I'm also willing to believe that hackers who want to use encryption and other privacy-oriented technologies use and read about open-source technologies. Although my guess is that this includes nearly all serious security researchers, experts, and implementers.
That said, to claim that people who read LJ are extremists, or that the magazine is something of an "extremist forum," misses the mark in so many ways.
We begin therefore where they are determined not to end, with the question whether any form of democratic self-government, anywhere, is consistent with the kind of massive, pervasive surveillance into which the United States government has led not only us but the world.
This should not actually be a complicated inquiry.
Did you seriously think news.ycombinator.com doesn't increase your score and susceptibility to having your computing devices hacked into? And puts you on a very interesting NSA/CIA/Letter-Combo/For-Your-Safety list?
Look at Ukraine. War just pops up. I wonder which list they will go by first.
What I gleaned most from the article(s) is that it's becoming increasingly important for all of us in the tech community to take a stand ourselves along with TOR to promote online anonymity in our companies (& possibly even think about supporting the TOR Project itself in some way).
This is a two-year-old article - as far as I know the German Pirates now are rather leftist (actually too leftist for my liking - but as a whole the pirate movement is rather balanced between the two poles).
Pirate Party as a new and mostly undefined movement attracted all kinds of freaks - but it can only work as a movement of those that understand how the Internet can be used in politics, both the dangers and the potential for good, and who value the freedom and openness that was associated with the early net.
There were two versions of this story on the front page. This thread has the fuller discussion, the other the original source. In such cases we usually merge them by reassigning the url and burying the other thread.
It's a conservative meme that raw milk producers are being unfairly persecuted; in those circles it's supposed to be a paradigm case of the overly intrusive nanny state.
In reality, there is hard epidemiological data showing that selling raw milk (edit: e.g. through the normal store channels) can lead to serious harm including deaths. So FDA bans it for interstate sales, but it's up to the state to decide how to regulate in-state sales. Just like any other food safety issue.
NSA is extremely unlikely to be involved in enforcing regulations against raw milk in reality, but in the mind of the conservative conspiracy theorist it's all of one totalitarian piece.
> selling raw milk ... can lead to serious harm including deaths.
Please STOP spreading misinformation. The only two deaths from raw milk in the last 20 years were traced back to bad queso fresco. In fact, over the same time period, there were more deaths attributed to pasteurized liquid milk than to raw liquid milk.
It's pretty obvious it can cause deaths, since raw milk causes listeriosis disproportionately and the risks from that (including death) are very well known.
Even your quoted web page lists 2 deaths from raw milk products and 3 deaths from pasteurized milk products. Considering the relative rarity of raw milk product consumption, that's a pretty obvious sign.
Arguing that the contamination isn't significant since it's specific to one milk product doesn't pass muster. With such a small sample you can't deduce anything about how the risk is distributed across types of milk products.
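The statistical point here is about base rates: similar raw death counts imply very different per-consumer risks if far fewer people drink raw milk. The death counts below are the ones cited in this thread; the consumption shares are rough assumptions of mine, purely for illustration:

```python
# Back-of-envelope per-consumer risk comparison. Death counts are the
# figures cited in the thread; consumption shares are hypothetical.

def deaths_per_million(deaths, consumers):
    return deaths / consumers * 1_000_000

population = 300_000_000                 # rough US population (assumption)
raw_consumers = 0.03 * population        # assume ~3% drink raw milk
pasteurized_consumers = 0.97 * population

raw_rate = deaths_per_million(2, raw_consumers)          # 2 deaths cited
pasteurized_rate = deaths_per_million(3, pasteurized_consumers)  # 3 deaths cited
print(raw_rate > pasteurized_rate)  # True: raw milk's per-consumer rate is higher
```

Under these (hypothetical) shares the per-consumer rate for raw milk comes out roughly 20x higher, which is why comparing absolute death counts without consumption denominators is misleading in both directions.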
> specific to one milk product doesn't pass muster
No it's not obvious. That's the point. The CDC has admitted those deaths were caused by a product (queso fresco) that is commonly contaminated after production. There are ZERO deaths attributed to consuming raw liquid milk.
> such a small sample you can't deduce anything
Apparently all data from 1998-2011 on all reported illness and deaths from raw milk products is too small for Chicken Little.
And if this data set is too "small" why are the conclusions drawn by the CDC ("raw milk is deadly!") valid? Shouldn't the paucity of data preclude judgement one way or the other?
> Shouldn't the paucity of data preclude judgement one way or the other?
There is no paucity of data. There are very small numbers of people who drink raw milk. And thus there are small numbers of people harmed by raw milk. But it's pretty clear that raw milk is considerably riskier than pasteurised milk.
Whether adults should be allowed to make stupid choices is another topic. I'd suggest that adults should not be allowed to inflict those stupid choices onto children - who are going to be at even greater risk from harm.
You keep talking about death. Having to have kidneys transplanted because e coli has destroyed them is not death, but I hope you agree it's a severe consequence from eating food.
> The Centers for Disease Control and Prevention (CDC) reports that of 239 hospitalizations caused by tainted dairy products from 1993 through 2006, 202 involved raw milk or raw-milk cheese. Nearly two-thirds of the patients were younger than 20. "Parents go to raw milk because they hear it's good for kids' allergies," says Michele Jay-Russell, a veterinarian and food safety specialist at the University of California-Davis who has studied the outbreaks. But children's developing immune systems are more vulnerable than those of adults. "They end up sickening their kids," Jay-Russell adds.
"Harmful" is a meaningless term and contributes nothing to the discussion. Cars are harmful. Alcohol is harmful. Freedom is harmful. So what's your fucking point?
I bring up death because that's the canard trotted out by raw milk haters. And it doesn't happen with any appreciable frequency despite the large numbers of people consuming raw milk.
I'm not disagreeing that both raw and pasteurized milk can potentially cause serious illness, however I do not believe the numbers are large enough to be cause for concern or excessive regulation by control freaks who need to dictate what people put in their bodies. Perhaps you disagree and that's fine.
You are being a troll. You don't know what you're talking about. You can do the math to figure out that you're wrong; I already have. If you can contribute anything remotely productive I will respond; otherwise, have fun building regulation castles in the sky.
It's probably misplaced to focus only on the NSA in this respect, but if we talk about government electronic surveillance capabilities generally, they've been expanding through many agencies and parts of government. The ACLU's recent focus on local police use of IMSI catchers is just one example; they started out as super-secret high-tech spy stuff, and now local cops think they're super-awesome and are afraid they may have to give them up if word gets out and the courts or legislators start taking a closer look.
Electronic surveillance used to be more stigmatized in some ways, but it's becoming more culturally normalized as a basic government tool (at least in the culture of government agencies -- I hope not as much elsewhere). So you see it used in more and more contexts.
I'm totally unfamiliar with the raw milk regulations, but I think that people who are concerned about them could reasonably worry that electronic communications surveillance will be used to enforce them in the future. Likely not by NSA itself, but perhaps through something that's in part technological trickle-down from NSA development or procurement.
> In reality, there is hard epidemiological data showing that selling raw milk (edit: e.g. through the normal store channels) can lead to serious harm including deaths.
I'd love to see the evidence, and see it compared to other food sources.
I grew up in India. There all we got was raw milk from the cowherd; in fact, even today, my parents send the helper to get milk in a pail from the cowherd. It's always been raw milk, warm and fresh from the udder. And the first thing they do is to boil it.
If I were to conjecture, it's that the "no raw milk" diktat forces farmers to go to big distribution companies with the requisite facilities for pasteurization.
> It's always been raw milk, warm and fresh from the udder. And the first thing they do is to boil it.
Two things: Firstly, it's not raw if they boil it. Most store-bought milk has gone through two processes: pasteurisation and homogenisation. Pasteurisation is simply heating the milk. If your family boiled it before drinking, you've actually heated the milk more than commercial pasteurisation does: normally pasteurised milk is heated to only 72 degrees Celsius for only 15 seconds. Homogenisation is essentially forcing the milk through filters that break up the globs of fat. Only pasteurisation is necessary for food safety.
And secondly, pasteurisation is most necessary if you intend to store the milk. If, as you say, it's "warm and fresh from the udder", there's little risk from drinking raw milk.
The parent of your post specifically said selling raw milk through the normal store channels. The issue is not raw milk, but selling raw milk, which when you combine storage and transport, and the consumer storing it, means plenty of time for massive amounts of bacteria growth. As I'm sure you know, even with normal pasteurisation milk spoils relatively quickly.
> If I were to conjecture, it's that the "no raw milk" diktat forces farmers to go to big distribution companies with the requisite facilities for pasteurization.
Health authorities first started to push for pasteurisation after its extensive success in massively reducing illnesses - and deaths - due to spoiled milk.
Boiling milk? No thanks, boiled milk tastes funny.
You should remember that not everyone lives in the same hot climate as you, where milk generally doesn't go bad immediately, and that there are plenty of people around in cooler climates who have stomachs that usually can handle milk without problems.
> Boiling milk? No thanks, boiled milk tastes funny.
Boiled milk, yes. But unless you get milk straight from a farm, it's likely pasteurised: Heated to 72 degrees celsius for 15 seconds. [EDIT: I didn't realise how many places allow sales of unpasteurised milk; yikes - I'll be careful about reading labels next time I'm travelling]
> and that there are plenty of people around in cooler climates that have stomachs that usually can handle milk without problem.
The "stomachs that usually can handle milk without a problem" part is entirely unrelated to why we pasteurise milk. Pasteurisation does not affect the lactose content of the milk, and that, combined with whether your genes make you lactose intolerant, is what determines whether you handle milk well.
I agree totally on your points when it comes to pasteurized milk - I was commenting on a comment that claimed regulation wasn't necessary because farmers would boil the milk anyhow, which is simply not the case.
I would love to see the data with a comparison to baseline too. And it is an example of a nanny state (the FDA is literally saying, "we have to protect you from this") rather than acting as a licensed-milk-producer seal issuer.
It's the same nanny state issue as when the FDA shut down more beneficial AIDS treatments in the 80s and 90s, when the only drug on the market that the FDA approved of (AZT) was essentially toxic and killed about the same number of people that it "helped". Why should the FDA decide what goes into the bodies of supposedly "free" people? They should only act to say, "This is the only type of drugs or milk the FDA approves of."
> They should only act to say, "This is the only type of drugs or milk the FDA approves of"
The same CDC report mentioned above specifically addresses labelling, and points out that the numbers show labelling does not have a significant effect.
If it was only your body you put at risk, you might have a point, but this also includes parents putting their children at substantial risk, and people putting others at risk whenever they serve non-pasteurised dairy products and people are not themselves aware of the risk.
Anyone can easily buy raw milk here, and so far no disease outbreaks have happened. That being said, raw milk production and transportation are heavily regulated and checked by inspectors. Products are also regularly tested for possible infections.
Edit: The title and link of this HN article have changed. The link changed from a BoingBoing article to the original German article, and the headline used to be a question ("Who is the NSA spying on..." or similar) that gave the GP comment more context.
You're warning people about the possible consequences of reading BoingBoing on a site called Hacker News, where actual hackers and technorati hang out and complain about the American government all the time.