NSA targets the privacy-conscious (ndr.de)
434 points by freejack 389 days ago | 176 comments



Of course this happens, and it's an obvious technique. I'm sure that not having a Facebook account adds to your score, using AdBlock adds to your score, mentioning the NSA online adds to your score, refusing cookies adds to your score, using Linux adds to your score, etc.

That's how a police state works. (XKeyScore += 5)

My mother was involved in civil rights, so she has a file. It's fine that I have a file too. Hopefully I'll be gone before they start going door to door.

edit: http://www.linuxjournal.com/content/nsa-linux-journal-extrem...

-----


That's how any sort of metric works - your credit score, your Acxiom consumer category score, whatever metrics Facebook and Google use to score user engagement.

This is just how life is when databases are ubiquitous. After I bought a house and my name began appearing in property tax databases I started getting lots of (paper-based) commercial spam for things homeowners are more likely to buy, like different sorts of insurance, refinancing, satellite TV service yadda yadda.

When it emerged after 9/11 that various government agencies had failed to 'join the dots' by not sharing intelligence information effectively, there was a lot of public support for better-coordinated and more proactive intelligence gathering, notwithstanding warnings about the risk to civil liberties. So collectively we got what we asked for. The lack of public outrage or mass demonstrations against the NSA strongly suggests that a large majority are OK with this state of affairs, especially since they're used to data collection in a commercial context.

-----


I think people are too busy, too desperate, and too badly educated to do anything about it. Most people were only being kept solvent by credit cards, borrowing against the equity in their houses, and taking massive loans for major purchases - and when credit got tight, people started setting up tents on Wall Street. The next crash is going to be a terrifying experience.

A little bit of war, however, will probably dull any of that.

-----


This is the agenda that justifies all this: http://100777.com/nwo/barbarians and it was planned before the internet was mainstream. This information was released in 1989. Read and judge for yourself.

-----


And people don't really believe that protest (at least a peaceful one) can create meaningful change. And judging by how things are going lately, they might be right. Case in point: Obamacare.

-----


The credit and consumer category scores are legal in the US because of nonexistent privacy legislation. There is no such thing in the EU, for example. Ubiquitous invasive databases aren't fate.

-----


I wish I could find out my XKeyScore. I'm genuinely interested in how threatening the NSA believes average people are.

-----


Someone should build a Check-My-XKeyScore site. Answer several questions (do you have a VPN? use Linux? etc.) and show the submitter how high he ranks in the NSA's eyes. 'Share via Twitter', etc. Could be a fun and easy way to show people just how crazy this is...
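Nothing in the leaked rules gives actual point values, so a site like that would have to make them up - but the mechanic itself is just an additive checklist. A minimal sketch in Python, with invented questions and weights (none of this comes from the real XKeyscore rules):

  # Toy additive "score" quiz - questions and weights are made up for illustration.
  QUESTIONS = [
      ("Do you use a VPN?", 5),
      ("Do you use Linux?", 3),
      ("Have you searched for Tor or Tails?", 10),
      ("Do you read Linux Journal?", 2),
  ]

  def quiz():
      score = 0
      for question, weight in QUESTIONS:
          answer = input(question + " [y/N] ").strip().lower()
          if answer == "y":
              score += weight
      return score

  if __name__ == "__main__":
      print("Your (entirely made-up) score:", quiz())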

-----


XKeyScore is the new credit report:)

-----


Credit report? Wait until fiancés get the legal right to see your score.

-----


> Hopefully I'll be gone before they start going door to door.

Why care about the rest of your family, who will still be here to endure it... or the rest of us? You have no kids, right?

-----


I don't. If you want to fight, I'll support you. I donate money to causes that I know raise my XKeyscore, and I subscribe to magazines that I know raise my XKeyscore. My long term plan is to find somewhere where this is not happening, and move there, though.

If I were planning to stay here for the rest of my life, I wouldn't risk political posting on the internet, donating, or subscribing. You'll get more help from me as long as I can rationalize it as only worsening a temporary situation.

-----


Unless the first amendment has changed, there is no risk for consumers.

-----


Yeah. Sure. http://vault.fbi.gov/cointel-pro

-----


I don't know what this means.

-----


Amplify this by 100, and welcome to Minority Report, where you get arrested for having a "very high score".

This is where the "mass surveillance" and "let's monitor everyone just in case they might do something bad" thinking inevitably ends up.

-----


> you get arrested for having a "very high score".

I was thinking more in terms of automated drone strikes, but yeah, for the facade of democracy, arrests may be the way to go for now.

-----


Too expensive.

-----


Drones get more cost effective as the kill frequency increases. Economies of scale.

-----


You can make it cheaper still by waiting until more than one target is in the same strike zone.

This of course only matters if you really care about the money. I assume that the large black budgets of the CIA and NSA are not really a concern to those that matter.

Ideally you spend more every year. That's the usual policy of government agencies, otherwise the following year you get a budget cut.

-----


"DronEfficiency (YC2034): better protection of our country and spending less tax money due to improved algorithms!"

(Disclaimer: I'm not insinuating that YC supports this)

-----


"Luc Doevil from DronEfficiency: We've collected 10M of initial funding; that's two more rounds and we'll be able to buy our first Hellfire for testing."

-----


> Amplify this by 100, and welcome to Minority Report, where you get arrested for having a "very high score".

Or maybe welcome to Oceania of 1984, having to deal with Thinkpol:

http://en.wikipedia.org/wiki/Thoughtcrime

"The Thought Police (thinkpol in Newspeak) are the secret police of the novel Nineteen Eighty-Four. It is their job to uncover and punish thoughtcrime."

-----


>thinkpol

Yup, we don't have thinkpol yet (and hopefully we never will), but we do have a pretty damn good analogy to it: https://en.wikipedia.org/wiki/Predictive_policing

-----


... you get arrested for having a "very high score".

Considering this is PRECISELY how spam filtering works, it doesn't seem entirely irrational.

Much like spam filtering, it would all come down to dialing in your filters and picking a good threshold.

Hell, if the system was good enough, it could actually improve freedoms. We currently arrest many innocent people as part of the legal process (who are later exonerated). What if our "arrest filter" outperformed the current system, in terms of percentage of innocent people arrested? It doesn't have to be perfect to be better.

-----


Spam filtering works using actual data that someone once used to train it - if we are taking this at face value, it means that the NSA is not just targeting people who might be terrorists and collecting data on other people as a side effect, but that it actively targets people with no ties to terrorism whatsoever, for reasons that both the US and international public might find unsavory if they found out about them.

Spam filtering also works in a very different context. The spam-to-nonspam ratio is something like 90% spam to 10% nonspam, which means there is lots and lots of spam to filter out; if an important email slips through, people are bound to notice and either adjust their spam filter or do something about it. In the other setting, 99.99% or more of people have nothing to do with terrorism or criminal activity, and maybe one or two dozen (among tens of millions) are people you are actually targeting. Erroneously targeting a substantial chunk of your non-interesting population ties up resources - you're spending your time investigating people who are not terrorists - but since the job is difficult anyway, at least you seem like you're doing something with all the money you receive, never mind if some of the data is used for industrial espionage or for hunting people that only poultry farmers and fracking magnates would call terrorists. And if you miss one of the two dozen real targets, well, they won't do anything harmful this year or the next because they also have to fear regular law enforcement, and when they do, it will probably be at a moment that suits you for asking for more money.

tl;dr: Because we don't have a large sample of actual terrorists on hand, it's hard to evaluate activities like the NSA's - evaluation that would be desirable, since we're giving them large chunks of money that could make everyone safer if used to fight actual crime rather than some fuzzy notion of terrorism.
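To put rough (and entirely invented) numbers on that base-rate point: even a filter that is right 99% of the time drowns in false positives when what it is looking for is vanishingly rare.

  # Illustrative base-rate arithmetic, with invented numbers.
  population = 300_000_000      # people being scored
  true_targets = 100            # actual "bad actors" among them
  sensitivity = 0.99            # filter flags 99% of real targets
  false_positive_rate = 0.01    # and wrongly flags 1% of everyone else

  flagged_targets = true_targets * sensitivity
  flagged_innocents = (population - true_targets) * false_positive_rate

  precision = flagged_targets / (flagged_targets + flagged_innocents)
  print(f"People flagged: {flagged_targets + flagged_innocents:,.0f}")
  print(f"Fraction of flagged people who are real targets: {precision:.6f}")
  # ~3 million people flagged; roughly 0.003% of them are actual targets.

Dial the false positive rate down by a factor of a hundred and you still flag hundreds of innocent people per real target.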

-----


The problem with such a filter is it will be perceived as inefficient and broken by those evaluating it if it denies an officer the authority to arrest someone he's already detained or determined to be worthy of arrest.

It will be another system which grants permission at the whims of the department, one that absolves individual officers of blame, punishment and consequences for bias and abuse they'll continue to revel in.

-----


There is no reason such a filter need be an authoritative source. Arrest someone it didn't suggest, if you like. Choose not to arrest someone it did suggest. Human discretion would still be applied. But if it proved accurate, officers would trust it, and the population would get upset if it identified a criminal an officer did not arrest.

No different than a virus or spam scanner, really. I trust my scanners. Sometimes they are wrong, and I know they are, and I bypass them. But I know they are right most of the time.

You seem to have the notion though that officers do not arrest people to "catch the bad guy". It sounds like you are saying you believe most officers do not actually care if the arrestee is guilty (and will thus ignore the filter always), and merely arrest for kicks/pleasure/vengeance? I do not believe the majority are like that.

-----


At least in the US, officers have quotas of arrests to fulfill, which are most easily fulfilled with petty crime that happens all the time and is easy to find. Attorneys get promoted based on successful cases, meaning those where they could convince a jury that the person is guilty; any foresight that goes above or beyond what they can present to a jury of laymen is lost time and will get them recognized as being ineffective.

-----


>You seem to have the notion though that officers do not arrest people to "catch the bad guy". It sounds like you are saying you believe most officers do not actually care if the arrestee is guilty (and will thus ignore the filter always), and merely arrest for kicks/pleasure/vengeance? I do not believe the majority are like that.

You seem to have the notion that I believe what you think I believe. No, I do not believe that officers are arresting people for kicks. I do believe that human bias, power, career building/justification and 'gut feelings' about who might be a bad guy perpetuate unequal application of the law and abuse in a policed society.

I do believe that if a filter sought to mitigate those very real problems, it would be fought by the system it is meant to augment. I am giving it the very forgiving assumption that those motivations and skews in perception aren't unknowingly built into the filter by its human creators, and that biased information isn't fed into the filter. If either of those things did happen, it would be welcomed with open arms by police departments nationwide.

I do believe that such a filter will be perceived as broken or inefficient if it doesn't confirm the officers' preconceived notions about who is a criminal or worthy of arrest.

Unfortunately, I do not believe that the problem is 'the bad guys are getting away, and we need a system to find them'. It's rather the opposite, 'innocent people are having their lives interrupted and sometimes ruined because officers think they look suspicious for reasons and need to justify their paychecks; we need a system to mitigate that'.

I do not think a system that blinks a little arrest light if a suspect is Muslim and uses Google has any place in society, no matter how many times you cry 'b-but machine learning!! Spam filters!! Bayes!!'

Your filter would cement this period's problems into, in the eyes of the public, an infallible machine's instructions and will enable abuse without accountability, because you can always point to the machine and say you followed your best judgment.

-----


The 'arrest filter' is only as good as the inputs. As it currently stands, I'm sure that things like "uses drugs recreationally," "is black," "is Muslim," and "is not Christian," would end up counting towards your 'arrest score.' And also because your arrest score is computed by a machine rather than a human, that will be used as an excuse to call it unimpeachable. E.g. "Machines can't be racist, so the arrest score going after lots of poor, Black men must mean that there's something to it."

-----


Only as good as the inputs, yes. But if it's a halfway decent filter, it will include machine learning, e.g. a Bayesian filter, and if "is Muslim" turns out to have low correspondence with actual criminal activity that input will quickly be deweighted. Or perhaps paired with other aspects - e.g., perhaps "is Muslim" is of no consequence and "Googles Jihad" is also of no consequence, but "is Muslim" && "Googles Jihad" gives you a point. Just as one example of the patterns a good filter could recognize.

Machines can't be racist, so the arrest score going after lots of poor, Black men must mean that there's something to it.

If a learning Bayesian filter targets a certain demographic, there probably IS something to it.

That really would be amusing/pleasing, if all this work we've spent developing spam filters became the lead-up to an accurate, learning crime filter. Perhaps the fork to spamassassin will be known as crimeassassin?
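For what it's worth, the "deweighting" claim is easy to demonstrate on toy data: in a naive Bayes filter each feature's contribution is its log-likelihood ratio, and a feature that occurs at the same rate in both classes contributes roughly nothing. A minimal sketch with synthetic counts and deliberately generic feature names (nothing here is real data):

  import math

  # Synthetic counts of how often each binary feature appears in each class.
  # feature_a occurs at the same rate in both classes; feature_b does not.
  counts = {
      #            (in "positive" class, in "negative" class)
      "feature_a": (50, 5000),   # 50 of 100 positives, 5000 of 10000 negatives
      "feature_b": (80, 1000),   # 80 of 100 positives, 1000 of 10000 negatives
  }
  n_pos, n_neg = 100, 10000

  for name, (pos, neg) in counts.items():
      # Laplace-smoothed per-class probabilities and their log ratio,
      # i.e. the "weight" this feature adds to an item's score.
      p_pos = (pos + 1) / (n_pos + 2)
      p_neg = (neg + 1) / (n_neg + 2)
      print(name, "weight =", round(math.log(p_pos / p_neg), 3))
  # feature_a's weight comes out ~0 (uninformative); feature_b's is clearly positive.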

-----


> If a learning Bayesian filter targets a certain demographic, there probably IS something to it.

I'm pretty sure both Bayes and Laplace would not agree with the categorization of Bayesian probability as some sort of panacea for determining truth in criminal matters.

-----


Not for determining truth, but the degree of confidence in it, yes. Probability is the tool to get to the truth, if you allow for your information to be incomplete and uncertain.

-----


Yes, it is just probability and not truth. But an arrest is often a "guess" with some degree of confidence, not absolute certainty. Thus is a "weight" or "probability" not perfectly applicable?

Trials & convictions are a different matter, more suited to truth-seeking.

-----


Don't disregard the effects that 'just an arrest' have on people. For example, being arrested for child porn-related charges, but not charged / convicted isn't exactly a no-op.

-----


Perhaps you should watch "The Thin Blue Line" a few hundred times... and then repost...

I would be seriously concerned about a Bayesian filter being applied as the sole reason for arrests...

-----


> "it doesn't seem entirely irrational." (Yes it does...)

> "It doesn't have to be perfect to be better." (Yes it does...)

The problem used to be approached by presuming innocence (demanding perfection), rather than with a willingness to accept false positives (20 years ago spam filters weren't available as an analogy...). It is always possible to wrongfully judge someone, but it was never a valid or acceptable outcome ("It is better that ten guilty persons escape than that one innocent suffer" - Blackstone). We accept that spam filters give false positives (not to mention that one person's spam is another person's opportunity), so I think comparing the justice system to detecting spam is a mistake, and moreover that a goal of "prevention" itself is a red herring.

The goal of prevention encourages us to accept lower thresholds of guilt probability, and that is wrong. In other words, if prevention is an end, then it is worth deliberately (rather than accidentally) restricting innocent people on the basis of virtually any nonzero probability of guilt. 80% "guilty" by association (for using Tor for example), 45%, etc, would all be enough to justify legal action - and the thresholds would certainly depend on whoever is in power and has access to the database that week. This is a very different model than presuming innocence, and having not only a goal of 0 false-positives, but also providing satisfaction when the justice system is in error.

I think today we are mostly talking around the fact that a crime has to have been committed in order for it to deserve to be punished, and that, for that reason, prevention cannot be a valid goal in itself (but it's nice when it happens).

Rationalizing surveillance as a tool to "prevent" rather than to justly punish wrongdoers (which centralized surveillance does not do because it is centrally operated, due to the conflict of interest; everyone owning a camcorder on the other hand...) implies that the central database needs to go IMHO (and that individuals need to be empowered instead).

-----


Hold on there, friend. I was not suggesting we replace the judicial system with a filter. Rather, arrests.

I.e., make the arrest based on the filter, then run the trial in the same old jury-of-your-peers.

Convictions should be false-positive-free. But our system would not work if arrests also needed to be 100% false-positive-free.

I'm also not advocating punishment for crimes that have not yet been committed. Rather, think of it as looking for flags for crimes that have already been committed or are in progress. For example, there are all sorts of small flags thrown by embezzlement or salami-slicing that, put together, identify the operation.

-----


> make the arrest based on the filter, then run the trial in the same old jury-of-your-peers.

LOL, jury of peers. You mean the jury that is left after the prosecutors and defenders screen out the most competent jurors. The same jurors that typically believe you are guilty because you've been arrested. Have you been in the typical criminal courtroom lately? Any public defender will tell you that going to trial in cuffs and jailhouse orange will almost certainly get you a conviction.

There are lots of things that need to be fixed in the justice system. Let's not give them more tools to make it worse.

-----


Granted, arrests are held to a different standard than convictions in that they merely require "probable cause" rather than proof of guilt and this lower standard does make it look like the spam filtering analogy scenario may fit - but in calculating this new "guilt probability" our spam filter is relying increasingly on the "testimony" and "facts" presented by the surveillance database itself and it is the objectiveness of this database in practice, or rather the ones accessing it, that I am directly calling into question (though I didn't elaborate above).

Unfortunately, the database cannot be trusted by virtue of its centralized nature and administration (even if that centralization is justifiable, for example to protect everyone's privacy). The hardware may be objective, but people are not - people lie, cheat and steal when they can get away with it - and there are simply too few separate and competing interests to hold the small number of people with access to the database and tools accountable for their inevitably selective use of them to ensure their objective application. We have seen centralized data collected and used for private interests (and books censored, and guns regulated, and...) in the past, whether by fascist governments, police protectionism (lying under oath; evidence tampering; racial "profiling"), economic fraud, etc. It is human nature to use one's control to his advantage, and it is simply too tempting for police to shoot first (detain, seize, etc), especially when it is in their interest, and ask questions later (check the database for cause; use "parallel reconstruction"; incriminating speech taken out of context).

It would be worse if that extended all the way to conviction, but it presents the same kind of problem for arrests, detainments, and searches, etc, since it is effectively the word of the administrators (who we trust not to abuse the data and tools) against the person arrested. The more centralized the data and tools become, the less we can trust them to be applied objectively without accountability.

Unfortunately, there are no checks and balances on absolute power (centralization), and so we cannot allow centralization to continue indefinitely. Absolute power corrupts, absolutely, and it is my "thesis" that arrests are not a suitable application of these tools. The risk is too great. Police already have a high level of responsibility (the authority, training, and tools/weapons to control use by force) and what feels like decreasing accountability (because the kids, because the drugs, because I said so, because I can, because of cronyism, and because wealthy people don't like hearing criticism), and since they are none the less "only human" - I don't recommend giving them more.

Granted, you are merely describing a potentially objective algorithm, but my point is that the objectivity of any given tool is moot given the human element. Guns don't kill people, people do, and will continue to do so even with checks and balances (like laws against murder; if prevention was the goal we fail daily). It is only the distribution of accountability (peer juries, private key sharing, democratic voting, citizen groups, etc) that keeps such roles in check.

Anyways, thanks for the opportunity to flesh my thoughts out more.

-----


I guess my theory partly depends on the filter being too sophisticated for any one person to co-opt. We can design machine learning, but there can't be many people who are capable of wrapping their head around a running machine learning system, and be able to reach in right here and peek/poke some weight and bam your nephew is arrested in Texas. On the bright side, most of those people are probably not officers, whom you seem to be most afraid of.

As for the objectivity of feeding the filter data, I envision something completely automatic. No selective entry for this or that suspicious person - the filter is fed a database of all people, and perhaps monitors the internet's traffic on its own. Maybe ACH traffic too. Financial crime could be this system's biggest win - computers are far better suited than humans to uncovering financial crime.

Basically, when it's big enough and sophisticated enough and automated enough that no one person can fully understand it, it becomes significantly harder to pervert. And, as I mentioned before, it needn't be perfect- our current system is pervertable too (see: papers please, racial profiling, etc), so this one would just need to be less pervertable...

-----


Then why does the system not look out for corrupt politicians, or black military budgets? Because the filters are not tuned well enough yet, as if that will ever be an objective? I'd say it's because it's not a spam filter that filters for spam.. more like a spam filter that filters out the spam of the competition, lets yours through, and kills emails warning about this. Call me paranoid, but until the big guns are primarily used to catch the big villains, this is what I see.

-----


Not doing any of those things will massively bump up your score; you must be trying to avoid detection.

-----


Yeah, and if you don't even have an internet connection, the NSA is putting you on their most wanted list. Give me a break. Not doing something can't make you more suspicious.

-----


>Not doing something can't make you more suspicious.

That's just stupid.

-----


No. That's common sense. If you don't do something on the internet, there is no data. No data = nothing suspicious.

-----


There is no state of no data. You are known to exist, you are known for not participating in something that is common for your group. That, in combination with the thousands of other data points about you will determine whether you are of interest. That may determine whether your car gets searched during a traffic stop, or whether you're put on a no-fly list.

This is not complicated to build, it is simple to build, and the only logical way of accomplishing what the government claims that they're attempting to accomplish.

-----


I think you're overestimating the NSA's capability to cross-reference your actions and compare them to "what people like me should typically do".

My point is, you should not change your behaviour to be a lesser target to the NSA. You'd just quickly become super paranoid. Instead you should live your life exactly the same, and if the NSA tries to make your life bad, that's the moment when you call them out on it - after all, it's the NSA that is behaving out of line. So they should change, not you.

Oh, and http://defundthensa.com/ , sure.

-----


I think you're overestimating the difficulty of the problem. The difficult part is access to the channels of information. After that, it's a matter of applying well-known algorithms while filtering and processing streams.

The only reason that I suspect that the government is still terrible at this is because they have to rely on government contractors to implement it. If they're intentionally funding startups that happen to be developing tools in the spaces they need, though, it's only a matter of a (short) time until the systems they have are settled and dependable, and they can concentrate on innovation.

>it's the NSA that is behaving out of line. So they should change, not you.

This is also silly. That's like people who walk into speeding traffic because they have the right-of-way. You won't get to hear about how the trial turns out from your grave.

-----


> You won't get to hear about how the trial turns out from your grave.

Well, in contrast to you, I plan to enjoy my life instead of wasting it by worrying about some possible bad actor spying on me - which, from what I read, is how you spend your life.

Sure, fight the NSA by engaging a little bit politically, and buying the right things. But apart from that, don't worry so much, man.

-----


I'm not worried (edit: about anything immediate), I'm not failing to partake in anything that I ever would, and I don't live any differently than I ever have. You're projecting something onto me, and that's not a great way to have a productive discussion.

-----


Well, when you say

> You won't get to hear about how the trial turns out from your grave

that's equally projecting something onto me, so... that's not a great way to have a productive discussion either.

-----


Do you think I'm accusing you of having been hit by a car? I'm not. It was an analogy.

-----


Well, it was ambiguous to say the least, with all those "you"'s in it. If it was meant as an analogy, maybe you should have written it like this:

> This is also silly. That's like people who walk into speeding traffic because they have the right-of-way. They won't get to hear about how the trial turns out from their grave.

-----


You really thought that I was accusing you of being hit by a car? Mind blown. Point noted, I'll try to be more careful next time.

-----


>>Well, in contrast to you, I plan to enjoy my life instead of wasting it by worrying about some possible bad actor spying on me - which, from what I read, is how you spend your life.

Let me guess: you are a straight white dude with a comfortable income.

-----


Tangential, but I think it's high time for someone to raise awareness about the discrimination of white straight males by all those social justice warriors everywhere. Seriously, no matter what we do or say, we're overprivileged and the source of all evil. sigh

-----


Kindly go fuck yourself, you bigot.

-----


> I think you're overestimating the NSA's capability to cross-reference your actions

They don't need to do any cross-referencing. Your parents, cousins, friends, former colleagues and classmates, etc., will sell you out for a "like" in a heartbeat.

> and compare them to "what people like me should typically do".

Ditto. I hate it every time people in my acquaintance network email me with "since you cannot be found like everybody else, I'm sending you this thing that you probably don't care about in the first place. After all we are still friends, right?"

-----


No data may be very suspicious if some data is expected to be found. See http://abstrusegoose.com/396 for an illustrative example.

-----


The absence of data is, itself, data.

-----


The lack of data about you on the internet is likely more anomalous than whatever data you're trying to hide originally was.

Not finding something when you should find something is suspicious.

-----


Have you never heard the classic movie line?

It's quiet. TOO quiet.

-----


Even if you go full Stallman, data leaks everywhere. Credit card purchase data is bought and sold daily, as is satellite imaging time.

-----


Because anyone trying to keep anything private or secure must be hiding something bad... That's just wonderful.

After 10 years of pervasive surveillance and not being able to catch a single terrorist, I can't believe the NSA is trying to rationalize it as being a good thing. It's too bad the bill to defund the NSA didn't pass: http://defundthensa.com/

-----


Because anyone trying to keep anything private or secure must be hiding something bad

The NSA does an awful lot of hiding things. It might therefore be reasonable to conclude that it is bad and should, at the very least, have its funding cut.

-----


Got to stop people from doing things that they themselves are doing...

Anecdotally, this reminds me of the guy who was key in setting up the British porn filter being arrested for child pornography: http://www.telegraph.co.uk/news/politics/david-cameron/10675...

-----


A person is not a government agency ostensibly accountable to a democratic government and, therefore, public oversight. The NSA is. To the extent a democracy keeps secrets from voters, it is not a democracy. The US government keeps a lot of secrets.

-----


>Because anyone trying to keep anything private or secure must be hiding something bad...

No, that's not the reasoning. People that do bad things try to hide them. Therefore, a good first filter to catch bad people is to target those who hide things. They can narrow the search field afterwards.

-----


This is true, but saying "treat all people trying to hide things as potential terrorists" is a very wide net to cast (to the point of useless).

-----


Is that really what happens though? They aren't treating them as potential terrorists. They are treating it as one of many signals that would almost certainly be associated with a threat.

-----


It is a vastly smaller space than "all people that use the internet". Also I think it makes sense to assume that the "privacy seekers" set would contain a higher proportion of "bad actors". I am not in any way supporting them, but I understand why they do it - even if I hate the idea.

-----


The fraction of terrorists in "all people that use the internet" is approximately equal to the fraction of terrorists in "privacy seekers" -- both are roughly zero.

The NSA was only given such pervasive power to catch terrorists. The fact that privacy seekers are more likely to be "bad actors" is moot unless that means terrorists, because regular "bad actors" are supposed to be innocent until proven guilty, handled by the police/FBI, etc. The NSA is only allowed to work with essentially no due process because it's for catching "terrorists". Interestingly, now that terrorists know about the NSA, they no doubt will simply not use the internet (or phones) at all, thus making it so the NSA can't catch a terrorist by any of its methods.

I definitely expect all terrorists are extremely careful about internet activity now that they know the NSA is so invasive, thus making the NSA's actions even less defensible (not that I thought they ever were).

-----


Nobody said that.

-----


Not sure why you were marked down for that comment. Logically it is an obvious low hanging fruit to target the people looking for privacy. It is cynical but certainly logical, like it or not.

-----


Literally everyone hides something; it's all relative. Not to mention the people that do bad things in plain sight.

-----


> the people that do bad things in plain sight.

..aaand the ones who get "caught" and get away with a slap on the wrist before you can say "this is a joke, I just cannot believe the hypocrisy of this, double standards much?" (because let's be realistic, it's not instant)

-----


> I can't believe the NSA is trying to rationalize it as being a good thing

Is it really a surprise people in power wish to remain in power?

-----


I wonder if the sentiment here is that the NSA used to be known for being a highly intelligent group of people trying to solve hard problems but now they seem like a bunch of D-level bureaucrats snooping through everything hoping they'll get lucky.

-----


That's it exactly. There are some brilliant people working at the NSA and our country would be better off if they were working to improve the security of utilities, local governments, and critical businesses.

-----


And maybe they are a highly intelligent group of bureaucrats trying to solve hard problems so they can snoop through everything and get lucky. A highly intelligent person solving hard problems is not necessarily a "good guy", an "ethical guy", or a "law-abiding guy".

-----


Maybe there's just both kinds of people working at the NSA...

-----


I would amend that to: "Is it any surprise the bureaucracy wishes to protect the bureaucracy?"

-----


"The only thing that saves us from the bureaucracy is inefficiency. An efficient bureaucracy is the greatest threat to liberty." - Eugene McCarthy

-----


"Be thankful we're not getting all the government we're paying for." - Will Rogers

-----


http://people.bethel.edu/~steken/death%20and%20despair/holze...

-----


>not being able to catch a single terrorist

Except, you know, this guy named Osama Bin Laden. http://www.ibtimes.com/nsa-snowden-leaks-satellites-drones-c...

-----


I think he meant before they wreak havoc.

-----


No, he had already died in 2001, people only get to die one time.

-----


Because anyone trying to keep anything private or secure must be hiding something bad...

Well, that idea isn't exactly new, unfortunately; remember Eric Schmidt: "If you have something that you don't want anyone to know, maybe you shouldn't be doing it in the first place."

-----


Well, in context, I believe that quote was intended to mean "as a US company, we can't stop the government from intruding upon your privacy", not "we don't value your privacy".

-----


Like we shouldn't get our privacy.

Whereas he bought himself a specially isolated penthouse in order to live his open marriage in perfect privacy.

-----


On the other hand, anyone who is up to something bad is going to try to keep it as secret as possible, no?

After 10 years of pervasive surveillance and not being able to catch a single terrorist

Questionable. I wouldn't expect the NSA to put out a press release every time someone is nabbed on the back of their information-gathering, that wouldn't be very smart.

-----


If we are to accept that some who seek privacy and security are good and innocent, we also need to accept that some who seek those same things can also be bad.

-----


You can literally replace "seek privacy and security" with almost anything in that statement. I mean, out of all the people that prefer Coke over Pepsi, there are definitely some bad guys.

While your statement is true, it doesn't really give any justification for targeted spying. That is, unless of course, we consider the desire to hide things to be a signal of being a bad actor. Thinking along those lines is a very slippery slope though. The erosion of freedoms is just another side-effect of policies and actions shaped by such thinking.

-----


That's nonsense. If a credible actor poses a threat, they will attempt to do a competent job hiding their communications and data. Those that don't are by definition less competent and less likely to pose a threat.

Pardon the bluntness, but it seems like an a priori conclusion that only people taking pains to hide their data/communications should be targeted. That is not because the vast majority of those people are innocent (they are) but because that is the only group that contains a subset that poses a real & present threat.

So, that ignores your slippery slope argument about personal liberties, which are totally valid. How do you balance national security and personal liberty in this case? That's the million dollar question.

Please let me know if you question my reasoning. I'm purely looking at it as a 2x2 matrix of (highly encrypts personal data, does not ...) x (seeks to harm people/nation interests, does not ...)

So only the people hiding their tracks that seek to harm are the ones to worry about. Those that don't hide their tracks are a lot less likely to be operationally successful</euphemism>.

However, I assume that the 99.95% of people that highly encrypt personal data do not seek to harm anyone, and are collateral damage here.

Constitutional tradeoffs happen all over. Fire in a crowded theater, felons rights to vote, personal rights to own certain weapons, etc. This is another one that needs to be decided very carefully. But I think both sides have very valid concerns.

-----


>Fire in a crowded theater

This is a quote by a judge in a trial that put someone in prison for passing out antiwar fliers. It's good to know the source of our philosophies.

edit: https://en.wikipedia.org/wiki/Schenck_v._United_States

When we accept that "shouting fire in a crowded theater" isn't protected speech, we're backing up an argument that was used to put someone in prison for non-violent anti-state speech. Not token prison either; 6 months. That is not a good thing, and not an acceptable baseline to guide us in the examination of other issues.

-----


You are mischaracterizing the case and unfairly tarring the reputation of Oliver Wendell Holmes, a truly great Supreme Court justice. Perhaps you would like this better:

"we should be eternally vigilant against attempts to check the expression of opinions that we loathe and believe to be fraught with death"

Back to the other quote: it is a classic and excellent demonstration of one of the greatest tensions in the US Constitution, the balance between individual rights and societal good. It doesn't require any context.

Perhaps I should have used the blunter formulation, also from Justice Holmes:

"The right to swing my fist ends where the other man's nose begins"

-----


Oliver Holmes already has one of the worst reputations; nothing any of us write can compare with the man's own words regarding his reputation. Here's part of his decision upholding the forced sterilization of Carrie Buck:

"It is better for all the world, if instead of waiting to execute degenerate offspring for crime, or to let them starve for their imbecility, society can prevent those who are manifestly unfit from continuing their kind. The principle that sustains compulsory vaccination is broad enough to cover cutting the Fallopian tubes. [...] Three generations of imbeciles are enough."

Of course this has nothing to do with the NSA/GCHQ/CSEC collecting IP addresses of users of privacy software. Only a police state casts a wide net and then spies on its own citizens to determine their innocence, presuming they are already guilty for seeking out this software in the first place. It's the exact guilt-by-association kind of nonsense every police state throughout history has done.

-----


He does not have one of the worst reputations. That's utterly ridiculous, you don't know a thing about the history of constitutional law if you think so.

Buck v. Bell was one of his worst moments, as well as the rest of the country's. Holmes didn't create the eugenics law in VA, and he didn't hold a gun to the head of the other 7 members of the court who voted with him, though. Eugenics is disgusting and is rightfully in the dustbin of history along with debtors prisons, lobotomization, slavery, and a number of other common practices we currently & correctly view as backwards and evil.

He stands heads and shoulders above the idiot "originalists" polluting the bench right now.

edit: You might as well say that Richard Feynman was a reclusive and socially awkward man. It is not a matter of opinion, it's just false - plainly incorrect. OWH is routinely on the top 10 most influential justices of all time. He remains one of the most widely cited in other SC decisions. But to pin him down on a couple of specific cases while overlooking his enormous influence on contemporary judicial philosophy is ignorant. Don't believe me? Just spend a minute of your time to research it and you'll see how silly it is.

edit 2: Most of what I found to make sure I hadn't lost my mind is even kinder, considering him the 2nd or 3rd most influential justice, behind Marshall and closely tied with Warren.

-----


You're citing to a case that was overturned more than 40 years ago.[1] So I wouldn't really describe it as our 'philosophy' whatever that means.

[1] http://www.oyez.org/cases/1960-1969/1968/1968_492

-----


> "So, that ignores your slippery slope argument about personal liberties, which are totally valid. How do you balance national security and personal liberty in this case? That's the million dollar question."

"National Security" is a fucking joke; there is nothing that needs to be balanced. Anyone interested in terrorizing others could (after driving across state lines if necessary) walk into a Walmart and walk out with a semi-automatic rifle, walk into the nearest mall, yell their grievances with the country and start shooting people.

We know that this sort of attack is possible in the US because plenty of lone-nutters have more or less done it already in the US. We know that terrorist organizations are receptive to this style of attack because they have carried out this style of attack in other countries (Mumbai in 2008 and Kenya in 2013 are obvious examples). Yet the two have yet to be combined in the US.

The only reason why this hasn't happened in the US is that, contrary to popular belief, there just simply are not many people interested in doing this sort of thing in the US. The notion that terrorists yearning to attack America are around every corner is a myth. Those people are a rounding error.

-----


> How do you balance national security and personal liberty in this case? That's the million dollar question.

That would be valid if they could point to any noteworthy success. The fact they can't ["because national security"] pretty much guarantees the program contributes very little real value.

If they had anything to do with something important, say Osama's death, you think they wouldn't trumpet it out as "proof" it works?

I'd say their lack of evidence that it functions is more damning than anything. They can't exactly hide they are doing it post-Snowden. Their one chance to justify funding for new programs that aren't compromised is to say "LOOK HOW SUCCESSFUL WE ARE!!!!" in broad terms.

The fact they cannot do this, and thereby justify larger budgets to Congress, convinces me they know the benefits are negligible.

-----


Do you feel there might be information that they can't publicize? I'm skeptical and sympathetic to that at the same time.

I really would say that it's a question of risk aversion & utility. Even targeting a whole class of people, the odds are probably astronomical of finding someone planning harm (1M:1? more than that?). To me it comes down to: the negative utility of privacy invasion * the number of people targeted ??? the probability of detecting & thwarting the one malactor, where ??? is an inequality.

Maybe it's good we have risk averse and non risk averse groups, that the balance of power between those groups can change over time as necessary.

-----


> Do you feel there might be information that they can't publicize? I'm skeptical and sympathetic to that at the same time.

XKeyscore is (at a minimum) 6 years old.

You are telling me in 6 years they can't provide the broad strokes of a reasonable number of success stories?

The fact they can't stand up and say "Terrorist plot X was stopped by XKeyscore" from 4-5 years ago pretty much proves it's a failure as far as I am concerned.

> I really would say that it's a question of risk aversion & utility. Even targeting a whole class of people, the odds are probably astronomical of finding someone planning harm (1M:1? more than that?). To me it comes down to: the negative utility of privacy invasion * the number of people targeted ??? the probability of detecting & thwarting the one malactor, where ??? is an inequality.

Given that with sufficient information a person is able to force another person to do quite a few things via blackmail, it's simply too dangerous to trust human beings with this tool. Especially an organization like the NSA, where their only knowledge of Snowden's actions came after he released all of the information publicly.

-----


No, because anyone trying to hide something is highly likely to be trying to keep things private and secure. Ignoring the moral, ethical, and legal concerns, it is a perfectly reasonable course of action. The moral, ethical & legal concerns are of course terrible.

-----


The only way out of this, as I see it, is making privacy the default. But this requires some cooperation and motivation from the big guys in Silicon Valley.

Imagine if Chrome, Firefox, Safari - all of them - had, just like the incognito mode, a private mode. Of course, as anonymity also depends on the behavior of the user online, other actions are needed to really ensure security and privacy. But making it the default will educate more people about the importance of privacy and, more importantly, make the point that privacy isn't only for criminals, terrorists and wrong-doers, but that "normal", law-abiding citizens also should have the right to be private. And that is paramount for a democracy to work.

-----


They can start by using HTTPS for everything and hosting things like analytics (Piwik rocks!), JavaScript libraries, etc. themselves.

-----


I think the cooperation necessary would be for the "big guys" to not have a vested interest in selling out privacy, which has been the prevailing business model for a long time. And, since the big guys only listen to their bottom line, that means not using them until they support privacy. It may mean not using the Internet substantially at all. (It's more than a little ironic to be saying this on the preeminent "business hacker" (or "startup") community, which has a visible subset who sympathize with some of the NSA's programs, or at least have been able to rationalize them...)

As you say, the tools have always been there, but no one uses them. That might be because it's a chicken-or-egg problem. At the same time, it might be because the people in the positions to develop and promote the tools, even if only for their own use, are being prevented by a one-track culture that encourages them to sell out their client's privacy in addition to discouraging them from working on projects like Tor. (Again, the HN forum is an example of that conflict - being a largely business-oriented forum; surveillance technology sells... Even DuckDuckGo, a favorite startup in this community, has filters to protect us.) Rather than peer-to-peer solutions like Gnutella, Gnunet, Tor, and even open wireless, people continue to make websites with JavaScript encryption, despite the proven MITM threat.

I don't think JavaScript and CSS will get us out of this, but if this latest revelation doesn't wake people up in the tech community specifically, nothing will, since a large number of them are BoingBoing readers - which to me means that the tech and programmer categories are themselves a primary focus of the surveillance that some highly-respected tech pundits (and HN forum members) have defended and rationalized as only being used for terrorists and perverts. That definition now includes anyone with enough knowledge to build or use strong privacy tools. The definition now includes everyone on this forum.

-----


"No one" uses it because it is too complicated for "every one".

-----


Funny enough, Chrome used to say that incognito mode doesn't protect you against spies. It still says it doesn't protect your data from governments.

-----


> But this requires some cooperation and motivation from the big guys in Silicon Valley

Unfortunately, this is key to making strong encryption commonplace. A social graph and real-time communication could be used to make key exchange easy and secure. Open client software is needed to make security verifiable. And the storage and email infrastructure and clients need to make using encryption the default.

All the pieces of a "trust nobody" environment are there, and so are the pieces for making it an easy to use default.

Hopefully, doing this will be required for American service and technology companies to regain trust.
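The cryptographic pieces really are off the shelf already; the hard part is exactly the key distribution and verification a social graph would have to solve. A minimal sketch with the PyNaCl library (keys generated locally, purely for illustration):

  from nacl.public import PrivateKey, Box

  # Each party generates a keypair; only the public halves are exchanged.
  alice_key = PrivateKey.generate()
  bob_key = PrivateKey.generate()

  # Alice encrypts to Bob's public key; the Box also authenticates the sender.
  to_bob = Box(alice_key, bob_key.public_key)
  ciphertext = to_bob.encrypt(b"meet at the usual place")

  # Bob decrypts with his private key and Alice's public key.
  from_alice = Box(bob_key, alice_key.public_key)
  print(from_alice.decrypt(ciphertext))

What no library gives you is confidence that the public key you fetched actually belongs to your friend - that verification step is what the client software and the social/real-time channel would have to make easy and auditable.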

-----


One of the biggest difficulties for "easy and secure" key exchange is that so many people want to be able to access private communications on many different devices.

How do you authorize a new device in an "easy and secure" way without simply outsourcing the problem to an intermediary who is then in a position to attack you by authorizing its own devices?

This issue has quite concrete implications for the security and convenience of lots of existing security tools, from GPG to iMessage to Skype to Firefox. They've chosen different approaches but the underlying problem and associated tradeoffs apply to all of them.

On the bright side, there are now a lot of people exploring the space of possibilities for dealing with these tradeoffs.

-----


"The perfect is the enemy of the good."

Just authorize. If you have perfect-forward secrecy, as long as you aren't being man-in-the-middled right now, you're safe.

It's better to have all people doing everything encrypted by default than not.

The goal isn't for one individual to be safe against a targeted NSA attack. That's insane - if the NSA wants you specifically, you are screwed; it simply has far too many resources to bring to bear.

The goal is to make it expensive for the big agencies to do pervasive surveillance. If everybody is encrypting all the time, random peon at Three Letter Agency has to get up from his chair and actually authorize a wiretap, get a warrant, etc. At that point, it's not going to happen unless you've actually done something very wrong.
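For what it's worth, the forward-secrecy property being leaned on here is just "derive the session key from ephemeral keys and throw them away afterwards". A minimal sketch of that idea using the Python cryptography package (authentication and the real handshake protocol are deliberately omitted):

  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
  from cryptography.hazmat.primitives.kdf.hkdf import HKDF

  # Fresh (ephemeral) keypairs for this session only.
  a_priv = X25519PrivateKey.generate()
  b_priv = X25519PrivateKey.generate()

  # Each side combines its private key with the other's public key
  # and arrives at the same shared secret.
  shared_a = a_priv.exchange(b_priv.public_key())
  shared_b = b_priv.exchange(a_priv.public_key())
  assert shared_a == shared_b

  # Derive the session key, then discard the ephemeral private keys.
  # Recorded traffic can't be decrypted later even if long-term keys leak.
  session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                     salt=None, info=b"session").derive(shared_a)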

-----


Fully agreed up until your last sentence: It's not going to happen unless they have reason to believe it will lead to evidence of someone doing something wrong, and that it will be wrong enough to justify the effort.

-----


I'd like to focus on:

Merely searching the web for the privacy-enhancing software tools outlined in the XKeyscore rules causes the NSA to mark and track the IP address of the person doing the search.

Again the media makes it sound like there exists a dragnet on (Google) searches. But this time one of the authors is J. Appelbaum.

So which is it? Terrorist scores based on search engine searches sound fantastically insane to me. But unencrypted traffic is possible to intercept. So perhaps it is something in between: all accessible searches are monitored, and search engines do not cooperate with this directly, unless they have to legally comply with the request?
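Mechanically, the "something in between" version doesn't need any cooperation from the search engines: if the query travels as plaintext HTTP, flagging it is just string matching on intercepted traffic. A toy illustration - the terms and the idea of marking an IP are assumptions for the example, not the actual XKeyscore rules (those are in the linked file):

  import re

  # Invented watch terms; the real rules are in the published xkeyscorerules file.
  WATCH_TERMS = re.compile(r"\b(tails|tor)\b", re.IGNORECASE)

  def flag(src_ip, http_request_line):
      # e.g. a request line seen in the clear on an unencrypted link
      if WATCH_TERMS.search(http_request_line):
          return (src_ip, "marked")
      return None

  print(flag("198.51.100.7", "GET /search?q=tails+usb+install HTTP/1.1"))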

-----


One of the earlier Snowden disclosures was that the NSA had tapped private internal Google fiber lines carrying traffic between data centers. Same with Yahoo, Microsoft, and other major Internet destinations. Google has since started encrypting all internal traffic, but for a while pretty much anything was available to the NSA dragnet.

-----


I'm sure it still is. We just don't know how yet, but a giant corporation like Google probably has other ways to be attacked.

And if it is not possible on the technical level, the NSA will find the people to access the data they want.

-----


I think an even more significant thing in the XKeyScore code (in terms of the idea that "NSA targets the privacy-conscious") is the existence of a "documents/comsec/" hierarchy of fingerprints. I may have written some of the documentation that's targeted elsewhere within that hierarchy.

-----


When the NSA collects evidence on someone and that evidence is used to prosecute a criminal case, the defense can and should file a motion to suppress that evidence.

The NSA data is collected under a warrant issued by the FISA court. So, during a suppression hearing, defense counsel can challenge the validity of the warrant. If their challenge is denied, they can appeal. If their appeal fails, they can petition the Supreme Court. In all these courts, the proceedings are public record and the standard for a warrant can be debated by lawyers and the public alike. We have an open process for checking the work of the humans issuing FISA court warrants; Use it.

Even if the warrant was valid, the NSA might have overstepped its bounds. This can also be challenged when the NSA defends the admissibility of its criminal evidence in a suppression hearing. An independent judiciary can decide if the executive branch has acted outside its bounds. No, an investigator isn't punished for the overbroad evidence collection, but they are embarrassed by having a criminal get off due to their sloppiness. We have an open process for checking the work of human investigators in this country; Use it.

It isn't as if the government just takes that evidence and unilaterally decides to blow people up. We have due process in this country; Use it.

/s

-----


This would be a reasonable view if there weren't a systematic campaign by the NSA, FBI, and DEA to lie to judges and prosecutors about the actual source of underlying evidence.

https://www.eff.org/deeplinks/2013/08/dea-and-nsa-team-intel...

The EFF calls it Intelligence Laundering. The DEA calls it parallel construction. Either way it is sinister and immoral and a court hasn't had a chance to rule on it precisely because it is very difficult for defendants to prove that both the prosecutor and judge were lied to.

-----


I don't actually hold the view expressed. I've just been trying (and failing) to find a compelling way to articulate the following point: "The FISA court's warrant system is flawed because the validity of its warrants or their execution are never checked because the warrants aren't actually used to bring criminal cases."

So, I thought I'd try sarcasm. But I couldn't come up with a concise way to address the fact that parallel construction means that their info actually is used in criminal cases. Oh well, back to the drawing board...

-----


It worked for me. It read like obvious fiction as far as to what I know about the NSA and the last couple of administrations.

-----


I believe it has been reported that the FBI lies about the sources used in investigations so that defendants never find out about the true sources that led to their prosecution. Your legal right to challenge sources of evidence is useless in the face of a corrupt government whose primary goal is to hide those sources from the public.

-----


But you can always ask the FBI to prove that their sources are legitimate. They say the sources are legitimate, but you say no, and it's their word versus your word, right?

In unrelated news, the http://mayday.us campaign still has two days left.

-----


And meanwhile, years of your life will be wasted sitting in prison awaiting challenges, all while you're threatened with decades in prison for evidence that the government should never have had access to. Not to mention the legal fees if you can't get pro-bono coverage on your case.

It's no wonder so many plea out to a lesser (but certain) sentence when given the choice.

-----


It isn't as if the government just takes that evidence and unilaterally decides to blow people up.

Well, except when they do, in Afghanistan or Yemen, say.

-----


Yes, I am aware of that. I am also aware that the NSA does not bring criminal cases even if it does give data to the FBI.

-----


I feel quite ambiguous about these discriminating techniques. For example, it is okay for us to give females / older people lower insurance rates because that's what the statistics say. Likewise, it's likely that people who search for privacy-enhancing software are more likely to engage in "subversive" activity. So it's hard for me to determine which kind of discrimination is justified and which not.

-----


Did you mean ambivalent, not ambiguous?

-----


"It also records details about visits to a popular internet journal for Linux operating system users called "the Linux Journal - the Original Magazine of the Linux Community", and calls it an "extremist forum"."

WTF? I guess I am on a list. Who knew being an extremist was so easy?

-----


No, it doesn't say that Linux Journal itself is an ‘extremist forum’. It says that TAILs is “advocated by extremists on extremist forums”, and includes Linux Journal as a source of information about TAILs, neither of which seems surprising.

-----


Yes, it says exactly that - hence the quotation marks in my post.

Fourth bullet point from the top, in case you wish to check again.

-----


The article, which presumably was written by the authors on its byline rather than by the NSA, says that. The actual config file, linked from the article, at http://daserste.ndr.de/panorama/xkeyscorerules100.txt says:

  /*
  These variables define terms and websites relating to the TAILs (The Amnesic
  Incognito Live System) software program, a comsec mechanism advocated by
  extremists on extremist forums.
  */
Linux Journal is listed there, as a ‘website relating to TAILS’, not as an ‘extremist forum’.
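
To make the distinction concrete: the file defines a list of TAILS-related search terms and a separate list of TAILS-related websites, and a "fingerprint" fires when a session matches either. Here is a rough, purely illustrative Python sketch of that structure (the real rules use XKeyscore's own rule language, and the exact term and URL lists are in the linked file, not here):

  # Toy illustration only - not the real XKeyscore rule syntax or its exact lists.
  # The idea: TAILS-related search terms are paired with TAILS-related websites,
  # and a fingerprint fires when either matches.
  TAILS_TERMS = {"tails", "amnesic incognito live system"}   # illustrative terms
  TAILS_WEBSITES = {"tails.boum.org", "linuxjournal.com"}    # "related websites"

  def tails_fingerprint(search_query: str, visited_url: str) -> bool:
      """Fires on a TAILS-related search or a visit to a listed related site."""
      query_hit = any(term in search_query.lower() for term in TAILS_TERMS)
      url_hit = any(site in visited_url.lower() for site in TAILS_WEBSITES)
      return query_hit or url_hit

  # Visiting Linux Journal trips the fingerprint because the site is on the
  # "websites relating to TAILS" list - not because it is itself labelled
  # an "extremist forum".
  print(tails_fingerprint("", "http://www.linuxjournal.com/content/tails"))  # True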

Journalists gotta journalize.

-----


I write for Linux Journal. Imagine how I feel! I had no idea that I was participating in subversive activities.

-----


Proud?

-----


I'm certainly proud to be associated with LJ, and to be writing for them.

I'm also willing to believe that hackers who want to use encryption and other privacy-oriented technologies use and read about open-source technologies, although my guess is that this includes nearly all serious security researchers, experts, and implementers.

That said, to claim that people who read LJ are extremists, or that the magazine is something of an "extremist forum," misses the mark in so many ways.

-----


And the top keyword on their watch list is "de-fund".

-----


We begin therefore where they are determined not to end, with the question whether any form of democratic self-government, anywhere, is consistent with the kind of massive, pervasive, surveillance into which the United States government has led not only us but the world.

This should not actually be a complicated inquiry.

http://www.theguardian.com/technology/2014/may/27/-sp-privac...

-----


Did you seriously think news.ycombinator.com doesn't increase your score and your susceptibility to having your computing devices hacked into? And puts you on a very interesting NSA/CIA/Letter-Combo/For-Your-Safety list?

Look at Ukraine. War just pops up. I wonder which list they will go by first.

-----


What I gleaned most from the article(s) is that it's becoming increasingly important for all of us in the tech community to take a stand ourselves, alongside Tor, to promote online anonymity in our companies (and possibly even think about supporting the Tor Project itself in some way).

-----


That is why you should join the Pirate Party! We are a targeted group - we must organize.

-----


Unfortunately, in Germany the Piratenpartei have a few too many undesirable links to the NPD [1] (i.e. Neo-Nazis) for my liking.

Until they clean house and stamp out the far right, they'll have a problem attracting new voters.

[1] http://www.sueddeutsche.de/bayern/piratenpartei-und-rechte-u...

-----


This is a two-year-old article - as far as I know, the German Pirates are now rather leftist (actually too leftist for my liking - but as a whole the pirate movement is rather balanced between the two poles).

The Pirate Party, as a new and mostly undefined movement, attracted all kinds of freaks - but it can only work as a movement of those who understand how the Internet can be used in politics, both the dangers and the potential for good, and who value the freedom and openness that were associated with the early net.

-----


Changed the url from http://boingboing.net/2014/07/03/if-you-read-boing-boing-the..., which points to this.

There were two versions of this story on the front page. This thread has the fuller discussion, the other the original source. In such cases we usually merge them by reassigning the url and burying the other thread.

-----


Raw milk distributors.

-----


Wait, what? Are you being facetious? This is so oddly specific.

-----


It's a conservative meme that raw milk producers are being unfairly persecuted; in those circles it's supposed to be a paradigm case of the overly intrusive nanny state.

In reality, there is hard epidemiological data showing that selling raw milk (edit: e.g. through the normal store channels) can lead to serious harm including deaths. So the FDA bans it for interstate sale, but it's up to each state to decide how to regulate in-state sales. Just like any other food safety issue.

The NSA is extremely unlikely to be involved in enforcing regulations against raw milk in reality, but in the mind of the conservative conspiracy theorist it's all of one totalitarian piece.

-----


> selling raw milk ... can lead to serious harm including deaths.

Please STOP spreading misinformation. The only two deaths from raw milk in the last 20 years were traced back to bad queso fresco. In fact, over the same time period, there were more deaths attributed to pasteurized liquid milk than to raw liquid milk[1].

It's amazing what 30 seconds of Googling can do.

[1] http://www.realrawmilkfacts.com/raw-milk-news/story/outbreak...

-----


It's pretty obvious it can cause deaths, since raw milk causes listeriosis disproportionately and the risks from that (including death) are very well known.

Even your quoted web page lists 2 deaths from raw milk products and 3 deaths from pasteurized milk products. Considering the relative rarity of raw milk product consumption, that's a pretty obvious sign.
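
To make the rarity point concrete (the consumption share below is openly hypothetical; the death counts are the ones from the page cited above, so treat this as an illustration of the arithmetic, not as epidemiology):

  # Why small absolute counts still matter: scale by how many people are exposed.
  # The 1% raw-milk consumption share is made up purely for illustration.
  raw_milk_deaths = 2       # from the cited page, raw milk products
  pasteurized_deaths = 3    # from the cited page, pasteurized milk products
  raw_milk_share = 0.01     # hypothetical share of milk drinkers who drink raw

  raw_rate = raw_milk_deaths / raw_milk_share
  pasteurized_rate = pasteurized_deaths / (1 - raw_milk_share)
  print(f"per-consumer risk ratio, raw vs pasteurized: {raw_rate / pasteurized_rate:.0f}x")
  # ~66x with these made-up shares - the point is only that per-consumer risk
  # can be far higher even when the absolute counts look similar.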

Arguing that the contamination isn't significant since it's specific to one milk product doesn't pass muster. With such a small sample you can't deduce anything about how the risk is distributed across types of milk products.

-----


> specific to one milk product doesn't pass muster

No it's not obvious. That's the point. The CDC has admitted those deaths were caused by a product (queso fresco) that is commonly contaminated after production. There are ZERO deaths attributed to consuming raw liquid milk.

> such a small sample you can't deduce anything

Apparently all data from 1998-2011 on all reported illness and deaths from raw milk products is too small for Chicken Little.

And if this data set is too "small" why are the conclusions drawn by the CDC ("raw milk is deadly!") valid? Shouldn't the paucity of data preclude judgement one way or the other?

-----


Raw Milk is harmful.

> Shouldn't the paucity of data preclude judgement one way or the other?

There is no paucity of data. There are very small numbers of people who drink raw milk. And thus there are small numbers of people harmed by raw milk. But it's pretty clear that raw milk is considerably riskier than pasteurised milk.

Whether adults should be allowed to make stupid choices is another topic. I'd suggest that adults should not be allowed to inflict those stupid choices onto children - who are going to be at even greater risk from harm.

You keep talking about death. Having to have kidneys transplanted because E. coli has destroyed them is not death, but I hope you agree it's a severe consequence of eating food.

http://www.scientificamerican.com/article/raw-milk-debate/

http://www.cdc.gov/mmwr/preview/mmwrhtml/mm5608a3.htm

http://www.motherjones.com/environment/2012/09/is-raw-milk-s...

> The Centers for Disease Control and Prevention (CDC) reports that of 239 hospitalizations caused by tainted dairy products from 1993 through 2006, 202 involved raw milk or raw-milk cheese. Nearly two-thirds of the patients were younger than 20. "Parents go to raw milk because they hear it's good for kids' allergies," says Michele Jay-Russell, a veterinarian and food safety specialist at the University of California-Davis who has studied the outbreaks. But children's developing immune systems are more vulnerable than those of adults. "They end up sickening their kids," Jay-Russell adds.

-----


"Harmful" is a meaningless term and contributes nothing to the discussion. Cars are harmful. Alcohol is harmful. Freedom is harmful. So what's your fucking point?

I bring up death because that's the canard trotted out by raw milk haters. And it doesn't happen with any appreciable frequency despite the large numbers of people consuming raw milk.

I'm not disagreeing that both raw and pasteurized milk can potentially cause serious illness; however, I do not believe the numbers are large enough to be cause for concern or excessive regulation by control freaks who need to dictate what people put in their bodies. Perhaps you disagree and that's fine.

-----


My fucking point is a very simple point about risk. There are no rewards to drinking raw milk, but there is a high risk of significant harm.

The ratio of people who drink raw milk to those who suffer severe harm from it is much worse than the equivalent ratio for cars, alcohol, or freedom.

> control freaks who need to dictate what people put in their bodies.

Do you agree that parents should not feed their children a dangerous product that has no benefit? Or is that a bit of control freakery that you don't care about because bias?

-----


You are being a troll. You don't know what you're talking about [1]. You can do the math to figure out that you're wrong; I already have. If you can contribute anything remotely productive I will respond; otherwise, have fun building regulation castles in the sky.

[1] http://en.wikipedia.org/wiki/List_of_countries_by_traffic-re...

-----


It's probably misplaced to focus only on the NSA in this respect, but if we talk about government electronic surveillance capabilities generally, they've been expanding through many agencies and parts of government. The ACLU's recent focus on local police use of IMSI catchers is just one example; they started out as super-secret high-tech spy stuff, and now local cops think they're super-awesome and are afraid they may have to give them up if word gets out and the courts or legislators start taking a closer look.

Electronic surveillance used to be more stigmatized in some ways, but it's becoming more culturally normalized as a basic government tool (at least in the culture of government agencies -- I hope not as much elsewhere). So you see it used in more and more contexts.

I'm totally unfamiliar with the raw milk regulations, but I think that people who are concerned about them could reasonably worry that electronic communications surveillance will be used to enforce them in the future. Likely not by NSA itself, but perhaps through something that's in part technological trickle-down from NSA development or procurement.

-----


> are afraid they may have to give them up if word gets out and the courts or legislators start taking a closer look

Seems more like it was the DOJ that was attaching strings to access to the devices.

-----


> In reality, there is hard epidemiological data showing that selling raw milk (edit: e.g. through the normal store channels) can lead to serious harm including deaths.

I'd love to see the evidence, and see it compared to other food sources.

I grew up in India. There all we got was raw milk from the cowherd; in fact, even today, my parents send the helper to get milk in a pail from the cowherd. It's always been raw milk, warm and fresh from the udder. And the first thing they do is to boil it.

If I were to conjecture, I'd say the "no raw milk" diktat forces farmers to go to big distribution companies with the requisite facilities for pasteurization.

-----


> It's always been raw milk, warm and fresh from the udder. And the first thing they do is to boil it.

Two things. Firstly, it's not raw if they boil it. Most store-bought milk has gone through two processes: pasteurisation and homogenisation. Pasteurisation is simply heating the milk; if your family boiled it before drinking, they actually heated the milk more than commercial pasteurisation does. Normally pasteurised milk is heated to only 72 degrees Celsius, for only 15 seconds. Homogenisation is essentially forcing the milk through filters that break up the globs of fat. Only pasteurisation is necessary for food safety.

And secondly, pasteurisation is most necessary if you intend to store the milk. If, as you say, it's "warm and fresh from the udder", there's little risk from drinking raw milk.

The parent of your post specifically said selling raw milk through the normal store channels. The issue is not raw milk, but selling raw milk, which, once you factor in storage, transport, and the consumer storing it, means plenty of time for massive bacterial growth. As I'm sure you know, even with normal pasteurisation milk spoils relatively quickly.

> If I were to conjecture, it's that the "no raw milk" diktat forces farmers to go to big distribution companies with the requisite facilities for pasteurization.

Health authorities first started to push for pasteurisation after its extensive success in massively reducing illnesses - and deaths - due to spoiled milk.

-----


Boiling milk pasteurizes it (and then some). It's no longer raw milk if you've boiled it.

-----


Boiling milk? No thanks, boiled milk tastes funny.

You should remember that not everyone lives in the same hot climate as you - elsewhere milk generally doesn't go bad immediately - and that there are plenty of people in cooler climates with stomachs that can usually handle milk without a problem.

-----


> Boiling milk? No thanks, boiled milk tastes funny.

Boiled milk, yes. But unless you get milk straight from a farm, it's likely pasteurised: heated to 72 degrees Celsius for 15 seconds. [EDIT: I didn't realise how many places allow sales of unpasteurised milk; yikes - I'll be careful about reading labels next time I'm travelling]

> and that there are plenty of people in cooler climates with stomachs that can usually handle milk without a problem

The "stomachs that usually can handle milk without a problem" is entirely unrelated from why we pasteurise milk. Pasteurisation does not affect the lactose content in the milk, and that, combined with whether or not your genes makes you lactose intolerant or not is what determines whether or not you handle milk well.

-----


I totally agree with your points when it comes to pasteurized milk - I was commenting on a comment that claimed regulation wasn't necessary because farmers would boil the milk anyhow, which is simply not the case.

-----


I would love to see the data with a comparison to a baseline too. And it is an example of a nanny state (the FDA is literally saying "we have to protect you from this") rather than acting as a licensed-milk-producer-seal issuer.

It's the same nanny-state issue as when the FDA shut down more beneficial AIDS treatments in the '80s and '90s, when the only drug on the market that the FDA approved of (AZT) was essentially toxic and killed about the same number of people as it "helped". Why should the FDA decide what goes into the bodies of supposedly "free" people? They should only act to say, "This is the only type of drug or milk the FDA approves of."

-----


You could take a look at the Centers for Disease Control and Prevention's page about the issue, which includes a PDF with a detailed breakdown of disease outbreaks linked to raw milk:

http://www.cdc.gov/foodsafety/rawmilk/raw-milk-index.html

It's not particularly favorable to raw milk.

> They should only act to say, "This is the only type of drugs or milk the FDA approves of"

The same CDC report mentioned above specifically addresses labelling, and points out that the numbers show that labelling does not have a significant effect.

If it were only your own body you put at risk, you might have a point, but this also includes parents putting their children at substantial risk, and people putting others at risk whenever they serve non-pasteurised dairy products to people who are not themselves aware of the risk.

-----


I replied to the parent above but wanted to point out that you, too, are 'udder'ly mistaken.

http://www.realrawmilkfacts.com/raw-milk-news/story/outbreak...

-----


Anyone can easily buy raw milk here, and so far no diseases have resulted. That being said, raw milk production and transportation are heavily regulated and checked by inspectors, and the milk is regularly tested for possible infections.

-----


http://boingboing.net/2009/12/08/farm-family-put-unde.html

Edit: The title and link of this HN article have changed. The link changed from a BoingBoing article to the original German article, and the headline used to be a question ("Who is the NSA spying on..." or similar) that gave the GP comment more context.

-----


Warning:

The first sentence goes "If you read Boing Boing, the NSA considers you a target for deep surveillance".

So, if you find this interesting, maybe you shouldn't read it.

-----


https://en.wikipedia.org/wiki/First_they_came_...

First they came for the Socialists, and I did not speak out— Because I was not a Socialist.

Then they came for the Trade Unionists, and I did not speak out— Because I was not a Trade Unionist.

Then they came for the Jews, and I did not speak out— Because I was not a Jew.

Then they came for me—and there was no one left to speak for me.

-----


You're warning people about the possible consequences of reading BoingBoing on a site called Hacker News, where actual hackers and technorati hang out and complain about the American government all the time.

-----


Or maybe you should read what you want, and fight for your right to do so.

-----


OK, just to clarify - I wanted to point out that the site basically delivers a "once you're reading this, you're getting tracked" paradox warning in a (IMO) funny way.

I don't agree at all with these practices.

-----


"I'm not a terrorist or a criminal so I am fine with forfeiting my rights to privacy and freedom so they can keep me safe from the imminent danger of terrorists"

Heard that one before.

-----


If you are here, you can be damn sure you were already on their list a long time ago.

-----



