That's how a police state works. (XKeyScore += 5)
My mother was involved in civil rights, so she has a file. It's fine that I have a file too. Hopefully I'll be gone before they start going door to door.
This is where the "mass surveillance" and "let's monitor everyone just in case they might do something bad" thinking inevitably ends up.
Considering this is PRECISELY how spam filtering works, it doesn't seem entirely irrational.
Much like spam filtering, it would all come down to dialing in your filters and picking a good threshold.
Hell, if the system was good enough, it could actually improve freedoms. We currently arrest many innocent people as part of the legal process (who are later exonerated). What if our "arrest filter" outperformed the current system, in terms of percentage of innocent people arrested? It doesn't have to be perfect to be better.
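The "dialing in your filters" point is the same threshold tradeoff spam filters face. A toy sketch, with entirely fabricated scores and labels, showing how raising the threshold trades false positives against misses:

```python
# Hypothetical illustration: picking an "arrest score" threshold, exactly
# like tuning a spam filter. All scores and labels here are invented.

def rates(scored_population, threshold):
    """False-positive and true-positive rates at a given score threshold."""
    flagged = [(s, guilty) for s, guilty in scored_population if s >= threshold]
    innocents = sum(1 for _, g in scored_population if not g)
    guilty_total = sum(1 for _, g in scored_population if g)
    fp = sum(1 for _, g in flagged if not g)
    tp = sum(1 for _, g in flagged if g)
    return fp / innocents, tp / guilty_total

# (score, actually_guilty) pairs -- fabricated data for the sketch
population = [(0.9, True), (0.8, False), (0.7, True), (0.6, False),
              (0.4, False), (0.3, False), (0.2, False), (0.1, False)]

for t in (0.5, 0.75):
    fpr, tpr = rates(population, t)
    print(f"threshold={t}: false-positive rate {fpr:.2f}, true-positive rate {tpr:.2f}")
```

Raising the threshold from 0.5 to 0.75 halves the catch rate while cutting false positives, which is exactly the "better than the current system?" comparison being proposed.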
It will be another system that grants permission at the whims of the department, one that absolves individual officers of blame, punishment, and consequences for the bias and abuse they'll continue to revel in.
No different than a virus or spam scanner, really. I trust my scanners. Sometimes they are wrong, and I know they are, and I bypass them. But I know they are right most of the time.
You seem to have the notion though that officers do not arrest people to "catch the bad guy". It sounds like you are saying you believe most officers do not actually care if the arrestee is guilty (and will thus ignore the filter always), and merely arrest for kicks/pleasure/vengeance? I do not believe the majority are like that.
You seem to have the notion that I believe what you think I believe. No, I do not believe that officers are arresting people for kicks. I do believe that human bias, power, career building/justification and 'gut feelings' about who might be a bad guy perpetuate unequal application of the law and abuse in a policed society.
I do believe that if a filter sought to mitigate those very real problems, it would be fought by the system it was meant to augment. I am granting it the very forgiving assumption that those motivations and skews in perception aren't unknowingly built into the filter by its human creators, and that biased information isn't fed into the filter. If either of those assumptions failed, it would be welcomed with open arms by police departments nationwide.
I do believe that such a filter will be perceived as broken or inefficient if it doesn't confirm the officers' preconceived notions about who is a criminal or worthy of arrest.
Unfortunately, I do not believe that the problem is 'the bad guys are getting away, and we need a system to find them'. It's rather the opposite, 'innocent people are having their lives interrupted and sometimes ruined because officers think they look suspicious for reasons and need to justify their paychecks; we need a system to mitigate that'.
I do not think a system that blinks a little arrest light if a suspect is Muslim and uses Google has any place in society, no matter how many times you cry 'b-but machine learning!! Spam filters!! Bayes!!'
Your filter would cement this period's problems, in the eyes of the public, into an infallible machine's instructions, and it would enable abuse without accountability, because you could always point to the machine and say you were just following its judgment.
Spam filtering also works in a very different context - the spam-to-nonspam ratio is something like 90% spam and 10% nonspam, which means that there is lots and lots of spam to filter out; if an important email slips through, people are bound to notice and either adjust their spam filter or do something about it.
In the other setting, you have 99.99% or more of people who have nothing to do with terrorism or criminal activities, and maybe one or two dozen (among tens of millions) who you are actually targeting. First, erroneously targeting a substantial chunk of your non-interesting population ties up resources: you're spending your time investigating people who are not terrorists. But since the job is difficult anyway, at least you seem like you're doing something with all the money you receive, and never mind if some of the data is used for industrial espionage or for hunting people that only poultry farmers and fracking magnates would call terrorists. And if you miss one of the two dozen real targets, well, they won't do anything harmful this year or the next, because they also have to fear regular law enforcement, and when they do act, it'll probably come at a moment that's suitable for you to ask for more money.
tl;dr: Because we don't have a large sample of actual terrorists on hand, it's hard to evaluate activities like the NSA's, which would however be desirable since we're giving large chunks of money to them that could be fruitfully used in making everyone safer if used to fight actual crime and not some fuzzy notion of terrorism.
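The base-rate problem described above can be made concrete with Bayes' theorem. A sketch with invented but roughly order-of-magnitude numbers (two dozen real targets among tens of millions of people):

```python
# Illustrative base-rate arithmetic (all numbers invented): even a very
# accurate filter drowns in false positives when the condition is rare.

def posterior(prior, sensitivity, false_positive_rate):
    """P(actual threat | flagged), by Bayes' theorem."""
    p_flag = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_flag

# 24 real targets among 40 million people, a 99%-sensitive filter,
# and a very optimistic 0.1% false-positive rate.
prior = 24 / 40_000_000
p = posterior(prior, sensitivity=0.99, false_positive_rate=0.001)
print(f"P(threat | flagged) = {p:.6f}")           # well under 0.1%
print(f"false alarms: {0.001 * 40_000_000:,.0f}")  # ~40,000 innocent people flagged
```

Under these assumed numbers, fewer than 1 in 1,600 people the filter flags is an actual target, while tens of thousands of innocents are flagged: the spam-filter analogy breaks down precisely because the priors differ by many orders of magnitude.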
> "It doesn't have to be perfect to be better." (Yes it does...)
The problem used to be approached by presuming innocence (demanding perfection), rather than with a willingness to accept false positives (20 years ago spam filters weren't available as an analogy...). It is always possible to wrongfully judge someone, but it was never a valid or acceptable outcome ("It is better that ten guilty persons escape than that one innocent suffer" - Blackstone). We accept that spam filters give false positives (not to mention that one person's spam is another person's opportunity), so I think comparing the justice system to detecting spam is a mistake, and moreover that a goal of "prevention" itself is a red herring.
The goal of prevention encourages us to accept lower thresholds of guilt probability, and that is wrong. In other words, if prevention is an end, then it is worth deliberately (rather than accidentally) restricting innocent people on the basis of virtually any nonzero probability of guilt. 80% "guilty" by association (for using Tor for example), 45%, etc, would all be enough to justify legal action - and the thresholds would certainly depend on whoever is in power and has access to the database that week. This is a very different model than presuming innocence, and having not only a goal of 0 false-positives, but also providing satisfaction when the justice system is in error.
I think today we are mostly talking around the fact that a crime has to have been committed in order for it to deserve to be punished, and that, for that reason, prevention cannot be a valid goal in itself (but it's nice when it happens).
Rationalizing surveillance as a tool to "prevent" rather than to justly punish wrongdoers (which centralized surveillance does not do because it is centrally operated, due to the conflict of interest; everyone owning a camcorder on the other hand...) implies that the central database needs to go IMHO (and that individuals need to be empowered instead).
I.e., make the arrest based on the filter, then run the trial in the same old jury-of-your-peers.
Convictions should be false-positive-free. But our system would not work if arrests also needed to be 100% false-positive-free.
I'm also not advocating punishment for crimes that have not yet been committed. Rather, think of it as looking for flags for crimes that have already been committed or are in progress. For example, there are all sorts of small flags thrown by embezzlement or salami-slicing that, put together, identify the operation.
LOL, jury of peers. You mean the jury that is left after the prosecutors and defenders screen out the most competent jurors. The same jurors that typically believe you are guilty because you've been arrested. Have you been in the typical criminal courtroom lately? Any public defender will tell you that going to trial in cuffs and jailhouse orange will almost certainly get you a conviction.
There are lots of things that need to be fixed in the justice system. Let's not give them more tools to make it worse.
Unfortunately, the database cannot be trusted by virtue of its centralized nature and administration (even if that centralization is justifiable, for example to protect everyone's privacy). The hardware may be objective, but people are not - people lie, cheat, and steal when they can get away with it - and there are simply too few separate and competing interests to hold the small number of people with access to the database and tools accountable for their inevitably selective use of them, or to ensure their objective application. We have seen centralized data collected and used for private interests (and books censored, and guns regulated, and...) in the past, be it by fascist governments or police protectionism (lying under oath; evidence tampering; racial "profiling"), economic fraud, etc. It is human nature to use one's control to one's advantage, and it is simply too tempting for police to shoot first (detain, seize, etc), especially when it is in their interest, and ask questions later (check the database for cause; use "parallel construction"; incriminating speech taken out of context).
It would be worse if that extended all the way to conviction, but it presents the same kind of problem for arrests, detainments, and searches, etc, since it is effectively the word of the administrators (who we trust not to abuse the data and tools) against the person arrested. The more centralized the data and tools become, the less we can trust them to be applied objectively without accountability.
Unfortunately, there are no checks and balances on absolute power (centralization), and so we cannot allow centralization to continue indefinitely. Absolute power corrupts absolutely, and it is my "thesis" that arrests are not a suitable application of these tools. The risk is too great. Police already have a high level of responsibility (the authority, training, and tools/weapons to exert control by force) and what feels like decreasing accountability (because the kids, because the drugs, because I said so, because I can, because of cronyism, and because wealthy people don't like hearing criticism), and since they are nonetheless "only human", I don't recommend giving them more.
Granted, you are merely describing a potentially objective algorithm, but my point is that the objectivity of any given tool is moot given the human element. Guns don't kill people, people do, and will continue to do so even with checks and balances (like laws against murder; if prevention were the goal, we fail daily). It is only the distribution of accountability (peer juries, private key sharing, democratic voting, citizen groups, etc) that keeps such roles in check.
Anyways, thanks for the opportunity to flesh my thoughts out more.
As for the objectivity of feeding the filter data, I envision something completely automatic. No selective entry for this or that suspicious person: the filter is fed a database of all people, and perhaps monitors the internet's traffic on its own. Maybe ACH traffic too. Financial crime could be this system's biggest win; computers are far better suited than humans to uncovering financial crime.
Basically, when it's big enough and sophisticated enough and automated enough that no one person can fully understand it, it becomes significantly harder to pervert. And, as I mentioned before, it needn't be perfect- our current system is pervertable too (see: papers please, racial profiling, etc), so this one would just need to be less pervertable...
Machines can't be racist, so the arrest score going after lots of poor, Black men must mean that there's something to it.
If a learning Bayesian filter targets a certain demographic, there probably IS something to it.
That really would be amusing/pleasing, if all this work we've spent developing spam filters became the lead-up to an accurate, learning crime filter. Perhaps the fork to spamassassin will be known as crimeassassin?
I'm pretty sure both Bayes and Laplace would not agree with the categorization of Bayesian probability as some sort of panacea for determining truth in criminal matters.
Trials & convictions is a different matter, more suited to truth-seeking.
I would be seriously concerned about a Bayesian filter being applied as the sole reason for arrests...
I was thinking more in terms of automated drone strikes, but yeah for the facade of democracy arrests may be the way to go for now.
This of course only matters if you really care about the money. I assume that the large black budgets of the CIA and NSA are not really a concern to those that matter.
Ideally you spend more every year. That's the usual policy of government agencies, otherwise the following year you get a budget cut.
(Disclaimer: I'm not insinuating that YC supports this)
Or maybe welcome to Oceania of 1984, having to deal with Thinkpol:
"The Thought Police (thinkpol in Newspeak) are the secret police of the novel Nineteen Eighty-Four. It is their job to uncover and punish thoughtcrime."
Yup, we don't have thinkpol yet (and hopefully we never will), but we do have a pretty damn good analogy to it: https://en.wikipedia.org/wiki/Predictive_policing
This is just how life is when databases are ubiquitous. After I bought a house and my name began appearing in property tax databases I started getting lots of (paper-based) commercial spam for things homeowners are more likely to buy, like different sorts of insurance, refinancing, satellite TV service yadda yadda.
When it emerged after 9/11 that various government agencies had failed to 'join the dots' by not sharing intelligence information effectively, there was a lot of public support for better-coordinated and more proactive intelligence gathering, notwithstanding warnings about the risk to civil liberties. So collectively we got what we asked for. The lack of public outrage or mass demonstrations against the NSA strongly suggests that a large majority are OK with this state of affairs, especially since they're used to data collection in a commercial context.
A little bit of war, however, will probably dull any of that.
Why care about the rest of your family that will still be here to endure the rest... or the rest of us.
You have no kids, right?
If I were planning to stay here for the rest of my life, I wouldn't risk political posting on the internet, donating, or subscribing. You'll get more help from me as long as I can rationalize it as only worsening a temporary situation.
That's just stupid.
This is not complicated to build, it is simple to build, and the only logical way of accomplishing what the government claims that they're attempting to accomplish.
My point is, you should not change your behaviour to be a lesser target to the NSA. You'd just quickly become super paranoid. Instead you should live your life exactly the same, and if the NSA tries to make your life bad, that's the moment when you call them out on it - after all, it's the NSA that is behaving out of line. So they should change, not you.
Oh, and http://defundthensa.com/ , sure.
The only reason that I suspect that the government is still terrible at this is because they have to rely on government contractors to implement it. If they're intentionally funding startups that happen to be developing tools in the spaces they need, though, it's only a matter of a (short) time until the systems they have are settled and dependable, and they can concentrate on innovation.
>it's the NSA that is behaving out of line. So they should change, not you.
This is also silly. That's like people who walk into speeding traffic because they have the right-of-way. You won't get to hear about how the trial turns out from your grave.
Well, in contrast to you, I plan to enjoy my life instead of wasting it worrying about some possible bad actor spying on me, which, from what I read, is how you spend yours.
Sure, fight the NSA by engaging a little bit politically, and buying the right things. But apart from that, don't worry so much, man.
> You won't get to hear about how the trial turns out from your grave
that's equally projecting something onto me, so... that's not a great way to have a productive discussion either.
> This is also silly. That's like people who walk into speeding traffic because they have the right-of-way. They won't get to hear about how the trial turns out from their grave.
Let me guess: you are a straight white dude with a comfortable income.
They don't need to do any cross referencing. Your parents, cousins, friends, former colleagues and classmates, etc, will sell you out for a "like" in a heartbeat.
> and compare them to "what people like me should typically do".
Ditto. I hate every time people in my acquaintance network email me with "since you cannot be found like everybody else, I'm sending you this thing that you probably don't care about in the first place. After all we are still friends, right?"
Not finding something when you should find something is suspicious.
It's quiet. TOO quiet.
After 10 years of pervasive surveillance and not being able to catch a single terrorist I can't believe the NSA is trying to rationalize it as being a good thing. It's too bad the bill to defund the NSA didn't pass: http://defundthensa.com/
The NSA does an awful lot of hiding things. It might therefore be reasonable to conclude that it is bad and should, at the very least have its funding cut.
Anecdotally, this reminds me of the guy who was key in setting up the British porn filter being arrested for child pornography: http://www.telegraph.co.uk/news/politics/david-cameron/10675...
No, that's not the reasoning. People that do bad things try to hide them. Therefore, a good first filter to catch bad people is to target those who hide things. They can narrow the search field afterwards.
The NSA was only given such pervasive power to catch terrorists. The fact that privacy seekers are more likely to be "bad actors" is moot unless that means terrorists, because regular "bad actors" are supposed to be innocent until proven guilty, handled by police/FBI, etc. The NSA is only allowed to work with essentially no due process because it's for catching "terrorists". Interestingly, now that terrorists know about the NSA, they no doubt will simply not use the internet (or phones) at all, thus making it so the NSA can't catch a terrorist by any of its methods.
I definitely expect all terrorists are extremely careful about internet activity now that they know the NSA is so invasive, thus making the NSA's actions even less defensible (not that I thought they ever were).
..aaand the ones who do get "caught" get away with a slap on the wrist before you can say "this is a joke, I just cannot believe the hypocrisy of this, double standards much?" (because let's be realistic, it's not instant)
Is it really a surprise people in power wish to remain in power?
Except, you know, this guy named Osama Bin Laden.
Well, that idea isn't exactly new, unfortunately. Remember Eric Schmidt: "If you have something that you don't want anyone to know, maybe you shouldn't be doing it in the first place."
Whereas he bought himself a specially isolated penthouse in order to live his open marriage in perfect privacy.
After 10 years of pervasive surveillance and not being able to catch a single terrorist
Questionable. I wouldn't expect the NSA to put out a press release every time someone is nabbed on the back of their information-gathering, that wouldn't be very smart.
While your statement is true, it doesn't really give any justification on targeted spying. That is, unless of course, we consider the desire to hide things to be a signal of being a bad actor. Thinking along those lines is a very slippery slope though. The erosion of freedoms is just another side-effect of policies and actions shaped by such thinking.
Pardon the bluntness, but it seems like an a priori conclusion that only people taking pains to hide their data/communications should be targeted. Not because the vast majority of those people aren't innocent (they are), but because that is the only group that contains a subset posing a real & present threat.
So, that ignores your slippery slope argument about personal liberties, which are totally valid. How do you balance national security and personal liberty in this case? That's the million dollar question.
Please let me know if you question my reasoning. I'm purely looking at it as a 2x2 matrix of (highly encrypts personal data, does not ...) x (seeks to harm people/nation interests, does not ...)
So only the people hiding their tracks that seek to harm are the ones to worry about. Those that don't hide their tracks are a lot less likely to be operationally successful</euphemism>.
However, I assume that the 99.95% of people who highly encrypt personal data do not seek to harm anyone, and are collateral damage here.
Constitutional tradeoffs happen all over. Fire in a crowded theater, felons rights to vote, personal rights to own certain weapons, etc. This is another one that needs to be decided very carefully. But I think both sides have very valid concerns.
This is a quote by a judge in a trial that put someone in prison for passing out antiwar fliers. It's good to know the source of our philosophies.
When we accept that "shouting fire in a crowded theater" isn't protected speech, we're backing up an argument that was used to put someone in prison for non-violent anti-state speech. Not token prison either; 6 months. That is not a good thing, and not an acceptable baseline to guide us in the examination of other issues.
"we should be eternally vigilant against attempts to check the expression of opinions that we loathe and believe to be fraught with death"
Back to the other quote: it is a classic and excellent demonstration of one of the greatest tensions in the US Constitution, the balance between individual rights and societal good. It doesn't require any context.
Perhaps I should have used the blunter formulation, also from Justice Holmes:
"The right to swing my fist ends where the other man's nose begins"
"It is better for all the world, if instead of waiting to execute degenerate offspring for crime, or to let them starve for their imbecility, society can prevent those who are manifestly unfit from continuing their kind. The principle that sustains compulsory vaccination is broad enough to cover cutting the Fallopian tubes. [...] Three generations of imbeciles are enough."
Of course this has nothing to do with the NSA/GCHQ/CSEC collecting IP numbers of users of privacy software. Only a police state casts a wide net and then spies on its own citizens to determine their innocence, presuming they are already guilty for having sought out this software in the first place. It's the exact guilt-by-association kind of nonsense every police state throughout history has engaged in.
Buck v. Bell was one of his worst moments, as well as the rest of the country's. Holmes didn't create the eugenics law in VA, and he didn't hold a gun to the head of the other 7 members of the court who voted with him, though. Eugenics is disgusting and is rightfully in the dustbin of history along with debtors prisons, lobotomization, slavery, and a number of other common practices we currently & correctly view as backwards and evil.
He stands heads and shoulders above the idiot "originalists" polluting the bench right now.
edit: You might as well say that Richard Feynman was a reclusive and socially awkward man. It is not a matter of opinion; it's just false - plainly incorrect. OWH routinely appears on top-10 lists of the most influential justices of all time. He remains one of the most widely cited in other Supreme Court decisions. But to pin him down on a couple of specific cases while overlooking his enormous influence on contemporary judicial philosophy is ignorant. Don't believe me? Just spend a minute of your time researching it and you'll see how silly it is.
edit 2: Most of what I found to make sure I hadn't lost my mind is even kinder, considering him the 2nd or 3rd most influential justice, behind Marshall and closely tied with Warren.
"National Security" is a fucking joke; there is nothing that needs to be balanced. Anyone interested in terrorizing others could (after driving across state lines if necessary) walk into a Walmart and walk out with a semi-automatic rifle, walk into the nearest mall, yell their grievances with the country and start shooting people.
We know that this sort of attack is possible in the US because plenty of lone-nutters have more or less done it already in the US. We know that terrorist organizations are receptive to this style of attack because they have carried out this style of attack in other countries (Mumbai in 2008 and Kenya in 2013 are obvious examples). Yet the two have yet to be combined in the US.
The only reason why this hasn't happened in the US is that, contrary to popular belief, there just simply are not many people interested in doing this sort of thing in the US. The notion that terrorists yearning to attack America are around every corner is a myth. Those people are a rounding error.
That would be valid if they could point to any noteworthy success. The fact they can't ["because national security"] pretty much guarantees the program contributes very little real value.
If they had anything to do with something important, say Osama's death, you think they wouldn't trumpet it out as "proof" it works?
I'd say their lack of evidence that it functions is more damning than anything. They can't exactly hide they are doing it post-Snowden. Their one chance to justify funding for new programs that aren't compromised is to say "LOOK HOW SUCCESSFUL WE ARE!!!!" in broad terms.
The fact they cannot do this, and thereby justify larger budgets to Congress, convinces me they know the benefits are negligible.
I really would say that it's a question of risk aversion & utility. Even targeting a whole class of people, the odds of finding someone planning harm are probably astronomical (1M:1? more than that?). To me it comes down to: the negative utility of privacy invasion * the number of people targeted ??? the probability of detecting & thwarting the one malactor, where ??? is an inequality.
Maybe it's good we have risk averse and non risk averse groups, that the balance of power between those groups can change over time as necessary.
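That cost-benefit inequality can be played with numerically. A minimal sketch, with every quantity invented purely for illustration (the units and magnitudes are assumptions, not measurements):

```python
# A sketch of the inequality above with invented numbers: does
# (harm of privacy invasion per person) * (people targeted) outweigh
# (value of one thwarted plot) * (probability of thwarting it)?

def surveillance_net_value(people_targeted, harm_per_person,
                           p_thwart, value_of_thwart):
    """Benefit minus cost, in arbitrary made-up utility units."""
    cost = harm_per_person * people_targeted
    benefit = p_thwart * value_of_thwart
    return benefit - cost

# Even tiny per-person harm dominates at population scale.
net = surveillance_net_value(people_targeted=10_000_000,
                             harm_per_person=1.0,
                             p_thwart=0.5,
                             value_of_thwart=1_000_000)
print("net value:", net)  # negative under these assumed numbers
```

The direction of the "???" inequality obviously depends entirely on the numbers one plugs in, which is the commenter's point: the per-person harm is multiplied across the whole targeted population, while the benefit is discounted by a very small probability.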
XKeyscore is (at a minimum) 6 years old.
You are telling me in 6 years they can't provide the broad strokes of a reasonable number of success stories?
The fact they can't stand up and say "Terrorist plot X was stopped by XKeyscore" from 4-5 years ago pretty much proves it's a failure as far as I am concerned.
> I really would say that it's a question of risk aversion & utility. Even targeting a whole class of people, the odds of finding someone planning harm are probably astronomical (1M:1? more than that?). To me it comes down to: the negative utility of privacy invasion * the number of people targeted ??? the probability of detecting & thwarting the one malactor, where ??? is an inequality.
Given that, with sufficient information, a person can force another person to do quite a few things via blackmail, it's simply too dangerous to trust human beings with this tool. Especially an organization like the NSA, whose only knowledge of Snowden's actions came after he released all of the information publicly.
Imagine if Chrome, Firefox, Safari - all of them - had, alongside incognito mode, a true private mode. Of course, since anonymity also depends on the user's behavior online, other measures are needed to really ensure security and privacy. But making it the default would educate more people about the importance of privacy and, more importantly, make the point that privacy isn't only for criminals, terrorists and wrong-doers, but that "normal", law-abiding citizens also should have the right to be private. And that is paramount for a democracy to work.
Unfortunately, this is key to making strong encryption commonplace. A social graph and real-time communication could be used to make key exchange easy and secure. Open client software is needed to make security verifiable. And the storage and email infrastructure and clients need to make using encryption the default.
All the pieces of a "trust nobody" environment are there, and so are the pieces for making it an easy to use default.
Hopefully, doing this will be required for American service and technology companies to regain trust.
How do you authorize a new device in an "easy and secure" way without simply outsourcing the problem to an intermediary who is then in a position to attack you by authorizing its own devices?
This issue has quite concrete implications for the security and convenience of lots of existing security tools, from GPG to iMessage to Skype to Firefox. They've chosen different approaches but the underlying problem and associated tradeoffs apply to all of them.
On the bright side, there are now a lot of people exploring the space of possibilities for dealing with these tradeoffs.
Just authorize. If you have perfect forward secrecy, as long as you aren't being man-in-the-middled right now, you're safe.
It's better to have all people doing everything encrypted by default than not.
The goal isn't for one individual to be safe against a targeted NSA attack. That's insane - if the NSA wants you specifically, you are screwed; it simply has far too many resources to bring to bear.
The goal is to make it expensive for the big agencies to do pervasive surveillance. If everybody is encrypting all the time, random peon at Three Letter Agency has to get up from his chair and actually authorize a wiretap, get a warrant, etc. At that point, it's not going to happen unless you've actually done something very wrong.
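The perfect forward secrecy mentioned above can be illustrated with a toy ephemeral Diffie-Hellman exchange, which is the mechanism behind it. This is a sketch only: the group parameters here are far too small for real use, and real protocols (TLS 1.3, Signal, OTR) use vetted groups or elliptic curves.

```python
# Toy ephemeral Diffie-Hellman, the mechanism behind forward secrecy.
# The prime below is trivially breakable; it is for illustration only.

import secrets

P = 4294967291  # a small prime (2**32 - 5), NOT a real-world DH group
G = 5

def ephemeral_keypair():
    """Fresh per-session keypair: private exponent and public value."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

# Each session generates fresh keys, used once and then discarded.
a_priv, a_pub = ephemeral_keypair()
b_priv, b_pub = ephemeral_keypair()

shared_a = pow(b_pub, a_priv, P)  # Alice combines Bob's public value
shared_b = pow(a_pub, b_priv, P)  # Bob combines Alice's public value
assert shared_a == shared_b       # both derive the same session secret

# Discarding the private exponents after the session means a recorded
# transcript can't be decrypted later, even if long-term keys leak.
del a_priv, b_priv
```

This is why "just authorize" plus forward secrecy raises the attacker's cost so much: passively recording everything no longer pays off, and each session has to be attacked while it is live.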
WTF? I guess I am on a list. Who knew being an extremist was so easy?
4th bullet point from the top, in case you wish to check again.
These variables define terms and websites relating to the TAILs (The Amnesic
Incognito Live System) software program, a comsec mechanism advocated by
extremists on extremist forums.
Journalists gotta journalize.
I'm also willing to believe that hackers who want to use encryption and other privacy-oriented technologies use and read about open-source technologies. Although my guess is that this includes nearly all serious security researchers, experts, and implementers.
That said, to claim that people who read LJ are extremists, or that the magazine is something of an "extremist forum," misses the mark in so many ways.
Merely searching the web for the privacy-enhancing software tools outlined in the XKeyscore rules causes the NSA to mark and track the IP address of the person doing the search.
Again the media makes it sound like there exists a dragnet on (Google) searches. But this time one of the authors is J. Appelbaum.
So which is it? Terrorist scores based on search engine queries sound fantastically insane to me. But unencrypted traffic is possible to intercept. So perhaps it is something in between: all accessible searches are monitored, and search engines do not cooperate with this directly unless they have to legally comply with a request?
And if it is not possible on the technical level, the NSA will find the people to access the data they want.
The NSA data is collected under warrants issued by a FISA court. So, during a suppression hearing, defense counsel can challenge the validity of the warrant. If their challenge is denied, they can appeal. If their appeal fails, they can petition the Supreme Court. In all these courts, the proceedings are public record and the standard for a warrant can be debated by lawyers and the public alike. We have an open process for checking the work of the humans issuing FISA court warrants; Use it.
Even if the warrant was valid, the NSA might have overstepped its bounds. This can also be challenged when the NSA defends the admissibility of its criminal evidence in a suppression hearing. An independent judiciary can decide if the executive branch has acted outside its bounds. No, an investigator isn't punished for the overbroad evidence collection, but they are embarrassed by having a criminal get off due to their sloppiness. We have an open process for checking the work of human investigators in this country; Use it.
It isn't as if the government just takes that evidence and unilaterally decides to blow people up. We have due process in this country; use it.
The EFF calls it Intelligence Laundering. The DEA calls it parallel construction. Either way it is sinister and immoral and a court hasn't had a chance to rule on it precisely because it is very difficult for defendants to prove that both the prosecutor and judge were lied to.
So, I thought I'd try sarcasm. But I couldn't come up with a concise way to address the fact that parallel construction means their info actually is used in criminal cases. Oh well, back to the drawing board...
In unrelated news, the http://mayday.us campaign still has two days left.
Well, except when they do, in Afghanistan or Yemen, say.
It's no wonder so many plea out to a lesser (but certain) sentence when given the choice.
This should not actually be a complicated inquiry.
Until they clean house and stamp out the far right, they'll have a problem attracting new voters.
The Pirate Party, as a new and mostly undefined movement, attracted all kinds of freaks. But it can only work as a movement of those who understand how the Internet can be used in politics, both the dangers and the potential for good, and who value the freedom and openness associated with the early net.
There were two versions of this story on the front page. This thread has the fuller discussion, the other the original source. In such cases we usually merge them by reassigning the url and burying the other thread.
Look at Ukraine. War just pops up. I wonder which list they will go by first.
Edit: The title and link of this HN article have changed. The link changed from a BoingBoing article to the original German article, and the headline used to be a question ("Who is the NSA spying on..." or similar) that gave the GP comment more context.
In reality, there is hard epidemiological data showing that selling raw milk (edit: e.g. through the normal store channels) can lead to serious harm including deaths. So FDA bans it for interstate sales, but it's up to the state to decide how to regulate in-state sales. Just like any other food safety issue.
NSA is extremely unlikely to be involved in enforcing regulations against raw milk in reality, but in the mind of the conservative conspiracy theorist it's all of one totalitarian piece.
Please STOP spreading misinformation. The only two deaths from raw milk in the last 20 years were traced back to bad queso fresco. In fact, over the same time period, there were more deaths attributed to pasteurized liquid milk than to raw liquid milk.
It's amazing what 30 seconds of Googling can do.
Even your quoted web page lists 2 deaths from raw milk products and 3 deaths from pasteurized milk products. Considering the relative rarity of raw milk product consumption, that's a pretty obvious sign.
Arguing that the contamination isn't significant since it's specific to one milk product doesn't pass muster. With such a small sample you can't deduce anything about how the risk is distributed across types of milk products.
No it's not obvious. That's the point. The CDC has admitted those deaths were caused by a product (queso fresco) that is commonly contaminated after production. There are ZERO deaths attributed to consuming raw liquid milk.
> such a small sample you can't deduce anything
Apparently all data from 1998-2011 on all reported illnesses and deaths from raw milk products is too small a sample for Chicken Little.
And if this data set is too "small" why are the conclusions drawn by the CDC ("raw milk is deadly!") valid? Shouldn't the paucity of data preclude judgement one way or the other?
> Shouldn't the paucity of data preclude judgement one way or the other?
There is no paucity of data. There are very small numbers of people who drink raw milk. And thus there are small numbers of people harmed by raw milk. But it's pretty clear that raw milk is considerably riskier than pasteurised milk.
Whether adults should be allowed to make stupid choices is another topic. I'd suggest that adults should not be allowed to inflict those stupid choices onto children - who are going to be at even greater risk from harm.
You keep talking about death. Having to have your kidneys transplanted because E. coli has destroyed them is not death, but I hope you agree it's a severe consequence of eating food.
> The Centers for Disease Control and Prevention (CDC) reports that of 239 hospitalizations caused by tainted dairy products from 1993 through 2006, 202 involved raw milk or raw-milk cheese. Nearly two-thirds of the patients were younger than 20. "Parents go to raw milk because they hear it's good for kids' allergies," says Michele Jay-Russell, a veterinarian and food safety specialist at the University of California-Davis who has studied the outbreaks. But children's developing immune systems are more vulnerable than those of adults. "They end up sickening their kids," Jay-Russell adds.
I bring up death because that's the canard trotted out by raw milk haters. And it doesn't happen with any appreciable frequency despite the large numbers of people consuming raw milk.
I'm not disagreeing that both raw and pasteurized milk can potentially cause serious illness. However, I do not believe the numbers are large enough to be cause for concern or excessive regulation by control freaks who need to dictate what people put in their bodies. Perhaps you disagree, and that's fine.
The ratio of people drinking raw milk to suffering severe harm from it is much worse than for cars, alcohol, or freedom.
> control freaks who need to dictate what people put in their bodies.
Do you agree that parents should not feed their children a dangerous product that has no benefit? Or is that a bit of control freakery that you don't care about because bias?
Electronic surveillance used to be more stigmatized in some ways, but it's becoming more culturally normalized as a basic government tool (at least in the culture of government agencies -- I hope not as much elsewhere). So you see it used in more and more contexts.
I'm totally unfamiliar with the raw milk regulations, but I think that people who are concerned about them could reasonably worry that electronic communications surveillance will be used to enforce them in the future. Likely not by NSA itself, but perhaps through something that's in part technological trickle-down from NSA development or procurement.
Seems more like it was the DOJ that was placing strings on access to the devices.
I'd love to see the evidence, and see it compared to other food sources.
I grew up in India. There all we got was raw milk from the cowherd; in fact, even today, my parents send the helper to get milk in a pail from the cowherd. It's always been raw milk, warm and fresh from the udder. And the first thing they do is to boil it.
If I were to conjecture, it's that the "no raw milk" diktat forces farmers to go to big distribution companies with the requisite facilities for pasteurization.
Two things: Firstly, it's not raw if they boil it. Most store-bought milk has gone through two processes: pasteurisation and homogenisation. Pasteurisation is simply heating the milk. If your family boiled it before drinking, you've actually heated the milk more than commercial pasteurisation does. Normally pasteurised milk is heated to only 72 degrees Celsius for only 15 seconds. Homogenisation is essentially forcing the milk through filters that break up the globs of fat. Only pasteurisation is necessary for food safety.
And secondly, pasteurisation is most necessary if you intend to store the milk. If, as you say, it's "warm and fresh from the udder", there's little risk from drinking raw milk.
The parent of your post specifically said selling raw milk through normal store channels. The issue is not raw milk, but selling raw milk, which, when you combine storage, transport, and the consumer's own storage, means plenty of time for massive bacterial growth. As I'm sure you know, even with normal pasteurisation milk spoils relatively quickly.
> If I were to conjecture, it's that the "no raw milk" diktat forces farmers to go to big distribution companies with the requisite facilities for pasteurization.
Health authorities first started to push for pasteurisation after its extensive success in massively reducing illnesses - and deaths - due to spoiled milk.
You should remember that not everyone lives in the same hot climate as you, where milk generally doesn't go bad immediately, and that there are plenty of people in cooler climates whose stomachs can usually handle milk without problems.
Boiled milk, yes. But unless you get milk straight from a farm, it's likely pasteurised: heated to 72 degrees Celsius for 15 seconds. [EDIT: I didn't realise how many places allow sales of unpasteurised milk; yikes - I'll be careful about reading labels next time I'm travelling]
> and that there are plenty of people around in cooler climates that have stomachs that usually can handle milk without problem.
The "stomachs that usually can handle milk without a problem" point is entirely unrelated to why we pasteurise milk. Pasteurisation does not affect the lactose content of the milk, and that, combined with whether your genes make you lactose intolerant, is what determines whether you handle milk well.
It's the same nanny-state issue as when the FDA shut down more beneficial AIDS treatments in the 80s and 90s, when the only drug on the market that the FDA approved of (AZT) was essentially toxic and killed about as many people as it "helped". Why should the FDA decide what goes into the bodies of supposedly "free" people? They should only act to say, "This is the only type of drugs or milk the FDA approves of"
It's not particularly favorable to raw milk.
> They should only act to say, "This is the only type of drugs or milk the FDA approves of"
The same CDC report mentioned above specifically addresses labelling, and points out that the numbers show labelling has no significant effect.
If it were only your own body you put at risk, you might have a point, but this also includes parents putting their children at substantial risk, and people putting others at risk whenever they serve non-pasteurised dairy products to people who are not themselves aware of the risk.
The first sentence goes "If you read Boing Boing, the NSA considers you a target for deep surveillance".
So, if you find this interesting, maybe you shouldn't read it.
First they came for the Socialists, and I did not speak out—
Because I was not a Socialist.
Then they came for the Trade Unionists, and I did not speak out—
Because I was not a Trade Unionist.
Then they came for the Jews, and I did not speak out—
Because I was not a Jew.
Then they came for me—and there was no one left to speak for me.
I don't agree at all with these practices.
Heard that one before.