Hacker News
Times Pulls Article Blaming Encryption in Paris Terror Attack (insidesources.com)
643 points by ColinWright on Nov 17, 2015 | 296 comments

"One key premise here seems to be that prior to the Snowden reporting, The Terrorists helpfully and stupidly used telephones and unencrypted emails to plot, so Western governments were able to track their plotting and disrupt at least large-scale attacks. That would come as a massive surprise to the victims of the attacks of 2002 in Bali, 2004 in Madrid, 2005 in London, 2008 in Mumbai, and April 2013 at the Boston Marathon."


This is always the part of the anti-Snowden case that baffled me. Those who seem to think that he alerted terrorists to the most secure means of communication seem to assume that, prior to the Snowden leaks, they were communicating by yelling really loudly across the NSA buildings. It's like they simply forgot about the biggest reason it took so long to find Osama bin Laden: he was so security-conscious that he used couriers and relays (which, counter to the broad narrative about the terrorist shift to security, actually cannot be decrypted).

The most dangerous terrorists have probably already reverted to couriers with one-time pads. One-time pads are uncrackable, yet they were used extensively before modern cryptography was even invented. They're cumbersome and constrained but very effective. No amount of mass surveillance will alter their efficacy.


So I read the wiki on the one-time pad and there's something I'm a little stuck on. There's a statement (paraphrasing) that the OTP is immune to cryptanalysis (brute force) because any given ciphertext can be translated, under some key, into every possible plaintext, and all the viable plaintexts are a priori equally likely.

The thing I'm stuck on, though: isn't it still possible to do semantic analysis on the various permutations, basically reading through them for cogent statements? That is, some sort of a posteriori analysis.

Infeasible for a human to do, but assuming one could construct a significantly advanced parser (non-trivial of course), wouldn't it be possible to brute force still? What am I missing?

No. What you described will work for a simple substitution cipher, but not for a one-time pad. A one-time pad is the same length as the message, and permutes every letter independently. Trying all keys will yield every possible plaintext. For example the phrase:

"The swallow flies at midnight"

May (with a one time pad) be encrypted into


(which, incidentally, is indistinguishable from random noise)

If you just bruteforced that by xor'ing every character with every other possible character you could derive every possible message of that length, such as:

"garfield hate lasagna someday"

"men are cats why even bother?"

"pocket knives go to space yay"

etc ad infinitum

No measure of semantic analysis will help you here!
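The argument above can be sketched in a few lines of Python. This is my own illustration (the strings and variable names are made up, not from the thread): for any guessed plaintext of the right length, there is always a key that makes the ciphertext "decrypt" to that guess, so a brute-forcer has no way to prefer one candidate over another.

```python
# Sketch: brute-forcing a one-time pad is futile because for ANY candidate
# plaintext there exists a key that decrypts the ciphertext to it.
import os

plaintext = b"The swallow flies at midnight"
pad = os.urandom(len(plaintext))                        # truly one-time key
ciphertext = bytes(p ^ k for p, k in zip(plaintext, pad))

# An attacker guessing a different message of the same length can always
# derive a "key" that makes the guess come out of the ciphertext:
guess = b"pocket knives go to space yay"
fake_key = bytes(c ^ g for c, g in zip(ciphertext, guess))
recovered = bytes(c ^ k for c, k in zip(ciphertext, fake_key))
assert recovered == guess   # every plaintext is equally consistent
```

Since every same-length message is reachable by some key, the ciphertext carries no information about which one was sent.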

Well-known caveat for people who are familiar with encryption, but it's worth calling out explicitly here:

If you use the same one-time pad to encode two or more different messages, then all the sorts of attacks proposed here become plausible again.

The security provided by a one time pad relies entirely on the fact that it is only ever used once.

I'd like to add that this scenario actually happened during the Cold War. The Soviets were reusing one-time pads and the US Army decrypted some of the messages; among other things, this led to the discovery of Soviet spies targeting the US nuclear weapons program: https://en.wikipedia.org/wiki/Venona_project
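A minimal sketch of why reuse is fatal (my own example messages, not from the thread): XORing two ciphertexts that were encrypted under the same pad cancels the key completely, leaving the XOR of the two plaintexts, which is then open to statistical and crib-dragging attacks.

```python
# Sketch: two messages encrypted with the SAME pad.
import os

p1 = b"attack the bridge at dawn"
p2 = b"retreat to the safe house"
pad = os.urandom(len(p1))       # reused for both messages (the fatal mistake)

c1 = bytes(a ^ k for a, k in zip(p1, pad))
c2 = bytes(a ^ k for a, k in zip(p2, pad))

# XOR of the ciphertexts equals XOR of the plaintexts: the key is gone.
combined = bytes(a ^ b for a, b in zip(c1, c2))
expected = bytes(a ^ b for a, b in zip(p1, p2))
assert combined == expected
```

From `combined` an analyst can guess likely words in one message and read out fragments of the other, which is essentially what the Venona analysts did by hand.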


Computerphile recently showed how this was done.

Is it really still a "one time pad" if you used it multiple times?

Easy to tell the right sentence because it's the only one capitalized correctly! /s

Ahh yep, got it.

I am out of my element here, but my understanding is that since the key is equal in length to the message, there is no way for you to know whether you are simply seeing a pattern in the key or a pattern in the message.

Imagine a one time pad made for encoding numbers that used a "MOD 10" operation on each digit.

Then imagine the key is:

And the message is:

The output is:

Alternative messages:

    1234567890123 -> 7150027069897
    1111111111111 -> 7037671370885
In all cases, the patterns that you can discern may be from my message and may be from the key. As an analyst, you can't tell.

If this were English letters rather than numbers, and you know 'e' is very common, you still can't get anywhere because each 'e' is encoded with a unique character from the key.
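The digit-wise mod-10 pad described above can be sketched directly. Note the key digits here are made up for illustration (they are not the ones elided from the comment): the point is that a repeated plaintext digit does not produce a repeated ciphertext digit, because every position uses its own key digit.

```python
# Sketch of a digit-wise "MOD 10" one-time pad.
def encrypt(message: str, key: str) -> str:
    return "".join(str((int(m) + int(k)) % 10) for m, k in zip(message, key))

def decrypt(cipher: str, key: str) -> str:
    return "".join(str((int(c) - int(k)) % 10) for c, k in zip(cipher, key))

key     = "6926560269774"   # made-up key, same length as the message
message = "1234567890123"
cipher  = encrypt(message, key)
assert decrypt(cipher, key) == message

# An all-ones message still produces varied cipher digits, so frequency
# analysis (e.g. spotting a common digit, like 'e' in English) gets nothing:
assert len(set(encrypt("1111111111111", key))) > 1
```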

This is a good description, but to add on to it: if there is a pattern in the plaintext, it does not increase the probability that there is a pattern in the ciphertext. It is true that there may be patterns in the ciphertext, but they give you no information about whether there is a pattern in the plaintext.

The key is the same size as the message. Each letter translates the corresponding letter and no others. You could make a key to translate the message to anything with the same number of letters.

Well, no. What you are describing is basically searching through all permissible permutations in a given search space, i.e. a thousand monkeys with typewriters. From time to time the system will produce something that is not gibberish, but there is no way of knowing if it is related to the true message at all.

The message has an equal probability of decoding to ANY message. You essentially have no information to work with.

It doesn't seem very likely that anyone has broken a modern symmetric cipher like AES or ChaCha. If not, a small random key is just as good as a one time pad, and you can reuse it for as many messages as you want. The bigger risks are that you reveal the key or that your hardware is evil, but OTPs don't save you from either of those.

With public key crypto it's a lot more likely that something might be broken. But then again if you somehow solve the problem of swapping secret keys/OTPs with everyone you want to talk to, you don't need public key crypto.

I understand what you're saying, and I agree. But I think it might be a little disingenuous to use Osama Bin Laden's crypto practices as an example. I believe this is the more interesting story of what really happened with Bin Laden?

>Pakistan secretly captures Bin Laden by bribing tribesmen. The US finds out by bribing Pakistani officials. Further bribes with foreign aid money get other Pakistani officials to issue a stand down order. The SEALS swoop in unopposed but somehow still lose a helicopter. They kill a captive Bin Laden as part of a deal to avoid exposing Saudi support for Al Qaeda. The media gets fed a cover story about the compound being a command center. Some doctor guy becomes a scapegoat and vaccination programs are derailed in all of Pakistan. The CIA fabricates documents from the compound and flirts with claiming credit for "enhanced interrogation" technique in the matter.

If you don't like that story then sure by all means stick with Story A: The CIA does brilliant investigative work. The commander-in-chief makes a gutsy call. The SEALs storm in and kill the bad guy in a firefight. He is buried at sea with full rituals. The 2012 presidential campaign starts a few days afterwards.

If you support Story A, then this would certainly make sense:

>It's like they simply forgot about the biggest reason it took so long to find Osama bin Laden: he was so security-concerned that he used couriers and relays (which, counter to the broad narrative about the terrorist shift to security, actually cannot be decrypted)


The claims come from a previous HN article about the killing of Bin Laden: https://news.ycombinator.com/item?id=9520984

Those are some pretty big claims, have any evidence to back them up?

The only thing that lends credibility to the story in the post you're responding to is the person who's claiming it's the truth. Seymour Hersh [1] has enough of a track record that we shouldn't dismiss it out of hand. As far as evidence goes, I'm not sure the official story has been proven any more than Hersh's story has, so it's hard to know what to believe.

[1] https://en.wikipedia.org/wiki/Seymour_Hersh

This IIRC is essentially the story reported by investigative journalist Sy Hersh, relying on unnamed sources. See here for the full read: http://www.lrb.co.uk/v37/n10/seymour-m-hersh/the-killing-of-...

Thank you! Ok I remember when that story broke but I didn't read the details. Seymour Hersh is such a respected and renowned journalist who has a track record for revealing exactly this stuff. But the lack of openness in the sources is disquieting. But the story is so odious and so serious, I suspect there are no sources that could weather the storm regardless.

The less evidence and the more faith involved, the more captivating the conspiracy theory. That's how they work.

I fail to see how story b contradicts the parent post. If Osama was easily traceable there was no need to bribe everyone.

I was just using it as an example of it long being on the mind of terrorists that they need to take extreme precautions to avoid their electronics being compromised. Snowden didn't alert them to the concept of decryption. Sorry if I implied anything more.

The other 'wild' claim I heard was that his compound was actually a prison, built especially to house him. Again, no proof, but an interesting idea nonetheless.

To be fair, an ultra-secure compound that the owner/resident of doesn't ever leave for fear of his own safety can be indistinguishable from a luxury prison in terms of outcome, even if it wasn't intended that way.

It's even more absurd than that. The premise is that we can have a public worldwide debate that emphasizes how important encryption is to successfully carrying out terrorist attacks, convince the world to give up their privacy for the sake of safety, and after the majority of the protestors have been defeated by public awareness that encryption and terrorism go hand-in-hand, the terrorists will go back to using phone calls and unencrypted email.

My thought is that if we implement a back door policy, the public will have communication links which are much easier to compromise by any party, while terrorists can still easily use encryption and/or steganography to secure their communications.

All the back doors accomplish in the end is harming the general public.

It's not just that federally-mandated backdoors increase the chance of compromise (although they certainly do). If the U.S. government can mandate that technology companies provide access through encryption, then so can other countries. Sovereignty is still a powerful concept under international law.

So imagine if Apple, upon condition of selling iPhones within China, must provide the Chinese government with a backdoor through Apple's encryption. Do you know how many federal employees use Apple technology? Including the President himself, who totes an iPad everywhere?

I don't think that federal intelligence and law enforcement officials calling for backdoors have fully thought through the consequences.

> I don't think that federal intelligence and law enforcement officials calling for backdoors have fully thought through the consequences

I think assuming that level of incompetence is a big claim.

It's simply much more likely that the actual plans/goals and the talking points and press releases about the plans/goals are mostly unrelated, as usual. You can infer that those calling for backdoors have decided that calling for backdoors is the best thing to say, inferring anything more requires more information.

> I think assuming that level of incompetence is a big claim.

Not really. Have you listened to the average member of Congress lately? Incompetence runs deep throughout all levels of government.

I think largely politicians are very intelligent and competent. What they're not though, is open about their own thoughts. That's not their job. Their job is to reflect what their supporters (financially and votingly) want. If their supporters have wild nonsense ideas, they have to promote those ideas to keep their job. That's why they were elected. We don't choose politicians because they're smart and have good ideas. We choose them because they tell us our own foolish ideas back to us.

The key point here though is that their "supporters" are NOT the American public. They are private corporations and banks with their own agendas, usually contrary to the public's well-being.

The best salesman is someone who believes in the product. I'm willing to take them at their word.

It's not fair to say incompetent; I didn't say that.

Instead, look at it this way: everyone has their area of responsibility. The head of the CIA is charged with providing the best possible situational awareness for the U.S. government. He's going to make proposals and requests that will help him do that.

He's not charged with balancing all possible consequences from his requests, and he's not going to do so.

> He's not charged with balancing all possible consequences from his requests, and he's not going to do so.

But ... that seems like quite a level of incompetence for someone trusted with the position of Head of the CIA?

By which I mean, not charging him with that responsibility is a mistake (in gov structure), but him actually not doing so is his own incompetence, is it not?

I think assuming that level of _competence_, at least in fields other than diplomacy and public speaking, is a big claim. It's not reasonable to assume that every single ancient senator that's pushing for this is fully aware of the consequences, but it is reasonable to think they simply haven't thought any further than "I need to make it look like I'm doing something, or I'll look bad".

To a man with a hammer, everything looks like a nail. To a man who only has words, everything looks like a soapbox.

> You can infer that those calling for backdoors have decided that calling for backdoors is the best thing to say, inferring anything more requires more information.

How is calling for a really bad idea because it's the best thing to say different from the level of incompetence you say is too big to assume?

It sounds like the implication is that the politicians are smart enough to know that the backdoors are ultimately not going to happen but that calling for them is a way to appease voters who haven't followed this through to its logical conclusion.

edit: "not going to happen" could be read as "not going to be effective." I wouldn't actually be surprised if the US ended up passing some law restricting crypto to an approved list of backdoored schemes (surprised: no, dismayed: yes), forcing people into hiding their crypto in deniable ways. What some people don't seem to grasp is that no matter how much you outlaw certain math operations, whether or not the end users comply with those laws is ultimately up to them, and the terrorists simply won't comply.

It would help conventional law enforcement. Most violent criminals probably don't put enough forethought into their spontaneous attacks to properly avoid notice by the NSA. Unfortunately the NSA doesn't seem to use their resources to secure the nation, and just ignores all the large scale violent crime that goes on daily.

> Unfortunately the NSA doesn't seem to use their resources to secure the nation, and just ignores all the large scale violent crime that goes on daily.

The NSA is part of the Department of Defense charged with foreign signals intelligence for national security purposes, its not a law enforcement agency. (And many of the tools it uses would be flagrantly unconstitutional if used for domestic law enforcement.)

Generally, blurring of the military and law enforcement roles is not a healthy thing; it may be good for order, but rarely for liberty.

It's largely outside of the NSA mandate to monitor the communications of US citizen criminals. It is not the fault of the NSA that Congress has not passed a law telling them to spy on domestic criminals.

I get that some of the controversy lately is about how well they stick to their mandate, but that doesn't change what the mandate is.

Many regulations do. The real trick is giving even the terrorists reasons to behave.

It sounds impossible, of course, but that's just because we're not sure where these guys put their goals; some may be fighting for lack of another purpose in their life, while others may be motivated by personal loss or a thorough belief in the violent interpretation of Islam.

So what we really need to do is find a common thread among all the terrorists, and pull on it.

Indeed. Also, combine your steganography argument with this image


and it's pretty easy to see that this avenue of thought --- that we will put our efforts toward finding a way to ban encryption --- is totally ridiculous.

One of the reports on the Osama Bin Laden raid stated that he had a porn stash, I wondered at the time if this was for use as a starting point for steganographic communications.

The State is far more efficient and effective at instilling terror than any loose organization of dissidents could ever hope to be.

Read pretty much anything Amnesty International publishes, or even the right history books and you'll have all the evidence you ever need as to why privacy is important even when you think it isn't.

Once bad things start happening it's too late to change your mind. They already have what they need.

If (almost) everyone who is not a terrorist were to give up encryption, then it would be much easier to track down/narrow down the terrorists if they keep using it, no?

No because they're not using it in the first place. It's a totally unrelated issue, dear to some for reasons entirely their own. The fear of terrorism is just a convenient little button to press to get their cookie.

Maybe, but I'd rather not send my financial information over the Internet in cleartext every time I buy something from a web site.

You wouldn't.

The government would just have the private keys so they could decrypt traffic easily.

Kind of like the TSA approved locks for your baggage. Those are very secure and can be opened only by the government officials, last I heard. The only downside is that those TSA master keys might become available to the third party somehow, but I don't see that happening anytime soon, since we can always trust the government to keep such information secure.

Not "might become available", they already are - https://github.com/Xyl2k/TSA-Travel-Sentry-master-keys

Someone posted a photo of the master keys and the Internet was, as usual, the Internet.

Call me crazy, but I think that may have been the joke.

No lock is secure, ever.

Then the terrorists would just use non-encrypted communications and blend in with the massive flow of everyday data...

...Unless you're intercepting and sniffing everything that's unencrypted. Then we're back to square one.

Indulging this fantasy, it begs the question of whether those (unencrypted) communications would have been targeted and flagged for analysis.

How does it beg the question?

There is no truth being asserted in the premise that causes the conclusion to be true. It's fairly clear that it was a statement from an original source; whether that source is trustworthy is where we would find (or not find) the information that would show the conclusion to be true, not in the premise of the 'fantasy.'

Give it up. In 30+ years of learning English, the only times I've ever seen it used in that sense has been when someone is being a pedant on forums and illustrating the now rapidly outmoded meaning of the phrase.

In 20+ years of learning English I've heard it used several times to refer to an obvious question or lack of information

"$noun did $verb which resulted in $result and $result is $something interest which begs the question $question."

Well, your confusion of tenses kinda makes your knowledge of grammar look rather questionable, and I say that as a foreign speaker.

Until it changes meaning, I cannot give it up. A curse I tell you!

Not really, because the proposals aren't to go to no encryption but to go to government-backdoored encryption. To an observer it's still just a random stream of bits until someone tries to decrypt it with the backdoor key. Pretty easy to hide in, unless we're monitoring every message.

And terrorists would be smart - they'd employ non-backdoored encryption, hide that in "legit" communications, and encrypt that with the proper back-doored government encryption. They'd appear to be a totally normal snapchat user, or whatever.

If we actually meant to give up all encryption, script kiddies would destroy civilization in three weeks.

And when the next Snowden publishes the global decryption keys we'll all just update our software together.

Surely you mean "the next Impact Team" (those behind the Ashley-Madison breaches) because Snowden actually went through painstaking lengths to not reveal stuff that would put the public at danger like that.

If there were some way to learn the identity of said terrorist simply by observing the origin and destination of an encrypted message, then perhaps. It would be possible for them to coordinate and avoid this outcome, though. And there are still plenty of secure cryptosystems out there, so we wouldn't be able to decrypt their messages (unless quantum computers turn into an actual thing).

I don't think so, as far as I know encrypted data doesn't really have a signature that can be easily spotted

It does; it looks random (high entropy).

Yes, measured data and unmarked compressed data have this same property, as does actual random data. But it does not look like a false-positive rate of nine nines is a concern to those people.
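The "it looks random" test being discussed here usually means estimating Shannon entropy per byte. A minimal sketch (the sample strings and function name are my own, not from the thread): ciphertext from a good cipher is close to 8 bits per byte, plain English text is far below that, and compressed or measured data sits near the top too, which is exactly why entropy alone is a weak classifier.

```python
# Sketch: estimating Shannon entropy per byte of a blob of data.
import math
import os
from collections import Counter

def entropy_per_byte(data: bytes) -> float:
    """Empirical Shannon entropy in bits per byte (max 8.0)."""
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

english = b"the quick brown fox jumps over the lazy dog " * 200
random_bytes = os.urandom(len(english))   # stand-in for good ciphertext

assert entropy_per_byte(english) < 5.0        # plain text: low entropy
assert entropy_per_byte(random_bytes) > 7.5   # ciphertext-like: near 8 bits
```

Compressed archives, video streams, and sensor dumps also score near 8 bits per byte, so a dragnet that flags "high-entropy traffic" flags most of the modern internet.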

You can set the entropy to any amount you want. You need to consider encryption methods that put in at least a bit of effort to hide themselves. It could select random phrases and pretend to be a spambot.

Or it could look like the output of a Markov chain sentence generator on the topic of religion and post the messages in some well-known forum.

Well, where's the boundary between cryptography that hides itself and steganography? Is there one?

If you include steganography, yes, it's certainly not easily recognizable. I don't think good steganography can be recognized at all, but that's not my area and I've had people contradict me on this (without further info), so I'm not sure.

There's not much of a boundary, but that wasn't exactly the point I meant to make. You can do something like encode as ASCII 0s and 1s and have low entropy without that hiding anything.

> It's even more absurd than that... after the majority of the protestors have been defeated by public awareness that encryption and terrorism go hand-in-hand, the terrorists will go back to using phone calls and unencrypted email.

I think you're straw-manning their position here.

The opponents of encryption aren't claiming terrorists will start using unencrypted communication, they're advocating for legally-mandated "back doors" to be built into supposedly-secure communication systems.

It's not a straw man. It's one of the problems with the rationale.

Terrorists won't use legally-mandated software for their illegal operations.

You're right, but I didn't mean only phone calls and unencrypted email. I considered adding "or their equivalent back-doored technologies," but I hoped it would be clear. Using a compromised technology is the equivalent of unencrypted email and phone calls to whoever has the key. All it does is draw attention to how important it is to communicate in secret. This will be the Streisand effect of crypto.

How would they know, though? For all we know, OpenSSL has been backdoored in some subtle way.

ISIS actually tweeted in the clear before this one. Forget crypto, nobody was reading the open sources even.


I don't fully understand this line of thinking. To me, it's like saying "The belief that seat belts save lives might come as a surprise to these people who died in automobile accidents while wearing seat belts."

It may indeed be that this kind of signals intelligence isn't actually helpful, but I don't think this is a very good argument either way. If there was a massive disruption of planned terrorist attacks, it is not useless just because there wasn't a complete elimination of terrorist attacks.

Exactly. Same reason you have locks on your doors even though they are easy to defeat. Tech types want to believe that the government is always overreaching and everything is a slippery slope to some other loss of freedom. Mixed in with a bit of overcooked paranoia thinking the government has enough time and energy to track down everyone and whatever laws they are breaking by reading their emails.

Bad analogy. The sites I help run often receive 10k attacks per minute from fuzzers and known exploits. The internet provides anonymity, unlike some guy standing at your front door. I have yet to have an army of bad guys trying to pick my lock 10k times per minute at my house. Backdoors into the things our economy is based on, like SSL, are just plain irresponsible.

If they actually did have enough time to enforce all broken laws by reading our emails then their unreasonable effectiveness may warrant the intrusion. If these capabilities only are able to thwart a small percent of lawless plots then perhaps the cost to our personal privacy and security is too high to justify.

The terrorists also utilized roads and clean drinking water... better get rid of those, too!

This argument misses the point IMHO: the US government was investing astronomically large sums of money in an infrastructure that was fated to quickly become less and less effective. In doing so, they were even willing to violate basic constitutional rights.

It would indeed come as a massive surprise to the 2008 Mumbai terrorists, who were well versed in using encrypted channels to communicate --


They were not using encrypted channels for their operational control. The security forces had no trouble listening in on the calls and text messages being sent between the attackers and their command and control infrastructure.

The issue the security forces _did_ have was in identifying where the calls were originating. The actual content was not encrypted though.

It was sarcasm. Encryption was used in all the listed attacks.

I think I remember a terrorist attack in 2011 too.

Increasingly we're seeing 'encryption' discussed in mainstream media as a tool that can only be used for nefarious ends. Kind of like how they talked about "cookies" in the late 90s and early 00s.

It's unfortunate that there's no way [without a lot of money] to launch a campaign that highlights the fact that most sensitive online transactions rely on encryption - banking, purchasing, etc - to protect you from The Bad People who want to steal your critical financial information.

That would also highlight that it's not just about keeping secrets from the government - in fact, that's the smallest fraction of it.

Unfortunately, they do it without a hint of irony, as the same media companies deploy SSL/TLS on their own websites for, at the very least, login and signup, taking payments, and more. I'm not sure if it's simply that no one has explained to them that SSL/TLS is encryption, or what nefarious things could be done (stealing identities, credit cards) if certain communications weren't encrypted.

In the end, it's more education, isn't it? Most things are. But in some cases, like this, it's a race between general education and political (and spy agency) advantage-taking.

That's Snowden's greatest legacy: a few more people became educated, and the public frustration and the pouting the spy agencies are doing over encrypted communications are a bit more transparent than they once were.

I'm pretty sure they are trying to push for encryption with government backdoor access. SSL certificates are issued by centralized authorities that can easily provide access to governments (if they don't do so already). Not that I agree with what they're trying to do.

CAs do not have backdoor access. The most a rogue CA can do is possibly enable a MITM against a website that has taken no protections.

Google pins their public key fingerprints right into Chrome, and this feature is open:


I don't understand. How can you be sure that they haven't shared their keys to the government?

Having a CA's private keys only allows you to generate new signed certificates for a site, not decrypt traffic encrypted using an existing signed key pair. At best, it gives you the ability to spoof a website with a man-in-the-middle attack (e.g. you run a rogue wifi hotspot with a fake amazon.com that uses a private key you generated), although certificate-pinning would warn the user that Amazon's certificate had changed unexpectedly.

You don't need to send your private key to a CA to get a cert. You send them a CSR with your public key.


Edit: Ah, if you were talking about the CA's private keys, then the parent is correct.

Do they want attackable weaknesses in crypto, or a "master" key?

We have to get rid of this narrative of bad versus good people while we're at it. It removes every bit of nuance from the discussion.

It doesn't just remove nuance, the entire framework of the discussion is built on untruths when we start there.

Unfortunately this is a very widespread notion in culture and is helped by major religions.

I wonder how true that is. For Christianity at least, most dominant theology takes the view that the things that exist in the world can be used for good or for ill and aren't inherently one or the other in and of themselves.

Same with Buddhism

That's the point. What the Paris attackers did makes them bad people. It's not up for debate.

Surely you can see that they think the exact opposite. In order to do something like that, they have to convince themselves that they are the good guys, and that it has to be done to "purify the evil in the world" or something. The truth is that people are not inherently bad. Evil feeds on itself; if you insist on calling them bad, then you are falling into the same trap that they did. Hopefully you wouldn't act in such a heinous manner, but Western societies kill thousands of innocents all the time, and it is masked with the same rationalizations; if a drone strike kills someone, we define them as a bad guy.

> The truth is that people are not inherently bad.

I don't know about that. I'd argue that most people don't think of themselves as bad, but it doesn't make them not-bad. And humanity definitely has base instincts - if we didn't, we wouldn't need governments and police.

> but it doesn't make them not-bad.

I disagree. I've never found a coherent absolute ethical framework. Not even "Don't kill people."

When anyone is arguing that something is bad, that person has to appeal an authority or belief. Often the reasoning leads to utilitarianism: "If we want society to continue, we should ban murder." But that's still an if/then statement. Beneath the if/then is an appeal that society is good or desired.

> And humanity definitely has base instincts - if we didn't, we wouldn't need governments and police.

This viewpoint becomes popular with Hobbes in the 1600s. I disagree with the term "base instincts," which negatively connotes those things. I'll change it to "randomness"; i.e. in a society of nondeterministic people, some will try to kill the others. Government and police try to reduce that kind of randomness. But from that same randomness we get music, science, justice. I'm speaking loosely of course.

I think of my American government as social contract, not as protective parent.

> I'll change it to "randomness"; i.e. in a society of nondeterministic people, some will try to kill the others.

I think "randomness" is even worse, because it makes it sound like these actions could be expected from anyone at anytime. Statistically, a small portion of the population is responsible for most of the violence, through repeat offenses. Also, in many situations there are clear warning signs (e.g. mentally ill with clear homicidal ideation, member of a gang). Are you really arguing that a dice roll is a good fundamental model for human behavior?

Nondeterminism (assuming humans are truly nondeterministic) doesn't really matter here, except for the fact that we don't have a way of precisely predicting people's behavior (and may never have one). If it turns out that humans are just complicated, deterministic machines that we can not feasibly predict, the reasons for which we have developed societal structures do not disappear.

> But from that same randomness we get music, science, justice.

I think it's pretty easy to distinguish between the first two and violence. I agree that the third is a bit trickier. If what you mean is that the exercise of both these and the dispositions-formerly-known-as-base-instincts is a result of allowing a significant measure of personal freedom, I'll agree with you. But writing human behavior off as having no more structure than a random number generator ignores a lot of predictive and explanatory power that we do actually have.

> it makes it sound like these actions could be expected from anyone at any time

They can. A neurotypical person's brain could spontaneously generate a violent psychotic episode due to a stroke or adrenergic tumor or somesuch. Just like any area of the sky could spontaneously throw a lightning strike at you at any time. It's low probability, certainly, but murder is a low-probability event to begin with.

The important point is that a model with "non-deterministic" people in it has more predictive power, epidemiologically, than a model where it's impossible to become a murderer without "warning signs." It's not at all "writing human behavior off"; the fact that the model includes randomness can actually help you prevent murders more effectively, by leading you toward strategies to cope with unpredictable murders—e.g. building education toward methods of "de-escalation" for psychotic episodes, crimes of passion, etc. into your society—rather than simply trying to reinforce policing and social work.

> They can. A neurotypical person's brain could spontaneously generate a violent psychotic episode due to a stroke or adrenergic tumor or somesuch. Just like any area of the sky could spontaneously throw a lightning strike at you at any time. It's low probability, certainly, but murder is a low-probability event to begin with.

I think if we eliminated all murders except these, we would be in excellent shape. My point is these are not the ones worth focusing on, because we don't have good tools to deal with "random, history free, psychotic break."

> The important point is that a model with "non-deterministic" people in it has more predictive power, epidemiologically, than a model where it's impossible to become a murderer without "warning signs."

Well, if your probabilistic model has no "warning signs", then how does it provide any information at all? If you don't have a method of using information to differentiate the probabilities when given a person, a group of people, a location, etc., then you have no predictive power at all beyond the average murder rate per capita.

> building education toward methods of "de-escalation" for psychotic episodes, crimes of passion, etc.

De-escalation of psychotic episodes is an impossibly hard thing to teach without protracted work with a mental health professional. In addition, I seriously doubt it would have any effectiveness when taught to people who have not experienced psychosis. Teaching this to everybody would be inhibited not only by cost, but by the fact that there are likely not enough people in a society who would be good enough therapists to do this on a large scale.

Depending on what you mean by premeditation, a large percentage of US murders are more or less random.

#Pollution #Cars

GP's point was that humans' thoughts and intents are a lot different from randomness. It isn't likely that most people intend to run into pedestrians whenever they get in their car.

When someone drives a car into a farmers' market, they're not exactly choosing their victims. In extreme cases you have things like people flying airplanes into someone's home.

Sure, they did not intend to crash, but choosing to risk others' lives is considered a reasonable thing to do, assuming you're not overly blatant about it.

Well if you want to get really pedantic, nothing is inherently anything because all meaning is constructed in our minds. Evil is a word humans use to categorize things, not a description of objective reality.

I would agree that most people don't think of themselves as bad, but since there's no objective 'bad', whether or not they're 'actually bad' is up to the observer. When a person is labeled as 'evil', it's not a description of them as a person but a description of what the describer thinks of them. Hitler wasn't evil because he killed millions of people; he was evil because the general consensus is that doing the stuff he did makes you an evil person.

If one loses sight of this and starts to think of evil as objective reality, they're taking their biases and opinions as objective reality, and down that path lies ruin.

I would argue that, while "evil" cannot be defined in precise terms, you can definitely say that evil acts are unnatural acts that only humans are capable of. Why? For one, because a natural food chain is balanced and biased towards the long-term survival of all the species it involves and towards further creation of life. It's a little ironic that human beings are the only ones in the entire animal kingdom that can choose how to live, the only ones able to choose what we are, our intellect transcending our DNA coding, yet we are also the only ones destroying our habitat and each other. Isn't that funny?

But back to Hitler: if you're trying to make the case that his evilness is subjective, I mean no offense, but that's a really dumb argument. Hitler was in no small part evil because his actions were, on one hand, irrational, fueling and amplifying his people's potential for hatred and destruction, and on the other hand, detrimental to the survival of our species and of Earth itself. And again, genocide is not natural. In nature, animals kill to eat, to satisfy a basic necessity, not out of some wicked sense of justice, and animals definitely cannot kill on an industrial scale like we do. Whatever definition for "evil" you find, genocide on an industrial scale is pure evil by definition.

And if that doesn't sound objective enough, consider that culture is a part of who we are. We aren't DNA-coded to eat certain foods, or to live in a certain place, or in a certain way. Unlike rats, we can rely on the wisdom of our elders in order to survive, and we've survived this way for a long time. Our rich culture, which includes preconceptions and taboos, is what prevents us from eating each other, from having sex with our siblings, or from bringing human sacrifices to our gods. Some preconceptions are subtler and newer than others; for example, the notion that children are fragile beings that need to be loved and protected, instead of someone's property, is pretty new, having been popularized by Christianity.

So if the popular conception is that doing this or that is evil or toxic or taboo, there's a high probability that such judgments are correct, helping us to survive and thrive. Even with all the false positives (which tend to be a lack of tolerance towards people who are different from us), dismissing our heritage would not be wise. Plus, usually the guidelines are simple, like "being a murdering maniac counts as evil," though somebody should tell those jihadists.

You make a lot of excellent points, but I think you might have misinterpreted my comment, because it doesn't seem like we disagree. You mention that evil cannot be defined in precise terms, and that's exactly what I'm talking about. It's not a property of things in reality that can be measured; it's a judgement about things made by people. The judgment may be very rational, but it can't be based wholly on objective reality. For that to be the case, it'd have to be based on a physical law or property rather than a moral principle, and I just don't see how that's possible. Your argument for why Hitler was evil is really solid, not because it's based in objective reality but because it's based on the principle that the continued existence of the human race is a good thing. I'd say that's a solid enough principle that it may as well be objective reality. But technically speaking, it's still a subjective judgement that we're making about how things should be, rather than an objective observation of how they are.

This is a really pedantic, subtle and (I think) important distinction. If one sees good and evil as objective truth, they're reliant on the source of that truth for their moral judgement. This explains the jihadists you mention. It's not that they enjoy being murdering maniacs (although I'm sure some do), it's that their source of moral truth tells them that the infidels are evil and must be destroyed (or whatever), so they see what they're doing as a good thing.

Ideas can be bad, which is why we have to be honest with ourselves and call the idea that leads people to conduct evil acts evil.

The problem is that by calling them 'bad people' you imply that they committed the crime because they were 'bad people'. And that's the core of the problem. They were not 'bad people' before they committed the crime. They were just people in need of help. By focusing on how they were 'bad people', you make it seem like you haven't contributed to the problem and there is no other solution than to fight them (using violence), while in fact there are many things we could have done differently and many things we still can do to prevent these crimes from happening. The fact is that it's much easier to blame it on them for being 'bad people' than to look at the consequences of our own actions.

This sounds uncomfortably close to victim blaming.

I am not blaming the victims at all. For example: you could call many people living in the hood 'bad people' because of the crimes they commit, but are they really 'bad people', or do they eventually commit these crimes because of the situation they grow up in and the disadvantages they have had in life? By stating this, am I blaming the victims of armed robberies, of theft, of murder? Of course I am not. I just think we should have a nuanced, rational discussion about how we can prevent crimes instead of arguing about offenders and victims, bad people and good people.

That's fair, I think I see where you are coming from now. I do think, however, that we have to be careful not to remove personal responsibility from the equation. Circumstances can certainly motivate behaviour but we are still responsible for our actions. Suggesting otherwise seems compassionate on the surface, but it can also be incredibly disempowering and corrosive.

There was an article lately about interviews with captured ISIS fighters on death row. (I can't find the link, unfortunately; maybe anyone's got it?) The idea was that what some of them describe is basically being forced to join the movement because there's no other way for them to provide for their families - they just suddenly lived in the middle of a war, but they don't care about the ideology itself. Biased? Of course, and it doesn't apply to the European attackers. But still something to think about.

I agree with you both, but I think we can argue that people can have mitigating circumstances, and that you can (and should) try to understand why it is they do what they do, rather than "Any gun in the hands of a bad man is a bad thing. Any gun in the hands of a decent person is no threat to anybody — except bad people." Life isn't a comic book.

Agreed, good point.

Actually, that's an entirely subjective statement. While I think they're monsters, many people feel that western culture is immoral and downright evil. To people who have seen nothing but the West waging war in their region, these people may appear to be freedom fighters dying for a noble cause. Good and Evil are entirely a matter of perspective.

They carried out a successful military action that furthered their goals. And they killed far fewer people than many of the military operations we have engaged in. They are only bad because they didn't follow our rules of war and weren't us. (We often defend our own who don't follow our rules of war; see all those defending torture of detainees.)

They are bad, we are bad. The only not bad involved in all of this is the little children who we both have hurt, and even then those children will grow up and become what the other side considers bad.

I'm not sure what you're trying to say or what you think the OP and GP are arguing, but saying "the Paris attackers were bad" doesn't really add much, so you might need to clarify.

In the spirit of rapprochement, how about we say bad vs bad people?

Depends on what you consider bad people. One could easily say that both the Eastern and Western worlds are bad or evil.

I don't see the point to inserting a philosophical debate about good and evil here. What nuance are you trying to keep?

Funny how politicians twist reason to suit their own ends. Bad guys use guns, but apparently we're not allowed to ban them because they'll somehow find a way to get them no matter what. So the solution is to legalise them.

However when it comes to encryption, bad guys use it so it needs to be banned!

It's like they have it completely backwards.

Who are "they"? Many politicians are on the same side of each debate, wanting to restrict the rights of free people in the name of safety and security. I'm curious, do you see a distinction between the two issues? Sounds like you're on opposite sides for each of them.

I think the argument is that if bad guys didn't have guns, then there would be no reason for good guys to have guns except to fend off wildlife.

However, even if the bad guys didn't have encryption, we would still want good guys to have encryption to prevent the bad guys from intercepting private and sensitive communications.

>then there would be no reason for good guys to have guns except to fend off wildlife.

Ignoring issues such as fending off bad guys (so what if they only have a knife; I'd rather bring a gun to a knife fight), hunting wildlife, shooting for fun, owning collections, and many other reasons. Outright dismissing all of those is like dismissing the need to protect private electronic communications.

Some would argue that banning tools of any kind is a stupid and feckless way to address cultural problems.

But that argument would clearly be flawed. I can think of a lot of weapons that I would not want to see in the hands of everybody I meet on the street. I think it is partially correct to say that about certain information-based "tools" (like encryption or information in newspapers), but sometimes even that is not true, for example with certain malware.

Historically, though, the real threat is usually a state actor -- often one's own government -- that has been granted (or has seized) a monopoly on such tools. In particular, the modern notion that the state is the only rightful wielder of force has an unlimited downside.

Terrorism is pretty far down the list of things that frighten me.

Excellent point and what many people miss. Look at history.

The problem with having everybody possess potentially deadly tools is not terrorism, but crime in general. I, for myself, can think of a lot more crimes committed by normal people than by the government. The historical argument is of course not completely unreasonable, but there are also an awful lot of differences between state governments today and 300 years ago. At least in Europe, where I live, the government can be trusted on a lot of issues --- in particular, it is highly likely that they _try_ to do what is best and are no "evil emperors".

> At least in Europe, where I live, the government can be trusted in a lot of issues --- especially it is highly likely that they _try_ to do what is best and are no "evil emperors".

The Europeans are fond of reminding Americans that we think 200 years is a long time. So, how long ago was the Stasi disbanded, again?

You cannot compare our situation to that of the DDR. If there is more surveillance going on than we currently know, it is certainly not as much as in that time -- a big chunk of the population was reporting to the "Ministerium für Staatssicherheit" directly, which was a well-known fact.

I believe this is known as the "Special Pleading Fallacy."

Of course, you could play the "Slippery Slope Fallacy" card against my argument... but we're not dealing with logical entities, are we? We're dealing with often-irrational human beings and governments composed of them.

The EU is still losing its mind over cookies. Every European site now has a big disclaimer about cookies. The hysteria, at least for Europeans, never ended. Which is amusing considering how much local storage HTML5 can do and how easy it is to track people via their unique browser fingerprints, even without cookies and with no plugins enabled.
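To sketch the fingerprinting point (the attribute values below are made up; real trackers also use canvas rendering, installed font lists, and dozens of other signals), a handful of properties any page can read is often enough to derive a stable identifier, no cookies required:

```python
import hashlib

# Hypothetical values; a real page would read these from the browser.
attributes = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/42.0",
    "screen": "1920x1080x24",
    "timezone": "Europe/Paris",
    "language": "fr-FR",
    "fonts": "Arial;DejaVu Sans;Liberation Serif",
}

# Concatenate the attributes in a stable order and hash them
# into a compact, reproducible identifier.
canonical = "|".join(f"{k}={v}" for k, v in sorted(attributes.items()))
fingerprint = hashlib.sha256(canonical.encode()).hexdigest()[:16]
```

The same browser configuration always hashes to the same identifier, which is why clearing cookies does nothing against this technique.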

The reality is, of course, that encryption was used in these attacks. These terrorists aren't stupid. That doesn't mean it needs to be banned, the same way a random stabbing doesn't mean knives need to be banned. I think the people saying "no way, encryption has never been used for nefarious purposes" are just as bad as the other side. Having a sophisticated view of this stuff makes a lot more sense.

Encryption will be like the gun debate. It will flare up from time to time. It's an easy horse to beat because it's too technical for the layperson to really have a good understanding of, and clearly there are political elements that would love Clinton-era laws limiting key size and such. The reality is that this ship has long sailed. The genie is long out of the bottle to control encryption like it's 1993.

Yes. And (as someone pro-gun) the unconnected statistical arguments ("More people die from X every year...") and the hyperbolic arguments along the lines of "You could kill someone with a knife, let's ban them!" are offensive to me.

Unfortunately, I'm already seeing similar hyperbolic arguments about cryptography - "let's outlaw math", etc.

As you implied, I'd like to see a rational discussion. Yes, cryptography can and will be used for nefarious purpose -- but it's used in much greater capacity for legitimate purposes. That's what needs to be emphasized - it's the only logical, supportable argument to make.

I know, it's hilarious and annoying to be honest. It shows how behind the EU really is.

Pretty much all of Britain's digital policies are silly. I think the simple thing they don't seem to understand is that the Internet is smarter and moves faster than policy.

"Home and mobile working, including use of personal devices for work: ensure that sensitive data is encrypted when stored or transmitted online so that data can only be accessed by authorised users."

Quote from UK government advice to small business people, see


I suspect that GCHQ people want people using encryption but encryption that they have a feasible way of breaking if needed. Yes, I know, no way of stopping back-doors being used by other agencies/bad actors.

To be fair, I bet a lot of agencies would be ok with just having the keys at all times so that they can do surveillance. You could still have encryption for logins, financial transactions, etc.

I don't think that's a good idea either, but there are people who would probably see that as a reasonable middle ground.

Hmmm... I wonder if this would be a good use of Facebook and Twitter.


Or guns today.

This link implies that the NYT is backtracking about a piece apparently "blaming encryption." The NYT article referenced did not at all focus on encryption, and it didn't go so far as to definitively blame encryption.

The only paragraph discussing encryption is buried in the middle of the article: "The attackers are believed to have communicated using encryption technology, according to European officials."

The lede did not mention encryption at all: "The attackers in Friday’s terrorist assault in Paris communicated at some point beforehand with known members of the Islamic State in Syria, officials on both sides of the Atlantic say."


You're saying it's only possible the NYT "blamed encryption" if encryption was the article's main topic or its lede?

No, actually they're not saying that at all. They're saying that the article only barely mentions encryption and doesn't try to push an opinion on the matter.

I'm confused. Can you say more?

What are you confused about?

Since there was so much other content there, the pulling of the article could have had nothing to do with that one sentence.

NYT changes their story headlines a lot these days...

After Paris Attacks, C.I.A. Director Rekindles Debate Over Surveillance


That's part of the strategy. The most eyeballs see it right after it's released, and the intended effect is created. They've been doing this for quite some time. The first experiments were with using different headlines for the same story in the online and print edition. Generally the print edition headlines are the most restrained.

Do you have any more information on this tactic? It's fascinating, I didn't realize that there was actual intent to it.

I don't know the intended goal. It might just be clicks or page views, but what typically happens is:

- Article launches with sensational, buzzfeed quality linkbait headline.

- Article gets lots of clicks

- The title is quietly changed to a more restrained version which often matches the print edition title.

At first I thought there must just be different editors overseeing the web and print editions, but now I think it's an intentional form of viral marketing or activist editing.

In some cases, the sensational headlines really diminish the journalistic quality/seriousness of the article, or create the impression that a minor point in the article was the main thrust of the article.

Similarly, there has occasionally been a headline that appears intentionally boring so that the paper can publish a story but effectively hide it from view.

So in a nutshell it's classic misdirection, enabled by the digital medium, since tracks can be fairly easily covered making it harder for those who care/notice to call attention to it. For a while I thought about creating a screenshot archive of changed titles but eventually just stopped reading the paper.

I remember reading this article about how newspapers have started tailoring their online headlines for Google/SEO in 2006: http://www.nytimes.com/2006/04/09/weekinreview/09lohr.html

More specifically, the NYTimes public editor wrote about headlines earlier this year: http://www.nytimes.com/2015/04/19/public-editor/hey-google-c...

I blame prime numbers, even co-primes should be banned. Please contact your representative and help us free the world from evil math.

I blame it on unicode. Without unicode these terrorists would not be able to communicate with impunity.

In fairness, character encoding is clearly the work of terrorists.

If you come to America, you should have to learn the AMERICAN Standard Code for Information Interchange. Speak the language. None of this Unicode i18n nonsense!

"Press  for billing, or please hold for an attendant."

And they go around planting byte-order marks everywhere.

I think more to the point, blame Joel Spolsky: http://www.joelonsoftware.com/articles/Unicode.html

He made it easier to understand unicode!

So irrational.

No, clearly it was rational. π = 3.2

"So irrational"

To me, it's transcendental.

The question is whether it's normal.

This! The generation of large primes needs to be banned. The government needs to work with chip manufacturers to ensure that no computing device can generate large primes. This is the root of the issue. If large primes go away, so does encryption! We need to make sure our government understands this.


> The government needs to work with chips manufacturers to insure that no computing device can generate large primes.

Please don't tempt them.

I'm sure I don't have to tell you I was being sarcastic. :)

I'll notify the Mersenne project.

Not a joke -- there are cases where the government can get a warrant to compromise an RSA key and gag-order it so you can't say so.

The implication is that it can be illegal in some cases to say "The government knows the factors of X" or even "The factors of X are Y and Z."
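To make the point above concrete: an RSA private key is, in essence, knowledge of the modulus's prime factors, so "the government knows the factors of X" is equivalent to "the government holds the private key." A textbook-sized sketch (deliberately tiny toy numbers; real keys use 2048-bit moduli, and this is not how you'd implement RSA in practice):

```python
p, q = 61, 53          # the secret prime factors
n = p * q              # public modulus (3233)
e = 17                 # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)    # private exponent, computable ONLY from p and q

msg = 42
cipher = pow(msg, e, n)          # anyone can encrypt with (n, e)
assert pow(cipher, d, n) == msg  # whoever knows the factors can decrypt
```

Factoring n recovers p and q, which immediately yields d; that's why a compelled or leaked factorization silently compromises the key.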

Your scope is too small. The root of all this is decent education.

Even without having encryption in mainstream applications, like WhatsApp, it's child's play to communicate secretly. There are so many open-source applications that allow you to do this that it wouldn't even be a speed bump for criminals.

Targeting mainstream applications only hurts mainstream users. The NYT should write an article about that.

Or to put it another way,

"If you outlaw encryption, only outlaws will have encryption"

I heard that you can download plans for encryption from the internet and build it secretly in your basement with widely available tools.

Yes, and that reduces the number of people you need to watch more closely by about 100x.

Watch for what? Random noise? Innocent Looking Traffic? https://www.torproject.org/docs/pluggable-transports.html.en

In the end, bad guys could just use a one-time pad over unsecured communication channels. Then we'd have to ban pens, paper, and dice.
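A minimal sketch of why the one-time pad resists even the semantic analysis discussed upthread: for any ciphertext, every plaintext of the same length is reachable under some key, and all keys are equally likely, so no "cogent statement" stands out as the real one (illustrative Python only; a real pad must never reuse key material):

```python
import secrets

def xor_otp(data: bytes, key: bytes) -> bytes:
    # One-time pad: XOR each byte with a truly random, never-reused
    # key that is at least as long as the message.
    assert len(key) >= len(data)
    return bytes(b ^ k for b, k in zip(data, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))
ciphertext = xor_otp(message, key)
assert xor_otp(ciphertext, key) == message  # same operation decrypts

# Perfect secrecy: the same ciphertext also "decrypts" to any other
# 14-byte plaintext under a different, equally plausible key.
decoy = b"defend at dusk"
decoy_key = bytes(c ^ p for c, p in zip(ciphertext, decoy))
assert xor_otp(ciphertext, decoy_key) == decoy
```

Enumerating "cogent" candidate plaintexts therefore tells an attacker nothing: both "attack at dawn" and "defend at dusk" are consistent with the intercepted bytes.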

You seem to be overlooking the obvious argument that if a small group of people use encryption, then you greatly reduce the number of messages that you need to flag. Furthermore, if someone on a Watch List starts using encryption then perhaps you have an imminent problem.

I'm not on the side of reading messages but you missed the real argument being made.

Typical HN. How about someone answering the argument that will really be made by governments instead of the pointless one the NSA, etc have already answered. You can downvote me but you don't get a downvote in governments around the world when they outlaw real encryption.

> Typical HN. How about someone answering the argument that will really be made by governments instead of the pointless one the NSA, etc have already answered. You can downvote me but you don't get a downvote in governments around the world when they outlaw real encryption.

I was ready to up-vote your argument because I agree with the first two paragraphs, but the snark turned me off from supporting you. Leave it out next time and you'll make a more convincing argument.


The up-vote wasn't about giving you karma; it's about making your comment more visible. If you didn't care about visibility then you wouldn't be posting in the first place.

Good point. In the future, I'll try to remember that I'm wasting my time. Too late to delete most of them.

Encryption is important and is crucial to the security mechanisms that underlie the whole internet. I don't think anybody is seriously advocating making encryption illegal. (Which makes your argument pointless.)

What does seem being argued for is to mandate adding a 'backdoor' for the government.

The counter argument to that is that adding a 'backdoor' makes the encryption pretty much worthless.

Weighing those risks and the importance of privacy against the benefits for fighting terrorism is the only debate (not) happening.

"Which makes your argument pointless"

Of course it doesn't, and that's not what I meant. I didn't mean wholesale outlaw, as in no one would be allowed to use it. Governments can mandate consumer communication technology, for example, like they are trying to do with Apple now.


I work for a bank, and we use encrypted external email all the time. We have to for regulatory reasons. I'm sure it's the same for defence contractors, and it's a prudent precaution for governments and many other businesses in many parts of the world.

The ideal scenario for the NSA-types is "unbreakable" crypto with a built-in NSA-only escrow/backdoor and possibly a secondary backdoor for five eyes allies (with domestic traffic free of 5-eyes backdoor.)

Suspension of disbelief on security of escrow mechanism is a necessary pre-condition.

Fortunately that unicorn has been shot in the nineties already. Several times. Still, it won't completely die.

banks... defence contractors... governments... If you guys didn't have something horrible to hide you wouldn't need this encryption.

I'm happy that my bank uses encryption. I bet most people with bank accounts would agree.

If only the government(s) could be trusted to not pass along confidential business information from foreign companies (or even perhaps local) to help the companies in their own country.

> You can downvote me but you don't get a downvote in governments around the world when they outlaw real encryption.

I'm pretty sure you can downvote them by voting for representatives that do not support such laws. At least if you're in some kind of democracy.

There should be a name for this fallacy. This is easy to say for any issue in isolation, but what is a voter to do when they only have two or three choices? When there are dozens of issues at any given time the chances of having a candidate that aligns with you on all of them is nil.

Unfortunately that process isn't as easy as clicking the little triangle, and hence only 4 out of 10 eligible "moderators" participate[1]...

[1]: https://www.census.gov/content/dam/Census/library/publicatio...

You can vote for either a) the representative on the one side who supports such laws or b) the representative on the other side who supports such laws or c) throw your vote away and let the majority of people who don't understand/care about this issue decide between (a) and (b).

Voting is not a solution to a problem like this and is generally very ineffective in the US.

Not in a gerrymandered district. It won't do you any good, even if you presuppose the existence of a pro-encryption candidate, which is a bad supposition.

I have not seen them use that argument. If they used it, it would be better than what they normally do, which is to imply they don't understand what's going on. But even this argument is subject to the same problems; it implies they don't really know why we'd want/need encryption, because it assumes that a government could create a back door only they could use.

I don't think that would be the case. If a couple of western Governments "ban encryption" it would probably just mean that they weaken the crypto (e.g. backdoor). The traffic of weakened crypto still looks the same as the traffic of strong crypto - I don't think it would make strong crypto any more noticeable.

Also remember that crypto would still be legal in the rest of the world (China is not going to backdoor their crypto for the NSA's benefit) so you would still have heaps of encrypted traffic moving across the internet. I doubt that such a measure would do anything but make mass surveillance of western populations easier.

Assuming the numbers wouldn't go up from people encrypting out of spite

Over the last year or more I've started encrypting everything I can out of spite, just in the hope of creating noise.

You could plan terrorist activity using postcards. Should we ban the Post Office? You could use phone calls from pay phones (where they still exist) and people used to do that all the time to plan crimes. You could go on vacation together and plan something. You could use voice or text chat in an online game to plan stuff. The people in charge of these organizations are not the suicide bombers; the head folks know full well what to use to avoid getting caught.

I have a friend who used to get erotic messages from her mathematician boyfriend steganographically encrypted on postcards. I had to show her how to decrypt them. It was pretty standard cryptanalysis once I guessed that the messages existed. Some of them were filthy.

Steganographically encrypted in the printed image? How would that work?

Probably on the side you write on.

Yep, that would make far more sense... For some reason I assumed it was an image encoded on the postcard. Rereading the original post, that's not what it says, is it...

Your post seems to miss the fact that mail, phone calls and popular chat services are already logged, and asking for encryption back-doors is just asking for that process to continue.

Mail may be logged, but they're certainly not opening most envelopes to read their contents.

Postcard steganography. Would be downright devious.

It's true that the NYT pulled (or dramatically rewrote, close enough) an article claiming: "The attackers are believed to have communicated using encryption technology..." You can see the original version here thanks to Archive.org: https://web.archive.org/web/20151115191248/http:/www.nytimes...

But then the NYT doubled down on the terrorists-and-encryption angle in a separate story here: http://www.nytimes.com/2015/11/17/world/europe/encrypted-mes...

Which says in the second sentence: "Obama administration officials say the Islamic State has used a range of encryption technologies over the past year and a half, many of which defy cracking by the National Security Agency..."

HN discussion of that second story is here: https://news.ycombinator.com/item?id=10579201

A link to the NYT article now redirects readers to a separate, general article on the attacks, which does not contain the word “encrypt.” The original piece can be found on the Internet Archive.


The language seems pretty reasonable...

"The attackers are believed to have communicated using encryption technology, according to European officials who had been briefed on the investigation but were not authorized to speak publicly. It was not clear whether the encryption was part of widely used communications tools, like WhatsApp, which the authorities have a hard time monitoring, or something more elaborate. Intelligence officials have been pressing for more leeway to counter the growing use of encryption."

Sure, if you think unproven hearsay in the NYT that we're all supposed to take for granted is "reasonable". If the authorities believe encryption was to blame, they should come out and say so outright, with details. They shouldn't hide behind "anonymous officials" who "believe" that encryption, and nothing else, stopped them from learning about the attacks. It could have been just some random cop's opinion, heard from someone else who was also wrong about it.

Seriously, yes. "News" outlets have for decades printed "anonymous officials" and "inside government sources" with devastating opinions. Even worse is "So-and-so was CALLED XYZ by ..." It's like assigning a variable in programming, X = Y. Talk about programming the mass consciousness!

What's wrong with newspaper reporting hearsay, while describing it as hearsay?

NYT didn't seem to pass it as anything else than it is. What's your logic here? Should all anonymous sources be banned? Or only the sources with opinions that you don't like?

Probably sources that have a political bone to pick with no evidence should be banned.

Should the NYT report that a UFO passed over because some official told them so? No.

When you receive an anonymous tip, it's your job as a journalist to try to confirm it, not report it directly.

Additionally, when a paper reports something with no evidence and screws up, they usually publish a retraction, not change the article and direct you to a new copy.

Actually, most reputable newspapers have very strict rules about anonymous sources, including some who ban them altogether.

Here is the NY Times Standard and Ethics Statement, which describes the newspaper's "distaste" for anonymous sources:


In fact, that's probably the reason it was pulled: a complaint was made to the NYT Public Editor about the piece's sources.

If so, they should say so. Silently pulling the article and redirecting the URL is a little misleading.

Because, with anonymous sources you can sway public opinion, saying pretty much anything you want without the possibility of being held accountable for it.

The NYT was instrumental in printing outright lies in the lead-up to the Iraq war, for instance; we absolutely should not trust unidentified sources from them when there is a clear political motive at play.

Unproven hearsay in the New York Times has never led to disaster before <cough>Judith Miller</cough>.

What's this "encryption technology"? It just sounds like another case of "a hacker named 4chan", meaning the media throwing around buzzwords like they know what they mean. Encryption is built into technologies, not something that stands on its own like a weapon. It's like saying, "the hacker used http to gain access to the server".

What is the deal with insidesources.com? Something feels a "little off" after browsing the rest of the site. It feels like an undercover PR machine for someone or some group. It also seems like they hate copy editors.

editor of insidesources' twitter handle is @warroomalerts

> writing messages via in-game functions, like spelling words with dropped items or shooting walls

I'm kinda ashamed I haven't thought of that.

I think this may be the most important thing in the article. If the conspirators really were communicating via in-game ephemera, the communication probably wasn't encrypted at all. If so, the hurdle that the security forces faced is not that they couldn't access the communication, but that they couldn't interpret it.

And that just shows how little a mandatory backdoor policy would help: being able to read something doesn't mean you can understand it. It's always possible for someone to come up with a communication scheme that you didn't anticipate or can't interpret, and you can't legislate away that capability.

So a backdoor policy doesn't trade privacy for safety -- instead it trades privacy for a chance at safety, and pretty much just safety from careless, unprepared attackers.

It would not be the first time that obscurity is confused with security.

Oh man. Writing banned words with object drops (or corpses) was classic humor back in the EQ days. Even better if your writing made the zone laggy.

It reminded me of Season 5 (I think) of "The Wire", where Marlo Stanfield's guys communicated by sending each other photos of street maps that indicated meetup locations. It seems easy to catch now, but at the time (2008) sending data (like photos) instead of plain text was still kind of a new thing.

Shame..that they wrote it in the first place.

Care to elaborate?

At the risk of putting words in someone else's mouth, it's a shame that the level of education/awareness regarding the fundamentals of how society works is so lacking.

Without encryption, the internet doesn't work. Without the internet, banking and e-commerce don't work. Really, not a single service that identifies its users can function without encryption.

Also, encryption is based on math and logic. You can't really prohibit people from using it, or from having it, in any sensible way. Even in a fantasy land where it was outlawed, any two nodes could still use encryption should they wish to. I doubt terrorists have much incentive to follow international laws.

In fact, mathematically unbreakable encryption is as simple as generating random data (a one-time pad, https://en.wikipedia.org/wiki/One-time_pad) and keeping a copy of that data at each end of the communication. Want to transfer sensitive data across the border? Just XOR your data with the random data, and carry the random data with you across the border. If the microSD card didn't leave your butt-crack at any point, you can merrily download the XOR-ed data straight through the NSA's mainframe, and they can't do jack about it, even with quantum computing or any imaginable alien technology.
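A minimal sketch of the XOR scheme described above, in Python, using the OS's randomness source as a stand-in for a properly generated pad:

```python
import os

def xor_bytes(data: bytes, pad: bytes) -> bytes:
    """XOR each message byte with the corresponding pad byte."""
    if len(pad) < len(data):
        raise ValueError("pad must be at least as long as the message")
    return bytes(d ^ p for d, p in zip(data, pad))

message = b"meet at the usual place"
pad = os.urandom(len(message))  # truly random, same length as the message

ciphertext = xor_bytes(message, pad)    # safe to send over any monitored channel
recovered = xor_bytes(ciphertext, pad)  # only the pad holder can undo the XOR
assert recovered == message
```

The perfect-secrecy guarantee holds only if the pad is truly random, at least as long as the message, kept secret, and never reused; reusing a pad for two messages leaks their XOR to an eavesdropper.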

To play devil's advocate, saying "but you can never ban encryption anyway" sets up a false dichotomy. You're right, there are open source projects that will survive no matter what you do. But you can certainly force or convince companies like Apple and Google to prevent those apps from being available in their stores, and you can force large companies like Facebook to give you a backdoor.

Sure, someone tech savvy can get around that, but not everyone. Just look at how many criminal cases end up finding emails explicitly talking about plans and crimes. So there are people nowadays not even using encryption when committing crimes. If you make encryption-by-default impossible, the amount of unencrypted messages will increase, and you'll have gained those.

Now whether that's good or bad is a separate discussion; I tend to think it's bad, but that's a more nuanced argument than "you can't ban it, so don't do anything about it".

Considering some of the largest app markets are not run by US (or EU) based companies (e.g. China), the assumption of control is also a fallacy.

"If you make encryption-by-default impossible, the amount of unencrypted messages will increase, and you'll have gained those." That requires you to monitor every message in every form of data, and it rests on the false assumption that banning encryption makes finding messages simpler; you would probably just find more things unrelated to what you're looking for. It also assumes you control all data channels, which even the US doesn't.

On the other hand, this very message would now be flagged: bomb, truck, explode. Meanwhile, you could be looking at other signals, like origins and communication contacts. Those give a clearer picture than the random keywords you will be picking up, and they're available even when the messages are encrypted.

I'm assuming control over specific companies, and claiming that such control would have greater than zero effectiveness.

Don't we assume the NSA is logging any US traffic anyway? Even if not, they could require a direct line into all messages sent via some apps.

> someone tech savvy can get around that

Or someone who has a strong incentive to keep their communication hidden - like a terrorist or a drug enforcer. This leads to a situation where law abiding citizens are punished when their right to privacy is denied, while criminals aren't hindered. "When you outlaw X, X will only be used by outlaws"

My claim is that "criminals aren't hindered" is false. Sure, there are people that will get around it, but as I mentioned, there are plenty of criminals that don't use encryption and have been caught with incriminating emails, so that makes me think that if encryption was made more difficult, some criminals who use encryption now would stop.

Criminals aren't perfect.

You are right, there would be many petty criminals who would be caught because they'd talk about their crimes in plaintext. There would be a reduction in petty crime, while leaving major crime pretty much untouched. That sounds like a good thing... except it might not be in society's interest to eliminate petty crime, as Snowden explains here [1].

[1] - http://nindalf.com/OverconfidentArowana

Would you consider a crime whose settlement is in the hundreds of millions "petty"? If not, then http://www.bloombergview.com/articles/2015-11-05/sanctions-s... is a counter-example.

See also http://www.bloombergview.com/articles/2015-10-20/shh-credit-..., http://www.bloombergview.com/articles/2015-08-18/bny-mellon-... and http://www.bloombergview.com/articles/2015-10-20/accidental-...

Clearly neither being really smart nor having a lot at stake will prevent someone from sending sensitive things over email.

As for terrorists, a short search turns up https://books.google.com/books?id=VqY4Wr3T5K4C&pg=PA410&lpg=....

I think if you claim that major criminals' encryption usage would be unaffected by making it difficult to use, you need to support that at least as well as I've just argued for the opposite.

The existence of encryption is an important part of not living in a police state. It's the freedom not to have the government judge your every word, whether with an algorithm or by being able to look up everything you've said without a warrant, without civil transparency.

Encryption is a basic human right. The freedom to speak without being judged.

I agree and it is ironic as journalism depends upon free speech.

So is life, what do you do when 2 basic rights clash? It's definitely a tricky question, and I certainly don't have the answer.

Encryption or the use thereof doesn't threaten your life. Screwdrivers or the use thereof don't either, however if someone misuses the screwdriver and stabs you in the temple you will die.

Does this mean we should outlaw the use of screwdrivers ?

But do they clash in this case?

The best encryption is simply to communicate the old-fashioned way. Just because they had some PlayStations and access to WhatsApp doesn't mean the terrorists trusted electronic devices.

If your favorite tool is the pervasive surveillance hammer, you try to make everything look like a nail.

I believe the conclusion of the article is the most important takeaway: (so, points for good writing, I guess :) )

> Former CIA Deputy Director Michael Morell said he suspects the Paris attacks will weigh heavily on the ongoing encryption fight.

> “I think what we’re going to learn is that these guys are communicating via these encrypted apps, the commercial encryption, which is very difficult, if not impossible, for governments to break, and the producers of which don’t produce the keys necessary for law enforcement to read the encrypted messages,” Morell said on CBS’ “Face the Nation” Sunday.

> “We need to have a public debate about this,” he continued. “We have in a sense had a public debate — that debate was defined by Edward Snowden, and the concern about privacy. I think we’re now going to have another debate about that — it’s going to be defined by what happened in Paris.”

So "they" want to misuse the attacks on Paris as an excuse to grab mind-share and screen time for a counter-debate about privacy and encryption. But Snowden got to "define the debate" because of the new information (as well as evidence for old information) he brought to the public; it's an entirely different thing for a politician to grab screen time and "define the debate" in front of an audience shocked by a terror attack, which is exactly what the terrorists want. Any freedoms or security taken away from us in response to a terror attack is a win for the terrorists.
