Juniper hack has U.S. fearing foreign infiltration (cnn.com)
226 points by doener on Dec 20, 2015 | 104 comments



Of course, CNN leaves out the fact that the NSA shares substantial blame for this by putting a back door into Dual_EC.

This is an incredibly pointed example of the inherent danger of any cryptographic back door. It's a real shame that the media is both too technically illiterate and too pro-government to explain that.


Can you explain how they are related?

My understanding of this is that malicious code was deliberately added to Juniper's software, not that it exploited some existing code that Juniper thought was safe. This could happen regardless of what kind of encryption is in use in the surrounding code/infrastructure.

If my understanding is correct, why is Dual_EC relevant?

edit: And a follow-on question: If this back door only works by assuming Dual EC is backdoored, is that not incontrovertible proof that the NSA is behind the entire thing, which there is at least some doubt that they are? That, or someone else has found the hypothesized private key in Dual EC. Either scenario seems like far more significant news than this story already is.


Disclaimer: I am by no means a cryptography expert and my understanding of this is based on [1] and [2].

Basically, Juniper used Dual_EC, which they knew was backdoored. Because they knew it was backdoored, they replaced the NSA key with their own, which they thought made it "safe."

Now it turns out that a third actor might have somehow replaced the Juniper key with their own key.

The point is that by using a CSPRNG with a backdoor, even when they tried to close that backdoor, they still left a backdoor open. Dual_EC is relevant because if the USG had never promoted it there never would have been a backdoor to leave open. Another CSPRNG would have been harder to leave insecure.
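To make that concrete, here is a toy analogue of the Dual_EC construction, hedged heavily: it uses exponentiation mod a prime instead of elliptic-curve points, and the parameters P, Q, and d are invented for illustration, not the real constants or Juniper's code. The structural flaw is the same: whoever generated Q knows a trapdoor d with Q = P^d and can recover the generator's internal state from a single output, so swapping in your own Q just swaps whose trapdoor it is.

```python
# Toy analogue of the Dual_EC trapdoor, using modular exponentiation in
# place of elliptic-curve point multiplication. All constants are invented;
# this is NOT the real algorithm or Juniper's code.
p = 1_000_000_007           # public prime modulus
P = 5                       # public "base point"

d = 3                       # trapdoor known only to whoever generated Q
Q = pow(P, d, p)            # public constant: Q = P^d mod p

def step(state):
    """One generator step: returns (next_state, output)."""
    next_state = pow(P, state, p)
    output = pow(Q, state, p)       # equals next_state**d mod p
    return next_state, output

# The victim runs the generator normally.
state = 123456789
state, out1 = step(state)
state, out2 = step(state)

# An attacker who knows d sees only out1, yet recovers the internal state:
e = pow(d, -1, p - 1)               # d^-1 mod (p-1), via Fermat's little theorem
recovered = pow(out1, e, p)         # = the state right after the first step
_, predicted = step(recovered)
assert predicted == out2            # every future output is now predictable
```

Replacing the NSA's Q with your own does not remove this structure; it only changes who holds d. Anyone who later replaces Q again inherits the same capability.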

> If this back door only works by assuming Dual EC is backdoored, is that not incontrovertible proof that the NSA is behind the entire thing, which there is at least some doubt that they are?

Not necessarily. As Juniper was supposedly not using the NSA curve points, it could have been "any" actor that changed the back door, including but not limited to the NSA.

Personally, I don't think it is the NSA in this case. If it were, I don't think we'd be reading about it on CNN at all.

[1] https://www.imperialviolet.org/2015/12/19/juniper.html

[2] https://kb.juniper.net/InfoCenter/index?page=content&id=KB28...


> If it were, I don't think we'd be reading about it on CNN at all.

NSA has absolutely no reason to tell Juniper they were behind this, even if they were. If the blame falls on a foreign actor that's entirely in their interest.

(That's not to say they did it, of course. We don't know. But the fact that CNN writes about it should not be taken as evidence either way.)


It's even more complicated than that - Juniper used Dual EC, and changed the Dual EC parameters, but ultimately the output of that PRNG was being used to seed a different PRNG (probably for speed purposes).

From your 2nd link:

> ScreenOS does make use of the Dual_EC_DRBG standard, but is designed to not use Dual_EC_DRBG as its primary random number generator. ScreenOS uses it in a way that should not be vulnerable to the possible issue that has been brought to light. Instead of using the NIST recommended curve points it uses self-generated basis points and then takes the output as an input to FIPS/ANSI X.9.31 PRNG, which is the random number generator used in ScreenOS cryptographic operations.

Because of this, it's not entirely clear at this point that an attack would have been feasible even for an actor that had the P and Q used for Dual EC here[1].

[1] https://twitter.com/pwnallthethings/status/67837170536721203...


However: even in this setting, all it takes is a single unauthorized call to Dual EC and an exfiltration of 240 bits to obtain the values used in all subsequent re-seeding of the ANSI generator. We already know there is unauthorized code in ScreenOS based on Juniper's admission. So the next step is to determine whether something like this has occurred.
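A minimal sketch of why the cascade doesn't rescue things, assuming the attacker can reconstruct the Dual_EC output used for re-seeding (Python's random.Random stands in for the X9.31 generator; none of this is ScreenOS code):

```python
import random

def dual_ec_output():
    # Stands in for the raw Dual_EC output that a trapdoor-holding attacker
    # can reconstruct from a single exposed generator call.
    return 0xDEADBEEF

# The device seeds its "real" PRNG from Dual_EC output...
device = random.Random(dual_ec_output())

# ...so an attacker who recovers that output owns the downstream stream too.
attacker = random.Random(dual_ec_output())
assert [device.random() for _ in range(5)] == [attacker.random() for _ in range(5)]
```

A cascade is only as strong as the secrecy of its seed; the open question raised here is whether such an exposure actually occurred.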


If code was added to leak the state of the PRNG, then whether or not Dual EC is used becomes a non-issue. The person who created the backdoor could leak the state regardless of which PRNG was used.


I think it's less of an "NSA directly did this" and more of a "NSA invented cryptography that absolutely no one thought was a good idea."

The juicy bit, and the piece that really is obnoxiously bad, is that NIST, who decides cryptographic standards, contracted out to two organizations when they were looking at introducing new cryptography in 2006. Those two organizations? RSA and NSA. NIST paid both of them for the privilege of getting their insight.

NSA had been pushing Dual_EC for a couple of years at that point, and, wanting people to take the bait, secretly gave RSA $10 million to start using it in some of their products and to tell NIST that they thought it was cryptographically sound. All completely behind the backs of NIST and the public.

So NIST was consulting with two of the biggest names in cryptography, one of which had been championing Dual_EC for years (NSA), and one of which had started using it in its products (which are primarily sold to government agencies that MUST use NIST-approved crypto, and which presumably stood to lose a lot of money by "betting" on Dual_EC).

NIST never saw it coming.

And that's the irony of the whole thing. NSA is supposed to make the US, especially the US government, more technologically secure... yet they directly undermined cryptography that was predominantly used by government agencies. Meanwhile, the rest of the tech world saw it for the bullshit it was. The only people hurt by it were our own government.

Thomas Massie gave a great speech to Congress about this earlier this year, and pushed through a vote that now prevents NIST from contracting with the NSA. I wish I could find it.

(Aside: Thomas Massie, a congressman from Kentucky, graduated from MIT with a BS in EE and an MS in ME, founded a successful tech startup, and now serves on the Committee on Science, Space and Technology. One of the few cases where I feel someone in our government is adequately educated in what they rule over.)


EDIT: Just saw that this recent Omnibus budget bill stripped out his restriction at the last minute. NIST can still pay NSA to lie to them.


> NIST never saw it coming.

How so? The NSA is a spy agency, they are the one institution that you clearly should NOT ask.

BTW I love the USA Committee on Science, Space and Technology[1], full of geniuses.

[1] https://www.youtube.com/watch?v=lPgZfhnCAdI&spfreload=10


> How so? The NSA is a spy agency, they are the one institution that you clearly should NOT ask.

It's also the largest employer of cryptographers in the world, and it is routinely sought out for input by NIST, usually to no ill consequence.


That's certainly true now, but it hasn't always been such a given: the NSA also has a defensive mission, even if they've grossly neglected it, and in the past they acted to make crypto standards safer:

https://en.wikipedia.org/wiki/Data_Encryption_Standard#NSA.2...


It's hard to remember what the world was like in 2006, but no one thought the NSA had gone rogue back then.


Except Congress?

http://www.cnn.com/2006/POLITICS/01/01/nsa.spying/

Or does criticizing the executive branch by calling their phone monitoring program illegal not reach the bar set by gone rogue?


Well, to that point, NIST contracted out both with NSA and with RSA. At least they didn't give NSA their complete trust.

The nasty part is that NSA went behind everyone's backs and bribed RSA for their own nefarious purposes. I don't think anyone saw that coming.


There are two separate, unrelated issues (described in the same advisory from Juniper).

One is that they found code that shouldn't be there, allowing remote SSH login to attackers. The other is weaknesses related to Dual_EC.


If not all the blame.

Consider that this is NSA backdoor 2.0 and they got caught. Wouldn't their answer be exactly what they're saying now? Confess nothing, use it as a convenient excuse to get more funding, and as a reason to deploy v3.0.


Lazy reporting: "The U.S. officials said they are certain U.S. spy agencies themselves aren't behind the back door."

If you're not going to provide context or get a quote from a disinterested party, then just omit the US saying the US didn't do it.


It is guaranteed that they would say that regardless of the truth, so the quote source is irrelevant really.


Of course they said that...

This feels incredibly uncomfortable to say, but in a decade or two it seems possible, or maybe even likely, that China, Russia, and South Korea may have an edge in encryption technology and products by virtue of them actually being secure. If they take cyberwar seriously and think their economy has anything to do with national security, they'll pour energy into securing their businesses, whereas the West does the opposite.


Lazy? Try spoon-fed. This whole piece stinks of astroturf.


They're always certain about who was not behind something. Those statements never have supporting evidence.


We clearly need to give up more privacy and add more fast-lane backdoors to protect against these attacks.


I know talking politics on this site is taboo, but this is a nice segue into a conversation about who everyone is voting for... There doesn't appear to be a serious contender on either side who isn't making ridiculous claims about the dangers of encryption.

The audacity of Hillary Clinton last night: admitting she doesn't know a whole lot about encryption, but thinking for some reason that the tech community is on her side, because... ISIS?

Is there anyone in tech who actually agrees with these people? I'm being serious; am I just shielded from that side of the conversation? Are there educated people who understand encryption, and who don't work for the NSA, who think key escrow is a reasonable request?

It leads me to the conclusion that policymakers are that out of touch with the world.


You seem to have left out the fact that Hillary Clinton also ruled out backdoors last night. That is quite a significant statement, in my opinion.


She called for a Manhattan-like project to give law enforcement exceptional access without breaking encryption because "maybe the back door is not the right door." Hardly a ringing endorsement for privacy!


That's political doublespeak. No one is going to sit up on a stage and say that they think "backdoors" are a good idea.

But if you've been paying attention the past few months, when they're not on stage, both Hillary and Obama have been calling for key escrow. To think she's changed from that while still using the same exact statement of, "Silicon Valley geniuses are going to come around and be our Manhattan project" is wrong.


We hear a lot of things from candidates. But something seems to happen to frosh presidents that makes them go all squishy when it comes to screwing with the natsec apparatus, something more tangible than just the usual collection of forgotten campaign promises. Something more effective than a simple "Four Horsemen" briefing that the CIA/NSA/ETC dust off every fourth January.

On the other hand, politicians are snakes and will say anything to get elected. So there's that.


It's easy to sit behind a computer or on a debate stage and repeat trite aphorisms like "we can't trade our privacy for security". It's very different when you are personally responsible for the security of not only 300 million Americans, but of the entire world. When you get briefings every day about the activity of people and organizations who will do everything that they can to kill as many innocent people as possible.


Backdoors aside, I'm wary of what the proposed Manhattan project would be without specifics. Without details it sounds like a continuation or expansion of the current NSA bulk surveillance programs, which while not explicit encryption backdoors themselves (Dual EC aside), are not an ideal solution.


Indeed! The best thing to do would to be to ban encryption all together! /s


Relevant and from last night's Democratic Debate.

RADDATZ: You'll be happy. I'll let -- I'll let you talk then.

Secretary Clinton, I want to talk about a new terrorist tool used in the Paris attacks, encryption. FBI Director James Comey says terrorists can hold secret communications which law enforcement cannot get to, even with a court order.

You've talked a lot about bringing tech leaders and government officials together, but Apple CEO Tim Cook said removing encryption tools from our products altogether would only hurt law-abiding citizens who rely on us to protect their data. So would you force him to give law enforcement a key to encrypted technology by making it law?

CLINTON: I would not want to go to that point. I would hope that, given the extraordinary capacities that the tech community has and the legitimate needs and questions from law enforcement, that there could be a Manhattan-like project, something that would bring the government and the tech communities together to see they're not adversaries, they've got to be partners.

It doesn't do anybody any good if terrorists can move toward encrypted communication that no law enforcement agency can break into before or after. There must be some way. I don't know enough about the technology, Martha, to be able to say what it is, but I have a lot of confidence in our tech experts.

And maybe the back door is the wrong door, and I understand what Apple and others are saying about that. But I also understand, when a law enforcement official charged with the responsibility of preventing attacks -- to go back to our early questions, how do we prevent attacks -- well, if we can't know what someone is planning, we are going to have to rely on the neighbor or, you know, the member of the mosque or the teacher, somebody to see something.

CLINTON: I just think there's got to be a way, and I would hope that our tech companies would work with government to figure that out. Otherwise, law enforcement is blind -- blind before, blind during, and, unfortunately, in many instances, blind after.

So we always have to balance liberty and security, privacy and safety, but I know that law enforcement needs the tools to keep us safe. And that's what I hope, there can be some understanding and cooperation to achieve.

RADDATZ: And Governor O'Malley, where do you draw the line between national security and personal security?

O'MALLEY: I believe that we should never give up our privacy; never should give up our freedoms in exchange for a promise of security. We need to figure this out together. We need a collaborative approach. We need new leadership.

The way that things work in the modern era is actually to gather people around the table and figure these things out. The federal government should have to get warrants. That's not some sort of passe you know, antique sort of principle that safeguards our freedoms.

But at the same time with new technologies I believe that the people creating these projects -- I mean these products also have an obligation to come together with law enforcement to figure these things out; true to our American principles and values.

My friend Kashif, who is a doctor in Maryland; back to this issue of our danger as a democracy of turning against ourselves. He was putting his 10 and 12-year-old boys to bed the other night. And he is a proud American Muslim. And one of his little boys said to him, "Dad, what happens if Donald Trump wins and we have to move out of our homes?" These are very, very real issues. this is a clear and present danger in our politics within.

We need to speak to what unites us as a people; freedom of worship, freedom of religion, freedom of expression. And we should never be convinced to give up those freedoms in exchange for a promise of greater security; especially from someone as untried and as incompetent as Donald Trump.

RADDATZ: Thank you, Governor O'Malley.

https://www.washingtonpost.com/news/the-fix/wp/2015/12/19/3r...


Interesting. The "new terrorist tool used in the Paris attacks, encryption" is far from new, and that story has revolved around the PlayStation Network (which the media told us was used by the Paris terrorists, despite the originators of that rumor retracting their story about Jambon).

The "problem" here is not secure communication. It is media propaganda / information warfare: Facebook and Twitter being used to instill hate and spread conspiracies. It is all in the open: Facebook images stating that Israel is behind ISIS, or Twitter accounts that post nothing but Anwar al-Awlaki videos. Could you imagine that happening 10 years ago, on your own homepage, without being raided? If Twitter can block porn, surely they can block terrorist propaganda too. But law enforcement probably wants to use these accounts for fishing expeditions. Instead, Clinton wants to build another nuke.

> "Dad, what happens if Donald Trump wins and we have to move out of our homes?"

These are propaganda tactics close to character assassination. In the next sentence he says that "freedom of expression" unites us, but when Trump uses this freedom of expression he is suddenly scaring Muslim kids. Very recognizable.

> especially from someone as untried and as incompetent as Donald Trump.

You just know that they made this a talking point, a hook. And O'Malley wrestled it into his answer, because that is what he had prepared.

> where do you draw the line between national security and personal security?

Donald Trump is incompetent. Next question! Next!


> It doesn't do anybody any good if terrorists can move toward encrypted communication that no law enforcement agency can break into before or after. There must be some way. I don't know enough about the technology, Martha, to be able to say what it is, but I have a lot of confidence in our tech experts.

It's time politicians realized that it's not a matter of technology, but of a crazy reality and a bad political climate.

We live in an age of three-letter agencies abusing their powers for god-knows-what reasons, of dumb TSA employees publishing photos of "TSA lock" master keys on the Internet, and of Russian/Chinese/Nigerian hackers replacing NSA crypto backdoors with their own.

The potential usefulness of crypto backdoors is too great, and communication technology too advanced, to keep such backdoors in trusted hands. Either we work to be damn sure that there are no known vulns and backdoors whatsoever, or there will be a backdoor for everybody and his dog. No technology can provide a Hollywood-style "find and decrypt the bad guys" button devoid of nasty side effects.

Not to mention that the Paris terrorists used the uber-secure, NIST-certified DOUBLE-ROT13 for their cunning scheme (i.e., f#$^&n plaintext).


Encryption? We need to go a step further. Let's ban anything remotely secure so we can stop these backdoors with our own dozens of other backdoors!


If we make enough back doors the hackers won't know which door to take! They'll be trapped by indecision!


Who are these "officials" and why are they anonymous here?

Anyway, what's most likely to have happened is that other nation-states discovered the NSA's own backdoors and started using them. The NSA then freaked out and told Juniper it's OK to patch them now. The reason I'm implying cooperation between Juniper and the NSA is that Juniper keeps refusing to eliminate the well-known Dual_EC backdoor from their systems and keeps giving stupid reasons for keeping it.


Could there be a legal case involved here? I mean, what if I as a customer paid for a door lock which was marketed as "secure," and you knowingly shipped me a door lock which can be subverted by tools some particular burglars are known to have access to? That sounds quite malicious, to say the least.


At least in the literal sense, that's what you are getting when you buy a door lock marked "secure".

I don't think a product like this has really been "secure" since the early 1800s or earlier.


Sure... "The U.S. officials said they are certain U.S. spy agencies themselves aren't behind the back door." because if they were they would immediately tell everyone. It's not like they have a political agenda to both deny involvement and blame foreign agencies... /s

Either way this will have a big economic impact for Juniper. In no way are they going to win the next big banking or government contracts.


> Either way this will have a big economic impact for Juniper. In no way are they going to win the next big banking or government contracts.

If I were in a three-letter position at any big company, I'd choose Juniper over Cisco gear now. Juniper has proven they do code audits and throw such stuff out if they find it. Cisco, not so much.


I like the reassurance of Juniper doing code audits. If they do them regularly, though, why did it take them over three years to find this "unauthorized code"?


I can imagine that the software going into a typical router is as complex and huge as the kernel. I'd guess they do rolling audits because of the complexity.


> Note that a skilled attacker would likely remove these entries from the log file, thus effectively eliminating any reliable signature that the device had been compromised

Sending logs off as they're written to a centralized logging server or a time-series database would have been useful in this context.


> Sending logs off as they're written to a centralized logging server or a time-series database would have been useful in this context.

Schneier & Kelsey's paper Secure Audit Logs to Support Computer Forensics [1] seems relevant here. At the time, I know they wished to patent the work; does anyone know if a patent was granted, and if so, when it expires?

[1] https://www.schneier.com/cryptography/paperfiles/paper-audit...


I would also assume you could do a blockchain-type log, i.e., the hash of each entry includes the hash of the previous entry, so that if one entry is removed, the whole thing is tamper-evident.

Of course, then you need to ensure the integrity of the latest hash (if you can change one log, you can change them all). If the centralized logging server(s) are protected with the same rooted ssh, then it's just an additional step for our sophisticated state-sponsored boogeyman.
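A minimal sketch of the hash-chained log described above, using plain SHA-256 and no signing; in practice you would also sign or remotely anchor the head hash, for exactly the reason the parent gives:

```python
import hashlib

GENESIS = "0" * 64

def append(log, entry):
    """Append an entry whose hash covers the previous entry's hash."""
    prev = log[-1][1] if log else GENESIS
    digest = hashlib.sha256((prev + entry).encode()).hexdigest()
    log.append((entry, digest))

def verify(log):
    """Recompute the chain; any edited or removed entry breaks it."""
    prev = GENESIS
    for entry, digest in log:
        if hashlib.sha256((prev + entry).encode()).hexdigest() != digest:
            return False
        prev = digest
    return True

log = []
for line in ["admin login", "config changed", "admin logout"]:
    append(log, line)
assert verify(log)

# Tampering with one entry is detected, even if its stored hash is kept.
log[1] = ("config unchanged", log[1][1])
assert not verify(log)
```

The scheme makes tampering evident, not impossible: an attacker who controls the machine holding the chain head can still rewrite everything after it, which is why shipping entries to a separate server matters.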


Just print the logs as they come in.


Why would anyone choose to use Dual_EC even with their own keys unless they are completely ignorant or coerced? It is especially galling for people who sell supposedly secure products. I can't imagine that any competent CTO/CSO would sign off on such a stupid choice.


Does Juniper not have source control? Don't they know who inserted this code?


Lots of source control systems don't offer that level of verification. Even Git, with SHA-1 hashes everywhere, lets you claim to be anybody when you write a commit. The only thing you can verify the origin of in Git is the author of a tag signature, for tags that are signed with a PGP key.


> In more recent versions of Git (v1.7.9 and above), you can now also sign individual commits.

https://git-scm.com/book/en/v2/Git-Tools-Signing-Your-Work

Does this resolve the issue for git? Or is it still possible to subvert?


That only solves authentication, not authorization.


So you can trace for sure who made every commit, but you're saying an unauthorized actor can still add a commit? Well, you would detect that in code review before the release.


git supports signed pushes as well as signed commits and tags. The signed-pushes feature had a vulnerability, though:

http://mikegerwitz.com/papers/git-horror-story

https://developer.atlassian.com/blog/2015/05/git-horror-stor...


Well, that's the commit. Usually the "push" operation to the origin is authenticated, isn't it?


In most common configurations, pushes are authenticated but not logged, and there is no association between the push origin and the contents. If multiple employees have push access to a common repo, then Alice can push a commit which claims to be written by Bob, nobody would likely notice, and there would be very little forensic evidence, if any.


> nobody would probably notice

Really? Most two-bit cat selfie startups are at least aware enough to notice a phantom commit from Bob, but Juniper isn't?!


Doesn't it depend on how good their code reviews and release engineers are? You also forget how the negative attributes of big corporations (e.g., size, bureaucracy) can make it difficult to notice a phantom commit from Bob, especially if Bob is asleep at the wheel or not motivated.


'Minor changes'


Which is why commercial crypto and git should never be in the same room together.


What do you recommend instead?


"even git"?

I'm not sure git is the best example of a secure system. Commercial source control systems have more serious authentication.


"Even git" because Git is known to use cryptographic hashes and cryptographically signed tags to verify contents, whereas older systems like SVN and CVS had no such structure.


Yeah, but what I think he means is that in any field where security is important, SVN and CVS aren't relevant in any way. Compared to others in that area, git is not the best.


So another poster mentioned Perforce. Which "others" are you referring to?


Of course, OpenBSD uses CVS exclusively.


Indeed. Serious players use Perforce and consider it to be worth every penny.


But it certainly was universally hated by the developers at my last employer.


The Perforce website just talks about git. Does the old Perforce VCS still exist?


Yes. Their git stuff is a front end so users can use git, which then funnels it to the back end. It works like Git-TFS or the built-in Git integration in Visual Studio.

What used to be the central p4 server was renamed Helix, turned into a federated architecture, and there's some sort of data exfiltration protection option that is based on Interset's behavioral analytics stuff.

As far as I can tell from the marketing materials, the idea seems to be that if a coder starts regularly looking at the sales numbers, it files a report. Maybe someone needs a reminder about the consequences of insider trading, but maybe they just want to know if the project's getting traction in the real world, and the false alarm can be resolved with a brief chat. If someone tries to clone an entire corporate monorepo onto a salesperson's workstation at 3 in the morning local time, shut it down, because the most likely explanation is that the salesperson's workstation has been compromised.


"Commercial source control systems" are usually a steaming pile of crap, and I'd take git over anything that was bought by the higher-ups in ties.


What do they use?


Probably something with Dual_EC in it...


One would hope. Yet, SCM security is so lax and such an afterthought that Wheeler had to write a whole piece on it:

http://www.dwheeler.com/essays/scm-security.html

After all that, only one OSS project responded with claims to meet many of the requirements. I figure the commercial situation isn't much better with most "benefits" existing on paper rather than with strong security.

To top it off, anyone defending against nation-states must remember they always attack what's below and around the software. Possessing 0-days in the OS or management software should let them bypass build-system security to insert stuff. I'd say OpenBSD, a memory-safe implementation, and a highly-assured guard at the protocol level, at a minimum, if better stuff isn't available.


If you have access to the build server you can modify the code, after pulling from the repo.

Or, if you have access to the binary repo where the code is pushed by the build server after building (often just a file server and FTP) then you can place your own binary.

Of course it is possible to use crypto hashes throughout the process to prevent these kinds of hacks from working, but...

If you do that, then the security process itself is what hackers will attack.

To prevent installation of exploits you really need to pay serious attention to the whole release process and not assume that anything is simple or secure. Only the paranoid can succeed in security.
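As one concrete link in such a chain, the deploy step can refuse any artifact that does not match a hash published out-of-band. A minimal sketch, with invented artifact names; a real pipeline would also sign the published hash, since an attacker who can alter the artifact may be able to alter a co-located hash file too:

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hash obtained through a separate, trusted channel (e.g. signed release notes).
# Computed inline here only so the sketch is self-contained.
published_hash = digest(b"example-firmware-image")

def ok_to_deploy(artifact: bytes) -> bool:
    """Deploy gate: only ship bytes that match the published hash."""
    return digest(artifact) == published_hash

assert ok_to_deploy(b"example-firmware-image")
assert not ok_to_deploy(b"example-firmware-image" + b" + unauthorized code")
```

As the parent says, this just moves the target: the hash publication and verification steps themselves then need protecting.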


Might be the NSA, might be a foreign competitor to the NSA. In any case, don't do to others, what you wouldn't want done to you!


Right, that's the spy agency motto, isn't it?


One can start to question using proprietary, closed-source security software and devices versus using open source.

If it's in one device, it may be in others.


Open source is not a panacea. It doesn't matter if the code is perfect, if the algorithm it implements is flawed.


Can we have a discussion about not every nation state being NSA and what that means?


Juniper provides crucial radio-access and backhaul security features to mobile network vendors, such as in Nokia / NSN equipment. It would be interesting to understand how these are impacted.

http://www.lightreading.com/mobile/mobile-security/nsn-junip...


CNN is the People magazine of cable network news. I suppose this story is important because it's exposing this issue to the wider American audience, but anyone who is reading HN already knows about this issue.


I hope this may make them think more seriously about the potential security threat that a so-called "government backdoor" would pose.


> A senior administration official told CNN, "... The administration remains committed to enhancing our national cybersecurity by raising our cyber defenses"

I hope you can see my eyes because I'm rolling them as hard as I can.

This does raise the question of whether the US government is foolish enough to use insecure US-made electronics. And I suppose it answers it.


Oops, we didn't notice that China/Whoever was using our own back door for 3 years!

Clearly we need even MORE back doors! /s


Is that a big deal? After all, it's common knowledge that the US spies routinely on its allies [1, 2]. Nobody really seemed to care at the time.

[1] http://www.theguardian.com/us-news/2015/jul/08/nsa-tapped-ge...

[2] https://wikileaks.org/nsa-france/


I think the implied undercurrent is that there are countries like China who don't play by the normal rules of international espionage. Since their agencies are so tightly coupled to Chinese industry, they abuse their resources and power to feed domestic industry. To most people this seems very scummy.

Are there examples the other way? Where the US stole secrets and then handed them off to domestic companies? Maybe in defense, but otherwise?


Yes, there have been such allegations. A recent case involves Airbus who allege industrial espionage in the context of revelations that the German BND helped the NSA spy on European businesses: http://www.bbc.com/news/world-europe-32542140


Your link does not support your claim.

> Leaks from a secret BND document suggest that its monitoring station at Bad Aibling checked whether European companies were breaking trade embargos after a request from the NSA.

If all the NSA did was look for illegal (off the books) sales of military equipment, that's exactly the kind of thing that's easy to justify ethically. If they were taking Airbus data and funneling it to Boeing, that would be highly unethical.

Is there any evidence the NSA has ever been motivated by economic espionage?


Indeed, the link only confirms that the allegation had been made; whether it has ever occurred is rather harder to confirm, of course.


The whole scandal with Brazil was a bit of a problem too, although the Petrobras incident is a little fuzzy (the way they like it), as it's very closely tied to the state. http://www.theguardian.com/world/2013/sep/09/nsa-spying-braz...


You can't consistently maintain that real or alleged spying by NSA is a big bad deal, but the same thing happening to the US isn't a big bad deal, unless you just make a special exception for the sake of anti-Americanism.


Well, it is an American news outlet. I'm pretty sure media outlets in other countries report stuff from that country's point of view...


Occam's Razor would say that the code was modified by a disgruntled employee just before leaving, no? Why is the jump being made to foreign governments?


Occam's Razor is heavily abused. It has a specific definition and is typically applied to scientific and metaphysical hypotheses [1], not human behavior. It is used as "a heuristic technique" and "is not considered an irrefutable principle of logic or a scientific result." Occam's Razor says nothing about what the actual truth is; it's just a tool to use when choosing which hypotheses to pursue.

Even if we were to contort Occam's Razor into this use case, there is ample evidence of state actors adding, or requesting, back doors in major routing hardware.

[1] https://en.wikipedia.org/wiki/Occam%27s_razor


As the thread has evolved, I am seeing better explanations of what happened, as well as better explanations of how the underlying technology worked. So I'll take back my idea on the disgruntled employee. I originally posted when I saw very few comments.

That being said, I'd like to clarify thoughts on Occam's Razor. It's a bit off-topic, but I think it would be an enlightening discussion and don't expect it to go on long. The rest of the discussion here about OP is far more interesting. I just appreciate it if someone points out when my logic is flawed, so I hope you take it the same way.

> It is used as "a heuristic technique" and "is not considered an irrefutable principle of logic or a scientific result." Occam's Razor says nothing about what the actual truth is - it's just a tool use when choosing hypotheses to pursue.

I don't think you and I disagree on anything here. I am surprised and curious as to why you might think I thought otherwise. Using your words instead of mine, I was curious why the investigation was so quickly pursuing the hypothesis of foreign governments hacking Juniper's engineering team to plant code. My question has now been answered as the thread evolved.

I think that you understand Occam's Razor's letter and how it is used in academic circles, but you do not understand how its spirit still applies in non-academic circles.

Occam's Razor simply states (quoting from the Wikipedia article): "Among competing hypotheses, the one with the fewest assumptions should be selected." Hypotheses are theories on how the world works. Detectives also have their hypotheses in their investigations on what happened in a crime. Infosec professionals also have their hypotheses on what makes a newfound piece of malware tick.

Kepler, the FBI agent that chases serial killers, and the Symantec guys who discovered Stuxnet all share in common the dogged determination to figure out the truth by chasing facts. If you want to be a stickler on semantics and say that Occam’s Razor is referring to scientifically testable hypotheses, it seems a bit too dogmatic to say that we can’t scientifically test with experiments whether someone is the person that stuck the knife in another person, so Occam’s Razor should never be discussed. Good (I emphasize good) police work doesn’t make it necessary to design an experiment for that.

Focusing on simpler ideas that fit the facts is more profitable than focusing on grandiose ideas, even if grandiose ideas also fit the facts, if both ideas explain the subject equally (equal quality of explanation being a key point of consideration, as discussed in the Wikipedia article). Those grandiose ideas aren't discarded because Occam's Razor is indeed only "a heuristic technique" and "is not considered an irrefutable principle of logic or a scientific result." If more facts arise that cause the simpler explanations to lose weight, the investigator changes course.

I fear you are projecting too many assumptions onto people who cite Occam’s Razor, thinking that they’re taking Occam’s Razor to mean license to conclude an argument as finished when most people likely are not doing so. Certainly, I wasn’t. :)


> Focusing on simpler ideas that fit the facts is more profitable than focusing on grandiose ideas, even if grandiose ideas also fit the facts

Prove it.

There is no scientific basis for Occam's razor. It's just a rule of thumb, used by its originator to suggest that the simplest explanation for everything is God. Feel free to use it for ordering possible explanations by complexity, but don't mistake it for a scientific tool, because it isn't.


Well, this is getting a bit longer than I wanted for an off-topic thread, but... Prove it. OK, I'll see what I can do to change your mind. :)

> There is no scientific basis for Occam's razor. It's just a rule of thumb

YES

> Feel free to use it for ordering possible explanations by complexity

YES

> but don't mistake it for a scientific tool, because it isn't.

Wrong. As was stated by colordrops above, it's a heuristic technique. As you said, it's useful for ordering levels of complexity. Where you and he may disagree with me is that such a tool is an excellent tool for pursuing investigations.

This is where my choice of the word "profitable" comes into play. Someone is figuring out some mystery and has hypotheses X and Y. For Occam's Razor to apply, X and Y have to have equal predictive performance, but Y is more grandiose. As such, X is more likely, per Occam's Razor. Pursuing X will either result in X being proven correct (at least until new contradictory information arises later), or quickly pivoting to Y (or in the worst-case scenario, new hypothesis Z). This course of action results in the least amount of time and resources wasted. It is by definition more profitable. You could choose to investigate Y first, but being more grandiose, it will be much more difficult to test Y. If Y turns out to be wrong, you will have wasted much more time and resources. If someone chooses Y first, it's because X and Y are NOT equal, and Occam's Razor then would not be relevant.
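The "profitable" argument above can be made concrete with a toy decision-theory check. All numbers here are hypothetical, just to illustrate the expected-cost comparison between testing the cheap hypothesis X first versus the grandiose hypothesis Y first:

```python
# Hypothetical numbers only: compare the expected total investigation
# cost of testing the cheap hypothesis X first versus the expensive
# hypothesis Y first, assuming exactly one of the two is true.
cost_x, cost_y = 1.0, 10.0   # cost to fully investigate each hypothesis
p_x = 0.5                    # prior probability that X is the true one

# Test X first: always pay cost_x; pay cost_y only if X turns out false.
x_first = cost_x + (1 - p_x) * cost_y   # 1 + 0.5 * 10 = 6.0

# Test Y first: always pay cost_y; pay cost_x only if Y turns out false.
y_first = cost_y + p_x * cost_x         # 10 + 0.5 * 1 = 10.5

# With equal priors, cheaper-first wins whenever cost_x < cost_y.
assert x_first < y_first
```

With equal priors, the expected cost of cheap-first is strictly lower whenever the cheap hypothesis really is cheaper to test; the ordering only flips if the grandiose hypothesis is much more probable a priori.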

This guy gives a much better explanation of Occam's Razor than I ever could: https://www.quora.com/Why-is-Occams-Razor-true

Most changes in accepted theories happen not because Occam's razor picked them, but because new data falsified the preceding theory that Occam's razor had picked, turning the old one into merely a useful model and eliminating it from discussion.

Seriously, why would anyone jump to investigate a more complicated hypothesis when there's a simpler hypothesis to investigate, and both hypotheses have equal explanatory power and supporting evidence? It is far less expensive to investigate the simpler hypothesis first. It certainly is not acceptable to be dismissive of the simpler hypothesis.

At the time of my original comment, there was almost no information in the thread. There certainly was zero information in the OP article. Given multiple possible explanations, I could not see any reason why the article jumped to the foreign nation state explanation. Imagine what that would have involved:

- hacking their engineering team to remotely modify the code

- infiltrating their engineering team with a mole to modify the code

Both of these are big conspiracy theories. With no additional information, what is more likely? That or:

- crap, that guy that left the company in a huff really hated us and had to screw us one last time

- whoops, our engineering team accidentally screwed up bad

Since when is jumping to conspiracy theories without good reasons an acceptable pattern of logic?

edit: formatting


You're right about the efficiency of analyzing the simpler hypothesis first. This kind of reasoning reminds me more of Pascal's Wager than Occam's Razor, but it's all good. We can use decision theory to prove that it makes sense.

However, you can't go from "according to Occam's Razor, let's test the simpler stuff first" to "according to Occam's Razor, the simple stuff is more likely to be true". And that's exactly what you did when you wrote "Occam's Razor would say that the code was modified by a disgruntled employee just before leaving, no?".

> Both of these are big conspiracy theories.

Proven to be true, in the past, by whistleblowers. There's also the choice of an encryption scheme that no sane actor would choose on its own. So the likeliness of a malicious state-level attack is higher than the baseline you probably assign to accident / disgruntled employee scenarios.
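For the curious, the "encryption scheme no sane actor would choose" point refers to Dual_EC's designer-chosen constants. The trapdoor idea can be sketched with a toy analogue — this uses modular exponentiation instead of elliptic-curve points, with made-up constants, so it is emphatically NOT the real construction, just an illustration of why whoever picks the two public constants together can predict the generator:

```python
# Toy analogue of the Dual_EC_DRBG trapdoor, using the multiplicative
# group mod a small prime instead of an elliptic curve. NOT real
# crypto; it only illustrates that whoever chose the two public
# constants P and Q together (so that P = Q^e) holds a backdoor.

p = 1019          # small prime; all arithmetic is mod p
g = 2             # a generator of the group (fine for illustration)

b, e = 3, 341     # e is the trapdoor exponent known only to the designer
Q = pow(g, b, p)
P = pow(Q, e, p)  # the designer publishes P and Q, never e

def drbg_step(state):
    """One step of the toy generator: emit an output, advance the state."""
    output = pow(Q, state, p)      # what the user sees
    next_state = pow(P, state, p)  # kept secret inside the generator
    return output, next_state

# Honest user runs the generator from a secret seed.
state = 123
r1, state = drbg_step(state)
r2, state = drbg_step(state)

# Attacker sees only r1 but knows the trapdoor e:
# r1^e = (Q^s)^e = (Q^e)^s = P^s = the generator's next state.
recovered_state = pow(r1, e, p)
predicted_r2, _ = drbg_step(recovered_state)
assert predicted_r2 == r2   # every future output is now predictable
```

This is why Juniper swapping in their own Q didn't remove the backdoor, it only changed who held it; and why a third party swapping the constant again re-keys the same backdoor for themselves.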


I see that my original choice of words was poor, and it could be taken the way it was. Thanks for showing me that. Not my intention. Hope that my intention was made clear in the second sentence: "Why is the jump being made to foreign governments?"

> There's also the choice of an encryption scheme that no sane actor would choose on its own. So the likeliness of a malicious state-level attack is higher than the baseline you probably assign to accident / disgruntled employee scenarios.

Quite possible. Not my area of expertise, so I will defer to others like yourself. :)


Bypassing VPN encryption is only broadly useful if you can get access to packets in transit. Nation-state agencies are the only actors that probably already have such wiretaps. (As discussed here: https://www.imperialviolet.org/2015/12/19/juniper.html)


Non-nation-states could still get access to a reduced set of packets. I can sit next to you in a coffee shop with public wifi and see everything in the clear if I can decrypt your VPN.

Also, maybe they never intended to use it, and just wanted to make their company look bad.


Is this guy sitting in a coffee shop with a 2U rackmount ScreenOS-based hardware VPN connected to his laptop? If not, this particular attack that the article is discussing wouldn't work.


I choose to blame Comcast -- with that backdoor in place, they have the resources to decrypt about... 300 GB/month per customer of traffic. Very interesting :D



