This is an incredibly poignant example of the inherent danger of any cryptographic back door. It's a real shame that the media is both too technically illiterate and too pro-government to explain that.
My understanding of this is that malicious code was deliberately added to Juniper's software, not that it exploited some existing code that Juniper thought was safe. This could happen regardless of what kind of encryption is in use in the surrounding code/infrastructure.
If my understanding is correct, why is Dual_EC relevant?
edit: And a follow-on question: if this back door only works by assuming Dual_EC is backdoored, is that not incontrovertible proof that the NSA is behind the entire thing, something there is at least some doubt about? That, or someone else has found the hypothesized private key in Dual_EC. Either scenario seems like far more significant news than this story already is.
Basically, Juniper used Dual_EC, which they knew was backdoored. Because they knew it was backdoored, they replaced the NSA key with their own, which they thought made it "safe."
Now it turns out that a third actor might have somehow replaced the Juniper key with their own key.
The point is that by using a CSPRNG with a backdoor, even when they tried to close that backdoor, they still left a backdoor open. Dual_EC is relevant because if the USG had never promoted it there never would have been a backdoor to leave open. Another CSPRNG would have been harder to leave insecure.
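To make the mechanism concrete, here is a toy sketch (my own illustration, with made-up tiny parameters; nothing here matches the real P-256 constants) of why whoever generates the Dual_EC points holds a skeleton key:

```python
# Toy demo of the Dual_EC_DRBG trapdoor on a tiny curve: y^2 = x^3 + x + 1
# over F_23.  All parameters are illustrative; the real generator uses the
# NIST P-256 curve and truncates the top 16 bits of each output, so a real
# attacker brute-forces ~2^16 candidate lifts per output block.
p, A, B = 23, 1, 1

def inv(n):
    return pow(n, p - 2, p)  # modular inverse (p is prime)

def ec_add(P1, P2):
    # Standard affine point addition; None represents the point at infinity.
    if P1 is None: return P2
    if P2 is None: return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P1 == P2:
        m = (3 * x1 * x1 + A) * inv(2 * y1) % p
    else:
        m = (y2 - y1) * inv((x2 - x1) % p) % p
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def ec_mul(k, P):
    # Double-and-add scalar multiplication.
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

# The two published constants.  Whoever generated them knows e with P = e*Q;
# everyone else just sees two innocent-looking points.  Swapping in your own
# Q (as Juniper did) moves the trapdoor to you -- or to whoever swaps it next.
Q = (3, 10)
e = 5                    # the trapdoor: secret discrete log of P base Q
P_pt = ec_mul(e, Q)

def dual_ec_step(s):
    # One untruncated Dual_EC round: advance the state, emit one output.
    s_next = ec_mul(s, P_pt)[0]
    return s_next, ec_mul(s_next, Q)[0]

s0 = 11                              # secret seed
s1, out = dual_ec_step(s0)           # the attacker observes only `out`

# Attack: lift `out` back onto the curve (either sign works, since
# x(e*R) == x(e*(-R))), multiply by the trapdoor, read off the NEXT state.
y = pow((out**3 + A * out + B) % p, (p + 1) // 4, p)  # sqrt mod p, p % 4 == 3
recovered = ec_mul(e, (out, y))[0]

s2, _ = dual_ec_step(s1)
assert recovered == s2  # the attacker can now predict all future output
```

The only secret is the discrete log e relating the two published points; replacing Q changes which actor holds that secret, it doesn't remove the trapdoor.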
> If this back door only works by assuming Dual EC is backdoored, is that not incontrovertible proof that the NSA is behind the entire thing, which there is at least some doubt that they are?
Not necessarily. Since Juniper was supposedly not using the NSA's curve points, it could have been "any" actor that changed the back door, including but not limited to the NSA.
Personally, I don't think it is the NSA in this case. If it were, I don't think we'd be reading about it on CNN at all.
NSA has absolutely no reason to tell Juniper they were behind this, even if they were. If the blame falls on a foreign actor that's entirely in their interest.
(That's not saying they did it, of course. We don't know. But the fact that CNN writes about it should not be taken as evidence either way.)
From your 2nd link:
> ScreenOS does make use of the Dual_EC_DRBG standard, but is designed to not use Dual_EC_DRBG as its primary random number generator. ScreenOS uses it in a way that should not be vulnerable to the possible issue that has been brought to light. Instead of using the NIST recommended curve points it uses self-generated basis points and then takes the output as an input to FIPS/ANSI X.9.31 PRNG, which is the random number generator used in ScreenOS cryptographic operations.
Because of this, it's not entirely clear at this point that an attack would have been feasible even for an actor that had the P and Q used for Dual EC here.
The juicy bit, and the piece that really is obnoxiously bad, is that NIST, who decides cryptographic standards, consulted two outside organizations when it was looking at introducing new cryptography in 2006. Those two organizations? RSA and the NSA. NIST paid both of them for the privilege of getting their insight.
NSA had been pushing for Dual_EC for a couple of years at this point, and, wanting people to take the bait, secretly gave $10 million to RSA to start using it in some of their products and to tell NIST that they thought it was cryptographically sound. All completely behind the backs of NIST and the public.
So NIST is consulting two of the biggest names in cryptography: one that had been championing Dual_EC for years (NSA), and one that had started using it in its products (which are primarily sold to government agencies that MUST use NIST-approved crypto, and so presumably stood to lose a lot of money by "betting" on Dual_EC).
NIST never saw it coming.
And that's the irony of the whole thing. NSA is supposed to make the US, especially the US government, more technologically secure... yet they directly undermined cryptography that was predominantly used by government agencies. Meanwhile the rest of the tech world saw it for the bullshit it was. The only party hurt by it was our own government.
Thomas Massie gave a great speech to Congress about this earlier this year, and pushed through a vote that now prevents NIST from contracting out with the NSA. I wish I could find it.
(Aside: Thomas Massie, a congressman from Kentucky, graduated from MIT with a BS in EE and an MS in ME, founded a successful tech startup, and now serves on the Committee on Science, Space and Technology. One of the few cases where I feel someone in our government is adequately educated in what they rule over.)
How so? The NSA is a spy agency, they are the one institution that you clearly should NOT ask.
BTW I love the USA Committee on Science, Space and Technology, full of geniuses.
Also the largest employer of cryptographers in the world, and one that is routinely sought for input by NIST, usually to no ill consequence.
Or does criticizing the executive branch by calling their phone monitoring program illegal not reach the bar set by "gone rogue"?
The nasty part is that NSA went behind the backs of everyone and bribed RSA for their own nefarious purposes. I don't think anyone saw that coming.
One is that they found code that shouldn't be there, allowing remote SSH login to attackers. The other is a weakness in Dual_EC.
Consider that this is NSA backdoor 2.0 and they got caught. Wouldn't their answer be exactly what they're saying now? Confess nothing, and use it as a convenient excuse to get more funding and a reason to deploy v3.0.
If you're not going to provide context or get a quote from a disinterested party, then just omit the US saying the US didn't do it.
This feels incredibly uncomfortable to say, but in a decade or two it seems possible, maybe even likely, that China, Russia, and South Korea may have an edge in encryption technology and products by virtue of them actually being secure. I mean, if they take cyberwar seriously and think their economy has anything to do with national security, they'll pour energy into securing their businesses, whereas the West does the opposite.
The audacity of Hillary Clinton last night, admitting she doesn't know a whole lot about encryption, but thinking for some reason that the tech community is on her side, because... ISIS?
Is there anyone in tech that actually agrees with these people? I'm being serious, am I just shielded from that side of the conversation? Are there educated people, who understand encryption, that don't work for NSA, that think key escrow is a reasonable request?
It leads me to the conclusion that policymakers are that out of touch with the world.
But if you've been paying attention the past few months, when they're not on stage, both Hillary and Obama have been calling for key escrow. To think she's changed from that while still using the same exact statement of, "Silicon Valley geniuses are going to come around and be our Manhattan project" is wrong.
On the other hand, politicians are snakes and will say anything to get elected. So there's that.
RADDATZ: You'll be happy. I'll let -- I'll let you talk then.
Secretary Clinton, I want to talk about a new terrorist tool used in the Paris attacks, encryption. FBI Director James Comey says terrorists can hold secret communications which law enforcement cannot get to, even with a court order.
You've talked a lot about bringing tech leaders and government officials together, but Apple CEO Tim Cook said removing encryption tools from our products altogether would only hurt law-abiding citizens who rely on us to protect their data. So would you force him to give law enforcement a key to encrypted technology by making it law?
CLINTON: I would not want to go to that point. I would hope that, given the extraordinary capacities that the tech community has and the legitimate needs and questions from law enforcement, that there could be a Manhattan-like project, something that would bring the government and the tech communities together to see they're not adversaries, they've got to be partners.
It doesn't do anybody any good if terrorists can move toward encrypted communication that no law enforcement agency can break into before or after. There must be some way. I don't know enough about the technology, Martha, to be able to say what it is, but I have a lot of confidence in our tech experts.
And maybe the back door is the wrong door, and I understand what Apple and others are saying about that. But I also understand, when a law enforcement official charged with the responsibility of preventing attacks -- to go back to our early questions, how do we prevent attacks -- well, if we can't know what someone is planning, we are going to have to rely on the neighbor or, you know, the member of the mosque or the teacher, somebody to see something.
CLINTON: I just think there's got to be a way, and I would hope that our tech companies would work with government to figure that out. Otherwise, law enforcement is blind -- blind before, blind during, and, unfortunately, in many instances, blind after.
So we always have to balance liberty and security, privacy and safety, but I know that law enforcement needs the tools to keep us safe. And that's what I hope, there can be some understanding and cooperation to achieve.
RADDATZ: And Governor O'Malley, where do you draw the line between national security and personal security?
O'MALLEY: I believe that we should never give up our privacy; never should give up our freedoms in exchange for a promise of security. We need to figure this out together. We need a collaborative approach. We need new leadership.
The way that things work in the modern era is actually to gather people around the table and figure these things out. The federal government should have to get warrants. That's not some sort of passé, you know, antique sort of principle that safeguards our freedoms.
But at the same time with new technologies I believe that the people creating these projects -- I mean these products also have an obligation to come together with law enforcement to figure these things out; true to our American principles and values.
My friend Kashif, who is a doctor in Maryland; back to this issue of our danger as a democracy of turning against ourselves. He was putting his 10- and 12-year-old boys to bed the other night. And he is a proud American Muslim. And one of his little boys said to him, "Dad, what happens if Donald Trump wins and we have to move out of our homes?" These are very, very real issues. This is a clear and present danger in our politics within.
We need to speak to what unites us as a people; freedom of worship, freedom of religion, freedom of expression. And we should never be convinced to give up those freedoms in exchange for a promise of greater security; especially from someone as untried and as incompetent as Donald Trump.
RADDATZ: Thank you, Governor O'Malley.
The "problem" here is not secure communication. It is media propaganda / information warfare: Facebook and Twitter being used to instill hate and spread conspiracies. It is all in the open: Facebook images stating that Israel is behind ISIS, or Twitter accounts that post nothing but Anwar al-Awlaki videos. Could you imagine that happening 10 years ago, on your own homepage, without being raided? If Twitter can block porn, surely they can block terrorist propaganda too. But law enforcement probably wants to use these for fishing. Instead Clinton wants to build another nuke.
> "Dad, what happens if Donald Trump wins and we have to move out of our homes?"
These are propaganda tactics close to character assassination. In the next sentence he says that "freedom of expression" unites us, but when Trump uses this freedom of expression he is suddenly scaring Muslim kids. Very recognizable.
> especially from someone as untried and as incompetent as Donald Trump.
You just know that they made this a talking point, a hook. And O'Malley wedged it into his answer, because that is what he had prepared.
> where do you draw the line between national security and personal security?
Donald Trump is incompetent. Next question! Next!
It's time politicians realized that it's not a matter of technology, but a matter of crazy reality and bad political climate.
We live in an age of three-letter agencies abusing their powers for God-knows-what reasons, dumb TSA employees publishing photos of "TSA lock" master keys on the Internet, and Russian/Chinese/Nigerian hackers replacing NSA crypto backdoors with their own.
Potential usefulness of crypto backdoors is too large and communication technology is too advanced to contain such backdoors in trusted hands. Either we work to be damn sure that there are no known vulns and backdoors whatsoever, or there will be a backdoor for everybody and his dog. No technology can provide a Hollywood-style "find and decrypt bad guys" button devoid of nasty side-effects.
Not to mention that the Paris terrorists used the uber-secure, NIST-certified DOUBLE-ROT13 for their cunning communications (i.e. f#$^&n plaintext).
Anyway, what's most likely to have happened is that other nation-states discovered NSA's own backdoors and started using them. The NSA then freaked out and told Juniper it's ok to patch them now. The reason why I'm implying cooperation between Juniper and NSA is because Juniper keeps refusing to eliminate the well known Dual_EC backdoor from their systems and are giving stupid reasons for keeping it.
I don't think a product like this has really been "secure" since the early 1800s or earlier.
Either way this will have a big economic impact for Juniper. In no way are they going to win the next big banking or government contracts.
If I were in a three-letter position at any big company, I'd choose Juniper over Cisco gear now. Juniper has proven they do code audits and throw such stuff out if they find it. Cisco, not so much.
Sending logs off as they're written to a centralized logging server or a time-series database would have been useful in this context.
Schneier & Kelsey's paper "Secure Audit Logs to Support Computer Forensics" seems relevant here. At the time, I know that they wished to patent the work; does anyone know if a patent was granted, and if so, when it expires?
Of course, then you need to ensure the integrity of the latest hash (if you can change one log, you can change them all). If the centralized logging server(s) are protected with the same rooted ssh, then it's just an additional step for our sophisticated state-sponsored boogeyman.
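For reference, the core of the Schneier-Kelsey idea is a key that ratchets forward after every entry, so even a later full compromise of the log server can't silently rewrite history. A minimal sketch (class and function names are my own simplification, not from the paper):

```python
import hashlib
import hmac

def ratchet(key: bytes) -> bytes:
    # One-way key evolution: knowing key_n reveals nothing about key_{n-1}.
    return hashlib.sha256(b"ratchet" + key).digest()

class AuditLog:
    """Forward-secure log in the spirit of Schneier & Kelsey (simplified)."""

    def __init__(self, initial_key: bytes):
        self.key = initial_key   # shared out of band with an offline verifier
        self.entries = []        # list of (message, tag) pairs

    def append(self, msg: bytes):
        tag = hmac.new(self.key, msg, hashlib.sha256).digest()
        self.entries.append((msg, tag))
        self.key = ratchet(self.key)  # old key is erased; tags on earlier
                                      # entries can no longer be forged here

def verify(initial_key: bytes, entries) -> bool:
    # Replay the key schedule from the initial key and check every tag.
    key = initial_key
    for msg, tag in entries:
        expect = hmac.new(key, msg, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expect):
            return False
        key = ratchet(key)
    return True
```

An attacker who roots the box at time T learns only the current key, so they can forge entries from T onward, but any edit to an earlier entry fails verification. The remaining gap (truncating the log's tail) is why the paper pairs this with a trusted verifier and entry counters.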
Does this resolve the issue for git, or is it still possible to subvert?
Really? Most two-bit cat selfie startups are at least aware enough to notice a phantom commit from Bob, but Juniper isn't?!
I'm not sure git is the best example of a secure system. Commercial source control systems have more serious authentication.
What used to be the central p4 server was renamed Helix, turned into a federated architecture, and there's some sort of data exfiltration protection option that is based on Interset's behavioral analytics stuff.
As far as I can tell from the marketing materials, the idea seems to be that if a coder starts regularly looking at the sales numbers, file a report. Maybe someone needs a reminder about the consequences of insider trading, but maybe they just want to know if the project's getting traction in the real world, and the false alarm can be resolved with a brief chat. If someone tries to clone an entire corporate monorepo on a salesperson's workstation at 3 in the morning local time, shut it down, because the most likely explanation is that the salesperson's workstation has been compromised.
After all that, only one OSS project responded with claims to meet many of the requirements. I figure the commercial situation isn't much better with most "benefits" existing on paper rather than with strong security.
To top it off, anyone defending against nation-states must remember they always attack what's below and around the software. Possessing 0-days in the OS or management software should let them bypass build-system security to insert stuff. I'd say OpenBSD, a memory-safe implementation, and a highly-assured guard at the protocol level, at a minimum, if better stuff isn't available.
Or, if you have access to the binary repo where the code is pushed by the build server after building (often just a file server and FTP) then you can place your own binary.
Of course it is possible to use crypto hashes throughout the process to prevent these kinds of hacks from working, but...
If you do that, then the security process itself is what hackers will attack.
To prevent installation of exploits you really need to pay serious attention to the whole release process and not assume that anything is simple or secure. Only the paranoid can succeed in security.
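As one concrete step in that direction, a build pipeline can publish a digest of every artifact in a manifest that is signed and distributed out of band, so a binary swapped on the file/FTP server fails verification. A minimal sketch (function names are mine, and the signing step itself is elided):

```python
import hashlib
import hmac
import os

def sha256_file(path: str) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(paths) -> dict:
    """Run on the build server.  The result must be signed and shipped
    separately from the artifacts, or an attacker just rewrites it too."""
    return {os.path.basename(p): sha256_file(p) for p in paths}

def verify_artifact(path: str, manifest: dict) -> bool:
    """Run by the installer/customer against the separately obtained manifest."""
    expected = manifest.get(os.path.basename(path))
    return expected is not None and hmac.compare_digest(expected, sha256_file(path))
```

Which is exactly the caveat above: the manifest and its signing key become the new target, so the gain is only that the attacker must now compromise the release-signing step rather than any file server along the way.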
If it's in one device, it may be in others.
I hope you can see my eyes because I'm rolling them as hard as I can.
This does raise the question of whether the US government is foolish enough to use insecure US made electronics. And I suppose answers it.
Clearly we need even MORE back doors! /s
Are there examples the other way, where the US stole secrets and then handed them off to domestic companies? Maybe in defense, but otherwise?
> Leaks from a secret BND document suggest that its monitoring station at Bad Aibling checked whether European companies were breaking trade embargos after a request from the NSA.
If all the NSA did was look for illegal (off the books) sales of military equipment, that's exactly the kind of thing that's easy to justify ethically. If they were taking Airbus data and funneling it to Boeing, that would be highly unethical.
Is there any evidence the NSA has ever been motivated by economic espionage?
Even if we were to contort Occam's Razor into this use-case, there is countless evidence for state actors adding or requesting back doors to major routing hardware.
That being said, I'd like to clarify thoughts on Occam's Razor. It's a bit off-topic, but I think it would be an enlightening discussion and don't expect it to go on long. The rest of the discussion here about OP is far more interesting. I just appreciate it if someone points out when my logic is flawed, so I hope you take it the same way.
It is used as "a heuristic technique" and "is not considered an irrefutable principle of logic or a scientific result." Occam's Razor says nothing about what the actual truth is; it's just a tool to use when choosing hypotheses to pursue.
I don't think you and I disagree with anything here. I am surprised and curious as to why you might think I thought otherwise. Using your words instead of mine, I was curious why the investigation was so quickly pursuing the hypothesis of foreign governments hacking Juniper's engineering team to plant code. My question has now been answered as the thread evolved.
I think that you understand Occam's Razor's letter and how it is used in academic circles, but you do not understand how its spirit still applies in non-academic circles.
Occam's Razor simply states (quoting from the Wikipedia article): "Among competing hypotheses, the one with the fewest assumptions should be selected." Hypotheses are theories on how the world works. Detectives also have their hypotheses in their investigations on what happened in a crime. Infosec professionals also have their hypotheses on what makes a newfound piece of malware tick.
Kepler, the FBI agent who chases serial killers, and the Symantec guys who discovered Stuxnet all share in common the dogged determination to figure out the truth by chasing facts. If you want to be a stickler on semantics and say that Occam's Razor refers only to scientifically testable hypotheses, it seems a bit too dogmatic to argue that, because we can't scientifically test with experiments whether someone is the person who stuck the knife in another person, Occam's Razor should never be discussed. Good (I emphasize good) police work doesn't make it necessary to design an experiment for that.
Focusing on simpler ideas that fit the facts is more profitable than focusing on grandiose ideas, even if grandiose ideas also fit the facts, if both ideas explain the subject equally (equal quality of explanation being a key point of consideration, as discussed in the Wikipedia article). Those grandiose ideas aren't discarded because Occam's Razor is indeed only "a heuristic technique" and "is not considered an irrefutable principle of logic or a scientific result." If more facts arise that cause the simpler explanations to lose weight, the investigator changes course.
I fear you are projecting too many assumptions onto people who cite Occam’s Razor, thinking that they’re taking Occam’s Razor to mean license to conclude an argument as finished when most people likely are not doing so. Certainly, I wasn’t. :)
There is no scientific basis for Occam's razor. It's just a rule of thumb, used by its originator to suggest that the simplest explanation for everything is God. Feel free to use it for ordering possible explanations by complexity, but don't mistake it for a scientific tool, because it isn't.
There is no scientific basis for Occam's razor. It's just a rule of thumb
Feel free to use it for ordering possible explanations by complexity
but don't mistake it for a scientific tool, because it isn't.
Wrong. As was stated by colordrops above, it's a heuristic technique. As you said, it's useful for ordering levels of complexity. Where you and he may disagree with me is that such a tool is an excellent tool for pursuing investigations.
This is where my choice of the word "profitable" comes into play. Someone is figuring out some mystery and has hypotheses X and Y. For Occam's Razor to apply, X and Y have to have equal predictive performance, but Y is more grandiose. As such, X is more likely, per Occam's Razor. Pursuing X will either result in X being proven correct (at least until new contradictory information arises later), or quickly pivoting to Y (or in the worst-case scenario, new hypothesis Z). This course of action results in the least amount of time and resources wasted. It is by definition more profitable. You could choose to investigate Y first, but being more grandiose, it will be much more difficult to test Y. If Y turns out to be wrong, you will have wasted much more time and resources. If someone chooses Y first, it's because X and Y are NOT equal, and Occam's Razor then would not be relevant.
This guy gives a much better explanation of Occam's Razor than I ever could: https://www.quora.com/Why-is-Occams-Razor-true
Most changes in accepted theories happen not because Occam's razor picked them, but because data falsified the preceding theory that Occam's razor had picked in favor of a newer one, turning the old one into merely a useful model and eliminating it from discussion (falsified).
Seriously, why would anyone jump to investigate a more complicated hypothesis when there's a simpler hypothesis to investigate, and both hypotheses have equal explanation performance and supporting evidence? It is far less expensive to investigate the simpler hypothesis first. It certainly is not acceptable to be dismissive of the simpler hypothesis.
At the time of my original comment, there was almost no information in the thread. There certainly was zero information in the OP article. Given multiple possible explanations, I could not see any reason why the article jumped to the foreign nation state explanation. Imagine what that would have involved:
- hacking their engineering team to remotely modify the code
- infiltrating their engineering team with a mole to modify the code
Both of these are big conspiracy theories. With no additional information, what is more likely? That or:
- crap, that guy that left the company in a huff really hated us and had to screw us one last time
- whoops, our engineering team accidentally screwed up bad
Since when is jumping to conspiracy theories without good reasons an acceptable pattern of logic?
However, you can't go from "according to Occam's Razor, let's test the simpler stuff first" to "according to Occam's Razor, the simple stuff is more likely to be true". And that's exactly what you did when you wrote "Occam's Razor would say that the code was modified by a disgruntled employee just before leaving, no?".
> Both of these are big conspiracy theories.
Proven to be true, in the past, by whistleblowers. There's also the choice of an encryption scheme that no sane actor would choose on its own. So the likeliness of a malicious state-level attack is higher than the baseline you probably assign to accident / disgruntled employee scenarios.
There's also the choice of an encryption scheme that no sane actor would choose on its own. So the likeliness of a malicious state-level attack is higher than the baseline you probably assign to accident / disgruntled employee scenarios.
Quite possible. Not my area of expertise, so I will defer to others like yourself. :)
Also, maybe they never intended to use it, and just wanted to make their company look bad.