Secret Code Found in Juniper's Firewalls Shows Risk of Government Backdoors (wired.com)
338 points by r721 on Dec 19, 2015 | 117 comments


Just a quick note that 'lawnchair_larry has me dead to rights on this one.

I conceded awhile ago that Dual EC was a crypto backdoor (before BULLRUN and the antics that were uncovered with RSA and with the European standards, I had suggested, as some other crypto people had, that Dual EC was too hamfisted and obvious to be a crypto backdoor).

But I've maintained since then that virtually nobody uses Dual EC, so its impact --- while clearly malign! --- is probably limited.

Nope. ScreenOS apparently (I'm not 100% sure, but that seems to be the way the wind is blowing) uses it to key VPN connections!

FULLY CONCEDED. The immediate known practical impact of Dual EC is, if that's true, enormous.

The weird thing about this particular backdoor is that the adversary seems to have modified the Dual EC parameters. Dual EC is an RNG with an embedded public key, where an adversary with the private key can "decrypt" the random bytes it generates to recover its state and rewind/fast forward it. This backdoor appears to swap out the public key, which is something NSA has no interest in doing.
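To make the trapdoor concrete, here is a toy analogue using modular exponentiation instead of elliptic-curve point multiplication; the algebra of the attack is identical, but all constants here are invented, and real Dual EC additionally truncates output bits, which this sketch omits:

```python
# Toy analogue of the Dual EC trapdoor. Constants are made up; real Dual EC
# works over an elliptic curve and truncates its output.
p = 2**61 - 1        # a prime modulus (toy choice)
h = 3                # public constant, playing the role of Q
d = 123456789        # the trapdoor: known only to whoever chose the constants
g = pow(h, d, p)     # public constant playing the role of P; g = h^d stays secret

def next_output(state):
    """One generator step: advance the internal state, emit an output."""
    state = pow(g, state, p)           # new internal state
    return state, pow(h, state, p)     # output (real Dual EC truncates bits here)

# Victim generates two consecutive outputs.
s0 = 42
s1, out1 = next_output(s0)
s2, out2 = next_output(s1)

# Attacker sees only out1 but knows d, so they recover the *next* state:
#   out1^d = (h^s1)^d = (h^d)^s1 = g^s1 = s2
recovered = pow(out1, d, p)
predicted_out2 = pow(h, recovered, p)   # ...and can predict all future output

assert recovered == s2 and predicted_out2 == out2
```

Swapping out the public constant (g here, P/Q in the real standard) is exactly rekeying: the construction is unchanged, but only the holder of the new d can unwind it.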

My money is that this is the work of GCHQ, the world's most unhinged signals intelligence agency, and our partners in peace.


Could someone provide background on the Dual EC issue, and why this announcement is probably more important for the Dual EC implications than for Juniper's backdoor, as tptacek says below?

While I wish I was an uber-crypto-nerd, unfortunately I only have time to be an occasional crypto geek.

EDIT: Here's a start, from another front page HN article: "Dual-EC was an NSA effort to introduce a backdoored random number generator (RNG) that, given knowledge of a secret key, allowed an attacker to observe output from the RNG and then predict its future output."


Basically, Dual EC is never something you'll choose willingly, given alternatives (it's quite slow and has had a slight bias demonstrated). So no one* did in practice. This made the backdoor almost a non-issue (so no heartbleed-like panic to patch, etc.).

However, having it in the standard (since removed) is perfect for fallback-like attacks or surreptitious changes (in the best case, your target has already implemented and deployed your exploit code and all you have to do is throw the switch to enable it!). That's what this is demonstrating (though some of the details are still speculative).

* Exception being those paid or required to do so by NSA.


"Instead of using the NIST recommended curve points [ScreenOS] uses self-generated basis points..." [0]

The way I read this statement is that each device generates its own set of points. If this is the case, I don't see how it would work as a crypto backdoor.

If by "self-generated" they mean generated by Juniper once, well, that's fishy.

[0] http://kb.juniper.net/InfoCenter/index?page=content&id=KB282...

Edited to add: Upon further research, the latter possibility seems more likely.


"Instead of using the NIST recommended curve points it uses self-generated basis points and then takes the output as an input to FIPS/ANSI X.9.31 PRNG, which is the random number generator used in ScreenOS cryptographic operations."

Looks like they feed the output through a standard CPRNG. Assuming it's true, that pretty much breaks the DUAL_EC attack because you can't use the output of the final CPRNG to recover the DUAL_EC state.


I wonder if that's going to be demonstrated to be a true statement, and further whether the tampering Juniper discovered will have disabled that second step.


It seems to be a true statement: Dual EC is used to seed an X9.31 generator with 3DES, where 8 bytes are the initial seed V, and the remaining 24 are K (cf. [1]). I don't see any other usage of Dual EC other than to self-test and to seed X9.31.

Oddly, you can disable the Dual EC seeding with the flag 'one-stage-rng'. But not the other way around.

[1] http://csrc.nist.gov/groups/STM/cavp/documents/rng/931rngext...
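As a rough sketch of the two-stage design described above: the X9.31 step is I = E_K(DT); R = E_K(I xor V); V' = E_K(R xor I). The stdlib has no 3DES, so truncated SHA-256 stands in for the block cipher here, and the Dual EC output is a dummy:

```python
import hashlib

def E(K, block):
    # Stand-in for 3DES encryption: an 8-byte keyed one-way function.
    return hashlib.sha256(K + block).digest()[:8]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def x931_step(K, V, DT):
    """One ANSI X9.31 iteration: returns (output block, next seed value)."""
    I = E(K, DT)            # encrypt the date/time vector
    R = E(K, xor(I, V))     # the random output block
    V = E(K, xor(R, I))     # next seed value
    return R, V

# Seed split per the comment: first 8 bytes -> V, remaining 24 bytes -> 3DES key K.
dual_ec_out = bytes(range(32))    # stand-in for 32 bytes of Dual EC output
V, K = dual_ec_out[:8], dual_ec_out[8:]
R, V = x931_step(K, V, b"\x00" * 8)
```

If only R ever leaves the box, recovering the Dual EC stream from it would require inverting E without K, which is the basis of the objection above that the cascade breaks the Dual EC attack.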


Unless the backdoor disables the X9.31 stage, what's the point of tampering with the Dual EC RNG, if its outputs are going to be mangled anyways?


I don't know, it makes little sense to me too. Maybe there's some subtle flaw somewhere, which I haven't spotted. Since subtlety doesn't seem to be a thing with the changes we've seen so far, I'm not sure what to think.


> This backdoor appears to swap out the public key, which is something NSA has no interest in doing.

While they wouldn't need to swap it out (since they can unlock standard Dual EC), that doesn't mean they wouldn't.

Assuming 1) that Dual EC was used here and 2) was inserted surreptitiously in a way that has a nonnegligible chance of discovery, it would make sense to rekey it since failing to do so would strongly attribute the attack to NSA (or someone willing to give up the passive backdoor opportunity in order to pin it on NSA).

The only case for NSA where using the old key would be best is if the use of standards-based Dual EC would pass scrutiny but a modified one would not. This depends on the details.

If switching Juniper to Dual EC required only calling a different function in Juniper's existing crypto library and/or detection was unlikely, standard Dual EC might be best. If the compromise added a full Dual EC implementation, then changing the constants is good (different magic constants don't significantly increase risk of detection for the large inserted code blob while significantly decreasing the risk of attribution).


No. It is vitally important for NSA not to call attention to their crypto backdoor --- remember, this was inserted in 2012 --- and external tampering with the PKRNG in a VPN device is a smoking gun that Dual_EC is not a benign standard (still a plausible claim in 2012), but rather a surreptitious key escrow mechanism.

No. It is not at all plausible that NSA backdoored ScreenOS in 2012 in order to rekey their backdoor.


The backdoor possibility was known in 2007, and the standard included a way to set your own constants (which no one used, true, but only because that was true of Dual EC in general).

I disagree that following the standard on that point and creating your own would be a smoking gun that the standard is malicious. Rather, it could be a smoking gun that this implementation was. If the tampering would likely be detected anyway, I'd argue it's better to avoid attribution.


The fact that you could use Dual EC to implement a backdoor was known in 2007, but it wasn't taken especially seriously; Schneier, for instance --- long a critic of elliptic curve crypto --- publicly cast doubt on it.

It is certainly not the case that any part of the US Government acknowledged anything hinky about Dual EC in 2012. The notion that Dual EC was a cryptographic standards backdoor would have been one of the more closely guarded secrets in the entire government.

Virtually everything we now know about Dual EC is a result of the Snowden disclosures and the followup work people like Bernstein and Lange did in the wake of those disclosures. When analyzing stuff like this, it's important not to project knowledge we have now back before we had it.


I agree that the oracle of hindsight can often lead us astray.

However, in 2007, it wasn't just known that you could implement a backdoor, but how to do so. This of course means that any constants generated after this point could not be given the benefit of the doubt since anyone could launch this attack (and I think you'll grant me that all major intelligence services knew how to create and use a similar backdoor after 2007). And while USG did not acknowledge a backdoor, there was real, public doubt about the constants: Schneier's article in Wired was titled (notwithstanding Betteridge) "Did NSA put a Secret Backdoor in New Encryption Standard?"

Those in the standard were of unknown status, but even 2007 Schneier had the appropriately conservative cryptographer response and recommended "not to use Dual_EC_DRBG under any circumstances."

The pieces were there in 2012 for anyone that noticed a new Dual EC dependency being added, but I agree that the knowledge was not well-known enough (though it saddens me that a VPN manufacturer didn't know to avoid it 5 years later).

I still hold to the statement that rekeying Dual EC in 2012 is not a smoking gun on the standard, but more suggestive of an opportunistic attacker using the known mechanism for how to embed such a backdoor. If detected, it's obvious it's a backdoor, but that's not an indictment on the standard's constants (in fact, if you could otherwise attribute the attack to NSA, it's a tiny bit of evidence that either there is no standard backdoor or that USG don't want to use it in this case).

Here's a statement that I think we might disagree on: I believe that no one in 2012 who knew why the provenance of the constants could be a problem would have allowed Dual EC in the codebase in any form. This is why I believe the incremental chance of detection from changing the constants was small and why it'd be worth it to avoid attribution.

That said, I've stated my positions and see no need to continue pressing legitimate points of disagreement. I just wanted to understand your position.


Kudos to you for the public admission. And yeah, I have to admit that when I saw this I thought of the Marshall McLuhan scene from Annie Hall with you in the role of the NYU professor.


It's pretty jaw-dropping. The news (still unconfirmed, but really looking that way) that ScreenOS keyed sessions from DualEC is probably bigger than the news that ScreenOS had an unauthorized backdoor.


"I am shocked—shocked—to find that gambling is going on in here!"

--from "Casablanca"

I'd be shocked to learn that there are no back doors in routing equipment. Having that kind of control is just too appealing to the most powerful players -- the NSA, China, perhaps Russia.

One hopes that people who care about the privacy of their communications are not relying on the routers for encryption. I would encrypt end-to-end. Even if the spooks are capturing the data, let them work for their cleartext.

Of course, we have to use algorithms that aren't compromised, either.

Annoying and disturbing. And they can't claim it's needed to stop terrorism, either. The U.S. anti-terrorism apparatus didn't spot an obviously dangerous couple in San Bernardino, even after one of them posted jihadist goals on her stream. They didn't stop the Tsarnaev brothers from bombing the Boston Marathon even after the Russians phoned to warn us about them. Idiots.


I read from a reliable source I cannot immediately recall that she in fact did not post anything jihadist or even inflammatory on any social media account of hers.

are you repeating a convenient falsehood or am I? in other words -- do you have a source that verified she in fact posted jihadist anything, anywhere?


Excellent point. The whole episode is very instructive.

On Sunday the New York Times quoted "law enforcement sources" as saying that she had made postings on her stream. The story got a huge amount of coverage and even came up during Tuesday's Republican debate.

On Wednesday, FBI Director Comey described the reporting as "garbled" and clarified that no, it was just private messages -- and the Times (and others) rewrote their stories to reflect it. But you can't unring a bell; most people's opinions, and even more importantly their emotional responses, are based on the original story.

Erik Wemple covers this well in the Washington Post [1]

The Times' public editor, Margaret Sullivan, looks at it as well [2], noting that two of the reporters also incorrectly reported that Hillary Clinton was the target of a criminal investigation.

[1] https://www.washingtonpost.com/blogs/erik-wemple/wp/2015/12/...

[2] http://publiceditor.blogs.nytimes.com/2015/12/18/new-york-ti...


I wish we had something that tracked news articles and noted when they changed without either an inline note about the change or an update at the end. It would be like the snopes of news journalism, and we could get some really interesting statistics from that with regard to the journalistic integrity of different sources. There's a large population of people that could do with some good evidence to force them to be more critical of certain news sources.



Awesome, thank you!


It would be like a wiki, but before the admins go off the reservation.


Eh, I think it has to be third-party for exactly that reason. You can never rely on the admins not going off-reservation, or being explicitly ordered to hide changes. If an organization is willing to update a story to change major facts without any indication of such, I see no reason to trust they wouldn't hide change history. Sure, it may be a smaller portion of organizations on a smaller number of articles that are willing to take it to this next level, but it would be better to bypass that entirely and just have an impartial record.


You are right. The FBI has corrected the report of her Facebook warnings that was floating around the news networks.

Marquez (the Muslim convert who sold them the guns), however, did post something on his verified Facebook account a month before the attack.[1]

And Malik (the wife) did apparently post something on Facebook minutes before the attack.

Do you disagree with my overall point, that U.S. law enforcement has over-surveillance and under-intelligence?

1. http://www.nbcnews.com/storyline/san-bernardino-shooting/san...


Surveillance is simply a useful scapegoat; it's actually fairly useless. The problem is that if there are any red flags that get 24/7 monitoring, then having several people flip them will quickly destroy the system.

Let's say they vent in private conversation. Well, that's legal. They then buy some guns and ammo. Again, that's legal. Then one day they go out and shoot people. Well, sorry, we can't afford to keep a SWAT team around, and you can go from 100% legal to shooting people in about 30 seconds.


I would disagree that surveillance is "fairly useless". Obviously, there are times when keeping a watch on potentially dangerous people is going to pay off. How many times, I can't say.

The problems begin when we have a process-bound bureaucracy rather than a group of smart people, each acting on hunches and excellent information. Bureaucracies can be composed of smart people yet act stupidly.

Also, there are political mandates that certain minorities not be singled out, e.g. the Muslims. Thus, the fact that the three involved in San Bernardino were Muslims did not come into play until well after the shooting. The initial news articles I saw didn't even mention their ethnicities and religious affiliations; even the conservative Wall Street Journal only mentioned it at the very end, and the fact that Marquez was a recent convert to Islam didn't come out until recently.


In fiction hunches work well; in the real world they're far less valuable.

In terms of mass shootings, Muslims are far from the most common profile. Seung-Hui Cho, aged 23, for example, killed 32 people and wounded 17 at Virginia Tech on April 16, 2007. Jeffrey Weise, a 16-year-old, killed 10 in Red Lake, Minnesota. 13 died at Columbine.

Go through http://timelines.latimes.com/deadliest-shooting-rampages/ -- they don't really fit just 1 or 2 profiles.


Mass murderers in the U.S. are typically the mentally ill. There are many markers: isolated, anti-social, history of mental illness, "neighbors considered him a little strange", family who are fully aware ("we always feared this would happen" or "he scared us").

Of course, you can't just go pick up every strange person and lock them away, or 20% of us might be incarcerated. But we can definitely have better scrutiny of sociopathic children in junior high and high school, and steer them into treatment that might help protect them from hurting themselves and others. We've gone from excessive incarceration back in the 1950s to inadequate attention to the mentally ill since the 1970s, when the flawed policy of mainstreaming became popular.

The Columbine massacre was perpetrated by two disturbed boys whose families suspected something. Afterwards one of their dads said "That sounds like him." Well, if you suspected violence, why didn't you do something about it sooner?

The other kind of mass murderers today are ideology-driven Muslims. 3,000 on 9/11. Dozens at a Texas base. 14 in San Bernardino. I suspect there will be many more coming. These people are quite possibly mentally ill, sociopathic, something wrong upstairs. The ideology gives them a focus for their delusions. We don't need to go after Muslims in general; we need to focus on the unstable ones who are headed for trouble.

It's going to take clear heads and a lot of work, and in my opinion the very worst approach is to sweep all these problems under the carpet and simply tap everyone's phone and email. That's just an evasion and confers a dangerous sense of complacence.


Actually, there's a recurring theme among the non-muslim ones: Age 15-25 and taking antidepressants. The only reference I can find on a quick google is ZeroHedge ("The conspiracy site without the conspiracies"), but I've actually looked at some of the references in the past: It's not random.

[0] http://www.zerohedge.com/news/2015-09-17/antidepressants-sci...


Not really; the page I linked had ~10 people aged 40+:

  age 36 SEPT. 28, 2012 Andrew Engeldinger 6 killed, 2 injured 
  age 43 APRIL 2, 2012 7 killed, 3 injured: Oakland
  age 41 OCT. 12, 2011 Scott Dekraai, 8 killed, 1 injured: Seal Beach, Calif.
  age 34 AUG. 3, 2010 Omar S. Thornton, 6 killed, 11 injured: Tucson, Ariz.
  age 45 FEB. 12, 2010 Amy Bishop 45: 3 killed, 3 wounded: Huntsville, Ala.
  age 39 November 5, 2009 Nidal Malik Hasan: fatally shooting 13 people and injuring more than 30
Sure, you could say it's mostly 14-45, but that's true for most criminals and not that useful. And even just on that page there is Gian Luigi Ferri, age 55, JULY 1, 1993.


Do you know why they listed these specifically? All the title says is "here are some of the deadliest ..."

Even if you don't subscribe to the "more than once a day" statistic (which does include gang violence), it's still "about once a week" in 2015 for non-gang completely-innocent random victims -- and yet, this page lists only about 5 a year.


If you're talking about 5 or fewer people being involved, then it's not exactly a mass shooting. And at that point you might as well start looking into car accidents or even jaywalking, and realize untreated depressed people are younger and simply do less stuff. So even a successful treatment without side effects is going to look bad in a lot of violent-crime statistics.

something something heatmap. https://xkcd.com/1138/


Wish I could upvote this more. The government has nearly nothing to show for all their surveillance and yet they keep asking for more.


Well, it's because if they had more powers they could be more effective, right? This line of logic ends only when the people stop asking for perfect security.


But they say,

> To be clear, we do not work with governments or anyone else to purposely introduce weaknesses or vulnerabilities into our products…

Can we assume they can force a software modification in the interest of national security, just as we know they can force taps into the likes of Yahoo and AT&T?


I like CNN's take on the story:

http://edition.cnn.com/2015/12/18/politics/juniper-networks-...

Obviously it must be either Russia or China - NSA couldn't possibly be responsible ;)


It can't be NSA. Agents were caught intercepting network gear from Cisco Systems as it was being shipped to a customer (as revealed by Snowden), so it is highly unlikely they infected Juniper as well.

</sarcasm>


To play devil's advocate, it is a big leap going from backdooring a specific device sent to a specific person you may be monitoring, to backdooring every one of those devices. Not that I would put it past NSA, though.


I don't disagree that they might have the intent to, but it's possible they didn't have the capability to.

Much like the difference between forcing sites to hand over SSL certs and breaking SSL, there's a gulf between compromising hardware that you have physical access to and compromising the software stack at the source.

In this case I wouldn't be surprised if that turned out to be the narrative, but the "NSA can accomplish anything" position is a narrative that plays into their favor.


>Much like the difference between forcing sites to hand over SSL certs and breaking SSL, there's a gulf between compromising hardware that you have physical access to and compromising the software stack at the source.

You know that they could have their own programmers working there, or just approach, bribe, or blackmail an existing programmer to compromise the software (e.g., one found with child porn on his HD).


I'll bet you ten dollars there are more backdoors, better hidden than the ones they found. Say, with Underhanded C style coding. An additional ten bucks says that Cisco and the top handful of consumer appliances also contain such backdoors.

I hope the folks at Juniper are checking their toolchains, build machines and repositories for signs of similar attack. Of course, enough time has elapsed that they may need to establish a cleanroom for their code. Hoo boy.


> I hope the folks at Juniper are checking their toolchains, build machines and repositories for signs of similar attack.

I hope they figure out who planted it there and that they change their hiring/code review policies to make sure that such a thing cannot happen again. Firewalls should be tamper-proof to the point where they simply refuse to operate at all if the code has been messed with after it leaves the premises; the absence of such tamper-proofing is already a problem and should be taken quite seriously.


Which is DRM, right? Which is fine, but it needs to be DRM in control of the owner, not the supplying company. There's all sorts of evils in the world to worry about, so we shouldn't be too quick to get away from one that we run into the arms of another. There are, not coincidentally, parallels with terrorism and the security state.


I would love an attestation system. For instance, the firewalls around a store's credit card info (even if this data isn't stored, but tokenized, it's still damned useful) should sing like a canary if their configuration or firmware are hacked.

Attestation is sort of like DRM with policies that you decide.

Note that many of these devices have significant complexity in hardware. Lots of things that state-level actors can do to your hardware, on the order of:

- see packet starting with a known signature

- over-write the rest of that packet with interesting stuff, and transmit

Something at this level would be really hard to find.


I'll bet you there are tons more unintentional vulnerabilities than actual backdoors. The state of secure software is so poor that you only need to install an actual "backdoor" if you don't have the money to throw a couple of security researchers at it for a few weeks.


It makes me wonder about their control of the software design and build process. It looks like something went really wrong with their software supply chain.


The honeymoon is over. The Internet is now a hostile environment. We cannot assume good conduct from any party of reasonable size and should assume deception from anything that isn't fully open source and vocal about it. It sucks to assume the worst...


That has always been the case. It's always been the case that if your adversary is a well funded government you need very careful security.

We knew this from Echelon in the 1980s.


I was thinking the same thing. Honeymoon's over? What honeymoon? There never was a honeymoon!


Objectively, perhaps not. (I suppose I remember the early days of the Internet when it wasn't that popular.) My gist is that our implicit trust in the system/infrastructure we rely on is undermined by this sort of revelation. And yet, as a whole, we de facto continue to trust in opaque entities that provide valuable yet likely compromised services because it is convenient.


This should not be a revelation though. Granted, back when the Internet wasn't so popular, the risk for people like you and me, or for small companies, was relatively low. Partly because the value of Internet-accessible information was relatively low, too -- twenty years ago, if I bought a backdoored home router, all the Evil Guys would get access to would be really bad code snippets and a few folders of porn pics.

But the fact that people with access to your infrastructure also had access to your data was more or less well known ever since "the beginning", and it's also why internetworks other than the Internet were a thing for a long time.

> And yet, as a whole, we de facto continue to trust in opaque entities that provide valuable yet likely compromised services because it is convenient.

Don't forget the scarcity of alternatives though. It takes a lot of money to come up with a credible alternative to Juniper and Cisco. Siphoning personal data is what sells nowadays, not protecting it.


What implicit trust?

Phrases like "If you want something to stay private, don't post it" are bandied about by the general public. Not just by single-issue privacy advocates, but by people of all sorts.


The US government has had essentially unfettered access to landline communications since 1928[1]. See also: [2]

1. https://en.wikipedia.org/wiki/Olmstead_v._United_States 2. http://scarinciattorney.com/olmstead-v-united-states-and-the...


Now? The code was introduced in 2008.


It would appear that the "party of reasonable size" here is China or Russia, not a corporation.


I don't really get why you specifically mention these two countries. It could be because your government is at war with them but that would be speculation.

The problem goes much deeper. It could be any "larger" corporation or government or other entity having enough manpower. It could even be parts of a corporation or government entity.

However which way you put it, a program of which you don't have access to the source cannot be trusted.


Can you please explain how you reached that conclusion? What made you exclude US and UK?


Parent is probably taking the CNN article at face value.


I'm not sure that matters. Any powerful entity wrapped in secrecy is suspect, whether they're a nation state or a corporation. We, the users, are always the victims - dependant on the infrastructure or services that have been compromised.


This also highlights why it would be better to use open-source firewalls such as OpenBSD instead of proprietary ones!

If you care about your security then you need to be able to inspect the code that protects your assets.

A distributed open-source firewall vs. a proprietary firewall with backdoors.


Of course, the idea that open-source software in general, and firewalls in particular, are better than closed-source ones relies on people actually having the skills and time necessary to conduct a decent audit.

Some of the very large security bugs found in open-source software, which were present in that code for years, indicate that this is not commonly done.

And that was just bugs as opposed to actual backdoors which would likely be harder to find if inserted competently.

So whilst in theory you are correct, I'm not so sure you are in practice.


Indeed, for those of us not taking the time (or lacking the expertise) to audit code ourselves, this is all ultimately an appeal to authority.

If anything, the advantages of open source here are more about open development processes and being able to know just who is the gate keeper. As an example, if you use OpenBSD, you have reasonable assurances that Theo will do a good job, because he has earned his reputation.

Juniper? Who really knows what goes on internally? How are people allocated and shuffled around? It's a black box.

Ultimately I'd put more trust in OpenBSD than in Juniper because of this, but I'm still making a leap of faith. And OpenBSD is an extreme example of how much trust a project can earn; a vast majority of open source code is not held to such scrutiny.


With visible source code, at least the security bugs are actual bugs instead of deliberate attacks on the users. Psychologically, people are less likely to commit antisocial acts if they think that they are being watched. So, if nothing else, having visible source code helps reduce some attacks. The more popular a particular project is, the more watched they will feel. I therefore still trust OpenVPN a lot more than I trust Juniper. Every one of its patches goes through a public code review:

https://community.openvpn.net/openvpn/wiki/DeveloperDocument...

http://sourceforge.net/p/openvpn/mailman/openvpn-devel/?view...


Given that this was hidden even from the organization that was in control of the codebase, it's not clear that open source on its own is a real solution. This made it through whatever initial review processes Juniper has, and was only caught by an "internal code review" performed after the fact - an exercise only infrequently conducted on most open source projects.

Given enough eyeballs backdoors can be easy to spot in source code, but eyeballs aren't an unlimited resource. In addition to open sourcing your software, the community that cares about the project needs enough funding or institutional support to actually review the code in question.


>Given enough eyeballs backdoors can be easy to spot in source code

After looking at some examples of the Underhanded C contest (http://underhanded-c.org/) and its predecessor, the Obfuscated V contest (http://web.archive.org/web/20110605000401/http://graphics.st...), I'm not so sure that well-implemented backdoors are easy to spot.


Not that OpenBSD hasn't had its own scares[1]. If you find those allegations feasible, even if you don't believe they are true in this instance, then you should not necessarily consider open source and/or free security software as more secure than commercial software.

1: https://www.schneier.com/blog/archives/2010/12/did_the_fbi_p...


I agree with the premise, but I'm not aware of an open source firewall that can provide the same functionality and scale as Juniper's product line.

You can certainly provide, for example, open source designs for a specialized ASIC, but it doesn't mean anyone could afford to actually make it.


Mmm. For those downvoting, it's fairly easy to compare features and throughput. The pfSense team is making great strides, and has a roadmap that starts to close the gap.

But, there is a notable gap. Compare throughput for a 3DES vpn for example. Or total throughput with filtering on.


I'm confused. Are these accidental vulnerabilities or deliberate backdoors? If deliberate, why is there speculation about who might have installed this "secret code"? Do they have version control? Is there a specific human attached to the relevant commits? Serious question.


> If deliberate, why is there speculation about who might have installed this "secret code"?

Would you take whatever your VCS claims at face value in this case? I wouldn't, which makes answering this very difficult, so I think it is to be expected that they don't have an answer yet.


I wonder the same thing, but with enough resources, there are many ways to attack version control. For example, there are probably exploits in the VCS system and in the database that underlies it; they could steal a developer's credentials or otherwise hack his/her account, attack the compiler ...

Think of the value of all the data in the world that Juniper firewalls protect. Of course every actor that can afford to will invest the resources to introduce back doors. As an example, the NSA had ~30,000 employees and a ~$10 billion budget as of a few years ago; they can afford to do it.

For me, the real question is: how does Juniper address the fact that they are such a target? Do they take security measures adequate to the situation (which would be very expensive)? Or just standard security measures, which would very likely be inadequate, so they can escape liability? How does any such organization deal with it: Google, Microsoft, Apple, Apache, Linux, etc.?


They might know the person, but not necessarily their affiliation. They might want to investigate before they divulge. If it's a US person, it takes time or would never be divulged.


And a good one. They were definitely deliberate, but the other details are not public.


If this wasn't an intentional backdoor, it raises the question: what source control methods were being used, and are they secure? Has Juniper been compromised on a larger scale?


That's a good question, too. I'm sure people are asking it, and many others like it, internally as well.

On the outside, I think this is yet another reason to use git, which, if I recall correctly, makes attacks on commit history significantly more difficult than they are with SVN and CVS, because each commit's hash depends on its parent's.
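A minimal sketch of why that holds, using SHA-256 over a toy commit format rather than git's actual object encoding: because each commit ID covers its parent's ID, tampering with any historical commit changes every descendant hash, which is what makes silent history rewrites detectable.

```python
import hashlib

def commit_id(tree: str, parent: str, message: str) -> str:
    """Hash a toy 'commit' git-style: the ID covers the content (tree),
    the parent's ID, and the message, so every descendant ID depends
    on the entire history behind it."""
    data = f"tree {tree}\nparent {parent}\n\n{message}".encode()
    return hashlib.sha256(data).hexdigest()

# Build a three-commit chain.
c1 = commit_id("tree-v1", "none", "initial import")
c2 = commit_id("tree-v2", c1, "add feature")
c3 = commit_id("tree-v3", c2, "fix bug")

# Tamper with the first commit's content: its ID changes...
c1_evil = commit_id("tree-v1-backdoored", "none", "initial import")
# ...and so does every descendant, because each embeds its parent's ID.
c2_evil = commit_id("tree-v2", c1_evil, "add feature")
c3_evil = commit_id("tree-v3", c2_evil, "fix bug")

assert c1_evil != c1 and c2_evil != c2 and c3_evil != c3
```

An attacker who rewrites history therefore has to recompute every later hash and push the new chain to everyone who holds a copy, which is far noisier than editing a single revision in a centralized system.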


This is a very important point. Are the backdoor(s) traceable to a specific event or individual? How far up the company hierarchy does the rot go? Whatever the answer, it is insufficient. We are relying on hardware and software that is opaque at the network level, and therefore open to this sort of manipulation.


It's sad, but events like this one make me turn away from the Internet.

I started using Signal because I don't want people reading my messages. But in the end it's only trust that makes me think Signal is safe to use.

A lot of people also trusted Juniper. But that trust is gone. And not only for Juniper. What about other brands? We don't know.


Signal is open source. But you have to trust the OS it runs on...


And even if you trust the OS, you have no idea what is going on on the phone's baseband processor: https://en.m.wikipedia.org/wiki/Baseband_processor

One has to assume that all are back-doored. Mobile phones are inherently not trustable.

Same goes for all major firewall vendors. If you're going to hack one of them as a nation state, you're going to hack all of them.


There is at least one project that seeks to mitigate the threat posed by baseband processors having DMA, Neo900: http://neo900.org/faq#privacy


Even if you trust the OS and the baseband, you have to trust that the federated server for Signal (OpenWhisper, Cyanogen, etc) isn't storing contact discovery requests.


And the hardware the OS runs on... Intel and others are now embedding "management" features at a very low level.


You're still secure if you use https. If neither your computer nor the host you're connecting to has been tampered with, then you're safe irrespective of what's happening between.


I get where you're coming from and I want to agree with you but...

While that used to be true, recent history unfortunately shows we can't blindly trust HTTPS to be "secure" 100% of the time anymore, whether due to Heartbleed, fake certs signed by a root CA, protocol attacks, or some other vulnerability that hasn't even been discovered yet.


Well, what if the host you're connecting to is a Juniper firewall?


Yeah, because illegitimate / spoofed certificates will never happen...


I think we're well on our way to a point where certificates can be mostly trusted. Of course certs can be stolen, but I would bet that "spoofed" certificates will soon be an occurrence of the past.


With certificate transparency, at least we'll know about it - afterwards, at least.
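A rough sketch of the idea behind Certificate Transparency, using a toy power-of-two Merkle tree rather than RFC 6962's actual hashing rules: the log's published root commits to every issued certificate, so a rogue issuance changes the root and monitors can detect it after the fact.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# A toy append-only log of certificate hashes, Merkle-tree style.
leaves = [h(cert) for cert in [b"cert-A", b"cert-B", b"cert-C", b"cert-D"]]

def merkle_root(nodes):
    """Pairwise-hash up the tree until a single root remains
    (assumes a power-of-two number of leaves for simplicity)."""
    while len(nodes) > 1:
        nodes = [h(nodes[i] + nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]

root = merkle_root(leaves)

# Sneaking a rogue cert into the log changes the root, so anyone
# comparing published roots detects the issuance afterwards.
rogue = leaves[:2] + [h(b"cert-EVIL")] + leaves[3:]
assert merkle_root(rogue) != root
```

The detection is still retrospective, which is the commenter's point: CT tells you a bad cert was issued, not that it won't be used against you first.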


Well, only afterwards, and only if the attack is discovered within a timeframe short enough that your computer hasn't cleared its history.


Many HTTPS sites still use RC4; I wouldn't be so sure.


Then use a browser that rejects RC4 (like latest Chrome).


[deleted]


The private key is used to negotiate a session key, which is then used as the symmetric key for RC4 or whatever stream or block cipher you are using. Those session keys are ephemeral and per-session, so leaking them is only a problem for those sessions.

(Also, since it's a stream cipher, the same keystream must never be used twice; otherwise you can XOR the two ciphertexts to get the XOR of the two plaintexts, which is much easier to crack.)
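The keystream-reuse problem is easy to demonstrate; a minimal sketch in Python, using random bytes in place of RC4's actual output:

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# A stream cipher encrypts by XORing the plaintext with a keystream.
keystream = os.urandom(32)

p1 = b"attack the north gate at dawn"
p2 = b"retreat and regroup at valley"

# Reusing the same keystream for two messages...
c1 = xor_bytes(p1, keystream)
c2 = xor_bytes(p2, keystream)

# ...lets an eavesdropper cancel the keystream entirely:
# c1 XOR c2 == p1 XOR p2, with no key material left in the result.
assert xor_bytes(c1, c2) == xor_bytes(p1, p2)
```

From the XOR of two natural-language plaintexts, classic crib-dragging techniques recover both messages without ever touching the key.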


It's sad but comments like this made me turn away from language as a reliable means of communication. I used to believe anything I'd read. But, having become aware of trolls and worse, those days are gone. Not only English, but Japanese and who knows which other languages are untrustworthy.


I'm not sure what you are trying to communicate.

Trust is delicate, and it works best with face-to-face communication.

So it's ok not to trust everything you read.

But this story is about private communication not being as private as thought.


Wow that nation state is stupid.

They embedded the backdoor password right into it. Clearly they should have embedded the hash of the password instead. Then it would be unbreakable and no other party would be able to use the backdoor.

Hashing passwords is extremely basic security practice.
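For what it's worth, a hashed check is trivial to write. A hypothetical sketch, using the backdoor string reported for CVE-2015-7755: anyone could still patch the comparison out of the binary, but the password itself would no longer be readable from it.

```python
import hashlib
import hmac

# Shipping the plaintext backdoor password in the binary (as ScreenOS
# apparently did) means anyone who finds the string can use the door.
PLAINTEXT_BACKDOOR = "<<< %s(un='%s') = %u"  # the literal reported in ScreenOS

def naive_check(attempt: str) -> bool:
    return attempt == PLAINTEXT_BACKDOOR

# Shipping only a digest means a reverse engineer can see that a
# backdoor exists, but cannot recover the password that opens it.
BACKDOOR_DIGEST = hashlib.sha256(PLAINTEXT_BACKDOOR.encode()).hexdigest()

def hashed_check(attempt: str) -> bool:
    digest = hashlib.sha256(attempt.encode()).hexdigest()
    # compare_digest avoids leaking information through timing.
    return hmac.compare_digest(digest, BACKDOOR_DIGEST)

assert naive_check(PLAINTEXT_BACKDOOR) and hashed_check(PLAINTEXT_BACKDOOR)
assert not hashed_check("guess")
```

Of course, as the replies note, a bare hash is also exactly the kind of unexplained constant that draws an auditor's eye, which may be why the attackers disguised the password as a debug format string instead.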


When you're trying to do something secretly, you run into a new tradeoff: doing it the right way vs. doing it in such a way that it doesn't draw attention to itself. It might have been the case that hashing it would have been too flashy.


You'd imagine it would hook into the existing code that hashes passwords... which suggests they are not in fact hashing user input, and are relying on the client to do that for them.


>They embedded the backdoor password right into it

Or you know, that's just one obvious backdoor they put in, to divert from the other 2-3 non-obvious they have.


But one thing the NSA likes to say is that the backdoors they insert are only accessible to them, not to others. For example the DUAL_EC backdoor could only be exploited by the NSA. With this backdoor, now China and everyday criminals can also use it.

I see the benefit of inserting multiple backdoors. But none of them should be vulnerable to rival nations.


>But one thing the NSA likes to say is that the backdoors they insert are only accessible to them, not to others.

Which is an empty statement when it comes to security principles. If you've placed a backdoor (and some are blatantly obvious if you look for them, like some SSL issues), then others can use it too. Most backdoors are not of the "we have an encrypted password only we know" variety, but are purposeful holes left to exploit.

>I see the benefit of inserting multiple backdoors. But none of them should be vulnerable to rival nations.

Except if the idea is to monitor the local population, and you do not care that much if others monitor them too.


That's not a good idea if you're hiding a backdoor. The last thing you want is for people to have a reason to audit the code base.


The big story here is not that Juniper is weak but rather that these types of attacks are succeeding. If Juniper has fallen, god knows what else is vulnerable.


If anyone is interested, you can find the unpacked firmware and some rough diffs online at https://github.com/hdm/juniper-cve-2015-7755/


Some corporations will not know about the issue, or will ignore it, and will get hacked through the backdoor. So the backdoor no longer benefits only those who implanted it; it also benefits any opposing side that can exploit it.


I think this is a good example for people who complained about OpenBSD refusing to use cloud servers for their infrastructure. The point being, security at this level shouldn't be taken lightly. Juniper must have a ton of security measures in place, and they still ended up with this.


Juniper is using Dual EC....are you kidding me? Now I have zero doubt this is Juniper's fault because of its cooperation with NSA to keep backdoors in its systems.

If I remember correctly even tptacek was claiming initially that "Dual EC is not so bad...not that many companies use it anyway, because they would be stupid to use a 1000x slower algorithm". Yeah, except some of the biggest networking equipment makers in the world who do use it, and who sell products to many other small and large companies, too. Quite a bit of an attack surface for the NSA.

The point was always that Dual_EC should've never become a NIST standard, no matter how "bad it was and that probably nobody would use it anyway". It was made a standard for a reason by the NSA, to convince at least some of the big companies to use it. And they succeeded in that.

We can only hope that the good people who work in standard bodies will never allow something like that to happen again, because in the end backdoors always end up being used for "evil", whether by the initial creators or by someone else who finds them later.


If you're going to use quotation marks when referring to something I said, you should actually quote me, instead of making stuff up.


Can't governments check the source code, like they do for Windows?


It's an interesting thought. If they checked and spotted this would they report it to defend against the attacker that injected it, or would they just pocket the master password and use it themselves?


Maybe both: They found a second backdoor, reported one and kept the other. This makes them trustworthy in the eyes of Juniper, while still netting them a backdoor.


Well, in the specific case of Windows, Microsoft ships something they say is its source code. But nobody is allowed to compile it, and cannot do so in practice anyway, because they don't ship the build environment.


I'm looking at Juniper's news page [1] and its Twitter feed [2]. The fact that this security breach, or even its (apparently inadequate) patch, doesn't rate so much as a news item or a tweet does not give me a lot of confidence.

[1] http://newsroom.juniper.net/

[2] https://twitter.com/JuniperNetworks/with_replies


This was discussed yesterday in this thread, which points to the proactive announcement Juniper made: https://news.ycombinator.com/item?id=10754917


There's also http://advisory.juniper.net/ with further details on each security notice.


It's not the first time HN stories have been duplicated by Wired articles. I had one of mine blatantly replaced. This site has a preference for fear-mongering reporting... I mean Wired.


At least their notice went out on a Thursday, instead of waiting until late Friday evening or, worse, next Friday evening (by which time a large percentage of the engineers managing these boxes will be on vacation).

