Netgear Signed TLS Cert Private Key Disclosure (gist.github.com)
198 points by Maxious 36 days ago | 153 comments

Some commenters are decrying that this post fails to meet the bar for "responsible disclosure".

Please stop using that phrase. "Responsible disclosure". It's wrong and harmful, and the person who coined it agrees with me: https://adamcaudill.com/2015/11/19/responsible-disclosure-is...

You want "coordinated disclosure" instead.

Netgear doesn't do coordinated disclosure. They do non-disclosure. In the absence of a coordinated effort, full disclosure is the responsible thing to do.

Agreed. The key was already publicly available, and it was just a matter of time before someone with malicious intent found it, if that hadn't happened already.

Exposing something already public speeds up the resolution and keeps the impact to a minimum, considering the circumstances.

Well, one can argue that every vulnerability is "already publicly available, and just a matter of time for someone with malicious intent to find it"

This is correct, which is one of the reasons that I personally support full disclosure for everything.

I didn't know that Netgear already has proven to not be following coordinated disclosure; generally I think it's fair to give some time to the vendor.

I like how Google Project Zero does it: give time, but don't extend indefinitely.

It should be noted that it’s essentially never the people who find bugs arguing for “responsible disclosure”.

It sure is easy to tell others what to do with their work product when you have zero stake in the game.

There exists a really easy solution to the purported problem of full disclosure: vendors could just offer significant enough financial compensation for non-disclosure.

Let's take that avenue as a thought experiment...

1. Netgear buys the bug.

2. They keep shipping firmware with a private key.

3. Someone with malicious intent finds it.

4. Someone with malicious intent MITMs a Netgear network and does something bad.

5. Bad actor sells bug online.

6. Millions of Netgear devices continue operating with compromised certs.

Instead, here's what happened/happens now...

1. Researcher tries to coordinate with the vendor, realizes NDAs are involved and the vendor intends to keep shipping their private key.

2. Researcher dumps details of private key on GH instead.

3. Mozilla and Google stop trusting the cert.

4. Now Netgear no longer gets to decide if they want to fix the problem. Browser vendors have fixed it for them.

5. Millions of otherwise insecure Netgear devices are protected by the browser instead.

6. Netgear rightfully gets bad press and hopefully the shaming makes them rethink their positions on NDAs in their bug-bounty program.

7. Future bug hunters submit bugs straight to a receptive Netgear, who has learned a valuable lesson about being a lazy hack.

Ah yes, it seems totally reasonable for netgear to not patch bugs they paid for.

That isn't what disclosing these sorts of vulnerabilities is about. The core reason for just going full disclosure is that the vendor has absolutely no incentive to fix any sort of bug that is kept private. The customer is impacted, never the vendor.

Nah, disclosing these vulnerabilities is usually about CV-padding and publicity.

There aren’t many people out there hunting bugs just to be nice.

I happen to know one of the authors of this post (hey Tom!). He's actually a really nice, down to earth guy. Helped out with our college cyber defense programs and is a killer red teamer. Very patient in explaining how he got in and defaced your website, time and time again.

Is this his best work? Nah, this is amateur hour on the part of Netgear. But am I glad it was him who found it? Definitely.

Keep in mind, there really wasn't anything _to_ this vulnerability other than copying and pasting from one website (and a firmware zip) to another. They just added a little pretty formatting and saved you the trouble of extracting the firmware.

I don’t doubt that the authors of this post are wonderful people. I just don’t think that there are too many people in this space working for free, only motivated by a desire to help people.

I certainly don’t think that there’s anything wrong with dropping bugs to pad your résumé. To the contrary, I think it’s fundamentally unreasonable to expect that more than a couple of people would do this work for purely selfless reasons.

People who do this kind of work for purely selfless reasons and don't end up starving or homeless are probably very privileged.

I think the millions of people with Netgear equipment deployed have some "stake in the game".

How does non-disclosure benefit them?

> How does non-disclosure benefit them?

And how does dumping the vulnerability without a fix help Netgear owners? They're all flapping in the wind right now.

IMHO, the best scenario is coordinated disclosure. Full disclosure may be necessary to force a vendor to do something, but let us not pretend it is a good thing.

> And how does dumping the vulnerability without a fix help Netgear owners?

It lets us know to buy another brand immediately and never buy Netgear again.

> IMHO, the best scenario is coordinated disclosure

The original post said that Netgear doesn't do coordinated disclosure, and subsequent posts were arguing whether non-disclosure or full disclosure were better. Nobody was disagreeing with what you said.

Netgear making worthless routers isn’t really news.

No one (that I've seen in this thread) disagrees that coordinated disclosure is the best path. If the vendor doesn't want to coordinate, then dumping the vulnerability is the best course available.

'ryanlol is suggesting that if companies view "full disclosure" as a problem, the solution is (in my words) bribery.

I don't think "benefiting the end users" is even present in that equation.

I have zero sympathy for end users who aren’t willing to hold the vendors accountable.

And besides, there’s nothing stopping the millions of end users from offering their own bribes in order to get shit fixed. $10 from each would go very far.

They already gave their "bribes" to get it fixed, by buying a product and expecting a reasonable level of support.

Personally, I haven't liked most Netgear hardware I've tried... but since the FCC cracked down on open firmware for consumer gear, options are really limited with most vendors.

> They already gave their "bribes" to get it fixed, by buying a product and expecting a reasonable level of support

Ah yes, that certainly sufficiently compensates independent security research.

Your complaints aren’t worth shit unless you’re willing to pay up.

Because this post specifically concerns a found private key for a certificate in the Web PKI, it was not necessary to post the key itself in order to achieve all the positive consequences of public disclosure. The key only enables negative consequences.

I could give some leeway to a grey hat who finds the data but doesn't understand what it is and posts it. "Hey, what's this blob of data?". But this poster clearly understands it's a private key for a certificate in the Web PKI.

You can disclose the fact that you know a private key that isn't yours without revealing the key by various methods, but one that's very easy for non-experts is to create a bogus CSR. Just tell OpenSSL (or a tool that's actually good) that you have this private key and you want to request a certificate for it. Give bogus details, for example in the Common Name of the proposed certificate you can explain this is a Netgear key you found.

You can now publish the CSR, it is inherently proof that you've got Netgear's private key but it does not contain that key and so black hats will need to do their own work, for which you are certainly not responsible, to get that key from a Netgear box if they want to.
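The CSR trick described above can be sketched with stock OpenSSL. Here a freshly generated throwaway key stands in for the recovered Netgear key, and all filenames are illustrative:

```shell
# Generate a throwaway key standing in for the recovered key (illustrative only).
openssl genrsa -out found_key.pem 2048

# Build a CSR signed with that key; the Common Name carries the disclosure
# message (kept under the 64-character CN limit) and contains no private material.
openssl req -new -key found_key.pem -out proof.csr \
  -subj "/CN=Netgear key recovered from public firmware - see disclosure"

# Anyone can check the CSR's self-signature, which proves key possession.
openssl req -in proof.csr -verify -noout
```

Publishing proof.csr demonstrates possession without handing black hats the key itself.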

It would also make sense to tell the issuing CA; you should send them that CSR, although it's not terribly harmful to send them the actual private key, and I guess if you're worried the CA's representatives don't "get it" this is a very blunt way to make your point. If the issuing CA doesn't respond, tell m.d.s.policy both that you found this key and that the CA did not respond, and if necessary that can be escalated until the CA is distrusted (by Mozilla, and in my experience eventually everybody, for reasons we'll not think too hard about here).

In the case of Let's Encrypt the process is, like everything else, fully automated. Call the API (by running your client software of choice) and prove you know the private key for a certificate they issued and want it revoked, its status changes to revoked and the next batch of OCSP signatures will show revoked for that certificate.
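A rough sketch of that flow, assuming the certbot client (any ACME client with revocation support would do). The key/cert pair here is self-generated purely so the key-matching check is runnable; the actual revocation call needs network access and a real Let's Encrypt certificate, so it is left commented out:

```shell
# Stand-in key and certificate (in reality these would be the recovered key
# and the certificate found in the firmware).
openssl req -x509 -newkey rsa:2048 -nodes -keyout recovered_key.pem \
  -out cert.pem -days 1 -subj "/CN=demo" 2>/dev/null

# Sanity check: a key matches a certificate when their public keys are identical.
openssl x509 -in cert.pem -pubkey -noout | openssl sha256
openssl pkey -in recovered_key.pem -pubout | openssl sha256

# With the match confirmed, proving possession of the key to the CA revokes
# the certificate (hypothetical invocation, requires network):
#   certbot revoke --cert-path cert.pem --key-path recovered_key.pem \
#       --reason keycompromise
```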

Yeah, but remember how they found it? On Netgear's website! In their firmware images!

It wasn't like they had to hack a router, get root, and exfil the keys. All of this data was already made public, by Netgear! They just put together a document with prettier forms for everyone else to see.

Hell, they probably could've given you a shell one liner to grab it from Netgear and extract the keys yourself. :-)

Edit: and the Comodo-issued cert is already revoked. I'm too lazy to pull the other cert but I'd bet that it's revoked too.
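The "grab and grep" step really is about this small. A hedged sketch (the paths and the dummy key are stand-ins; against a real image you would first fetch the firmware from the vendor's download page and unpack it, e.g. with binwalk):

```shell
# Stand-in for an unpacked firmware tree containing an embedded key
# (the uhttpd.key filename is hypothetical).
mkdir -p extracted_fw/etc
openssl genrsa -out extracted_fw/etc/uhttpd.key 2048 2>/dev/null  # dummy key

# The actual search step: list every file in the tree containing a PEM key.
grep -rl "PRIVATE KEY" extracted_fw/
```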

It's still not, as of the time of this comment: https://crt.sh/?id=1955992027&opt=ocsp


Netgear has reliably demonstrated to me over more than 10 years that they are incapable of delivering secure products of an acceptable quality. I actively avoid this brand like the plague.

I have posted this in another thread, but I am posting it again because more people need to be aware of Netgear's practices:

I recently bought a couple of Netgear Managed Switches (for Business)⁰ and in their datasheet they list "Local-only management" as a feature. Only after they arrived did we discover that you only get limited functionality in the Local-only management mode; you have to register the switches to your Netgear Cloud account to get access to the full functionality.

Reading up on it, this was achieved only after a community outcry because in the prior firmware versions the switch would have to connect to the Netgear Cloud on every bootup.

Needless to say, I would not have bought the switches if I had known I needed to register them to Netgear Cloud to have access to the full functionality specified in the datasheet. If I had bought them as a consumer, not as a business, I would have returned them immediately.

Netgear are now on our purchasing blacklist.

⓪ - the switches are Netgear GS-108Tv3

Interesting. I’ve got a bunch of (older) Netgear GS-108s and it’s definitely entirely locally managed. Good to know I shouldn’t buy any newer ones, thanks for sharing!

>Netgear Managed Switches (for Business)⁰

>⓪ - the switches are Netgear GS-108Tv3

are we not allowed to use any sane method of doing footnotes? Who does this? Why would you even use footnotes in internet comments other than to specifically screw over anyone who uses a screen reader?

> are we not allowed to use any sane method of doing footnotes?

There is no markup on HN for footnotes. https://news.ycombinator.com/formatdoc

> Who does this?

Blame the people running the site; this is just a small inconvenience compared with the other missing features.

> screw over anyone who uses a screen reader?

As GP's footnote is plain text – not markup – there is no problem with screen readers (or any user-agent, really). Curious why you would think otherwise.

Because it interrupts the flow completely when they could have just written the model number along inside of the already-used parens like a normal human being. With a screen reader, it'll get to the superscript 0, read that out, and then won't be able to read the actual footnote content until the very end. Can't jump around like our eyes can. "⓪" isn't even read by a screen reader (at least not narrator on windows 10). Footnotes in internet comments are useless and unnecessary anyway.

True, but the devices themselves are often pretty powerful in the home networking space, especially for the price.

I've got an R7800 running as the main router in my home, however, it's flashed with OpenWRT as their own software is a complete mess.

I discovered this same thing in 2017. My Bugcrowd report was shot down as a dupe.


The difference is that these newly discovered certs with private keys were signed by real Certificate Authorities trusted by browsers by default.

So their security got worse. It’s a miracle people even bother attempting to do responsible disclosure to fools like this.

> so their security got worse

Only because current browsers make it increasingly hard to run medium-security, user-trusted networking.

How did security get worse?

    Tuesday, January 14th 2020 - Initial Discovery

    Tuesday, January 14 2020 - Tweet sent attempting to establish communications with Netgear  

    Wednesday, January 15 2020 - Reached out to Bugcrowd to attempt to establish communications.  

    Thursday, January 16 - Bugcrowd responds, but we are unable to establish a communications channel outside of the Netgear bug bounty programs.  

    Friday, January 17th - Conversation with Bugcrowd proves inconclusive  

    Sunday, January 19th - Feeling we have exhausted our disclosure avenues, we decide to publish

I didn't understand this; can someone explain it to me? I see the author tried to talk to Netgear, then it's Bugcrowd the next day... it's not clear to me what the relation between Netgear and Bugcrowd is.

Bugcrowd run bug bounty programs like https://bugcrowd.com/netgear on behalf of companies

They explained earlier that if they entered the bug bounty program with Netgear, they would be agreeing to not disclose: "By submitting the security bug, you affirm that you have not disclosed and agree that you will not disclose the security bug to anyone other than NETGEAR."

So it seems they were trying to get Bugcrowd as an intermediary to contact Netgear on their behalf to "establish a communications channel outside of the Netgear bug bounty programs".

Thanks for the explanation!

Netgear picked a terrible and clearly unacceptable approach. I read through this discussion, trying to understand whether a good solution exists.

Using plain HTTP isn't a good solution either, because it involves telling the user to ignore the "not secure" warnings from browsers, yet it still seems like the best available option.

Without network connectivity, the traffic can not be tunneled through a remote server with proper cert. Nor can the box request a cert in realtime at the time of setup.

Pre-provisioned cert won't work if the box is purchased more than 27 months after manufacturing (the current max validity period). For home network equipment, I suppose this isn't unusual.

I see TLS-SRP mentioned in comments. But the commenter hinted at its limited adoption in browsers.

Unfortunately this is a problem that's mostly a result of the browser vendors (who are in control of the validation/UI), and can't be fixed by Netgear / etc. They've decided that security trumps usability.

I suspect at some point the router manufacturers will give up and just force setup through a phone app (assuming you could download it over a cellular connection). That makes me sad, though.

When I can control the client configuration, I just set up my own trusted root CA and then have the distributed systems automatically submit a CSR on initialization. This doesn't work when you have to support end user's web browsers though (unless you're somehow lucky enough to own a public CA and don't care about ICANN rules). It really does seem like there should be an encryption-only option for TLS. Maybe anything that resolves by IP address and not name should display the green padlock even if the SAN doesn't match the name?

Encryption-only TLS doesn't protect against MITM. It prevents sniffing.

At a higher level, TLS provides integrity and privacy when the network link is not trustworthy. As a result, I'm not sure what your proposal achieves.

Similarly, resolution by IP address does not provide privacy or integrity unless you trust the network link.

The green padlock indicates two things about the server to the user: "This is really my name" and "no one can snoop on our conversation." It doesn't indicate anything about the trustworthiness of the server. The first one is meaningless in the context of an IP address. If a user is going to get phished by an IP address, then they could just as well get phished by 123.sketchydomain.com. The important part when communicating directly with a device by IP address is that the payload is encrypted end to end.

> Similarly, resolution by IP address does not provide privacy or integrity unless you trust the network link.

That's a fair point. It's certainly no worse than our current situation where you have to trust a self signed certificate on a case-by-case basis. The router could forge a self signed certificate even more easily than it could forge a certificate signed by a trusted CA.

> The important part when communicating directly with a device by IP address, is that the payload is encrypted end to end.

Which you can't be certain of unless you can verify where the "end" actually is. The IP address is no help here; the ARP cache can't be trusted. Packets can be redirected to any host on the local network regardless of the official IP address assignments. If you aren't authenticating the other end of the link then you can't be sure no one is snooping. Active MitM attacks on local networks are trivial. You don't even need to compromise the router—any peer on the network will do.

> The router could forge a self signed certificate even more easily than it could forge a certificate signed by a trusted CA.

Sure. As a form of TLS without authentication, one-time-use self-signed certificates would be worse than useless, offering only a false sense of security. The options here are "trust on first use" (betting against a malicious actor being present during initial setup) or offering some way to manually verify the fingerprint of the device's private key. Both approaches require authentication, not just encryption. For the second, more secure option, the cheapest way would be to permanently provision each device with a unique private key and print the key's fingerprint on the device's exterior. For more flexibility, store the key in a small removable HSM (similar to a SIM card) and print the fingerprint on the HSM itself and/or the accompanying documentation.
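A minimal sketch of the "fingerprint on the label" option, assuming stock OpenSSL; the hostname and ten-year validity are illustrative:

```shell
# At manufacture time: generate a unique key pair and a long-lived
# self-signed certificate for the device.
openssl req -x509 -newkey rsa:2048 -nodes -keyout device_key.pem \
  -out device_cert.pem -days 3650 -subj "/CN=router.local" 2>/dev/null

# This SHA-256 fingerprint is what would be printed on the device's exterior,
# for the owner to compare against the browser's certificate details.
openssl x509 -in device_cert.pem -noout -fingerprint -sha256
```

Because each device's key is unique, compromising one unit discloses nothing about any other.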

The HTTPS cert is used for the router's login page - apparently putting an IP address on the box backside label confuses too many people. It makes sense to put it behind HTTPS because the browser will whine "this page is insecure"... but how is a router vendor supposed to include the necessary certificate in a way that won't get leaked?

The only thing I can imagine here is a dedicated HSM chip... but that's overkill for a $10 router?

1. Just let it be HTTP. Stupid browsers are stupid, but at least they don't prevent this page from working yet.

2. The router coordinates with a company server to get its own hostname like n-123123123.netgear.com (which points to or whatever), generates a private key, and the company server issues a certificate for that key. Plain-HTTP requests return an HTTP 302 redirect to this address. It requires Netgear to operate a CA or make some extended agreement with an existing CA to let them issue certificates. I know that Plex does something similar, so it should be possible.

3. Company server creates hostname which points to a public router IP address and router uses letsencrypt to get a certificate for that hostname. But that requires public IP and some providers are using NAT nowadays, so it won't work universally.

4. To configure a device you're talking with company server, rather than your device and your device is talking with that server as well. It requires working Internet, so not a complete solution as well.

I don't believe that an HSM chip would work. You would just extract that chip and use it as a private key and that's about it. You won't be able to extract the private key bits, but you don't need them. Maybe it would work with a very secure device like an iPhone, where everything is signed, but yeah, it's a $10 device and someone will still hack it, causing a bad situation.

All but one make for an undesirable UX during initial setup before the WAN is up, which is probably the only time most users log in to their router.

Yeah, that's why I think that movement to HTTPS everywhere must at least exclude private IP addresses. While it's expected to have HTTPS on public resources, what happens in my local network is my business and should not be considered insecure.

The problem is, that someone setting up a public wifi (in a restaurant for example), will be vulnerable to sniffing attacks (if they don't know what they're doing).

So get a cert before setting up wifi. This is not a hard problem.

1. Connect to the AP with the random key from the box

2. Open the admin panel over HTTP

3. Follow the instructions to get Internet access

4. As soon as it has a connection, the router obtains a cert and redirects you to HTTPS

5. Now you can make all your poor security decisions

Have you met ‘normals’?

Everything is a hard problem if it’s any steps at all.

I meant that "how to make router setup secure without making it more difficult" is not a hard problem. My system is no more difficult than the currently most common ones and about as secure as it gets.

Although, one could argue that setting up a router really shouldn't be left to people who don't understand how they work...

#3 is why IPv6 needs to be universally pushed out by all ISPs.

The issue is you're going to have the ignorant customers calling Netgear to complain their brand new router is not secure if their browser highlights it for situation #1. Same for an untrusted certificate.

The risk with #2 is then an attacker could potentially probe the firmware to figure out that process, and phish from that domain? The other is we need to trust Netgear to run HA infrastructure to support that. (lol)

So it sounds like pretty much marketing/sales runs the company and the product people have no say at Netgear?

In case number 2, it's possible to ask the CA to reissue n-123123123.netgear.com, and if that's not possible (due to uniqueness verification during the issuing process), to ask for n-123132123.netgear.com instead, and the system administrator may not notice.

With CAA Netgear gets to decide which CA is authorized for netgear.com and they can cut a deal with that CA for any rules they want.

Facebook, for example, has a deal with their preferred CA which says that all certificates for names under fb.com and facebook.com are only to be issued with authorization from Facebook's central security team.

So even if you can fulfil that CA's normal checking perfectly for fake-bank.facebook.com, if you haven't got someone on the inside of the Facebook security team you can't get a working certificate for your fake bank social media scam.
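For reference, CAA policy is just a pair of DNS records. An illustrative zone-file fragment (the CA name and contact address here are made up, not Netgear's actual arrangement):

```
; Only the named CA may issue certificates for this domain.
netgear.com.  IN  CAA  0 issue "example-ca.com"
; Where CAs should report policy violations.
netgear.com.  IN  CAA  0 iodef "mailto:security@netgear.com"
```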

> 1. Just let it be HTTP. Stupid browsers are stupid, but at least they don't prevent this page from working yet.

This would be categorically worse than hardcoding the SSL key.

The other way this could be done is if TLS-SRP was more widely supported. This variant of TLS doesn't depend on certificates at all and instead uses mutual knowledge of a password to authenticate the connection in both directions. In the case of a router this would be written on a sticker on the device for initial configuration.

Ubiquiti do this by tunnelling it through their servers. You can still get insecure access by going direct to the IP, but going to the Unifi site gives you full HTTPS.

Plex do it with a wildcard and generate a new one for each server: https://blog.filippo.io/how-plex-is-doing-https-for-all-its-...

We could change the standards to facilitate client apps using a private network's own PKI. This would solve about a million problems that currently exist with most private orgs that need to enforce policy on content on secure networks, as well as allowing consumers to securely browse self-hosted private services. I don't know what that would look like or what would need to change, but it could definitely be done.


1) An extension to PKI that browsers support. If you browse a service with a certain type of cert on a certain TLD, a) it won't work except on private networks, b) it will prompt the user to enter a key, such as a key printed on the backside of a router (this is already used to get consumers onto WPA routers, so we know it's reasonable). The device with the cert creates the cert itself. If the key is right, you can browse the service, and your browser caches the key with the cert. If an attacker learns the key they can try to make a new fake cert, but the browser would know it's not the same as the old cert and block it. Replacing these should be quite infrequent, as often as people replace their home network devices.

2) A TLD that browsers will only ever respond to if its names resolve to private address ranges. If a service wants to serve HTTPS privately, it will request a cert be signed by a device that is auto-detected on the local network. The device will return signed certs for a particular host on the local-only TLD. The device can have any logic it wants to restrict what can get a cert and how. The trick is how to make the client trust the device, which I think 1) would solve. This device can, for example, only ever create a given FQDN's cert once; if an attacker tries to generate a duplicate, it will fail. If the cert ever needs to be refreshed, the memory/flash of the device needs to be reset. This makes it possible to have a captive portal request a cert from a local router, and no attacker can generate an identical cert unless the router was reset and they triggered the cert generation before the captive portal did. (Also for captive portals, the admin of the router could just hard-code what clients could request what certs, so the captive portal could always request particular local certs for itself, while no attacker could). It's still tricky to determine trust on first connection in the case of things like coffee shops, but maybe you could receive a public key via WPA or something (then again that's kinda kicking the can down the road to WPA)

Or you know - don't use TLS at all since all your efforts will be moot anyway. You could always make a self-signed certificate and print the fingerprint on the device itself, but that costs money, for very little gain.

Number 2: already dealt with:

  * .test
  * .example
  * .invalid
  * .localhost

What? No. While those are meant to be reserved, nothing stops you from configuring your resolver to answer for them.

Yes, but users will not be happy if they see ".example" on the URL bar. And you can't get a trusted cert for them either.

Not confusing users is why the fiasco with the HTTPS certificate problem here was initially created. And to be honest I do not have any idea how all the needs (users have a trusted HTTPS certificate, a domain name, and no "insecure" or other warnings in the browser, while hackers don't get access to the certificate, and all of it works without an internet uplink) can be met...

"How" is that they only include the private key for that specific router, and then that key/cert itself is certified by the real CA cert, the key for which is not kept on the router itself?

That requires generating and burning keys to devices during manufacturing. This process will be hacked by NSA & all, exactly like Gemalto https://www.theguardian.com/us-news/2015/feb/19/nsa-gchq-sim...

Haha that's a tall order...

Constructing a scheme where NSA is an active agent in the threat model was not an original requirement :)

You are welcome to introduce any way to produce any part of a router or a PC for that matter that would protect from NSA, it seems that the biggest players in the field are still working out and it is very much a work in progress. When you have an adversary that is able to intercept hardware in transit and spend endless amounts of dollars on devising clever hacks or undetectable hardware exploits, then yes, you're right, some TLS scheme, regardless of where the certs are, is not going to be enough.

Delegated Credentials for TLS, I guess?

but this requires internet access so barely a solution. https://engineering.fb.com/security/delegated-credentials/

I don't think that the fact that private keys for routerlogin.net are bundled with the router is an issue. It's very logical to do so, and BETTER than plain HTTP in most real-life scenarios.

routerlogin.net is a local server (when you are using the router) and not a remote website, so it's expected that you can trust the content only as much as your local network is trustworthy.

It’s impossible, they need to try something else. They could get dedicated domain names for each router, for instance.

No, you can do something like CloudFlare does for TLS for their enterprise customers, where the TLS session is signed by a key held by Netgear.


The fallback would have to be self-signed TLS with random keys for initial local router configuration, which is hopefully done over an Ethernet connection rather than WiFi.

You don't need a secure enclave or HSM to solve this problem. It just requires caring about security and provisioning enough time to engineer and test the obvious solution. From what I can tell, that doesn't line up with Netgear's MO.

You need a working internet connection. In most cases you go to the web interface because you are either setting it up, or something (most likely the connection) doesn't work.

That would be a central point of failure and a ton of extra complexity without providing any kind of meaningful security benefit.

What do you mean "without providing any kind of meaningful security benefit"?

It prevents a bored kid who dumps the private key from one router from being able to reliably impersonate thousands of consumers' routers, the world over.

That's a meaningful security benefit.

The attacker would just dump the individual router's key instead and use the remote signer as a signing oracle to impersonate devices.

The owner/operator of the router could do that to their own device, yes.

The idea is that each router would get a certificate signed with a distinct subdomain unique to that router. So you still can't impersonate other devices. It wouldn't be a generic domain or wildcard certificate.

The remote signer would furthermore suspend suspicious connections (i.e. from multiple public IP addresses).

6 days is nowhere near a justifiable timeframe for full disclosure.

Even if you disagree with that, you should have first reported Key Compromises to Entrust and Comodo before publicly posting the private keys. They are bound by BRs and their own CPS to revoke certificates such as this one - and they would have done so promptly.

This is not what you should do as a security researcher - delete the gist until the CAs have a chance to revoke it via OCSP.

> 6 days is nowhere near a justifiable timeframe for full disclosure.

Of course it is, if you didn't get a response in the first 48 hours you're not going to, that's the way things like this tend to operate. The security contact for all companies is either a) inactive or b) very active. It's a detriment to everybody that for some reason it has been normalized that people should wait months or years for disclosing bugs. Full disclosure is the only way things get fixed.

> This is not what you should do as a security researcher - delete the gist until the CAs have a chance to revoke it via OCSP.

Github is archived in real time for the most part. You'll especially notice that if you post a private key for a cryptocurrency address or credentials for AWS, it'll be stolen and used within seconds. There are some really good sets of information out there like https://www.gharchive.org/ which give you an idea of the sheer amount of data that github produces on a daily basis, and that's just the metadata and things like comments rather than actual git repository contents.

The idea that you could delete something from there and have it actually "removed from the internet" is amusing.

Responsible disclosure is meant as a way to harm researchers and users for the benefit of companies. This is absolutely the right thing to do as a security researcher, especially given that Netgear prevents public disclosure, ever, if you submit a bug bounty. To make sure it stays online:


I think in this case it's to force browser vendors (who have the most exploitable endpoints) and companies like Apple and Microsoft at the OS level, to blacklist the offending certs.

Though the CAs in question should probably have an automated challenge-response system to which a timed signed reply of a given message of their choice causes a revocation of the key that signed the message. (As sufficient proof of "this is vuln, kill it now, ask questions after".)
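As a sketch, such a challenge-response revocation endpoint could work like this (toy textbook RSA stands in for real X.509/RSA key material, purely illustrative; a real CA would verify a signature from the actual leaked key):

```python
# Toy sketch of an automated "prove you hold the private key" revocation
# flow. Assumption: the tiny textbook-RSA keypair below stands in for the
# leaked device key; real CAs deal in X.509 certs and real RSA/ECDSA.
import hashlib
import secrets

# Tiny textbook-RSA keypair standing in for the compromised key.
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

def sign(message: bytes) -> int:
    """What the reporter does with the compromised private key."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def ca_verify(message: bytes, signature: int) -> bool:
    """What the CA does with the certificate's public key."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

# CA issues a fresh, timed nonce; reporter signs it with the leaked key.
nonce = secrets.token_bytes(16)
proof = sign(nonce)

# A valid signature over the CA's nonce is proof of key compromise,
# which triggers automatic revocation ("kill it now, ask questions after").
assert ca_verify(nonce, proof)
```

The timed nonce matters: without it, a replayed old signature could trigger revocation of a key that was never actually exposed.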

Did this researcher even contact the CAs? Generally, they're pretty responsive and will revoke quickly; there are CA/Browser forum guidelines that address this; if nothing else, you can send them the private key to prove possession.

The OS level actions won't be fast. So this disclosure is closer to an irresponsible attitude. At least CAs should be informed.

I tend to differ:

Netgear security is paid to manage security. They failed by not responding to these legitimate communications requests.

The researchers are not paid. They did what could be done to really fix the problem. Of course you can always do better as a researcher, but consider the time available, paid vs. pro bono. Also consider all the people who probably found this before and may have sold it on the black market; you're attacking the wrong people.

Boo hoo. Netgear should have known better than to expose their keys like this in the first place, this is really amateur hour stuff.

They were reached out to and didn’t respond. It’s their own fault and they have to make do with what they’ve been granted. Now they get to pick up the pieces.

Yep. Defending bad security products only leads to people buying more. Netgear has not and will not learn any lesson, and will continue to serve bad devices. This time the good to users comes from publicizing that.

I discovered this same thing in 2017. My Bugcrowd report was shot down as a dupe.


It wasn't the researcher who disclosed the keys, it was the vendor who did, knowingly and purposefully (in their firmware updates).

Who cares? This doesn't even warrant responsible disclosure. Netgear can't fix this in a way that makes everyone happy. You can revoke the cert, then what? A firmware update with a new cert? Same problem: the private key will still be on the device and in the firmware file. Self-signed? You get users crying about the cert. HTTP only? You get users crying about "not secure", even though that might not be true in all cases.

My guess is that people before the ones in the gist have tried to contact Netgear about this (since it's not hard to find) and that Netgear chose to ignore it.

Full disclosure is always perfectly justifiable, unless you’re stealing someone else’s work.

“Thursday, January 16 - Bugcrowd responds, but we are unable to establish a communications channel outside of the Netgear bug bounty programs.”

I'm curious what this means. They got a response from Bugcrowd but not Netgear themselves? Or they got through to Netgear but didn't like what Netgear said, so they went public? Something else?

Guessing they meant they were/are unwilling to.

Full disclosure means 0 days generally.

There is another reason why responsible disclosure would have been better (and less short-sighted): the vendor might conduct an internal audit, find the root cause, and invalidate a bunch of certificates at once (think of similarly leaked certificates due to a bug in some deployment tool).

Now it's an open race with the bad guys, and surely a lot of them all at once; hardly an advantageous scenario for end users.

That's an awfully big (and awfully optimistic) "might" in your first paragraph there...

Not to mention that all the bad guys had to do to get the private key was unpack the firmware image, so they've probably noticed long ago.

If what you said were true then zero-days would be always ignored and not a concern.

How does that follow from what I said?

It's not optimistic, it's what a responsible vendor is supposed to do. By not following responsible disclosure, the researcher has deprived the vendor of the possibility of organizing a proper response.

By the way, I meant coordinated disclosure and I am not aware of Netgear not doing any at all, never.

I don't envy Netgear. They sell home routers to people who hardly know the difference between HTTP and HTTPS. In this regard, I respect them for NOT making those people submit sensitive information by ignoring browser security warnings and/or accepting self signed certificates. I.e, not teaching bad habits.

On the other hand, making private keys publicly available is obviously far from ideal.

Damned if you do, damned if you don't. Again, I don't envy Netgear...

HTTP(S) might just be the wrong protocol for initial setup of a router!

Maybe instead generate a password for each router and print it on the box along with the device's unique SSH Key fingerprint.

Technical users can then SSH in and bootstrap the system (also generate/upload their own TLS certs if they want to use a browser and connect over HTTPS, etc).

Non-technical users get an app and they can scan a QR code on the box which has the SSH user, password, and fingerprint for verification.

The App connects to the router over SSH to do router setup.

If an App is involved you wouldn't technically NEED to use SSH with the App but it was the first thing that came to mind.

This could be extended to something like:

Ship a USB stick with the device with installer software on it.

The installer software sets up an SSH tunnel to the device, so it includes an SSH client and a script (batch) file that effectively does: ssh -L localhost:<some random local port>:localhost:<installer initial port> setup@<ip address of device>

It has the known SSH signature of the device in its known_hosts, and of course the user has to type the password from the sticker on the device.

Then it fires up the browser with the URL http://localhost:<local port used in previous step>, which gives access to the full setup of the device. It is HTTP, but there are no initial-setup TLS issues with the browser: the HTTP traffic is encapsulated within the encrypted SSH tunnel (effectively a temporary VPN) rather than protected by TLS.

As part of the setup process, the device generates a self-signed cert, which you can put in your computer's/browser's trusted cert store for future use. Also, once the self-signed keys and certs are generated, it disables the local HTTP webserver being forwarded to via SSH, so in future the user can make a standard TLS connection to the device using the generated certificates. If a factory reset of the device is done, it blats its config and re-enables the 'setup' webserver (which only listens on localhost and therefore can only be accessed locally, i.e. via SSH tunneling).

> Non-technical users get an app ...

So now I need an already-working smartphone to set up my internet connection.

That's a good point actually. You'd need the internet to install the app too.

I think https could work fine, just give each device a unique key and dns name.

That approach has been mentioned in the comments. Some problems with it are:

If a certificate is valid beyond 825 days Chrome will not honor it. So if you generate a key and cert at the factory for every device you run into the issue of the end-user getting a device with an expired cert.

If you generate a unique key and self-signed cert users will complain and call support about the "insecure" site warning.

Communicating upstream to a third party to get a cert won't work for all users (e.g. those that need to configure a static IP from their ISP.)

You shouldn't plug something with more than 2 years of missing security updates to the internet anyway (or sell it). Reflash & restock time by then.

The cert for www.routerlogin.net (Serial c1:a1:00:64:07:61:2c:07:00:00:00:00:50:f1:09:6a) isn't revoked yet: https://crt.sh/?id=1955992027&opt=ocsp

The reporters should have asked the CAs to revoke the certificate. If the CA doesn't do that within 24 hours, it's considered a violation of the rules that the CA needs to follow to stay in browsers.

Anyone with access to the private key can submit a problem report to the CA, they're obligated to revoke if the key is exposed to a non-subscriber.

I have submitted a report - they are obligated to revoke within 24 hours.

Looks like they've failed to perform the revocation in time -- other users reported to Entrust >24h ago.

They revoked it 7 minutes shy of the 24 hour deadline after my report.

If other users reported it and they failed to act, that's a BR violation and they're required to provide an incident report to the root programs via Mozilla's mechanisms.

If they don't, and you have evidence of those reports, you should provide them to the root programs via the mozilla.dev.security.policy mailing list/group. There's already a thread about this issue, though not claiming that Entrust was notified.

(Based on timestamps, it's also possible they revoked in response to that thread).

Thanks. Here is the timestamp showing that it took more than 32 hours from reporting to revocation.


Thanks for doing that.

Can anyone make out what the funjsq.com is about?

Chinese gaming VPN service or something, bypasses China IP blocks to allow them to play on NA/EU servers.

I'm wondering why such a VPN service is included in the netgear firmware image? Wouldn't this resonate negatively with Chinese authorities?

Unless it was mandated/deployed by a state actor, looking to MITM/monitor VPN users within the country?

This reminds me of something I have been looking for for a long time. If you know any reasonable way to fix it please let me know!

This "any CA can authenticate any domain" system is ridiculous. I manage my own CA, which is fine for my own self-hosted sites, but the issue is that it doesn't protect me from a cert made valid by some other CA.

Is there any way I can whitelist my own CA such that IT ALONE will work for my domains? (note: this is a me-only thing. Don't need anyone to be able to validate these domains).

PS, I've asked this question elsewhere [0] before. I did not find an adequate answer.

[0]: https://security.stackexchange.com/questions/211401/can-you-...

Thanks for the reply!

This is very interesting, I will surely take a look. Only issue I see is that it seems to use DNS, and that's not exactly secure either.

In my mind I am thinking of something that would be more theoretically secure from any remote attack (closer to a form of key-pinning maybe?)
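Client-side pinning gets close to the "me-only" property being asked about: trust exactly one certificate fingerprint and ignore the WebPKI entirely. A rough Python sketch (the hostname and pin value are hypothetical; a real client would also handle pin rotation):

```python
# Sketch of certificate pinning on the client side. Assumption: you control
# every client that should trust these domains, so the pin itself is the
# trust decision and no CA (yours or anyone else's) can override it.
import hashlib
import socket
import ssl

PINS = {
    # hostname -> expected SHA-256 of the DER-encoded certificate
    "nas.example.internal": "d4c9...e1",  # hypothetical pin value
}

def fingerprint(der_cert: bytes) -> str:
    return hashlib.sha256(der_cert).hexdigest()

def connect_pinned(host: str, port: int = 443):
    # Disable chain verification; a cert from some other CA that chains to
    # a public root is rejected anyway unless its fingerprint matches.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    sock = ctx.wrap_socket(socket.create_connection((host, port)),
                           server_hostname=host)
    der = sock.getpeercert(binary_form=True)
    if fingerprint(der) != PINS[host]:
        sock.close()
        raise ssl.SSLError("certificate does not match pinned fingerprint")
    return sock
```

This doesn't stop other clients from accepting a cert mis-issued by another CA, but for a self-hosted, me-only setup there are no other clients to worry about.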

This is a world after QUANTUMINSERT for you. Somehow everything should be encrypted, but key management is still unsolved.

I guess I run a different firmware for my Netgear ReadyNAS 312.

This is easy to inspect because the FW is Debian-based, and enabling inbound SSH is easy.

Checking on mine, it has a cert only for the "nas.local"-domain, and the cert is stored in /etc/ssl/certs/ssl-cert-snakeoil.pem and /etc/ssl/private/ssl-cert-snakeoil.key respectively.

Have to admit I love the naming of those files :)

Edit: Obviously it is a different firmware. Mine is a NAS, this was a router. And "snakeoil" seems to be a default Debianism.

> These certificates are trusted by browsers on all platforms, but will surely be added to revocation lists shortly.

So what about all the routers that were already sold and the customers that bought them?

Are we ok with leaving all devices that are already in operation or that are currently being sold unconfigurable for non-technical users, just to protect against a highly hypothetical scenario of abusing routerlogin.net?

What's a good Netgear alternative for switches, from 5-port to 48-port? Too late for us now, but for the next buying wave.

Is this cert revoked yet?

Sure, Entrust is required to revoke it. It is bound by BRs. If it refuses, Entrust will get itself blacklisted by browsers. On the other hand, having the cert revoked will cost Netgear dearly. As a result, I wonder if Entrust might be dragging its feet on the revocation.

mini-app.funjsq.com is revoked (https://decoder.link/result/418b8d20793d3f4daa4153752e45e78b...), can't check routerlogin.net/com as they didn't paste the public cert.

routerlogin is not revoked: https://crt.sh/?id=1955992027&opt=ocsp

Name and Shame.

Responsible disclosure? How about responsible security practices first.

This is why I use OpenWrt. Shout out to the dude sticking up for screen reader users too! Thanks, friend. Shit like footnotes and captchas are the bane of my life!

To all the people shitting on Netgear and security in this thread, just how do you propose one deliver a secure network appliance to end customers which they can deploy on their network? And which is user-accessible to common users in modern browsers rejecting everything not touched by a proper CA?

Really. Please educate the world with your ingenious insight. I’ll be waiting.

The unavoidable truth is: You have to stuff the key in there somehow. There’s no way around that.

Either browsers have to start accepting “local” certs more easily, or end-user network deployed appliances can’t have ssl.

Take a pick.

Define "secure". If you've "stuffed the key in there" in such a way that other people can get it out again, and it works across all routers of that model, then it's possible for an active attacker on your network to MITM your connection to your router. So it's slightly more secure than cleartext but not much; about the same as a random self-signed certificate.

It is definitely a hard problem because there's no easy way to authenticate the router to the client, but on the other hand I'm not sure how important all of this is when it's on a local link anyway.

A "best possible practice" solution would be to have the routers issued with individual certificates at factory programming time, and provide a rollover mechanism through the admin UI.

AFAIK a CA can't issue a certificate valid for more than two years. So if you've bought the device two years after it was manufactured, you're getting an error, and that's even worse than HTTP for the user.

825 days is the limit. It's enforced in, for example, Chrome: if a certificate claims to have been issued after this rule changed and lasts longer than 825 days, Chrome considers it bogus immediately.

Certificate Authorities use the difference between 825 days and two years to offer certificate renewal weeks prior to the expiry date while keeping your "extra" days. E.g. your certificate expires 17 February 2020, but if you renew today for two years they can issue a cert that expires 17 February 2022, because that's less than 825 days in the future; if the rule were a hard two years, they couldn't do that.

As to how you'd fix this: One option is firmware updates. After all a device which goes two years without firmware updates is also not in good shape, so you could arrange with a CA to bake certificate renewal into the firmware update process. At scale it would totally make sense for a CA to offer to issue renewals at say 10¢ per device renewal for up to ten years from inception date.

The certificate issuance does not require knowledge of the private key, that stays on each individual device, renewal just issues a newer cert periodically to the same device for use with its existing key.
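The renewal arithmetic checks out; here is a quick sketch using the dates from the example above (the renewal date is approximate, taken as roughly the date of this thread):

```python
# Sketch of the 825-day validity arithmetic described above. Assumption:
# the 825-day cap enforced by browsers like Chrome; the concrete dates are
# the example's own, with "today" approximated as 20 January 2020.
from datetime import date, timedelta

MAX_VALIDITY = timedelta(days=825)

renewal_day = date(2020, 1, 20)   # "renew today"
old_expiry = date(2020, 2, 17)    # current cert's expiry
new_expiry = date(2022, 2, 17)    # old expiry + two years

# The renewed cert keeps the "extra" weeks because its total lifetime
# (renewal day to new expiry) still fits under the 825-day cap.
lifetime = new_expiry - renewal_day
assert lifetime <= MAX_VALIDITY
print(lifetime.days)  # 759
```

A hard two-year rule would force the new cert to expire on the renewal date plus two years, losing the weeks remaining on the old cert; the 825-day ceiling leaves room for that carry-over.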

So now your local, trusted device depends on untrusted, ephemeral cloud-services to be trustable by you on your local network?

That sounds like the worst of both worlds.

Can you please provide a reference to the "825 days" limitation.

Indeed. So what was the expiry situation on this CA-signed certificate in the original article?

On local networks TLS doesn't make things more secure, maybe even less in some scenarios given all the obtaining of certificates over public internet and leaking info to the public internet. So sane thing to do would be plain http. But really, browsers should stop the bullshit with https "security" and accept ssh-like behavior at least for local networks.

This. Yes, Netgear has a bad track record with security, but this is likely a calculated trade-off.


- Self signed certificates. Users will learn to acknowledge the error and an attacker can just present their own self-signed cert. This what most vendors do.

- Plain HTTP. Worst option, no security, not even against a passive listener (but no bad PR because of "security disclosures"... hooray!).

- Something similar to Keyless SSL, as suggested on GitHub and in this thread. Will at best slightly inconvenience the attacker, who can still extract any device key and impersonate the device. It also introduces a single point of failure: if the server is down, nobody can log into their devices (and HN would not be happy).

- Secure enclaves and hardware key management for the shared key. This would be workable and reasonably secure, but the implementation costs likely won't fit the threat model.

- Issue an individual SSL cert for each domain and print the domain on the box. That would work, but be very complex to manage (plus certificate costs, and plenty of failure cases around renewal).

- Reverse proxy through vendor servers. Single point of failure, and the most likely time to log into your router is when the internet is down.

Hardcoding a valid SSL cert for a single-purpose domain (routerlogin.net) is similar to a self-signed cert in terms of security, provides a better user experience and does not teach them to blindly acknowledge TLS errors. The threat model requires an attacker doing a MitM attack on the local link. Also, compromising a router provides very little access to an attacker in today's TLS-by-default world.

Anyone downvoting the parent post should offer a better solution.

Plain HTTP is fine because the IP is site-local.

So you trust every cheap IoT device connected to your local network? The LAN may be a slightly less chaotic environment than the open Internet, but that doesn't make it "safe". With a bit of ARP trickery any device on the LAN can intercept traffic intended for any other local device and carry out an active MitM attack—and with HTTP you don't even get the benefit of "trust on first use" that you'd get with a persistent self-signed certificate.

Except for the browser complaint when filling out the form.

I have to agree with the calls for allowing warning-less HTTP for LAN connections.

Edit: Add last sentence.

>I have to agree with the calls for allowing warning-less HTTP

Completely flawed assumption, see the following for details:


This is not a hard-stop like the TLS warning pages. So not a "big" issue.

This was answered on the Gist in a comment[0] linking to a tweet[1]:

Things to do instead of shipping TLS certs and static private keys to consumer-grade routers you sell by the thousand:

Generate a unique keypair per device.

Use this keypair to communicate upstream, in a similar fashion to CloudFlare's Keyless SSL.

This keypair you generate on the device would need to be preloaded at the factory, unique per device, and the public key would need to be stored in a database.

When a TLS handshake comes in, you verify the entire request is signed by that keypair before signing it.

Delegated online signing for a TLS handshake isn't even in the top 10 engineering challenges for cryptography in the 2020s.

Oh, and if you don't have connectivity to upstream and you need users to connect to configure something?

Fallback to HTTP reachable via port 80 on the private IP for the server.

[0] https://gist.github.com/nstarke/a611a19aab433555e91c656fe1f0...

[1] https://twitter.com/CiPHPerCoder/status/1219228544548720640
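A minimal sketch of that gating logic (assumptions: an HMAC key stands in for each device's asymmetric keypair, and the upstream TLS key operation is reduced to a placeholder; a real deployment would verify ECDSA/RSA signatures and perform the actual handshake signing server-side, as in Keyless SSL):

```python
# Rough sketch of the delegated-signing flow described above: the vendor's
# remote signer refuses to sign a TLS handshake unless the request is
# authenticated by the per-device key enrolled at the factory.
import hashlib
import hmac
import secrets

# Factory-time registry: device serial -> per-device key material
# (in the real scheme, the *public* half of a per-device keypair).
DEVICE_REGISTRY = {
    "RTR-0001": secrets.token_bytes(32),
    "RTR-0002": secrets.token_bytes(32),
}

def device_sign(serial, handshake):
    # What the router does: authenticate its signing request upstream.
    return hmac.new(DEVICE_REGISTRY[serial], handshake, hashlib.sha256).digest()

def upstream_sign(serial, handshake, proof):
    # What the vendor's signer does: verify the proof before signing.
    key = DEVICE_REGISTRY.get(serial)
    if key is None or not hmac.compare_digest(
        hmac.new(key, handshake, hashlib.sha256).digest(), proof
    ):
        return None  # unknown device or bad proof: no signature issued
    # Placeholder for the real TLS private-key operation done upstream.
    return hashlib.sha256(b"tls-signing-key" + handshake).digest()

hello = b"ClientHello..."
proof = device_sign("RTR-0001", hello)
assert upstream_sign("RTR-0001", hello, proof) is not None
assert upstream_sign("RTR-0002", hello, proof) is None  # wrong device
```

As the replies note, this bounds the blast radius to one device per extracted key rather than every device ever shipped, but an attacker who dumps their own router's key can still use the signer as an oracle for that device.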

An attacker can just dump the unique key, impersonate the device and use that for a MitM attack. There's just no way to do this securely without completely locking down the devices using hardware key management (which would be unreasonably expensive for a cheap router, plus bad for people who want to flash their own firmware).

Same level of security, and a lot of extra complexity and cost.

Yep, and that compromises that one device in the field. Not every device globally, today and tomorrow.

Only if each device has a unique subdomain.

> Fallback to HTTP reachable via port 80 on the private IP for the server.

How, exactly, is that better than using a common cert? What threat do you propose that applies to HTTPS with a shared cert, and not to HTTP?

This is intentional on Netgear's part, and does not in any way degrade security compared to the alternative (an untrusted cert or no HTTPS). It is neither a bug nor a security issue.

Applications are open for YC Summer 2020

Guidelines | FAQ | Support | API | Security | Lists | Bookmarklet | Legal | Apply to YC | Contact