
I find the social aspect of this interesting. Us "smart tech people" have been pushing https everywhere for a few years now as a way of protecting internet privacy "for the masses".

And now the government found a very simple non-technical workaround. Send a message to everyone requiring a government root CA with an easy install, or their internet won't work.

Now "us techies" have to find a new technical solution to a very social problem.

It never ends. :(

But we are in a better place than before. Without HTTPS everywhere and governments needing to ask people to install new root certs, we would not have learned about this Kazakhstan MITM issue.

No, we just feel better because it sounds so obviously reasonable, doesn't it?

Kazakhstan's low-tech approach is just that, low-tech and low-effort. They could have used tons of vectors besides simply saying "install this cert."

A tiny shred of effort would have been to package an "updater" that did the install without explicitly saying that's what it was for. Or better yet: Kazakhstan is committed to a greener more ecologically friendly future! All tax documentation will go paperless! Just use the provided USB Key to access your documents in electronic format!

A small morsel of effort would be to force it on OS vendors through regulation/licensing/threats/money for localized copies. A good deal of effort would hijack CRLs, pinnings, et al while demanding/sneaking the private keys of the CAs.

Public Key Infrastructure is fucking pointless when the infrastructure is precisely what you can't trust.

No, it's not pointless. This attack was detectable because of PKI. Without it the attack would not have been detectable.

Being imperfect is different than being pointless. Even if you developed the perfect algorithm for global security infrastructure, the Kazakhstan government could still just break down your door and implant the backdoor into your hardware if they wanted. So by your logic should we just forget about this encryption stuff and just do everything in plain text again?

> Being imperfect is different than being pointless.

In particular, we'd see a lot more places than Kazakhstan do this if good countermeasures weren't in place...

An implant is not necessary. Intel ME is embedded with the CPU and has access to everything.

Intel ME is indeed scary stuff, but I think you need to provide stronger evidence if you want to convince us that governments can snoop on anyone anywhere just because of ME.

We still would have easily found out had any of these methods been employed; any uncompromised machines still left in the country when they flipped the switch would start getting TLS warnings.

They could, of course, avoid spying on uncompromised machines to avoid detection, but then anyone practicing good security hygiene would be automatically left unaffected by the government spy program. Plus there'd still be the possibility of detecting malware through other means (malware on client machines is far easier to detect than MITM of unencrypted communications). Not to mention how much more difficult all this would be than simply MITMing unencrypted traffic.

The situation with HTTPS is significantly improved.

> Public Key Infrastructure is fucking pointless when the infrastructure is precisely what you can't trust.

This seems a cynical and lazy evaluation of the situation. No solution is perfect; trade-offs must be made everywhere. With the right precautions the average person can have his/her communications encrypted. This is a much better situation than the one we were in before.

How would they sneak the private keys from e.g. Digicert/Geotrust/ISRG?

And there would be no point in doing so anyway.

Chrome, etc., require that certificates descending from publicly trusted roots be published in certificate transparency logs. Someone would quickly notice bogus certs being issued and the associated root would get blacklisted.

This is why certificate transparency is required now - it means that we no longer need to trust the CAs to tell us when they’ve issued an unconstrained intermediate or cross signed a root. Previously it was essentially luck that led to CA malfeasance being detected.
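
The mechanism that makes this auditable is an append-only Merkle tree: the log commits to every certificate it has seen, and anyone can check a compact inclusion proof against the published tree head. A toy sketch of that verification, simplified to a power-of-two tree (this is the idea behind RFC 6962, not a real CT client):

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf_hash(entry: bytes) -> bytes:
    # RFC 6962 prefixes leaves with 0x00 and interior nodes with 0x01
    # so a leaf can never be confused with an interior node.
    return _h(b"\x00" + entry)

def node_hash(left: bytes, right: bytes) -> bytes:
    return _h(b"\x01" + left + right)

def build_tree(entries):
    """Return (root, per-level node lists); assumes a power-of-two count."""
    level = [leaf_hash(e) for e in entries]
    levels = [level]
    while len(level) > 1:
        level = [node_hash(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
        levels.append(level)
    return level[0], levels

def inclusion_proof(levels, index):
    """Sibling hashes needed to recompute the root from one leaf."""
    proof = []
    for level in levels[:-1]:
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))
        index //= 2
    return proof

def verify(entry, proof, root):
    h = leaf_hash(entry)
    for sibling, sibling_is_left in proof:
        h = node_hash(sibling, h) if sibling_is_left else node_hash(h, sibling)
    return h == root

certs = [b"cert-%d" % i for i in range(8)]   # stand-ins for log entries
root, levels = build_tree(certs)
proof = inclusion_proof(levels, 5)
assert verify(certs[5], proof, root)         # cert 5 is provably in the log
assert not verify(b"bogus", proof, root)     # an unlogged cert fails
```

The proof is only log2(n) hashes long, which is why monitors can cheaply audit logs containing hundreds of millions of certificates.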

Especially in the post-finally-ending Symantec world the CAs understand that issuing any such cert is likely to very quickly end their business in most other countries.

I feel the real problem kz is going to have is that they have now demonstrated that they will abuse having a root cert, so there is no way any root stores will let them in in future. I imagine they’d even have difficulty getting any of the other roots to issue certs for them (managed sub-ca I think? I forget terminology)

Yep - without HTTPS everywhere, governments would have been silently able to snoop on Internet traffic without anyone knowing.

Sarcasm? Not sure.

But all a government has to do is embed within the endpoint, post-decryption. "Or else."

It is a valid point, it becomes much more obvious that you're snooping if you're trying to MITM. If you weren't snooping, you wouldn't bother trying.

”all a government has to do is embed within the endpoint”

That’s a pretty high bar to clear though.

Not really. NSA requests are backed by LE either directly or... extortion style. https://www.wired.com/2007/10/nsa-asked-for-p/

All right. But they didn’t do it for all ~300M citizens though, did they?


They did it to everyone whose traffic transited AT&T's backbone

I haven’t immersed myself into the details of the Room 641A scandal, but it does indeed sound awful. I do not approve of the operations of NSA/Five Eyes.

But let my re-phrase my question like this: Do we have any evidence that NSA can perform MITM on TLS 1.3? Using a federal US CA would be one way, tricking a CA to issue fraudulent leaf certificates would be another, but as established elsewhere in this thread, both those ways are quite noisy. Attacking the endpoint is another way, but once Mallory does that, all bets are off.

Given it's already happened in the US, I don't think it's high enough.

This is exactly what Carnivore and PRISM were

And how would Carnivore/PRISM strip off the TLS encryption?

Not only that but they can happily MITM HTTPS as well. Not all the HTTPS sites use certificate pinning or HSTS.

It's a tough problem because certificate pinning kills a lot of legitimate use patterns; it's not something I'd like to see being the default everywhere.

Yes but this is how many companies protect their HTTPS traffic (including one financial institution I work for).

What root cert would they use for that?

The government of my country has at least one certificate that's trusted by Mozilla (and I guess Chrome and Windows too) by default.

It won't stay trusted if it is actively used for MITM attacks. At least that's the idea.

You mean CA? There are many options depending on which agency and which target you're talking about. They could steal a key from a legitimate CA if they want anonymity, or use one that's built into your browsers or systems, as somebody else already pointed out in the thread.

Oh I agree 100%. It just makes me sad that governments keep trying to spy and we have to keep coming up with new technology to make that harder.

It's not a technology problem, it's a social one. If a society values its privacy enough then it will change.

The real issue is how abstract the consequences of loss of privacy are. It requires people to actually think beyond "I've got nothing to hide".

No worries though. Corporations and governments are greedy, and they'll keep pushing the limits of society's tolerance until it blows up in their faces.

Governments are way scarier though. I can decide not to use Google, people in third world countries need the internet.

> It's not a technology problem, it's a social one.

It's both, everyone can contribute to the solution or the problem.

I'm less pessimistic. The practical result of this is likely just going to be more business for the cottage industry of Great Firewall VPNs, which already compete with one another in traffic obfuscation against an adversary far more sophisticated than the government of Kazakhstan. Thankfully, this is currently a case in which the incentives of the market happen to align well with the goals of defeating censorship.

The way that a real authoritarian government entity would handle that is...

An agency is tasked with doing random sample captures of randomly selected target internet connections.

Inventory all the types of traffic being exchanged.

Flag anything that isn't obvious plaintext or already being MiTM'ed for analysis follow up.

Implement new blocking rules or interception implementation for each flow that isn't already being intercepted.

> Flag anything that isn't obvious plaintext or already being MiTM'ed for analysis follow up.

The failure being that the long tail of uncategorized data would be large.

Do you have a good reference for what game state updates look like for every game on the internet? What about custom IoT device protocols? Every type of DRM used for media streaming? Document attachments of spreadsheets or database images containing arbitrary numeric data?

How do you distinguish data like that, which outside of some headers may be indistinguishable from random numbers, from someone using the same format or protocol for encoding arbitrary encrypted data?

They don't need to.

In an authoritarian state, you just start blocking and breaking things.

Everything you don't understand, you block. And then you make the user explain it to you and then if it's a use case you care about, you do the work to either decide it doesn't have any danger of carrying traffic you care about or build an intercept scheme for it.

Ethernet can carry protocols other than IPv4. IPv6 is one of them, but there were at one time a whole slew of them, like IPX and Appletalk. But ISPs don't carry them, so they're effectively blocked and have largely died out, and everything uses IPv4 or IPv6. Even if you want to use Appletalk today, you encapsulate in IPv4 or IPv6.

There are also a whole bunch of IP transport protocols other than TCP and UDP, but firewalls have a tendency to block them, so today people just encapsulate everything in TCP or UDP.

There are a lot of TCP and UDP ports too, with their own protocols, but those darn firewalls again, so now everything is increasingly using HTTP[S].

The things that get blocked never go away, they're just made to look like whatever is still allowed. Yes sir, Mr. Firewall, this is Hypertext Transfer Protocol over SSL on TCP port 443 using IPv4, which is approved for intercept.

Except that it's really email and games and file downloads and whatever else, with things added daily by everyone on the internet, and no reference for what all of that plaintext is even expected to look like.

So you say you're going to get a DPI classifier and try to distinguish all these different types of HTTP. Except that whatever you exclude will soon be right back encoded as formats and protocols you allowed, because information theory says you can encode anything into anything.

And it gets harder to distinguish them with every iteration, because what you're really using to distinguish them is their encoding inefficiency -- it's the things that are always the same for a given class of data, even though the relevant part of the message is the things that are different. The end state of all of this is that the real entropy is all that's left and there is nothing there to distinguish with anymore.

I'll be 40 years old later this year. I've been interested in communications and communications protocols since I was about 12. I've been a software developer with a focus on network communications for over 15 years.

I'm well aware of all that you've said.

My point was, they get TLS interception down, and they capture what they want from a target of interest.

When they look closely at your traffic and decide all these cat gifs have too much or too little entropy in the data that forms their pixels, they simply (if they're courteous) say, "Persuade me that you did not know that this app was helping you hide messages back and forth. Persuade me or we shoot you now." And then they shoot.

I could split hairs and suggest that the browser accept the phony CA and simply use a secondary encryption layer on top of it, but that misses your point. A sufficiently clever evil government will see that you're doing something encryption-like and shoot you.

But, being "sufficiently clever" isn't all that easy. China has done a good job, but they're a very big country with a lot of resources and a lot of very smart people, and let's be honest, even as good as they are, anyone with a will to get that censored information will get it.

It costs a lot to censor people on the Internet. The goal of people like me is not to stop the most determined, intelligent censorship approaches, but rather to make them as expensive as possible to build and maintain.

My ideal is to force governments to either accept the Internet without censorship, or almost completely disconnect from the Internet (and simultaneously deny their nations the competitive advantages that come with it). North Korea is a good model. They basically don't have Internet in North Korea. It's sad, but I can live with that; it's better than allowing an oppressive regime to benefit from the Internet while oppressing their citizens.

"Sufficiently clever" has historically been more expensive than difficult.

For example, in order to scale less expensively, the Great Firewall is architected such that it need not actively be in the middle of the entire flow of traffic and need not actively proxy. Historically, they didn't need it to do so in order to achieve their goals.

Now, however, the advancement of a combination of new technologies is finally closing that gap.

In order to maintain historic blocking capability it becomes necessary in the long run to actively MiTM all the connections.

But that can be made to scale and there are nations who can afford it.

How do we know? Because the job is not significantly harder than serving up all that content. (At worst it's a little more than 2x the work.)

And today most content is served up from a handful of privately owned infrastructures. If a corporation can build it, so too can a lot of nation-states.

The incentives to build this have changed.

You're proposing that the penalty for being suspected of subverting the firewall is death. In those cases you're going to want a highly refined system for avoiding detection, and it's also very important that one exist, because regimes that oppressive deserve to be opposed.

Fortunately the more typical case isn't kidnapping and execution but only having your connection blocked, which creates a helpful feedback loop that enables continuous improvement in the ability of secure communications to avoid detection. Which benefits everybody, but especially those in violent authoritarian countries that need it all the more.

No disagreement here. What's being done is despicable.

Rather than death, if we look at the history of oppressive societies, the more likely outcome is a job offer, the kind they won't let you refuse but they'll make it so you don't want to refuse anyway. They find the clever people who are working around the filters and interception and hire them to be the watchers. They get perks like time to spend on a real private connection, etc. Meanwhile they are required to contribute to making the noose ever tighter.

> You're proposing that the penalty for being suspected of subverting the firewall is death.

no, he's being hyperbolic to make the point that in an extreme situation, a default-deny approach could facilitate mass suppression of 'undesirable' traffic without creating an insurmountable backlog of traffic for the 'bad actor state' to review in determining what to process further.

> no, he's being hyperbolic to make the point that in an extreme situation, a default-deny approach could facilitate mass suppression of 'undesirable' traffic without creating an insurmountable backlog of traffic for the 'bad actor state' to review in determining what to process further.

Only it doesn't, because as soon as they allow anything, everything else starts to look enough like whatever is still allowed to make it through, because that's the only way to make it through.

Slashing away more things only increases the resources people will put behind making arbitrary traffic look like allowed traffic. It trades not having to review everything for having to fight everyone instead of only the people they want to block.

Then some people win, everyone copies the winners' methods to get through, and you're back to square one only now everything looks even more like everything else than it did before.

You've elegantly stated my point precisely. Thank you!

In CIS states they prefer the term thermo-rectal cryptanalysis. A soldering iron in one's nether regions does wonders for extracting secrets.

In that case, the old field of steganography might become useful. Embed illegal content within legal content and figure out another means of sharing the decryption scheme.
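
The classic least-significant-bit scheme gives the flavor: hide one payload bit in the low bit of each cover byte (pixel values, audio samples), which is statistically hard to spot, especially if the payload is encrypted first so the hidden bits look like noise. A toy sketch, not hardened against real steganalysis:

```python
import os

def embed(cover: bytes, payload: bytes) -> bytes:
    """Hide payload bits in the least-significant bit of each cover byte."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for payload")
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # clear the low bit, set ours
    return bytes(out)

def extract(stego: bytes, payload_len: int) -> bytes:
    """Recover payload_len bytes from the low bits of the cover."""
    bits = [b & 1 for b in stego[: payload_len * 8]]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(payload_len)
    )

cover = os.urandom(4096)          # stand-in for image pixel data
secret = b"meet at dawn"
stego = embed(cover, secret)
assert extract(stego, len(secret)) == secret
```

Each cover byte changes by at most 1, so the carrier looks unchanged to the eye; the remaining problem, as the comment says, is sharing the extraction scheme out of band.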

> Everything you don't understand, you block. And then you make the user explain it to you and then if it's a use case you care about, you do the work to either decide it doesn't have any danger of carrying traffic you care about...

You say "authoritarian state", sounds to me like the network at many employers and institutions in the US!

In essence, what happens is they implement a "if we can't see it, you can't see it" policy.

> An agency is tasked with doing random sample captures

not really, we know exactly what the government response is, and it's turning citizens on one another; that applied with the Gestapo back then and it's happening today with the "social credit system"

why do all the random sampling work if all you need is one "regime believer" per hundred people or so to maintain full awareness of dissident activities?

One of our ("tech people") main failures was that, while we made a heavy push for server authentication, we didn't make a similarly strong push for client authentication. With client certificates, MITM like that is not possible, unless the server also trusts the MITM CA to authenticate its clients (and uses a CA for the client certificates in the first place, instead of a direct mapping between users and their certificates).

Using CAs to authenticate clients is subject to the same attack. They block communication from any client that won't disclose its private key to the MITM box or use it to encrypt/sign whatever the MITM requires it to.

You can't have security if you have a MITM that says "compromise your endpoint or we block you" and you concede to that. The only real solutions are either political or making the encrypted traffic look like some permitted traffic. (Or using a different network.)

> Using CAs to authenticate clients is subject to the same attack.

You don't need to use a publicly available CA to verify client-side certificates. The server could use its own internal CA to sign CSRs from clients and send the resulting certificate back to the client via email or some other means.

In which case the MITM will be unable to connect to the server because it won't have the certificate (that you sent via email or some other means), so the service simply won't work. That's the whole point, you either go through MITM or not at all.

Somewhat related: if there were a shared password between the client and server, Password Authenticated Key Exchange (PAKE) techniques [0] could offer protection even when the server CA was compromised. PAKEs use zero-knowledge proof techniques to prove that each side already holds the password material (and to derive a key from it) without revealing the actual password to a peer that didn't have it to begin with.

In this case, only connections where a password was already agreed on would be protected vs. general unauthenticated browsing.
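
To make the flavor concrete, here is a toy SPAKE2-style exchange with deliberately weak parameters (a real PAKE uses a vetted elliptic-curve group, and the public constants M and N must have unknown discrete logs, which they obviously don't here, so treat this purely as an illustration): each side sends a Diffie-Hellman share blinded by a password-derived exponent, and the derived keys agree only if both sides started from the same password.

```python
import hashlib
import secrets

# Deliberately weak toy parameters.
P = 2**127 - 1          # Mersenne prime modulus
G = 3                   # generator
M = pow(G, 7, P)        # public blinding constant for the client
N = pow(G, 11, P)       # public blinding constant for the server

def password_scalar(password: str) -> int:
    """Derive an exponent from the shared password."""
    digest = hashlib.sha256(password.encode()).digest()
    return int.from_bytes(digest, "big") % (P - 1)

class Party:
    def __init__(self, password: str, blind: int):
        self.w = password_scalar(password)
        self.blind = blind
        self.x = secrets.randbelow(P - 2) + 1   # ephemeral DH secret

    def message(self) -> int:
        # Send g^x blinded by blind^w; without the password an
        # eavesdropper can't separate the two factors.
        return (pow(G, self.x, P) * pow(self.blind, self.w, P)) % P

    def key(self, peer_msg: int, peer_blind: int) -> int:
        # Strip the peer's blinding using our own password, then
        # finish the Diffie-Hellman exchange.
        unblind = pow(pow(peer_blind, self.w, P), -1, P)
        return pow((peer_msg * unblind) % P, self.x, P)

alice, bob = Party("hunter2", M), Party("hunter2", N)
assert alice.key(bob.message(), N) == bob.key(alice.message(), M)

# A wrong password yields a mismatched key instead of leaking anything.
eve = Party("wr0ng", N)
carol = Party("hunter2", M)
assert carol.key(eve.message(), N) != eve.key(carol.message(), M)
```

The point of the construction is that a wrong guess earns the attacker exactly one failed handshake, so offline dictionary attacks on captured traffic don't work.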

There was a draft proposal to add PAKE support to TLS 1.3, but it appears to have unfortunately expired [1].

0: https://en.wikipedia.org/wiki/Password-authenticated_key_agr... 1: https://tools.ietf.org/html/draft-barnes-tls-pake-04

It died for lack of interest, you can basically watch that happen at IETF 102 here:


TLS 1.3 was in some part an exercise in removing crap people thought might be a good idea in earlier versions, but then either never used or turned out to be a terrible idea but was notionally "optional" so you could say to keep using TLS but just disable that feature. So there is skepticism pre-existing in that room against the idea of just adding more stuff than might be cool unless it's clearly _needed_.

A feature that keeps six people in Kazakhstan (who happen to have manually pre-configured a PAKE) safe but everybody else is still screwed isn't the sort of impact TLS 1.3 was looking for.

I suppose that'd take something like obfuscated paper-mail password exchange before going digital. Or RTTY. Or 56k international calls for key setup?

How would the client certificates be distributed to users in Kazakhstan?

You could send them to the email address you used to register the account. Then install it in your browser's or OS' certificate store.

Email is similarly unencrypted by default and can be blocked if encrypted and being used as an active circumvention vector. Yes, mail servers outside the jurisdiction could work until HTTPS access to Gmail is also blocked, but that's not outside the realm of possibility (see: China), and it raises the question: how do you get to your secure Gmail web interface if you need to receive a secure email before your first login?

This is why I'm always advocating political engagement for fighting these kinds of issues. It's not exactly hard for a government to ban or forbid circumventing their monitoring. It does take time, but they're starting to catch up.

It’s far harder if you have a major tech industry to push back and the whole massive security risk this exposes big corporations to. Which is something Kazakhstan must not have much of.

This is also terrible for foreign investment and attracting business. It also makes foreign intelligence’s job easier.

You’ve got their priorities mixed up. Staying in power is more important than foreign investment if you’re an authoritarian government. What’s the point of growing the economic pie if you’re not in a position to profit from it ?

Now if you’re a politician in a democracy, you know it may be all over in about 8 years, so it’s more in your interest to cosy up to the companies

It’s rare for a politician to only work 4yrs at the policy making level. Most of them are career politicians these days or retired wealthy people, not people with regular jobs giving politics a shot. Yet they all seem to be wealthy in the US, even after years of public service, regardless of their overt stance on business politically. Which is something the big firms can always rely on.

Except I look at the linked mailing list and you already get "us techies" arguing "uh yeah but uhm this isn't so different from the corporate CA intercept thing right so let's not blacklist it uhm".

What the fuck.

There's a broader reason. If the normal browsers break this, the response will just be that they do their own national fork of an open-source browser and distribute that to their people.

The downside of pushing them to that is that that browser will be unlikely to get regular security updates and will likely hide the interception.

Actually, I don't see the issue here. It is literally the same thing as corps intercepting the connections of their employees or visitors. In fact I trust my employer even less than I trust the government.

But I disagree with the response that says we should do nothing. In fact, corporate root certs should be blocked / ignored by the browser in the exact same way and for the exact same reason. The only exception should be certs issued for a limited number of domains that are only active in a specific developer mode that can be enabled by knowledgeable users.

Sure, technological solutions can't solve this issue 100%. (My employer can also fork a browser.) But acting as if everything is OK when the connection is being MITMed is wrong and browsers shouldn't do it.

> corporate root certs should be blocked / ignored by the browser in the exact same way and for the exact same reason ... technological solutions can't solve this issue 100%

Technological solutions can't solve this at all if the entire stack is controlled by the interested party.

In the case of government snooping, you (theoretically) own the end device being used for access. In the case of corporate snooping, you're using corporate owned and managed devices. There is absolutely no technological solution that exists that will prevent another person from building software for (or selling to) corporations who need to snoop on their employees. Considering the selling price of appliances that perform these services (e.g. Bluecoat's range), the cost of a browser is negligible in comparison.

I don't think it's fair to conflate a lack of privacy on corporate owned devices with a lack of privacy on your own personal devices.

"So do we make our flagship product useless for the entire country or not?" - The real question

Yes? This isn't that complicated. You break it, and when competitive browser X refuses to do so, you sell the idea that browser X is compromised for all users everywhere (not just in Kazakhstan)

Stop thinking about the country with literally less than 1% of world internet users and start thinking of the reputational damage a less than charitable presentation of your collaboration with a totalitarian state against your users would do to the other 99%+ of your market.

Apple is openly collaborating with the Chinese regime, including allowing the government to snoop on all Chinese traffic, yet they still have a high reputation for privacy. This just doesn't work; people don't give a shit about other countries.

That's fair, but the country doing this will just fork an open-source browser and make it their official browser.

Sure. "don't use Kazakhfox, it's malware, we've submitted definitions to the AV databases" isn't a hard sell for your 99%+ audience.

Malware forks of open source projects (and closed-source software!) are not a new problem.

Except they are a new problem when the use of them is mandated by a nation-state.

Which is bad news for the ~15m internet users in Kazakhstan. For the ~4000m internet users not in Kazakhstan & generally immune to their rubber hose attack, protecting them from being one BGP fuckup away from being MITMed by a hostile foreign power is much more important.

Totally separate problem that I agree needs to be fixed.

In reality, being one BGP trick away from a merely dedicated individual or corporation owning certs for your domain is an actual risk today.

Are you willing to intentionally break your software (which is currently working) for an entire country?

If you want to put a stop to things like this, then you have to. Complaints from companies and the general population should be enough to fix the issue.

Mere collateral damage.

Is it better to be complicit with an authoritarian regime that is actively spying on their people, in order to have a marginally larger user base? I don't think so.

In fact, you're making it worse because you're giving legitimacy to a government that is conducting actions which we shouldn't consider acceptable. If the US government started doing the same thing, I would really hope that browsers would block those certificates too.

The solution to that problem was invented and reinvented hundreds of years ago. It is called gunpowder.

This is both uncomfortable and correct.

Russian revolution says hello.

It only becomes a social problem after the society gets the tools to know that the government is messing with their communications.

HTTPS is that tool. It is a social problem now; it was a technical problem until just recently.

Will OCSP stapling be able to be used to detect "something fishy" going on, because in that case the root CA wouldn't actually match? Do browsers compare the OCSP root with the root of the current chain?

Actually, if it's a MITM it's "all bets are off", isn't it, because the KZ government can filter that out of the proxied response?

Still, if OCSP can assist at all, it's probably worth it for browsers to check for a mismatch (if they don't already)

Edit: typos

Browsers always trust manually installed CA roots, because that scenario is used by many corporations to monitor their traffic. OCSP, HPKP, etc won't help.

There are more benign uses too - many organisations run an internal PKI, and installing their root CA prevents employees' browsers from displaying warnings about untrusted certificates when accessing internal web apps/sites.

You might be able to make intranet.company-name.tld and have a parking page on the company-name.tld and use that to get a wildcard cert that can be used for the internal pages.

Which you distribute to thousands of people on tens of thousands of devices?

With this you would have a valid cert for your intranet, i.e. no need to install a self-signed one.

However you would have a single wildcard cert + key that would need to be placed on thousands, or tens of thousands, of machines, by hundreds of staff in dozens of departments.

It would be meaningless.

I can prove ownership and then receive a wildcard certificate for *.internal.company.com, usually via a TXT record or similar (let's ignore EV certs for now). However, that certificate isn't an intermediate certificate that's limited to signing new end-entity certificates for blah1.internal.company.com while being unable to sign for blah1.not.company.com.

I'm no SSL/TLS expert by any means, so please let me know if I'm wrong and it is fairly easy to get intermediate certificates that are domain-limited; x509 name constraints are apparently flaky.
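
For reference, the mechanism under discussion is the X.509 nameConstraints extension. A sketch of the OpenSSL extension section a CA could apply when signing such a domain-limited intermediate (client-side enforcement has historically been uneven, which is the flakiness being alluded to):

```ini
# x509v3 extensions for a name-constrained intermediate CA
[ constrained_sub_ca ]
basicConstraints = critical, CA:TRUE, pathlen:0
keyUsage         = critical, keyCertSign, cRLSign
# End-entity certs chaining through this CA are only valid for these names:
nameConstraints  = critical, permitted;DNS:.internal.company.com
```

Marking the extension critical means a client that doesn't understand name constraints must reject the chain rather than silently ignore the restriction.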

That would be a bad use IMO. Letsencrypt solves any need for legitimate certificates.

I don't think it's a bad use. When I logon to my SAN or UPS web interfaces, I don't want to type https://ups01.publicDNSdomain.com, and visit a site with a CT logged certificate. It's an absolutely internal thing and every Active Directory domain already has an (ideally) non-externally resolved DNS domain setup for use. You've already got an internal CA and deployed your own root because there's a series of Microsoft services that work best this way, so it makes a lot of sense to continue to use rather than trying to introduce Lets Encrypt in this scenario.

You don't have to serve that website publicly or even set up public DNS records for it. You only need to set up DNS verification to serve one public TXT record for Letsencrypt. Everything else can stay internal. Letsencrypt certifies that you own the domain. You can do anything with that domain.
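
Concretely, the TXT record Letsencrypt checks during the DNS-01 challenge is derived from the challenge token and your account key: per RFC 8555 it's the base64url-encoded SHA-256 of the key authorization. A sketch with placeholder values (the real token comes from the ACME server, and the thumbprint from your account key's JWK per RFC 7638):

```python
import base64
import hashlib

def b64url(data: bytes) -> str:
    """Base64url without padding, as ACME requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Placeholder values for illustration only.
token = "evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA"
account_key_thumbprint = b64url(hashlib.sha256(b"example-jwk").digest())

key_authorization = f"{token}.{account_key_thumbprint}"
txt_value = b64url(hashlib.sha256(key_authorization.encode()).digest())

# Publish: _acme-challenge.internal.company.com. IN TXT "<txt_value>"
print(txt_value)
```

Nothing in the exchange requires the host itself to be reachable; only the DNS zone has to answer publicly.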

Sometimes you don't want to make that information public though. For security (you don't want to publish your whole tech stack information) and secrecy (you don't want to publish registration of halflife3.internal.valve.com).

Then just use a wildcard cert.

Wildcard certs are a security ops nightmare. You really don't want to throw the private key for that around to every small project, and you need some good, automated way of rolling them across multiple services. Doable, but if you can avoid this, it's better to avoid.

This 100x - in just about any organisation of any size, if you use a single wildcard cert for all internal services, then it's inevitable that the private key will end up in the hands of an employee that shouldn't have it.

I'm aware you can use Lets Encrypt that way, I just don't agree that it's bad use of an internal PKI to use it as an alternative.

Well, it's unnecessary work to install and maintain that internal CA. Keeping the CA key safe is very important, because a leaked key could let your internal connections to, e.g., Google be compromised, so it's like keeping a bomb inside your building. If you already have that internal PKI, you can use it, sure, but I still think it's a bad idea to set one up only for internal websites.

> Letsencrypt solves any need for legitimate certificates.

... unless you want any private keys to be personally signed and or generated by bob & alice over in security after checking some boxes in an internal audit form, or any other number of company-internal schemes involving signing and encryption of business-specific data

You generate the private key securely. You generate a CSR, which contains the public key and is signed by that private key, and now you need to move that CSR from the private location to a public one. But that's fine: it contains nothing that could be compromised, and your private key stays safe. Then you use Let's Encrypt to issue a certificate from that CSR, and you keep reusing the same CSR (it does not expire) to renew the certificate. All that time the private key is kept in safety and is only used by your webserver. Let's Encrypt lets you issue legitimate certificates for internal websites without any compromise on security.
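
A minimal sketch of that workflow with openssl (all names here are placeholders, and the ACME step is shown only as a comment):

```shell
set -e
cd "$(mktemp -d)"

# Generate the private key in your private location; it never moves.
openssl genrsa -out internal.key 2048

# Create a CSR: it carries the public key and is signed by the private
# key, so it's safe to copy to the machine that talks to Let's Encrypt.
openssl req -new -key internal.key -out internal.csr \
  -subj "/CN=internal.example.com"

# Sanity-check the CSR's self-signature.
openssl req -in internal.csr -noout -verify

# The same CSR can then be reused for every renewal, e.g.:
#   certbot certonly --csr internal.csr --preferred-challenges dns
```

The key point is that only `internal.csr` ever leaves the private location; the private key stays where it was generated.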

The only use-case that's not possible with Let's Encrypt is issuing a certificate for an IP address.

Lol. Sure, company sysadmins will run certbot on their mainframes.

There are plenty of clients for letsencrypt, including even Bash ones. That should not be a problem.

Letsencrypt only issues certs for publicly accessible hosts. If you've got a bunch of intranet servers / REST services / whatever that are firewalled from the public internet, you're out of luck.

That's incorrect: you can verify using DNS zone records, so the server can be as firewalled or air-gapped as you want.
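
For reference, the DNS-01 flow looks roughly like this with certbot (an interactive sketch; the domain is a placeholder, and the only thing ever publicly resolvable is one TXT record):

```shell
# Ask for a cert for a host that is never publicly reachable; only the
# _acme-challenge TXT record has to be visible on public DNS.
certbot certonly --manual --preferred-challenges dns -d internal.example.com

# certbot then asks you to publish a record shaped like:
#   _acme-challenge.internal.example.com. 300 IN TXT "<validation-token>"
# and you can confirm it's visible before continuing:
#   dig +short TXT _acme-challenge.internal.example.com
```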

For mobile apps, though, you can bootstrap HPKP with a key built into the app. I worked on an app doing this, and it would certainly fail to connect in this scenario.

A lot of internal enterprise networks use MITM, so your app won't work there as well. It might be a good thing or not, depending on your use-case.

Yeah, I considered this a feature. As mentioned elsewhere in these comments, we should have a way to limit the scope of corporate certs.

One solution is to use Name Constraints. The organizational certificate authority could be issued with Name Constraints limiting its power to a certain domain name only, e.g. *.example.com, using Permitted Subtree.

If I was setting up an organizational CA for internal websites (not MITM), I would consider using Name Constraints to limit the certificate's scope and potential for abuse or compromise.

If the app is not for that particular corporation, then no harm done.

Not when the cert has been previously CT and Staple preloaded I suspect?

If a user manually imports a CA, it bypasses protections like CT [1]. This is a feature specifically designed to allow MITM for corporate proxies.

Always seemed like a misfeature to me, but all the browsers do it.

[1] https://chromium.googlesource.com/chromium/src/+/master/net/...

Sounds ridiculous that even when a site host specifically says they want things Stapled and CT'd, it gets ignored like that.

Would the firewall allow your app's traffic if you do not use the Kazakh certificate as a root certificate?

> Now "us techies" have to find a new technical solution to a very social problem.

Cert pinning does mitigate it for apps, doesn't it? The end-user doesn't really need to worry about rogue root CAs, if my understanding is right.

Traditional VPNs, P2P VPNs, or Tor as a proxy (a decentralised net? dat/i2p/freenet/ipfs) could solve it generally across various use-cases; of these, VPNs are already mainstream.

> Cert pinning does mitigate it for apps, doesn't it?

Applications where the developer has pinned to their own certificate will stop this attack.

Chrome and Firefox will ignore pinning for locally installed CAs. This is a very common use case in the enterprise where, for example, a bank has audit requirements to decrypt and store all workstation traffic.

It'll "stop this attack" by ensuring that the app won't work through the MITM - so it won't be able to connect from any Kazakhstan users unless/until the pinning is removed.

Not necessarily. We have rolled out SSL inspection at my company and have to exclude certain apps (e.g. Dropbox, Google Drive) or else they won't work. The FW just blocks the connection and the user gets an SSL/TLS error.

Sure, but then instead of the government saying "hey, run this thing or else your app won't work," they can only say "your app doesn't work now." Spying on you is still prevented.

I don't think pinning will work with, for example, Let's Encrypt. You can pin many certs, but if you lose them all you are screwed. If you check your root certs you will likely find one from every major ISP in your country.

You would usually pin an intermediate, so for Let's Encrypt that would be Let's Encrypt Authority X3 (it might also make sense to pin Let's Encrypt Authority X4 as a backup)
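
The value you'd actually pin is typically the base64 SHA-256 of the certificate's SubjectPublicKeyInfo. A self-contained sketch, using a throwaway self-signed cert as a stand-in for the real intermediate:

```shell
set -e
cd "$(mktemp -d)"

# Throwaway cert standing in for the intermediate you want to pin
# (in practice you'd fetch the real Let's Encrypt intermediate).
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -keyout pin.key -out pin.crt -subj "/CN=Stand-in Intermediate"

# HPKP-style pin: base64(SHA-256(SubjectPublicKeyInfo)).
pin=$(openssl x509 -in pin.crt -pubkey -noout \
  | openssl pkey -pubin -outform der \
  | openssl dgst -sha256 -binary \
  | base64)
echo "pin-sha256=\"$pin\""
```

Pinning the public key rather than the whole certificate is what lets the pin survive routine certificate reissuance under the same key.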

And er, no, the overlap between operators of public Certificate Authorities and national ISPs is very small. There are only 57 root CAs trusted by Mozilla.

You don’t need to pin against a state. Just deselect Kazakhstan as a region where your app is offered, because it’s not going to work anyway if you try.

The solution is to warn users that their security+privacy is compromised, and let them make their own informed choice. Techies don't often see that their own wishes shouldn't trump those of individuals (but maybe we're getting into politics now)

Another technical solution would have been to allow security without privacy. If the purpose of the government's actions is just to monitor content, you can enable that without disabling security. The HTTP protocol could be modified to transmit checksums signed by a cert, so that a client can verify that content has not been modified; the content could (optionally) be left unencrypted, yet content-injection attacks still couldn't take place.

But privacy advocates don't like it, so the result is either you have total security + privacy (such as it is), or none at all.
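
A toy sketch of that "integrity without confidentiality" idea: the server signs a digest of the plaintext body, and a client verifies it with the server's public key (all file and host names here are illustrative):

```shell
set -e
cd "$(mktemp -d)"

# Server's keypair; in practice the public half would come from its cert.
openssl genrsa -out server.key 2048
openssl rsa -in server.key -pubout -out server.pub

# A plaintext "response body" anyone on the path can read.
printf 'hello, readable page' > body.html

# Server side: sign the body's SHA-256 digest.
openssl dgst -sha256 -sign server.key -out body.sig body.html

# Client side: verify integrity; prints "Verified OK" if untampered.
openssl dgst -sha256 -verify server.pub -signature body.sig body.html

# Any in-path modification now breaks verification.
printf ' + injected ad' >> body.html
openssl dgst -sha256 -verify server.pub -signature body.sig body.html \
  || echo "tampering detected"
```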

Unfortunately governments like that will continue to do low effort workarounds as long as they have police and military forces to respond to those who don't conform.

A first suggestion is to ban the cert in chrome/firefox and then keep banning certs as they issue them.

> Send a message to everyone requiring a government root CA with an easy install, or their internet won't work.

They’re training their entire population to install things that they get in unsolicited emails that purport to be from a legitimate source.

What could go wrong?

It protects privacy for the masses in the countries where most techies live, which is what most of us were paying attention to.

In places like Kazakhstan and China it's a harder problem, and HTTPS is necessary but not sufficient to solve it.

Before we celebrate defeat, let's just acknowledge that these practices are not taking place in the US, EU, etc.

And compromising HTTPS in places with a functional judicial system (and human rights) would probably be blocked by an endless series of lawsuits.

Everything can be justified by national security. You can't try to block something with "an endless series of lawsuits" if the defending party doesn't even need to provide any kind of proof. "Why are we MITMing HTTPS? This is unconstitutional! -- Because the internet is a threat to national security, which in turn has higher priority than your rights, Citizen! Everything else is classified and will be discussed in a closed court." Take a wild guess what that court is going to decide...

Aren't some US providers (Comcast, Verizon?) injecting nasty tracking/advertising into HTTP pages?

That's extremely worrying as well, and it appears politicians so far are unwilling to make it illegal. There needs to be more protest and more competition so consumers can vote with their wallets.

Maybe this sort of interruption is manageable when half of your 18 million people are rural and the economy isn't heavily dependent on internet traffic. Try doing this in a more urban populated country and you will see a much different outcome.

You just start small, with small cities, so the rest of the people have enough time to prepare.

Technology enables policies both good and not so good. This is just another example of that.

The goal here is to validate that one is communicating with whom they think they are. That's as pure a technical goal as you can get. Oddly this is still an unsolved problem on the internet. BTW any solution to this problem will be attacked or shot down by governments and companies around the world.

what good?

I was certain I read a few years ago that Google would mandate that all OEMs would be forced to use a single unified certificate list, which I thought at the time was a way to pre-empt this sort of thing. But I can't find any new info about that anywhere. I only found an article about how to add new certificates on new Android versions in 2019, so I guess you can still change them.

I wonder if Google changed its mind about this once Sundar Pichai took over and then gave Project Dragonfly the greenlight.

>Send a message to everyone requiring a government root CA with an easy install, or their internet won't work.

but at least we know

> It never ends.

Yeah. Fangs vs shells. Microbes vs white cells.

It's just the way this universe works. The struggle is eternal. Probably built into the root parameters of the Big Bang, if you could somehow trace it that far back in time and causality (which you probably can't, I dunno).

If this struggle ever ended, we would just find another struggle; it's in our nature.

It already exists https://www.torproject.org .

We need new measures to not allow these certificates to be installed unless they're verified, or at least the OS shows a massive giant warning "DO NOT DO THIS unless you accept this cert gives $identity access to all your data".

Seems a very solvable problem.

Verified by whom? I certainly want my browser and OS to retain my possibility of installing certificates all day long.

Trivial technological solutions will not stop the state actor from retaliating against those not following their policy either.

I mean, the choice being presented is to install the MITM cert, or to not use the internet at all. The latter is an answer, certainly, but not what I would call a very good solution.

The government is forcing people to find a third choice and they might not like what they pick.

It's a common meme that users will click "yes" to everything, but I'm not sure people realise just how far that goes. Look how it looks when Chrome marks a site as malware:


Wait until you're doing forensics on a cryptolocker outbreak and you find not only did a user do that, but multiple users helped her through it and the management then praised her for overcoming technical barriers even after it was found to be the cause of the incident.

Unfortunately nothing about warnings makes anything a solved problem.

Corporations also do this so they can scan traffic for data exfil.

Which is, tbqh, a useless solution. Oh wow, now an attacker just has to include some obfuscated javascript encryption lib. Bam. Exfil detection completely bypassed.

For example, corporations might want to make sure a worker is not sending e-mails with confidential data from their Gmail. A sophisticated thief will surely circumvent that kind of protection, but a lot of thieves are stupid, so simple measures actually work.

True, but Joe Dipseedoodle doesn't accidentally send out an HR report because he was logged into his personal email account.

Too much security is willing to give up on the 95% because they can't get the 100%.

Is that a new word I should know?

It's a shortened version of "exfiltration"

Steganography. With a good key and enough stuffing, it is undetectable

If they're scrutinizing you closely, I would not trust my life to it. If the hash of the cat gifs you're exchanging is different from others, eventually there's going to be too much or too little entropy in those pixels and they'll make a strong statistical case against you.

Do you mean steganography? Stenography is writing in shorthand (or typing on a stenotype, like court reporters do).

autocorrect strikes again :)

To be fair, some of us said from the beginning that getting every user used to trusting the green check would cause this sort of trust fatigue, to the point where the majority would stop bothering with the actual certificate content and trust chain (you can search my history highlighting this very issue in relation to Let's Encrypt). It was a social issue from the very beginning, and I got downvoted heavily and repeatedly because apparently "techies" can't be bothered with exceptions and failure modes once a catch-all solution is found.

but the warning signs were all there i.e. https://news.ycombinator.com/item?id=17298747#17304077

We don’t want users to trust the green check, because it never meant you could trust a site. We do want users to distrust plaintext, because it means the café you’re visiting can steal your password. I don’t see how this is a good criticism of the push for HTTPS everywhere (which appears to be the context here).

Because the café can just ask people to "install this extension to browse", and non-tech users, being users, will fall for it most of the time; because a state actor can do even worse; and because once we've trained users that no green check is dangerous, they won't have the knowledge to distinguish between "insecure with no check" and "maybe secure with a check, but I have to inspect the details every time". People will shortcut that part, until some researcher finds out for them, and even then it's not guaranteed the message will reach everyone.

Exactly - this is why we don't want to train people that non-TLS traffic is insecure, but rather keep them from encountering it, ever, ideally. TLS must be the default- a baseline- and deviating from that baseline must be at least as hard as getting a user to install a malicious extension.

Ever SSHed into a server and been told by your SSH client that, oh, by the way, the server is using the NULL cipher with no authentication, and network attackers can mess with your session arbitrarily? Probably not. That's what using plaintext HTTP should feel like.

I think you're missing one important detail: the idea behind the green padlock is that the average end user isn't technically capable of monitoring (or shouldn't have to monitor) all the details of their internet connections to make sure they're secure.

If that basic intuition about users is correct, the solution is not to give up on this and force users to deal with the true complexity of the situation. The solution is for the browser to show a red blinking INSECURE instead of the green padlock when the cert it receives for a site doesn't have a valid chain to a root in the default key store shipped with the browser.

To be honest, I can't figure out why this isn't already the default behavior. It would solve a bunch of other problems as a side effect, including insecure crappy antivirus programs that MITM your internet connection.

If they can force a cert into your OS trust store, they can force a cert into your browser trust store. This solves some very specific issue, but not this one.

That's why I said "store shipped with the browser". I don't think Kazakhstan has the ability to get Firefox to ship their root cert.

This is kinda rich under an article where they forced a cert into the OS trust store. It takes the same amount of effort to get the cert into browser-specific stores, because these need to be editable, and an installer gets control of the system anyway.

"it rather involves being already at the other end of this airtight doorway"

The current page asks the user to run an installer, elevating privileges. There's nothing a browser can really do against that. DLLs can be replaced, signatures can be tampered with, etc.

Just because you said "ship them with the browser" doesn't make you magically right nor safe under the linked threat.

Alerting the user when a MITM certificate is active in the trust store is relying on a completely different threat model than "protect the entire operating system against state-mandated malware". I'm saying browsers should at least do the former. You seem to think that's pointless unless they also do the latter, but of course they can't do that. Some security of the trust store is better than no security.
