And now the government found a very simple non-technical workaround. Send a message to everyone requiring a government root CA with an easy install, or their internet won't work.
Now "us techies" have to find a new technical solution to a very social problem.
It never ends. :(
Kazakhstan's low-tech approach is just that, low-tech and low-effort. They could have used tons of vectors besides simply saying "install this cert."
A tiny shred of effort would have been to package an "updater" that did the install without explicitly saying that's what it was for. Or better yet: Kazakhstan is committed to a greener more ecologically friendly future! All tax documentation will go paperless! Just use the provided USB Key to access your documents in electronic format!
A small morsel of effort would be to force it on OS vendors through regulation/licensing/threats/money for localized copies. A good deal of effort would be to hijack CRLs, pinning, et al. while demanding/sneaking the private keys of the CAs.
Public Key Infrastructure is fucking pointless when the infrastructure is precisely what you can't trust.
Being imperfect is different than being pointless. Even if you developed the perfect algorithm for global security infrastructure, the Kazakhstan government could still just break down your door and implant the backdoor into your hardware if they wanted. So by your logic should we just forget about this encryption stuff and just do everything in plain text again?
In particular, we'd see a lot more places than Kazakhstan do this if good countermeasures weren't in place...
They could, of course, avoid spying on uncompromised machines to avoid detection, but then anyone practicing good security hygiene would be automatically left unaffected by the government spy program. Plus there'd still be the possibility of detecting malware through other means (malware on client machines is far easier to detect than MITM of unencrypted communications). Not to mention how much more difficult all this would be than simply MITMing unencrypted traffic.
The situation with HTTPS is significantly improved.
This seems a cynical and lazy evaluation of the situation. No solution is perfect; trade-offs must be made everywhere. With the right precautions, the average person can have his/her communications encrypted. This is a much better situation than the one we were in before.
Chrome, etc., require that certificates descending from publicly trusted roots be published in Certificate Transparency logs. Someone would quickly notice bogus certs being issued and the associated root would get blacklisted.
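For background: CT logs are append-only Merkle trees (RFC 6962), so a bogus cert either shows up in the publicly auditable tree or the log equivocates about its tree head, and both are detectable. Here's a stdlib sketch of the hashing scheme (simplified to adjacent pairing; RFC 6962 proper splits at the largest power of two):

```python
import hashlib

def leaf_hash(entry: bytes) -> bytes:
    # RFC 6962 domain-separates leaves (0x00) from interior nodes (0x01)
    return hashlib.sha256(b"\x00" + entry).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(b"\x01" + left + right).digest()

def merkle_root(entries: list) -> bytes:
    if not entries:
        return hashlib.sha256(b"").digest()  # RFC 6962 empty-tree hash
    level = [leaf_hash(e) for e in entries]
    while len(level) > 1:
        pairs = [node_hash(level[i], level[i + 1])
                 for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:            # odd node is carried up unchanged
            pairs.append(level[-1])
        level = pairs
    return level[0]
```

Any change to a logged entry changes the root, which monitors cross-check against the signed tree heads the log publishes.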
Especially now that the Symantec distrust has finally concluded, the CAs understand that issuing any such cert is likely to end their business in most other countries very quickly.
I feel the real problem KZ is going to have is that they have now demonstrated that they will abuse having a root cert, so there is no way any root stores will let them in in the future. I imagine they'd even have difficulty getting any of the other roots to issue certs for them (a managed sub-CA, I think? I forget the terminology).
But all a government has to do is embed within the endpoint, post-decryption. "Or else."
That’s a pretty high bar to clear though.
They did it to everyone whose traffic transited AT&T's backbone.
But let me re-phrase my question like this: Do we have any evidence that the NSA can perform MITM on TLS 1.3? Using a federal US CA would be one way, tricking a CA into issuing fraudulent leaf certificates would be another, but as established elsewhere in this thread, both those ways are quite noisy. Attacking the endpoint is another way, but once Mallory does that, all bets are off.
The real issue is how abstract the consequences of loss of privacy are. It requires people to actually think beyond "I've got nothing to hide".
No worries though. Greedy corporations and governments are greedy, and they'll keep pushing the limits of society's tolerance until it blows up in their faces.
It's both, everyone can contribute to the solution or the problem.
An agency is tasked with doing random sample captures of randomly selected target internet connections.
Inventory all the types of traffic being exchanged.
Flag anything that isn't obvious plaintext or already being MiTM'ed for analysis follow up.
Implement new blocking rules or an interception mechanism for each flow that isn't already being intercepted.
The failure being that the long tail of uncategorized data would be large.
Do you have a good reference for what game state updates look like for every game on the internet? What about custom IoT device protocols? Every type of DRM used for media streaming? Document attachments of spreadsheets or database images containing arbitrary numeric data?
How do you distinguish data like that, which outside of some headers may be indistinguishable from random numbers, from someone using the same format or protocol for encoding arbitrary encrypted data?
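The classifier's core heuristic can be sketched in a few stdlib lines (a toy illustration, not real DPI): high per-byte Shannon entropy flags a payload as "possibly encrypted", but compressed media and dense binary formats score just as high, which is exactly the problem.

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of the payload, in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

# English text sits well below the 8 bits/byte ceiling...
prose = b"the quick brown fox jumps over the lazy dog " * 20
# ...while ciphertext (simulated here with os.urandom) hugs it -- but so do
# JPEGs, compressed archives, and game-state deltas, so the heuristic alone
# can't tell "encrypted" from "merely dense".
blob = os.urandom(4096)
```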
In an authoritarian state, you just start blocking and breaking things.
Everything you don't understand, you block. And then you make the user explain it to you and then if it's a use case you care about, you do the work to either decide it doesn't have any danger of carrying traffic you care about or build an intercept scheme for it.
There are also a whole bunch of IP transport protocols other than TCP and UDP, but firewalls have a tendency to block them, so today people just encapsulate everything in TCP or UDP.
There are a lot of TCP and UDP ports too, with their own protocols, but those darn firewalls again, so now everything is increasingly using HTTP[S].
The things that get blocked never go away, they're just made to look like whatever is still allowed. Yes sir, Mr. Firewall, this is Hypertext Transfer Protocol over SSL on TCP port 443 using IPv4, which is approved for intercept.
Except that it's really email and games and file downloads and whatever else, with things added daily by everyone on the internet, and no reference for what all of that plaintext is even expected to look like.
So you say you're going to get a DPI classifier and try to distinguish all these different types of HTTP. Except that whatever you exclude will soon be right back encoded as formats and protocols you allowed, because information theory says you can encode anything into anything.
And it gets harder to distinguish them with every iteration, because what you're really using to distinguish them is their encoding inefficiency -- it's the things that are always the same for a given class of data, even though the relevant part of the message is the things that are different. The end state of all of this is that the real entropy is all that's left and there is nothing there to distinguish with anymore.
I'm well aware of all that you've said.
My point was, they get TLS interception down, and they capture what they want from a target of interest.
When they look closely at your traffic and decide all these cat gifs have too much or too little entropy in the data that forms their pixels, they simply (if they're courteous) say, "Persuade me that you did not know that this app was helping you hide messages back and forth. Persuade me or we shoot you now." And then they shoot.
But, being "sufficiently clever" isn't all that easy. China has done a good job, but they're a very big country with a lot of resources and a lot of very smart people, and let's be honest, even as good as they are, anyone with a will to get that censored information will get it.
It costs a lot to censor people on the Internet. The goal of people like me is not to stop the most determined, intelligent censorship approaches, but rather to make them as expensive as possible to build and maintain.
My ideal is force governments to either accept the Internet without censorship, or almost completely disconnect from the Internet (and simultaneously deny their nations the competitive advantages that come with it). North Korea is a good model. They basically don't have Internet in North Korea. It's sad, but I can live with that; it's better than allowing an oppressive regime to benefit from the Internet while oppressing their citizens.
For example, in order to scale less expensively, the Great Firewall is architected such that it need not actively be in the middle of the entire flow of traffic and need not actively proxy. Historically, they didn't need it to do so in order to achieve their goals.
Now, however, the advancement of a combination of new technologies is finally closing that gap.
In order to maintain historic blocking capability it becomes necessary in the long run to actively MiTM all the connections.
But that can be made to scale and there are nations who can afford it.
How do we know? Because the job is not significantly harder than serving up all that content. (At worst it's a little more than 2x the work.)
And today most content is served up from a handful of privately owned infrastructures. If a corporation can build it, so too can a lot of nation-states.
The incentives to build this have changed.
Fortunately the more typical case isn't kidnapping and execution but only having your connection blocked, which creates a helpful feedback loop that enables continuous improvement in the ability of secure communications to avoid detection. Which benefits everybody, but especially those in violent authoritarian countries that need it all the more.
Rather than death, if we look at the history of oppressive societies, the more likely outcome is a job offer, the kind they won't let you refuse but they'll make it so you don't want to refuse anyway. They find the clever people who are working around the filters and interception and hire them to be the watchers. They get perks like time to spend on a real private connection, etc. Meanwhile they are required to contribute to making the noose ever tighter.
no, he's being hyperbolic to make the point that in an extreme situation, a default-deny approach could facilitate mass suppression of 'undesirable' traffic without creating an insurmountable backlog of traffic for the 'bad actor state' to review in determining what to process further.
Only it doesn't, because as soon as they allow anything, everything else starts to look enough like whatever is still allowed to make it through, because that's the only way to make it through.
Slashing away more things only increases the resources people will put behind making arbitrary traffic look like allowed traffic. It trades not having to review everything for having to fight everyone instead of only the people they want to block.
Then some people win, everyone copies the winners' methods to get through, and you're back to square one only now everything looks even more like everything else than it did before.
You say "authoritarian state", sounds to me like the network at many employers and institutions in the US!
Not really. We know exactly what the government response is, and it's turning citizens against one another. That applied to the Gestapo back then, and it's happening today with the "social credit system".
Why do all the random sampling work if all you need is one "regime believer" among a hundred people or so to maintain full awareness of dissident activities?
You can't have security if you have a MITM that says "compromise your endpoint or we block you" and you concede to that. The only real solutions are either political or making the encrypted traffic look like some permitted traffic. (Or using a different network.)
You don't need to use a publicly available CA to verify client-side certificates. The server could use its own internal CA to sign CSRs from clients and send the resulting certificate back to the client via email or some other means.
In this case, only connections where a password was already agreed on would be protected vs. general unauthenticated browsing.
There was a draft proposal to add PAKE support to TLS 1.3, but it appears to have unfortunately expired.
TLS 1.3 was in some part an exercise in removing crap that people thought might be a good idea in earlier versions but then either never used, or that turned out to be a terrible idea yet was notionally "optional", so you could say to keep using TLS but just disable that feature. So there is pre-existing skepticism in that room against adding more stuff that might be cool unless it's clearly _needed_.
A feature that keeps six people in Kazakhstan (who happen to have manually pre-configured a PAKE) safe but everybody else is still screwed isn't the sort of impact TLS 1.3 was looking for.
This is also terrible for foreign investment and attracting business. It also makes foreign intelligence’s job easier.
Now if you're a politician in a democracy, you know it may be all over in about 8 years, so it's more in your interest to cosy up to the companies.
What the fuck.
The downside of pushing them to that is that that browser will be unlikely to get regular security updates and will likely hide the interception.
But I disagree with the response that says we should do nothing. In fact, corporate root certs should be blocked / ignored by the browser in the exact same way and for the exact same reason. The only exception should be certs issued for a limited number of domains that are only active in a specific developer mode that can be enabled by knowledgeable users.
Sure, technological solutions can't solve this issue 100%. (My employer can also fork a browser.) But acting as if everything is OK when the connection is being MITMed is wrong and browsers shouldn't do it.
Technological solutions can't solve this at all if the entire stack is controlled by the interested party.
In the case of government snooping, you (theoretically) own the end device being used for access. In the case of corporate snooping, you're using corporate owned and managed devices. There is absolutely no technological solution that exists that will prevent another person from building software for (or selling to) corporations who need to snoop on their employees. Considering the selling price of appliances that perform these services (e.g. Bluecoat's range), the cost of a browser is negligible in comparison.
I don't think it's fair to conflate a lack of privacy on corporate owned devices with a lack of privacy on your own personal devices.
Stop thinking about the country with literally less than 1% of the world's internet users, and start thinking about the reputational damage that a less-than-charitable presentation of your collaboration with a totalitarian state against your users would do to the other 99%+ of your market.
Malware forks of open source projects (and closed-source software!) are not a new problem.
In reality, being one BGP trick away from a merely dedicated individual or corporation owning certs for your domain is an actual risk today.
In fact, you're making it worse because you're giving legitimacy to a government that is conducting actions which we shouldn't consider acceptable. If the US government started doing the same thing, I would really hope that browsers would block those certificates too.
HTTPS is that tool. It is a social problem now; it was a technical problem just recently.
Actually, if it's MITMed, it's "all bets are off", isn't it? The KZ government can filter that out of the proxied response.
Still, if OCSP can assist at all, it's probably worth it for browsers to check for a mismatch (if they don't already).
It would be meaningless.
I can prove ownership and then receive a wildcard certificate for *.internal.company.com, usually via a TXT record or similar (let's ignore EV certs for now). However, that certificate isn't an intermediate certificate that could sign new end-entity certificates for blah1.internal.company.com while being unable to sign for blah1.not.company.com.
I'm no SSL/TLS expert by any means, so please let me know if I'm wrong and it is fairly easy to get intermediate certificates that are domain-name limited - x509 constraints are apparently flaky.
... unless you want any private keys to be personally signed and or generated by bob & alice over in security after checking some boxes in an internal audit form, or any other number of company-internal schemes involving signing and encryption of business-specific data
The only use-case that's not possible with Let's Encrypt is issuing a certificate for an IP address.
If I was setting up an organizational CA for internal websites (not MITM), I would consider using Name Constraints to limit the certificate's scope and potential for abuse or compromise.
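As an illustrative sketch (names hypothetical, syntax per OpenSSL's x509v3_config), the openssl.cnf extension section for such a constrained internal CA could look like:

```
[ v3_internal_ca ]
basicConstraints = critical, CA:TRUE, pathlen:0
keyUsage = critical, keyCertSign, cRLSign
# Issuance is only valid under internal.company.com; anything else fails
# path validation in clients that enforce the (critical) constraint.
nameConstraints = critical, permitted;DNS:.internal.company.com
```

The caveat from upthread applies: enforcement of name constraints has historically been uneven across clients, so this limits blast radius rather than guaranteeing it.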
Always seemed like a misfeature to me, but all the browsers do it.
Cert pinning does mitigate it for apps, doesn't it? The end-user doesn't really need to worry about rogue root CAs, if my understanding is right.
Traditional VPNs, P2P VPNs, Tor as a Proxy (decentralised net? dat/i2p/freenet/ipfs) could solve it generally across various use-cases, of which, VPNs are already mainstream.
Applications where the developer has pinned to their own certificate will stop this attack.
Chrome and Firefox will ignore pinning for locally installed CAs. This is a very common use case in the enterprise where, for example, a bank has audit requirements to decrypt and store all workstation traffic.
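Where an app does enforce its own pin, the check is typically a hash of the server's SubjectPublicKeyInfo compared against values baked into the app, HPKP-style. A minimal sketch, with placeholder bytes standing in for a real key's DER encoding:

```python
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    """HPKP-style pin: base64(SHA-256(SubjectPublicKeyInfo DER))."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

def pin_matches(spki_der: bytes, pinned: set) -> bool:
    # A rogue root CA can mint a chain the OS trusts, but it can't
    # reproduce the pinned public key, so the connection is rejected here.
    return spki_pin(spki_der) in pinned

# Hypothetical pin set shipped inside the app binary.
expected = {spki_pin(b"placeholder-der-of-the-real-server-key")}
```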
And er, no, the overlap between operators of public Certificate Authorities and national ISPs is very small. There are only 57 root CAs trusted by Mozilla.
Another technical solution would have been to allow security without privacy. If the purpose of the government's actions is just to monitor content, you can enable that without disabling security. The HTTP protocol could be modified to transmit checksums signed by a cert, so that a client can verify the content has not been modified; the content itself could optionally be sent unencrypted, yet content-injection attacks still couldn't take place.
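The idea is authenticity without confidentiality. A toy sketch of the mechanism, using an HMAC with a shared key as a stand-in for the cert-backed signature described above (a real design would sign with the server certificate's key so any client could verify):

```python
import hashlib
import hmac

def sign_body(key: bytes, body: bytes) -> str:
    """Tag the server would send alongside the *plaintext* response body."""
    return hmac.new(key, body, hashlib.sha256).hexdigest()

def verify_body(key: bytes, body: bytes, tag: str) -> bool:
    # Anyone on the path can read the body (the monitoring goal is met),
    # but modifying or injecting content invalidates the tag.
    return hmac.compare_digest(sign_body(key, body), tag)
```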
But privacy advocates don't like it, so the result is either you have total security + privacy (such as it is), or none at all.
They’re training their entire population to install things that they get in unsolicited emails that purport to be from a legitimate source.
What could go wrong?
In places like Kazakhstan and China it's a harder problem, and HTTPS is necessary but not sufficient to solve it.
And compromising HTTPS in places with a functional judicial system (and human rights) would probably be blocked by an endless series of lawsuits.
That's extremely worrying as well and it appears politics so far are unwilling to make it illegal. There needs to be more protest and more competition so consumers can vote with their wallets.
I wonder if Google changed its mind about this once Sundar Pichai took over and then gave Project Dragonfly the greenlight.
But at least we know.
Yeah. Fangs vs shells. Microbes vs white cells.
It's just the way this universe works. The struggle is eternal. Probably built into the root parameters of the Big Bang, if you could somehow trace it that far back in time and causality (which you probably can't, I dunno).
Seems a very solvable problem.
Trivial technological solutions will not stop the state actor from retaliating against those not following their policy either.
Wait until you're doing forensics on a cryptolocker outbreak and you find not only did a user do that, but multiple users helped her through it and the management then praised her for overcoming technical barriers even after it was found to be the cause of the incident.
Unfortunately nothing about warnings makes anything a solved problem.
Too much security is willing to give up on the 95% because they can't get the 100%.
but the warning signs were all there, e.g. https://news.ycombinator.com/item?id=17298747#17304077
Ever SSHed into a server and been told by your SSH client that, oh, by the way, the server is using the NULL cipher with no authentication, and network attackers can mess with your session arbitrarily? Probably not. That's what using plaintext HTTP should feel like.
If that basic intuition about users is correct, the solution is not to give up on this and force users to deal with the true complexity of the situation. The solution is for the browser to show a red blinking INSECURE instead of the green padlock when the cert it receives for a site doesn't have a valid chain to a root in the default key store shipped with the browser.
To be honest, I can't figure out why this isn't already the default behavior. It would solve a bunch of other problems as a side effect, including insecure crappy antivirus programs that MITM your internet connection.
"it rather involves being already on the other side of this airtight hatchway"
The current page asks the user to run an installer, elevating privileges. There's nothing a browser can really do against that. DLLs can be replaced, signatures can be tampered with, etc.
Just because you said "ship them with the browser" doesn't make you magically right, nor safe under the linked threat.