This section title makes no sense.
Any application that makes an HTTP(S) connection is incapable of distinguishing between a connection that is secure and one that is being MITMed - that's the definition of an MITM attack. Certificate pinning (which is good) is simply a way of making it harder for a connection to be MITMed. If a browser has pinned the certificate for a particular website, the only way to MITM it is to compromise the actual private key itself and use that to fake a secure connection. (This is of course not what people usually mean when they talk about MITM attacks, but it fits the same basic structure.)
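The check itself is simple enough to sketch. Here's a minimal pin verification in Python; `PINNED_SHA256` is a hypothetical placeholder for a fingerprint you recorded ahead of time, and a real deployment would pin the SPKI (public key) hash rather than the whole certificate so the cert can be reissued under the same key:

```python
import hashlib
import socket
import ssl

# Hypothetical placeholder: the SHA-256 fingerprint you recorded out-of-band.
# Real pinning (HPKP, TACK) pins the SPKI hash instead of the full DER cert.
PINNED_SHA256 = "0" * 64

def cert_fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def connect_with_pin(host: str, port: int = 443) -> ssl.SSLSocket:
    """Open a TLS connection and refuse it if the cert doesn't match the pin."""
    ctx = ssl.create_default_context()
    sock = ctx.wrap_socket(socket.create_connection((host, port)),
                           server_hostname=host)
    der_cert = sock.getpeercert(binary_form=True)
    if cert_fingerprint(der_cert) != PINNED_SHA256:
        sock.close()
        raise ssl.SSLError("certificate pin mismatch: possible MITM")
    return sock
```

With a pin in place, a forged certificate from a compromised CA still validates against the system trust store, but fails the fingerprint comparison.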
Even if you did know a trusted IP address, how do you know the packets aren't being redirected by pf, iptables, or ARP spoofing to a logging Squid proxy?
This almost makes me cringe as much as draft-loreto-httpbis-trusted-proxy20-01.
PKP is obviously a band-aid, and nobody involved with it says otherwise; it's a static list delivered with browsers. But the idea of pinning scales just fine and isn't simply a band-aid. The TACK proposal allows every site to assert its own pins, and does in fact largely mitigate the risks of a compromised certificate authority.
It's disingenuous to argue that Google and Mozilla implement PKP simply to protect their own properties. They protect many properties other than their own.
Do these "mitigation technologies" require us to continue paying CAs protection money? From what I read about TACK, it seems like it doesn't require that, but what's actually being done?
Not that some CAs aren't overpriced, but calling it "protection money" is just silly. You're free to go use your own CA. Once you've got it all set up and meet the criteria, I'm sure Mozilla and others will let you in.
The exorbitant fees are a joke and don't result in any improved security for the end user.
Granted, it's still a scam. "Pay us money or all your customers' browsers will get scary and misleading error messages!"
And even if we ignore that, the whole "EV Cert" thing is a total sham. All the EV cert does is indicate that you overpaid for it.
It's not necessary, at all.
I know there are other proposals, but within the current constraints of the "global PKI", there's no alternative.
On a side note, for an EV cert, the CA must verify the registrant of the domain. They usually do a phone call to verify a phone number matching some other record with the name of the registrant. That takes some human effort to verify; more than an automated email. (In addition to just running a company, keeping a CA online but secure, etc.)
With current technology, it is not necessary.
Because of the blockchain (current tech) they serve absolutely no useful purpose.
You cannot claim that because web browsers voluntarily choose not to use it that it somehow disqualifies it as current tech.
Same goes for the EV cert and phone calls and all that jazz. Simply not necessary, and adds no extra value. In fact, it makes all of us less secure as the article points out.
If, to take one of my favorite companies as an example, Mozilla were to support the blockchain for authentication, they would register themselves there, and that's that. No phone calls necessary, they simply own their identity, and can, via established existing channels (@mozilla twitter account, their newsletter, their .com, etc.) declare what is and isn't official.
They can even get their .com into the blockchain, although that isn't yet current tech:
The post didn't name specific companies or products, but on that note, is there a list of sites that Chrome and/or Firefox have pinned somewhere? I don't think TACK has been implemented in them.
The thing with TACK is that it doesn't actually solve the key distribution problem. The draft itself states:
> TACK pins are easily shared between clients. For example, a TACK client may scan the internet to discover TACK pins, then publish these pins through some 3rd-party trust infrastructure for other clients to rely upon.
That is what DNSChain and the blockchain provide. They provide more than that, though, and thus deliver everything that TACK proposes too.
Contrast to DNSChain, which requires that ordinary users trust a third-party server.
DNSChain isn't going to happen.
So how do you know that your subscription to EFF is secure?
First-party, not third-party. What you described with EFF is a third-party, whereas DNSChain is a first (or 2nd) party, because either you own the server (1st party), or your friend does (2nd? party).
It's already happened. I'm using it right now. Where's TACK?
If I didn't think EFF was trustworthy, I wouldn't subscribe. Key continuity would still make TACK valuable; even without trusted third parties, widespread deployment of TACK would practically eradicate dragnet surveillance through compromised CAs, because every attempt to forge a certificate would quickly be detected.
I am not. Those words have real meaning that doesn't disappear when you accuse me of playing semantic games.
If I didn't think EFF was trustworthy, I wouldn't subscribe.
I asked how do you _know_?
With DNSChain, you actually have a great deal of certainty that is based on math, not public reputation.
widespread deployment of TACK would practically eradicate dragnet surveillance through compromised CAs, because every attempt to forge a certificate would quickly be detected.
With DNSChain there are no compromised CAs to eradicate, and therefore no problem. You also get to keep your money, no more need to pay for SSL certificates.
It's a nice idea but inherently unusable in almost any circumstance.
The developer doesn't really understand what he is talking about when it comes to the blockchain, and he misrepresents his work as being fundamentally different from what a CA offers. He honestly thinks everyone will set up their own DNSChain on top of a Namecoin install.
This project (DNSChain) uses the entire blockchain, so it's all good. And no, you don't need to store it on your phone, as has been explained in other comments here, on the blog, and on the GitHub page.
The first part of this document tells me that a local daemon communicates with Namecoin; ergo the system depends on having it running locally for anything approaching security.
> Instead, all of that is done for you by your chosen DNSChain server.
That's just it: why would I ever trust a remote resolver? There are two public resolvers listed on the homepage of the site; one does resolution in the clear, and the "encrypted" one has no public key listed with which to verify that, again, there's no MITM.
None of this is filling me with confidence and we haven't even asked who is running the things yet.
The solution as presented doesn't solve ALL of our problems, but solutions rarely do, and it's not reasonable to expect them to. But it does solve SOME, and doesn't present any new ones (although I expect to eat my words as I type that). Progress is progress, even if it doesn't get us the full mile.
As for trusting those writing this particular software, we don't really have to trust them, we simply have to trust the code.
There is no way to square the circle here - Zooko's triangle guarantees that you can't have names that are simultaneously decentralized, secure, and memorable. See Dan Kaminsky's analysis of Aaron's system (which was implemented by Namecoin).
That's about right, I think.
> That's just it, why would I ever trust a remote resolver?
You trust yourself, right? The docs emphasize in multiple places that you should be the one running the resolver. And if you don't know how, you can use a friend's while you're learning; others are making it easy for you to have your own.
> the "encrypted" one has no listed public key to verify that again there's no MITM.
It does, click on the IP and the public key is listed there in a gist, along with the command to use it with dnscrypt-proxy.
> They promise resolution will be signed by the server some day, but it's not a feature enabled right now.
It's not like it's difficult to implement this. I could do it in a couple of hours, but I came to a stopping point with the code and began focusing on community building for a bit. Don't worry, it's coming real soon, and you're welcome to implement it yourself and submit a pull request if you can't wait. :)
The point is that this approach works.
You're right, GitHub must have hiccuped before. When I looked at that before it was a blank document. I assumed it was a placeholder because the client didn't support any sort of request signing yet.
(Sorry for editing my post out from under you, it wasn't intentional)
Compared against DNSSEC + certificate records:
1) There's a centralized PKI administration issue, but less so than the current CA morass
2) Administrative/rubber hose/legal/monetary compromise of the DNS root administrators (currently US NTIA) would allow substitution of the TLD keys, but this would be blatantly obvious to every organization or individual paying attention
3) DNSSEC has some annoying properties with regard to name enumeration and such
4) Need to either use a primary resolver that supports DNSSEC (Google Public DNS and Comcast, among others) or do a separate query to one that does
5) Minimal extra infrastructure, already supported in some libraries
6) Extremely cacheable with low resource requirements (root & TLD keys, MRU/MFU certs)
7) No need to add a new dependency on a previously untrusted source of code and crypto data
8) As long as the parent TLD has deployed DNSSEC the organization only needs to get their key set up, add a theoretical cert record, and sign records as needed.
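To make point 8 concrete, the "certificate record" part is what DANE/TLSA (RFC 6698) specifies. Here's a sketch in Python of just the matching step, assuming the TLSA record was already fetched over a DNSSEC-validated lookup and the server's DER-encoded certificate came from the TLS handshake; the record name format shown in the comment is the real one, everything else is illustrative:

```python
import hashlib

# Sketch of RFC 6698 (DANE) TLSA matching. A record like
#   _443._tcp.example.com. IN TLSA 3 0 1 <hex>
# carries usage, selector, matching type, and association data. This helper
# only implements the matching-type comparison (RFC 6698 section 2.1.3) and
# assumes selector 0 (full certificate) for simplicity; selector 1 would
# compare against the SPKI bytes instead.

def tlsa_matches(der_cert: bytes, matching_type: int, assoc_data: bytes) -> bool:
    """Compare a DER certificate against TLSA association data."""
    if matching_type == 0:      # exact match of the full content
        return der_cert == assoc_data
    if matching_type == 1:      # SHA-256 of the content
        return hashlib.sha256(der_cert).digest() == assoc_data
    if matching_type == 2:      # SHA-512 of the content
        return hashlib.sha512(der_cert).digest() == assoc_data
    raise ValueError(f"unknown TLSA matching type {matching_type}")
```

The organization publishes the hash in DNS, signs the zone, and clients need nothing beyond the DNSSEC root key they already have.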
However, the first thing I think of when I see:
"We’ve taken several important steps on the road to making “NSA-proof communication” on your favorite websites possible."
is that a lot of use cases involve mobile phones (of course) which have two other computers inside of them that you have no control over and are regularly used to manipulate your android/ios environment out from under you.
I love to see these pieces coming together to form a more secure ecosystem, but it's really one step forward and two steps back as long as we don't have an open baseband and sim card OS, etc.
 the baseband processor and the SIM card
 OTA updates
I think the best compromise is to have OTA updates, but the software being upgraded needs to be open source, so at least there's a disincentive to backdoor it and a real possibility of finding a backdoor inside a new update.
As for the baseband processor, we need to keep poking at Qualcomm and others to open source their modem firmware, and then vote with your wallet.
Getting a SIM card with an open source OS is probably going to take a while, although Google is in a very good position to push for that right now, since their OS is so dominant in the mobile market. So Google is probably the one we need to nag.
I understand how blockchains can be used to trace the history of transactions back to the root but I don't see how that can be used to do secure key distribution.
Assuming everyone trusts the root block, how does that trust get transmitted down to subsequent blocks?
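The trust isn't so much "transmitted" as committed to by hash: each block includes the hash of its predecessor, so if you trust the genesis block, tampering with any later block (and any name-to-key binding it records) breaks every hash after it. A toy sketch in Python, with hypothetical record names; real chains like Namecoin add proof-of-work and Merkle trees on top of this linking:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic hash of a block's serialized contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def verify_chain(chain: list, trusted_genesis_hash: str) -> bool:
    """Trust flows from the genesis block: every link must hash-match."""
    if not chain or block_hash(chain[0]) != trusted_genesis_hash:
        return False
    for prev, cur in zip(chain, chain[1:]):
        if cur["prev_hash"] != block_hash(prev):
            return False
    return True

# Hypothetical name-to-key binding recorded in a later block; it's trusted
# because the whole path back to the trusted genesis block verifies.
genesis = {"prev_hash": None, "records": {}}
block1 = {"prev_hash": block_hash(genesis),
          "records": {"d/example": "pubkey-fingerprint-abc123"}}
chain = [genesis, block1]
```

Rewriting the key in `block1` changes its hash, so any block after it (and any verifier holding the genesis hash) detects the tampering.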