SSL snake oil. SSL and the perceived trust around it have to die. It's a big lie, especially with broken CAs, lax security, poor encryption due to international policy, and several technical and conceptual flaws.
So you don't like the way SSL/TLS is configured in your browser and therefore we should throw out 15+ years of intense study for whatever flavor-of-the-month protocol is most popular when everyone agrees with you?
How about instead of "SSL sucks!", you propose things that will (from your vantage point) improve SSL/TLS? What, so you don't like that Mozilla gets to dictate what all the trust roots are? Neither do I. But that is not one of the world's great computing or UX problems to solve. Propose a solution.
It's a virulent misconception that the SSL/TLS security model depends on Verisign and Thawte. The only place in the whole model we have that dependency is your browser's configuration and that stupid dialog it shows when a cert doesn't check out. You can replace just those two parts and get virtually any other trust model you want.
Yes, I'd quite happily throw out 15+ years of intense study and research (including SSL/TLS), as it occasionally breaks down and kicks you in the face (google for TLS MITM vulns).
There should be an explicit trust model where the service vendor ships your keys via an alternative side channel -or- a decentralised model which works on reputation or trust.
If it works, why do all the online banks use two-factor authentication these days?
At the current rate of discovery, every vulnerability in TLS should make you more confident in the protocol. We're running an average of 3-5 years between protocol-level discoveries in TLS. Each one is the product of, literally, millions and millions of dollars of adversarial research.
Psst. I've got news for you. adb56780a76686326612a1eb3c2b32053bbcf3d8. Ask a friendly vulnerability researcher what I probably meant by saying that. Guess what? More confident as a result of knowing it.
The rest of your comment is just pique. "Why do online banks use two-factor auth?" Because the alternative is passwords. "There should be an explicit trust model!" Go ahead and build it. Because TLS is well-designed, your new scheme will work just fine with it.
> adb56780a76686326612a1eb3c2b32053bbcf3d8. Ask a friendly vulnerability researcher what I probably meant by saying that.
OK. Hey tptacek... :)
Mind elaborating a bit more, for those of us who don't happen to know any vulnerability researchers who know what that hash means? The one vulnerability researcher I know had no idea what you were talking about.
I assume you're referring to the practice of releasing a hash of one's advisory to prove that you discovered it, while giving the vendor time to fix their software before disclosing it publicly (something like the sketch below).
Google doesn't turn up anything for that particular hash, though, nor did I see anything on the Matasano website, but I'm probably looking in the wrong places. I don't claim to be a security researcher, after all, even if I try to keep up to date.
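If that is the practice in question, the mechanics would be trivially simple -- something along these lines (the filename is hypothetical, obviously):

    import hashlib

    # Commit to an advisory now by publishing only its hash; releasing the
    # document later lets anyone verify the earlier claim.
    with open("advisory.txt", "rb") as f:
        advisory = f.read()
    print(hashlib.sha1(advisory).hexdigest())  # publish this, keep advisory.txt private for now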
> We're running an average of 3-5 years between protocol-level discoveries in TLS.
Everyone knows the weakness with our current system isn't the protocol--it's the "trusted" roots. There are upwards of 100 organizations that have roots installed on my computer, and each of them is an attack surface. And it's frustrating because if I'm Google, there's no way I can protect myself from these other firms' security flaws. That is the part of SSL/TLS that needs to change. Getting certs from DNSSEC would be a good start...
Getting certs from DNSSEC is morally equivalent to getting them from X509 PKIs.
Again: the problem we have is that the UX and policy for HTTPS/TLS is brittle, so that it's hard to recognize and even harder to react to misconfigurations like "we're trusting CAs who are not trustworthy".
Faced with a policy and UX problem, someone needs to explain to me how a reasonable next step is to bake a new PKI into the core of the Internet.
There is a reason the IETF types are so gung-ho on DNSSEC. But it isn't that they know it's a sound replacement for the CA system. It's that it bothers them that DNS-the-protocol is insecure. And that's fine (I think their solution stands a good chance of making DNS even less secure, but whatever). But they shouldn't get to piggyback on other security issues to achieve their goal.
You are using the words "authenticate" and "identify" in ways that professionals do not. In reality, a card and a pin are two factors in an authentication system; your fingerprint is a third (biometric) factor which does not need yet another synonym for "identify" to describe; the reputation of your origin IP is a fourth factor, your behavior a fifth.
If it works, why do all the online banks use two-factor authentication these days?
Oh wait, there is the solution!
That's used to authenticate the client. SSL's job here is to authenticate the server. So no, that's not a solution.
Besides, it's completely preposterous to claim that HTTPS is useless just because it's not enough for banks. Should each website send an RSA Token to each user?
I've been quietly hoping that perhaps Google's Authenticator app might provide this for us... If there were a way for one of my websites to seed the Authenticator, I could provide an RSA-token-like 3rd factor to my logins effectively for free (at least to anyone with an iOS/Android device).
(I don't suppose anybody's more up-to-date than me on if/when that'll be possible?)
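For what it's worth, if Authenticator sticks to the open TOTP standard (RFC 6238), the server side is only a few lines. A hedged sketch, assuming the usual 30-second/6-digit/SHA-1 defaults:

    import base64, hashlib, hmac, struct, time

    def totp(secret_base32, interval=30, digits=6):
        """RFC 6238 time-based one-time password; the shared secret is base32, as Authenticator expects."""
        key = base64.b32decode(secret_base32, casefold=True)
        counter = struct.pack(">Q", int(time.time()) // interval)
        digest = hmac.new(key, counter, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # The site stores the same secret it showed the user at enrollment,
    # then compares the user's submitted code against totp(secret).
    print(totp("JBSWY3DPEHPK3PXP"))  # example secret, not a real one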
Two-factor auth is more about stopping your login details being nicked and then used at a later time - not at all really a question of SSL. Not to mention that, as far as I can see, this attack here would not be affected by it in the slightest, except for future login attempts.
The idea of having a distributed certificate mechanism over a peer to peer net seems like it might be a good starting point. Essentially, you'd self-sign and serve out your certificate on your ports; possibly connecting to some sort of super-node. You get connections from other members of the net, and they'd send you some certificates; you'd cache the certs and their IP addresses. Then after a timeout, you'd broadcast your certificates to all the interested IPs.
Part of the intrinsic concept is that there's no authoritative identity service. Without an authority, you can't crack the authority.
Of course, this is just a sketch of an idea; it has the usual issues with peer to peer nets such as "evil majority wins". Plus, it has the potential to eat a good deal of bandwidth.
Regardless, I think it'd be a good idea to explore, probably in an academic setting first.
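To make that slightly more concrete, here's a toy model of the cache-and-rebroadcast part -- pure illustration, with no real networking or signing, and every name in it made up:

    import hashlib

    class Node:
        """Toy gossip node: announces its own self-signed cert, caches what peers report."""
        def __init__(self, name, cert_pem):
            self.name = name
            self.fingerprint = hashlib.sha256(cert_pem).hexdigest()
            self.peers = []
            self.cache = {}  # fingerprint -> set of peer names that vouched for it

        def announce(self):
            for peer in self.peers:
                peer.receive(self.name, self.fingerprint)

        def receive(self, sender, fingerprint):
            self.cache.setdefault(fingerprint, set()).add(sender)

        def rebroadcast(self):
            # after a timeout, forward everything we've cached to interested peers
            for fingerprint in list(self.cache):
                for peer in self.peers:
                    peer.receive(self.name, fingerprint)

        def agreement(self, fingerprint):
            # crude trust signal: how many distinct nodes reported this cert
            return len(self.cache.get(fingerprint, ()))

    a, b = Node("a", b"cert-a"), Node("b", b"cert-b")
    a.peers, b.peers = [b], [a]
    a.announce(); print(b.agreement(a.fingerprint))  # 1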
I get that a false sense of security is worse, but we haven't yet figured out a better way that is still decentralized and works with a wide variety of user skill levels.
Because people who aren't expert users deserve security too. And they deserve functionality.
> people who aren't expert users deserve security too. And they deserve functionality.
Well, SSL certainly isn't giving serious security in the "My ISP has been compromised" attack model.
You can't have both security and ignorance. If you really want to be secure, you're going to have to learn how to be secure, and then to know how to implement it.
No, they can't. That's not how TLS works. The problem we are talking about is not that SSL/TLS-the-protocol allows ISPs to decrypt traffic. It's that some browsers have been shipped with a certificate authority that is willing to sign Google CNs. Remove that CA's cert. What's your ISP going to do now?
- I can install Fiddler (Windows https debugger) on my machine and read https traffic in the clear.
- A competent admin assures me that Websense can read https traffic. Couldn't really dig up much detail on the Websense sites, though. This chap suggests the admin is not wrong (search for "corporate", it's toward the end): http://www.carbonwind.net/blog/post/Random-SSLTLS-101%E2%80%...
I have no idea how I might go about arguing with you; it's as if I were to argue "a brief search of the literature suggests that P is definitely NP". Give me something more specific and I'll give you the context behind it; right now, I don't know what you're talking about.
(Sigh). Fiddler pops the browser certificate warning when you use it; it's not breaking TLS.
You might just as productively say "my friend assures me AES is broken; maybe that's what Iran is doing." Just like your competent admin friend, there would be some reason for him to say that; it just wouldn't be relevant.
It is crazy that you're even entertaining the thought that the entire security model of the world wide web has been circumvented by WebSense. I guess they just really know how to keep a secret?
Websense doesn't break TLS or SSL or PKI. Websense abuses an organization's control over its own workstations to conduct a 'MITM' or 'proxy' of the TLS connection. It does that in a fairly straightforward manner.
Websense is used in organizations that distribute their own root CA certificate to the workstations behind it. The Websense machine is then given that root CA's private key and allowed to generate dynamic certs with it, so that a workstation trusting your organizational CA trusts them, but nobody on the regular interwebs will.
It's a really, really shitty way to do things, and effectively violates the trust of every user on your corporate network, but hey, they signed an agreement.
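For the curious, the minting step is roughly this -- a hedged sketch using Python's 'cryptography' package, with a throwaway self-signed root standing in for the org CA (none of this is Websense's actual code):

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa

    def _builder(subject, issuer, public_key, days):
        now = datetime.datetime.utcnow()
        return (x509.CertificateBuilder()
                .subject_name(subject)
                .issuer_name(issuer)
                .public_key(public_key)
                .serial_number(x509.random_serial_number())
                .not_valid_before(now)
                .not_valid_after(now + datetime.timedelta(days=days)))

    def toy_org_ca():
        """Self-signed root standing in for the CA the org pushed to its workstations."""
        key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Corp Root CA")])
        cert = (_builder(name, name, key.public_key(), 3650)
                .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
                .sign(key, hashes.SHA256()))
        return cert, key

    def mint_leaf(hostname, ca_cert, ca_key):
        """What the interception box does per connection: fresh key, short-lived cert for any name."""
        leaf_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        subject = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, hostname)])
        cert = (_builder(subject, ca_cert.subject, leaf_key.public_key(), 1)
                .add_extension(x509.SubjectAlternativeName([x509.DNSName(hostname)]), critical=False)
                .sign(ca_key, hashes.SHA256()))
        return cert, leaf_key

    ca_cert, ca_key = toy_org_ca()
    leaf, _ = mint_leaf("mail.google.com", ca_cert, ca_key)

Workstations that trust the org root accept the minted leaf; everyone else's browser throws the usual warning.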
Use case 1: 60 year old man wants to access his bank online from his home. He has been using computer for 10 years, but only been using the internet actively for 5, and still requires help from IT for basic issues at work.
Use case 2: 40 year old man wants to use his laptop from starbucks for email. He has used computers for much of his life, but doesn't really understand how it works. He is on the road working 60-80 hours a week. He is taught to VPN in to work, but that is the extent of his understanding of security.
We're not talking about use case 3: an activist in Iran needs to communicate with compatriots. That is a much harder problem, for which SSL is insufficient.
How do we set up a sufficiently easy system for use cases 1 and 2? Charging them $1,000 for a two-day class in security is not going to work at scale, and nobody has yet written a security primer for dummies past the most basic of "don't give people your password and don't use your dog's name."
This is not true. There is no cryptosystem we know of that is more suited to "use case 3" than TLS. There is a problem with the way activists in Iran are using TLS: to wit, they are trusting Mozilla, Microsoft, Apple, or Google to make decisions about who they trust. But TLS does not require them to do that. They are a point-and-click HOWTO away from not being in that position.
If Iranian activists want to trust Google for their sensitive email, all they have to do is track down Google's authentic certificate (by asking anybody outside of Iran to fetch it) and add it to their browser. Iran does not have the ability to break RSA. All they (apparently) have the ability to do is to con incompetent CAs into making new RSA signatures that some browsers are configured to believe.
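Concretely, the "ask anybody outside of Iran to fetch it" step is about this much work (a sketch; the hostname is just an example, and getting the fingerprint back over a channel Iran doesn't control is the part that actually matters):

    import hashlib, ssl

    # Fetch the certificate the server presents and print its SHA-256 fingerprint,
    # so it can be compared out-of-band with what a client inside Iran sees.
    pem = ssl.get_server_certificate(("mail.google.com", 443))
    der = ssl.PEM_cert_to_DER_cert(pem)
    print(hashlib.sha256(der).hexdigest())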
So build one. Your preference actually has very little to do with the X509 CA architecture. Today we have a complex little forest of fiat CAs. No part of the TLS architecture prevents you from replacing that with an even more complex and more full-featured web of smaller CAs.
I'm not being sarcastic. Your decentralized reputation scheme could very well be better than our severely compromised system of central CAs.
And the solution is? Getting everyone onto PGP/GPG? Explaining to the general public what a web of trust is and actually getting them to use it in the correct fashion (rather than clicking 'trust' the same way they click through all dialog boxes)?
So when the certificate error comes up, ask your friends whether they trust it. You are talking about a web application that is almost "hello world" in Django or Rails.
So, build it. Then make it secure enough that any schmuck won't be able to hack it and serve fake 'trusts'. Then decentralize it so that governments like Iran's can't MITM it. Then solve the problem of having a single organization deploying thousands of fake nodes and poisoning the data.
In what sense? DNSSEC is also a PKI which also has roots that can be compromised. Not only that, but it only works server-to-server (late in the game, we've decided "that's OK, everyone will just be a server!"). And DNSSEC relies on HTTPS/TLS to actually protect content.
This comment is tautological. The verification you're talking about works on the same principle as X.509 (SSL/TLS) does; all it does is attempt to give individual DNS zone holders the ability to sign their own names. But every zone has a parent zone upon whose security it depends.
The reality is that DNSSEC is a lot more fragile than even that. Even on paper (DNSSEC hasn't been deployed in the large successfully) it has gaps. For instance, did you know that as conceived in the standards, your browser isn't even meant to speak DNSSEC? Your browser doesn't run a full recursive cache --- in other words, your browser isn't a full DNS name server, which is why your computer has to be configured with the address of a name server. DNSSEC doesn't secure the leg between your computer and its name server.
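Concretely, all a stub client can do is set the DO bit and take its resolver's word for it -- something like this with dnspython (the resolver address is just an example); nothing on that last hop is signed:

    import dns.flags
    import dns.message
    import dns.query

    # Ask a recursive resolver for a DNSSEC-validated answer; all the stub
    # ever sees is the AD ("authenticated data") flag the resolver chose to set.
    query = dns.message.make_query("www.google.com", "A", want_dnssec=True)
    response = dns.query.udp(query, "8.8.8.8", timeout=5)
    print("resolver claims it validated:", bool(response.flags & dns.flags.AD))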
"No cryptographic system is perfect" is a worthless sentiment. Only in the rarest instances do we know whether any system, from "hello world" to the Tacoma Narrows, is absolutely sound. Our job as engineers is to make determinations about which system is sounder. DNSSEC takes approximately the same architectural problems that X.509 has, rebuilds them from scratch in the context of a protocol that has for 20 years been riven with security flaws, and then still requires HTTPS/TLS to function. It is less sound than HTTPS/TLS.
What I meant was that there are mechanisms that let you determine that a DNS answer is correct.
I won't say more regarding this matter because you know a lot more about cryptography than I do. What do you think the best solution for this kind of problem is? PGP?
I literally think 99.999% of the problem with HTTPS/TLS is UI. The UI for certificate management and validation is virtually unchanged since the 1990s. More importantly, Mozilla, Google, Apple, and Microsoft --- however well they mean --- are not appropriate stewards for the security of the whole fucking Internet. We need the secure UX required to outsource CA management to trusted third parties that unskilled users can reasonably choose among.
Back to DNSSEC:
What we do not need to do is take this extremely complex policy problem and bake it further into the Internet's infrastructure, so that only the greybeards at the IETF and the product managers at the five largest software companies in the world get to make choices about the Internet's security model.
Today's HTTPS debacle was brought to you by a policy mistake. When we originally deployed HTTPS/TLS, we did not understand enough about the forces acting on us to make totally safe policy decisions. What we need is more flexibility and more thought, not the exact same system we have now baked into one of the core protocols on the Internet.
Wouldn't it be neat if there was a dialog somewhere in your browser that gave you the option of trusting anything Colin Percival trusted, or anything that the EFF trusted?
I had thought about something like this, sort of a combination of karma scores and netflix/pandora-style recommendations for ranking user content on sites like HN. I could say 'I like what tptacek has to say' and give your content a higher precedence for ME, rather than for the whole of the site user base. Others could say 'I dislike what tptacek has to say', and you get a corresponding drop for THAT USER, not everyone. Now add a chain - if I 'like' you, all that you 'like' gets a smaller, but significant, bump for me, and so on (rough sketch of that step below). Add in OCSP-style revocation if the source of your trust for someone changes their mind for any reason.
Moving this into authentication/trust - how do you 'start' something like this at an Internet scale, assuming actively hostile, nation-state level players, such as in this article? It seems very chicken-and-egg for something as critical as encryption and protection of important content.
Just thoughts - it seems there are some big issues I'm missing, though, such as initial seeding for 'new' users en masse. I can also see the code to keep track of all the trust vectors, etc. becoming VERY complicated...
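The propagation step itself isn't the complicated part; a toy sketch (the decay factor, depth, and names are all made up):

    def trust_scores(likes, start_user, decay=0.5, max_depth=3):
        """likes maps user -> set of users they 'like'; scores decay with each hop."""
        scores, frontier = {}, {start_user: 1.0}
        for _ in range(max_depth):
            next_frontier = {}
            for user, weight in frontier.items():
                for liked in likes.get(user, ()):
                    contribution = weight * decay
                    if contribution > scores.get(liked, 0.0):
                        scores[liked] = contribution
                        next_frontier[liked] = contribution
            frontier = next_frontier
        return scores

    # alice likes bob, bob likes carol: carol gets a smaller bump, for alice only
    print(trust_scores({"alice": {"bob"}, "bob": {"carol"}}, "alice"))  # {'bob': 0.5, 'carol': 0.25}

The hard parts are exactly the ones named above: seeding, revocation, and hostile nodes gaming the graph.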
Seems like an awesome plan, but making it scale requires a lot of work. In this system, how do banks ensure their site shows up as secure in people's browsers? How do random sites of varying popularity?
Actually, for many people their bank seems like a good start for the trust chain; visit them in person and get the information you need for your trust root. And unlike the certificate companies, banks have end-users as their customers, not people who want certificates.
I'd consider subscribing to a few services that provided plausible claims to properly curate lists of SSL cert fingerprints for at least the majority of my security-critical domains. Even if it were just limited to the major internet properties (start with google/yahoo/amazon/paypal/twitter/facebook/etc.), and add in major (and minor) banks, government institutions... If someone started up a service claiming to have telephoned the appropriate security people at those places and confirmed the cert fingerprints via non-internet, real-world-confirmable means, I'd certainly consider paying a few tens of dollars a year for that... Especially if it integrated nicely with a browser/OS plugin.
And, hopefully, a way for Colin, the EFF, your friends, or Hacker News to say "Ummm, looks like the EFF trust list might have been compromised sometime in the last $n weeks..." and for you to be able to audit/rollback changes in that trust list, and/or compare changes with unrelated sources...
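The client side of that subscription idea is the easy bit; a toy sketch assuming you subscribe to a couple of such lists (every value here is made up):

    # Compare the fingerprint your TLS connection presented against each
    # subscribed list; any disagreement is a reason to stop and audit.
    curated_lists = {
        "eff-list":   {"mail.google.com": "deadbeef"},
        "colin-list": {"mail.google.com": "deadbeef"},
    }

    def check(hostname, observed_fingerprint):
        return {source: (pins[hostname] == observed_fingerprint if hostname in pins else None)
                for source, pins in curated_lists.items()}

    print(check("mail.google.com", "deadbeef"))  # {'eff-list': True, 'colin-list': True}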
DNSSEC may have a lot of the same design limitations as HTTPS PKI, but it could still be an improvement if it represented a parallel, redundant piece of security.
But what we see is CAs authenticating cert applicants based on DNS (the ability to receive email verifies your "control" of the domain) and DNSSEC rooting to a Verisign-managed registry, just like the early days of SSL PKI.
Only if that parallel, redundant piece of security (a) isn't relied upon as a single point of failure by any component of the rest of the system (note how many DNSSEC advocates seem to want DNSSEC as a replacement for the X509 PKI), and (b) doesn't have 15 years of its own security vulnerabilities in store for us.
Yup.
a) a rare opportunity missed
b) I'd like to think we know more than we did 15 years ago, but compatibility with the existing DNS brings new challenges too
That may be true, but you have no guarantee that the IP that is returned actually routes where it should, or that a resulting connection is to one of google's actual servers.
note: Nothing against dnssec. I think it is an improvement, but I don't see how it would have helped in the situation in question.
I don't think DNSSEC is an improvement. I think that when you take a fragile and problematic security model, reimplement it from the bottom up in a setting that's even more restrictive than HTTPS, bake the result into the core of the Internet (or rather, the fraction of the core of the Internet that knows how to be a full-fledged DNS server), cause untold disruption to the network as a whole, and incur tens of millions of dollars in administrative overhead that could have gone to other security objectives, you are very likely looking at a measure that maybe just maybe might be a tiny step backwards.
Some critique here to back me up:
http://www.theregister.co.uk/2011/04/11/state_of_ssl_analysi...