* Enable HSTS
* Set the secure flag on cookies
Very few of the sites we test enable HSTS. But it's easy to do; it's just an extra header you set.
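To make that concrete, here's a minimal sketch of adding the header in a WSGI middleware (the function and app names are made up; the header name and its directives are the standard ones):

```python
# Toy WSGI middleware that appends an HSTS header to every response.
# Sketch only; assumes the app is already served exclusively over HTTPS.
def hsts_middleware(app, max_age=31536000):  # one year, in seconds
    def wrapped(environ, start_response):
        def sr(status, headers, exc_info=None):
            headers.append(("Strict-Transport-Security",
                            f"max-age={max_age}; includeSubDomains"))
            return start_response(status, headers, exc_info)
        return app(environ, sr)
    return wrapped
```

The max-age value is a policy choice; a year is common once you're confident everything on the domain is served over HTTPS.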
Another technology to start preparing for is TACK. It allows you, the server owner, to control browser pinning of your certs while maintaining CA mobility. This gives you the control over your security that Google has over Gmail via Chrome cert pinning without having to issue a new browser build every time you change CAs.
One way to think of it is like a domain transfer lock but with cryptography. You control how you unlock your pin to allow mobility to a new CA by sticking a signed file on your SSL server.
[Disclosure: one of the authors of TACK is a former co-worker.]
I think the first problem gets considerably easier to solve once the latter is in place, and there's a lot we could do with Convergence-like systems that would make them more deployable if TACK is adopted.
In the short term, however, TACK stands on its own, and we hope it's a fairly uncontroversial proposal that will be easy to integrate into the ecosystem.
This will then use the same transport as the containing page uses.
At Quantcast we tried to use "//" without the protocol in our tags (to eliminate the need for a separate http: and https: tag), but we had a huge number of complaints about a bug in our tag (missing http:!). Users also tried to be helpful and add in the "http:" and then complained when it broke. In the end we went with two separate tags to reduce the support burden, despite the added complexity of having to explain the two tags.
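The scheme-relative resolution those "//" tags relied on can be demonstrated with Python's `urljoin` (illustrative only; the domains are made up):

```python
from urllib.parse import urljoin

# A "//host/path" reference inherits the scheme of the containing page.
assert urljoin("https://secure.example.com/page",
               "//tags.example.com/t.js") == "https://tags.example.com/t.js"
assert urljoin("http://example.com/page",
               "//tags.example.com/t.js") == "http://tags.example.com/t.js"
```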
Less than a year ago, you were saying HSTS wasn't worth the trouble. Ref: https://news.ycombinator.com/item?id=2909613
Glad you've changed your mind.
I encourage everyone to read through it, and follow it. Once you know what to do, it's easy. Part 2, dealing with advanced topics, is coming in October.
config.force_ssl = true
If your app uses session ID cookies, then another implication of this is that attackers can set a user's session ID to a value they know, wait for the user to log in, and then use the session ID to hijack the logged-in session. To prevent this make sure you regenerate session IDs when logging a user in. (This isn't the only reason to regenerate session IDs on log in but it's a very compelling one.)
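A toy sketch of that defence, using a hypothetical in-memory store (real frameworks expose this as a built-in call, e.g. Rails's `reset_session` or PHP's `session_regenerate_id`):

```python
import secrets

class SessionStore:
    """Toy in-memory session store illustrating ID regeneration on login."""
    def __init__(self):
        self.sessions = {}

    def new_session(self):
        sid = secrets.token_hex(16)
        self.sessions[sid] = {"user": None}
        return sid

    def login(self, sid, user):
        # Regenerate the ID so a fixated pre-login session ID is useless.
        data = self.sessions.pop(sid, {"user": None})
        data["user"] = user
        new_sid = secrets.token_hex(16)
        self.sessions[new_sid] = data
        return new_sid
```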
Session fixation used to be a common problem. There were lots of J2EE applications that were not only vulnerable to fixation, but would let an attacker fix a session ID with a carefully crafted GET URL. It's much rarer now.
In other words, a MITM could downgrade any HTTPS traffic and simply remove that STS header. The browser would be none the wiser.
You are, obviously, vulnerable on first contact to a site, in that an attacker can prevent you from ever seeing the STS header. The point of STS is that attackers don't generally get to intercept your first contact with a site.
Adam Langley, by the way, is one of Google's Chrome SSL/TLS/HTTPS people.
Similarly: a country savvy enough to have a whole regime for ensuring they have custody of all transactions from first contact on probably isn't a country that offers safe access to browser binaries either, which kind of hurts the utility of baked-in SSL restrictions.
If clearing the browser cache/cookies does not make the browser forget about STS for each domain, then we have another way to maintain http://samy.pl/evercookie/
As an example, a rogue Apple Store employee could insert himself as a MITM between the access point and the internet connection. Anyone testing out a new laptop in the store (or logging in to their bank from a just-activated iPhone) would be vulnerable, without the attacker ever having touched any of those devices.
Let's assume you have it set to 1 year.
User A visits the site in March, receives the header, and uses the site for a while.
In May, the site/user's DNS/whatever is hijacked. Users are sent to a dummy site, which does not set the header. The dummy site is over HTTP.
The next day, the user tries to go to the site. Because it is not over HTTPS, the browser refuses to load the page, even though the header is no longer sent.
If I visit again tomorrow, the browser-cached version of that header will be updated with a new expiration date, and expire a month from tomorrow, not now.
edit: I think we're agreeing.
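The sliding-expiry bookkeeping can be sketched roughly like this (a simplified model, not any browser's actual code):

```python
import re

def hsts_expiry(header, now):
    """Absolute expiry implied by an HSTS header seen at time `now` (seconds)."""
    m = re.search(r"max-age=(\d+)", header)
    return now + int(m.group(1)) if m else None

header = "max-age=2592000"                 # 30 days
seen_today = hsts_expiry(header, now=0)
seen_tomorrow = hsts_expiry(header, now=86400)
# Each fresh response slides the expiry forward by a full max-age,
# so regular visitors never fall out of the HSTS window.
```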
I know what I'm doing. I'll reset the option when the underlying issue is resolved, and overall it's a great feature for the browser, but I need to have the ability to be responsible for myself.
Then, the proxy makes its certificate available to users, you download it, and add it to your CA certs via the UI that browsers provide for that; HTTPS magically appears to work again.
HTTPS shouldn't magically appear to 'work' again, considering it is completely broken when a forced mitm is introduced.
If you want privacy against the administrators of your employer, don't use your employer's network to do things that need privacy.
The problem isn't necessarily what the employer sees, it's what the employer might keep around.
But even if reasonable people could disagree about that policy decision: the reality is that people operating large corporate networks require the ability to control SSL/TLS sessions; for instance, there are whole industry verticals where accessing a private email server not controlled by your employer is grounds for automatic termination, because regulations require them to track and archive email messages.
Finally, and I'm repeating myself: I am describing the reality of most Fortune-500 enterprise networks. In most corporate networks, you cannot simply talk from your desktop out to the Internet; you are required to use a proxy. You're also almost certainly on a 10/8 IP address.
In which case I'm up shit creek without a paddle, because there's no way to temporarily disable the security feature.
And I do not have control over the Proxy server because I'm not in the fucking security team.
You can disable all certificate checking with --ignore-certificate-errors but it is as bad as it sounds.
Rather, to correctly support MITM proxies you should install their CA certificate locally.
This is the problem.
> ... and it makes Chrome practically unusable when they happen
This is not the problem.
There is a clean solution to this problem: the proxies should serve as just-in-time CAs for the traffic they proxy. The big proxy products all do that. This simply isn't Chrome's problem.
Considering your use of the word "breakage": DannoHung is talking about a button that is actively being disabled in certain situations, not something bad being enabled. This is extra code in a security-critical part of the browser. Thus, we can assume that there were meetings that discussed this "feature" and its implications, the actual coding, code reviews, and QA, adding up to quite a bit of opportunity cost. That raises the question: would this time have been better spent on something that adds more security?
Disabling manual overrides may seem like a good idea, but it can go horribly wrong.
If he wants to disable TLS security, there's a right way to do it: by installing the proxy's cert.
If you read 'agl's talk, you'd see that the reason the button is hidden is that it is one of the Internet's great security flaws: a workflow embedded in most browsers that trains users to disable TLS security.
So, I find this argument you're making to be more or less entirely bankrupt.
Of course, I would still only run with "--ignore-certificate-errors" for the limited time the proxy has broken certificates or whatever...
Companies have their firewall infrastructure locked down (hopefully), but lan segments (except in high-security environments) not as much.
> When connecting to a Known HSTS Server, the UA MUST terminate the connection with no user recourse if there are any errors (e.g. certificate errors), whether "warning" or "fatal" or any other error level, with the underlying secure transport.
Are you looking for something else?
Also, by using DuckDuckGo over HTTPS you get the same ruleset as in HTTPS Everywhere even if you don't have the extension installed.
Pages include resources from https-everywhere'd domains and for whatever reason (mostly that the ssl versions of those resource urls aren't serving the same resources, or have broken certs) those resources fail to load. Within an hour of using it I'd seen it break 3 or 4 sites, so it got disabled.
You can manually disable it for individual sites, if you recognize that it's the problem, but if some minor resource fails to load it might not be obvious.
Obviously, you need to be a bit more diligent about making asset urls protocol-relative (which can be a PITA across a large, dynamically generated site), but are there any other gotchas? Server load? Reduced cache-ability?
If you don't do SSL properly (e.g. non-SSL-terminating load-balancer can break SSL session resuming by forwarding requests to different servers which don't share tickets) then you'll have lower front-end performance.
webpagetest.org nicely shows connections including time spent on SSL negotiation, so you can use it to check your SSL overhead.
SNI would help a lot, but unfortunately it will never be a feature in the SSL client code in Windows XP (which MSIE uses) and so we're stuck with this for the foreseeable future.
Because they wouldn't be so PITA-ish, would they?
I currently use a self signed cert and certificate patrol, but apps (in particular Thunderbird) are becoming increasingly hostile to that.
Yes, they are. StartSSL will even send you a reminder e-mail.
(and a great advertisement for using Chrome in secure settings where you need a web browser)
The irony of Google being one of the main http-only JS resources for a long time was kind of amusing, though.
Also I think GA was http only at some point.
GA has had https support as long as I can remember.
Digressing a bit further, wouldn't you say that even if HSTS is enabled and registered in all the browsers' built-in lists, you still have the problem of unencrypted DNS lookups? (Maybe this kind of attack is orders of magnitude harder to implement. I honestly don't know.)
(If I could vote for your time investment, please kindly consider commenting on that article before replying to this comment.)
There are a bunch of problems with this idea. Most of the ones that spring to my mind are problems with DNSSEC in general: its brittleness, the reliability problems I think it's going to cause, the things it does that actually diminish the security of the DNS... but the big point relevant here is: DNSSEC replaces a market of CAs with a baked-into-the-Internet fiat authority. If DNSSEC had replaced SSL CA's in the mid '00s, Ghaddafi's Libya would have been Bit.ly's CA. This does not seem like a win to me.
I don't think that rent-seeking SSL CAs are as big a problem as many HN users seem to think they are. I think ultimately there's significant expense involved in operating a secure CA, and that relative to their purported value, CA certificates are reasonably priced.
The pressing problem with SSL/TLS is that CAs aren't trustworthy. They are rent-seeking, as expected, but also shoddily operated. The Internet has largely lost faith in the people operating CAs.
Moreover, a decade and a half of browser/CA relationships have left all the major browsers riddled with skeleton-key CA certs run by organizations that nobody can really vouch for. As a result, large companies have purchased browser-trusted CA operations, and then used them to do incredibly dubious things. The companies that have been caught doing skanky stuff with their CA keys haven't even been kicked out of the browser CA stores.
As a result, we're left with a situation in which untrustworthy companies can potentially sign certificates for (and thus enable transparent MITM attacks against) critically important sites, like Google Mail. That's an untenable position.
I personally believe (and, yes, hope) that the future of Internet security looks much like today, except with things like Trevor Perrin and Moxie Marlinspike's TACK scheme, to allow security-sensitive sites to overrule bogus CAs, and to allow us to gradually decrease the architectural dependence we have on SSL CAs and start experimenting with more flexible alternatives.
I am not a fan of trying to take the same model that just failed us, but centralizing it and handing it over to the unaccountable groups of people who control the domain name system.
But the important point is that DNSSEC stapled certificates don't need the browser to perform any extra DNS lookups. The certificate itself contains the DNSSEC information and signatures. Since DNSSEC is signed the data can come over any channel; it doesn't have to be port 53.
Unencrypted DNS still leaks the hostname that you're visiting - that's true. However, the destination IP address probably leaks the same information and, if not, we sent the hostname in the clear at the beginning of the TLS handshake! (That's SNI, it allows SSL virtual hosting.)
Please don't take this as an argument. I just want to know where I'm wrong! I just can't get over the idea of pushing at the (justifiably) paranoid level for HTTPS while we still have plain-text DNS... even with DNSSEC!
Wish request: Your thoughts on http://news.ycombinator.com/item?id=4268461.
The key is that the IP address doesn't matter; indeed, it shouldn't matter if the traffic is going over carrier pigeon. You have a name that you wish to connect to, say example.com, and you have some way to send and receive packets. If the other end can prove that they are example.com by means of a certificate, then you have a secure connection. How the data gets there and back is immaterial to transport security.
* Chrome wants to FORCE you to buy an SSL certificate.
* The guy suggests getting one from StartSSL BUT those are crap for 2 reasons: you can only have ONE domain per cert (else you have to pay), and the TOS are horrible.
So, dear imperialviolet, if you want me to use certificates that your company trusts (and by extension, your users), get on with it and make Google provide free, unlimited SSL certificates.
Til then, no dice.
It's one name per certificate (well, two: yourdomain.com and whatever.yourdomain.com) but you can order multiple certificates for multiple subdomains in the same or different domains at no charge.
And it's only one domain per cert, so your entire argument is silly.
If you have a really small website, NearlyFreeSpeech is actually nearly free.
Actually that's not the case, you can get single certificates which cover different domains, using the Subject Alternative Name field.
Edit: a downside of using separate certs is that you'll need to serve the respective sites from separate IP addresses, or rely on SNI, which isn't supported in older browsers. But if the use case is a separate domain for serving static files, that's probably hosted on a different server/IP address anyway, right?
Not necessarily. People used to have different domains for images/css/js because browsers used to not want to download more than 2 things at the same time from the same domain name. (Back when the web was young and ugly, and bandwidth was scarce, this made sense.) By having multiple domains (e.g. a.static.example.com, b.static.example.com etc.) on the same IP address/server, you could trick browsers into downloading more in parallel and make your site seem faster. You didn't need multiple IPs for that.
Nowadays browsers have upped their limit from 2 to something like 8–16 or so, so it's less of a problem.
I haven't tried this in practice though, but this might be useful if you want to provide a bunch of client.yourdomain.com secure subdomains from the same IP address. Only downside is that the organization name will be the same.
That's the one downside of this HTTPS-everywhere movement - we're beholden more than ever to the certificate authority cartel.
In other words don't specify http or https in the url, just do '//your-url.com/new.js'
no one even pays attention to the client side of ssl. how many of you use your own ssl certificates? you basically can't under the cert authority scheme. it's a racket and no one is going to pay for these. and do the banks even care? they use tactics like cookies and follow-up emails to verify customers (hardware).
and why does the bank have to be able to switch their ip address without telling anyone? what if the same was true for phone numbers? people would be like wtf? load balancing? c'mon. too difficult to type? think about the trade-offs in security, all for the sake of not looking at a number? ipv4 is no longer than an area code and phone number. just tell people where your servers are and let them choose the one that is nearest. which incidentally, contrary to conventional wisdom, is not _always_ the one that will be the most responsive in the ever-changing state of the network.
there's nothing more annoying than being subjected to using trial and error and you are not allowed to do any of the trial when the errors start coming. out of your control.
what happened to the concept of "important numbers"? are we to believe you only need to remember "google.com" or "yourbank.com"? that's a security problem waiting to happen.
what we need is simplicity, reliability and security.
It's not clear to me from the docs I have been able to find.
Does it work properly?
(I know that if you try it with a self-signed cert, it will just drop the request)
With signed certificates, Libya can MITM (unpinned) certificate-backed TLS sessions.
With signed certificates, random people cannot MITM (any) certificate-backed TLS sessions.
With self-signed certificates, Libya can MITM any TLS session.
With self-signed certificates, random people can MITM any TLS session.
I'm not seeing the argument you're making here.