Living with HTTPS (imperialviolet.org)
326 points by daniel02216 on July 19, 2012 | 126 comments



This is pretty great. I guess he gave this talk at HOPE, but it's laser scoped to startups, down to the order in which he gives the advice:

* Enable HSTS

* Don't link to http:// JavaScript resources from HTTPS pages

* Set the secure flag on cookies

Very few of the sites we test enable HSTS. But it's easy to do; it's just an extra header you set.
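
For example, a minimal HSTS policy is just one response header (the max-age value here, one year in seconds, is an illustrative choice):

```
Strict-Transport-Security: max-age=31536000; includeSubDomains
```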

The only quibble I might have is his fatalism about mixed-security JavaScript links. I'd go further than he does: when you source JavaScript from a third party, you have leased your users' security out to that third party. Don't like the way that sounds? Doesn't matter: it's a fact. Companies should radically scale back the number of third parties that they allow to "bug" their pages.


A solution going forward to contain 3rd-party JavaScript is the HTML5 iframe sandbox attribute. It lets you declare a whitelist of permissions that 3rd-party code should be granted. Only about 40% of browsers support this feature [1]. In unsupported browsers, the external JavaScript continues working without the security guarantees, so it's no worse than the situation now.

[1] http://caniuse.com/#feat=iframe-sandbox
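
As a sketch, a third-party widget confined this way might look like the following (the URL is hypothetical; `allow-scripts` permits JavaScript to run but withholds same-origin access, form submission, plugins, and popups):

```html
<!-- Hypothetical third-party embed, confined by the sandbox attribute -->
<iframe src="https://widget.example.com/embed.html"
        sandbox="allow-scripts">
</iframe>
```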


You can get most of the benefit now by registering a separate domain for the frames and taking advantage of the same-origin policy.


All of these are good recommendations.

Another technology to start preparing for is TACK. It allows you, the server owner, to control browser pinning of your certs while maintaining CA mobility. This gives you the control over your security that Google has over Gmail via Chrome cert pinning without having to issue a new browser build every time you change CAs.

One way to think of it is like a domain transfer lock but with cryptography. You control how you unlock your pin to allow mobility to a new CA by sticking a signed file on your SSL server.

http://tack.io/

[Disclosure: one of the authors of TACK is a former co-worker.]


I see Moxie is one of the authors on the draft. Is this an outgrowth/pivot of Convergence?


The way I see the relationship between Convergence and TACK is that Convergence is trying to provide trust agility for when we need to trust third parties, while TACK is trying to reduce the amount that we even need to trust a third party at all.

I think the first problem gets considerably easier to solve once the latter is in place, and there's a lot we could do with Convergence-like systems that would make them more deployable if TACK is adopted.

In the short term, however, TACK stands on its own, and we hope it's a fairly uncontroversial proposal that will be easy to integrate into the ecosystem.


Moxie Marlinspike works for Twitter now that they've acquired Whisper Systems. TACK addresses the same problem as Convergence, but is a much more tactical and incremental feature.


It's actually kind of a pain to enable HSTS because it makes you fix all the places where you're downgrading to HTTP. You should definitely do it if you care whether your users' sessions get hijacked, but it's not _just_ flipping a switch.


Pages with http://yoursite links work seamlessly. The browser will access those via https when you click. As long as the SSL version of your site is serving the same resources, there's no problem.


If you need an absolute URI for any assets, just use a protocol-relative URL:

//www.example.com/path/to/asset.js

This will then use the same transport as the containing page.
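
In markup, that looks like this (hypothetical asset URL):

```html
<!-- Fetched over HTTPS on an https:// page, over HTTP on an http:// page -->
<script src="//www.example.com/path/to/asset.js"></script>
```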


That's a great suggestion that's often overlooked.

At Quantcast we tried to use "//" without the protocol in our tags (to eliminate the need for a separate http: and https: tag), but we had a huge number of complaints about a bug in our tag (missing http:!). Users also tried to be helpful and add in the "http:" and then complained when it broke. In the end we went with two separate tags to reduce the support burden, despite the added complexity of having to explain the two tags.


"Very few of the sites we test enable HSTS. But it's easy to do; it's just an extra header you set."

Less than a year ago, you were saying HSTS wasn't worth the trouble. Ref: https://news.ycombinator.com/item?id=2909613

Glad you've changed your mind.


Using SSL properly is not particularly difficult in theory, but there are so many moving pieces that the whole thing ends up being hard in practice; it's easy to forget a crucial step. To address this, I wrote SSL/TLS Deployment Best Practices, which contains 22 recommendations in 5 categories:

https://www.ssllabs.com/projects/best-practices/

I encourage everyone to read through it, and follow it. Once you know what to do, it's easy. Part 2, dealing with advanced topics, is coming in October.


  config.force_ssl = true
Feels good, man. If only it were that easy to enable HSTS in all web frameworks.


> There's a second corollary to this: attackers can set your HTTPS cookies too.

If your app uses session ID cookies, then another implication is that attackers can set a user's session ID to a value they know, wait for the user to log in, and then use that session ID to hijack the logged-in session. To prevent this, make sure you regenerate session IDs when logging a user in. (This isn't the only reason to regenerate session IDs on login, but it's a very compelling one.)
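
A minimal sketch of that regenerate-on-login rule, using a plain dict as a stand-in for a real session store (the names here are hypothetical, not any particular framework's API):

```python
import secrets

def login(session_store, presented_session_id, user_id):
    """On successful authentication, abandon the session ID the client
    presented (an attacker may have chosen it) and issue a fresh,
    unguessable one, carrying over any pre-login state worth keeping."""
    state = session_store.pop(presented_session_id, {})
    fresh_id = secrets.token_urlsafe(32)   # cryptographically random
    state["user_id"] = user_id
    session_store[fresh_id] = state
    return fresh_id                        # set this as the new cookie
```

Even if an attacker fixed `presented_session_id` beforehand, that ID is dead after login, so it can't be replayed to hijack the authenticated session.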


This is called "Session Fixation". Most modern frameworks prevent it, which is a reason to use your framework's session functionality rather than re-inventing it.

Session fixation used to be a common problem. There were lots of J2EE applications which were not only vulnerable to fixation, but which would let an attacker fix a session ID with a carefully crafted GET URL. It's much rarer now.


The author seems to gloss over the importance of browser built-in HSTS lists. If you're just relying on a response header to tell the browser to use HTTPS, aren't you still vulnerable? Isn't that the same fundamental problem with redirecting to HTTPS via Location headers?

In other words, a MITM could downgrade any HTTPS traffic and simply remove that STS header. The browser would be none the wiser.


No, that's not how STS works. Once the header is set, a MITM can't simply clear the header; the purpose of STS is to tell the browser to remember that the site is HTTPS-only.

You are, obviously, vulnerable on first contact to a site, in that an attacker can prevent you from ever seeing the STS header. The point of STS is that attackers don't generally get to intercept your first contact with a site.

Adam Langley, by the way, is one of Google's Chrome SSL/TLS/HTTPS people.


If you're in an oppressive country (say Syria), for example, is it a bad assumption that you're always being MITM'd, and unless you leave the country (not likely) EVERY first contact you make is already compromised? It's a tough chicken and egg problem.


I'm really not sure what the point of this debate is. There are countries oppressive enough where I'd be worried that most of the computers in them are backdoored and keylogged. HSTS doesn't have anything to say about that either.

Similarly: a country savvy enough to have a whole regime for ensuring they have custody of all transactions from first contact on probably isn't a country that offers safe access to browser binaries either, which kind of hurts the utility of baked-in SSL restrictions.


The thing which terrifies me is that most of the users in these "screwed" countries are probably using mobile phones connected to a state-owned PTT carrier (or a couple of licensed carriers), rather than laptops or desktops, at least for most organizing. True, a lot of the devices are purchased through the grey market unlocked rather than via the carrier, but it's not hard for a carrier to push evil OTA updates.


If clearing the browser cache/cookies makes the browser forget about STS for each domain, then a MITM attacker gets a lot more chances to intercept and attack. I don't have stats on how often average users clear their browsers, but it is a fairly common troubleshooting step, so most people are aware of it.

If clearing the browser cache/cookies does not make the browser forget about STS for each domain, then we've got another way to build an evercookie: http://samy.pl/evercookie/


The whole point of tracking cookies is to maintain some identifier. With STS you only get one bit per domain, so how do you identify anyone? All users who have visited the site since (now - STS expiration length) look the same.


On the other hand, if the Chinese government finds out you have an HSTS flag for https://www.youversion.com/ or some such...


I should clarify, I was referring to the first visit to the site. So yes, I can see how this greatly reduces the vulnerability, though it doesn't completely remove it.

As an example, a rogue Apple Store employee could insert himself as a MITM between the access point and the internet connection. Anyone testing out a new laptop in the store (or logging in to their bank from a just-activated iPhone) would be vulnerable, without the attacker ever having touched any of those devices.


There are a lot of real security issues HSTS doesn't protect against. But for its minuscule cost, it does a very good job of protecting against one specific real threat.


I guess the real lesson here for users is "don't do anything sensitive on a device that's not yours, or on one you just acquired, unless you trust every hop between you and the server". But users generally don't know/care/think about these things :)


The STS header has an expiration date on it.

Let's assume you have it set to 1 year, and user A visits the site in March, receives the header, and uses the site for a while.

In May, the site/user's DNS/whatever is hijacked. Users are sent to a dummy site, which does not set the header. The dummy site is over HTTP.

The next day, the user tries to go to the site. Because the connection is not over HTTPS, the browser refuses to load the page, even though the header is no longer sent.


    strict-transport-security:max-age=2592000; includeSubDomains
It has a maximum age, not an expiration date. If I visit gmail today, that STS header will expire a month from now[1].

If I visit again tomorrow, the browser-cached version of that header will be updated with a new expiration date, and expire a month from tomorrow, not now.

1: http://www.wolframalpha.com/input/?i=2592000%20seconds

edit: I think we're agreeing.
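
A toy model of that sliding window, to make the max-age semantics concrete (a sketch, not Chrome's actual implementation):

```python
import time

hsts_pins = {}  # hostname -> absolute expiry timestamp

def observe_sts_header(host, max_age_seconds, now=None):
    # Every HTTPS response carrying the header refreshes the pin:
    # the expiry is always "now + max-age", not a fixed calendar date.
    now = time.time() if now is None else now
    hsts_pins[host] = now + max_age_seconds

def must_use_https(host, now=None):
    now = time.time() if now is None else now
    expiry = hsts_pins.get(host)
    return expiry is not None and now < expiry
```

So a user who keeps visiting never falls off the cliff; only a user who stays away longer than max-age loses the protection.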


Please for the love of god, if you're working at google and read this: Add a deeply set option to FORCIBLY enable that button in all situations where it might appear. We sometimes have certificate issues with our proxy server at my workplace and it makes Chrome practically unusable when they happen.

I know what I'm doing. I'll reset the option when the underlying issue is resolved, and overall it's a great feature for the browser, but I need to have the ability to be responsible for myself.


It's very straightforward for a proxy to have its own CA=YES certificate and mint/sign certs for every HTTPS site the proxy sees on the fly. If you have a corporate proxy that is intercepting HTTPS traffic, that is what it should be doing.

Then, the proxy makes its certificate available to users, you download it, and add it to your CA certs via the UI that browsers provide for that; HTTPS magically appears to work again.


So the proxy server acts as a forced man in the middle? This has to be one of the most atrocious things I have ever experienced. Forcing a man in the middle is insane, especially in a large company where there may well be a lack of competency.

HTTPS shouldn't magically appear to 'work' again, considering it is completely broken when a forced MITM is introduced.


Are you arguing with me, or with reality? I can't tell, because the system I described is how corporate proxies work pretty much everywhere.

If you want privacy against the administrators of your employer, don't use your employer's network to do things that need privacy.


I don't see how my comment may be interpreted as starting an argument. I was simply replying to your comment about HTTPS just 'work'ing once you ignore the man-in-the-middle attack. It's not privacy from an employer that is the underlying issue; it is the practice itself which should be frowned upon. People didn't spend their time trying to come up with the ability to have secure communications from point A to point B just to have someone come in and break it.

The problem isn't necessarily what the employer sees, it's what the employer might keep around.


Enterprises are making a policy decision to take advantage of the Internet security model from the border of their network outward, but to take responsibility for IP security inside their network. That is a reasonable policy decision.

But even if reasonable people could disagree about that policy decision: the reality is that people operating large corporate networks require the ability to control SSL/TLS sessions; for instance, there are whole industry verticals where accessing a private email server not controlled by your employer is grounds for automatic termination, because regulations require them to track and archive email messages.

Finally, and I'm repeating myself: I am describing the reality of most Fortune-500 enterprise networks. In most corporate networks, you cannot simply talk from your desktop out to the Internet; you are required to use a proxy. You're also almost certainly on a 10/8 IP address.


This is far more common than you might expect. You just need to push your company's internal CA to all your client computers, and bam, MITM for everything!


Yes, enterprise customers want to decrypt and inspect all traffic, for legitimate and sometimes sketchy reasons.


HIPAA requires it as far as I know, and I am sure other regulatory frameworks probably do.


HIPAA does not require traffic monitoring.


Yeah, that's what should be happening, but sometimes the software breaks, or the security restrictions on the certificates accepted by the browser changes and the vendor of the product doesn't update fast enough or the certificate that's installed is out of date or whatever.

In which case I'm up shit creek without a paddle, because there's no way to temporarily disable the security feature.

And I do not have control over the Proxy server because I'm not in the fucking security team.


The bypass button only disappears for HSTS sites. Do you have a proxy server that's intercepting these connections and has a broken certificate?

You can disable all certificate checking with --ignore-certificate-errors but it is as bad as it sounds.

Rather, to correctly support MITM proxies you should install their CA certificate locally.


I suppose I can use that the next time it happens, but that's a bit more overkill in terms of disabling warnings than I'm looking for :/


Starting Chrome with a flag is more overkill than adding a whole feature to Chrome to allow users to ignore SSL security?


> We sometimes have certificate issues with our proxy server at my workplace ...

This is the problem.

> ... and it makes Chrome practically unusable when they happen

This is not the problem.


The latter is just as bad. Opinionated software, i.e. making things that are usually a bad choice hard, is a good thing. Making possible things deliberately impossible for the victims of other people's bad decision making is arrogant. So some idiot CIO chose to make https mandatory but their staff can't get it configured properly. I am sure the non-IT user's boss will be happy that, instead of, say, closing that one important deal, the user waits until the https issue is resolved. What a way to drive IE out of the enterprise...


Random enterprises will always be breaking some part of the HTTP stack. It's not reasonable to degrade everyone's security, even the majority of people who don't have unnecessary breakage inflicted on them, just to accommodate those enterprises.

There is a clean solution to this problem: the proxies should serve as just-in-time CAs for the traffic they proxy. The big proxy products all do that. This simply isn't Chrome's problem.


Do you realize that it's not the abstract concept of an enterprise using a browser, but a human being? One who, usually, is not the person administering the server. I'm all for nudging people to change their own behaviour for the better, but this is driving your principles home on the back of the user.

Considering your use of the word "breakage": DannoHung is talking about a button that is actively being disabled in certain situations, not something bad being enabled. This is extra code in a security-critical part of the browser. Thus, we can assume that there were meetings that discussed this "feature" and its implications, the actual coding, code reviews and QA, adding up to quite a bit of opportunity cost. That raises the question: would this time have been better spent on something that adds more security?

Disabling manual overrides may seem like a good idea, but it can go horribly wrong. http://en.wikipedia.org/wiki/Lufthansa_Flight_2904


The button he's asking for is "disable TLS security".

If he wants to disable TLS security, there's a right way to do it: by installing the proxy's cert.

If you read 'agl's talk, you'd see that the reason the button is hidden is that it is one of the Internet's great security flaws: a workflow embedded into most browsers that trains users to disable TLS security.

So, I find this argument you're making to be more or less entirely bankrupt.


Anyway, I would say "--ignore-certificate-errors" is an acceptable workaround here. If your proxy is already intercepting all HTTPS traffic, then there's really no benefit in the client browser also verifying certificates.

Of course, I would still only run with "--ignore-certificate-errors" for the limited time the proxy has broken certificates or whatever...


Even with a corporate proxy intercepting SSL connections, individual browsers are still protected against attacks on the local network involving SSL impersonation (rogue access points, DHCP or ipv6 neighbor announcement abuse...).

Companies have their firewall infrastructure locked down (hopefully), but lan segments (except in high-security environments) not as much.


The standard very explicitly states that Chrome's behavior is correct:

   When connecting to a Known HSTS Server, the UA MUST terminate the
   connection with no user recourse if there are any errors (e.g.
   certificate errors), whether "warning" or "fatal" or any other error
   level, with the underlying secure transport.
http://tools.ietf.org/html/draft-hodges-strict-transport-sec...


It's not arrogant on Chrome's part to incrementally improve the user's security, while forcing companies to incrementally improve their security. It's incompetent on the IT department's side to ignore a problem that is both a genuine security risk and an obstacle to getting things done. (Besides, as other commenters have pointed out, the ignore option is only hidden if the server opts to send an HSTS header.)


Or fix the certificate issues. Don't train the staff to be blind to security warnings.


How exactly does "I know what I'm doing. I'll reset the option when the underlying issue is resolved, and overall it's a great feature for the browser, but I need to have the ability to be responsible for myself." become "train the staff to be blind to security warnings"?


If you know what you're doing, get the CA=YES cert from your proxy and install it in your browser.


The cert itself was broken in the most recent case.


If you know what you're doing, you'd know how to get around this limitation, with the --ignore-certificate-errors option for example. Any knowledgeable front-end or back-end developer would know how to find this out by doing a Google search. As long as you don't train the rest of the staff to do the same, that's fine. Now, why isn't the IT department fixing this security and usability problem?


You can remove HSTS on a per-domain basis using

chrome://net-internals/#hsts

Are you looking for something else?


You can't remove the preloaded ones, which is really how it should be.


Moxie published a tool called SSLstrip (http://www.thoughtcrime.org/software/sslstrip/); here's a simple video demonstration: https://www.youtube.com/watch?v=PmtkJKHFX5Q


Very interesting. Thanks!


For users, HTTPS Everywhere is a must: https://www.eff.org/https-everywhere

Also, by using DuckDuckGo [1] over HTTPS you get the same ruleset in HTTPS Everywhere [2] even if you don't have the extension installed.

[1] https://duckduckgo.com/

[2] http://www.gabrielweinberg.com/blog/2010/09/duckduckgo-imple...


The chrome extension at least seems to break a lot of sites. They're not kidding when they say it's alpha.

Pages include resources from https-everywhere'd domains and for whatever reason (mostly that the ssl versions of those resource urls aren't serving the same resources, or have broken certs) those resources fail to load. Within an hour of using it I'd seen it break 3 or 4 sites, so it got disabled.

You can manually disable it for individual sites, if you recognize that it's the problem, but if some minor resource fails to load it might not be obvious.


The reason the extension 'breaks sites' is that an alarming number of sites are happy to serve insecure content all over the place. See the reference to the New York Times in the original article for an example.


And use HTTPS Finder to default to connecting via HTTPS before HTTP, then add a rule automatically to HTTPS Everywhere. https://addons.mozilla.org/en-US/firefox/addon/https-finder/


Honest question, for those who have done it: what are the downsides of allowing your whole site to be accessed via SSL?

Obviously, you need to be a bit more diligent about making asset urls protocol-relative (which can be a PITA across a large, dynamically generated site), but are there any other gotchas? Server load? Reduced cache-ability?


You can have good cacheability; you just need to send explicit Cache-Control headers (which is a good idea anyway).

If you don't do SSL properly (e.g. non-SSL-terminating load-balancer can break SSL session resuming by forwarding requests to different servers which don't share tickets) then you'll have lower front-end performance.

webpagetest.org nicely shows connections including time spent on SSL negotiation, so you can use it to check your SSL overhead.
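
For instance, a long-lived caching policy for a static asset served over HTTPS might be expressed as follows (the one-year value is an illustrative choice):

```
Cache-Control: public, max-age=31536000
```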


In my case, I work for a SaaS provider that performs virtual hosting using customer-provided SSL certificates (myservice.customer.com). This puts us in the unenviable position of having to maintain thousands of IP address endpoints, one per customer, along with all the network-related complexity that goes along with it.

SNI would help a lot, but unfortunately it will never be a feature in the SSL client code in Windows XP (which MSIE uses) and so we're stuck with this for the foreseeable future.


By protocol relative, are you referring to the // urls?

(http://paulirish.com/2010/the-protocol-relative-url/)

Because they wouldn't be so PITAish, would they?


Without the proper tuning https will be significantly slower than http.


Could you clarify what "proper tuning" entails?


SSL/TLS is not computationally expensive any more[1] but there are some things that can help/hurt discussed in the footnote.

[1] http://www.imperialviolet.org/2010/06/25/overclocking-ssl.ht...


It mentions the free StartSSL certificates, as does their page. But what isn't clear is whether the certificates are free to renew after a year (i.e., whether this is just a teaser).

I currently use a self signed cert and certificate patrol, but apps (in particular Thunderbird) are becoming increasingly hostile to that.


> what isn't clear is if the certificates are free to renew after a year

Yes, they are. StartSSL will even send you a reminder e-mail.


HTTPS Everywhere is a Firefox/Chrome browser plugin that will ensure connections that can be HTTPS are. It also does a good job preventing SSL stripping.

https://www.eff.org/https-everywhere/


This was by a wide margin my favorite talk at HOPE this year.

(and a great advertisement for using Chrome in secure settings where you need a web browser)

The irony of Google being one of the main http-only JS resources for a long time was kind of amusing, though.


What do you mean by "Google being one of the main http-only JS resources"?


Adsense.

Also I think GA was http only at some point.


Adsense can be an issue. At least it isn't a requirement for site functionality though.

GA has had https support as long as I can remember.


I'm wondering if there could be an equivalent DNS entry that might help signal a site should only be accessed via SSL? Then you could possibly protect against initial access as well as returning users.


We can't do a blocking DNS lookup for anything other than A/AAAA records. About 5% of Chrome users cannot resolve TXT records because the network is filtering the DNS requests. (I.e., we know that the network is up and we're asking about a DNS name that we know exists, but we get a timeout.)


In [1] you showed us how to authenticate via DNSSEC HTTPS in Chrome. If I understand correctly this involves a lookup of a TYPE257 record. Given that only 5% can resolve TXT records, do you know what % of Chrome users can then resolve TYPE257 records?

Digressing a bit further, wouldn't you say that even if HSTS is enabled and registered in the all the browsers' built-in list, you still have the problem of unencrypted DNS lookups? (Maybe this kind of attack is orders of magnitude harder to implement. I honestly don't know.)

[1] http://www.imperialviolet.org/2011/06/16/dnssecchrome.html


No. The whole idea of HSTS is that you can never trust the DNS; you assume that's the most likely way an attacker is going to MITM her victims. HSTS tells the browser to remember that from that point on, all connections to SERVER.NAME have to happen under a TLS session with a valid cert.


Thanks for your answer! I'm getting more confused by the minute about DNSSEC et al.: isn't the DNSSEC-based validation of HTTPS referred to above supposed to get rid of CAs in the future? That wouldn't make sense even with DNSSEC, considering that the information is not encrypted, right? I hope you don't take this as "hijacking", but I'd be most curious about what you and other security experts think about Paul Vixie's "Whither DNSCurve?" [1], which has amazingly not been submitted to HN. I just submitted it [2].

(If I could vote for your time investment, please kindly consider commenting on that article before replying to this comment.)

Thanks again!

[1] http://www.isc.org/community/blog/201002/whither-dnscurve

[2] http://news.ycombinator.com/item?id=4268461


There is a splinter group of people-who-want-DNSSEC who argue that DNSSEC will obviate the need for CAs. Their reasoning, distilled, is that DNSSEC is itself a PKI, with the roots signing TLDs and TLDs signing domains and so on. Since the core architectural purpose of certificates is to "break the tie" between an attacker's public key and the real site's public key, and since DNSSEC zones could serve the same purpose by housing a DNSSEC-signed public key, blammo, no more Verisign.

There are a bunch of problems with this idea. Most of the ones that spring to my mind are problems with DNSSEC in general: its brittleness, the reliability problems I think it's going to cause, the things it does that actually diminish the security of the DNS... but the big point relevant here is: DNSSEC replaces a market of CAs with a baked-into-the-Internet fiat authority. If DNSSEC had replaced SSL CAs in the mid '00s, Ghaddafi's Libya would have been Bit.ly's CA. This does not seem like a win to me.

I don't think that rent-seeking SSL CAs are as big a problem as many HN users seem to think they are. I think ultimately there's significant expense involved in operating a secure CA, and that relative to their purported value, CA certificates are reasonably priced.

The pressing problem with SSL/TLS is that CAs aren't trustworthy. They are rent-seeking, as expected, but also shoddily operated. The Internet has largely lost faith in the people operating CAs.

Moreover, a decade and a half of browser/CA relationships have left all the major browsers riddled with skeleton-key CA certs run by organizations that nobody can really vouch for. As a result, large companies have purchased browser-trusted CA operations, and then used them to do incredibly dubious things. The companies that have been caught doing skanky stuff with their CA keys haven't even been kicked out of the browser CA stores.

As a result, we're left with a situation in which untrustworthy companies can potentially sign certificates for (and thus enable transparent MITM attacks against) critically important sites, like Google Mail. That's an untenable position.

I personally believe (and, yes, hope) that the future of Internet security looks much like today, except with things like Trevor Perrin and Moxie Marlinspike's TACK scheme, to allow security-sensitive sites to overrule bogus CAs, and to allow us to gradually decrease the architectural dependence we have on SSL CAs and start experimenting with more flexible alternatives.

I am not a fan of trying to take the same model that just failed us, but centralizing it and handing it over to the unaccountable groups of people who control the domain name system.


Wow. Thank you very much for educating us, Thomas. Your comments require grabbing some popcorn or equivalent. In particular when you engage in a constructive debate with someone of your caliber. One of my all-time favorite threads in HN (or elsewhere...) is http://news.ycombinator.com/item?id=893659. Thanks again.


Well, now that DANE is nearly an RFC I should change Chrome to use it rather than the TYPE257 records.

But the important point is that DNSSEC stapled certificates don't need the browser to perform any extra DNS lookups. The certificate itself contains the DNSSEC information and signatures. Since DNSSEC is signed the data can come over any channel; it doesn't have to be port 53.

Unencrypted DNS still leaks the hostname that you're visiting - that's true. However, the destination IP address probably leaks the same information and, if not, we sent the hostname in the clear at the beginning of the TLS handshake! (That's SNI, it allows SSL virtual hosting.)


Thanks for answering! What I don't understand is that, given that your starting point is "two computers talking over a malicious network", doesn't the current state of affairs of (unencrypted) DNS mean that it's game over from the outset? That is, if the network is malicious, that MITM could very well refer you to an invalid IP address the moment you first try to resolve, say, mail.google.com.

Please don't take this as an argument. I just want to know where I'm wrong! I just can't get over the idea of pushing at the (justifiably) paranoid level for HTTPS while we still have plain-text DNS... even with DNSSEC!

Wish request: Your thoughts on http://news.ycombinator.com/item?id=4268461.


Yes, DNS can be used to direct you to the wrong IP address, but that hardly matters: an evil network can give you the correct IP address and then intercept all traffic to it.

The key is that the IP address doesn't matter; indeed, it wouldn't matter if the traffic were going over carrier pigeon. You have a name that you wish to connect to, say example.com, and you have some way to send and receive packets. If the other end can prove that they are example.com by means of a certificate, then you have a secure connection. How the data gets there and back is immaterial to transport security.


Think this is a side-effect of EDNS0 being blocked, or UDP packets on port 53 bigger than 512 bytes being blocked?


The response is 345 bytes and we used the OS DNS library to send the request, so I'm not sure whether EDNS0 would have been set.


DNSSEC could allow this to work, if the connection between the client and a DNSSEC-enabled recursive resolver were secure. But if you're on the LAN of the client (for example, a wireless network) you can spoof every DNS response and the client is boned.


... at the cost of a myriad of other annoyances, breakages, and potential insecurities after DNSSEC is deployed. Deploying DNSSEC to help with the problem that HSTS is trying to solve is like deploying Homer Simpson's automatic hammer to pin an announcement to a bulletin board.


It's still one possible solution to the problem. If one's Windows DNS client were a DNSSEC-validating stub resolver[1], and if you believe that in the future we will come to a point where network admins stop fucking with DNS traffic for no good reason, it could authenticate information from the website's DNS on first visit and avoid HSTS's pitfall. Note that I never said this was going to be practical :)

[1] https://www.internetsociety.org/deploy360/resources/dnssec-t...


What I take out of this, beside things I knew of already (and most others as well) is:

* Chrome wants to FORCE you to buy an SSL certificate.

* The guy suggests getting one from StartSSL BUT those are crap for 2 reasons: you can only have ONE domain, else you have to pay, and the TOS are horrible.

So, dear imperialviolet, if you want me to use certificates that your company trusts (and by extension, your users), get on with it and make Google provide free, unlimited SSL certificates.

Til then, no dice.


> you can only have ONE domain, else you have to pay

It's one name per certificate (well, two: yourdomain.com and whatever.yourdomain.com) but you can order multiple certificates for multiple subdomains in the same or different domains at no charge.


If you can't afford the $43/year for a Thawte starter cert, you have no business running a domain of your own. Seriously, less than $4 a month - that's going to be dwarfed by any sort of hosting you might be paying for.

And it's only one domain per cert, so your entire argument is silly.


Dwarfed by hosting costs? There are tons of shared hosting options in the $50/year range which are quite usable for lots of purposes.


My VPS runs an email server, seed box and hosts my personal landing page and a small organization's blog for 2€ / month.

If you have a really small website, NearlyFreeSpeech is actually nearly free.


You know it's not Google that makes you buy SSL certs for security, right? As far as "no dice" ... um check out some tutorials on SSL ...


Do StartSSL certificates even work on every browser by default?


Yes. They offer a list on the bottom of the page: http://www.startssl.com/?app=40


SSL/TLS & Perfect Forward Secrecy - http://hackerne.ws/item?id=4267767


Obligatory Django link: https://github.com/carljm/django-secure.


Somewhat related question: It's fairly common for sites to have static files (images/css) served on a different (sub)domain. What are you supposed to do when the html content is being served on HTTPS? Should the static files be on HTTPS as well? If so, wouldn't it need a different certificate? Certificates are only valid for a single domain, after all.


If you don't want to pay a premium for a wildcard certificate, you could just get another certificate for the static subdomain. You can get an unlimited number of free, single subdomain certificates from StartSSL.


"Certificates are only valid for a single domain, after all"

Actually that's not the case: you can get a single certificate which covers different domains, using the Subject Alternative Name field.
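For illustration, here's a rough Python sketch of how a client decides whether a hostname is covered by a certificate's SAN list, including the one-label wildcard rule. The helper name is made up; real clients should rely on the TLS library's built-in verifier rather than rolling their own.

```python
def hostname_matches_san(hostname: str, san_entries: list[str]) -> bool:
    """Return True if `hostname` is covered by any DNS entry in the
    certificate's Subject Alternative Name list. A leading '*.'
    wildcard matches exactly one label (so *.example.com covers
    a.example.com but not a.b.example.com)."""
    host_labels = hostname.lower().split(".")
    for entry in san_entries:
        entry_labels = entry.lower().split(".")
        if entry_labels == host_labels:
            return True  # exact match
        if (entry_labels[0] == "*"
                and len(entry_labels) == len(host_labels)
                and entry_labels[1:] == host_labels[1:]):
            return True  # wildcard match on the leftmost label only
    return False
```

This is why one cert listing both www.mydomain.com and static.mydomain.com as SANs can serve both hosts.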


You can get SSL certs that are valid for wild card subdomains, e.g. *.example.com


Wildcard certs tend to be very expensive. If you only need two domains it's probably more cost-effective to just buy two certs.

Edit: a downside of using separate certs is that you'll need to serve the respective sites from separate IP addresses, or rely on SNI [1] which isn't supported in older browsers. But if the use case is a separate domain for serving static files, that's probably hosted on a different server/IP address anyways, right?

[1] http://en.wikipedia.org/wiki/Server_Name_Indication


if the use case is a separate domain for serving static files, that's probably hosted on a different server/IP address anyways, right?

Not necessarily. People used to have different domains for images/css/js because browsers used to not want to download more than 2 things at the same time from the same domain name. (Back when the web was young and ugly, and bandwidth was scarce, this made sense). By having multiple domains (e.g. a.static.example.com, b.static.example.com etc.) on the same IP address/server, you could trick browsers into downloading more in parallel and make your site seem faster. You didn't need multiple IPs for that.

Nowadays browsers have upped their limit from 2 to something like 8 to 16 or so, so it's less of a problem.


A certificate can also have a number of alternate names, which providers call a Unified certificate -- UCC. The nice thing about that is that you can add/remove names after you've bought the certificate without having to go through the whole process again.

I haven't tried this in practice though, but this might be useful if you want to provide a bunch of client.yourdomain.com secure subdomains from the same IP address. Only downside is that the organization name will be the same.


This is also wildly more secure than a wildcard certificate: if someone nicks a wildcard's private key, your entire domain is compromised, whereas with a UCC only the select hosts on the certificate could be compromised. I believe it's also supported on more devices than SNI (since X.509v3).


Also more expensive than vanilla single-name certs :-(

That's the one downside of this HTTPS-everywhere movement - we're beholden more than ever to the certificate authority cartel.


Ah, I wasn't aware of that. Thanks!


If you are concerned about the mixed-content warnings that users see, just reference your static files in a format like this:

'//www.your-cdn.com/image.jpg'

In other words, don't specify http or https in the URL; just use '//your-url.com/new.js'.
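A small Python sketch of the scheme-stripping this describes (the URLs are the illustrative ones from above):

```python
from urllib.parse import urlsplit, urlunsplit

def protocol_relative(url: str) -> str:
    """Drop the scheme so the browser reuses the embedding page's own
    scheme: https pages fetch over https, http pages over http."""
    parts = urlsplit(url)
    return urlunsplit(("",) + tuple(parts[1:]))
```

Note that if the asset is ever embedded in an http page, it will still be fetched insecurely; serving everything over https:// outright avoids that.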


Wildcard certificates.


honest question: why can't banks, when customers open a new account, give them a card with 1. the bank's ip addresses (in each region) and 2. their printed public key (ssl or ssh format)? and why doesn't the bank ask for a public key from each customer? in-person key exchange.

no one even pays attention to the client side of ssl. how many of you use your own ssl certificates? you basically can't under the cert authority scheme. it's a racket and no one is going to pay for these. and do the banks even care? they use tactics like cookies and follow-up emails to verify customers (hardware).

and why does the bank have to be able to switch their ip address without telling anyone? what if the same was true for phone numbers? people would be like wtf? load balancing? c'mon. too difficult to type? think about the trade-offs in security, all for the sake of not looking at a number? ipv4 is no longer than an area code and phone number. just tell people where your servers are and let them choose the one that is nearest. which incidentally, contrary to conventional wisdom, is not _always_ the one that will be the most responsive in the ever-changing state of the network.

there's nothing more annoying than being subjected to using trial and error and you are not allowed to do any of the trial when the errors start coming. out of your control.

what happened to the concept of "important numbers"? are we to believe you only need to remember "google.com" or "yourbank.com"? that's a security problem waiting to happen.

second honest question: why does the bank website need to embed links to third party resources and require that customers enable their browsers to access all these indiscriminately (user doesn't get to choose) and to enable javascript?

is javascript needed for the security of a connection or to accomplish a financial transaction? because that's all i need from the bank website.

i think we're past the point where customers need to be enticed to use the web to do things like banking and shopping. they're going to be forced to. so we can forgo the silly demonstrations and gratuitous use of javascript. save for "show HN".

what we need is simplicity, reliability and security.


Does anyone have any recommendations for search terms I could use to put together a list of news stories / posts about known man-in-the-middle attacks that have occurred?


If includeSubDomains is set for HSTS, does that mean that a cert for https://foo.com/ is required instead of https://www.foo.com/ in order to protect cookies set for foo.com and under?

It's not clear to me from what docs that I have been able to find.
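For context while that's answered: includeSubDomains is just a directive inside the header value, not a property of the certificate. A minimal sketch of building the value (the max-age of one year is illustrative):

```python
def hsts_header(max_age: int = 31536000, include_subdomains: bool = True) -> str:
    """Build the Strict-Transport-Security response header value.
    With includeSubDomains set, the HTTPS-only policy received at
    foo.com also applies to every host under foo.com."""
    value = f"max-age={max_age}"
    if include_subdomains:
        value += "; includeSubDomains"
    return value
```

The browser only honors the header when it arrives over an error-free HTTPS connection, so each hostname that serves it still needs a certificate valid for that hostname.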


Google Cache mirror if anyone needs it: http://webcache.googleusercontent.com/search?q=cache:http%3A...


Any experience with CORS and https?

Does it work properly?

If i have www.mydomain.com with certificate A, and api.mydomain.com with a certificate B, can i make CORS call with javascript?

(i know that if you try it with self signed cert, it will just drop the request)


Yes this works. Just get all the headers correct.
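For concreteness, a sketch of the response headers api.mydomain.com might send so that script on www.mydomain.com can read the response (the origin whitelist is hypothetical; when cookies/credentials are involved you must echo a specific origin rather than '*'):

```python
ALLOWED_ORIGINS = {"https://www.mydomain.com"}  # hypothetical whitelist

def cors_headers(request_origin: str) -> dict[str, str]:
    """Headers the API host would add to a response so a whitelisted
    cross-origin page may read it; unknown origins get nothing."""
    if request_origin not in ALLOWED_ORIGINS:
        return {}
    return {
        "Access-Control-Allow-Origin": request_origin,   # echo, not '*'
        "Access-Control-Allow-Credentials": "true",      # allow cookies
        "Vary": "Origin",                                # cache per origin
    }
```

Both hosts having different certificates is fine; what matters is that each certificate is valid (trusted and matching its hostname), since the browser aborts the request entirely on a cert error, as noted above for self-signed certs.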


I have a rails 3.2 site, and I hadn't thought about this much aside from setting force_ssl to true. Turns out that automatically enables HSTS and secure cookies - cool!


For what it's worth, Libya owned a trusted CA (maybe still does), which means that a MITM would happily work, because they can issue certs for any site from their own authority. I don't personally see how this is more secure than my self-signed certificate, which generates a warning that's very hard to avoid these days (even if I do know that the cert is fine).


Stipulate that it's true that Libya owned a browser-trusted CA, and compare situations:

With signed certificates, Libya can MITM (unpinned) certificate-backed TLS sessions.

With signed certificates, random people cannot MITM (any) certificate-backed TLS sessions.

With self-signed certificates, Libya can MITM any TLS session.

With self-signed certificates, random people can MITM any TLS session.

I'm not seeing the argument you're making here.


I learned a lot about internet security here. Thanks a lot!

