Embracing HTTPS (nytimes.com)
90 points by cpeterso on Nov 14, 2014 | 52 comments



Pushing too hard for HTTPS can lead to HTTPS as security theater. Cloudflare offers this as a service; they call it "Flexible SSL". They get a multi-domain SSL certificate for a huge number of unrelated domains, and let people connect to that. The data is decrypted at Cloudflare, and retransmitted in the clear to the destination site. The user thinks they have security, and they have a little, but less than they think. The multi-domain cert makes some attacks possible. If an attacker can mess with DNS near the client end (perhaps at a public WiFi access point), and can break into any of the sites listed on the cert, they can do an MITM attack. A cert with both an important site and a weakly secured site creates an easy attack target.

Multi-domain certs are used because IPv4 space is full and Windows XP doesn't support Server Name Indication, which allows a unique cert for different domains at the same IP address. So if you want SSL on a shared IP address, and need to support good old IE6 over IPv4, multi-domain certs are necessary. XP still has 19% online market share, as of October 2014. Everything else has supported SNI since 2007 or so.
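
For the curious, you can watch SNI happen from the command line with openssl. (www.example.com here is just a placeholder; note that s_client sends no SNI unless you pass -servername.)

  # No SNI: the server sees only the IP, so it has to guess which cert to present
  openssl s_client -connect www.example.com:443 </dev/null 2>/dev/null \
    | openssl x509 -noout -subject

  # With SNI: -servername names the desired host inside the handshake,
  # so a server hosting many domains on one IP can pick the right cert
  openssl s_client -connect www.example.com:443 -servername www.example.com </dev/null 2>/dev/null \
    | openssl x509 -noout -subject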

I have a paper on this:

http://john-nagle.github.io/certscan/whoamitalkingto04.pdf

This identifies all the major front-end services using shared SSL certificates. Cloudflare has 36,280 second-level domains tied to "*.cloudflare.com". Incapsula, the DDoS protection service, has 1471. Once you're past the top 20 such services, no service has more than about 100 domains on a shared cert. Once IE6 has died off, the CAs can stop issuing certs containing unrelated domains. But not yet.
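
You can check this yourself by dumping the SAN list of any site behind a shared front end (a sketch; substitute a real hostname for www.example.com):

  # Print the Subject Alternative Name list, i.e. every domain
  # sharing this one certificate
  openssl s_client -connect www.example.com:443 -servername www.example.com </dev/null 2>/dev/null \
    | openssl x509 -noout -text \
    | grep -A1 'Subject Alternative Name'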


> The data is decrypted at Cloudflare, and retransmitted in the clear to the destination site.

> The user thinks they have security, and they have a little, but less than they think.

That is true. But if the user's main concern is the leg of the network from their location out of a local untrusted network (such as communal wireless) or country, that is definitely better than nothing.

> XP still has 19% market share

I don't worry about the security of XP+IE users any more. Anyone still there has chosen, against good advice, to remain insecure. Given that other options are available (both browsers that support modern standards on XP, and alternate OSes), it is their choice to face certificate warnings and iffier security when they hit a site that needs SNI.


It's not about the security of XP users; it's about their connectivity.


Connectivity and security need to be considered the same thing on the modern Internet. XP is no longer supported, and that leaves it in a pretty dangerous position. An unpatched computer isn't just a risk to its own user; it becomes a risk to everybody on the net as it tries to spread its latest infection.


You've mistaken me for someone endorsing Windows XP. I'm not; I'm saying that the tradeoff the parent comment proposed is not the real tradeoff sites have to make.


Holy S*. I was about to call BS on you, because I know CloudFlare supports end-to-end TLS... then I read this: https://www.cloudflare.com/ssl:

> SSL can be difficult for website administrators to set up.

> For sites that require more advanced SSL configurations, CloudFlare supports custom certificates from any certificate authority, full end-to-end SSL with robust certificate checking...

Just, wow. TLS is NOT hard to set up for a website administrator. I think I just lost a ton of respect for CloudFlare. Granted, most threats would be between the client and CloudFlare anyway (that is to say, on the wrong side of the tracks), but still....


Cloudflare offers a number of SSL options, but they all involve decrypting the traffic at Cloudflare and possibly re-encrypting it for the link to the final server. Their options: (https://www.cloudflare.com/ssl)

* "Flexible SSL" - encrypted from client to Cloudflare using Cloudflare's cert, unencrypted from Cloudflare to host.

* "Full SSL" - encrypted from client to Cloudflare using Cloudflare's cert, encrypted from Cloudflare to host using a different host self-signed cert.

* "Full SSL (strict)" - encrypted from client to Cloudflare using Cloudflare's cert, encrypted from Cloudflare to host using a different host CA-signed cert.

(http://blog.cloudflare.com/keyless-ssl-the-nitty-gritty-tech...)

* "Keyless SSL" - encrypted from client to Cloudflare using the customer host's cert. Cloudflare doesn't have the customer's cert private key. They contact the customer's host for a session key for each session, and use this to encrypt from the client to Cloudflare. They decrypt at Cloudflare, and re-encrypt for the trip to the customer's host.

With the first three, you can see in the browser that this is happening. The host will be identified as "cloudflare.com" in the certificate. With "Keyless SSL", which is basically MITM with active cooperation from the end host, it looks at the browser end like you're encrypted end to end, but you're not.
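
You don't even need a browser to check; the certificate subject gives it away. (A sketch, with www.example.com standing in for a Cloudflare-fronted site.)

  # With the first three options, the subject comes back as a shared
  # *.cloudflare.com-style cert rather than the site's own name
  openssl s_client -connect www.example.com:443 -servername www.example.com </dev/null 2>/dev/null \
    | openssl x509 -noout -subject -issuer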

One assumes that all of these are being "lawfully intercepted".


I wish that the Full SSL sans-strict option allowed me to upload my self-signed public key to CloudFlare. The problem is that, without this step, the CloudFlare-to-origin connection can itself be MitM'ed.


The "possibly" is the problem. CloudFlare should not be promoting or supporting TLS that is not encrypted all the way to their customers infrastructure IMHO.


Cloudflare can't act as a "cloud WAF" unless they are able to decrypt the SSL sessions.


The issue isn't that they are decrypting the traffic for inspection. That's a concession you just have to make with a trusted third party providing these types of services. The issue is that with their "Flexible SSL" they are then forwarding that unencrypted traffic on through the wilds (the internet) to the customer. Not only does this fly in the face of widely regarded best practices for transferring sensitive information between trusted, controlled networks, but the browser is none the wiser. You could argue that somebody with a certificate could do this anyhow. Yes, this is true. However, CloudFlare, who should (and most likely DOES) know better, is now doing this wholesale as a product offering! This should be frowned upon, and CloudFlare should receive double frowns.


> That's a concession you just have to make with a trusted third party providing these types of services.

That's the problem. If you only use SSL/TLS for security-critical pages where there's a login or a credit card, you don't need some massive cloud-based service to cache your stuff. Many sites still use the "transferring you to our secure site for checkout" approach. That's fine. If you use SSL/TLS for everything, now you have a load problem on the secure infrastructure.

That's why "SSL Everywhere" is security theater. To have "security" on pages that don't need to be secure requires weakening security on the pages that do.


That's why "SSL Everywhere" is security theater. To have "security" on pages that don't need to be secure requires weakening security on the pages that do.

Nah, you can still have separate secure domains with EV certs for taking credit cards. That way your customers' browsing habits and product preferences are protected from snooping by their ISP, the page assets are still cached and DDoS-protected, and the credit card data is still sent to a separate domain without an intermediary.

The only thing still to worry about is your caching provider sending maliciously modified content that bypasses your secure domain.


On the other hand, it's the customer's decision to put Cloudflare in the stack. They have decided to put Cloudflare on the path, just as they would have decided to use nginx. There really is no one else to blame (because the endpoint here is not the customer's server, it's the Cloudflare server), and if you really want to blame someone, blame the customer.

EDIT: just read your answer below, my comment is off-topic. Sorry.


> Just, wow. TLS is NOT hard to set up for a website administrator.

It really is – you might just be living in a bubble of competence!

I've got years of experience as a developer and sysadmin, and I still sometimes struggle to get SSL working correctly on a site. Sure, getting and installing a cert is fine – but the mess of making sure that third-party services don't break, and that redirects aren't messed up, and that other virtual hosts don't break… it can be a time-consuming effort, and probably difficult for someone with less experience.

Cloudflare have been pretty transparent with the service they're offering, so I find it hard to see this as a bad thing.


But how does Flexible SSL help with the third-party services and redirects? Your site still appears as HTTPS, so those issues should still appear, no?


Bubble of competence -- well put!


Setting up SSL is painful if it ain't your daily business. The problem is that it could give users a false sense of security if their mom-and-pop webshop uses Cloudflare SSL, or uses unsecured cookies, or an outdated protocol, etc.

Regarding setting up SSL I'm reminded of this: https://news.ycombinator.com/item?id=8471877


SNI is not supported by the SChannel implementation on Windows XP; it is not an IE6 thing. IE8 on Windows XP doesn't know SNI either :(


It's also not supported by Python's urllib or feedparser (at least not by default), meaning any attempt to connect to a service that requires SNI (CloudFront is one) fails during the handshake.

This is doubly annoying for blogs that require HTTPS and redirect HTTP to it. I have no way to check their RSS feeds.
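
One workaround (a sketch; blog.example is a placeholder): fetch the feed with curl, which speaks SNI fine, then hand feedparser the local copy:

  # curl does the SNI handshake that urllib can't...
  curl -s "https://blog.example/feed.xml" -o feed.xml
  # ...then parse the local file; feedparser.parse() accepts a filename
  python -c "import feedparser; print feedparser.parse('feed.xml').feed.title"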


Read this Python bug report for the painful story of SNI support in Python, 2009 through 2014.

http://bugs.python.org/issue5639


Sigh.


Firefox and Chrome, on the other hand, don't have a problem on Windows XP.


Also, the stock (non-Chrome) browser on Android Gingerbread lacks SNI support.


Right; you have to get to at least Windows Vista to get SNI.


You have to have at least Vista to get SNI in IE. IIRC both Firefox and Chrome support SNI on XP.


> If you run a news site, or any site at all, we’d like to issue a friendly challenge to you. Make a commitment to have your site fully on HTTPS by the end of 2015 and pledge your support with the hashtag #https2015.

This article is funny... written by the CTO of the NYTimes (et al.)... asks news sites to make a commitment to HTTPS... but fails to commit to it for the NYTimes.


The page itself is plain HTTP, too.


Yeah, you don't get to say that when https://open.blogs.nytimes.com/2014/11/13/embracing-https/ isn't even listening on port 443.

Dear Akamai: when are you going to make TLS 1.2 support free? Cloudflare has. :)


> Dear Akamai: when are you going to make [...] free?

Lol.


Did they make any notice of commitment to this for themselves?


They didn't in the article... and I don't see anything from them with the #https2015 hashtag... and none of them posted a commitment for the NYTimes on their feeds. So I'm going to go with no. https://twitter.com/rajivpant https://twitter.com/eitanmk https://twitter.com/nytimes

Doesn't look like she's an NYTimes employee, but her name is on the article... so: https://twitter.com/elenakvochko


For the record, it's worth noting that we're starting from a state that's nothing short of disastrous: https://alexgaynor.net/2014/nov/12/state-of-news-tls/

Let's hope that twelve months from now, we're looking at a very different landscape. Kudos to NYTimes for issuing the challenge. At the very least, this is an important conversation starter.


It's important to note that the benefits of having SSL far outweigh the potential problems. The cost is fairly low for a business to get a certificate for their main domain, and the speed difference is not very noticeable (to the point that many major social networks serve HTTPS on every page). Every website should be developed with the ability to serve HTTPS on every page.


Two questions on this (from a novice-techie POV):

1. HTTPS is slower. From a practicality standpoint, is it that much slower to actually make a difference on the UX side of things?

2. I've implemented HTTPS on one of my sites, but in Chrome it's not fully green; it appears as HTTPS with a broken lock. Any idea what that means and how to fix it?


This is actually quite easy to test.

Create a text file called curl-format.txt:

     time_namelookup:  %{time_namelookup}\n
        time_connect:  %{time_connect}\n
     time_appconnect:  %{time_appconnect}\n
    time_pretransfer:  %{time_pretransfer}\n
       time_redirect:  %{time_redirect}\n
  time_starttransfer:  %{time_starttransfer}\n
                      ----------\n
          time_total:  %{time_total}\n

Now use curl to connect to your website (or any site that supports both HTTP and HTTPS):

  curl -w "@curl-format.txt" -o /dev/null -s "https://mysite.com"

The difference between "time_connect" and "time_appconnect" is the overhead from TLS/SSL, usually in the ~100ms range. If you connect to the same site via HTTP, those two numbers should be identical (or very close).


My personal blog has been exclusively on HTTPS for quite some time now.

- Link: https://www.bionicspirit.com/

- SSL Rating: https://www.ssllabs.com/ssltest/analyze.html?d=bionicspirit....

- Server is in Europe, here's a load test from New York: http://tools.pingdom.com/fpt/200PE/https://www.bionicspirit....

On question (1): it is not that big of a deal, and for your own content SPDY can make a difference. Satellite connections are indeed problematic for HTTPS, though.

BTW - I have insisted on having my personal blog on HTTPS because I noticed that some public networks in hotels and public places are injecting content into websites. And so for me HTTPS is a way of signing my content.


If you just put a static "Hello World" webpage on a server and try to benchmark it with something generic like `ab`, yes, HTTPS appears to be several times slower than HTTP.

If you put a real web application on that server, enable all the bells and whistles (keep-alive, session cache, OCSP stapling, SPDY, etc), and configure your benchmark tool to make use of those features, the performance penalty of HTTPS becomes less than 5%.

And that was a couple of years ago on a relatively low-end VPS. Nowadays, the difference is probably even smaller.
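
If you want to see the keep-alive effect in isolation, something like this works (a sketch; assumes an ab build compiled with SSL support, and mysite.example is a placeholder):

  # Without keep-alive: every request pays a full TLS handshake
  ab -n 500 -c 10 https://mysite.example/

  # With -k: handshakes are amortized across reused connections
  ab -n 500 -c 10 -k https://mysite.example/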


> If you just put a static "Hello World" webpage on a server and try to benchmark it with something generic like `ab`, yes, HTTPS appears to be several times slower than HTTP.

Can you suggest a benchmark tool that can be used to give a more realistic figure than `ab`? I know JMeter can do session caching, but I find its interface baffling and I can't find a pre-made configuration.

I recently compared performance of my home ARM server when serving my blog through HTTP and HTTPS:

https://www.tablix.org/~avian/blog/archives/2014/11/cubietru...


Sorry, it's been a while since I've looked at HTTP/S benchmarking tools, so I can't say which one has the latest & greatest features.

By the way, did you use `ab` with the `-k` option when you ran those benchmarks? Testing HTTPS without keep-alive is utterly meaningless, since every browser aggressively reuses HTTPS connections nowadays.



Re 2: you're loading mixed HTTPS and HTTP content. Open up the devtools console and it will show you what's being loaded over HTTP.
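
If you'd rather check from a shell than the devtools, a crude grep over the page source catches most offenders (a sketch; it misses resources injected by scripts, and plain href links to HTTP pages don't break the lock, only embedded resources do):

  curl -s "https://mysite.example/" \
    | grep -Eo '(src|href)="http://[^"]*"' \
    | sort -u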


Either that, or they have a SHA-1 certificate that expires in 2016 or later - https://www.ssllabs.com/ssltest/ will tell you for sure.


The cut-off isn't 2016-01-01. I have a cert signed with SHA-1 which expires in June 2016 (see https://random.spillett.net/) and Chrome is not currently complaining about that.


TLS (the S in HTTPS) incurs at least one extra round trip. If the site is using crappy gear this can incur even more round trips, particularly when you're accessing the site from Windows, due to TCP delayed ACKs.

Anyway, to get to your question: it can. Normally you wouldn't notice, because you're close-ish to the servers and you have things like keep-alive and TLS session caching (tickets or IDs) to mitigate the issue. But under other circumstances, such as the server being in the US while you are not in the Americas, the difference can be significant, particularly if session caching isn't working (or isn't working properly), and of course on the first request. You can mitigate this with geographically distributed endpoints, but most sites won't bother with this.
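
You can test whether a server's session caching actually works with openssl's -reconnect option, which repeats the connection with the same session (a sketch; mysite.example is a placeholder):

  # 'Reused' lines mean session caching works; all 'New' lines
  # mean every connection pays for a full handshake
  openssl s_client -connect mysite.example:443 -servername mysite.example -reconnect </dev/null 2>/dev/null \
    | grep -E '^(New|Reused)'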


Setting up all those TLS sessions for trackers, one-pixel GIF web bugs, and ad services can add significant overhead. None of the "send it all through one pipe" schemes, such as Google is pushing, help there. Google's solutions work great for Google, whose pages load almost entirely from Googleland, but are not so great for non-walled-garden sites.


Most trackers these days run asynchronously and won't noticeably affect page load time.


"In light of a growing number of cyber security and data privacy concerns, replacing HTTP with its secure alternative, HTTPS, is becoming increasingly important."

s/its secure alternative/a secure alternative/

s/, HTTPS,//

This NYT blog post reads like an advertisement.

If the newspaper is worried about guaranteeing the authenticity of its web content, then why don't they publish their SSL certificate in the print version? For scanning/OCR.

No third-party CA needed.

When connecting to the desired website, I can check for the correct certificate myself, thanks. This is not a perfect solution, but it is better than third-party CAs, or letting third parties embed certificates in browsers where no user ever looks at them. In my opinion.
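
Comparing the certificate out-of-band is easy enough; a printed fingerprint would do (a sketch; www.example.com is a placeholder):

  # Print the SHA-256 fingerprint of the cert the server presents,
  # for comparison against one published out-of-band (e.g. in print)
  openssl s_client -connect www.example.com:443 -servername www.example.com </dev/null 2>/dev/null \
    | openssl x509 -noout -fingerprint -sha256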


To push the ball forward, major link aggregators like reddit could implement HTTPS Everywhere-style code. The front page has numerous examples of linked content served over plaintext HTTP that is also offered over HTTPS.


BTW, nytimes.com is TLS 1.3 intolerant. https://www.ssllabs.com/ssltest/analyze.html?d=nytimes.com&s...


"By the end of 2015".

How about doing it by the end of next quarter, seriously?!


It's so critically important, we'll commit to doing it within 14 months.



