HN now comes with HTTPS? (ycombinator.com)
197 points by mike-cardwell on Aug 21, 2011 | hide | past | favorite | 78 comments



The Firefox addon HTTPS Finder just alerted me to the fact that there was an HTTPS version of the site at https://news.ycombinator.com/ - I tried it out, and it worked. Nice work.

EDIT: Session cookie needs to be set as "secure" and Strict-Transport-Security should be implemented in order to protect against certain attacks. End users can just add this HTTPS-Everywhere ruleset:

https://raw.github.com/mikecardwell/https-everywhere/73241d1...
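On the server side, both fixes are just response headers. A rough sketch in Python (purely illustrative; HN's server obviously isn't Python, and "sessionid" is a made-up cookie name):

```python
from http.cookies import SimpleCookie

# Mark the session cookie "secure" so browsers refuse to send it
# over plain http. ("sessionid" is a made-up name for illustration.)
cookie = SimpleCookie()
cookie["sessionid"] = "abc123"
cookie["sessionid"]["secure"] = True

headers = [
    ("Set-Cookie", cookie["sessionid"].OutputString()),
    # Strict-Transport-Security: browsers that support it will insist
    # on https for this site for the next week (604800 seconds).
    ("Strict-Transport-Security", "max-age=604800"),
]
```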


Thanks for the HTTPS-Everywhere ruleset!

I've been using HTTPS-Everywhere for a reasonably long time now (a couple of years?) but this is actually the first time I've added a rule...so if anyone else falls into that boat: all you need to do is save the YCombinator.xml file (linked above, from Mike's GitHub) into the HTTPSEverywhereUserRules folder in your Firefox profile folder, then restart Firefox.
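For anyone curious what goes in such a file before grabbing Mike's: a user ruleset is a small XML file of roughly this shape (sketched from memory of the format; use the linked YCombinator.xml for the real thing):

```xml
<ruleset name="YCombinator">
  <target host="news.ycombinator.com" />
  <rule from="^http://news\.ycombinator\.com/"
          to="https://news.ycombinator.com/" />
</ruleset>
```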

Hopefully that'll save someone else from having to look it up ;)


Thanks for saving me a minute or two. :)


Can you elaborate please? What is this "HTTPS everywhere" and why do we need it?


A really fantastic Firefox addon that forces HTTPS on certain sites. https://eff.org/https-everywhere/


> and why do we need it?

If you're on an unsecured wireless network (e.g. at a library) and you don't want someone to see your HN password when you log in.



Watch out, this extension does not do what HTTPS-Everywhere does! With KB SSL Enforcer, your browser still hits the HTTP version before being redirected to the HTTPS version of the site.


So, now the HN effect will be less measurable, as traffic from HTTPS to HTTP doesn't pass a referrer.

(This just increases the incentive for sites to use HTTPS Everywhere, so they're not left in the dark as to who is sending them traffic.)


Nifty report of HN's HTTPS: https://www.ssllabs.com/ssldb/analyze.html?d=news.ycombinato... Grade C it seems


These stupid grades kind of drive me nuts, but they really should disable SSL 2.0 and the export ciphers.


SSL Labs' grades are not stupid. They monitor how solid the SSL implementation is and what needs to be corrected to ensure higher security on the site.

Another useful tool is SSLScan, which for Hacker News shows that they accept a very strange set of HTTPS cipher and MAC configurations.

SSLv2 should be turned off, as well as the majority of the 40-bit and 56-bit ciphers. Their unusual preference for CAMELLIA-256-CBC is pretty amusing to me.

  Prefered Server Cipher(s):
    SSLv2  168 bits  DES-CBC3-MD5
    SSLv3  256 bits  DHE-RSA-AES256-SHA
    TLSv1  256 bits  DHE-RSA-AES256-SHA
Whaaaaat?
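Turning that junk off is a couple of lines in the server's TLS setup. A purely illustrative sketch using Python's ssl module (HN's server is obviously not Python, and the exact cipher string is my assumption about what "sane" looks like):

```python
import ssl

# A server-side TLS context. Modern contexts already refuse SSLv2/SSLv3;
# the cipher string is where export-grade and single-DES junk is excluded.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.set_ciphers("HIGH:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5")

# What the server would actually offer after the cuts:
offered = [c["name"] for c in ctx.get_ciphers()]
```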


Agree to disagree on the grades, and agree to agree on SSLv2 and export ciphers.

(I have other complaints about the SSLabs grades, but I'm not getting into it here).


Why not? I understand you do not have infinite time but if you get the chance I would enjoy hearing your thoughts.


>Their unusual preference of CAMELLIA-256-CBC is pretty amusing to me.

Yeah, I noticed it too when I checked this using Firefox and Chrome.


It was Grade C a day ago. Today I see Grade A:

    Certificate ------ 100
    Protocol Support -  85
    Key Exchange -----  80
    Cipher Strength --  90
What's changed in the past 24 hours?


Glad to see this trend. How about using https links in the RSS feeds, so that anyone coming in from their feedreader gets https by default?


This is great. It would be even better if HN would use the Strict-Transport-Security header so that browsers remember to prefer https instead of http for this site.

See http://blog.sidstamm.com/2010/08/http-strict-transport-secur...


If they were taking credit card numbers, STS would be worth the trouble; otherwise, it's not something I'd recommend going out of your way for.

For non-HTTPS-nerds: STS resolves the problem where your first contact with a site is via (insecure) HTTP, and a MITM makes all subsequent contacts lie to you about whether HTTPS is available. Since we've lasted many years without even having HTTPS, I think we can all just look at the address bar carefully instead.


That is not the only problem it solves. You can go to https://news.ycombinator.com/, but if you visit any other non-https website at the same time, somebody can MITM that connection, and inject the following HTML into it:

  <img src="http://news.ycombinator.com/">
That will cause your browser to do a non-https request against news.ycombinator.com (unless STS is implemented), which in this particular case can leak the session cookie, because the session cookie hasn't had the "secure" flag set on it.

I'd say that it is worth going out of your way for. If you're implementing HTTPS, it is only slightly more effort to implement STS as well. And it's definitely worth adding the secure flag to the cookie as well.
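A toy model of the browser behaviour being described (not real browser code, just the decision rule):

```python
def browser_sends_cookie(scheme: str, cookie_is_secure: bool) -> bool:
    # A cookie flagged "secure" is only ever attached to https requests;
    # without the flag it rides along on plain http too.
    return scheme == "https" or not cookie_is_secure

# The injected <img> forces a plain-http fetch of news.ycombinator.com:
leaks_without_flag = browser_sends_cookie("http", cookie_is_secure=False)
leaks_with_flag = browser_sends_cookie("http", cookie_is_secure=True)
```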


Not having the secure flag on a cookie (I didn't look, because I don't care about SSL on HN) is a vulnerability. STS doesn't totally fix that vulnerability.

STS: Nice to have. Most installed browsers don't even support it.

Cookie Secure flag: Must have. Every modern browser supports it to some extent.

You're entitled to your opinion about this stuff, but know that my opinion has been hardened by many many years doing appsec for financial services companies. I would not doc a company for not having STS†, unless I was doing design/best practices review. I would doc anybody for not setting the secure flag on cookies for an HTTPS site.

(ie: if you ask me to doc stuff like STS and Clickjacking)


Thank you for acknowledging my right to an opinion, although I'm not sure why it was needed. I don't think your credentials put you in a better position to comment on this particular matter, but you're still entitled to your opinion anyway.

It takes very little effort to implement STS, and there are significant tangible benefits from doing so. For this reason alone, I would recommend that all HTTPS sites use it. I acknowledge that using the "secure" flag with cookies provides a bigger benefit though.


Saying "you have a right to your opinion" sounds better than "I'm pretty sure you're just totally wrong about this", doesn't it? :)

HN doesn't have a framebuster either, but you didn't call them out on that. The security nerds we work with are far more likely to call us out for not hyperventilating about clickjacking than about HSTS, which, again, is not widely supported in the field to begin with.

There are actual apparent problems with HN's HTTPS; it will for instance happily do SSLv2 with 40 bit RC4. Let's advocate for those fixes first.


It's more passive-aggressive, certainly. The SSL config does need tuning, and frame busters need adding. That doesn't make STS any less important.

No modern browser is going to choose such a weak cipher, but because SSLv2 is enabled, a MITM can force it. The same MITM who can abuse the lack of STS.


The equivalence you're drawing between what a MITM can do with STS and what a MITM can do with SSLv2 is an objectively false one.

I think you've gone on tilt on this issue, so, feel free to take the last word.


I'd prefer if you could educate me on how I'm wrong? I was referring to the ability of a MITM to attack the initial negotiation with a downgrade attack on SSLv2. Modern browsers aren't susceptible to this unless I'm mistaken?

All modern browsers are susceptible to the other MITM attack I described though. Unless the website uses STS.

EDIT: It's worth noting that anybody using IE7+, FF2+, Opera, Chrome or Safari isn't affected by the weak ciphers, or by the existence of SSLv2, as their browsers will not negotiate a weak SSL connection. They are all affected by the lack of STS though.


No, because IE7+, FF2+, Opera, and Safari don't support HSTS. New Firefox does, and Chrome does.


Good catch. Although, when comparing an issue that affects no modern browser against an issue which affects all modern browsers, the issue which affects all modern browsers is perhaps a little more important.

And when there's a solution that is trivial to implement, and can fix the issue for two existing major modern browsers (probably more to come), it might not be a completely crazy idea to go ahead and implement it.

P.S. Thank you for graciously gifting me the final word


Funny. The 'trouble' is not more than adding a single header:

Strict-Transport-Security: max-age=604800

And that is it. Your browser now prefers to use https for your web site. That was really not that much trouble, was it?

Also, https is not just for banking. I like it when people cannot see what I am doing online. Whether that is online banking, doing a google search or writing a posting on hackernews.


It's worth noting that, if you can, you should include the "includeSubDomains" flag with STS:

  Strict-Transport-Security: max-age=604800; includeSubDomains
The reason for this is, somebody controlling the network could create a fake DNS A record for "foobar.news.ycombinator.com", and then stick this into the html of another unrelated non-https page that you go to:

  <img src="http://foobar.news.ycombinator.com/">
Which may then leak the news.ycombinator.com cookie over http, as the STS policy would only have applied to news.ycombinator.com and not foobar.news.ycombinator.com.
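The scoping rule being described, as a toy sketch (not the actual spec algorithm):

```python
def sts_policy_covers(request_host: str, sts_host: str,
                      include_subdomains: bool) -> bool:
    # The exact host that sent the header is always covered; subdomains
    # are covered only when the policy carried includeSubDomains.
    if request_host == sts_host:
        return True
    return include_subdomains and request_host.endswith("." + sts_host)

# Without includeSubDomains, the attacker's fake subdomain slips through:
plain = sts_policy_covers("foobar.news.ycombinator.com",
                          "news.ycombinator.com", False)
with_flag = sts_policy_covers("foobar.news.ycombinator.com",
                              "news.ycombinator.com", True)
```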


Thank you, RTM and PG!!


Nice addition to HN. We also updated Chrome's Readable HN to work with HTTPS: https://chrome.google.com/webstore/detail/jpnbjaechgbbpokepg... And if you would like to contribute: https://github.com/jorde/readable-hn


For some reason it's now impossible for me to connect over normal http:

    manila:pieter$ curl -I http://news.ycombinator.com
    curl: (52) Empty reply from server
    manila:pieter$ curl -I https://news.ycombinator.com
    HTTP/1.1 200 OK
    Date: Sun, 21 Aug 2011 15:09:27 GMT
    Content-Type: text/html; charset=utf-8
    Cache-Control: private


When a single IP address can support multiple virtual hosts, probing port 443 for an SSL version of a domain is naive and may trigger an intrusion detection system (IDS). Sure, you may get a response (and even the same content), but that's no guarantee that the domain you used is configured for SSL on the server. An IDS may flag the request as a potentially malicious probe and temporarily block the IP address from accessing some resources. This can be extremely useful for blocking exploit attempts by drive-by bots that request by IP address alone. Of course, it would be strange to continue to allow access to the SSL resource as in your example, but maybe there's a reason for it (server admin has different goals than site admin, for example). Just one possible explanation. You might regain HTTP access if you stop hitting HTTPS for a while.

Edit: The CN in the cert is news.ycombinator.com, so it's unlikely my explanation is the cause (but still could be).


Is it "naive", or is it totally harmless? I've been using HTTPS Finder, which probes for HTTPS versions of websites, for quite a while now, without any noticeable negative effects. I'm sure sites exist out there with the IDS configuration that you describe, but none of the ones I've visited have it. As for the positives, I can't remember how many HTTPS versions of sites it's alerted me to, but it's a lot.


It's naive ("let's poke it with a stick and see what happens"), but should be mostly harmless with a properly configured IDS. You don't want to permanently blacklist legitimate users, just discourage malicious ones.

Unfortunately, there's no guarantee that an HTTPS site is identical to the corresponding HTTP site. In most web servers, each one has its own configuration, so it's best to visit SSL sites only when you've been explicitly directed to do so.


I only started using HTTPS after HTTP stopped working, so that's probably not it.


As a side-effect HN has upgraded from HTTP 1.0 to HTTP 1.1.


This is great, thanks PG! Only problem for me is that the theme I'm running will no longer work, but that's okay; I'm sure they'll update.


Not sure what kind of theme you're using, but it broke the Greasemonkey script Hacker News OnePage for me. If you're using something like Greasemonkey or Stylish, you might have to go into the script and tell it to work on https, since the sites are whitelisted with only http. That should do it.


Is it possible for HN to use Google's SPDY protocol for better performance?


I'd guess that for HN, the protocol is not the bottleneck.


Is this even supported by Chrome yet?


Yes. Many of Google's own sites like Gmail and Google Maps use SPDY in Chrome.


Why do people want https on Hacker News?


IMHO any site that has a login form should use HTTPS.


Yep. For further explanation:

http://codebutler.com/firesheep


A lot of people on HN will use the same login information they use on other sites. Some even have their email in their profiles.

If someone logs in on a wifi hotspot, their password is easily visible to others. And voilà, suddenly you have access to someone's email, facebook account, etc.


Seems like overkill to me.


I was at a conference recently and saw somebody start Firesheep -- HN profiles showed up. So the choices were: use HN on open wifi but risk having your session stolen, don't use HN, or connect to a VPN. The VPN is probably the smartest thing to do on an open wifi network, but it's nice to know that HN is now safe to use without it.


If you're going to a conference and you're using wifi there, please use a VPN. If you're going to a coffee shop and using wifi, if at all possible use a VPN.

By not using a VPN you're leaving yourself open to all manner of attacks (take for example the recent iOS SSL Cert validation issues as an example, but also consider the fact that an attacker could inject malicious content to attack your system over any HTTP traffic - or invalid HTTPS traffic if you make the connection).

At the very least you should use an SSH tunnel or some form of transport to protect all of your outbound traffic when in a hostile environment.


I saw Firesheep in active use at the TechCrunch Disrupt NYC hackathon earlier this year. Yikes. No solution is perfect, but VPNs seem like the way to go for now.

[Shameless Plug] I quit my day job earlier this year and started building https://www.getcloak.com/ with two friends. We're caffeine addicts who dog-food our product; that's why we work out of coffee shops here in Seattle. (You can find our office by following @seafreemob on Twitter. ;-)


Thanks for the plug, finally a VPN solution that seems good (i.e. automatic, good pricing model, etc.). That said, please send an invite (my email starts jtl...)


Sent! Cheers.


Ok, why shouldn't we use HTTPS? To me it seems silly that the vast majority of traffic isn't encrypted at all.


It hides the referrer, which is unfortunate. Maybe an option to use HTTPS only for login, register, etc.?


Many people, myself included, would see that as a good thing. Not an "unfortunate" thing. Limiting HTTPS to logins still allows session hijacking. Session hijacking is a serious problem.


Well, HTTPS adds latency due to its handshake, and of course it uses more CPU, especially on the server (unless a dedicated SSL offloader is used).


Rules of thumb for 2011:

• If you are generating your content in a dynamic creation system, the encryption overhead of SSL is not going to matter.

• SSL initial session latency is highly variable based on the packet round trip time of the user. Some people will be irritated, others can't tell the difference.†

† I suppose there is room in the world for a CDN-like entity that places anycasted SSL entry points in strategic locations (or topology sensitive DNS lookups), then uses a zero-turnaround at startup encryption protocol back to the "real" servers. (Say, HTTP over a VPN or something more clever, after all you are the client and the server, life is easy). You'd even beat straight HTTP since by keeping alive the HTTP link to the real servers you'd save the TCP SYN turnaround. 👍

‡ Unrelated: The Unicode committee has lost its mind. OS X Lion users will be seeing a flesh colored thumbs up symbol at the end of the previous paragraph. I suppose now that PILE_OF_POO and NAIL_POLISH are taken care of, the committee will add everyone's avatars from all systems.




> OS X Lion users will be seeing a flesh colored thumbs up symbol at the end of the previous paragraph

All I see is an EOT symbol.


And yet on the flipside, it should prevent interception of login credentials or page-content filtering.


> it should prevent interception of login credentials

That was already possible using HN's OpenID support and a secure provider.


OpenID doesn't prevent session hijacking though, does it? It would just prevent the credentials from being stolen.


> That was already possible using HN's OpenID support and a secure provider.

Not actually sure what your point is here.


Using OpenID (with a secure provider) to login to HN prevents the stealing of login credentials and was possible even without HN supporting HTTPS. I don't see what's strange about this.

By the way, I'm not claiming that enabling HTTPS is wrong, I'm giving potential disadvantages. It's the website admins' job to weigh those against the advantages and decide whether it's the right thing to do or not.


OpenID may prevent stealing login credentials, but it doesn't prevent stealing the cookie which identifies your session.


Also, if you can hijack the http, you can link to a phishing site which prompts users for their OpenID provider (typically their google account) credentials. Maybe they would notice that the domain name of the phishing site (say googleopenid.com, which is available) is fishy ...

https gives you more than just encryption.


I meant I wasn't sure how the fact that there was already a way to do it related to the debate. This protects people who just use a bog-standard HN account, and further secures all users when actually on the site regardless of login method.


On the server side, it is YCombinator's decision whether they can handle the additional CPU cycles. I believe they can, or else they wouldn't have enabled it.

On the client side, I don't think you can really feel the latency. It is measurable, of course, but I don't think you can feel it.


>On the client side, I don't think you can really feel the latency. It is measurable, of course, but I don't think you can feel it.

That doesn't seem to be the general opinion here: http://news.ycombinator.com/item?id=2565694


And high(er) latency on HN would be a problem because ... ? It's such a highly interactive website? There are billions of high quality images to be loaded? Hundreds of ajax calls going back and forth every second?

Right.


I can't tell the difference... HN is pretty damned slow already as it is.


You can't feel the latency?

It adds half a second or more to initial page load times. For doing any sort of marketing where potential customers are arriving at your landing page, that's huge.


Not necessarily so noticeable. I've been running my personal site with https and the effect is fairly minor on the whole in terms of load times. Not zero, but low enough that your average user on the other side of the Atlantic wouldn't notice much of a difference from normal.

Admittedly this is a poor comparison vs a huge site, but then you'll probably find the server processing time is your greater foe.


I was referring to my own feeling when loading HN, a (in comparison) very simple website.

I am sorry if it sounded like I was talking about any website in general. That is not the case.


For some sites, where security particularly matters, using https by default on your landing page seems like good marketing.



