FFS SSL (wingolog.org)
447 points by jashkenas on Oct 17, 2014 | 152 comments



The state of things really does suck, and I'm trying to do something about it.

SSLMate [https://sslmate.com] sells SSL certs from the command line for $15.95/year. The sslmate command line tool takes care of properly generating the key and CSR, and properly assembling the certificate bundle containing the chain certificate. Certificates secure both example.com and www.example.com. No more hard-to-use websites or obscure openssl commands. I'm working on a config file generator too, so you'll be able to specify your server software and it will output a secure config for you.
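
Roughly, a purchase looks like this (a sketch from memory - the exact subcommand and prompts may differ):

  $ sslmate buy example.com
  # generates example.com.key and a CSR locally, submits the CSR,
  # walks you through domain validation, then writes:
  #   example.com.crt          the certificate
  #   example.com.chain.crt    the intermediate
  #   example.com.chained.crt  cert + intermediate, ready for nginx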

And since the author mentioned out-of-process SSL termination, I'm also the author of titus [https://www.opsmate.com/titus/], an SSL terminator that is so paranoid it stores the private key in an isolated process (which would have been impervious to Heartbleed). It also solves the original IP address problem by using Linux's transparent proxy support - your web server sees the client's actual IP address even though the connection was proxied through titus.


Shut up and take my money. I had to jump through so many hoops when I set up SSL about a year ago that I almost gave up. I have your site bookmarked, and next time I need to set up SSL I will be using your service. It looks really easy and how I would expect SSL certs to work.


Why oh why do you charge almost ten times as much for wildcard certs? A wildcard cert is technically identical to a non-wildcard cert - only a single parameter is different.

Is it because you resell?


Yes, we do resell and that's just how certificate authorities price things.

If you have fewer than 10 hostnames to secure, definitely go with individual certs and use SNI instead of a wildcard cert. Support for SNI is quite widespread now, and SSLMate makes managing lots of certs easy.


then you are dealing with the wrong resellers! I can buy a wildcard SSL for $64 RETAIL

https://www.gogetssl.com/wildcard-ssl-certificates/

(these guys also have a reseller program but I don't know what that price is)


Yeah, it sounds like they're in a direct relationship with the CA which only works when you have $10k+ to front for reasonable rates.


Thank you, I will check your rates next time I need to renew my certs for sure!


StartSSL will give you unlimited certs, including wildcard certs for $60/year.


  Do you sell Extended Validation (EV) certs?
  
  No. The long and complicated approval process for EV certs 
  does not work well with the SSLMate model, which emphasizes 
  quick and simple SSL purchases from the command line.
Planning on changing your mind anytime soon? I'd like to switch to you, if just for automatic renewal, but the boss wants an EV cert.

  Does SSLMate sell SHA-2 certificates?
  
  Not yet, as our upstream certificate authority has 
  incomplete support for SHA-2 certificates.

  We expect to add full support for SHA-2 certificates in Q4 2014.
:/


Yes, actually, I've been contemplating offering EV certs, and have some ideas for making it work well with the SSLMate model. I do understand the boss factor. Do you mind if I shoot you an email at the address listed on your website?

And I know it's lame about the SHA-2 certs, but my hands are tied for now. We're not selling any SHA-1 certs that expire after 2015 so we're not running afoul of the deprecation schedule set by Google. (And if you do buy an SSLMate cert, we can reissue it as a SHA-2 cert but it's a manual process.)


Sure, go for it.


Honestly this sounds absolutely awesome. If it packages it all up into the appropriate types of files for nginx and/or haproxy all the better.

Any chance of adding wildcard support? We need that.


We have wildcard support. Just specify a hostname like '*.example.com' when buying a cert :-)


Amazing. What about the proper file creation? (That's not that hard with a couple of calls to "cat", but getting the order right is tricky for people who haven't done it before.)


Yup, we create three cert files for every purchase:

www.example.com.crt - the certificate by itself

www.example.com.chain.crt - the intermediate certificate

www.example.com.chained.crt - concatenation of the certificate and the intermediate certificate

Most software, including nginx, takes the .chained.crt file. Apache takes the .crt and the .chain.crt file as separate config options.
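
If you ever need to assemble the bundle by hand, the order matters - server cert first, then the intermediate - and the config lines look roughly like this (paths are just examples):

  $ cat www.example.com.crt www.example.com.chain.crt > www.example.com.chained.crt

  # nginx
  ssl_certificate      /etc/ssl/www.example.com.chained.crt;
  ssl_certificate_key  /etc/ssl/private/www.example.com.key;

  # Apache
  SSLCertificateFile       /etc/ssl/www.example.com.crt
  SSLCertificateChainFile  /etc/ssl/www.example.com.chain.crt
  SSLCertificateKeyFile    /etc/ssl/private/www.example.com.key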


I hope we see more key isolation going on in the near future even where titus is overkill. Process-per-connection tends to cross the cost-benefit threshold for high-traffic/low-to-medium importance stuff, but key compromise is a giant pain in the ass no matter what you're doing.

On the other hand, if you want to go further into tinfoil-hat territory, find a way to isolate each process in a separate container. :) (Bonus points: Tear down/rebuild the container for each new connection.)


There's definitely overhead to the process-per-connection approach, but I disagree that protecting the key is more important than protecting the state of the connection. A private key is just a means to the end of securing a connection, and what terrified me most during Heartbleed wasn't the private key compromises, but the fact that it exposed private user data like passwords:

https://twitter.com/markloman/status/453502888447586304/phot...

(there was probably other sensitive content like emails in that dump too)

This kind of information is far more useful to attackers, since it can be used immediately and independently, whereas a compromised private key is only useful in conjunction with an active MitM attack (if forward secrecy is used) or a passive eavesdropping attack (if forward secrecy is not used).

As for containers, titus is already using a lot of container-like techniques, such as filesystem and network isolation, and I plan to use PID namespaces in a future version. So it's probably already deep in tinfoil-hat territory ;-)


I'd be willing to take the hit for the login process itself, but for most of the other stuff I care about, the biggest concern is MitM -- users need to trust that the (mostly non-personal, non-confidential) data they're receiving actually comes from me.


Good point about serving non-personal, non-confidential data - in this case, there's no benefit to the process-per-connection model, but there is still a benefit to isolating the private key. Unsure how you could easily separate out the login process from other pages on the site. I think you'd need to use separate hostnames/IP addresses and an SSO-like system.


Sure, but that's already a requirement for a lot of applications, and a lot more can be handled just by setting '.example.com' cookies if they don't have any untrusted subdomains.

Late edit: Ultimately this all comes down to a more general desire for better tools for segregating data and APIs into appropriate security domains. It's still way too much work, especially for small teams, to separate things into appropriate security domains with appropriate tradeoffs.


SSLMate solves a huge problem for me, specifically `download --all`. I've been trying to figure out a good workflow for building a docker-based deployment system but I kept getting stuck on how to manage certificates. I'm going to start using this the next time I have to generate a cert, for sure.


Why do SSL cert vendors sell the same things at different prices? For example, I've used these guys before https://getssl.me/ and they sell one at 9.95.


If you sell a premium service you can charge a premium. The command line feature is worth $5 per certificate for his customers.


I fear SSL isn't going to see wide adoption by normal people until two things are done:

* It's easier to set up. Having personally been through the situation in the article a time or two in my life, it sucks, and there's no reason for it to suck.

* The prices stop being extortionate. 10x price differential for a literal 1 bit change in the final product to make a wildcard cert? Fuck you! Everyone mentions StartSSL. Sure, the basic certificate is free.. if you don't miskey your domain name.. or you don't select the wrong options.. or OpenSSL doesn't get owned, in which case you get to pay $15 for the privilege of their server spending a few milliseconds of CPU time to spit a few kilobytes of data back at you that represents the thing you already had.

PKI as it exists today is a fucking scam. It's a scam because it's overpriced, it's a scam because it's exploitative, and it's a scam because it's incredibly easy to do things that render the whole exercise pointless.


Check out CloudFlare, they've been offering free one-click SSL for a little while now


Only if you also agree to give up control over your incoming traffic and DNS to them. No thanks.


You "give up control" of your DNS to whoever runs your nameservers.


Unless I missed it, the article does not say "use https://www.ssllabs.com/ssltest/ to test your setup", which I think is very useful advice.

Also, you might consider configuring your server to only support cipher suites that will be supported by TLS 1.3 [0][1] and let old clients get errors. If you need security, don't use the obsolete stuff the experts don't trust anymore.

0 - https://tlswg.github.io/tls13-spec/#rfc.section.1.2

1 - This means use only (ECDHE|DHE)-(RSA|ECDSA)-AES128-GCM-SHA256 and (ECDHE|DHE)-(RSA|ECDSA)-AES256-GCM-SHA384. No key exchanges that don't provide forward secrecy, no RC4, no CBC mode, no MD5, no SHA1.
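
To see which of those suites your OpenSSL build actually supports, and what the policy looks like in an nginx config, something like this works (a sketch - adjust the list to your needs):

  $ openssl ciphers -v 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384'

  # nginx
  ssl_protocols TLSv1.2;
  ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
  ssl_prefer_server_ciphers on;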


You can add (ECDHE|DHE)-(RSA|ECDSA)-CHACHA20-POLY1305 to that too (pretty soon).


This raises an issue that has been troubling me recently. There has been an explosion in guides that tell you how to secure your TLS installation, and virtually all of them hard code a cipher list, which I fear won't get refreshed as better ciphers come out. I've also seen this with recent instructions to disable SSLv3, which whitelist TLSv1, TLSv1.1, and TLSv1.2, instead of just blacklisting SSLv2 and SSLv3.

To avoid a situation where future security improvements are held back by crufty configuration that was added in a well-intended effort to improve present-day security, I think we should be encouraging blacklists instead; the OpenSSL cipher list spec actually supports blacklisting.
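
For example, something like this expresses the policy as exclusions instead of a frozen whitelist (a sketch):

  # start from the library's defaults and subtract what we no longer trust
  $ openssl ciphers -v 'DEFAULT:!EXPORT:!LOW:!RC4:!MD5:!aNULL:!eNULL:!3DES'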


I have a weekly cron that checks my nginx cipher list against Cloudflare's[1]. They're pretty much always first to deploy on new ciphers and they know what they're doing. And it, for instance, picked up on and reminded me to change all ~14 of my SSL configs in response to POODLE

[1]: https://github.com/cloudflare/sslconfig/blob/master/conf
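
Roughly like this (a simplified sketch - the raw URL is guessed from the repo path above, and the local reference path is just an example):

  #!/bin/sh
  # /etc/cron.weekly/check-ciphers
  url=https://raw.githubusercontent.com/cloudflare/sslconfig/master/conf
  curl -fsS "$url" -o /tmp/cloudflare-sslconfig || exit 1
  # diff exits non-zero when the recommended config has changed
  diff -u /etc/nginx/cloudflare-sslconfig.ref /tmp/cloudflare-sslconfig \
    || echo "CloudFlare's recommended TLS config changed - review ssl_ciphers" \
       | mail -s "cipher config check" you@example.com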


That's a very nice idea.

Of course, you sound like exactly the kind of sysadmin I'm not concerned about ;-)


Sadly, I've come to the conclusion that it's no longer possible to build future-proof cipher suite configuration in a generic way. There are simply too many rules to follow, if you want to get everything right. I spent lots of time trying and in the end gave up.

Now I give my recommendations as an ordered list of suites. It's easy to set up, does exactly what you want and, as a bonus, everyone can look at the list and understand which suites exactly are configured.

That said, I'd like to see good default configurations in libraries and server programs, which can be updated via patches as needed. Then we wouldn't really need to bother with cipher suite configuration at all.


When the standard adds the feature that enables using AES GCM when both sides have an implementation that is fast and resistant to timing attacks and falls back to chacha20-poly1305 when they don't [0], and the chacha20-poly1305 cipher suite itself is standardized, then we should all add that.

0 - https://www.imperialviolet.org/2014/02/27/tlssymmetriccrypto...


If you don't care about OCSP stapling, you can use BoringSSL and get that cipher today!

ex: https://time.ian.sh


And LibreSSL also has support for ChaCha20.


  tells you how to use StartSSL, which generates the key in your browser. 
  Whoops, your private key is now known to another server on this internet
Pardon my ignorance, but I thought the in-browser certificate creation process avoids sending the private key.

https://developer.mozilla.org/en-US/docs/Web/HTML/Element/ke...

http://www.jroller.com/whoami/entry/browser_generated_certif... (2006)


Just wanted to mention that you can alternatively upload your own locally generated CSR to StartSSL instead of using the website to generate a key pair for you.


In this case you are trusting that the JavaScript running in the browser does not leak the key it generated. It is safer to just not let the browser ever see the private key.


keypair generation is done by the browser using the <keygen /> element, not by custom JS.


Unless malicious JS is served in place of the <keygen> element, which would be largely undetectable to the user.


How would said malicious JS then install the generated phony certificate in your browser's cert store?


I'm talking about the step where StartSSL generates an SSL private key for you in the browser (unless you know to click "Skip"). No need to install anything in a browser store, it just brings you to a page with a generated SSL certificate using that key.


<keygen>'s really intended for client certificates used to authenticate to websites. I think that (for example) StartSSL use it to generate the private key you use to log into their site, but any private keys you create on their website for things like websites are generated on their server.


Interesting tag! I hadn't heard of that.

But even so, unless you actually inspect the live DOM and ensure it's really using that element for your session, and inspect enough of the rest of the code to ensure it's not some misdirection, you can't really trust it.


It's an old deprecated Netscape tag. In Netscape/Firefox-derived code, it's superseded by the 'crypto' javascript object.


The Netscape specific crypto JS is also gone now: https://bugzilla.mozilla.org/show_bug.cgi?id=1030963 replaced by the Web Crypto API.


Thanks, I didn't know that. But even if the spec says javascript on the page should not be able to extract the private key and XHR it somewhere, it's just a bug away.


It does. But what if StartSSL is compromised (or your connection is MITM'ed) and you're served a page that contains malicious javascript instead of the intended keygen tag? The only way to be sure would be to check the HTML...in addition to the rest of the learning curve of implementing SSL.


if StartSSL is compromised (or if StartSSL doesn't use SSL, allowing for MITM), then I don't imagine you'd have much luck regardless

no matter what practices StartSSL uses, once your connection to them is compromised or once they are compromised, the attacker can change what practices StartSSL suggests to its users.


> then the attacker can change what practices startssl suggests to its users.

Or just issue a cert on its own accord...


by that logic, do you read the source of your key generation binary and then compile it yourself?

or at least check the md5 on your distro's page before running it? Oh, and make sure md5sum is not altered either.


You should never send a private key. Private keys are private. That is a poorly written tutorial - written by someone who may not understand PKI.

You generate the CSR and send that.
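
For example (a sketch - swap in your own names):

  # the key never leaves your machine; only the CSR goes to the CA
  $ openssl genrsa -out example.com.key 2048
  $ openssl req -new -key example.com.key -out example.com.csr \
      -subj "/C=US/ST=California/L=San Francisco/O=Example Inc/CN=example.com"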


You are correct. It generates a private key and then sends a SPKAC which is essentially a different format of CSR.


With the exception of SMIME certificates, StartSSL generates the key on their server.


Not true at all. StartSSL can generate the private key for a SSL/TLS cert on the server, but you can skip that step and upload your locally-generated CSR.


Yeah, but when you choose to generate it with them that is how it is done.


The process of adding SSL to a website could be 100% automated.

1.) Start the webserver

2.) Webserver: Oh look, I have a virtual host for example.com but no valid SSL certificate in order to serve https.

3.) Webserver: Generates a key+cert. Calls out to third party trusted SSL provider and says: "I control the website for example.com, please sign the following cert"

4.) SSL provider connects back to example.com to validate that the request for a cert is authorised, then signs the cert and gives it back.

This could work today, if a trusted third party SSL provider created such an API and Nginx/Apache were updated to talk to it.

Imagine if next time everyone did an "apt-get dist-upgrade" or a "yum update", their web servers suddenly started providing an HTTPS version of their site.

[edit] Step 4 is the equivalent of the way domain verified SSL already works today, except over HTTP rather than Email.
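
In shell terms, steps 3-4 might look something like this (purely hypothetical - the provider endpoint and validation path are made up, since no such API exists today):

  # step 3: the webserver generates a key and CSR for the new vhost
  openssl req -new -newkey rsa:2048 -nodes \
      -keyout example.com.key -out example.com.csr -subj "/CN=example.com"

  # step 3: submit the CSR to the (hypothetical) trusted provider
  curl -X POST --data-binary @example.com.csr https://ssl-provider.example/v1/sign

  # step 4: the provider fetches http://example.com/some-validation-token to
  # confirm control of the domain, then returns the signed certificate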


  user@host:~$ apt-get install apache2
  
  [...]
  
  Please enter your credit card information to submit your SSL cert request.
A company could probably set up an API to allow people to submit/validate CSR requests via pre-validated profile, but at install time? Nah. There's a lot of configuration that needs to happen after you install a web server anyway.

And anyway, most websites don't need SSL. Anyone who tells you different is probably wearing an aluminum foil fashion accessory.


> And anyway, most websites don't need SSL. Anyone who tells you different is probably wearing an aluminum foil fashion accessory.

Encrypting the transport layer ensures that the marketing dribble your execs have written is exactly what your end users see (even in the airport or a firesheep-infested coffee shop). And if you have any session cookies or collect email addresses anywhere on your site, that personal information shouldn't be a free lunch for Google and the NSA.

HTTPS is always a good idea, foil fashion optional ;-)


> And anyway, most websites don't need SSL. Anyone who tells you different is probably wearing an aluminum foil fashion accessory.

How about the fact that any HTTP page can be injected with arbitrary javascript by a MitM? Are you that confident about your browser's impermeability? Do you always use a VPN in public hotspots? etc.

Anything that's worth viewing, on an often attacked platform (browsers), is worth viewing securely.

edit: s/link/page/


MITM is still pretty trivial to perform from hotspots, even for supposedly-HTTPS websites. There are a dozen ways I can inject traffic into your browser, whether on the initial connection to the site you want, or in one of the many non-HTTPS connections from 3rd party content loaded into practically every website on the internet. This isn't even taking into account the vast number of attacks on HTTPS clients and protocols.

Second, nobody is trying to inject traffic in your browser on a hotspot. Nobody. Nobody cares about your connection. There is no secret cabal of hackers sitting at every airport and starbucks waiting to steal your Facebook login. They don't give a shit. You are the tiniest small fry, and they have much easier ways of committing cybercrime that pay out much better and provide them better intel.

And yes, if I want to make sure i'm secure, I use a VPN. I assume all public browsing sessions are hijackable.


I never said "install time". I said when the server starts. Obviously each time a new vhost is added it would need to repeat the process.


I'm not sure that this is a 100% correct proposal, but it definitely seems like the right direction. To the author's point -- setting up transport encryption should be painless, not the awful experience it is today.


> Now you're presented with a bunch of pointless-looking questions like your country code and your "organization". Seems pointless, right? Well now I have to live with this confidence-inspiring dialog, because I left off the organization...

That's not right. Even if you supplied all those details in your CSR, you would not have solved this problem.

You bought the cheapest kind of certificate. Your "Organizational Unit" says "Domain Control Validated - RapidSSL". That's all it's ever going to say, no matter what you put in your CSR, because the CA did the bare minimum to test that you control the domain.

If you want a certificate that says anything more specific than that, you have to pay more money and provide more proof to the CA.


What flabbergasts me is how much of this is what the gaming world would call "push this button not to die". That is, it's a question with a right answer and a wrong answer and no real other options or any reason you'd ever want to choose the "wrong" option. Intelligent defaults are supposed to take care of that sort of thing.


Your defaults would change every few months.

BEAST attack comes out... Use RC4.

Not long after.. RC4 is weak use GCM.

GCM isn't supported on older versions of almost everything.

Upgrade servers to 2012R2, latest versions of Linux, throw away your old phones, check out your legacy devices...

Security isn't a question with a right or wrong answer, it's a question with the best answer for the available information and the clients you have at the time. And those answers are changing daily with all the research being conducted.


All the more reason for someone who's on top of things to fix the defaults whenever they change.


While that would be nice, it doesn't fix the problem. The problem is:

If you are administering SSL enabled sites, you MUST keep track of latest security practices

Why is that? If you add a new server with different defaults, do you a) update all the other servers to the new defaults, or b) set the new server to your existing configuration.

Most server applications don't change defaults between minor versions because it leads to even worse problems. Such as users not updating because their application breaks and keeping old bugs alive.


Combining the recommendation to use only modern cipher modes with no known vulnerabilities with HSTS can be lethal.

Older clients won't even know you're there. No first page, no contact info, nothing. Just a hang on page load. That might not always be desirable.

It might be a good idea to have two security levels if you're doing HSTS. One to accept every cipher under the sun to use for public pages and data, and a secure one to use to protect session cookies for logged in users.


It is rarely that clean-cut which is why these are not defaults in server configs.

If you're hosting an API that has older / embedded clients you may HAVE to support SSLv3 during a migration plan...etc


It's even worse than that: even if all of the above is actually correct and done perfectly, you may still not get the "green", depending on the browser, because the browser makers are moving to showing domain-validated (via email) certificates as gray and only EV (extended validation) certs as green.

Things to note: EV certs are much more expensive. You cannot get a cert that is both EV and wildcard - single domains only.

Screenshots of how the various browsers show Domain vs EV certs at:

https://www.expeditedssl.com/pages/visual-security-browser-s...


> So if you're Google, you friggin add your name to a static list in the browser.

It's worth noting that the Chromium project accepts URLs for inclusion into the bundled list of sites using HSTS. (This is shared with Chrome, Firefox, and Safari.)

https://hstspreload.appspot.com/


A few comments:

1) StartSSL doesn't generate it in the browser - it's generated on their server (on a secure piece of hardware IIRC) - which depending on your viewpoint is good/bad.

2) No CA will allow you to get a 1024 bit cert (you're correct)

3) You should be sending the GeoTrust Global CA cert in the chain, because it isn't trusted everywhere, and if it's not sent you'll get errors before you ever get to SNI/SHA1/SSL3 errors...

4) The "(unknown)" will always occur on FireFox, unless you have an EV certificate (even OV does not show this)


> StartSSL doesn't generate it in the browser (on a secure piece of hardware IIRC),

You can also submit a CSR so that key generation happens on your local machine. Only the pubkey and metadata will be included in the CSR and get signed.


It's not that the key is generated in Javascript; it's more that the key is generated (and therefore known) by someone who is not you.

Maybe you trust StartSSL with your private key, maybe you don't, but in either case not giving them your private key is preferable.


It's not... your local web browser generates it via the `<keygen>` element. The private key never leaves your web browser and is not known to StartSSL.


Just to note that Internet Explorer does this slightly differently, using VBScript to call the local crypto API as it doesn't support <keygen>. It's functionally equivalent though.


As stated in their CPS (with the exception of SMIME) and my post, it does not use <keygen>...


With StartSSL, you can actually generate your own key via OpenSSL (and you should) and then use that instead of doing it in the browser.


It is, however, ridiculous that DigitalOcean (a quite popular VPS provider) advises innocent webmasters to generate it in the browser with no mention of how insecure this is.

https://www.digitalocean.com/community/tutorials/how-to-set-...


It's a community contributed article - shame DigitalOcean didn't audit it properly though.


If StartSSL gets hacked you are in a bad place no matter what.

And you're more likely to screw up key safety yourself than for that narrow window to be exploited.

I don't think it's ridiculous.


I agree that there's a very slim chance, realistically, of this being exploited. But StartSSL doesn't have to be hacked for a user to be MITM'ed and served malicious JS. Especially given that their site (at least the homepage) loads over plain HTTP.


I've had no problem using startssl for personal projects. Free trusted certificates, and as madsushi says you should generate your own key and csr, just as you would for any premium certificate authority.


While trying to look it up myself:

* StartSSL.com itself doesn't use SSL, so it could be hijacked

* I get a TLS fail (ssl_error_handshake_failure_alert) on https://auth.startssl.com on Firefox 32

I'd call the first a red flag, the second a critical fail. This is the entry point to SSL on the web?


... does nobody here understand what StartSSL is doing?

Guys, they're giving you a certificate to identify yourself with. You add it in your browser. You go to their website, and you don't need a username or password to log in. This certificate is much more secure than normal credentials.

Is the concept of authentication without a username+password that lost on you? It's like an SSH key except your username is embedded in it, too.

I fear for the future of internet security.


Huh, I missed that. Thanks for pointing this out.

I had used StartSSL years ago and forgot about this. Not reading all the text, I expected the usual login/password prompt and hoped for a "reset password" form. Getting a browser SSL error interrupted that flow.

Now that I know, it makes more sense, but I'm going to take the position that this is a UX fail. Whether the browser's (which didn't even prompt me for a cert) or StartSSL (who could've made this clearer), I don't know.


Any input action that you do on their site will direct you to SSL. Though they should probably use HSTS and redirect all their users beforehand.

>I get a TLS fail (ssl_error_handshake_failure_alert)

That happens to me when I am on one particular provider myself. It concerns me that those providers are doing something to HTTPS connections that causes them to break on their site.


It would be pretty frustrating to be dumped onto a highway in a stick shift and have to learn how to drive by googling for instructions in the car. That doesn't mean that driving is unnecessarily difficult.

Driving is only intended for people who have been trained and licensed to do so. Similarly, SSL certs are for people who have been trained on how to perform the tasks of a server admin, and presumably have read a 10-page ebook like How To Admin A Web Server.

Web servers in general are never supposed to be touched by the common user. They never were. Your server admin would set up a user account, and you'd dump your files in your ~user/public_html/ folder, and maybe if you were super clever you'd create a .htaccess file. The most complicated task a user ever was supposed to perform was to run "chmod 755" on the files in their /cgi-bin/ folder on their FTP site. All of this worked because an admin had to set everything up the right way and know how to do that.

So the next time you take on a technical hurdle you don't understand, I cannot stress enough how much more useful it is to either look up a good book on how to do it, or ask someone for help. It may not be as instant-gratification, but you'll get what you need done faster and understand it better.


>I cannot stress enough how much more useful it is to either look up a good book on how to do it, or ask someone for help.

Recommending books doesn't seem like a great idea when it comes to web security, as best practices change so often. Besides, enabling something like TLS should be easy. The more difficult it is to enable TLS, the less secure the web will be. In fact, since TLS is so difficult to set up, it's rarely used. If it were easier to set up, it would be more common and the web would be more secure on average.


Well first of all, setting up TLS has not really changed in 15 years. The protocol has evolved over time, and as a result of wanting the most number of people to use it as possible, it's not configured by default to be the most secure; it's designed to be the most compatible. The best practice is to install it, and then tighten security to your use case.

Secondly, 'easy' is 100% subjective. To me it's incredibly easy to set up TLS. It wouldn't be easy to my grandma. But the same could be said for anything that takes technical expertise and a complex set of operations. TLS is not simple, and it will never be simple. People who understand how TLS works know this already.


To be honest, I enabled TLS on all my 10-page blogs a long time ago, but I took a couple days to read about the subject before dicking around aimlessly. So you're spot on.


Here is my unsolicited and unprofessional advice for this type of site:

1. Set up HTTPS on every site you run. No, really. That static 10 page info site for your church group? Yup, get it set up! The no-CSS blog from 1991 (before they were blogs)? Set it up! Even if you don't use WordPress (god, please tell me you are not running WordPress without SSL), and your site never lets anyone POST/PUT/DELETE/PATCH to it, remember that what people are reading is just as important. If I can hijack your site at the local coffee shop and serve malware, your readers will not be pleased. If I manage to do this in a widespread fashion, Google/Bing will blacklist your site and nobody will get to it.

2. Get a free cert! The dirty secret is that all certs are basically equal (EV and wildcard notwithstanding, though they are an entirely different matter). There are at least two places to get decent free certs: StartSSL and CloudFlare. If you want to protect something beyond your 10-page church website, get a cert from Namecheap for $8/year.

3. Use HTTPS-only. TFA is a great example: it's posted on a blog that can be accessed by both HTTP and HTTPS. If you leave this configuration in place, it's almost as bad as not having HTTPS at all. People don't type in "https://...". They go straight to "example.com" or they'll just Google "example" and click on the first link. Set up your server to redirect from port 80 straight to the canonical HTTPS version of your site (see the sketch after this list).

If you are unfamiliar with how to set this up: practice. Get a Digital Ocean box for a few hours ($0.10/hour) and a free cert from StartSSL. Use a random domain name you own (you'll need a proper second level domain, but chances are you have one parked somewhere) and try setting up a site. It'll cost you as much as a single stick of gum and you'll know that much more about how to do it.

Edit:

4. Use a strong cipher suite such as this one: https://support.cloudflare.com/hc/en-us/articles/200933580-W...

5. Use nginx, at least for front-end proxy. Your life will be easier.

6. Check your setup against https://www.ssllabs.com/ssltest/analyze.html. Fix issues it highlights.

7. Don't lose your private key. Don't have it live only on the live server.

8. Use HSTS (http://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security) but beware that once you have it set, you cannot go back to plain HTTP. For almost everyone this should not be a problem.
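
A minimal nginx sketch covering #3 and #8 (illustrative names and paths only):

  server {
      listen 80;
      server_name example.com www.example.com;
      return 301 https://example.com$request_uri;
  }

  server {
      listen 443 ssl;
      server_name example.com;
      ssl_certificate     /etc/ssl/example.com.chained.crt;
      ssl_certificate_key /etc/ssl/private/example.com.key;
      # HSTS: browsers remember to use HTTPS for ~6 months
      add_header Strict-Transport-Security "max-age=15768000";
      root /var/www/example.com;
  }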


I don't disagree with any of the points you make, but it feels like you're advocating the author skips straight to acceptance in their experience of the Grief of TLS.

If we had more people sitting in the anger, depression and bargaining phases of this, we might be in a better situation.

I count myself in the depression phase: the whole thing is a chaotic farce. Most websites achieve a worse security level now than 1995 export-grade SSL was thought to provide then. The CA system has thoroughly discredited itself through dozens of compromises and by refusing to operate transparently. The TLS standard is gigantically complex, and the IETF continually fails to competently improve it to the benefit of its users.

To quote Network: `all I know is that first you've got to get mad. You've got to say, "I'm a Human Being, God damn it! My traffic has Value!"'


I agree that the CA system is broken and that the setup is in some cases needlessly complex. I think that more people who run sites going through setting up HTTPS will mean more people having exposure to it, and more change over the long term. I don't think opting out is an answer. I guess I am saying that first we collectively need to understand the mess before we can clean it up.


Honest question: is jumping through all the hoops to enable HTTPS really worth it for a personal static website? Are hijackings really that common? It seems like a lot of hassle for negligible benefits... plus it's not just a one-time thing, the best practices seem to change every few months and not following them can result in "very bad things". It's a heck of a lot easier to just run HTTP.


The first site you ever set up will take you 2-3 hours of screwing around. The next will take you about 20 minutes. After that, it'll take you 5-10 minutes per site. If you decide to go with CloudFlare, it'll take a different amount of time because you'll be setting up DNS through them as well.

It is worth it. Here's why: nobody is going to target your 10-visitors-per-month site directly. You are right, most people don't care. However, there are two types of attacks that will get you in trouble. First, where I decide to sit in a coffee shop and simply hijack every HTTP request. In this case, I am not targeting you directly, but you are susceptible. Even if you don't care about that (say, you know that none of your readers are coffee drinkers/public Wi-Fi users), a much worse situation is where a network attacker is able to attack a large number of sites hosted with e.g. a specific provider. Let's say I discover that Digital Ocean has a vulnerability where I can spoof your IP. I would then MITM all HTTP traffic to all DO hosts, and if you happen to host with them you are screwed. Note that Google doesn't care whose fault it is: they blacklist first, ask questions never.

So in short, it takes very little time/money, it's a skill you should have if you run your own site, and it's warranty against bad things happening.


Has Google ever blacklisted a site due to an attack like that? I see the risk, I'm just trying to understand how likely it is.

When I first read your scenario about hijacking my site via public wifi, it didn't strike me as very important... but after thinking about it for a few minutes, I do see the harm. Even if it's just someone screwing with my resume, I can envision situations where it could do a lot of harm.

And you do make a good point about the Google blacklist, the consequence of a Google blacklist is very bad. Even if unlikely, that alone is probably enough reason to enable HTTPS.

I've set up HTTPS several times on the small sites that I run, and probably spent about 6 hours on the process in my lifetime. Right after heartbleed came out, I switched to HTTP only. Now maybe it's time to redo the process and get it set up again...


I've only had a site blacklisted once. My father ran a WordPress blog on shared hosting and got hacked (probably weak password or vulnerability in one of the plugins or WP itself, who knows). His site was pretty quickly blacklisted, and even after he scrubbed it, leaving just a basic index.html ("we are coming back" type thing), it stayed blacklisted for at least several days. I am sure others have more experience with this, I've just been lucky.


.. and $10 apiece, because you're nuts if you use Startcom after they charged everyone to rekey their certs after Heartbleed.


I believe the point OP was trying to make is "if it hurts, do it more often" [1], hence it's worth setting up HTTPS for a personal static site not due to hijackings but to practice the best practices.

[1] http://martinfowler.com/bliki/FrequencyReducesDifficulty.htm...


If you just want your site to "sit and do nothing", you still need to worry about security updates for software. Even with many static sites.

So staying up to date is worth it no matter what.


I agree, but with only nginx (http only) and sshd public facing services, it's usually a very quick and easy update. Dealing with https vulnerabilities can make it a lot harder to keep up, especially when the fix is not as easy as simply upgrading the software and restarting the service.


If you care about SEO, you betcha.


Don't back up or copy your private key. Should you lose your private key, or should it be compromised, generate a new one and issue new certificates. And naturally also revoke your previous certificates.

And before you start pushing out SSL on every page you have, stop using public WiFi. They will always be insecure, no matter what you do. Tether your phone or use a VPN.


> Don't backup or copy your private key. Should you lose your private key or it gets compromised you generate a new one and issue new certificates. And naturally also revoke your previous certificates.

I disagree. While yes a new key would be ideal, generally while you are dealing with the existing problem you will want to reach for the old key. I am not talking about situations where you misplaced the key or the server got compromised. I am talking about a situation where your current server/VPS suddenly dies and you need to spin up a new one fast. IMO, in this case wasting time on issuing/re-issuing a cert is inappropriate. On top of this, I tend to generate the certs on my laptop (I trust its RNG and physical security more than I trust the server). The key is already here. Now I can encrypt it with GPG with the full force of my 4096 bit key that only I can decrypt and store it fairly securely this way. I believe this is good enough for personal and professional sites. In the ideal case scenario, I'd also only keep these on an encrypted flash drive for even greater physical security.
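
E.g. something like this (a sketch, assuming your GPG key is already set up):

  # encrypt the key to yourself, then destroy the plaintext copy
  $ gpg --encrypt --recipient you@example.com example.com.key   # writes example.com.key.gpg
  $ shred -u example.com.key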


If your VPS does corrupt itself, make sure the key is securely destroyed! DigitalOcean was a great example of this...


That's ridiculous. There are plenty of ways to lose a private key that don't involve or lead to compromise.

I generate and store my private keys in my secure CA environment and copy them to the server. If I ever need to redeploy them or generate a new CSR (SHA-2 anyone?), I can do it without ever logging into the server.


”stop using public WiFi”

What kinds of threat is it that you’re concerned about?


Look up Firesheep.


Good point. As pointed out elsewhere in this thread, that kind of attack can be prevented by HSTS preloading, but of course that is an approach that's not scalable in the long run.


What could be the problem using public WiFi while pushing out SSL? I assume one would use SSH to connect to the server to push and configure SSL on it.


I generally point fellow sysadmins at https://wiki.mozilla.org/Security/Server_Side_TLS. Unfortunately, it doesn't include configuration advice for IIS.


>Set up HTTPS on every site you run.

Why? I don't trust it for security-critical data, and I don't need it for unsecure data. The only time I ever see it being remotely useful is if I were to set up an ecommerce site, at which point you basically only need HTTPS to fend off insignificant adversaries doing payment data sniffing and for CYA. For anything else, the benefits most certainly do not outweigh the pain in the ass described in the OP.

Also, if someone manages to "hijack your site... in a widespread fashion", it probably means they've rooted your server, at which point HTTPS does nothing useful anyway, because the attacker has your privkey.


Heh. So if you only see it as useful for ecommerce, would you mind posting all the passwords you use on non-ecommerce sites? Those of your users as well?

In seriousness, I outlined all the reasons why it is a good idea already. If you disagree, that is your prerogative of course, but you provide no arguments to support your point.


Hear, hear. I seem to be in the minority, but I loathe sites that require SSL without good reason. People overlook the massive added complexity at the client end as well, because it's hidden from the user most of the time - but I've lost count of the number of times I've been trying to get something done, usually in extremis and restricted to busybox or somesuch, and been unable to fetch a resource because curl/elinks/whatever wasn't built with SSL support.

By all means set up HTTPS on your little site, but for the love of god don't require it.


Although it's kind of minor, the way the NSA hack people is by hijacking non-SSLd connections and feeding them exploit kits. So, the more SSL there is, the harder it is and the longer it takes for them to do that.


The NSA is perfectly capable of hijacking SSL'd connections as well. They don't even need to do anything nefarious; my computer has (included by default) 4 root certificates controlled by the DoD.


9. Make sure to renew your certificate ON TIME. Someone needs to be responsible and this person needs to have it in their calendar. If you're not up to that, because it is in fact your church group and you're not sure you'll be there in a year, don't do this.
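
A cheap safety net on top of the calendar entry is a cron job that checks the live cert (a sketch; 2592000 seconds = 30 days):

  # warn if the cert served on :443 expires within 30 days
  echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null \
    | openssl x509 -noout -checkend 2592000 \
    || echo "certificate for example.com expires within 30 days - renew it"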

Also:

> 4. Use a strong cipher suite such as this one

Check out Mozilla's best practice. They'll give you configs for different levels of support.

> 5. Use nginx, at least for front-end proxy. Your life will be easier.

Be careful. It's tricky to configure and if you cut and paste your configuration from the Internet you will open up to arbitrary code execution.


Can you be more specific about 5?

I'm aware of issues with improperly matching php files, but not of any general configuration issues resulting in RCE.


I was specifically thinking of the php matching issue, which I've seen a few too many times to be comfortable with. People shouldn't copy and paste configuration from the Internet, but they do, and I wish nginx wouldn't make it downright dangerous.


Mozilla recommended cipherlist is here: (includes nginx config file) https://wiki.mozilla.org/Security/Server_Side_TLS#Recommende...


Point #3 always brings me back to MITM, and how there's almost no way for a technically illiterate user to avoid getting tricked into using an HTTP-only site. Nobody ever notices sslstrip. And while many people might counter with 'I don't care about that use case, it's unlikely', they basically assume that nobody will ever MITM their connection, which implies that they don't need secure connections. I wonder how often people actually think about these contradictions.


HSTS prevents that, though it doesn't protect you the first time you visit the site, unless you request addition to HSTS preload lists.

If you want to be added to the Chrome HSTS preload list, which is also used by Firefox, go here: https://hstspreload.appspot.com/


That's assuming a lot. Here are the reasons HSTS will not protect people:

1. Your browser has to support it. IE still does not support it; it is 'expected' in IE 12. Also vulnerable are people with Mac OS older than 10.9, Chrome older than 4.0.211, and Opera older than 12. Most people I know (non-techies) keep their browsers for the life of their computing device. So basically that's a gigantic pool of users who do not have HSTS support.

2. When they do finally get support, websites have to enable it explicitly. Here[1] is a sample graph of how few sites actually enabled it at the end of last year (about 2 out of every 1000 of the top 1 million sites, or roughly 0.19%).

3. The 'max-age' is often not set very long, meaning there's an increased chance for a new attack to succeed.

4. The preload list is not scalable.

[1] http://hstscheck.phpgangsta.de/


Definitely not a perfect solution -- all your points are definitely gaps in Strict Transport Security.

However, there's still a lot of value in adding HSTS. As for #1, #2, and #3, HSTS is a standard that can and will be more broadly supported (and better implemented) over time, probably more quickly than HTTP/2 will be supported on most servers.

Personally, I'm most concerned about #4. This should be something the IETF should be working on (if they aren't already).

At the end of the day, if you've already mastered transport encryption, you may as well go forward with HSTS as well.


Regarding your first piece of advice: why?


Yep - getting this right is hard.

Couple of points about the article.

Browsers verify SSL certificates for revocation (OCSP). This is an ongoing service that has a direct impact on latency - so SSL is an ongoing service very much like DNS. However, most people don't realize this.
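
You can see this for yourself - e.g. to check whether a server staples the OCSP response (which saves the browser that extra lookup), something like this works (a sketch):

  $ echo | openssl s_client -connect example.com:443 -status 2>/dev/null | grep -A 2 'OCSP response'
  # "no response sent" means no stapling; a full "OCSP Response Data"
  # block means the server is stapling the response for you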

Also you send in a CSR - certificate signing request - not CRT (which is usually short-hand for certificate).

Also it gets worse - A recent OpenSSL vulnerability would still allow SSLv3 even if it was configured with "no-ssl3": https://www.openssl.org/news/secadv_20141015.txt

This is why I built https://snitch.io - security and SSL secured sites in particular are moving targets and not "fire and forget". You really need an external process monitoring and auditing your secured site.


> Browsers verify SSL certificates for revocation (OCSP). This is an ongoing service that has a direct impact on latency - so SSL is an ongoing service very much like DNS. However, most people don't realize this.

Inconsistently and sporadically, it seems: http://news.netcraft.com/archives/2014/04/24/certificate-rev...

That article is a few months old though. Have Firefox/Chrome changed their tune due to Heartbleed?


Not really inconsistently. Firefox, Safari and IE all do this. Firefox, for example, will wait up to 10 seconds for an OCSP response (https://wiki.mozilla.org/CA:ImprovingRevocation)

That article cites Adam Langley - a respected engineer at Google who has worked on Chrome and parts of Go. Chrome is wildly lax with certificate revocation. Don't believe me? Browse to https://revoked.grc.com from Chrome. It is true that if someone can MITM they can block CRL/OCSP requests...but browsers (including Chrome) made the choice of 'soft-failing' and thus making it an attack vector. OCSP stapling and the proposed "OCSP Must-Staple" (https://tools.ietf.org/html/draft-hallambaker-muststaple-00) solve this problem. With all due respect to Adam, it seems a little peculiar to say revocation checks don't work when they're broken by design in the browser he worked/works on.

Chrome is the only browser that skips revocation checks for DV certificates but it still does OCSP for EV certs. Chrome has the concept of CRLsets - but these have been shown to only capture a very small portion (<1%) of revoked certificates.

Firefox has the option to hard-fail if the OCSP request isn't verified. This should be the default behavior, but the fear is that too few people understand this and would migrate to another browser if SSL secured sites randomly failed to load sometimes. Note: this is vastly preferable, in my opinion, to loading a site with a certificate of unknown status.


Re: snitch.io

Have you considered somehow providing a demo of what I might see for my domain?


Hi!

The screenshots show you what the app looks like. And to your point all accounts come with a free 14-day trial.

I may eventually add a free "one-off" audit - but the value in a service like Snitch is that something is constantly monitoring and alerting.

Happy to answer any other questions - you can email me anytime. This username at gmail or currylabs.com


Totally agree with the article: certs are the biggest money-making racket out there, and a pain in the ass. Certs should be issued with the domain by default, at no extra cost.


It's this kind of thing that gave us such a quick transition to Amazon and 'PaaS' providers becoming ubiquitous. You can avoid all of these problems and decisions by just giving your various certificates and stuff to ELB or Heroku.


On Heroku you can use the ExpeditedSSL addon and we will even update the certs for you as the security standards change.

https://addons.heroku.com/expeditedssl


A great write-up on the problems associated with getting https up and running. It's no wonder huge swaths of the internet are unsecured. I recently had to set up SSL for a Facebook app at work, and jump through a lot of these hoops (whilst missing some too it seems).

So my question to HN is: what is being done to simplify this and make it a more user-friendly process? (other than what Cloudflare has done)


I installed SSL for my company's website last week (I'm a decent backend engineer and Unix hacker). It BLEW my mind how difficult it was for me to: 1) understand the problem, 2) find the best certificate issuer, 3) make the wildcard work.

I 100% understand what happened to you and why you wrote this, and frankly I think someone should fix this. There's room for a great service here in my opinion


I still like StartSSL better than the other options. Yes, obviously opt out of having them generate your private key and CSR and upload your own; there is a big obvious "skip this" button for that. To me the killer feature is that with Class 2 validation ($60/year) you can generate as many 2-year certs, including wildcards, as you want.


Configuring things is hard, and if you rely on Google to give you magic commands to execute instead of learning about what you're doing, you can really mess up.

If you don't have the time to spend properly administering a system, don't do it; use a hosted platform so that someone else (who knows what they're doing) does it for you.


I ran into issues deploying HTTPS this past week. Two things of note:

- Android is super strict with certificates...make sure you properly order your server and intermediate certificates

- Check your site on: https://www.ssllabs.com/ssltest/. This helped me solve a lot of issues


This post hits the spot - a perfect rundown of the (broken) SSL experience. Why do we still live in an age where security and identity are so tightly bound that to "secure" a site, you have to pay someone to "validate" your authenticity - which, as we all know, is bollocks anyway?


If you'd like to test your SSL config against POODLE, we've updated our tester (https://www.tinfoilsecurity.com/poodle) to call out unsafe ciphers.


You can get StartSSL to sign a CSR that you generate locally, or at least you could when I generated the cert for my site with them.

This is the only nitpick I have with this otherwise fine rant. Everything is broken, rejoice!


No need for that word in the headline.


No need to get offended by it, either.


Note for clarity: claar's complaint was prior to a headline edit; "FFS" used to say "For Fuck's Sake".


Which was undoubtedly changed to more closely match the actual article title, rather than for censorship/family-friendliness reasons.


We're adults and on the internet, it's OK.


Sounds like you should be using Blogger or some other hosted platform.


Honestly, this task is something that most people learn to do and just do. It's not nearly as complex as you are trying to make it out to be.


It could be worse, he could have googled how to do something in php.



