Dispelling the New SSL Myth (f5.com)
31 points by wglb on Feb 6, 2011 | 38 comments



The issue here is that the author doesn't really seem to understand the concept of "expensive".

"Expensive" is relative, and while yes, SSL does require more number crunching than not using SSL, the difference, as evidenced by Google's own numbers, is peanuts. In the age of widely distributed tools like Firesheep, there is no excuse not to use SSL wherever there is any reason at all to use it. Price should be no concern.

EDIT: In my (admittedly, probably overly harsh) opinion, this whole article just reads like someone whining about not wanting to do their job. The kind of thing I'd send to my boss if I wanted to tell him something wasn't technically feasible because I just wanted to sit around and sip cola.


Well, I read the article and it seems like it has a lot to do with the different crypto algorithms and key sizes. While that is true, note that even 1024-bit RSA with RC4 is better than nothing.


Hidden amongst the frothing rant are some good considerations. Unfortunately, most of the article is FUD. A couple of points I'll mention:

"DISECONOMY of SCALE #1: CERTIFICATE MANAGEMENT"

The author states that distributing keys to a "farm" of servers (10 apparently) is hard. Presumably if you have 10 servers running your website you've figured out how to distribute code to them all without injuring yourself. Distributing certs is not much different.
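
For what it's worth, here's a minimal sketch of what that might look like, with hypothetical hostnames, paths, and web server, assuming SSH access from wherever you already deploy code:

    # hypothetical hosts, paths and web server; adapt to your own layout
    for host in web01 web02 web03; do
        rsync -a /etc/ssl/private/example.com.key \
                 /etc/ssl/certs/example.com.crt "$host":/etc/ssl/
        ssh "$host" sudo service nginx reload
    done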

DISECONOMY of SCALE #2: CERTIFICATE/KEY SECURITY

The author states that SSL keys are sensitive. Indeed, and so is your source code. The paragraph contains some excellent FUD in the form of claiming that a key on your "commodity hardware" server is immediately at risk of theft and will lead to "further breaches."

DISECONOMY of SCALE #3: LOSS of VISIBILITY / SECURITY / AGILITY

The author argues that SSL to the server introduces "unacceptable amounts of latency." This is just patently false. Many of the top 100 websites on the internet operate under this model.

The same paragraph says that not decrypting traffic at every hop will open up your service to "compromise." Even if parsing all the traffic is part of your threat mitigation strategy, out-of-band or port-mirroring can give you this ability without adding latency to the request or response.

Ah, and I think the author should remove the bolded assertion that virtual hosts cannot be used with SSL certs. That is exactly what the SNI extension is for.

Remember, security is hard and always a compromise.


There is a cost to installing security, particularly at the higher levels of FIPS certification. Let no one dispute that.

But I consider the idea of allowing your passwords to flow over the wire in plaintext and allowing other information to flow in plaintext to be quite ridiculous.

The author suggests a false dichotomy: 2048-bit encryption (with which algorithm? he doesn't say) or none.

There are a lot of complexities here that can be tuned for your business and its requirements. At least, if you can hire a competent security guy.


"There is a cost to installing security, particularly at the higher levels of FIPS certification. Let no one dispute that."

Completely agree. Which is why I say at the end of my original comment that security is "always a compromise." Put another way, you weigh the day-to-day cost of more hardware and man hours against the potential future cost of a serious security exposure.

Unfortunately most people are bad at calculating potential future costs. Which leads us to your second point about needing a good security guy. =]


Hmm... I wonder if a vendor makes products that could offload the SSL burden from the server onto another device, and provide an introspection point between that device and the server, getting around all the objections that are raised here...

Oh wait... look at that domain name...


I love the part where not buying an SSL load balancer implies "perhaps less than ethical business practices". Say what?


The F5 blog is a FUD machine. I feel dumber every time after glancing over one of their articles.

But perhaps you need that kind of marketing when you're selling overpriced appliances.


The lack of mention of SNI is odd ... like the author doesn't know what they're talking about.

Most of the latter part is FUD.

One real problem that is encountered when moving to terminating SSL on many machines, instead of a single LB, is the problem of SSL session resumes. When the LB terminates all SSL on a single VIP, it has an SSL session cache and can resume with clients. If you make that LB DSR to servers behind it for SSL, they are going to have local session caches only. Odds are subsequent connections that try to resume the SSL session are going to map to a different machine, and without a distributed SSL session cache, the resume will fail.

We saw somewhere in the ballpark of ~40-50% of SSL sessions being resumes at $BIG_INTERNET_COMPANY.
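
One rough way to check whether resumes actually survive your setup (substitute your own hostname for example.com) is openssl's -reconnect option, which makes the initial connection plus five reconnects and reports each session as "New" or "Reused":

    # attempt session resumes against the same VIP and see how many succeed
    openssl s_client -connect example.com:443 -reconnect </dev/null 2>/dev/null \
        | grep -E '^(New|Reused),'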


Source-IP based persistence on the layer 3/4 load balancer solves that very easily. The src-ip cache merely needs the same timeout as the webservers' ssl cache.


Source IP persistence causes hot spots. Big web proxies etc end up clobbering a single machine.


There is a lot wrong with this article, including no definition of what "commodity hardware" means.

Would it have been that difficult to show how to run "openssl speed aes -multi 4" or whatnot so people could test on their own hardware?

Perhaps the recognition that even older hardware, such as the quad-core Opteron 2358 I tested on, delivers 350 Mbytes/sec of AES-256 would tend to undermine their argument.

350 Mbytes/second of network throughput is roughly 3.5 Gbits/second (I usually multiply bytes * 10 to account for overhead); far beyond what most people would ever ask of a single server.

(this same hw does 900+ RSA 2048 bit signs/s, just for reference)
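
For anyone who wants to check their own hardware, the stock openssl binary will produce comparable numbers (results obviously vary by CPU and OpenSSL build):

    # bulk AES throughput across 4 cores (the per-byte cost of SSL)
    openssl speed -multi 4 aes-256-cbc

    # RSA 2048-bit signs per second (the per-handshake cost)
    openssl speed rsa2048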


>I usually multiply bytes * 10 to account for overhead

I don't disagree with your point, but this bit is faulty. If you want to account for some overhead you should multiply by a number lower than 8, not higher.


No, he's right here--just looking at it from a different perspective. He's saying "it takes roughly 3.5Gbit/s of bandwidth to support 350MB/s of real data transfer."


> And as usual, it’s not just about speed – it’s also about the costs associated with achieving that performance. It’s about efficiency, and leveraging resources in a way that enables scalability.

Do you even know what scalability means? It roughly means that when the demand on your software increases linearly, you can throw more hardware at it (also at a linear rate) and everything should be fine. It does not mean or imply using a small amount of resources. In this sense, using SSL is pretty scalable, since it's a roughly fixed cost per connection: as your number of connections increases linearly, the hardware you need grows linearly too, not exponentially.

> Encrypted traffic cannot be evaluated or scanned or routed based on content by any upstream device. IDS and IPS and even so-called “deep packet inspection” devices upstream of the server cannot perform their tasks upon the traffic because it is encrypted.

Good, that means it's doing its job right.


Note however that even 1024-bit RSA with RC4 is better than no encryption at all.


There are no currently known viable attacks on that cipher suite.


I think it is interesting that this article doesn't mention the SNI (server name indication) extension to TLS in the section on certificate management. It seems like a great way to bring down the cost of SSL installations.

http://tools.ietf.org/html/rfc4366#section-3.1 http://en.wikipedia.org/wiki/Server_Name_Indication
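
A quick way to see the extension in action against a host of your own (example.com is a placeholder): compare the certificate returned with and without -servername, which is how s_client sends SNI.

    # with SNI: the server can select the matching vhost certificate
    openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
        | openssl x509 -noout -subject

    # without SNI: you get whatever default certificate that IP serves
    openssl s_client -connect example.com:443 </dev/null 2>/dev/null \
        | openssl x509 -noout -subject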


Does anyone actually use SNI? I looked into it, but browser support would exclude ANY IE version on Windows XP, which is pretty significant. Android and BB browsers also don't support it.


I don't know of mainstream hosts using it today, but I have to imagine that hosting companies want to offer it as an option to their customers. Interesting point about Android and BB, I hadn't noticed that before. Kind of seems like a chicken and egg problem. Obviously server admins don't want to turn on the feature until the clients support it, but the client support will go slowly until there are servers requiring it.


IE/XP browser support is what held me back when I was looking at SNI. SNI would have definitely made a migration to Amazon AWS more compelling. Without SNI, every unique SSL certificate = unique external ip = unique EC2 instance.


Heroku does.


Also of note is that NIST recommends ephemeral Diffie-Hellman, not RSA, for key exchange.

For the exact same reason this post was written: RSA keys do not scale linearly and become "expensive".

I honestly couldn't get through the rest of the article. If you don't think securing information is a high priority, then you probably work for Gawker.


EDH SSL still uses RSA (to authenticate the key exchange).


Could you explain what causes RSA keys to "not scale linearly"? I don't seem to recall any part of the protocol being non-linear in the key length.


It was more a comparison of security strength vs. key length. For example, 3072-bit RSA keys are equivalent in security strength to 128-bit symmetric keys. To reach 256-bit security strength equivalence, you need 15360-bit RSA keys. [1] (Page 63)

1. http://csrc.nist.gov/publications/nistpubs/800-57/sp800-57-P...


I hadn't thought of it that way. Thanks for the clarification.


Also, I think my numbers are correct: for every doubling of key length in RSA, the private-key operation is roughly 8x more expensive to compute, since the cost grows roughly with the cube of the modulus size.
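
You can see that curve with openssl's own benchmark; watch the sign/s column fall as the modulus doubles:

    # compare private-key (sign) operations per second as the key size doubles
    openssl speed rsa1024 rsa2048 rsa4096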


I have small computers with VIA processors that have the PadLock hardware, which provides an incredible boost to the most common cryptographic operations. I'm seriously considering sticking with these computers, at least as front ends for SSL.
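
If your OpenSSL build includes the padlock engine, a rough before/after comparison looks like this:

    # check that the padlock engine is available in this build
    openssl engine -t padlock

    # AES throughput through the engine vs. plain software
    openssl speed -evp aes-256-cbc -engine padlock
    openssl speed -evp aes-256-cbc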


I've been interested in VIA's hardware too - could you elaborate on what kind of performance gain you're seeing, and what hardware you're considering deploying?



The biggest fundamental issue with performance and SSL is that it is end to end, and defeats things like network caches, transcoding proxies, etc. Everything else can be solved by more CPU at both ends, which is cheap and only involves parties with a direct interest.

Networks are more expensive to upgrade and limited by physics... especially wireless systems, e.g. cell and satellite. Smart caches can help a lot here, but not if everything is SSL.


What proportion of web requests are served from a shared network cache? Whenever network cache is brought up in relation to SSL, I always wonder. Surely it's only a few percent max?

I think the security gained from adding SSL far outweighs the efficiencies lost by losing a shared network cache.


Caching can be very useful - I've deployed proxy web caches in school situations, and it does wonders for speed when loading the same site on a lab full of computers all at once (and also preventing age-inappropriate content at school).

That's not a security sensitive situation - in cases where security is at issue, the best practice is to make secure pages as lightweight as possible so they'll transfer quickly on slow lines.


It's an issue for wireless (EDGE, 3G) and satellite networks. I think a lot of networks do some kind of filtering and downscaling of images or videos.


Disclaimer: I submitted this link after seeing it tweeted by tqbf, and wanting to see HN's reaction to it.

HN did not disappoint.


Here's the response by Adam Langley, the guy who kicked off the "SSL is cheap" thing with another blog post: http://www.imperialviolet.org/2011/02/06/stillinexpensive.ht...


Requests per second for a static file on my server via HTTP: 400, limited by bandwidth

Requests per second for a static file on my server via HTTPS: 10, limited by CPU

:(
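
For anyone wanting to reproduce that comparison, ApacheBench is the usual quick-and-dirty tool (hostname and path below are placeholders, and ab has to be built with SSL support for the second case):

    # plain HTTP baseline
    ab -n 1000 -c 50 http://www.example.com/static/test.html

    # same file over HTTPS; each new connection pays a full handshake
    ab -n 1000 -c 50 https://www.example.com/static/test.html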



