"Expensive" is relative, and while yes, SSL does require more number crunching than not using SSL, the difference, as evidenced and backed up by google, is peanuts. In the age of widely distributed tools like Firesheep, there is no excuse to not use SSL if there is any reason at all that it should be used. Price should be no concern.
EDIT: In my (admittedly, probably overly harsh) opinion, this whole article just reads like someone whining about not wanting to do their job. The kind of thing I'd send to my boss if I wanted to tell him something wasn't technically feasible because I just wanted to sit around and sip cola.
"DISECONOMY of SCALE #1: CERTIFICATE MANAGEMENT"
The author states that distributing keys to a "farm" of servers (10 apparently) is hard. Presumably if you have 10 servers running your website you've figured out how to distribute code to them all without injuring yourself. Distributing certs is not much different.
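For instance, a minimal sketch (hostnames and paths here are hypothetical; your existing deploy tooling would do the same job):

    # Hypothetical hosts/paths -- pushing certs is the same motion as pushing code.
    for i in $(seq 1 10); do
      scp -p /etc/ssl/private/example.com.key "web$i:/etc/ssl/private/"
      scp -p /etc/ssl/certs/example.com.crt "web$i:/etc/ssl/certs/"
      ssh "web$i" "chmod 600 /etc/ssl/private/example.com.key"
    done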
"DISECONOMY of SCALE #2: CERTIFICATE/KEY SECURITY"
The author states that SSL keys are sensitive. Indeed, and so is your source code. The paragraph contains some excellent FUD, namely that a key on your "commodity hardware" server is immediately at risk of theft and will lead to "further breaches."
"DISECONOMY of SCALE #3: LOSS of VISIBILITY / SECURITY / AGILITY"
The author argues that SSL to the server introduces "unacceptable amounts of latency." This is just patently false. Many of the top 100 websites on the internet operate under this model.
The same paragraph says that not decrypting traffic at every hop will open up your service to "compromise." Even if inspecting all the traffic is part of your threat-mitigation strategy, out-of-band inspection via port mirroring can give you this ability without adding latency to the request or response.
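As a sketch of what that looks like, assuming a static RSA key exchange (out-of-band decryption won't work with DHE/forward-secret suites) and a hypothetical mirror interface eth1 fed by a switch SPAN port:

    # Decrypt a mirrored copy of the traffic with the server's private key,
    # off the request path, so it adds zero latency.
    ssldump -A -d -k /etc/ssl/private/example.com.key -i eth1 port 443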
Ah, and I think the author should remove the bolded claim that virtual hosts cannot be used with SSL certs. This is exactly what the SNI extension is for.
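You can verify SNI behavior yourself with s_client (the IP and hostnames below are stand-ins):

    # Same IP and port, two hostnames; with SNI each gets its own certificate.
    openssl s_client -connect 203.0.113.10:443 -servername siteA.example.com </dev/null | openssl x509 -noout -subject
    openssl s_client -connect 203.0.113.10:443 -servername siteB.example.com </dev/null | openssl x509 -noout -subject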
Remember, security is hard and always a compromise.
But I consider the idea of allowing your passwords, and other sensitive information, to flow over the wire in plaintext to be quite ridiculous.
The author suggests a false dichotomy: 2048-bit encryption (which algorithm? he doesn't say) or none.
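For what it's worth, "2048-bit" can only describe the RSA handshake key; the bulk data is protected by a negotiable symmetric cipher, as any suite listing shows:

    # Kx = key exchange (where the RSA-2048 cost lives), Enc = bulk cipher.
    openssl ciphers -v 'AES'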
There are a lot of complexities here that can be tuned for your business and its requirements. At least, if you can hire a competent security guy.
Completely agree. Which is why I say at the end of my original comment that security is "always a compromise." Put another way, you weigh the day-to-day cost of more hardware and man hours against the potential future cost of a serious security exposure.
Unfortunately most people are bad at calculating potential future costs. Which leads us to your second point about needing a good security guy. =]
Oh wait... look at that domain name...
But perhaps you need that kind of marketing when you're selling overpriced appliances.
Most of the latter part is FUD.
One real problem that is encountered when moving to terminating SSL on many machines, instead of a single LB, is SSL session resumption. When the LB terminates all SSL on a single VIP, it has an SSL session cache and can resume sessions with clients. If you make that LB do DSR (direct server return) to the servers behind it for SSL, they will have local session caches only. Odds are that subsequent connections trying to resume an SSL session will map to a different machine, and without a distributed SSL session cache, the resume will fail.
We saw that roughly 40-50% of SSL sessions were resumes at $BIG_INTERNET_COMPANY.
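If you want to check whether resumption survives your own LB/DSR setup, s_client can save and replay a session (hostname is a placeholder):

    # Full handshake, saving the session...
    openssl s_client -connect www.example.com:443 -sess_out /tmp/sess </dev/null
    # ...then attempt a resume; s_client reports "Reused" on success, "New" on a full handshake.
    openssl s_client -connect www.example.com:443 -sess_in /tmp/sess </dev/null | grep -E '^(New|Reused)'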
Would it have been that difficult to show how to run "openssl speed aes -multi 4" or whatnot so people could test on their own hardware?
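For the curious, something along these lines (numbers will obviously vary with hardware and OpenSSL version):

    # Symmetric (bulk) throughput, spread across 4 cores:
    openssl speed aes -multi 4
    # Asymmetric (handshake) cost -- RSA 2048-bit signs per second:
    openssl speed rsa2048 -multi 4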
Perhaps the recognition that even older hardware, such as the quad-core Opteron 2358 I tested on, delivers 350 MBytes/sec of AES-256 would tend to undermine their argument.
350 MBytes/second of network throughput is roughly 3.5 Gbits/second (I usually multiply bytes by 10 to account for overhead); far beyond what most people would ever ask of a single server.
(this same hardware does 900+ RSA 2048-bit signs/sec, just for reference)
I don't disagree with your point, but this bit is faulty. If you want to account for some overhead you should multiply by a number lower than 8, not higher.
Do you even know what scalability means? It roughly means that when the demand on your software increases linearly, you can throw more hardware at it (also at a linear rate) and everything should be fine. It does not mean or imply using a small amount of resources. In this sense, using SSL is pretty scalable, since it's a roughly fixed cost per connection: as your number of connections grows linearly, the hardware you need grows linearly too, not exponentially.
> Encrypted traffic cannot be evaluated or scanned or routed based on content by any upstream device. IDS and IPS and even so-called “deep packet inspection” devices upstream of the server cannot perform their tasks upon the traffic because it is encrypted.
Good, that means it's doing its job right.
For the exact same reason this post was written: because RSA operations do not scale linearly and become "expensive."
I honestly couldn't get through the rest of the article. If you don't think securing information is a high priority, then you probably work for Gawker.
Networks are more expensive to upgrade and limited by physics, especially wireless systems, e.g. cellular and satellite. Smart caches can help a lot here, but not if everything is SSL.
I think the security gained from adding SSL far outweighs the efficiencies lost by losing a shared network cache.
That's not a security-sensitive situation. In cases where security is at issue, the best practice is to make secure pages as lightweight as possible so they'll transfer quickly on slow lines.
HN did not disappoint.
Requests per second for a static file on my server via HTTPS: 10, limited by CPU
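For anyone who wants to sanity-check a number like that on their own box (hostname is a placeholder):

    # openssl's own crude connections-per-second benchmark:
    openssl s_time -connect www.example.com:443 -new -time 10
    # or ApacheBench, if your build has SSL support:
    ab -n 200 -c 8 https://www.example.com/some/static/file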