How HTTPS Stops Attackers: What Every Web Dev Should Know (hartleybrody.com)
119 points by mattangriffel on July 25, 2013 | 43 comments



I'm wondering about this paragraph:

--- This public key exchange only needs to happen once per session, the first time the client and server connect. Once they’ve agreed on a shared secret, the client and server communicate using a symmetric-key crypto system which is much more efficient to communicate on since it saves an extra round-trip each exchange. ---

Two points:

- The first and second sentences don't appear to be particularly related.

- The symmetric system isn't more efficient because of round-trips. (I'm also not sure which round-trips it saves.)

Also (and I made the same mistake in my talk...), yes, explaining DH is important, but now it kind of sounds like in TLS both sides figure out the master secret using DH (and, in your talk, specifically, regular DH, not EC-based DH), when in reality that depends on the ciphersuite, and the vast majority of TLS connections don't work that way. From what I understand to be most TLS configurations in the wild, the pre-master secret is encrypted using the server's public key. (RFC 5246: 7.4.7.1, 8.1.1)

Finally, a bit of a plug, but... if you're interested in the build-up, my PyCon 2013 talk "Crypto 101" starts from XOR and ends with TLS in 45 minutes. It goes into a bit more detail about things like block and stream ciphers. I'm hoping to eventually turn this into a book. (If you're interested, my e-mail's in my profile.)


Since I can't seem to edit my comment, I figured it would be useful to provide a link to the talk I'm referencing: https://www.youtube.com/watch?v=3rmCGsCYJF8


I absolutely loved your talk; not only is it informative but also entertaining, and I found myself rewatching it just because. I think your talk gave me enough knowledge of crypto to take another go at Stanford's "Cryptography I" class. So thanks, and I'll look out for your book.


Awesome! Always glad to hear from a happy customer :)

Is there something in particular you'd like more of, or that should stay the same? Humor is good, I suppose?

Obviously, a book lets me go into more detail, but I'm disinclined to take that too far. The entire beauty of Crypto 101 is that it doesn't go into detail. Right now I'm mostly just marking sections that you could skip if you want to.


Good read about HTTPS, but a very bad title. HTTPS does NOT stop attackers. It does stop data sniffing on a wireless network, but it does not prevent any server-side attack (XSS, SQL injection, a flaw in the web application).


When properly applied it stops sniffing the wire as well (i.e. anywhere in the path), man-in-the-middle attacks, and any active attack against the encryption. Obviously there are other types of attacks that are unrelated to the "cryptographic" security, but that's a different story.


I thought HTTPS doesn't prevent MITM attacks.

> In 2013, Nokia's Xpress Browser was revealed to be decrypting HTTPS traffic on Nokia's proxy servers, giving the company clear-text access to its customers' encrypted browser traffic. Nokia responded by saying that the content was not stored permanently, and that the company had organizational and technical measures to prevent access to private information.


It does prevent the vast majority of attacks. The only attacks it doesn't prevent are when your CA signs the MITM's certificate, or when your browser trusts it directly (which is what happened in Nokia's case).


It certainly does. A cryptographic system that doesn't is useless. In the most basic sense, if you can't trust anyone, you can't know whether you're talking to a man-in-the-middle pretending to be someone else. TLS provides a mechanism for converting trust in a CA into trust that you're talking to the right party (the one who presented the certificate). If someone has a cert you accept and he's not the party you want to talk to, that's not really TLS's fault, and there's actually very little you can do about it. There are practical considerations as well, but from a theoretical perspective it's secure.


In Nokia's case, the browser trusted Nokia's server certificate and that's how they were doing MITM. The same could be done if your employer installs a client certificate on your laptop.


There's a company called Bluecoat that makes appliances that are installed in many companies that do just this.


Ah good point, I sorta dashed off the title at the last minute. But the main focus of the article (and the whole point of HTTPS) is to make a secure end-to-end connection between a client and a server, not stop all attack vectors on a web server.


Why don't they mention HSTS? Seriously, a very important safeguard against HTTPS stripping.

Apart from that, my idea of what every web dev should know about HTTPS is: use HTTPS for everything. The performance reduction is minimal and far less important than the security gain...


Is the performance reduction of the entire web with https really "minimal"?

You've got extra overhead to set up the connection, extra processing to handle the encryption, and no intermediate caching AFAICT. Clearly that has some impact; I'd expect it to be significant.


Of course it has some impact. But it's not significant at all. Take a look at the link in the other comment: https://news.ycombinator.com/item?id=6102983


In general I agree that you should err on the safe side and do HTTPS everywhere. But, until you profile your site with/without HTTPS how do you know performance reduction is minimal? Do you have evidence to support such a blanket argument?


Yes, of course I have evidence. Refer to Google's experience in 2010 when they switched Gmail to HTTPS by default [1]:

> On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead.

> If you stop reading now you only need to remember one thing: SSL/TLS is not computationally expensive any more.

1. https://www.imperialviolet.org/2010/06/25/overclocking-ssl.h...


"Because the TLS handshake occurs before the HTTP connection begins, there can be problems if there are multiple websites hosted on the same server, at the same IP address."

This is why Server Name Indication ( SNI ) is so important. It allows multiple host names on a single IP address to support HTTPS.

Once again, IE on Windows XP is the biggest problem here. Windows XP systems simply must use something besides Internet Explorer ( Chrome or Firefox would be good ).
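For the curious, here's roughly what SNI-based virtual hosting looks like in an nginx config (host names and certificate paths are made up). The server picks which certificate to present by matching the SNI host name from the client's ClientHello against server_name:

```nginx
# Two HTTPS sites sharing one IP:443, told apart purely by SNI.
server {
    listen 443 ssl;
    server_name example-a.com;
    ssl_certificate     /etc/ssl/example-a.crt;
    ssl_certificate_key /etc/ssl/example-a.key;
}

server {
    listen 443 ssl;
    server_name example-b.com;
    ssl_certificate     /etc/ssl/example-b.crt;
    ssl_certificate_key /etc/ssl/example-b.key;
}
```

Without SNI, the server has to choose a certificate before it knows which host the client wants, which is exactly the problem the article describes.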


GlobalSign has a pretty brilliant solution for dealing with Windows XP -- https://www.globalsign.com/cloud/multiple-ssl-certificates-s...


The fallback is a "multi-domain certificate", which means that if you host 10K domains, your server will send a single certificate containing all 10K names (a very large chunk of data) before anything can start. This solution is obviously not scalable.


It'll scale OK to 10s or perhaps 100s of names though. OK, so out-of-date people still using IE on XP will experience a slow start when accessing those sites, but if they are running "classic IE", slow obviously isn't a problem for them!

It isn't just IE-on-XP though: there are a surprising number of people still using Android 2.x on the last generation or two of smartphones, and IIRC SNI was only added to the stock Android browser in v4, so if your site is otherwise mobile-friendly this is going to be a concern. If the overhead per name is a few KB and that multi-domain cert needs to carry them all, then hundreds of names will mean several hundred KB in the initial handshake, which may be both slow and costly depending on the user's mobile network.

Though, as has already been pointed out for larger numbers: if you need to be using hundreds of names and can't get hold of at least a few more IPv4 addresses to spread them around, there is something either technically or financially wrong with your plans!

This is of course one of the reasons why we need IPv6, as this would become a complete non-issue. Unfortunately, IPv6 support is going to be lacking a lot longer than SNI support, as ISPs would much rather mess with hacks like NAT and SNI instead of investing in upgrading the base network.


If you're hosting 10K domains and can't afford multiple IP addresses, you're doing something horribly wrong.


That's correct, even for the first 25. I was talking about the implications of maintaining multi-domain certs from a scaling point of view.


Maybe one IP is being used for non-monetary reasons - load balancing perhaps?


I had quite low expectations for what I would learn from this article but it was actually very good. I really liked the paint analogy and the simplified understandable maths.

Does someone want to try it here? :P Let's use the root 2 and a small private number. We can do mod 10 (so the last digit of 2^(your secret) is your public number). My public number would be 4. (My secret number is secret.) I know this is kind of goofy, but it could be fun?


10 is not prime.


Drat, I failed that quite badly. Yes, a prime number would be needed. If someone knows why a prime must be used, or of something which explains it, I would be very interested to find out.
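As I understand it, the problem with 10 is that 2 isn't even invertible mod 10 (gcd(2, 10) = 2), and its powers cycle through only {2, 4, 8, 6}; with a prime modulus p you work in the full multiplicative group of size p - 1, and you can pick a generator whose powers reach every nonzero residue. Here's the same game as a Python sketch with the small prime 11 swapped in (all numbers are toy choices; real DH uses primes thousands of bits long, or elliptic curves):

```python
# Toy Diffie-Hellman over a *prime* modulus. Secrets are arbitrary picks.
p, g = 11, 2

alice_secret = 4                        # kept private
bob_secret = 7                          # kept private

alice_public = pow(g, alice_secret, p)  # sent in the clear
bob_public = pow(g, bob_secret, p)      # sent in the clear

# Each side raises the other's public number to its own secret:
alice_shared = pow(bob_public, alice_secret, p)
bob_shared = pow(alice_public, bob_secret, p)

assert alice_shared == bob_shared       # same shared secret on both ends
```

An eavesdropper sees p, g, and both public numbers, but recovering a secret from them is the discrete logarithm problem, which is only hard when p is large.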


Symmetric-key ciphers aren't used because they "save a round trip". They're used because they're much faster. One could use a public-key encryption system (e.g. RSA) to encrypt the entire session between two parties, but AES is much faster than RSA.


Speed is one concern but far from the main reason. You couldn't use a typical-length RSA key to encrypt a large chunk of data because there's a max input size for any given RSA key size. Standard block/stream cipher modes of operation are defined only for symmetric-key ciphers.


You can certainly use RSA as a block cipher; you are just limited in the size of the blocks.


The max input size is for one message. It's effectively the "block size" for RSA. You can send multiple messages using a single RSA key (and in fact pretty much everyone does that; otherwise you'd need a new public key for every message). RSA is a complete crypto system; you don't need any additional ciphers. You do need randomization (e.g. random padding).
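To put a number on that "block size", here's a toy RSA sketch in Python (tiny textbook primes, no padding; wildly insecure and purely illustrative):

```python
# Toy RSA: a block is just a number, and it must be smaller than n.
p, q = 61, 53
n = p * q                           # 3233: the modulus; a block must be < n
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

m = 65                              # a message that fits in one block
c = pow(m, e, n)                    # encrypt
assert pow(c, d, n) == m            # decrypt recovers it

# A message >= n can't be represented mod n, so encryption loses information:
too_big = n + 65
assert pow(pow(too_big, e, n), d, n) != too_big
```

With a real 2048-bit key and OAEP/SHA-256 padding the per-encryption limit is around 190 bytes, which is one reason TLS only uses RSA to move a small secret and lets a symmetric cipher handle the bulk data.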


For the sake of completeness, one such construction where RSA is used as a cipher is OAEP (Optimal Asymmetric Encryption Padding). Beyond being slow it has some other inefficiencies but that's another story.


I like this article, and found his explanations surprisingly clear.

One nitpick: SSL doesn't really impose much bandwidth overhead. http://stackoverflow.com/questions/548029/how-much-overhead-...


The article states that "Security breaks caching" but doesn't seem to elaborate on the topic. Would a kind soul elaborate, or point me to tips on how to improve performance under these conditions?


If the request and response are encrypted, how does the cache know what to cache?


At some point the browser must decode the content to render it on the display. So this decrypted content on the client machine could be cached, which leads to my confusion on the subject. I clearly don't understand something here.


Ah, I see. Yeah, I don't think client side caching is affected. Can't think of a reason it should be.


This article leads me to wonder. Has the US Government ever requested the master private keys for the main certificate authorities, such as Verisign?

It actually makes sense in a criminal investigation to spoof a real site in order to phish for passwords or the personal information of criminals. A similar method was used before to get cable thieves to call a phone number on the screen thinking they were giving their information to win a free item.

However, having the capability to impersonate any company, group, or individual has an enormous potential for abuse.


This was a really clear, concise overview. Just the right amount of explanation and clarity for me. Thanks!


What about proxied MITM attacks using a valid CA certificate for the proxy? And browsers are not the only HTTP clients out there. HTTPS, I believe, is quite a mess, as are cert validation and the CA cert system in general. I view HTTPS as security-lite. I think we need something better.


This is pretty much what I was trying to see if we could do: proxy HTTPS requests to another site through our server. When proxies are involved with HTTPS, there are really two options:

1. The client sends a CONNECT request (instead of GET), which instructs the proxy to open a tunnel to the remote server. In this situation, the proxy steps out of the way and simply shuffles bits back and forth, as if it were just another router between networks.

2. Set up a "man-in-the-middle proxy" which creates/signs certificates for each site on the fly. Basically, the client thinks the proxy is the server, and the server thinks the proxy is the client. The only way this works, though, is if the browser is instructed to trust certificates signed by the MitM proxy. So this works fine if you're setting up a proxy on your local dev machine, for example. But not for proxying HTTPS requests to actual users.
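For option 1, the exchange on the wire looks roughly like this (host name is illustrative); once the proxy answers 200, the client performs the TLS handshake straight through the tunnel with the remote server, so the proxy never sees plaintext:

```http
CONNECT example.com:443 HTTP/1.1
Host: example.com:443

HTTP/1.1 200 Connection Established

(TLS handshake and encrypted application data now flow through the tunnel)
```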


> But not for proxying HTTPS requests to actual users.

Proxying everything, including HTTPS traffic, is not uncommon for internal proxies at corporations.

Since they control the desktop infrastructure, a lot of companies install an internal CA as a trusted root. The proxy can then masquerade as any website it wants, since it can sign any certificate it generates on the fly. As a regular user you wouldn't even notice, unless you are certificate pinning.

The other big use for internal CAs is being able to issue SSL certs for internal apps without having to have them signed externally (both the inconvenience and $$$ involved).


AND, because of the inconvenience and $$$ involved, a lot of applications use a self-signed cert and just skip checking the validity of the CA. I have written a few apps like that myself, so I suppose it is a common pattern.



