EFF's HTTPS Everywhere Firefox plugin (eff.org)
82 points by _delirium on June 18, 2010 | 17 comments



Just because it surprised me to find it - firewalls we deal with have HTTPS MITM capability built in. You install an SSL certificate which your corporate workstations trust, and it will present that to the workstation while proxying the connection out to the site, and sniffing/filtering the traffic in between.

Until I saw that I thought of it as 'an attack'.
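Conceptually, what these boxes do looks something like the sketch below. This is not working proxy code, just the bookkeeping: the names and classes are invented for illustration, not taken from any real product.

```python
# Conceptual sketch of the interception proxy described above; not a
# real TLS stack. The proxy forges a leaf certificate for whatever
# host the client asks for, signed by a corporate CA that the managed
# workstations already trust, so the browser shows no warning.
class CorporateCA:
    name = "Corp Internal CA"

    def sign(self, host):
        # Forge a leaf certificate for the requested host
        return {"subject": host, "issuer": self.name}

def intercept(requested_host, ca, upstream_fetch):
    cert = ca.sign(requested_host)              # presented to the workstation
    plaintext = upstream_fetch(requested_host)  # proxy's own TLS leg outward
    # ... sniff/filter plaintext here before relaying it ...
    return cert, plaintext
```

The key point is that the trust decision was made once, when the corporate CA was installed on the workstation; after that, the browser has no way to tell the forged leaf from the real one.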


I still consider it an attack.

There are lots of security reasons why a company would want to do that, both protecting against malware and protecting against loss of company IP. Still, it's a huge trade-off between privacy and security, and I am not willing to make that trade.

Maybe I would do it on my own connection, where I was both the MITM'er and the "victim", but the whole ssl system is built upon trust. It's a shaky foundation (do you trust all of the root certs in your browser?), but let's not start poking holes everywhere in it.


I'd be interested to know the manufacturer of these devices and how prevalent they are. (p.s. you're welcome to email me rather than do that in public: append @google.com to my username.)

Cheers


This is awesome. It's one of those pet peeves of mine that some SSL-enabled sites don't ever default to the secure version (and sometimes you don't know one exists).

I've actually been using Privoxy (http://www.privoxy.org/) to do the same (among other things, like ad blocking).

My privoxy actions file for that: http://media1.mike.tig.as/files/privoxy-ssl.txt
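For anyone who hasn't used Privoxy, an actions file for this looks roughly like the following. This is an illustrative sketch, not the contents of the linked file; the hosts are just examples.

```
# Illustrative only, not the linked actions file.
# Privoxy's +redirect action rewrites matching request URLs with a
# pcrs substitution; here it swaps http:// for https:// on listed hosts.
{+redirect{s@^http://@https://@}}
.mail.google.com
.paypal.com
```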


Something related: there's recent work called "tcpcrypt" that argues for "ubiquitous transport encryption" and scales better than HTTPS too. It's backwards compatible with TCP, does not require application rewrites, works with middleboxes, etc.

http://www.cs.ucl.ac.uk/staff/m.handley/papers/tcpcrypt-usen...


I read this back when it was a private preprint. Definitely well designed (a good fit for USENIX, where it will be published). They had to overcome some significant technical hurdles to get it properly backwards compatible (issues with TCP header size, NAT boxes, etc.).

My biggest interest is in the much lower computational overhead for the server, which, I can hope, will mean it will get used much more often than TLS/SSL (thus the idea of it being "ubiquitous").

There's growing interest in the idea of "opportunistic encryption", where the results are transparent and beneficial, but not always guaranteed. (I'm working on this in a different area currently.)


I really like this; "normal" web browsing makes it hard to be actually secure. Things like sslsniff (which basically does the exact opposite of this) make it really easy to MITM "normal" use of SSL (go to an http site, get redirected to https).

I'm kind of an "all or nothing" person, so when I had previously thought about how to "solve" auto-encryption like this, I thought about requiring https across the board. Of course, if you tried to browse like this, you'd have a pretty crappy experience. And if you built in an automatic fallback to http in case of failure, you'd have the same problem as before, where any MITM can trick you into visiting unencrypted sites.

This is a good compromise of forcing encryption on the important sites (like banks), but still being practical for the real world.
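The per-site approach can be sketched as a list of rewrite rules: sites with a rule get upgraded, everything else is left alone, so there's no fallback behaviour for a MITM to trigger. (Hypothetical rules and hosts below; HTTPS Everywhere's actual rulesets are XML files of regex from/to pairs, which this only approximates.)

```python
import re

# Hypothetical rewrite rules in the spirit of HTTPS Everywhere's
# per-site rulesets: each entry maps an http:// URL pattern to an
# https:// replacement. URLs with no matching rule pass through
# unchanged, so there is no automatic fallback to exploit.
RULES = [
    (re.compile(r"^http://(www\.)?examplebank\.com/"),
     r"https://\1examplebank.com/"),
]

def rewrite(url):
    for pattern, replacement in RULES:
        if pattern.match(url):
            return pattern.sub(replacement, url, count=1)
    return url  # no rule: leave the URL untouched
```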


Isn't there a technical reason for not offering HTTPS to everyone and everything? Like server load? Or is that no longer a problem due to the power of modern servers? Just thinking, won't this annoy website owners if adopted en masse...


HTTPS is HTTP over TLS (formerly SSL). TLS is stateful, so it probably won't scale as well as regular HTTP. Also, I think it will render HTTP caches useless.


I'd say that "everywhere" is a bit of an exaggeration.


"HTTPS Everywhere Possible" just doesn't have the same ring.

A few years back, I looked into making an extension that relied on an RFC that was never widely implemented, allowing a user agent to request an upgrade to HTTPS whenever possible. This is the next-best thing, relying on the authority of the user rather than the server.
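The RFC in question is likely RFC 2817 (upgrading to TLS within HTTP/1.1), though the commenter doesn't name it. A sketch of what such an upgrade request would look like on the wire:

```python
def tls_upgrade_request(host, path="*"):
    # RFC 2817-style optimistic upgrade: the client sends an ordinary
    # HTTP/1.1 request advertising "Upgrade: TLS/1.0"; a willing server
    # replies 101 Switching Protocols, and the TLS handshake then runs
    # over the same connection. Almost no servers implemented this,
    # which is why the approach never took off.
    return ("OPTIONS {} HTTP/1.1\r\n"
            "Host: {}\r\n"
            "Upgrade: TLS/1.0\r\n"
            "Connection: Upgrade\r\n"
            "\r\n").format(path, host)
```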


Something I'm not clear on: isn't HTTPS a stateful protocol (unlike HTTP)? If so, won't that have scalability implications? I assumed that was the reason most retail sites make you shop in HTTP, and then switch over to HTTPS for the payment step.


No. It's just HTTP over the top of SSL.


I believe SSL (TLS) is stateful, though, so HTTPS would be too. The browser would keep the TLS socket connection open for multiple HTTP requests, which would tie up resources on the server.


TLS/SSL can cache the session in order to actually improve performance and scalability. Otherwise you'd have to redo the exchange each time you made a request.

My guess is that the level of caching you'd want TLS/SSL to do is dependent on what kind of content you're serving, the usage patterns of visitors, etc. As an example, Facebook has relatively long user sessions, and would benefit greatly from caching and just refreshing the session keys. Something like Google search, where a user session may only last a few seconds and a couple requests... maybe not so useful. I'm not familiar (off the top of my head) with any in-depth studies on this.
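The trade-off described above can be sketched conceptually. This is not a real TLS implementation, just the bookkeeping: after a full handshake the server caches the session secret under its session ID, and a returning client presenting a known ID gets a cheap abbreviated handshake.

```python
# Conceptual sketch of TLS session caching, not a real TLS stack.
class SessionCache:
    def __init__(self):
        self._cache = {}
        self.full_handshakes = 0   # expensive: full key exchange
        self.resumptions = 0       # cheap: abbreviated handshake

    def handshake(self, session_id, secret=None):
        if session_id in self._cache:
            self.resumptions += 1
            return self._cache[session_id]
        self.full_handshakes += 1
        self._cache[session_id] = secret
        return secret
```

For a site with long user sessions, most handshakes hit the cache; for one-off requests, nearly every handshake is a full (expensive) one, which is the usage-pattern dependence the comment describes.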


HTTPS connections work just the same as HTTP connections. Connect, perform the request, disconnect.

You can send multiple requests over the same connection for either HTTP or HTTPS by using KeepAlive. There's nothing special about HTTPS connections that makes them more stateful than HTTP.


What happens when you browse a HTTPS website? Last time I messed with proxies, there was some caveat or something when it came to HTTPS targets.
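The usual caveat: a plain HTTP proxy can't read (or cache) HTTPS traffic, so the client instead asks the proxy to open a blind tunnel with the CONNECT method and runs TLS end-to-end through it. Roughly, with a hypothetical host:

```python
def connect_request(host, port=443):
    # The proxy sees only this CONNECT line and the destination
    # host:port; after its "200 Connection established" reply,
    # everything it relays is opaque, encrypted TLS traffic.
    return ("CONNECT {0}:{1} HTTP/1.1\r\n"
            "Host: {0}:{1}\r\n"
            "\r\n").format(host, port)
```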



