Should All Web Traffic Be Encrypted? (codinghorror.com)
354 points by hartleybrody on Feb 24, 2012 | 128 comments

Lesser-known HTTP feature that I love. Instead of linking to resources as follows:

<link rel="shortcut icon" href="http://static4.scirra.net/images/favicon.ico />

You should link as follows:

<link rel="shortcut icon" href="//static4.scirra.net/images/favicon.ico" />

The double forward slash makes the URL inherit whatever protocol the current page is being viewed over, which means no security errors if you're switching between http and https!
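For anyone who wants to see the resolution rule in action, Python's standard urljoin follows the same RFC behaviour (a sketch, not browser code; hostnames are from the example above):

```python
from urllib.parse import urljoin

# A scheme-relative ("network-path") reference inherits the protocol of
# the page it appears on:
page_https = "https://www.scirra.com/page"
page_http = "http://www.scirra.com/page"
ref = "//static4.scirra.net/images/favicon.ico"

print(urljoin(page_https, ref))  # https://static4.scirra.net/images/favicon.ico
print(urljoin(page_http, ref))   # http://static4.scirra.net/images/favicon.ico
```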

Nitpicking here, but this would be a URI rather than HTTP feature. It is described in RFC2396 and is just part of how relative URIs work.

That is a neat trick, does it work on all browsers?

It does, although IE7 (and possibly IE6) has a bug where it will request the resource twice. But, honestly, if they're on IE6 or IE7, the web isn't fast for them anyway.

The double request on IE only happens on stylesheets, all other requests (including javascript) are fine.

Yep, I've tested it down to IE6.

I think so, yes.

I always expected this to do the same:

<link rel="shortcut icon" href="/static4.scirra.net/images/favicon.ico" />

If it doesn't, then what does it do? Default to http?

That's a relative URL. It would be equivalent to http://static4.scirra.net/static4.scirra.net/images/favicon....

Wow, this took forever to click. I have never written sites that spanned several servers like that. Thanks!

It's all on one server! We use scirra.net as a cookieless domain to serve static content (images, CSS, JS, etc.). This speeds up requests because otherwise the page's cookie data is sent with every request for a resource on the same domain, which can slow the page load down somewhat.

One cookieless domain is probably fine, but I found that for us four subdomains (static1, static2, static3, static4) give the best page load performance. It's been a while now so I've forgotten exactly why more can be good, but I believe the benefit of multiple cookieless domains is to do with parallelisation of requests (browsers limit the number of concurrent connections per hostname).

If you set it up right it's pretty easy to do; I just have a function that deterministically generates the static URL based on a hash of the name of the resource it's linking to.

For example:

<img src="<%=allocateStaticPath("~/images/logo.png")%>" />

The function hashes the filename, allocates it to static1, static2, static3 or static4 based on that hash, and then prints the path. Hashing is important: you don't want the resource skipping around between static servers, otherwise it won't be able to cache. And since hashes are designed to have a uniform distribution, most pages' static resources should be pretty evenly spread across the four static subdomains (I do tweak it on some pages though, like the homepage, as sometimes it comes out quite unbalanced).

All the staticN.scirra.net subdomains point to the same folder.
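A rough Python sketch of what an allocateStaticPath-style helper might do (the hashing scheme and helper name are illustrative, not their actual code): the hash of the filename deterministically picks one of the four subdomains, so the same resource always maps to the same host and stays cacheable.

```python
import hashlib

def allocate_static_path(resource):
    # Hash the resource name, then map the hash onto static1..static4.
    # A stable hash (not random choice) keeps the resource on one host,
    # so browser caching still works across page views.
    digest = hashlib.md5(resource.encode("utf-8")).hexdigest()
    n = int(digest, 16) % 4 + 1  # 1..4
    return "//static%d.scirra.net/%s" % (n, resource.lstrip("~/"))

print(allocate_static_path("~/images/logo.png"))
```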

Some people think it's overkill and a waste of time, but it's not. Once it's set up right it's very low maintenance. Also, the pages load very fast :) I would point you to our homepage http://www.scirra.com but it's in a transitional stage of moving servers so a lot of it is quite slow at the moment!

Page load speed is super important, especially on the visitor's first visit/request. The longer it takes, the more visitors are going to press back or cancel the request. People are also capable of differentiating between very small periods of time (~10-30ms), so really every ms counts. A fast page load also sets a good precedent that your site is going to be one they will enjoy browsing, as no waiting is involved.

Right, I should have said, I have never even worked with sites that span several domains (not servers). But that is a nice trick to know.

Playing around with Construct has been on my To Do list for a long, long time; I'll take note of the serving performance when I do! :)

It doesn't span several servers. It looks in a directory called static4.scirra.net that is in the webroot of http://static4.scirra.net.

You are probably thinking of a relative link to an image on the same server.

I use the EFF's Firefox addon called "HTTPS Everywhere". It has a list of websites that have HTTPS enabled, and whenever your browser is directed to the plain-HTTP version, it will go to the HTTPS version instead. https://www.eff.org/https-everywhere

A useful (but tbh kinda annoying) companion addon is HTTPS Finder. It checks whether the website you're currently browsing also has an HTTPS version, and will add a rule to the HTTPS Everywhere addon. (It also has a "whitelist" of sites that this breaks.) https://addons.mozilla.org/en-US/firefox/addon/https-finder

HTTPS Finder dev here - If you go into settings you can turn off the auto-redirect to HTTPS or the actual drop down alerts (or both).

Then you can go into Preferences > Advanced after some browsing, and you'll see a list of all the good sites found. You can create HTTPS Everywhere rules directly from there, without ever being annoyed during your normal browsing.

Thanks for mentioning this.

I've been using HTTPS Everywhere for a while now, but it's been almost entirely a vanilla install since I got it. The only exception so far has been HN itself, and only because someone put the new rule in a comment. I then had to use Google to find out how to actually install the rule, as it was pretty non-obvious to me.
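For anyone else stumped by the same thing: HTTPS Everywhere rules are small XML files. A minimal custom ruleset looks roughly like this (the hostname is just an example), dropped into the extension's HTTPSEverywhereUserRules directory in your Firefox profile, if I remember right:

```xml
<ruleset name="Hacker News">
  <target host="news.ycombinator.com" />
  <rule from="^http://news\.ycombinator\.com/"
          to="https://news.ycombinator.com/" />
</ruleset>
```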

It seems I've been downvoted, and I think I know why. I can't edit my original comment so a reply will have to do.

My comment was meant as sincere thanks for mentioning the HTTPS Finder extension. Reading my comment now, without that clarification, I can understand how it could read as a criticism of the HTTPS Everywhere extension instead. That was not my intention. I merely wanted to express my thanks for making me aware of a complementary extension which adds useful functionality to an extension I already use.

If the downvote was for another reason, please let me know what it was so I can try to avoid committing the same error in the future.

One item that this (excellent) blog post does not address is what to do about referer information, which is generally not passed along when clicking on links on sites being browsed over SSL.

In order to "get credit" for all of the traffic that they send everywhere, Twitter had to develop a fairly elaborate system of redirections (built into t.co) to make sure that clicks from twitter.com ended up being sent out to the rest of the web with referer information.

It would be a real shame if everyone in the world had to develop a similar process.

Part of me thinks that browsers should start sending referer information even when you click on links for SSL sites, though this change would bring with it other problems.

It is not at all obvious (to me at least) what the best thing to do here is.

If HTTP referrers never existed, the web would still be huge and would still be full of amazing content.

Why should we care about retaining referrers? I think the only reason people dislike the idea of losing them is that they've got used to them being there.

I don't particularly like the fact that sites which I click through to can see where I'm coming from, or what I was searching for, so I've installed a Firefox addon called RefControl to get around it. The majority of people don't know anything about referrers though.

I'm sure that brick and mortar shops would also love to know how I was referred to them when I walk through their door. They don't get this information unless I consciously decide to give it to them though. And even though they don't get this information, they still manage to sell products.

> If HTTP referrers never existed, the web would still be huge and would still be full of amazing content.

While we're at it, let's get rid of User-Agent. No sarcasm intended, I'm serious. It only does bad things.

I disagree - it's nice to be able to serve a mobile layout to a mobile device. If I'm trying to load, say, CNN or ESPN on an older mobile phone, I don't need all the cruft that comes with the desktop version.

(We can get into all the evils suggested by http://xkcd.com/869/ , but for the purposes of this argument I'm assuming that web developers and sysadmins are competent and not evil.)

And then you get served the mobile version of the site because you are using Opera and some incompetent web developer[1] decided that it's a mobile browser. But you have a point. However, deciding what to show should be based on the device (resolution, size, orientation, input methods, ...) and not the browser.

Now, just for fun and laughs, check out what user-agent string is sent by Chrome. I guess Google assumes everyone is like them and their poor browser would get blocked if it just said it's Chrome.

1. https://groups.google.com/a/googleproductforums.com/d/topic/...
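To make the sniffing pitfall concrete: Chrome's UA string (this one is from Chrome 17) claims Mozilla, AppleWebKit and Safari ancestry all at once, so a naive substring test like the hypothetical one below goes wrong:

```python
# Chrome's real User-Agent claims to be Mozilla, AppleWebKit and Safari
# all at once, so a naive substring check misclassifies it:
chrome_ua = ("Mozilla/5.0 (Windows NT 6.1) AppleWebKit/535.11 "
             "(KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11")

def naive_is_safari(ua):
    # Fragile heuristic of the kind incompetent sniffing code uses.
    return "Safari" in ua and "Mobile" not in ua

print(naive_is_safari(chrome_ua))  # True, even though this is Chrome
```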

Those don't really affect the HTML that gets sent.

    # wget --user-agent="Mozilla/5.0 (iPhone; U; CPU like Mac OS X; en) AppleWebKit/420+ (KHTML, like Gecko) Version/3.0 Mobile/1A543a Safari/419.3" http://www.cnn.com/ -O cnn.mobile.html
    # wget http://www.cnn.com/ -O cnn.standard.html
    # ls -lhrt | grep cnn 
    -rw-r--r--    1 pavel  staff    29K Feb 24 13:12 cnn.mobile.html
    -rw-r--r--    1 pavel  staff   104K Feb 24 13:13 cnn.standard.html
I don't need 100K worth of stuff on BlackBerry 1.half's browser that can't render most of it.

Yeah, this is definitely a valid point of view. I'm not sure we can get from here to there, though. So much of the way business is done online today is based on the status quo that changing it is pretty hard.

Could happen though I suppose. /shrug/

Inertia shouldn't stop us from doing something. Where a visitor comes from is very valuable (not just monetarily), and I don't think it really infringes on privacy.

It absolutely does infringe on privacy. The only thing that is up for question is by how much and if the benefits outweigh the drawbacks. People who benefit from them existing are obviously going to have a different view point to everyone else (on average).

I don't think that referrers should ever have been a part of the protocol and I don't think that the commercial value of them existing should have any influence on whether or not HTTP continues to include them. Unfortunately, both Google and Microsoft benefit financially from the existence of referrer headers, so I don't see them going anywhere in Chrome or IE at least.

Referrer information allows website owners to see who is linking to their site. It's like a private, reverse form of <a> tags.

I agree that referrer information is useful to site owners. What you need to understand is that just because something is good for site owners, doesn't mean it's good for site visitors, or the web in general.

If you wanted to see who was linking to your site, and referrer headers didn't exist, you'd use a search engine. Hell, people would build dedicated search engines which alert you when somebody links to your site.

Referrers are good for identifying where a user came from, or what they were searching for when they land on your site. Well, sometimes they don't want you to have that information, and most of the time people are completely unaware that you're getting it.

The interests of web site owners align a lot with the web itself. And referrer headers are much easier than having to build a search engine and being at their whims.

"The interests of web site owners align a lot with the web itself."

Not in this particular case.

"And referrer headers are much easier than having to build a search engine and being at their whims."

Absolutely. And if your browser sent a HTTP header containing your name, address, sexual preferences and date of birth, that would also make things even better for website owners. Just imagine how much better they could target their adverts!

Yeah, this is a bit of a pain. We had to do something similar for Facebook (https://www.facebook.com/note.php?note_id=10150492832835766). There's talk of a meta referrer tag (http://wiki.whatwg.org/wiki/Meta_referrer) that would allow overriding this behavior and make things a lot simpler, but afaik it's not implemented in any of the major browsers.

When going from one https site to another, referer information is usually sent, at least by modern browsers. So if the whole web goes encrypted (yay), the issue is solved.

Really? That's....very strange. The rules for this sort of thing seem so arbitrary.

It makes a little bit of sense. When you come from a https site, you don't want to leak referrer (and thus which part of which site you were visiting) over the unencrypted internet.

Though in my eyes sending referrers always has been questionable "insecure by default" behavior, as the internal structure of one site is leaked to another. With hindsight, maybe it should have been restricted to the domain name.
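A sketch of that rule as a function (hypothetical, just to pin the behaviour down: drop the Referer entirely on https-to-http navigation, and the stricter variant trims it to the origin otherwise):

```python
from urllib.parse import urlsplit

def referer_to_send(from_url, to_url):
    # Never leak an https origin over unencrypted http.
    src, dst = urlsplit(from_url), urlsplit(to_url)
    if src.scheme == "https" and dst.scheme == "http":
        return None
    # Stricter variant suggested above: origin only, no path or query,
    # so the internal structure of the site isn't leaked.
    return "%s://%s/" % (src.scheme, src.netloc)

print(referer_to_send("https://bank.example/account/123", "http://evil.example/"))
print(referer_to_send("https://bank.example/account/123", "https://other.example/"))
```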

It's too bad this refers to SSL. There are sometimes good reasons not to use SSL, but there is rarely a good reason to send emails that contain any business, financial or security information over plaintext. Anyone who gets mails that amount to more than "Hey what's up man" should provide you with a public key for mail crypto.

Also, it is very smart to use full-disk encryption and also encrypt sensitive info on that disk in a separate encrypted file (often preferably with something like TrueCrypt that allows plausible deniability via hidden volumes) if your computer is used for anything important.

Think of the extent of damage that would have hit HBGary or any of the many other companies that have found themselves in a similar quagmire if they had employed some of that computer security knowledge to encrypt mail and required digital signatures before doing anything important (hint: the answer is 0).

You may have a competitor hooked into your mail server for years before you know anything has happened, while you scratch your heads and wonder why they always beat you to the punch on new products and steal your big clients.

You may have a hostile government agency after you for completely innocuous things, like downloading public-domain research articles. In this case, lots of encryption is going to buy your lawyers lots of time even if the judge eventually orders you to decrypt all of it; hopefully the real goods are hidden somewhere they won't find them (like in a TC hidden volume, perhaps "in the cloud" in an encrypted file on Tahoe-LAFS over I2P).

Sorry, off-topic, but as a fan it really bugs me that graphic halfway down appears taken without attribution from Hyperbole and a Half. http://hyperboleandahalf.blogspot.com/

You know it's become a huge internet meme, right? http://www.google.com.au/search?q=x+all+the+things&oq=x+....

Missed that one. Too busy reading Hacker N -- I mean, writing code.

And the meme-ification only makes me feel slightly better about it. I mean, what's it cost Jeff to give a little credit?

There is a good chance Jeff doesn't know about the source, and generated it using one of the meme generator sites.

He likely made it here:


I wouldn't have known where it originated from. It's possible/likely the source gave them permission to use it.

Does rampant theft make theft any more acceptable?

No, but the fact that copyright infringement is not theft might.

Haha, looks like the MAFIAA war machine has been working pretty effectively that even HN readers are starting to believe it!

I recommend re-reading Lawrence Lessig's Free Culture periodically as an antidote to the reasonablish arguments pouring out of the entertainment industry.

The taboo against "theft" is a cultural norm, and now there's a new cultural norm.


Even the very definitions of words are tribally defined, and carry implicit values. Some (not me) could make a case that wage slavery is also morally indefensible, or that private property is a form of theft from the poor. The fact that you see no distinction between copyright and theft is a perfect example of this morality-through-definition.

To use a straw-man example: Donald Trump owns the trademark to the phrase "You're fired." Am I stealing if I repeat this phrase? Now how about Amazon's 1-click, or humming "Happy Birthday"?

A hundred years ago, you could own the copyright to a book, but not claim ownership of a phrase or obvious idea. A thousand years ago, you could not effectively claim ownership of a book. Ten thousand years ago, you (probably) could not have claimed ownership of land, at least in any form we're familiar with. These are all things we made up, for better or worse.

Regardless of any person's or tribe's specific opinions on economic issues, private property, or ownership of ideas, I think it should be implicitly obvious that breaking society's copyright rules is a different act than stealing, just as manslaughter is different from first-degree murder. It doesn't justify it, it simply acknowledges that it is a different act.

Also, this particular case is laughably mild. The creator has only benefitted by sharing this free art snippet and is now internet-famous as a result. I guarantee that Jeff would remove it if asked, and also that the creator would never think to do so, given that this image is strewn across thousands of websites.

I think you mean "tribal" norm.

There's no other norm.

Your tribe endorses theft.

Or it just doesn't consider an act that doesn't deprive someone of anything they had to be theft.

It's possible that they've only seen it as a meme, in which case they would have no idea who to credit. I for one had never heard of the original until just now.

IANAL, but since the content in question is not a trademark, I do not think that genericization applies.

Since Hyperbole and a Half is Creative Commons licensed (CC-NC-ND), it looks like its licensing requirements would be satisfied by attributing the original source ("Proper credit includes a prominent, easily visible link to the source of the material you want to use...")[1].

I think making sure that images in your blog post are properly licensed is difficult [2], but on the same level of difficulty and importance as determining a license for a software project [3].

[1]: http://hyperboleandahalf.blogspot.com/p/faq_10.html

[2]: I have personally had issues with this, which I wrote about at http://www.marteydodoo.com/2011/01/19/licensing-is-hard/

[3]: http://www.codinghorror.com/blog/2007/04/pick-a-license-any-...

You should balance that against the fact that the license also forbids Derivative Works (the ND in the CC-BY-NC-ND) - which likely means that the images themselves, even with attribution, are breaking the license.

For what it's worth, that applies to trademark, not copyright.

Fair use?

I believe you still have to credit the original copyright holder when claiming Fair Use.

No, there is no such requirement. Fair Use is judged on a four prong test, none of which involve crediting the author:

1. the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
2. the nature of the copyrighted work;
3. the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
4. the effect of the use upon the potential market for or value of the copyrighted work.



But you cannot claim authorship. That violates 17 USC 106A rights of attribution and integrity.

Woah ... I've just re-read 106A for the first time in a few years. There've been a few additions to that section, including a mess of integrity, modification, and destruction claims. http://codes.lp.findlaw.com/uscode/17/1/106A


If not for your post I wouldn't have known the meme was from this comic, and then gone and read the blog and enjoyed it and now appreciate the meme more.

Jeff has added a link. Thanks, Jeff!

It's a popular image macro by now.

Funny that the https://stackoverflow.com/ certificate is not valid.

Funny that this was written after Jeff Atwood left Stack Exchange.

To be fair, Jeff Atwood recently left Stack Exchange (amicably, of course): http://www.codinghorror.com/blog/2012/02/farewell-stack-exch...

I thought his last day was March 1?

or... have I been working on this bug for way too long?

Well, it's a cert for stackexchange.com and not stackoverflow.com.

They need to be serving a different cert on a different IP for that (ignoring SNI since not all browsers support it).

Ironic, but there's virtually nothing private on SO.

Training users to click-through security warnings diminishes the security for all sites.

That was actually my first thought after reading this. Chances are, people in public wifi spots will actively and aggressively ignore any and all warnings that get in the way of them getting their work done (I have witnessed this far too many times).

However, Chrome surprised me the other day by not allowing me to continue on to a site that didn't have a valid certificate (I was MITMing myself using the Paros proxy to test something). In other words, we don't need stricter warnings; the warnings must become errors for them to be noticed.

... except your login cookie. :)

Yes, please, encrypt all the things, but absolutely don't use HTTPS to do it.

See Daniel J. Bernsteins pet project CurveCP (http://curvecp.org/). He also had a talk on 27C3 (http://events.ccc.de/congress/2010/Fahrplan/events/4295.en.h...).

How do you propose to use CurveCP to encrypt "all the" HTTP without involving HTTPS?

What's with the submitted title here? It would be less meme-y if the blog article's title were used.

Just Tuesday, I sent an email around the company discussing SSL vulnerabilities, how they impact our product, and ways we can mitigate that. I've pulled out the parts specific to our product, but the rest may be interesting. I would love feedback on things I may have missed. FWIW, it doesn't instill great confidence in SSL, but it isn't completely horrible.

------------------------

1. It is possible to pretend to be any site you want if you 1) find a sleazy CA (and they exist aplenty) or get the government involved and 2) can get between your browser and your final destination (like, for example, a wifi hotspot). There will be no (reasonable) way to tell you aren't connected to whom you think you are.


This could be addressed using http://convergence.io/

2. Way easier, but leaving some tell-tale signs you can find is to simply put yourself between your victim's browser and his server and convert all the links that come back to be insecure links that go through you. You then encrypt them as you pass them on to their final destination, while being able to see everything that happens. This is trivial to set up, but can be gotten around simply by using bookmarks that specify HTTPS.
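The rewriting step itself is as trivial as it sounds; conceptually the attacker in the middle just does something like this to every page passing through (illustrative only):

```python
# sslstrip-style downgrade: the victim is served a page whose secure links
# have been rewritten to plain http; the attacker then upgrades the
# connection itself when forwarding upstream, and reads everything.
page = '<a href="https://bank.example/login">Log in</a>'
downgraded = page.replace("https://", "http://")
print(downgraded)  # the victim now clicks a plain-http link through the attacker
```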


This won't go away until everybody is using 100% SSL and HTTP (unencrypted web traffic) is turned off in browsers.


3. For the very determined, it is possible to determine the symmetric key a particular SSL session is using if you have some luck, some skill, and some time (about 30 minutes).


This requires a protocol change to SSL. We've known about (theoretical) vulnerabilities for 10 years, yet most sites still run old versions of SSL. Given how slowly people like banks update infrastructure technology, I don't see this one going away for a long time.

4. If a site is improperly configured, it may allow an attacker to gain access to the cookie representing your secure session by making an insecure request. This is another class of vulnerabilities made possible by using untrusted networks. The misconfiguration allows the browser to send your (supposedly) secure cookies in an unsecured request simply by making any request (typically done by inserting JavaScript into an unsecured page you are browsing). It is possible to mark cookies as "secure only", but services choose not to do that so you don't lose your session if you type http://example.com instead of https://example.com.
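For reference, setting that flag is a one-liner in most stacks; a Python sketch (cookie name and value assumed):

```python
from http.cookies import SimpleCookie

# A Secure cookie is only ever sent over HTTPS, closing the hole above;
# HttpOnly additionally keeps it away from page JavaScript.
cookie = SimpleCookie()
cookie["session"] = "opaque-session-id"
cookie["session"]["secure"] = True
cookie["session"]["httponly"] = True
print(cookie.output())
```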


1) You can address it in Chrome with pinning [1]. Built-in pins require that you be a significant site, but you can also set them with HTTP headers [2].

[1] http://www.imperialviolet.org/2011/05/04/pinning.html

[2] http://tools.ietf.org/html/draft-ietf-websec-key-pinning-01

This is a rather poor solution. The longer term one is Certificate Transparency: http://www.links.org/?p=1219

2) is solved with HSTS [3]. You can contact me (@chromium.org) to have your site built in. There isn't a notability requirement.

[3] http://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security
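The opt-in is just one response header; a sketch of the header and the sort of trivial parse a client might do (illustrative, not the browser's actual parser):

```python
# After seeing this header once over HTTPS, the browser rewrites all
# future http:// URLs for the domain to https:// before any request
# leaves the machine.
hsts = "Strict-Transport-Security: max-age=31536000; includeSubDomains"

name, value = hsts.split(": ", 1)
directives = dict(
    (d.split("=") + [True])[:2] for d in (p.strip() for p in value.split(";"))
)
print(directives)  # {'max-age': '31536000', 'includeSubDomains': True}
```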

3) The BEAST attack was tough to pull off and is fixed with Chrome, FF10 and IE.


4) Yep, cookies must be marked secure. HSTS can also fix this by eliminating the insecure requests. Even with secure cookies (but without HSTS), a MITM can also set the cookie (i.e. to log you in to their account before you hit 'send' and then to log you back into yours before you notice the problem).

> 2) is solved with HSTS [3]. You can contact me (@chromium.org) to be built in. There isn't a notability requirement.

I don't understand that. If I serve http://example.com/mypage which has a link to http://mint.com/justin, you won't convert that to https://mint.com/justin, right? Even if example.com has HSTS enabled? Cause that would assume that mint.com has https, or else the whole thing breaks.

Cause in that case a man in the middle can just insert links to other domains (say, http://examp1e.com/myotherpage when I was serving a link to http://example.com/myotherpage) and still have the attack work. Like the GP said, only starting at an HTTPS page would solve this.

But you're the expert and I'm not, so what am I missing? :-)

As long as mint.com has HSTS, and either the user has been there once before or it was hard-coded into the browser as an HSTS domain, the browser will never visit http://mint.com; it will immediately go to https://mint.com.

EDIT: and well it doesn't seem that mint.com even has HSTS enabled... so bad example :P

Good points. Additionally: if you control the client, you can also trust your own CA only, or even just require a specific certificate.

How have I never seen convergence? I've been bitching about the weaknesses in the CA system for years, and totally missed that someone has done something about it.

See also Perspectives[1], on which Convergence is based.

[1]: http://perspectives-project.org/

Yeah, I'm aware of that, but it had some privacy issues that Convergence seems to have figured out.

Expanding on 1): you can also play man in the middle, decrypting and re-signing traffic with your own fake CA. If you ever have access to the user's machine you can install your fake CA as trusted, and could have done it long in advance (e.g. via trojans/viruses).

When I was working on WAN optimizers I actually did this during research. All the various sites I visited still proudly told me how they were "Verisign Trusted" and even clicking on "Verify" links would tell me how verified and correct it was.

The UI in the browsers tries hard, but in reality users want to access the site and they will hit OK to get there. convergence is nice (if you run Firefox) but it is of significantly less help when using corporate/intranet sites.

It's not a "faked CA", it's a perfectly legitimate "internal CA" you created.

Of course you may be using it to impersonate external websites to your internal users, but the circumstances under which that may be an OK thing to do is a policy question that's still evolving.

If you've already owned the box, why would you bother with MITM?

I was working on WAN optimisers - ie something that would be sold to customers. With WAN optimizers you would typically put one in a branch office and one at headquarters. The boxes would then compress traffic between them. Typical compression ratios are 20x/95% - in other words your WAN link can now transfer about 20 times as much traffic. Additionally some traffic would be modified to do read ahead and write behind to provide latency improvements. An example of that was a user in Malaysia opening a Word document in San Jose, CA that was a 75kb file. Without a WAN optimiser it would take almost 3 minutes while with one it would take 5 seconds.

The problem with SSL traffic is that it is encrypted and doesn't repeat even for identical underlying data, and hence can't be compressed, nor can it be modified. This significantly hurts performance. To work well the SSL would need to be stripped off, the traffic compressed/read ahead etc, sent over the WAN and then SSL put back on. (The communication between the WAN optimisers was itself within IPSec or SSL.) SSL is designed so that you can't pull shenanigans like this, unless you have the private key of the servers, or resign the traffic with a different CA that can generate the needed certificates on the fly and are "trusted" by the user.

Many internal corporate services have moved to SSL and branch office users need to access them. Think about benefits systems, HR, documents, accounting, sales forecasting and tracking etc.

In the corporate world, it is used a lot for data loss prevention policies. It keeps employees from sending a file containing all their customers' social security numbers, addresses, and credit card numbers to their home email account (or even one of them).

How is Convergence different from the Perspectives add-on[1]? I ask because I honestly don't see any significant advantage to one over the other.

[1] https://addons.mozilla.org/en-US/firefox/addon/perspectives/

Convergence has a proxy system built in so you connect to notaries via other notaries. It's a rather rudimentary attempt at offering privacy to the user looking up the certificate. It (rightly or wrongly) works on the assumption that people running notaries won't collude.

Perspectives always leaks information about which sites you are visiting to all notaries.

Convergence leaks information about which sites you are visiting for the first time with the current key only if your bounce notary colludes with one other notary.

You can also look at the video where he talks about Convergence.

http://www.youtube.com/watch?v=Z7Wl2FW2TcA (Start from 35m35s)

Adding SSL to a site sounds easy but it's very difficult in some instances.

Take a forum, for example: we can force everyone onto HTTPS quite easily, but as soon as someone hotlinks a non-HTTPS image in a post, it'll throw up security warnings which are (in my opinion) overly dramatic and very unfriendly to the user experience.

User-submitted content and HTTPS can be a pain to get right, and on some platforms, like common bb software, it's basically so time-consuming to modify that it's just not worth doing.

SO solves this by using imgur as a proxy; a lot of sites don't have that luxury, unfortunately, or even the technical expertise to implement something similar. This is also a bit wobbly on the old copyright laws.

Sounds at first like an admirable concept, but unfortunately would be incredibly problematic.

What about the case where you're virtual-hosting many sites on one IP address? Since the SSL handshake occurs prior to any HTTP data being sent, and the browser will reject a server-certificate whose hostname does not match, you're normally restricted to one HTTPS domain per IP address.

I gather there are TLS extensions to support SSL vhosts (SNI), but I don't know how widespread they are. There are also methods of including more than one hostname per certificate, but I'm assuming that when you purchase an SSL cert from a registrar, you're going to be restricted to one host.

There was a discussion on HN a while back about why not all Internet traffic is encrypted:


Wouldn't it provide good additional security if you could define those HTTPS pins [1] in DNS, in TXT records?

This would be a fairly lightweight way for me to say which CA(s) I use for a particular domain.

[1] http://www.imperialviolet.org/2011/05/04/pinning.html
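For context, a pin in that scheme is just a base64-encoded SHA-256 digest. A rough stdlib-only sketch of computing one; note that real pins hash the SubjectPublicKeyInfo rather than the whole certificate, which would need an ASN.1 parser, so hashing the full DER here is a simplification:

```python
import base64
import hashlib
import ssl

def cert_pin(pem_cert: str) -> str:
    """base64(sha256(DER cert)) -- roughly the shape of the pins in [1].
    (Real pins digest only the SubjectPublicKeyInfo; digesting the
    whole DER certificate keeps this sketch stdlib-only.)"""
    der = ssl.PEM_cert_to_DER_cert(pem_cert)
    return base64.b64encode(hashlib.sha256(der).digest()).decode()

# A site could then publish something like
#   "pin-sha256=<cert_pin(...)>"
# in a TXT record (hypothetical record format, not a standard).
```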

With second thought this does not make sense.

False certificates would only be a problem in man-in-the-middle attacks, or if the attacker can alter information in DNS. In both cases the attacker could also fake the information about which CAs are valid.

Yes... why would any website that values privacy and security not use HTTPS for everything? We've been doing that from day one on https://postgres.heroku.com.

When I'm on some kind of public wifi, I always surf through an SSH proxy so that my traffic is encrypted. That helps if you're hitting up sites that don't have https.

Yes: using IPv6 IPsec everywhere, a web-of-trust model and a distributed method for key exchange (DHT?).

I may be dreaming.

For content that is already public but needs to be protected from modification, like images and scripts: couldn't it be hashed, with the hash sent along with the page you're viewing? Then the browser could download extra assets from an insecure source, like a proxy or CDN, and know they haven't been modified.
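The verification step itself is trivial; the hard part is standardizing where the hash lives. A minimal sketch of the browser-side check:

```python
import hashlib

def verify_asset(content: bytes, expected_sha256: str) -> bool:
    """True iff bytes fetched from an untrusted mirror or CDN match
    the hash the (authenticated) page promised for them."""
    return hashlib.sha256(content).hexdigest() == expected_sha256

# The secure page ships only the hash; the asset itself may travel over HTTP.
asset = b"console.log('hello');"
pinned = hashlib.sha256(asset).hexdigest()
print(verify_asset(asset, pinned))        # True
print(verify_asset(b"tampered!", pinned)) # False
```

This protects integrity but not confidentiality: anyone on the wire still sees which assets you fetch.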

So then the browsers have to implement two security systems.

Not entirely, as SSL already includes a hash. That said, I don't agree with doing it.

However, I do believe that we should investigate ways of authenticating larger downloads automatically.

"we need to"? Hey, what about you go ahead and do it yourself?

That's "simple"! You just have to solve the certificate issues. Easy as pie. I'll be waiting.

There was an article posted a while back that seems like it might be relevant here (especially with the doubting of SSL being feasible to deploy for everyone). It discussed how many sites use OpenSSL improperly or in a poorly configured way that causes it to be more expensive than it needs to be. I hate doing this, but I also have been looking for the link for a while with no success.

Gmail security is really only good between Gmail accounts, and mail is definitely stored in plain text in the googlesphere. Use PGP if you need to guarantee email privacy.

> is definitely stored in plain text in the googlesphere

have a source for that?

By definition, since Google can index email and mine it for keywords, they have access to the contents of it, which makes it isomorphic to plaintext.

They almost surely are, for a myriad of reasons, but searchability does not imply a database must be isomorphic to plain text: searchable encrypted databases have long existed and have been implemented in systems like CryptDB (http://css.csail.mit.edu/cryptdb/), which lets you set the level of encryption and how much information leakage is acceptable.

That could be done on the client side, reading the text on the page, like I assume they do for every other AdSense-enabled page.

AdSense detection is not done on the client side. AdSense knows which ads to serve on which page because it leverages the search index cache and content analysis of that page. In the same way, Google serves contextual ads in Gmail by indexing the content of each email as it comes in.

Of course, every other major public email provider in the world stores email in plaintext too, so I don't get how this is a knock against Google specifically.

It's not though. Training the data models for Priority Inbox would also require access to message contents and metadata and happens offline in batch.

That's not the same thing though. Plain text is a security risk because users tend to reuse passwords. If it's hashed then that wouldn't be the case, even if they can mine it or whatever.

If the content of the email were hashed then the recipient couldn't read it!

They could if it were a reversible key-based hash.


I believe you're thinking of a 'cipher'.

Cryptographic hashes (even keyed ones) are expected to be one-way functions.

Could be my bad here, but I said "reversible key-based", which isn't what most cryptographic one-way hashes are.

For ASCII text, modulo 13 is a reversible operation (a/k/a rot13). It's not key-based, may not be a hash, and I'm not aware of any specifically key-based hashes, but that's along the lines of what I was thinking.

Fully admitting winging this one though.

Well your Wikipedia link was about cryptographic hash functions so I figured that's what you intended to refer to.

So when you wrote "key-based hashes", I interpreted that as meaning a cryptographic hash-like function with key input, e.g. HMAC the "Keyed-Hash Message Authentication Code".
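For illustration, a keyed hash like HMAC still isn't reversible: the tag is fixed-length and leaks nothing usable about the message.

```python
import hashlib
import hmac

# A keyed hash: the same key + message always yields the same tag,
# but the message cannot be recovered from the tag (it's one-way).
tag = hmac.new(b"secret-key", b"hello", hashlib.sha256).hexdigest()
print(len(tag))  # 64 hex characters, regardless of message length
```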

Modulo 13 is different than rot13. Modulo 13 is actually a hash function, whereas rot13 is a permutation.

If rot13 took a key (e.g. if it were rotN instead) it would make a primitive cipher. But it doesn't, so it behaves like a cipher that is always used with a fixed key, or a cipher whose key is already decided in the context of discussion.

The process of applying a specific key to a cipher is called "keying". So just to make things even more confusing, we could perhaps refer to rotN<N = 13> (AKA "rot13") then as a "keyed cipher".
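A tiny sketch of the rotN-as-keyed-cipher idea, assuming nothing beyond the Python stdlib:

```python
import codecs

def rot_n(text: str, n: int) -> str:
    """rotN as a primitive 'keyed cipher': shift each letter by n.
    rot13 is the n=13 case, and it inverts itself since 13 + 13 = 26."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('a') if ch.islower() else ord('A')
            out.append(chr((ord(ch) - base + n) % 26 + base))
        else:
            out.append(ch)  # digits, spaces, punctuation pass through
    return ''.join(out)

print(rot_n("hello", 13))               # uryyb
print(rot_n(rot_n("hello", 13), 13))    # hello
print(codecs.encode("hello", "rot13"))  # stdlib agrees: uryyb
```

Note it's trivially breakable (only 25 possible keys), which is exactly why it's an example of a keyed cipher and not of security.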


What the hell are you talking about?

Google is not storing hashes of your emails instead of the emails. How is anything under discussion related to passwords?

I'm sorry, I misunderstood. The parent comment didn't mention emails being in plaintext, but just "Gmail." I took that as meaning passwords.

Scumbag CodingHorror

Says encrypt all the things...

isn't encrypted.

Did you notice that he was referring specifically to sites with accounts? There's little point in encrypting public data, especially if tracking cookies aren't involved.

We use CloudFlare. $20/month and you get your whole site encrypted plus all their other really cool features. It is a no brainer for those of us who can do it without violating contracts.

It really isn't a no brainer. Passing our users' data through CloudFlare would violate our contracts with a number of clients. I'm also dubious about the privacy implications - they're presumably getting something out of those terabytes of free traffic they're handling.

> On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead.

This is too good to be true.

I assure you that it's true. I haven't reprofiled in that much detail since but I suspect that the numbers look even better now. Partly because computers are faster and partly because of software improvements.

I still don't understand.

1% of what? Sure, if you're doing a lot of database reads/writes for every HTTP request, then yes, that makes sense. I guess in that case the actual HTTP would account for (say) 0.1% of CPU load, which makes HTTPS 10 times slower.

I think we should compare the resource consumption of creating, maintaining, and dropping HTTP vs. HTTPS connections relative to each other, not to the whole process of handling the request, querying databases, building the page, and sending it to the browser.

I'm no expert in this matter, so I could be (and probably am) completely wrong. That assertion sounded counter-intuitive to everything I've ever heard, and as it was expressed a little vaguely, I doubted it.

It largely depends upon the nature of your web service. If you are running a user-interactive site then I may buy it. However, if you are offering a high-TPS, high-throughput web service, I assure you the cost of switching 100% of your users to SSL is not negligible and has a real impact on the customer experience.

I think agl is familiar with both kinds. His company tends to serve both the HTML and the APIs through the same set of load balancers.

That's a little dismissive. Perhaps we're just talking about different levels of scale.
