Marking HTTP as Non-Secure (chromium.org)
399 points by diafygi on Dec 13, 2014 | 231 comments



I think the best time to do this would be soon after the Let's Encrypt free CA [1] starts handing out certificates. There's no longer a good reason not to have HTTPS, so that's a good time to start applying a little pressure and adding the incentive for website authors.

Does google already reward secure sites with higher search rankings? I can't decide if I think that's a good idea or not, but if they want to push for a more secure and free web, that's definitely another avenue.

[1] https://www.eff.org/press/releases/new-free-certificate-auth...


There is a very good reason to not secure your site if you are dependent on ad revenue. When I switched to https, my earnings dropped by about half. I tried it for a week, but I couldn't take the losses.

When you have a secure site that uses Adsense, Google only serves ads from secure ad servers. This shrinks the pool of competing bidders significantly.

For me to switch back, either Google needs to push advertising networks into securing their ad servers, or the nonsecure warning will have to be really big and scary.


I recently switched my (small and ad-free) blog to https-only and I saw a sharp drop in traffic. I highly suspect that is due to various malicious robots and crawlers not handling either https or redirects from http to https correctly. For instance, attempted comment spam fell to almost zero after the switch.

Considering that at least part of the ad clicks comes from robots (there was a recent post about that on HN), it might be that some of your lost revenue was due to that effect.


According to my CloudFlare analytics for the last 30 days (I draw on them because they try to measure bots):

    2,230,909  Page views
    2,094,927  regular traffic
      126,337  crawlers/bots
        9,645  threats

While there certainly are a lot of automated visits, they are still a fraction of what appears to be legitimate traffic. Furthermore, I'm pretty sure that CloudFlare blocks or challenges (edit: malicious) bot traffic, so I doubt that is the spoon that is stirring the pot.

I am not alone in reporting this: https://www.seroundtable.com/https-google-adsense-19035.html

It is a serious issue, one which I'm sure is hindering https adoption around the web.


I also switched my small and ad-free blog to https-only and I haven't noticed a drop in traffic. On the other hand, even if I had, I wouldn't have cared.

I'm proud to contribute to a saner Internet, and it matters even for small blogs, because I've noticed networks that inject content into websites. I don't know how widespread this practice has become in the US, but a couple of years ago, while traveling there, the WiFi networks in the two motels I stayed at were injecting ads into the websites I was visiting. I found that to be extremely distasteful.

For me HTTPS is a way of signing my content. Shameless plug - https://www.bionicspirit.com/ :-)


This is a really interesting observation. Can you expand on what "shrinks the pool of competing bidders significantly" means? For instance, when creating an AdWords campaign, is there a setting/option that advertisers must opt into, and if they don't, they won't be considered for secure-only advertising?


Here is relevant information about it taken from this page: https://support.google.com/adsense/answer/10528?hl=en

"HTTPS-enabled sites require that all content on the page, including the ads, be SSL-compliant. As such, AdSense will remove all non-SSL compliant ads from competing in the auction on these pages. If you do decide to convert your HTTP site to HTTPS, please be aware that because we remove non-SSL compliant ads from the auction, thereby reducing auction pressure, ads on your HTTPS pages might earn less than those on your HTTP pages."


To answer your question more directly: besides AdWords, Google manages the ad inventories of a dozen or so third-party ad networks. Lots of the display advertisements that appear on the web are served via these ad networks. Many of these connections are not encrypted and, as you probably know, if a single image on the page is not encrypted it jeopardizes the security of the connection. To maintain the integrity of the connection, the nonsecure networks are eliminated from the bidding process. Fewer bidders, lower final value.


> if a single image on the page is not encrypted it jeopardizes the security of the connection

But only in the sense that the article text could say e.g. "implement the authentication algorithm according to illustration #42" and illustration #42 could have been maliciously replaced with an image showing an incorrect implementation, right?

A script served over an insecure connection, on the other hand, would give the attacker access to the DOM and compromise the entire page (and other pages on the site with AJAX).

So does the fact that ads need to be served securely imply that they have the ability to execute JavaScript in the context of the page? By serving ads (whether encrypted or not) am I trusting every advertiser on the network with the session cookies of all my users, essentially allowing them to intercept communications between the site and its users?


I can't speak too much about this because it is on the fringes of my knowledge. All I can say is that I trust Google's systems to screen for malvertising. I remember there was an incident recently where one of the ad networks that they manage was serving malicious JavaScript, but they caught it pretty quickly and blocked that network from serving ads.

I do not believe that I can improve on their systems.


> if a single image on the page is not encrypted it jeopardizes the security of the connection

Or, you know, loading untrusted javascript onto your page could also jeopardize the security of your site.


I'm guessing it means that ads that lead to http:// urls get binned.


No, ads that originate from such urls get binned.


This is what's holding me back as well. I'm happy to put in whatever development effort is required to get everything on https, but many of our advertisers through various networks (including Google's) don't yet support it.


Have you sent them a request asking when they will support https? The more customers request it, the more likely they are to prioritize adding support. You could even frame the request around Google weighting https more in search rankings.


> There's no more good reason not to have HTTPS

Cost. Today I host my website for 6 cents a month as a static page on Amazon S3.

To go https, I'd have to first acquire a certificate (luckily that can be free, and with Let's Encrypt it will be). Then I have to find someone to host that certificate. I can pay someone hundreds to thousands of dollars in setup and monthly fees.

Or, the cheapest option I've found is to get a $6/mo VPS as a frontend, put nginx on it, and put my cert there. The problem is that it costs 100 times what I pay now, I have to maintain a server, and my site is no longer served from multiple redundant servers like it is now.

Or I can use Amazon's free SNI support, making it so that older browsers can't see my site and I have no way of blocking people from using http or redirecting them.
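
A quick way to see what non-SNI clients would get is to compare the certificate a server presents with and without the server name extension (example.com is a placeholder):

    $ openssl s_client -connect example.com:443 -servername example.com \
        </dev/null | openssl x509 -noout -subject
    $ openssl s_client -connect example.com:443 \
        </dev/null | openssl x509 -noout -subject

If the two subjects differ, browsers without SNI support will be served the wrong certificate.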

There is currently no good, cheap option to do SSL only that is viewable by everyone.

That's why I haven't switched yet.


I understand the principle, but let's not get too philosophical about this: you ran reddit's servers and worked at Netflix in their availability department. Cut out a morning coffee and you have enough money to run SSL.

SNI works with anyone on Windows > XP, and any mobile / Apple / Linux OS you'll find visiting your site. How many IE6/XP hits does your site get per month?


Even on Windows XP, it is only Internet Explorer that doesn't do SNI. Chrome and Firefox have no problem.

The bigger problem right now is Android 2.x. But the lifetime of phones is only about 2 years and the market-share of Android 2.x has dropped below 20%.

And it really depends on the audience. The market share of all Internet Explorer versions on my blog is less than 4%, of which Internet Explorer 11 represents more than half.


> The bigger problem right now is Android 2.x. But the lifetime of phones is only about 2 years and the market-share of Android 2.x has dropped below 20%.

Below 10% even[1]. Agreed on device lifetime. Unlike PCs that can stick it out much longer, give it another year and it'll be well below IE6 levels. Can't come soon enough.

[1] https://developer.android.com/about/dashboards/index.html


The problem here is that increasing the difficulty or learning curve of a system is always unpopular.

If SSL were truly as trivial as a checkbox, then we would see very wide adoption.


>The problem is that costs 100 times as much as what I pay now

The "100 times as much" is not that impressive, if it merely goes from 6 cents to 6 dollars.


I suggest looking at the world beyond your immediate environs.

6 cents vs 6 dollars is the difference between a person in a low-income economy spending 0.1% vs 18% of their income on a website.


That works out to an income of about $33/mo. I think that person has a lot more to worry about than where to host their static site.


My source for that figure was the World Bank[1], which gives a GNI per capita of USD 400 per year for the Democratic Republic of Congo.

[1]: http://data.worldbank.org/country/congo-dem-rep


Which in big chunks of the western world is not that much, but it will raise the barrier to entry for many in poorer nations. For commercial sites, maybe I see a benefit. For everything else... why bother? The info isn't secret, doesn't need to be, and since there's so much data going around already, is anyone malicious bothering to look?


And what do you think that means for websites that are 100 times as big? They go from being rather cheap to host to being money drains.

We need better infrastructure.


Cloudflare provides a free ssl service.


A free MITM service, you mean.


With ECDSA-only certificates, which are even less well supported by old browsers than SNI.


NearlyFreeSpeech.net offers comparable value to S3, and you can use apache features like .htaccess for redirection. Only problem is that it's still SNI, if that's important for you.


Sounds like an opportunity for a startup if I've ever heard one.


Yeah, more nickel and diming.


> Does google already reward secure sites with higher search rankings?

Yes, although only recently and only slightly.

http://googlewebmastercentral.blogspot.com/2014/08/https-as-...


I still haven't heard a reason why it's important that my blog is on https. Waiting.


1) Sometimes it's not about you.

Currently, the NSA (and others, presumably) consider the presence of encryption as part of their is_suspicious() heuristic. Other people do have need for encryption, and by saying "I (currently) have nothing to hide", you are saying that you are fine with a high correlation between "uses encryption" and "is doing something suspicious". More than any other reason, we need to dilute that correlation until all data looks similar to remove the possibility of this kind of categorization.

2) https://www.philzimmermann.com/EN/essays/WhyIWrotePGP.html

As Zimmermann said, we need to socially normalize the use of envelopes instead of the postcards that are currently used. Without that social expectation, it will be possible to legislate against the use of encryption in the future.

3) It lets us (in the very long term) simply retire 80/tcp

...and plain HTTP servers in general. Sure, this is a minor benefit, but it would still be nice.


While those are all valid reasons they land on the "greater good" side of the scale, which doesn't have the obvious "what's in it for me" I think many people are looking for.

My thoughts

1. Not having secrets - MITM isn't necessarily conducted by malicious attackers, but that doesn't mean it isn't bad. Consider for example a company that wants to identify usage behavior and buys traffic data from an ISP. While the data may be anonymized, it's still someone's usage. With https, a webmaster limits the info those companies can get: instead of being able to run a complete analysis on the text a user reads and the images they see, over https they could only tell what websites you go to. That's still pretty bad, but not as bad.

2. Cost - I do see the value of cheap hosting on S3 and getting redundancy. I've been hosting servers from the days before AWS existed (I started young) and know one thing - if you can't afford something you probably don't need it.

Why does your $0.06 site need to have a multi-node setup? I don't mean to sound like a jerk, but if you had the kind of traffic a multi-node + SSL site needs, you'd probably have the funds to invest in it. It's really not very expensive, considering a cup of Starbucks coffee costs 100 times what you currently pay for hosting...

If your content isn't secret, why not go with a cheap SNI-capable host for your certificate and put that behind CloudFlare (which is free)?


> "What's in it for me"

I'll ignore the obvious selfish nature of this question and simply point out that you may need to take advantage of that "culture of always encrypting everything" at some point in the future. It is incredibly short-sighted to assume that you're not ever going to be a target.

> "not having secrets"

You can look up the numerous rebuttals of this very well-known fallacy.

> Cost.

It's probably worth mentioning that I am currently living on SSDI (social security disability income) thanks to some unfortunate medical issues. I cannot actually afford any PKI cert and related costs, even $20 ones.

Well, the EFF may soon have a free solution for this, and almost all of the benefits I list are still valid even when the crypto relies on a self-signed certificate automagically generated by apache on first use.
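
(For the record, generating such a self-signed certificate is a zero-cost one-liner; file names and the domain are placeholders:)

    $ openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -subj '/CN=myblog.example' -keyout key.pem -out cert.pem

The only thing missing is a trust path, which is exactly the part Let's Encrypt promises to make free.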

I would love to see more options that address the cost issue - secure communications should not be limited to those that can afford various economic barriers, but for now at least some solution exists.


What's wrong with StartSSL for your use?


You assume they are always available and always do business with everybody? I had an account with them once, but they shut down the account and declined to reinstate it. I don't know why.

So no, they are not an option for everybody.


> Not having secrets - MITM isn't necessarily conducted by malicious attackers, but that doesn't mean it isn't bad. Consider for example a company that wants to identify usage behavior and buys traffic data from an ISP.

Really? Do you think we're all sheeple who are happy to have every facet of our lives tracked? You don't think that someone has a database of every taboo thing you considered buying or seen online, every contrarian political article you've read, etc.? You don't think that they're sitting on this cache until they find a way to sell it to anyone who will buy it or score you some way in a Big Data metric? I know people in that industry. They tell me the public isn't ready to handle how much information is for sale about them.

Viewpoints like these feed the sheeple's naive belief that they themselves are good people, so the corporations, government agencies, and hackers that spy through and exploit the gaping chinks in the armor of the Web would certainly have no reason to target such good citizens.


You may have misunderstood my point as what I was aiming at is that MITM is always bad for us even if it's not done by malicious attackers but rather by for profit companies.

By not encrypting traffic, webmasters who think they don't have secrets are really just selling their users. That's bad.


>Other people do have need for encryption, and by saying "I (currently) have nothing to hide", you are saying that you are fine with a high correlation between "uses encryption" and "is doing something suspicious".

If those agencies had a problem with https, they wouldn't let a Google team popularize it.

Https is, in all likelihood, as transparent to them as a piece of glass.


Obviously, given that the TLAs can just national-security-letter a CA (if that's even necessary). That doesn't change anything about my recommendation. You should still use HTTPS, always.

It still has the effect of making your traffic not stand out from anybody else's in a DPI. Also, the TLAs are not the only attackers, and HTTPS may not be transparent to them.

The key feature is that it requires a MitM. That is not easy or cheap compared to simply catching everything with a passive beam-splitter. The idea is that it's easy to get bulk data with XKEYSCORE/PRISM, but anything requiring QUANTUM, FOXACID, and other fancier tools cannot be cheap, undetected, and used against everybody all at the same time.


1) Why not implement passive encryption? Like StartTLS in SMTP? No need for HTTPS; could be part of HTTP/2 over port 80.

2) Pinning?

3) No, let's not do that. I want to be able to access my sites from my 2-year-old devices that don't support SNI, like Android 2.3.


Do you really care about accessing sites with 10-year-old mobile devices using only stock apps (emphasis: long term)?


Because it prevents your users' ISPs from inserting extra ads and analytics into your pages.


While looking at my own (ad-free) blog I saw Amitabh Bachchan selling me life insurance. I don't want my readers associating me with this: http://www.hkyantoyan.com/wp-content/uploads/2012/09/AB-bina...

Switched to https shortly after.


And also from them tracking what you read to give you specific advertisements.

Man, it's really all about advertising these days...


Because I can inject code into your page that infects visitors to your blog with malware. The NSA's QUANTUM system does exactly this, waiting for the targeted user to load a non-HTTPS page, then injecting javascript to redirect to an attack site.


AFAIK HTTPS does not by itself prevent this kind of attack. You need to authenticate the server as well as encrypt the connection or you could just be talking very privately to the NSA.


The browsers do a pretty good job of authenticating servers, and we have a few projects in place to look for rogue certificates. It's not perfect, but it will only get better.


Not that many rogue certificates to look for.


Uh, of course it does. If it didn't guarantee the server you were communicating with had the private key for the certificate it is presenting, it would be worthless.


Well, of course the NSA would have the private key for their own certificate. Having the private key for a previously unknown certificate doesn't, in and of itself, prove that you are not the NSA.


Yeah, because the NSA and such don't have access to certificate authorities in the first place, right?


Which is why you are supposed to throw out untrustable Certificate Authorities.

In the same way as TURKTRUST was thrown out by all vendors a few years ago, nowadays you should throw out VeriSign and GoDaddy just as well.


>In the same way as TURKTRUST was thrown out by all vendors a few years ago, nowadays you should throw out VeriSign and GoDaddy just as well.

Have vendors done it? Because users will surely not bother.

Besides, what makes the other CAs trustworthy?

They are just some companies, with offices, CEOs, etc. Can have ties to the government, deals, pressure on them, or even just plain planted engineers...


Some users might want to be sure that the content of your blog as is rendered on their screen is truly the content of your blog. Or perhaps there are users who do not want others to know which specific posts on your blog they have viewed.


I get your point, but you're kind of blowing it out of proportion. A lot of people like myself host really mundane content, and that concern is really the last thing on the minds of those site owners and their users. You want HTTPS when it's critical that no one manipulate the request and response, but for most of us it's not worth the expense and effort. You want SSL on WebMD, Healthcare.gov, your bank website, and those political sites you read when no one is around, but if some extra tracking gets inserted into the response body of our favorite cat picture site, then I think only you and I will notice or care.

SSL everywhere isn't yet practical, mostly due to the expense. It's not that much more effort to secure a site, but when you run 10 sites you're already spending $100 a year on domains; the expense of an SSL certificate for each on top of that makes it impractical for solo "webmasters" to secure all their sites. We all know why we should use HTTPS, and we do it when it makes sense, but it's just not practical 100% of the time yet. Like others have said, this will make more sense once the EFF initiative starts being adopted and getting a free certificate is as easy as apt-get secure-me-please.


Ok sure, your blog has mundane content. But you might use the same login for your blog as you do for your online banking (with minor changes to the password to make it more "secure", like adding a kid's birth year, which people often do). Say you happen to log in at a coffee shop: anyone sniffing the WiFi can now pick up your login because your traffic wasn't encrypted.
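
(If that sounds theoretical: anyone on the same network can read plaintext HTTP, form fields included, with nothing fancier than tcpdump; the interface name is a guess:)

    $ sudo tcpdump -i wlan0 -A 'tcp port 80'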

Most people wouldn't be comfortable with a stranger looking over their shoulder while they logged in. This is the same thing, only you don't think about it.

These things are ALL rare, but why would you want to expose yourself to this?

SSL everywhere is also about improving security for those who don't realize that they might be engaging in behaviors that compromise their own security.


> But you might use a the same login for your blog as you do for your online banking

That is a completely different problem that using https won't solve. It's like building a ship with a hole at the bottom and having a high throughput water pump.


Yes it will, because passive listeners on the network won't be able to catch your password. Only your blog and your online banking will be able to get it, as is intended.

Also note that as tech-savvy people, we have more responsibility in ensuring our users are safe, even from themselves. Sure, the better thing to do would be to educate everyone so they don't reuse passwords. But it will take time, and using HTTPS in the meantime decreases the chance for them to be pwned.


Think of SSL like vaccination. It provides necessary immunity for those rare cases where something bad would happen.


Some users might not like my font choice either, but I don't think the browser should judge my site to be inferior because of that.


If your site has auto-playing sound, would you be opposed to chrome adding the speaker indicator on the tab to alert users that your tab is causing the sound?


A speaker icon isn't making a judgement as to whether that sound is good or bad. Turning no-ssl into a warning is like having the browser judge you for being poor or cheap or incompetent. Not having HTTPS does not mean that you are insecure, it means the site isn't encrypted. There's a difference. Whether the lack of SSL makes you vulnerable to shenanigans depends on a lot of factors having nothing to do with SSL encryption and everything to do with the site content and what one may gain from snooping on your request or manipulating the response. Sometimes there's nothing to gain there.


Some sites are in theory fine without encryption or an alert. Others are definitely not. The problem is that the browser (software) can't possibly know the difference, especially because which sites "should" be encrypted is a matter of circumstance and varies from user to user. Some might say that the user should know the difference, but I think we can reasonably call that a huge cop-out.

So in order to show an alert whenever sites that should be encrypted aren't, you just have to show an alert all the time. The SSL everywhere movement and Let's Encrypt are about making encryption easy enough for sites like yours that it's practical to do that.

Basically, your site being encrypted, even if it doesn't specifically need to be, helps to improve security of the web as a whole.


At least the browser knows the font choice is yours. The issue here is not with the content of your blog itself, but with making sure its integrity is preserved. Unless you use HTTPS, you don't know what content is sent to the browser, and you don't know who received it.


There is no reason. Your readers will probably not care that the browser tells them your cat pictures are being delivered to them in a non-secure manner.

You shouldn't put in the work to switch over either. But in 2 years it will be easier for you to have a https blog than a http blog and you will naturally switch.


What about when a trusted cat blog starts providing instructions harmful to cats? Or a recipes site sends incorrect information regarding allergies?


Consider the idiocy of Slashdot not using SSL on their pages. It's now known that British intelligence services carried out MITM attacks on this popular tech watering hole to attack targets of interest. Slashdot knows this has happened and still, no SSL. Sure, they are not processing payment transactions on-site, but they can serve as a MITM puppet, and they don't seem concerned about this, surely because it might impinge on Dice's ad revenue.

This is why all my blogs/sites are on SSL (or being converted, in one case). Do you think people really check those md5sums on the downloads from your OSS project page/blog? Make that unnecessary and use SSL. BTW, my CloudFront charges only went up 5% (i.e. a few bucks) and all my certs cost $2 each because I stocked up during a sale. It's not a matter of money for most admins.

The question is not "Why SSL?" but "Why not?"


If you have any https links on your site, those could be replaced by http links using something like sslstrip. Your site is now the weak link in the chain google->your site->linked site.

Of course sites should be using HSTS to help prevent this, but if the user is visiting the linked site for the first time, HSTS can't protect them.
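
For reference, HSTS is just a response header, so you can check whether a site sends it with curl (example.com and the policy values are placeholders):

    $ curl -sI https://example.com/ | grep -i strict-transport-security
    Strict-Transport-Security: max-age=31536000; includeSubDomains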


Someone posted it further down the comment thread, but anyway, here's a post I wrote on why you'll want HTTPS even for a static site:

https://www.bitballoon.com/blog/2014/10/03/five-reasons-you-...


Here's a good one - some networks inject ads or trackers into the content your blog serves over them. HTTPS is a good way of signing your content.


> Does google already reward secure sites with higher search rankings?

They do: http://googlewebmastercentral.blogspot.de/2014/08/https-as-r...


I have no idea what you people are talking about. All of my websites are https but the problem I have is that Google is NOT serving data from secure servers. Open the chrome console on my webpage here and look at all of the errors and warnings. All because Google refuses to serve secure content despite people complaining for YEARS in the forums.

https://www.websmithing.com/gpstracker/displaymap.php

Can't Google spend a little more time securing their own websites instead of telling everyone else to secure theirs?



On a related tangent:

In the new secure-by-default world, what happens if a website has some large assets (e.g. video, game files) and would like to opt into caching by any transparent proxies that may be on the user's network? The site could embed hashes of the assets in question, and use ServiceWorker to transparently inject a hash check into any fetches (no matter what HTML thingy initiated them), but I think requests to http: from https: are always blocked as mixed content - let me know if I'm wrong. Also, this would still send things like user-agent, language, any non-secure cookies, etc., and allow the unauthenticated server to set them; it would be good to have a way to opt out of all these things, just sending a minimalist HTTP request instead.

Of course, even if these issues are fixed, any system allowing caching of assets must allow tracking of what assets were requested; for some sites this is unlikely to provide much info beyond what the domain name already indicates, but for others it would be best to avoid. On the other hand, if assets have sufficiently distinctive sizes, a proxy might be able to guess this information over HTTPS anyway through simple traffic analysis, so it wouldn't be that harmful to run it over HTTP instead.


One of these days, I'd like to have a scheme for referencing secure third-party content that includes hashes. I'm never going to link to a third-party server for some JavaScript file that could be changed out from under me, but if I can link to it on a CDN along with a sha256 or similar, and have it grabbed from my own server instead if the hash doesn't match, great!

I've seen a few attempts at that, but nothing that has actually taken off. It needs an appropriate polyfill, so that in browsers that don't support it, either the content gets downloaded and hashed client-side, or the content just gets downloaded from the same origin.
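
Computing the hash itself is already easy; something like this would do (script.js is whatever file you're pinning):

    $ openssl dgst -sha256 -binary script.js | openssl base64 -A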

> In the new secure-by-default world, what happens if a website has some large assets (e.g. video, game files) and would like to opt into caching by any transparent proxies that may be on the user's network?

The whole concept of a "transparent" proxy will hopefully die in a secure-only world. If you want some server on your network to MITM your traffic (for caching or any other reason), use a non-transparent proxy. (Or, install a CA certificate from your proxy, but hopefully widespread use of certificate pinning will kill that too.)


For what it's worth, http://w3c.github.io/webappsec/specs/subresourceintegrity/ is actively being worked on, and implementations in Chrome and Firefox are on the way in the near future. The polyfill issue is handled in that proposal as described in http://w3c.github.io/webappsec/specs/subresourceintegrity/#f... : your "src" attribute points to a server you control, but your "noncanonical-src" points to the CDN, and a browser that knows about "noncanonical-src" is also supposed to know about the "integrity" attribute and ensure that the content it gets hashes to what you expect.
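
To make that concrete, a sketch of what the markup might look like under that draft (URLs and the hash value are placeholders, and the attribute syntax may still change):

    <script src="https://example.com/js/lib.js"
            noncanonical-src="https://cdn.example.net/js/lib.js"
            integrity="ni:///sha-256;nnnnnnnn"></script>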


Just as a further set of links regarding subresource integrity in Chromium, you can try it today on Canary Chrome behind the "Enable experimental Web Platform features" flag. You can follow the feature's development here: https://code.google.com/p/chromium/issues/detail?id=355467

In fact, we're having a rather heated discussion on the security-dev@chromium.org mailing list right now, if anyone wants to follow along: https://groups.google.com/a/chromium.org/forum/#!topic/secur...


This is very interesting; thanks!

I am not very familiar with hashing algorithms, but why isn't the length of the content specified in the `integrity` attribute? Wouldn't it help avoid length extension attacks?


A good digest algorithm should already handle length extension attacks with just the digest. HMAC constructions handle that as far as I know, and I think modern hashes don't suffer from the length extension problem in the first place.


In any case, length extension attacks apply to (hashes misused as) keyed message authentication codes - a scenario where you give a message and its authenticator to a server knowing the key, and it checks whether someone knowing the key generated them. The attacks let you, given a message and its authenticator, calculate the authenticator for a different message. However, in this case, the hash cannot be tampered with; breaking integrity would require finding a different message with the same hash, which is a straight-up collision, considered to totally break a hash algorithm. At the moment, the only commonly used hash algorithm with known collisions is MD5; SHA-1 will probably follow in the next several years.


I perhaps used the wrong term. I meant straight-up collision with a modified length.

The known MD5 collision attack needs to modify the length of the message; finding a collision with the same length is extremely difficult. It would be reasonable to assume that attacks on other hashing algorithms would suffer the same constraint.

Wouldn't having the length specified in the integrity attribute help reduce chances of a future attack? The cost of specifying the length is negligible; about 12 bytes for a base-64 encoded 64-bit unsigned int.


You're essentially using the length as a(n extremely weak) secondary hash in that case. That might be sufficient to disable one known MD5 exploit, but if you're worried about vulnerabilities in your primary hash algorithm it would make more sense to use another real hash algorithm for your backup.


Thanks for the answer. I am really not familiar with the mathematics of hash algorithms and I ask just to learn.

Why is length essentially a weak hash? Isn't it an additional constraint that works orthogonally to the hash? It serves to restrict the space of collisions and hence directly reduces the exploit surface. Moreover, the length can be verified independently of the hash function, and its space and time complexity is negligible.


> Why is length essentially a weak hash?

It's utterly trivial to match by itself. Adding length to a real hash is a mild difficulty increase. Adding a second hash is a massive difficulty increase.

> Isn't it an additional constraint that works orthogonal to the hash?

Yes. But so is a second hash.

> It serves to restrict the space of collisions and hence directly reduces the exploit surface.

Very inefficiently.

> Moreover, the length can be independently verified of the hash function, and its space & time complexity is negligible.

A second hash is independent of the first hash too. Hashing a second time compared to downloading and hashing the first time is pretty close to negligible.


> I'd like to have a scheme for referencing secure third-party content that includes hashes.

All that's really needed is a convention that a URL parameter of the form "example.com/.../sha3hash-nnnnnnnnn" indicates the secure hash of the page to be served. Cache systems can cache such pages, but if they change them in any way, the change can be detected.

This removes the need to encrypt publicly available static information. It doesn't require a secure certificate. Most importantly, it means you can use a content-delivery network without letting it have MITM privileges on secure content.
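
A sketch of the idea (hash abbreviated to "nnnnnnnnn" as above; names are made up):

    $ sha256sum page.html
    nnnnnnnnn  page.html
    # published as: http://example.com/page.html/sha256-nnnnnnnnn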

HTTPS Everywhere means Cloudflare gets to see all your users' passwords. That's not a good thing.


Encryption makes sense even for publicly available static information. You're not just protecting the contents of the information; you're protecting the knowledge that that specific user accessed it.

And I certainly wouldn't advocate giving a CDN permission to MITM your own domain. Give it its own dedicated domain, serve content from that domain via HTTPS, and don't let that domain have any user-specific information.


> And I certainly wouldn't advocate giving a CDN permission to MITM your own domain.

That's how Cloudflare works. At least 36,000 domains let Cloudflare act as a MITM for them. Including "news.ycombinator.com".

This is the price of "HTTPS Everywhere" security theater.

Also, if you know the IP address and the length, you can often figure out what static content was accessed.


> This is the price of "HTTPS Everywhere" security theater.

HTTPS everywhere isn't security theater. It prevents ISPs and coffee shop wifi snoopers from intercepting unencrypted traffic. Combined with certificate pinning et al., it also protects users against those governments that don't control the CDN that serves the HTTPS traffic.


It's not like CloudFlare can't see passwords if you don't use HTTPS. I don't think it's security theater, because CloudFlare being compromised is only one out of a large class of potential attacks.

That said, I fully agree that it would be nice to not have to trust CloudFlare.


Why would you link to some third party javascript if you could tag it with an sha256 hash?

The idea of third party javascript hosting is to make caching across sites possible. But the hash is a better way to do that. And unless your visitors are all international, they're going to have a better cache miss experience.


This made me wonder why we don't just have a standard for signing content sent over HTTP. The original server would give the signature to the CDN, which would put the signature in an HTTP header, and then the browser could check that signature against the public key from the main site's SSL cert. Now you know that nobody could have gotten in the middle and sent you a malicious script, and you didn't have to encrypt an innocuous public asset.


I'm pretty sure that pages marked as secure shouldn't contain unencrypted content. If people on the same wifi network can see what YouTube videos I'm watching, that's hardly a secure connection, is it?


One of the surveillance attacks pointed out in the post is the NSA piggybacking on advertising cookies. Details in the Snowden leaks were scant, so we did some research to figure out just how far the NSA could go with this technique. Very far, as it turns out. Here's a blog post with a link to our research paper: https://freedom-to-tinker.com/blog/dreisman/cookies-that-giv...

One of our conclusions was that tracking companies switching to HTTPS would help, but a large majority would have to switch to make any difference, because of the sheer number of trackers (Section 4.1). This proposal or something like it is probably necessary if we're to see that magnitude of change.



If you make a change like this too quickly, you run the real risk of users seeing the "warning" so often that they just get trained to ignore it (even more than they already do).


A fair argument, but what's the gradual approach? It has to be all or nothing.

The idea behind this is to use it as an impetus for sites and services to move to SSL/TLS.


There's a very interesting idea at the end of this proposal: that the more-aggressive warnings arrive based on telemetry of the preponderance of secure interactions. By changing over time, and in a manner sensitive to the real mix of user interactions, there's less risk of habituation to "oh, all sites show that warning".

You could even make the switch based not on the browser's entire usership, but on an individual user's recent past. (Not sure this is a good idea, but it's an interesting one.)


> A fair argument, but what's the gradual approach?

Site blacklists (a list of sites that should be secure.) Start with all of the banking and payment sites. Then add in sensitive topics to the person's country (eg atheism, homosexuality, piracy, etc.) The list would work a lot like the phishing/malware blacklist.

If anyone has the algorithms and data for that, it'd be Google. Of course, they'd then risk it being easier for governments to demand browsers block content for them. So it's a double-edged sword.


One possibility for a gradual approach: sites which have both HTTP and HTTPS servers are marked insecure when the user browses to the HTTP version. The knowledge about the presence of both HTTP and HTTPS can be hard-coded for popular websites* and/or inferred from history in the client.

* I mean popular websites that don't send the HSTS header, such as reddit.com. Those that do send the HSTS header would be subject to automatic redirection anyway.

PS. We are considering this option in the gngr browser (https://gngr.info). We are also considering going further and not loading the HTTP page automatically. The user would need to press "OK" to proceed.


Excellent. While they're at it, maybe they should stop marking self-signed SSL as more of a security risk than plaintext HTTP.


Yes! I have 4 sites that I bought an SSL certificate for, out of around 20 that I run. If browsers wouldn't throw a warning for self-signed certificates, every last one of my sites would have been secured from the start, easy. The only thing separating a self-signed cert from one obtained from a CA is that the CA has some of your contact info, and even then it wouldn't take a whole lot of effort to bypass their checks.


     The only thing separating a self signed cert
     from one obtained from a CA is that the CA
     has some of your contact info
Most CA certs are domain validated these days, which means you have to demonstrate control of the domain to get the cert.

    it wouldn't take a whole lot of effort to
    bypass their checks
It's pretty hard. If you get a CA to issue you a cert for a website you have no control over that's newsworthy and would get serious scrutiny put on that CA.

The CA system as it is isn't good, because we have to trust so many different CAs for it to work, but if CAs were widely issuing certs to the wrong parties we'd hear about it more.


Uh, also the fact that in a MITM situation I can trivially replace your self-signed cert with my own self-signed cert.


Which you cannot do on a plain-text connection?

A non-plaintext connection is still more secure than a plaintext connection, so it shouldn't ever get a worse warning than a plain-text connection.


If browsers want to treat it the same as plaintext (no warning but also no lock icon) I think that makes sense. Aside from the false sense of security, I agree that it is better than plaintext.


Not with certificate pinning.


That's a very weak security solution. In some situations it's better than not using HTTPS at all, but not by a lot.


Why is it "very weak"?

It's substantially stronger than ordinary certificate authorities without certificate pinning in many ways. (Namely, an entity out of your control (a certificate authority) being compromised / exploited / coerced doesn't also compromise you.)

The weak point is at initial connection (i.e. before you have the certificate pinned, or if the certificate changes legitimately and you have no way of confirming that fact). However, even in this case it is no worse than without pinning.

(I wish that HTTPS had a certificate-passing mechanism, i.e. if the given certificate doesn't match the pinned one, you contact a site whose certificate you already have and ask it for the certificate it believes belongs to the site in question. Do this with multiple sites for the same website and you'll have a good idea of whether someone is trying to MITM you. You'd have to have rate limiting, etc., but it would in many ways solve this problem. Unfortunately, it's something that would have to be built into the protocol, or else it would be blocked often enough to not be useful (ICMP and firewalls, for example).)


In the extremely common case of visiting a site for the first time on a new device or user agent, it provides almost no security at all. There are other problems, but that's a pretty big one.


...and the CA one actually protects you from MITM attacks (assuming your machine doesn't have nasty root certificates installed).


You mean nasty root certificates like VeriSign and GoDaddy, which the NSA frequently uses for MitM?

You should do the same to those certificates that you did to TURKTRUST, just throw the US CAs out.


    VeriSign and GoDaddy, which the NSA frequently uses for MitM?
Link?


Snowden, 2013?


Actual link? I don't remember reading this in any of the leaked documents and I can't find it now, though I could have missed it.


Agree that self-signed SSL certificates are treated as if they were the red-headed step-children of SSL. Perhaps if movements like Let's Encrypt[1] take off, self-signed certs will be a thing of the past.

[1]: https://letsencrypt.org/


What is the difference between Let's Encrypt and StartSSL?


I am hopeful that Let's Encrypt will do better than StartSSL's SHA-1 and paid revocation.


StartSSL will use whatever certificate digest algorithm you used in your certificate signing request. Most openssl.cnf files distributed with Linux OSes set the default algorithm to SHA-1 - that has nothing to do with StartCom.

Simply specify an explicit algorithm if you want to get a certificate using that. For example, if you do:

    $ openssl req -new -sha256 -newkey rsa:4096 -keyout foo.key -nodes

and give them that CSR, you will get back a SHA-256 certificate.

EDIT: They also have a SHA-256 root (in most browsers, though you don't need a second-preimage-resistant digest algorithm for a /root certificate/) and SHA-256 intermediates at https://startssl.com/certs/ - go to the relevant class directory and there is a sha2 directory inside that.


StartSSL's interface is a huge pain. Let's Encrypt is hoping to offer things like modules for Apache and Nginx that make them take care of acquiring certs automatically, though we'll see.


Oh my god yes.

I had to install a CA cert in order to be able to just BROWSE the CCC website last month. Chrome's aggressiveness on self-signed certs wouldn't even give me the option to accept the risk. I want to browse one website, not add a whole new fucking signing authority to my browser.

Google's behavior on this topic makes me feel like half of their security team is manic and regularly falls off their meds. Their company's business model is about monetizing personal details of their users, and they act like they own the only privacy opinion that is right.


Agreed.

I think there is a possible fix for this issue that won't put grandma's bank account at any greater risk: put the requested level of security in the URL. So if the resource is httpq:// (or whatever), it means that we don't care if we are subject to MITM attacks, and self-signed certs are OK. Then when grandma goes to an https:// site and the identity of the site is questionable, we can forbid the connection entirely. She could use the httpq:// form if she wanted, but the bank could forbid that from their end by simply not accepting such connections (it would likely be implemented as a separate port). Other sites that are willing to trust their users' judgment would just allow both connections.

The root problem here is that the current system does not accurately take into account the intent of either the user or the provider. So the browser can not be entirely sure and then has to ask obscure/awkward questions after the fact.

Edit: Dunno if there is an easy way for a bank to stop someone from deliberately switching the URL to httpq:// in a MITM situation. So the intent would only be accurately represented for the user sometimes.


So why bother with httpq at all when we have plain http?


You mean do some sort of opportunistic encryption on http:// connections? Wouldn't that just make things ambiguous again?


They currently are more of a security risk because if you tell the user their connection is secure when it's not (no verification of self-signed certs) you impart a false sense of safety.


If anything, we should have a huge "you cannot pass this page" warning if the page we are trying to access is not served over HTTPS.

Instead we have a system where we do nothing when you access a page that is not secure, and put up an obnoxious warning when you access a page that is somewhat more secure.


If you perform certificate pinning, self-signed can be just as secure. I wish the message was slightly less scary, perhaps saying as SSH does, "you've never visited this site before, do you trust this certificate?"
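
For reference, the "pin" is typically a digest of the server's public key, which you can compute with openssl (cert.pem is a placeholder):

    $ openssl x509 -in cert.pem -pubkey -noout | openssl pkey -pubin -outform der \
        | openssl dgst -sha256 -binary | openssl base64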


(user clicks "yes" and proceeds oblivious)


as opposed to unencrypted HTTP?


I can teach my computer-illiterate grandma simply "never do email unless the address bar is green"; she can remember that.

As opposed to: never do email unless the address bar is green, except when:

- It's the first time you visit the web site
- You use another browser
- You bought a new computer/tablet/phone, reinstalled your computer, etc.
- You accidentally cleared your browser history
- You are on a public wifi the very first time you visit a web site
- The website changed their certificate since the last time you used it
- You happen to be unlucky and even at home you are under a MITM attack the very first time you visit the page

The last bullet is especially troublesome, because even a programmer would have a hard time judging that one.


Telling the user nothing would not impart a false sense of safety.

Heck, why not mark HTTP as insecure, don't mark self-signed HTTPS, and mark CA HTTPS as 'secure'?

Of course, CA HTTPS is not really secure at all, but that's another discussion entirely.


Then don't tell them it's secure? Just hide the https:// and don't show any padlock-icon, neither broken nor locked.


Exactly. Their list indicates "Secure", "Dubious" and "Non-secure" but I think "Non-secure against active attacks but secure against passive attacks", as self-signed HTTPS is, should be distinguished from "Non-secure".


This is just pointless scaremongering.

Remember what happened when msie added warnings for this a decade ago?

People got so fed up with "security"-warnings that they just clicked "OK! OK! Whatever. Get the fuck out of the way!". And they did it to ALL warnings, serious ones too.

Glad to see history repeating itself.


I think it's fine they are doing it, but success will depend on how they go about it.

For example, they should probably start by changing only the address bar icon to show the site is insecure - as they propose, perhaps making it yellow instead of white. Then after a year or so they could show that icon in red. After another year, they could give a light pop-up warning, and after a year more, they could put up an aggressive malware-style red pop-up warning (hey, it's not too far from the truth when NSA and GCHQ are firehosing and datamining all plain-text connections...) that says the connection is insecure.

Then after a year more, they could even grey out or take out the "continue" button, and only provide a small link instead, so most people run away from that site.

I think 5 years (2020) is a reasonable time period to get to that point. If 7 years after the Snowden revelations we don't even have most of the Internet secured with HTTPS, then we really suck and deserve the totalitarian regimes coming at us (it won't be just the 5 Eyes doing mass spying in 2020).

Besides, it's not what the users think that matters, it's what the web developers do knowing that in 2 years their site will have an insecure red icon and in 3 years it will have a pop-up warning, and in 5 years users will essentially be driven away from their site. Even if 90 percent of the traffic keeps clicking through the warnings, can they live knowing their site shows that to the users? So this is a battle for convincing web developers, not users, that they should be securing their sites.


Agreed.

There are so many websites on the internet which do not gather sensitive data from the users but display read-only content.


If you had checked my web browsing over the last week, it wouldn't be very hard to make an argument that I should be in a psych facility, based on the number of suicide-related searches I did. In all cases it was purely static content, but in the wrong hands it could be a huge issue for me.

I am only posting it here to prove a point: even static content can reveal a lot.


I don't think SSL is all that great for protecting broad interests.

If you are going to ten different domains in the same span of time that all contain suicide content and someone is snooping your connection, they can correlate what you're doing from the server names (especially if one of the domains has the word 'suicide' in it), even without seeing the page content or path portions of your web requests.

For exclusive content sites, it's a dead giveaway. If someone went to my domain (byuu.org) in HTTPS, then it's pretty obvious that they were interested in emulation, regardless of the encryption. There's already tons of services out there categorizing domains on the internet.

SSL's primary benefit is for form submissions, not for static content pages.

For something like that, your best bet at the present time is a service like Tor. Which even that isn't really perfect.


It's a good point, but most people don't care about that or about government surveillance. Any friction introduced by things they don't care about will be seen as an annoyance and ignored at best. And since the vast majority of websites are likely to stay on http forever, warnings won't do much good and will probably get disabled again in the future.

The good news is: more sites will switch to https.


And it would be so much easier to make a murder look like a suicide with a (public) search history like that.


Reading certain articles reveals sensitive information about you, the reader, too.

Do you really want every node in the network to know that you like NSFW content or are heavily into My Little Pony?


Would https be better in that regard? (e.g. tracking of visited URIs by the network)


Yes, because the URI is also encrypted in HTTPS (although it can get leaked in other ways; see the discussion at http://stackoverflow.com/questions/499591/are-https-urls-enc...).


But the host is not, and many services exist to categorize the content of domains already. What is the statistical difference between innocuous-site-with-every-kind-of-content-ever.com/friendship-is-pornographic and mlp-fip.com? If more sites are like the latter, then HTTPS will only hide which MLP pictures you are looking at, and not that you are looking at MLP porn. And if a site like the former became too large, we'd have to worry about the government issuing secret trace/tap requests against them.


That thread misses the most important way: the length of the request and the length of the response. On most small sites, the combination of the two will be enough to uniquely identify what page you're visiting.


Synthesis: the Chrome Security Team would like to put a :( on all non-secure HTTP connections. With gradual deployment, and an increasing share of active sites moving to HTTPS, it is assumed that users won't become trained to ignore this as a warning signal.

| Then, in the long term, the vendor might decide to represent non-secure origins in the same way that they represent Bad origins.

The biggest disadvantage of the proposal, as it stands in the current CA climate, is that its psychologically successful deployment will impose a liability on site operators to touch their sites at least once per CA renewal timeframe. There are many sites where this isn't desired, feasible, or even possible at all, but which, despite being non-secure, still serve giant heaps of high-quality information. So this will increase the SEO and accessibility gap between sites run by geeks and sites (attempted to be) run by everyone else. Whether this is a desired future of the web is left as an exercise for the dear reader.


The Let's Encrypt project will have a daemon that automatically renews the certificate.


From the article:

" We’d like to hear everyone’s thoughts on this proposal, and to discuss with the web community about how different transition plans might serve users."

...

"You do not have permission to add comments."


It's pretty obvious they're intending the discussion to happen on mailing lists.

    We’d love to hear what UA vendors, web developers, and users think. Thanks for reading! We are discussing the proposal on web standards mailing lists:
    public-webappsec@w3.org
    blink-dev@chromium.org
    security-dev@chromium.org
    dev-security@lists.mozilla.org
You make it sound like they're eschewing discussion.


Great. So with this proposal, all website operators that don't pay their dues to the Certificate Authority cartel will be marked as "bad" sites. I'm all for restoring trust, but not before the CA issue is resolved.


Ever since I realized there will be a persistent log of my browsing history maintained by one or more government agencies, I have mentally marked http as risky. I cannot wait for something like this to happen, and for the wider, less-informed community to learn that TLS is critical both for trust (properly implemented, TLS provides protection against tampering and data leakage) and for a minimum level of privacy (I can see you visited site x, but not what sections of it). Bring it on.


Hmm, there are a lot of sites out there. So many small sites with their own domain name. Local establishments, projects, clubs, communities... that will be hell to migrate, with non-obvious benefits to most site owners.

This might force more centralisation of the web again... just move your content to ESTABLISHED_GLOBAL_PLATFORM_X instead of that vhost at SMALL_LOCAL_PROVIDER_Y.


If Let's Encrypt comes pre-installed with cPanel, and domains are https by default when created, then as long as shared hosting providers update cPanel, all small sites will be SSL by default. But then we'd have the problem of people continuing to use 'http://' in <a> tags.


Really not happy with the way Chrome is going on these things. As a web developer, sometimes the best thing for my users is an HTTP media file served off a non-SSL CDN close to their home, cacheable by things like a Squid proxy in front of them, embedded in my otherwise-HTTPS page. But I can't do that now, because Chrome will display the page as broken. They are taking useful tools away from developers in the name of improving users' lives, without really improving anything except for a few privacy nuts who care about every tiny image on every page possibly being tracked.


That is easily solvable: make internet connections fast enough that caching is no longer an issue.


It is rarely the speed of the connection that makes caching an issue; much more frequently it's the cost of generating the data and the cost of maintaining that data after generation.


Marking HTTP as non-secure implies that HTTPS is secure. The large company where I work has quietly rolled out the Zscaler HTTPS proxy and my colleagues are unaware that, despite the assurance given by the green padlock in their browser, their connection is most definitely not secure.


That is because browsers obey a false root certificate installed locally, even when they would otherwise pin the certificate. This is a deliberate choice, but it doesn't mean HTTPS isn't secure; it means that you can't trust a computer you don't control.
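If you suspect that's happening, one rough check (a sketch using Python's standard library; example.com stands in for any site you care about) is to pull the certificate your machine is actually served and look at its issuer, which under an intercepting proxy will be the locally installed root rather than the site's real CA:

    import socket, ssl

    ctx = ssl.create_default_context()
    with socket.create_connection(("example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            cert = tls.getpeercert()

    # Flatten the issuer RDNs into a dict for easy reading.
    print(dict(rdn[0] for rdn in cert["issuer"]))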


In that case isn't the problem that their employer installed their own root certificate? The employer could just as well have installed any manner of malware. In the scenario where you are not the administrator of the computer, nothing is secure.


My vote for the icon to use would be a small frown or disappointed face. It's different than the icons for broken https, but still gets the point across that something isn't so great.

Here's what I'm thinking of: https://cdn0.iconfinder.com/data/icons/smile-emoticons/78/Em...


Why not a postcard for insecure and an envelope for secure? They might be hard to distinguish as small icons and could be confused with email too easily, but otherwise the message should get through.

It can't be worse than the padlocks with different shades of green that we have today. Does anyone actually understand them? Here on HN I get a gray padlock, on some other site I get a green padlock, and on yet another site I get a padlock plus a fancy badge with the website name in it. It doesn't help that every browser keeps changing how these look every 6 months, either.


It looks too pessimistic. I'm sure it would depress a great deal of people who view it.


> We know that people do not generally perceive the absence of a warning sign. Yet the only situation in which web browsers are guaranteed not to warn users is precisely when there is no chance of security: when the origin is transported via HTTP.

That is a very strong point. User perception of the browsers' current UX treatment of the various security scenarios poorly represents the true level of security. An insecure HTTP site (no warnings) looks safer than an HTTPS site that happens to reference a single non-HTTPS image (mixed content warning). That asymmetry is unfair: despite the mixed content, data submitted in a web form to the HTTPS site is still encrypted, yet the UX treatment affords it less trust.

Separately, how should Extended Validation (EV) certificates fit into this plan? Especially after the final proposed transition phase, T3: Secure origins unmarked. Personally, I think EV certs have become kind of a racket in the CA industry. But they exist nonetheless, and we should push for more consistent browser treatment of site security in general, including consistent treatment of EV certs.


I think that can be scary for the wrong reasons, like a whole part of the web being marked as "non-secure" where it should be marked as "non-encrypted."

Security-wise, MITM (man-in-the-middle) attacks over HTTP are a fraction of the real security threats (phishing, which can use HTTPS!, keyloggers, malware, browser vulnerabilities...).

It will give too much weight to HTTPS websites, or too little to HTTP websites.

Moreover, SSL certificates are expensive!


Maybe show the "http-is-insecure" warning in the address bar only when the user fills in form inputs?


This would be a great first step, maybe a new T0.5 in the transition plan. Get people on board by securing the data they are sending around, and then secure the pages they are reading later.


But what if the site loads JavaScript from an insecure origin?


Great move. We also need free/cheap wildcard certs.


Even with HTTPS, if you compromise the server or the client you can do whatever you like. So my awesome server with all ports open and 'admin'/'123' as the login on everything serves some content, but has a valid certificate, and the browser says: "This Connection Is Secure!"

Meanwhile a well-managed server with no open ports, private-key auth, etc., but which serves content without HTTPS, gets "This Connection Is Insecure!" from the browser.

Seems a bit misguided to me. Also feels a little corrupt and profit-driven (by CAs)... Why doesn't Google offer free certificates? That seems an obvious (and trusted) way to spread more certified websites.


I'm not sure I follow your argument. You do realize that you are talking about two different kinds of attack channels, right?

Cleartext-based attacks and snooping (middleman, network peer) are simply far easier and far more likely.

You know those happen all the time, right? Literally all the time for certain users in certain places. A cleartext exploit is source-based (or route-based) and the server owner can't ever do anything to fix it besides forcing encryption (or shutting down completely to that source).

In contrast, a server compromise is destination-based and will theoretically only exist for the time that it is not known to the server provider.


On the other hand, cleartext-based attacks are also easier to detect, since the traffic is plainly visible. The maliciousness doesn't always have to be outside, despite all the recent focus on surveillance; and when it isn't, a "secure" connection makes it even harder to detect until it's too late.

Here's a recent demonstration of this principle - "smart TVs" phoning home via an unencrypted connection: http://arstechnica.com/security/2013/11/smart-tv-from-lg-pho...

If that was over HTTPS, would such data collection have been as obvious or even discoverable? It would be completely indistinguishable from any other "phoning home" - e.g. to legitimately check for software updates. The same encryption technologies that purport to protect us from mass surveillance... can be used to do it even more stealthily, and this is the main concern I have with making encryption ubiquitous.


Interesting, though I don't know how this is really relevant to the debate about whether it's appropriate to tell a user that HTTP is insecure but HTTPS is secure (the comment I was replying to questioned that exact point).

That's because the technology clearly exists to hide the type of phoning-home you are talking about. Any move toward more HTTPS for end users doesn't seem to increase that risk to me.


Someone explain this to me: if I use HTTPS, would I still be protected from my own service providers (ISP/carriers)?

Surely anyone who can see the handshake can also decrypt what follows. I ask because I'm assuming this move is a reaction to all the NSA buzz that's been in the media (assumption).

And I would figure the only way all the spying was going on is that one of the parties we depend on for our internet service, anywhere from the ISP to the end server, was compromised.

So how would HTTPS make a difference?


> Surely anyone who can see the handshake can also decrypt what follows.

If I understand your claim here, it seems that you aren't yet familiar with key pair cryptography.

It's not enough for someone to witness the handshake - they need to actually possess the 'private' key of a party in order to decrypt traffic that has been encrypted with that party's 'public' key.

It's an amazing feat of mathematics; the fact that it is possible suggests, at least to me, that the physics of the universe in some sense favor the evolution of verifiable private communications.

Here's the wikipedia article on key-pair crypto (often simply called "public key crypto"): http://en.wikipedia.org/wiki/Public-key_cryptography
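To see why witnessing the exchange isn't enough, here's a toy Diffie-Hellman key agreement (a sibling of the public-key scheme described above, and the kind of exchange TLS actually uses) with deliberately tiny, insecure numbers:

    import random

    p, g = 2_147_483_647, 5          # public: the eavesdropper sees these
    a = random.randrange(2, p - 1)   # Alice's secret, never transmitted
    b = random.randrange(2, p - 1)   # Bob's secret, never transmitted

    A = pow(g, a, p)  # sent in the clear
    B = pow(g, b, p)  # sent in the clear

    # Both ends derive the same shared key; the eavesdropper, who saw only
    # p, g, A and B, would have to solve a discrete logarithm to recover it.
    assert pow(B, a, p) == pow(A, b, p)

Real deployments use the same idea at key sizes where that discrete logarithm is infeasible.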


Thanks!


> Surely anyone who can see the handshake can also decrypt what follows.

No. That's the whole point of SSL/TLS.

http://security.stackexchange.com/questions/6290/how-is-it-p...


The NSA doesn't need to MITM you; they can just ask the IP you are communicating with what you did.


This sounds like a good idea and a step in the right direction. However, working in the embedded world, no one has yet created a system for using HTTPS that does not involve a lot of complex steps to prevent the browser from warning of an insecure link.

It would be nice if all browsers worked more like Firefox, where it is easier to add exceptions.

Going forward, I can imagine the conversation I am going to have: "why does the browser say that my traffic controller is insecure?"


I hope they'll label mixed content pages the same way as plain HTTP, it might motivate all the big websites that still get this wrong to get their act together.


A better approach would be, in my opinion, to only show the insecurity warning when a <form> or other user input is present; I don't really see the use of HTTPS everywhere for small static websites.

DNS will still leak most of the websites you visit, so anonymity is not really the issue that would be resolved.


Here are five reasons why you might want to serve a small static website over https:

https://www.bitballoon.com/blog/2014/10/03/five-reasons-you-...


DNS will only leak which hostname you visit, not which pages you access on that hostname, so HTTPS definitely resolves a real privacy issue.

Furthermore, the presence of a <form> element looks like a very bad proxy for 'protect-worthy' information to me. Websites that require an established user session need users to send a cookie with every request. Those requests need to be secure, or the sessions can be stolen.
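The usual mitigation, for what it's worth, is to mark session cookies so the browser will never send them over plain HTTP in the first place (a quick illustration with Python's standard library; the token is made up):

    from http import cookies

    c = cookies.SimpleCookie()
    c["session"] = "d41d8cd98f00b204"   # hypothetical session token
    c["session"]["secure"] = True       # only ever sent over HTTPS
    c["session"]["httponly"] = True     # not readable from JavaScript
    print(c.output())
    # -> Set-Cookie: session=d41d8cd98f00b204; HttpOnly; Secure

But that only helps if an HTTPS version of the site exists at all, which is the point.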


Then fine, if there are sessions too; <form> was just an example.

You're right about the path, but for the majority of users this doesn't matter: a website often has a specific subject (porn, warez, etc.), so if someone sees the DNS lookup for the site, that pretty much gives it all away to an interested middleman.

Read: my opinion is that it's useless for most people and problematic for old infrastructures. There are as many negative effects as benefits, or more, so it doesn't seem worth it.


It can matter a lot which page you're visiting. Also, DNS can be secured later.

When the EFF project to make and update free certificates for everyone launches, there's no reason not to put everything on HTTPS.


The work of installing an HTTPS-capable web server, getting and installing a free certificate, and supporting all of that still needs to be done by someone, and there is no such someone willing to do it for free for millions of websites. The cost is still significant for the majority of websites.


Noooooooo! Why must the entire web be secure? I might run some micro news site for my local church that I update manually with (S)FTP and html files - why on earth should that connection be encrypted?

Bad security should be marked as bad. No security is not inherently bad.


An attacker could advise your trusting church-mates to download and run an application that turns out to be a virus while they believe it's from you.

A real but milder story - a customer of mine once complained about the advertising on my website being slightly offensive. I didn't have any advertising. When I investigated, it turned out the advertising was being injected by malware on his own computer. Not that HTTPS would have solved that, but I've heard of ISPs doing similar things where it would be prevented.


So your real world example would still be possible... that's unconvincing...


I disagree. No security really is as bad as broken security.

I'm guessing you think your church website isn't worth securing because it doesn't have any sensitive content. But in a world where surveillance is pervasive, that's not something you should depend on. For example, if religious discrimination were to lead to members of your church being harassed because of their viewing habits, then the argument that the content isn't sensitive doesn't seem so strong anymore.


> For example, if religious discrimination were to lead to members of your church being harassed because of their viewing habits, then the argument that the content isn't sensitive doesn't seem so strong anymore.

HTTPS doesn't hide the IP or even the hostname (SNI is sent in cleartext) of the site you're connecting to, nor the IP of the client, so it'd still be trivial to determine who is visiting the church's website - just not exactly what pages on the site they've viewed. You need something more like Tor or stronger to protect against that.


Securing is different from encrypting.

Most tracking of people is done by advertising and marketing companies. Should we mark all websites with advertising as insecure?


As much as I'd love that (seriously, not advertising per se of course but most types of cross domain tracking), something tells me that initiative is not going to originate from the Chrome team...


Ads are first-party content, in the sense that they are under the control of whoever is serving the webpage. One would hope that if the content provider were concerned about privacy, they would not choose to serve ads that violated that privacy.

On the other hand, using HTTP would open an otherwise harmless content provider to potential ad-insertion attacks by third parties. So in that sense, HTTPS really does matter here.


Security isn't just about protecting you from eavesdropping or data theft. It's about protecting the integrity of your content. ISPs and wifi hotspots can MITM you and inject advertisements or otherwise modify the content of your website in midstream. No security means there is no assurance that the page you are looking at is the page you intended to see. I'm sure your church doesn't want parishioners complaining about ads for porn showing up on your website just because they accessed it at a sketchy internet cafe.

And no security isn't inherently bad, but a browser warning doesn't have to be judgmental, it just has to be informative. Warning for a bad cert or a self-signed cert but not displaying any warning for an unsecured connection is misleading, as it implies an unsecured connection is more secure than a self-signed cert. By warning in some cases, the browser has taken responsibility for providing information about connection security; it should do the best job it can at that, and that should mean warning users that unsecured connections are unsecured.


The point here is not that any given site needs to be secure, but that by having all sites secure, the whole web is better off. The downside for you is that you'd need to do some extra work, but the upside is that you get to enjoy a much more secure web. For everyone who doesn't have to deal with upgrading to https (which is by far most people who use the web) it is only a good thing.


Personally, I'd be concerned if the government had a list of people who go to my local church, even if it was only approximated as website visitors.


No encryption is worse than bad encryption and should also be marked as such.


I have been looking into setting up SSL for my blog. There is currently no truly free way to get SSL certificates. There are some free ones, but they tend to come with strings attached and lure you into paid plans. I am not sure this proposal is for the best.


Today that's true, but hopefully this EFF project will change that next year: https://www.eff.org/deeplinks/2014/11/certificate-authority-...


StartSSL is free and simple to use unless they changed recently. The only downside is that certain UAs don't trust the StartSSL CA, notably Java.


The free version doesn't come with wildcard support, so if you run multiple things off your VPS you will have to upgrade. I currently use a self-signed certificate for everything except public-facing content, for which I switch to http.


You could use SNI and support the majority of clients.
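For the curious, this is roughly what SNI-based certificate selection looks like server-side (a minimal sketch with Python's ssl module, 3.7+; the hostnames and cert paths are hypothetical):

    import socket, ssl

    # One SSLContext per hostname, each loaded with its own cert/key pair.
    contexts = {}
    for host in ("blog.example.com", "git.example.com"):
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain(f"/etc/ssl/{host}.crt", f"/etc/ssl/{host}.key")
        contexts[host] = ctx

    default_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    default_ctx.load_cert_chain("/etc/ssl/default.crt", "/etc/ssl/default.key")

    def pick_cert(sock, server_name, original_ctx):
        # Called during the handshake with the SNI hostname the client sent.
        if server_name in contexts:
            sock.context = contexts[server_name]

    default_ctx.sni_callback = pick_cert

    with socket.create_server(("0.0.0.0", 443)) as srv:
        with default_ctx.wrap_socket(srv, server_side=True) as tls_srv:
            conn, addr = tls_srv.accept()  # handshake picks the cert per SNI

Clients too old to send SNI (notably IE on Windows XP) just get the default certificate, which is the "majority of clients" trade-off.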


SSL Certificates are free. It's the authority you pay for.

What needs to change, in addition to this, is that the interstitial warning page for a self-signed certificate needs to go away.

Having a self-signed cert > http.
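Indeed, you can mint one yourself in a few lines (a sketch using the third-party 'cryptography' package; the hostname and file name are placeholders):

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"example.com")])
    now = datetime.datetime.utcnow()
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # self-signed: issuer == subject, no CA involved
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .sign(key, hashes.SHA256())
    )
    with open("cert.pem", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))

No money changes hands; what you can't self-issue is a chain back to a root the browser already trusts, which is exactly what the interstitial complains about.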


> Having a self-signed cert > http.

It really depends on what exactly you are talking about. For a Man-in-the-middle attack, your statement is false. For passive dragnet surveillance, your statement is true.

I think people underestimate MITM attacks...


Doesn't really matter, does it? Even if MITM attacks are 99% of all attacks, a self-signed certificate doesn't leave you any worse off. Better yet, you could use a root certificate trusted by most, rather than all, browsers and still secure the bulk of your traffic (easily the 80+% that runs a modern browser, minus the few virus-infected XP machines), which would enable actual innovation among CAs.


> For a Man-in-the-middle attack, your statement is false.

Not with certificate pinning.


On a case-by-case basis: >=

As an overall assessment: >


This seems to describe a situation where sites served over plain 'http' would pop a red 'x' or other distinguishing UI element to connote untrustworthiness.

This is almost the case already where a good number of browsers will show no positive (green indicator or lock icon) for domain validated certs - showing a preference for the much more expensive EV (Extended Validation) certs.

Screenshots of the different browser SSL level representations: https://www.expeditedssl.com/pages/visual-security-browser-s...


Sites with advertising should be marked as insecure, since they have been known to serve trojans and are also used to track people without their consent.

Will Google and Mozilla both do this? No, because their revenue depends on it.


Google and Mozilla have no problem blocking sites that use non-Google ad networks as malware http://marketingland.com/googles-chrome-browser-issues-malwa...


To all those arguing that HTTP is still fine: please go ahead and drop ssh in favor of telnet. Go ahead, I will wait. If you cannot figure out why HTTPS is more secure than HTTP, and why we should drop HTTP support from all browsers entirely, then please go read about it. HTTP should be used no more than telnet is these days.

I am not saying HTTPS is perfect as is, but I am saying that HTTP is fundamentally and practically broken at this point. It is exploited daily in many different ways, and your and your users' experience is worse because of it. Stop using it, and help others stop.

</rant>


> Secure (valid HTTPS, other origins like (* , localhost, * ));

Does this notation refer to any protocol and port on localhost? That's my guess, but the meaning of the notation wasn't immediately clear to me.


Can someone explain this part?

    Secure (valid HTTPS, other origins like (*, localhost, *));
Are parenthesis-asterisk and asterisk-parenthesis some kinds of special origin?


In modern W3C and WhatWG specs, an "origin" is either a (scheme, host, port) triple or a nonce value. The asterisks in this case are being used as a shorthand for "any origin which is a triple and whose host is localhost, no matter what the scheme and port are".
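Expressed as code, one plausible reading of that (*, localhost, *) wildcard is this (a sketch; the function names are mine, not from the proposal):

    from urllib.parse import urlsplit

    def origin(url):
        parts = urlsplit(url)
        return (parts.scheme, parts.hostname, parts.port)  # the triple

    def is_secure_origin(url):
        scheme, host, port = origin(url)
        if host == "localhost":  # (*, localhost, *): any scheme, any port
            return True
        return scheme == "https"

    print(is_secure_origin("http://localhost:8000/dev"))  # True
    print(is_secure_origin("http://example.com/"))        # False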


This is a great idea, and I say this as the webmaster of many sites that are not secure. I should be forced to get my butt in gear and fix that.


So, looking at their screenshots...

How about prefixing the non-secure URLs by "http://", so that, you know, people will be sufficiently warned that it's not "https://"?


I think this is a good start.

But it is also important to educate users that even if you are using HTTPS, some companies' internal networks might not be fully encrypted, and therefore you are not fully secure against some three-letter government agencies :)


Related discussion with, and among, W3C TAG members:

http://lists.w3.org/Archives/Public/www-tag/2014Dec/0098.htm...


They should also mark as Non-Secure the situation when you are being MITM attacked by your employer - custom root CAs on Windows boxes are quite common.


HTTP is not always insecure. Some environments (e.g. a LAN) can have network-level security (e.g. IPsec).


Since when is HTTPS secure?


Google and others rarely ever mention DNS/DNSSEC, even though just as much information is being sent insecurely in the form of DNS queries/responses.

Learn more:

https://github.com/jedisct1/dnscrypt-proxy

http://dnscrypt.org/

Check if you already have it enabled (unlikely if this is the first time you're learning about this) http://test.dnssec-or-not.com/
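To see just how readable those queries are, here's a minimal sketch that builds a raw DNS query for an A record using only Python's standard library - note that the hostname sits in the packet verbatim:

    def build_query(hostname):
        # ID=0x1234, recursion desired, one question, no other records.
        header = b"\x12\x34\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00"
        qname = b"".join(bytes([len(p)]) + p.encode() for p in hostname.split("."))
        return header + qname + b"\x00" + b"\x00\x01\x00\x01"  # type A, class IN

    query = build_query("example.com")
    print(b"example" in query)  # True: anyone on the path can read the name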


Is this working for everyone else? I get a "server not found" message.

http://downforeveryoneorjustme.com/http://test.dnssec-or-not... tells me that the server is up.


working fine here

there are others to check if you have it (I'm pretty sure you won't unless you have explicitly set it up yourself)

google 'DNSSEC test'

btw setting up DNSCrypt-proxy takes all of 10 minutes on Windows


can someone explain how we can save threads within our HN accounts? it's not immediately obvious


um...


Finally...


Long overdue. :)



