Google Will Soon Shame All Websites That Are Unencrypted (vice.com)
448 points by devhxinc on Jan 27, 2016 | 356 comments



Which is hilarious, because the reason I can't switch The New Yorker website to HTTPS is the ads, which I'm serving through Google DFP, which still allows non-secure ad assets.

In short; Google will penalize me because I use Google.

The universe has a sense of humor.


Similarly, Google claimed they would start penalizing websites that show full-page ads for mobile apps instead of showing you the website. But every single time I try to get to Gmail, or Drive, or Calendar, or any Google service on the web from a mobile device, I'm shown a full-page ad for a mobile app. Google has been doing this for years, and it's been about a year since they said they'd punish sites that do it. But Gmail still turns up #1 in search results for email, as does Calendar, etc. It seems to me that they have whitelisted themselves and choose not to punish any Google property that breaks the Google rules, despite claiming to do so.

Edit: Typically, when a service tells me "no, you can't use this service until you view a full-page ad," I just give up and don't bother continuing to the service. But the same is not true for Google: I reluctantly click through the full-page ad every single time. It's incredibly annoying that I let them get away with this and still use the services. They are so outrageously arrogant about it and it bothers me greatly, but still, I don't change.

Edit 2:

Going to calendar.google.com: http://i.imgur.com/fNRhhYx.png

First results for searching 'calendar': http://i.imgur.com/l3A5Wlh.png



You are right. I think Google wants people to get annoyed by these vignette ads if they use Google Calendar on the web. The user is supposed to install the Google Calendar app and be fully controlled by Google. Then Google can send the user whatever ads it likes...

They all prefer users using apps rather than web pages. Google and the others want full control of users, to make money...


> First results for searching 'calendar': http://i.imgur.com/l3A5Wlh.png

Well in your screenshot it seems like you scrolled down on the "calendar" search results. I get some other random thing ahead of Google Calendar, in incognito or not.

It is really annoying (I too hate those things, I would have installed the app if I wanted the app), but the click through thing only happens once in my testing. Are you clearing your cookies regularly?


I scrolled down because the top of the page was a Google Ad for Google Calendar. I chose to show the first organic result. Here's the top of the page: http://imgur.com/cO84ogZ.png


How about using software to get rid of Google ads?

I can block Google ads at the DNS level, so users never reach any Google ad at all.

What do you think?
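For example, a hosts-file (or equivalent DNS resolver) entry blackholes ad hostnames before the browser ever connects; the hostnames below are illustrative, not a complete list:

```
# /etc/hosts: resolve known ad-serving hostnames to an unroutable address
0.0.0.0 doubleclick.net
0.0.0.0 pagead2.googlesyndication.com
0.0.0.0 googleadservices.com
```

This is essentially the mechanism that tools like AdAway automate.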


Adaway was already invented ;)


The fact is, they have penalized themselves a couple of times in the past.


Mostly when obliged to do so.


They also said not to "penalize" websites based on user agent. Yet they do it and have been doing it for years.

They also said to use valid HTML etc., while they themselves didn't, for cost-saving/performance reasons. Not sure whether this one is still true.

My guess is that this list of preaching water and drinking wine is pretty long for Google. I think their view is that they know what they're breaking, so it's OK in that particular case. The rest of us have to suck it up.


Yahoo shows YMail. Bing is the only one shows Gmail as first, although Yahoo technically uses Bing. You can pretty much say Yahoo actually "put herself above others" and more guilty than Google. In fact, I don't think Google is doing anything wrong. After all, Gmail is popular, and if you are doing a Google search, the user may be interested to know Google also offer email and most likely the user is already a Google user.


> You can pretty much say Yahoo actually "put herself above others"

Do you mean `itself`? Since when are tech companies assigned genders?


Parent poster might not be from an English-speaking background.


I am not, sort of. You can refer to a country by "she", so why is it inappropriate for a company? I don't see any issues. You can view a company as a mother too.


That's an archaic and half-valid use, so stretching it to apply to a company makes it pretty much invalid.

You could try to convince people to use the word that way, but at present it's just not done. Companies are 'it' or you can talk about the people that make up the company as 'they'.


Given that they're not a native speaker, this seems over the top. Also, maybe consider Sapir–Whorf before stating universal rules.


What? I'm talking about English, not universal rules. The non-native speaker is the one that shouldn't be making assertions about what phrases do or do not have 'issues'.

Also Sapir-Whorf is dumb.


If you try to dictate how language must be used you're just being ignorant of how she constantly evolves through her use by different speakers.

So, there.


I'm not. I'm merely pointing out that such a use is going against the way English has been changing over time.


English as she is spoke does this?


One of the reasons a country has feminine gender is the association with the motherland (ie. one's native country).


Not all countries have feminine gender, just check https://en.wikipedia.org/wiki/Fatherland


I never thought about the question of whether, in languages that require nouns to have grammatical gender, particular countries may have a different grammatical gender from others, but on reflection I already know examples where they do in Portuguese: o Brasil, o Canadá (amusing to me because of the national anthem), but a Argentina, a Alemanha.

I wonder if this also happens in German; the only examples I'm thinking of offhand are feminine (die Schweiz, die Türkei) but now I'm not at all sure that there isn't a masculine one too!


Apparently Iraq, Iran, Yemen, the Congo, Lebanon, and Chad are masculine in German: https://german.yabla.com/lessons.php?lesson_id=409


Actually, I can't think of many cases where German would use pronouns with countries. The reason these are masculine is that they are typically referred to using a definite article (literally "the Iraq", "the Iran", etc.). It's more common with names of regions -- which may indicate that these countries used to be mere geographical regions (rather than sovereign nations) when the names entered the German language.

It also happens with countries like the UK, the US, the Czech Republic and so on, but obviously for the same reasons as in English.

I can't actually think of a country that's feminine in German. The "die" you often see is actually indicating plural (e.g. "die vereinigten Staaten", the United States; or "die Niederlande", "the Netherlands").


When you use pronouns for anaphora, would you use "es" for all countries, or is it plausible to imagine "er" or "sie", as with common nouns?

For example: Vor drei Monaten waren meine Mutter und ich in der Schweiz; wir haben _____ wirklich schön gefunden.

Would you accept "sie" here as a reference to Switzerland (because it was referred to as "die Schweiz"), or "es", or both? My intuition is "es", but I'm not a native speaker, and non-native German speakers notoriously over-apply "es" to inanimate things.


I'd use "es" because it refers to the experience of being in Switzerland rather than the country itself.

But Switzerland is another example of a country that is typically used with an article. Consider the sentence "Ich fahre nach ____" with a country name. It doesn't work for countries like Switzerland ("nach Schweiz" sounds wrong, you'd instead say "in die Schweiz" -- same as "nach Kongo" vs "in den Kongo").


Thanks! Can we force the sentence to be about the country itself?

- Was meinen Sie über die Schweiz?

- ____ ist schön. / Ich finde ____ schön.


Several countries take an article in German; most don't. Plural countries (the USA, the UAE) take the plural article, which in the nominative is the same as the feminine article ("die"), which makes for even more confusion, but it declines differently in the other cases ("in den USA" vs "in der Schweiz") :)

Some feminine countries: Switzerland, the Dominican Republic, Mongolia, Slovakia, Turkey, Ukraine, the Central African Republic.

Masculine, in addition to your own list: Niger (!= Nigeria), Sudan, the Vatican.

Neuter: the UK (because "Königreich" (kingdom) is a neuter noun in German), potentially others.


In their native tongues, sure. But we're not talking about Afrikaans or French, we're talking about English. And since Britannia is feminine, English would have developed with the word "motherland" representing the native country.


But if you have a child company, wouldn't you expect to associate the parent company with a feminine gender before a masculine gender? That's what I am getting at. An organization has that "motherland" feel in some way.


Not really, no. Motherland is a very specific term that's been ingrained into English most likely because of the close personal relationship between people and their native countries, which would have been Britannia for many English speakers when the language was developing. There isn't really that same deep and universal connection when talking about organizations, so a similar term probably wouldn't develop anytime soon.


Sure, but what about ships? If you've read any sci-fi, the term "mothership" should spring to your mind. Or "motherboard" in hardware.

The concept of "some larger entity that spawns smaller entities" seems to generally lend itself to the mother/daughter terminology if you want to be poetic about it.

That said, whatever happened to artistic liberties?


In Portuguese we also say motherland. I doubt it's a Saxon thing, given Portuguese is a Latin language.


This. I would've written the same by mistake.


No offense meant, but why not get the app?

I understand not wanting an app for a news website or something like that, but for something you use often, like Google Calendar, it would seem like the app would be better than the mobile page.


I do have the app. And that fact makes this double-annoying. When trying to visit a website, I'm told not to do that. That would be annoying on its own, and in fact it was for the first few years that it happened. But that's not at all what is frustrating me right now. What's super annoying is that Google claimed last year that they would penalize websites that do this, because they find it annoying too. Except they have done no such thing. I'm calling out Google's hypocrisy on who gets to show full-page ads without being penalized - Google does and nobody else?

If they want to show full-page ads and be super annoying, then fine, I'll deal with it. But don't pretend to be against it when you do the same practice yourself.


> I do have the app. And that fact makes this double-annoying.

It really just shows the sad state of mobile advertising when they're showing you ads for an app you already have.


On the other hand, I like that websites are unable to query my phone to find out what apps I have installed.


Yes I agree with you, I don't want any random website to be able to query my phone to find out what apps I have installed either.

However, if I'm on a specific app publisher's website, I wouldn't mind letting them know (through some mechanism) that I've already installed their specific app.


Sad state?

How do you expect them to know all the apps installed on your phone? And if they DID know this information, people would be up in arms about privacy, or the lack thereof.


It's google. They know that you downloaded the app from the Play Store and used the app to connect to their servers directly. They already have plenty of information.


Yes, showing you an advertisement for an app you've already installed is bad UI/UX, regardless of the reason why. On the publisher side, it's also a wasted ad impression.

I don't expect them to know all the apps installed on my phone, nor do I think they need that much information to solve this particular problem.


If they don't know it, they shouldn't make assumptions.


Why do you assume they're not being penalized?

Showing that they rank above "timeanddate.com" doesn't mean a lot.


Being ranked #1 in a Google search for 'calendar' does mean a lot. Also, let's say they are penalizing themselves, but the penalty isn't enough to change their ranking. Why, then, would they claim that they are making this change because it's better for users to not have these ads but to still run these ads themselves?

> Our analysis shows that it is not a good search experience and can be frustrating for users because they are expecting to see the content of the Web page.

https://googlewebmastercentral.blogspot.com/2015/09/mobile-f...

This would imply that they know the experience is bad for users, they know that the penalty won't hurt their ranking, and so they will continue to show the bad experience regardless? That's just as bad as them not penalizing themselves for the full page ad.


A penalty that doesn't knock down one of the largest sites on the internet can still be a big deal to everyone else.

I assume different departments run calendar and search.


Not if you want to allow attendees to edit your event.

This is missing from the android app. So you have to browse to the calendar on the web and ... this.


Google DFP allows it because publishers (e.g. The New Yorker) aren't ready to switch all their traffic to HTTPS. If they wanted to, they could flip the switch, go HTTPS, and tell DFP to serve only secure creatives.

One of the larger difficulties for publishers is that many of the 3rd-party SSPs aren't ready to go full HTTPS, so publishers are reluctant to make the switch because it reduces demand sources.

Disclaimer: Work for Google in advertising


  One of the larger difficulties for publishers is that
  many of the 3rd party SSPs aren't ready to go full HTTPS
Right, but Google can motivate the third-party advertisers to update much more effectively than publishers can; it's just that Google hasn't chosen to do that yet.

It would be easy to proxy http-only ads through a CDN that adds encryption. Or to charge a premium to http-only ad networks, and ramp the premium up over time.
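Concretely, the proxy idea could be a URL rewrite at ad-serving time. A toy sketch (the proxy host `secure-ads-proxy.example` is made up, and query strings are ignored for brevity):

```python
from urllib.parse import urlparse

PROXY_HOST = "secure-ads-proxy.example"  # hypothetical TLS-terminating CDN proxy

def securify(asset_url: str) -> str:
    """Rewrite an http:// ad asset to go through an HTTPS proxy; pass HTTPS through."""
    parts = urlparse(asset_url)
    if parts.scheme == "https":
        return asset_url  # already secure, serve directly
    # The proxy fetches the original over HTTP server-side and re-serves it over TLS.
    return f"https://{PROXY_HOST}/{parts.netloc}{parts.path}"

print(securify("http://ads.example.com/banner.js"))
# https://secure-ads-proxy.example/ads.example.com/banner.js
```

This only hides the insecure hop from the browser, of course; the proxy-to-origin leg is still plaintext, which is the same trade-off discussed for Cloudflare elsewhere in this thread.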


How do you think Google can motivate 3rd-party advertisers? I'm talking about the Rubicons, PubMatics, etc. Google doesn't have any real leverage over them, other than the fact that, as publishers do move to SSL (because of the SEO penalty for non-SSL sites), they won't use the SSPs that can't demand or ensure 100% SSL from their buyers. So in effect Google is already putting pressure on them.

I don't understand your next part at all. Who would charge a premium to http only networks? Those sites don't actually rely on Google for delivery.


This is also effectively true for the more broadly used Google AdSense (not just DFP). AdSense does work on HTTPS pages, but it screens out all non-HTTPS ads, which, of course, results in a lower CPM.[1]

[1]https://support.google.com/adsense/answer/10528?hl=en

>>In short; Google will penalize me because I use Google

+++


To be fair, sites without ads are a better experience than sites with ads.


Sure. I like free things as well.


I dunno. Sites that can't pay the bills tend not to give good experiences, due to not being able to do things. Like exist.


Depends on what the bills are for. Most sites that don't have content-production staff need twenty bucks a month for hosting, plus an occasional prod from a sysadmin.


Are there some types of content where the author, photographer, etc will produce it without direct compensation? Sure.

Are there others where that's not practical? Also, yes. Maybe not things that you need, but this problem does exist.

At the moment, there's not a model, outside of ads, that works very well for that sort of thing. There are some subscription/micropayment schemes that seem promising, but nothing that works as well as ads do.


I didn't say it was supposed to work for all sites, just that st3v3r is ignoring a huge swath of the internet by implying that nonprofessional sites don't exist.


They've been trying to nudge their customers for a while. It's just a little difficult when that's one's biggest source of income.

For example, https://support.google.com/dfp_sb/answer/4515432?hl=en


Also funny, because for many sites that run DFP or Adsense...that's their biggest source of income.

So, G is rationalizing their slow pace with the same reason that's not good enough for others :)


Google is a large company, with multiple branches.

I can't remember which service it was now, but there was some Google service that was deranked because it broke a Google search ranking policy.

It shows some integrity for the company that they're (sort of) operating their search engine objectively.

I presume that google doesn't uprank sites that specifically use Adsense versus other competing ad services?



I have a page hosted on Google Sites. It seems that Google Sites doesn't support HTTPS on custom domains either.


That is interesting. The ads team is the same group that recommended turning off App Transport Security in iOS 9 so you can run Google's unencrypted ads stack[1]. I'm sure there are two different departments fighting two totally separate wars. I've definitely seen this pattern in huge companies, where one team pushes an agenda that forces another team to reshuffle its priorities.

[1] - http://googleadsdeveloper.blogspot.com/2015/08/handling-app-...


It is really hard for DFP to not allow non-secure creatives as long as you can create 3rd-party creatives. They do try to detect non-secure assets though, so they won't run on secure pages. See: https://support.google.com/dfp_premium/answer/4515432?hl=en
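The detection side is conceptually straightforward: scan the creative's markup for http:// asset references. A toy approximation (this is not DFP's actual scanner, just the general idea):

```python
import re

# Matches http:// URLs inside src/href attributes (toy heuristic only)
INSECURE_ASSET = re.compile(r'''(?:src|href)\s*=\s*["'](http://[^"']+)''', re.IGNORECASE)

def insecure_assets(creative_html: str) -> list:
    """Return the non-HTTPS asset URLs referenced by an ad creative."""
    return INSECURE_ASSET.findall(creative_html)

snippet = '<img src="http://cdn.ads.example/pixel.gif"><script src="https://ok.example/a.js"></script>'
print(insecure_assets(snippet))  # ['http://cdn.ads.example/pixel.gif']
```

A creative with a non-empty result would then be excluded from serving on secure pages, which matches the behavior the support page describes.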


I'm jealous that you get to work for the New Yorker website. Any openings?



Yes! Send me an email to discuss: donohoe@newyorker.com


Reminds me of PageSpeed Insights bitching about assets loaded from Google (fonts, scripts, css).


The article title really, really needs an extra word: "Chrome", between "Google" and "Will". At first glance I thought it would be about the search engine, which would be a very disturbing thought indeed; it's already hard enough to find the older, highly informative and friendly sites --- which often are plain HTTP.

Nevertheless, quite convincing security arguments aside, I feel this also has a very authoritarian side to it: they are effectively saying that your site, if it is not given a "stamp of approval" by having a certificate signed by some central group of authorities, is worthless. Since CAs also have the power to revoke certificates, enforced HTTPS makes it easier to censor, control, and manipulate what content on the Web users can access, which I certainly am highly opposed to. I can see the use case for sites like banks and other institutions which are already centralised, but I don't think such control over the Web in general should be given to these certificate authorities.

With plain HTTP, content can be MITM'd and there won't be much privacy, but it seems to me that getting a CA to revoke a certificate is much easier than trying to block a site (or sites) by other means, and once HTTPS is enforced strongly by browsers, would be a very effective means of censorship. Thus I find it ironic that the article mentions "repressive government" and "censor information" --- HTTPS conceivably gives more power to the former to do the latter, and this is very much not the "open web" but the centralised, closed web that needs approval from authorities for people to publish their content in.

There's a clear freedom/security tradeoff here, and given what CAs and other institutions in a position of trust have done in the past with their power, I'm not so convinced the security is worth giving up that freedom after all...


Repressive governments somehow convincing all CAs worldwide to refuse to issue certificates for your domain is a pretty distant hypothetical. Repressive governments monitoring and altering unencrypted communications in very sophisticated ways is a reality today. It's not a freedom/security tradeoff but a freedom/freedom tradeoff.

Not to mention that, of course, access to most websites is already gated by a central group of authorities - the domain registries - which can and do seize domains. Using raw IPs is one alternative, but if you're in that kind of position, chances are you want to be a Tor hidden service anyway.


> Repressive governments monitoring and altering unencrypted communications in very sophisticated ways is a reality today

They could easily alter encrypted communications to effectively censor too, thanks to the all-or-nothing nature of encryption with authentication. Because the certificate is presented in cleartext by design, it would be pretty easy to blacklist CAs and cut off the connection if one of those is detected. Alternatively, whitelist CA(s) [1]. Analysing plaintext takes more computational resources, especially if things like steganography are used.

[1] Related article: https://news.ycombinator.com/item?id=10663843


Regarding the censorship:

It's obvious that censorship by western governments is never considered "censorship".

Only the evil enemy censors, we just have to enforce laws.

If one accepts this argument, it makes sense to argue that giving CAs more power is good — because, obviously, they don't censor, they just protect the interests of our economy.


Blockchain technology (which powers Bitcoin) could easily be used to replace CAs, or to provide an alternative that browsers acknowledge, provided enough site owners use it.

And going by the high issuance/maintenance fees the CAs charge for certificates, the industry is a sitting duck for disruption by a blockchain DNS/CA app.

I, as a site owner, can just sign my 'certificate' myself and put it on the blockchain DNS/CA app. The certificate will have my domain name and public key, and also an additional field, 'ownership sign', which is something like https://<my domain>.com/ownership_sign.pem (signed by my private key).

So if I am the true owner, I can self issue as many certificates to myself as I please. Or there could be some forced limitation to prevent any scalability (cough) challenges.

So, the problem you have pointed out is not really with enforcing/encouraging HTTPS, but with the entrenched CA bureaucracy. And I am really surprised, why is it not being disrupted already?
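A minimal sketch of what such an append-only registry could look like, using a plain SHA-256 hash chain as a stand-in for a real blockchain and omitting the actual key signatures:

```python
import hashlib, json

def record_hash(record: dict) -> str:
    # Deterministic digest of a record's fields
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_record(chain: list, domain: str, pubkey: str) -> list:
    """Append a self-asserted domain -> public-key binding, linked to the previous record."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"domain": domain, "pubkey": pubkey, "prev": prev}
    chain.append({**body, "hash": record_hash(body)})
    return chain

def verify_chain(chain: list) -> bool:
    """Check that no record has been altered and the links are intact."""
    prev = "0" * 64
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if rec["prev"] != prev or record_hash(body) != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

chain = append_record([], "example.com", "-----BEGIN PUBLIC KEY----- ...")
assert verify_chain(chain)
```

A real system would of course need consensus on who gets to claim a name first (which is exactly what schemes like Namecoin attempted), not just tamper-evidence.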


Tor is a tool for circumventing censorship. HTTPS is an important part of using Tor to surf the web: 1) it protects the user from bad exits that could inject malicious javascript into a page and 2) some exits refuse HTTP connections and only allow HTTPS.

Maybe HTTPS makes it easier to censor in theory, but in practice it helps fight censorship by enabling Tor.


There are already points of centralization at the domain registrar and DNS layers.

That was a major (really the major) basis of the fight against SOPA--it would have required ISPs to interfere with DNS resolution as a way of shutting down serial copyright infringers.

And the U.S. federal government can already seize domain names for some reasons.

So, the question is: does the value of pervasive over-the-wire encryption outweigh the risk additional centralization via CAs? Right now I think it does, but that is in part because I believe that the CA infrastructure itself will improve over time.


What we really need is opportunistic unauthenticated encryption with key pinning as a fallback between CA-signed https and plain http. Beating mass passive snooping is worthwhile even if MITM is still a risk.
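Trust-on-first-use key pinning, the fallback described above, is easy to sketch; the fingerprints here are just SHA-256 digests of whatever public-key bytes the server presents:

```python
import hashlib

pins = {}  # host -> first-seen key fingerprint (a real client would persist this)

def fingerprint(pubkey_bytes: bytes) -> str:
    return hashlib.sha256(pubkey_bytes).hexdigest()

def check_pin(host: str, pubkey_bytes: bytes) -> bool:
    """Trust-on-first-use: pin the key on first contact, reject changes afterwards."""
    fp = fingerprint(pubkey_bytes)
    if host not in pins:
        pins[host] = fp   # first contact: accept and remember
        return True
    return pins[host] == fp  # later contacts must present the same key

assert check_pin("example.com", b"key-A")       # first use: pinned
assert check_pin("example.com", b"key-A")       # same key: ok
assert not check_pin("example.com", b"key-B")   # changed key: possible MITM
```

This defeats passive snooping and naive MITM after first contact, but an attacker present on the very first connection can pin their own key, which is why it's a fallback rather than a replacement for CA validation.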


The Fenrir project does something like this. It first establishes an encrypted connection, and then you can authenticate, or not. The authentication can also be federated.

It's pretty cool, but it's not production-ready.

GNUnet has multiple layers and does bottom-up encryption at the lower levels.


I agree! There is TCPCrypt, for example: http://www.tcpcrypt.org/


Consider this:

- Squarespace doesn't support SSL (other than on their ecommerce checkout pages) [1]

- Weebly only allows it on their $25/mo business plan [2]

- Wordpress.com doesn't support SSL for sites with custom domains [3]

- If you've never experienced the process of requesting, purchasing, and then installing an SSL certificate using a hosting control panel like Plesk or cPanel, let me tell you–it's a nightmare.

All that to say, this is an interesting development that will leave a large % of small business websites with a red mark in their browser.

[1] https://support.squarespace.com/hc/en-us/articles/205815898-...

[2] http://www.weebly.com/pricing

[3] https://en.forums.wordpress.com/topic/support-for-https-for-...


Then maybe those platforms will finally implement it. In any case, there's an alternative: putting Cloudflare in front of the site. In fact, Google shows me a guide to do so when I search for "squarespace ssl".

Of course, that's hardly as secure as end-to-end HTTPS, but still, I trust the path between CF and SquareSpace much more than between the user's browser and SquareSpace.


Please do not put Cloudflare in front of your site. It makes it impossible for tor and VPN users to view your site since they have to solve an impossible captcha to even see the static content.


It's possible to turn off security in the CloudFlare control panel. I think the bigger issue is that CloudFlare has become a single point of interception for MITM'ing huge portions of web traffic.


I'm not sure, but I think CloudFlare will still hit Tor users with (unsolvable) captchas even with the lowest security settings.

But yeah, this NSA slide is extremely relevant to cloudflare: http://cdn01.androidauthority.net/wp-content/uploads/2014/06...


> I think CloudFlare will still hit Tor users with (unsolvable) captchas even with the lowest security settings.

That is correct. I have not been able to get past a Cloudflare captcha over Tor for any website.


I wonder how it's even allowed for CF to do this. "your site" <--- http ---> CF <--- https ---> clients is a poor solution, and it only hides the fact that the connection is actually not secure. Isn't this a misuse of many CAs' ToS, and shouldn't it result in certificate revocation? Maybe I'm wrong though.


Well, at least for several threat models, "your site to CF" is at least slightly better than "CF to clients". It's not vulnerable to things like unsecured wi-fi sniffing. Then again, neither are self-signed certs and for some reason all browsers consider them even worse than no cert.

On the other hand, it doesn't protect against government spying, but then again, I think some governments straight-up MitM HTTPS traffic anyway. For instance:

https://news.ycombinator.com/item?id=10663843


But putting CloudFlare in front of your site makes mass surveillance even easier than it would be for plaintext traffic without CloudFlare.


Yeah, I'm all for SSL shaming, but my personal site on SquareSpace is about to look like shit, and I'm a web developer. It's not going to look good if your portfolio is shown with a security warning.

I wonder if SquareSpace is going to finally fix their shit, or if I'm going to have to move elsewhere, which is going to be a pain (I went with SquareSpace because I didn't want to be assed with dealing with much of anything for a personal site).


No offense intended with this, but, as a web developer, what the heck are you doing creating your site on SquareSpace? Shouldn't you...I dunno...develop your own web site?


No offense intended, but as a web developer, why the heck would I waste my time coding things from scratch, setting up tooling, deployment infrastructure and managing yet another server when I could use a service that does all of that for me? For certain contexts it is far superior. Right tool for the right job, and all that.


Because they don't get paid to build their own stuff. They get paid to do work for clients. A variation on the "shoemaker has no shoes."


With that argument, there should be no marketing departments.


...no, the marketing department's job is to market the company and keep it in the public eye. Perhaps you're thinking of an advertising agency. And I've observed they have the same problem.

If a web developer was large enough to have its own marketing department it could maintain its own site.

In any event, I'm busy developing apps for clients all day - I don't get paid to work on my own stuff so it has lower priority.


I would much rather spend my time creating cool shit to share on GitHub and my site rather than maintain and pay bandwidth, CPU usage, etc bills just so I have a very basic blog and portfolio. Any web developer can write their own blog and website so who cares about that?


Why not just put your SquareSpace site behind Cloudflare? Then you get free SSL.


Good point! I might look into doing just that.


DreamHost now supports Let's Encrypt through their admin panel. The only instructions, however, are a community-maintained wiki page that is already outdated, referring to panel menus that no longer exist. I successfully obtained my certificate, but it was not easy.


Based on your comment, I went over to see if I could obtain a certificate.

It took all of 5 seconds for me to do - it's all automated via the admin panel now. Just tick the box to make your site secure. Looks like DH has resolved any initial issues they had.


Plesk built a Let's Encrypt extension: https://devblog.plesk.com/2015/12/lets-encrypt-plesk/


I've managed to get Let's Encrypt working on a shared hosting environment using letsencrypt-nosudo in less than half an hour from the time I started cloning the repo to finally pasting the cert into cPanel. And every step in that process except the final one of installing the cert using cPanel can be automated.


The --webroot plugin for Let's Encrypt is as easy as pointing it at /var/www.
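For reference (domain and path here are placeholders), the webroot flow is a single command: the client writes a challenge file under /.well-known/acme-challenge/ in the directory you point it at, and the CA fetches it to verify control of the domain.

```
# No need to stop the running web server; the challenge is served from the live webroot.
letsencrypt certonly --webroot -w /var/www/example -d example.com -d www.example.com
```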


This is how it always should have been.

It was mind-boggling that mixed content was "insecure" but HTTP was "secure." HTTP is and always has been insecure and should be marked as such.

I know there are a few people who will moan and groan about how overkill HTTPS is, but this isn't about banning HTTP it is just about reminding users that they shouldn't be entering sensitive information into a HTTP site.

Even phishing sites should be DV secure.


Not only that, but encrypted content from an unverified source is a mortal sin for every browser out there, while unencrypted content from an equally unverified source is fine.

Go figure.


HTTP was never marked as secure.

Mixed content was marked insecure because there were assets on the page that might not be from where you think they were from. It was an indicator that the little https lock in the URL bar wasn't telling you the whole story.


I think this is at the core of Google's thinking on this: unless presented with a negative, users' assumptions are that they're secure.

Which is fair, given that I bet you'd get about a 5% or less recognition rate if you polled a random sampling of people on whether they could define "HTTPS" / "SSL" / "TLS" / "That lock thingie" to any degree of accuracy.

A server shouldn't have the opportunity to serve an insecure connection to the user without the user being made explicitly aware of that fact.


Mixed content is insecure because of active content to be very honest. Most people don't care about passive, but of course, you can make some fake banner if you are able to MiTM. You'd like javascript coming from HTTPS rather than HTTP. HTTP itself is insecure but doesn't mean every website has to be over HTTPS. However, given HTTPS is cheaper to deploy it should be encouraged. Do I really need HTTPS to show an album of cat photos I share with the world? No. But I do anyway.

However, the biggest challenge is that internal traffic is almost always over HTTP, and the reason is almost always "because a self-signed cert is invalid." In some ways this is okay-ish, since internal traffic is a darknet, but now that we have a proper toolset in Let's Encrypt, more people should consider deploying full SSL support for internal traffic as well. At this point, the toolchain to actually make Let's Encrypt simple and useful is still, ugh, a little hackish: a cron job here and there, and a somewhat complicated process to get started...
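For pages that are already HTTPS but still reference a few stray http:// assets, one mitigation (in browsers that support it) is a response header asking the client to upgrade those subresource fetches before they leave the machine:

```
Content-Security-Policy: upgrade-insecure-requests
```

This doesn't fix an HTTP-only site, but it removes most mixed-content warnings during a migration.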


>It was mind boggling that mixed content was "insecure" but HTTP was "secure." HTTP is and always has been insecure and should be marked as such.

Why is it mind boggling?

Content served over HTTP is obviously less sensitive than content served over HTTPS; mixed content breaks HTTPS.


No, "obviously less" is wrong. Here's an example: a login sent over HTTP because the site doesn't support HTTPS is definitely not less sensitive.


How is it so hard to understand that it's not just about your information? An attacker can easily inject new elements that incentivize users to enter information, or that help identify them.


Of course, but there's a general expectation that stuff served over HTTP isn't sensitive.

Breaking HTTPS where it's deliberately used is something that certainly deserves a warning.


That's true, but I think at some point HTTP should go away. The deprecation should happen. We need to get to a state where HTTPS is HTTP and there is no "HTTPS" at all. Everyone can easily get a free certificate, and commercial sites can spend hundreds if they want to "prove" more. Like I said in another comment, I don't see a problem with sharing cat photos over HTTP, but where possible, HTTPS is definitely not going to hurt. Given that most sites are HTTP, though, this will probably hurt rankings; old websites running on old CMSes won't be able to upgrade much. Similarly, no one should be running FTP. It should be SFTP, but setting up SFTP is a pain in the ass with chroot and all that. Technology really needs to be made simpler. Speaking from an ops standpoint.


> Of course, but there's a general expectation that stuff served over HTTP isn't sensitive.

For us, sure. For the other 95% of the population, not really, which is why Google is doing this.


Yeah, but I'm just explaining why warnings for mixed content are more important than for plain HTTP.

I'm absolutely not arguing against such warnings for HTTP.


Sounds good. I wonder when Google Cloud Storage will start supporting https on static websites hosted through them:

https://cloud.google.com/storage/docs/website-configuration?...

If they don't then they're not keeping up with hosting on Amazon's S3, which does support it.


Similar (very similar, since App Engine static files are served from GCS) is to write a "python" App Engine yaml file that only serves the static content with secure: always.
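A sketch of such an app.yaml (the handler paths here are placeholders):

```yaml
runtime: python27
api_version: 1

handlers:
- url: /(.*)
  static_files: static/\1
  upload: static/.*
  secure: always   # App Engine redirects http:// requests to https://
```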


Google should offer stupid SSL certificates either for free or for $1/yr.

Perhaps at least to customers of Google Domains. I wouldn't mind switching from Namecheap to Google Domains in the latter case.


They are a platinum sponsor of Letsencrypt, so...done?


That doesn't mean anything other than "we like the idea, you convinced us, we have some budget, we will sponsor in some way money and human resource."


It also means they quite literally at least assist with offering free SSL certificates.


If you're interested in more direct support, please star my ticket[1], it's likely that the same functionality would work for the https loadbalancer as well.

[1] https://code.google.com/p/googleappengine/issues/detail?id=1...


Money and human resources are what make up a company. They give that, and they put their name behind it in support. What else could they do?


Er, isn't that how it'd work internally too?


It means they are supporting a project that provides free SSL certificates. Which more than solves great-grandparent's quip.


Getting a Google domain means giving up getting new features from Google :( (pauses to clean up bitterness)


No, it does not. Signing up for google apps and choosing to use the google apps account as your primary google user account causes you to get new features on a delayed schedule.

You can get a domain through google without switching your google identity to it. You can also sign up for google apps on a non-google domain. google domains and google apps are not the same thing.


On Google Apps there are features that have been deployed years ago for regular accounts and that are still not available for Google Apps customers.

One missing feature that was painful for me was Contacts photos with a resolution higher than 96x96 pixels. On a latest-generation Android with good resolution it sucks, and I would have preferred if Contacts photos weren't synchronized at all. I ended up switching to a CardDAV provider, and in the end I gave up on Google Apps for other reasons as well. And for the record, Google accounts had this resolution increased in 2012 ;-)


Hold on. Are you saying that that's why Android's Contacts keeps resizing all the photos to like 4x4px? Wow, I've tried everything I could find on Google but it never crossed my mind that it would be related to Google Apps. Thanks so much!


TIL, though many people with a domain are going to want their accounts through it.

That said "delayed schedule" above means 1+ years.


Note that by default, Google apps stuff is on a delayed schedule compared to the general public, but you can go into your google apps profile and change to the "Rapid Release" feature deployment, which means you get stuff as soon as the general public does.


In the hopes that it will help spread adoption of HTTPS, I wrote a web server that serves your sites over HTTPS by default, using Let's Encrypt: https://caddyserver.com - It also redirects HTTP -> HTTPS.[1]

There's a lot of misinformation out there about certificates and HTTPS, but don't let it stop you from encrypting your site. Regardless of Google's move, there is no excuse for any site not to be served encrypted anymore.

[1] Here's a 30s demo: https://www.youtube.com/watch?v=nk4EWHvvZtI


This is an awkward argument. One of my sites documents how to configure servers, for example. What excuse is there that something like that needs to be encrypted?

The most legitimate reason I've heard is for privacy. I don't believe the gov't is going to lock someone up for learning how to serve web pages.


Integrity protection. There are a lot of ways to instruct someone to configure their web server in a way that is subtly insecure, not to mention attacks like http://thejh.net/misc/website-terminal-copy-paste

It'd be slightly nice if we were able to have integrity-protected HTTP without encryption (lower overhead, easier debugging with packet dumps), but the advantages are minimal (ciphers are not really the overhead, SSLKEYLOGFILE is a thing) and it's a lot of complexity to the web platform, which is a downside for web developers like you and me: the rules for mixed content between HTTP, HTTPI, and HTTPS are going to be much more involved and confusing.


You can already send unencrypted authenticated data with HTTPS.


Via one of the NULL-cipher suites? That's a somewhat expansive definition of "can" and "HTTPS," since most if not all browsers are unwilling to negotiate any of those suites. Indeed, most SSL libraries make it hard to use those suites: for instance, OpenSSL says (`man ciphers`), "Because these offer no encryption at all and are a security risk they are disabled unless explicitly included."

Which makes sense, since they'd have the exact same problems as an explicit HTTPI protocol, just even more confusing: you'd want to not send things like secure cookies across those ciphers, you'd have to handle mixed content with actual-HTTPS carefully, etc.


Using HTTP instead of HTTPS allows an evil ISP to inject, for example, ads into your website, or to modify its content in any way while serving it.


Keep in mind you're also ensuring the integrity of the document is kept and the user has (to some degree) a good idea that the document is actually from you. Confidentiality is only one aspect. I think a couple of ISPs in the US were injecting ads/content at one point into pages served over HTTP.


Consider Tor: in this case, your "ISP" is a random server on the internet. Maybe your Comcast or TimeWarner ISPs will not be malicious, but with Tor, any one in the world can register to be an exit node/ISP. HTTPS helps protect you from attacks in this "random ISP" model.


>I don't believe the gov't is going to lock someone up for learning how to serve web pages.

That's essentially the same as not locking your car doors because you feel your car isn't worth breaking into.


Sorry, but I actually can't load the website because of an HTTPS error (Firefox 43/Linux) (Error code: sec_error_ocsp_old_response).


I just downloaded and installed Firefox 44 today and it works great. Clear your cache?


So I've updated to Firefox 44 and it does work, but it seems broken on Firefox 43.* on both of my computers (work and personal). You might want to have a look at it, since 43.* is quite a recent version. (I'm not the one who downvoted you.)


Here's a good excuse for not using https for everything: it breaks caching of files by proxies!


Right. So what's the solution? I run my wife's retail website. Am I supposed to just stop worrying about caching static assets like product images, scripts, etc.? Do I just throw my hands in the air and assume it evens out because I switched to HTTPS?

Serious question, what are my options?


Do you run the cache / contract with someone to run the cache, or are you worried about third parties who run caching servers out of your control (like mobile ISPs, corporate networks, etc.)? If the latter, I'm surprised/curious what the use case is.

If the former, you can stick those on HTTPS too just fine. CloudFlare will be an entire SSL-enabled CDN for you for free. Amazon Cloudfront will serve SSL for you for free (though you still have to pay for Cloudfront itself, and get a cert on your own, though you can do that for free).


Amazon Certificate Manager will issue certificates for CloudFront for free.


* Ensure your server is setting ETags correctly so the clients can determine which assets they need to re-request.

* Make use of edge CDNs with https termination
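A minimal sketch of the conditional-GET flow that correct ETags enable (the function names here are illustrative, not any particular framework's API):

```python
import hashlib


def make_etag(body):
    # Strong validator derived from the response body.
    return '"%s"' % hashlib.sha256(body).hexdigest()[:16]


def respond(body, if_none_match=None):
    """Return (status, headers); 304 when the client's cached copy matches."""
    etag = make_etag(body)
    if if_none_match == etag:
        # Client already has this exact representation: skip the body.
        return 304, {"ETag": etag}
    return 200, {"ETag": etag}
```

The client echoes the ETag back in If-None-Match on its next request; a match means the asset doesn't need to travel again, HTTPS or not.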


Turns out my CDN supports HTTPS (using cloudinary), so that's good. Thanks for the ETag reminder, I'm not doing that yet.


That's what CDNs are for. If you control your proxy, nothing prevents you from giving it access to your HTTPS traffic by setting up your private key on it. HTTPS is also tappable, but only by servers you trust.


The more privacy-conscious amongst us probably consider that a positive reason not a negative one...


I tried out caddyserver about an hour ago, and the ease of use is awesome. Had it serving my domain with a letsencrypt auto-generated cert in 2 minutes from never having looked at the caddy docs before.


> Regardless of Google's move, there is no excuse for any site not to be served encrypted anymore.

Honest question: are you willing to indemnify your users when the next Heartbleed-like attack comes out for the underlying SSL library you are using in your product?

If you are willing to do that, and will offer me a no-cost wildcard domain certificate, I will switch to your product and start using HTTPS.


Why do we have to go through this whole SSL certificates thing and can't just have a simple, automatically secure, I-do-nothing-and-my-website-is-secure protocol?

Seriously though. If secure is the default from now on, why can't it actually be the default?


Isn't that what Let's Encrypt is aiming for? Install a package, which configures a cronjob for you?

https://letsencrypt.org/howitworks/

Which could just even become a default but optional dependency of your distro's web server package, or part of your Docker container, or whatever.
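Once the initial cert exists, renewal really can be a one-line cron entry (a sketch; the client binary's name and flags have changed between releases):

```
# Attempt renewal twice a week; the client is a no-op unless the
# cert is close to expiring.
0 3 * * 1,4  letsencrypt renew --quiet
```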


Ok I'm new to this and I know it's still beta, but it seems:

1. Still WAY too complicated (look at all the stuff you have to know and type)

2. Doesn't seem to support my preferred OS (Windows) or web server (IIS) whatsoever. Which is strange since, from my experience, installing certs in IIS is already far easier than in Apache and Nginx. (Although maybe that's why they perceive it as less of a priority?)


Hi, I think the IIS support effort that's furthest along is described at https://community.letsencrypt.org/t/how-letsencrypt-work-for... ; maybe that will be useful for you if you want to try Let's Encrypt on your IIS system.

We've had hundreds of people remark that they found Let's Encrypt faster and easier to use than other CA offerings (though most of those people were using Apache on Debian-based systems), so I think we are getting somewhere. But we definitely hope that upstream web server projects and hosting environments will integrate ACME clients of their own, like Caddy has done, so that eventually most people won't need to run an external client at all and won't have to worry about compatibility or integration problems.


You have a ";" at the end of your URL which breaks it.

https://community.letsencrypt.org/t/how-letsencrypt-work-for...


Thanks, edited.


> 1. Still WAY too complicated (look at all the stuff you have to know and type)

The website mentions at the bottom that they're intending to get all of this automated, but they're not at that point yet; they're still in public beta. Certainly all those commands look automatable, just with enough integration with lots of distros / web servers, testing, and debugging. The Let's Encrypt protocol (ACME) is very much designed so that a web server can acquire a certificate with just about no human interaction besides telling it to do so, and keep it up-to-date with no human interaction.

I certainly agree that the instructions on that website are still way too complicated for general use, though far, far simpler than the status quo ante Let's Encrypt.


> 1. Still WAY too complicated (look at all the stuff you have to know and type)

I didn't realise that people getting SSL certs and administering servers don't know how to read a literally one-page rundown of what to run. They also have helper scripts to make it much simpler.

> Which is strange since, from my experience, installing certs in IIS is already far easier than in Apache and Nginx. (Although maybe that's why they perceive it as less of a priority?)

nginx literally takes less than 10 minutes to set up not only SSL, but also CSP and several other very important security features.


I tried to set up LE for my personal bunch of websites, but sadly the rate-limiting is still too strict for automation to be a viable option.


Huh, the rate limits look pretty generous (500 certs every 3 hours): https://community.letsencrypt.org/t/rate-limits-for-lets-enc...

Do you actually own hundreds of personal websites? (And you could still desync them, anyway.) Or is this a use case where wildcards would be useful? I sort of disagree with LE's decision not to care about wildcards for now, though I understand that it's simpler, at least while it's in beta.


That's per IP, you're also limited to 5 requests per domain name per week. In my case, I have a bunch of subdomains for various stuff that all counts against the limit for the main website. I suppose I ought to combine the CSRs, but implementing that makes it a bit more complex than just automatically requesting a certificate per nginx vhost.


Oh, that's pretty rough.

Still, with enough automation, you can request 5 per week in a cronjob, which will let you get at least 40-something websites, even with the recommended 60-day renewal cycle. :-P


>you're also limited to 5 requests per domain name per week

Huh, I'm pretty sure I used more than that when I was first setting it up with no problems.


To quote the website:

> Certificates/Domain you could run into through repeated re-issuance. This limit measures certificates issued for a given combination of Public Suffix + Domain (a "registered domain"). This is limited to 5 certificates per domain per week.


If SAN certificates make sense for your setup (i.e. all used on the same server or for the same service), you can have up to 100 (sub)domains on one certificate, or basically 500 per week.

Maybe that's how you managed to get more than 5.


I did a bunch of requests starting with one subdomain, then a second, adding SANs multiple times, setting a cron to do one request a month and testing it, then adding yet one more SAN to the list.
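That flow can collapse into a single monthly cron entry covering all the names at once (a sketch; the domains and web root are placeholders, and the client's flags have varied between releases):

```
# One cert with multiple SANs counts as a single issuance against the
# 5-per-registered-domain-per-week limit.
0 4 1 * *  letsencrypt certonly --webroot -w /var/www -d example.com -d www.example.com -d blog.example.com
```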


Let's Encrypt is awesome but you still need to have root access to the machine. I host my stuff on a shared 1&1 node and I can't seem to find any way to add SSL to my websites without having to pay them.

(Yes I should move to another host but that is too much hassle for me right now.)


The ACME protocol is open and as a result there are several alternative clients which do not require root. Here's a few:

https://github.com/diafygi/letsencrypt-nosudo

https://github.com/kuba/simp_le

https://github.com/lukas2511/letsencrypt.sh

Or you can go a more manual approach via https://gethttpsforfree.com/ but you will need to manually renew your certificate every 90 days.


If apache and nginx follow along the lines of Caddy[1], we might.

[1] https://caddyserver.com/


I tried Caddy the other day and was pretty impressed. It's a single binary, it automatically installed a Let's Encrypt cert for itself and it had a bunch of other nice features.

I'm not going to switch production to it yet, but it's looking like it'll go on my home server pretty soon.


That's... impressive. I'm going to mess around with this over my weekend - thanks for sharing!


Seriously this. I don't see why encryption and website verification have been wrapped up in the same thing (SSL certs). They're two different things. Encryption should be free, automatic and default.


If you don't have a way to confirm that the key you're seeing from the other site is right, you're inherently vulnerable to a man-in-the-middle attack which removes the benefits of the encryption against the attacker.

https://en.wikipedia.org/wiki/Man-in-the-middle_attack

https://en.wikipedia.org/wiki/Zooko's_triangle

It's not clear that the certificate authority system was or is the best solution to this problem, but it is a problem that calls for some solution. In the case of Domain Validation, we only try to confirm that the key is appropriate to use with the domain name, which is the smallest possible kind of confirmation that can be done to address the crypto problem. There's no attempt to validate or verify anything else about the site.


However, having one and not the other isn't totally useless.

Having the browser be able to track and tell me that "Though we aren't sure this is actually google.com, we do know that the exact same cert has been used the last 50 times you visited this website" is something I'd consider to be useful. (Actually, telling me if it changes would be the useful bit).

That would be at least be useful for self-signed certs (though those aren't really needed in light of Let's Encrypt...)
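A sketch of what that tracking could look like, in the trust-on-first-use style of ssh's known_hosts (the store here is a plain dict for illustration; a browser would persist it across sessions):

```python
import hashlib


def fingerprint(cert_der):
    # SHA-256 over the DER-encoded certificate, as modern pin formats do.
    return hashlib.sha256(cert_der).hexdigest()


def check_pin(host, cert_der, store):
    """Trust-on-first-use: pin on first sight, flag any later change."""
    fp = fingerprint(cert_der)
    if host not in store:
        store[host] = fp
        return "first use: pinning"
    if store[host] == fp:
        return "same cert as last time"
    return "WARNING: cert changed"
```

The useful signal is the third case: it can't tell you the first connection was clean, but it can tell you that something changed since then.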


> (Actually, telling me if it changes would be the useful bit).

I'm curious. Has anyone ever encountered that scary warning you get when an SSH host key changes, and thought "oh man, I'm getting MITMed, I'd better not connect to this server!", instead of thinking "oh right, I guess they reconfigured the server, now what command do I type to make the warning go away"?


I have. Usually it's because I reconfigured the server, but I am ultra paranoid. Most people don't care, but I would expect sysadmins to do so. And who else should log in with SSH?


I've never thought it was likely to be an attack, but I always thought it was my responsibility to check why it changed or at the very least confirm it looked the same via a separate network path.


Take a look at google's certs. They're only valid for a few months. A system that tracks certificates encourages site operators to share the same cert and key across many servers and to allow it to live for a long time. With the sorry state of certificate revocation this is not ideal.

On the server side it's better for each server to have its own private key and certificate which is valid for a short period of time and frequently renewed. Then the compromise of one server does not compromise certificates on any other servers, and the useful lifetime of a compromised key is very limited.

I think DNSSEC and DANE are the best solution. Allow the certificate thumbprints to be published securely in DNS. At least then we reduce the number of trusted authorities to the TLDs, and the scope of authority for each one is automatically restricted to its own TLD.


> However, having one and not the other isn't totally useless.

> Having the browser be able to track and tell me that "Though we aren't sure this is actually google.com, we do know that the exact same cert has been used the last 50 times you visited this website" is something I'd consider to be useful. (Actually, telling me if it changes would be the useful bit).

Isn't that what you do when you make a security exception for a self-signed certificate? Having that enabled by default lulls people into a false sense of security.


Be nice if self-signed certs were compatible with vanilla HTTP. Then no warning or complaint from the browser, but minimal security boost over naked transmission.


> Seriously this. I don't see why encryption and website verification have been wrapped up in the same thing (SSL certs). They're two different things. Encryption should be free, automatic and default.

Because you have to do DH and all of the key negotiation anyway (at which point you already have a key, so why not encrypt and HMAC at the same time?). If you had two systems for this, it would be pointlessly inefficient (why have two DH key exchanges for the same channel?).


Because without a trust anchor (a certificate), encryption is pretty much worthless against MitM attacks.

You need a way to verify that the site you're connecting to really is who it claims to be before you can trust even an encrypted connection to that site. Otherwise you don't know whether you just established an encrypted connection to the website, or an encrypted connection to a malicious attacker.


SSL/TLS actually does support unverified encryption, but browsers have decided to disable it because the UI for "encrypted but non-verified" is deemed too confusing for users.

See eg https://bugzilla.mozilla.org/show_bug.cgi?id=220240#c6


Because you need to create a public key for the browser to use.


SSH gets this right -- create a host key when the server is installed, and have the client check the key and only warn/error when it changes. Sure, this isn't super-secure for first time visitors to their banking website or whatever, but those websites can continue to use the current system.


SSH doesn't get this right. It's no better than a (auto-pinned) self-signed cert, in our world.

I challenge everyone to find in their extended group of friends and colleagues, and their friends and colleagues, a single person who consistently checks the fingerprint* on every first SSH connection.

I'd personally have a hard time finding someone who even knows it matters.

And if you don't? A MitM can get your password, or tunnel your key to another host, barring some crazy ~/.ssh/config which nobody has.

WiFi's WPA2 actually does this better than SSH; the passphrase authenticates both parties to each other, not just one way. I can't set up a hotspot with your home SSID and intercept your PSK---even on initial connection.

SSH: nice in a cryptographic utopia, not better than self signed SSL certs when applied to human beings.

SSH is just not suitable for humans. Apparently.

* a significant part of it, not just the security-through-obscurity random 2 letters in the middle and the last four.


Still, there's a difference between being less than 100% secure and being a totally useless feature.

Being able to make the statement "Either you've been consistently MitM'ed by the same entity for the past three years, or your little cloud-based Debian box is actually secure" is a lot more useful than not tracking SSH fingerprints at all. I certainly wish my browser would track my self-signed certs in this way.


-o VisualHostKey=yes


A band-aid, I'm afraid.

Without going into the question of how many bits of entropy that actually has when used with human beings in real settings, and just assume it's a perfect check; my question stands: how many people can you find who use this?

Many SSH clients don't even support it at all: PuTTY, and almost anything that uses SSH for tunneling.

When they do: how many of your hosts do you know the image of?

Again: nice idea, but utterly impotent in our universe.

Compare to the efficiency of e.g. WPA2 keys: less theoretically beautiful, but much more efficient with humans.


>Without going into the question of how many bits of entropy that actually has when used with human beings in real settings, and just assume it's a perfect check; my question stands: how many people can you find who use this?

Probably not very many, but it's really only useful for people that ignore basic security features anyway. (Key auth)

>When they do: how many of your hosts do you know the image of?

None, I use key auth like any reasonable person would.


Does key auth protect you from a MITM on the first connection?

That is, key auth as reasonable people use it, as you said.

And this:

> but it's really only useful for people that ignore basic security features anyway. (Key auth)

is precisely the point: that's a lot of people. SSH doesn't work for those people. We can play the blame game, but at the end of the day, clearly something is "not right".

And these are people who use SSH to begin with. Not typically technologically illiterate, I would guess. If they can't even be arsed to use "basic security features", what good is this system, then?

Again: there is a way to use SSH properly, yes. But rare is the person who does this.

(But key auth is orthogonal to host fingerprinting anyway, this is kind of a red herring)


>Does key auth protect you from a MITM on the first connection?

Yes. Key auth will protect you from your SSH connection being listened to, and will make credential theft reliant on social engineering. However, someone could still pretend to be the server (potentially stealing your commands), but there really doesn't exist any way to solve that.

>is precisely the point: that's a lot of people. SSH doesn't work for those people. We can play the blame game, but at the end of the day, clearly something is "not right".

Nothing works for those people, at least generally with SSH users you can assume that they should know better.

>Again: there is a way to use SSH properly, yes. But rare is the person who does this.

I'd hardly consider SSH key auth users rare.

>(But key auth is orthogonal to host fingerprinting anyway, this is kind of a red herring)

But it almost completely fixes the main problem caused by MitM, someone gaining access to the server you're logging into.


SSH gets this right

No, it doesn't.

When was the last time you verified a host key out of band?

And if you're using SSH, you know well enough to know why you should do the damn legwork to verify the key. What do you expect for end users?

Furthermore, if nobody is doing out of band verification on the first pass, how do you expect users to distinguish between an attack and legit host key change?


As I said above: Sure, this isn't super-secure for first time visitors to their banking website. But it's fine for the common case where someone tries to MITM you when you move from your home to a coffee shop or vice versa and you're just browsing sites that would otherwise be using http.


But the worst case scenario with SSH MitM isn't someone being able to eavesdrop on your connection, but someone pretending to be the server, which is hardly as serious. (Unless you're using password auth, in which case you deserve to get owned.)


If someone impersonates your server, it can then pass the authentication request to the original, and gain full MITM without your knowledge. Yes, even if you use public key auth.


I will personally pay you the sum of 500 Bitcoins if you can demonstrate a realistic active MitM attack on OpenSSH that allows an active network level attacker to "pass the authentication request to the original" and gain full MitM.

Conditions:

Public key authentication must be used for authentication.

If it's possible to perform the attack passively (e.g. on pcaps), it doesn't qualify.

This attack has to affect setups using both the latest OpenSSH client and server with default configuration.

This attack has to be able to be performed in realtime using the processing power of a 2015 macbook model of your choosing.

This attack cannot rely on attacker having any other access but the ability to tamper with the connection however much he wants.

This attack cannot rely on known flaws in the encryption algorithms.

With full MitM I am referring to the ability to at least access the plaintext communications between the client and server. Eg if the user runs 'sudo', the ability to see the password entered.

Please consider this offer legally binding, if you have any questions I will answer them and you can consider the answers binding too.

Good luck.


You're exempting the obvious "MITM on initial connection" attack, right?


Is there a strategic business reason for this on Google's part other than a safer web is better for all? I don't doubt that a more secure web is better for everyone, I'm just more curious about the business drivers of this from their perspective.

The reason I'm wondering is because with AMP, there seems to be a clear strategic benefit from having all of that ad serving data running through them even if the advertisers and publishers are not using the DoubleClick stack or Google Analytics.

By bringing this to market from the standpoint of "improving" the mess publishers have brought upon themselves and speeding everything up, there's definitely a clear win for consumers here. That said, it leaves the door open for something similar to the mobilepocalypse, where Google updated their ranking signals on mobile to significantly favor mobile-friendly sites. I could easily see this going a similar route, where it is a suggestion...until it's not, because if you don't implement it you'll lose rankings and revenue (and coincidentally feed Google all of your ad serving data in the process).

To be clear, I don't knock them for taking this approach, because if it works it is a very smart business move that will be beneficial to a lot of parties (not just Google). Just looking for other insights into the business strategy behind something like pushing for encryption, and AMP.


> Is there a strategic business reason for this on Google's part other than a safer web is better for all?

The two common reasons for MitM are spying and inserting/replacing advertisements. The latter is stealing from Google, so they want to stop it before it grows too common.


We can only wonder how long it will be until Google starts openly advertising and buying newspaper articles against that new ad-replacing browser.


For Google, it’s not just about providing a secure environment and secure websites. In fact, Google actually has a monetary incentive to get as many websites to move over to HTTPS as possible: convincing website owners to move to HTTPS will help get rid of competing ad networks.


How does it get rid of competing ad networks? Does Google have a monopoly on serving ads over HTTPS?


It means your internet provider can't inject ads or profile you based on the content of the sites that you visit. Comcast, AT&T, and Verizon have all done similar: https://certsimple.com/blog/ssl-why-do-i-need-it#4-not-havin...


Sure they can. Your ISP can easily MitM you.


Not without throwing cert errors on every site I visit.

The only way they can MITM me is if they compromise my PC as well and install their root CA.


To connect to the internet you must install the Comcast internet-enhancing certificate. It's the only way to make all websites secure by default™

No reason to compromise when you can force the user.


Ah. True, my mistake.


... or rather get an intermediate certificate from one of the umpteen root CAs your operating system embeds by default.

Is VeriSign going to refuse a certificate to AT&T?


Verisign will happily issue a certificate to AT&T for a domain that AT&T controls.

Verisign will not issue a certificate to AT&T for google.com--no matter how nicely AT&T asks.


Yes, and furthermore there's a very good reason to believe that this claim is true: as soon as they do, every copy of Chrome behind AT&T's network will go and snitch to Google, who will promptly investigate and get Verisign in deep trouble.

Here's what happened when Symantec issued fake Google certificates last year:

https://googleonlinesecurity.blogspot.com/2015/09/improved-d...

https://googleonlinesecurity.blogspot.com/2015/10/sustaining...

"Therefore we are firstly going to require that as of June 1st, 2016, all certificates issued by Symantec itself will be required to support Certificate Transparency. After this date, certificates newly issued by Symantec that do not conform to the Chromium Certificate Transparency policy may result in [annoying certificate warnings, just like self-signed certs]."

And that was just the work of a couple of employees who were inappropriately testing their issuance system and weren't even intending to attack anything. They got fired, which I expect is also a big part of why Google's response was so light.

http://www.symantec.com/connect/blogs/tough-day-leaders


>Is VeriSign going to refuse a certificate to AT&T?

I certainly hope so.


For one, IIRC it kills referer headers, so search engines/ad networks can't build out a graph of where a user was prior. Google, OTOH, sends the majority of the traffic, and its reach in ads allows it to fill in the gaps better than any other network.


HTTPS does not kill referrer or referer headers. See https://referer.rustybrick.com/


...so why are all of the search terms suddenly gone from Google search referer headers? And why did that happen at the same time Google defaulted to HTTPS?


They stopped linking search results directly to the webpages. You have no Google search referrer headers in your logs/analytics any more.

When the SERP loads, all the results link to the real webpages, so that you see their address in the browser status bar when hovering over a link. Clicking any result link triggers a script that replaces the URL with https://google.com/url?url=the_real_webpage_url.

When you click through, you're clicking a link from google.com to another link on google.com, which redirects to the webpage you intended to visit. The referrer the webpage sees is the intermediate google.com/url page, instead of the search result page. This prevents websites from getting search term data from the SERP URL, if it was present, by removing that URL from referrer headers entirely.


> ...so why are all of the search terms suddenly gone from Google search referer headers? And why did that happen at the same time Google defaulted to HTTPS?

Not related to HTTPS at all. This happened completely independently. It happened because Google went from having search URLs like this

    https://google.com/?q=term
To

    https://google.com/#q=term
And anything after the anchor mark (the URL fragment) is never sent in your referer. Effectively this means that the only tool on the planet that knows what people searched for before entering your site is... Google Analytics.
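The difference is easy to see with any URL parser; a quick sketch using Python's standard library (the search term here is just a placeholder):

```python
from urllib.parse import urlsplit

# old-style search URL: the term rides in the query string,
# which is part of the HTTP request and can leak via the Referer header
old = urlsplit("https://google.com/?q=term")

# new-style search URL: the term sits in the fragment,
# which the browser handles client-side and never sends to any server
new = urlsplit("https://google.com/#q=term")

print(old.query)     # q=term
print(new.query)     # (empty)
print(new.fragment)  # q=term
```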

As a website owner you're basically being coerced into letting Google snoop on your users, at least if you want to know how they entered your site. And the fact of the matter is that most (all?) companies are willing to make that trade-off.

All in all pretty sad and very creepy.


At that time they started using horribly annoying redirects.


Did you read the page I linked to? My referer was https://encrypted.google.com/search?hl=en&q=What%20Is%20My%2...


Uh, the referrer is the page you came from. So if he opened the page on HN, then he wouldn't get the Google referrer, he'd get a page off HN.


If you read the page I linked to, it instructs you to search to try it out for yourself to see that search results are not stripped from the referrer.


I think it's pretty funny that on the HN front page right now is a NYTimes article from the company's Google beat reporter about how trying to interview Larry Page is "emasculating" and then this announcement is accompanied by an image "shaming" the NYTimes web site for being unencrypted.

As to the feature itself, I don't think it's a big deal at all. We all know that the average internet denizen doesn't understand HTTPS at all and would just as likely ignore it as anything. The only people that would see and understand this new red X for what it represents would know that it doesn't really matter that the lolcat meme they just downloaded came through an unsecured channel.


I work for a SaaS company, we absolutely have customers who email us complaining about putting credit cards in a page served over http.


Certainly, and I would be one of them. I'm not saying nobody does care or that nobody should, only that enough people don't care enough to make this "red X of shame" that shameful, really.

Chrome and Firefox have both had to take extreme measures for very similar things, such as web sites using expired (or even unvalidated/spoofed) SSL certificates. Google even reported that using a giant red page with warning labels didn't stop people from clicking through!


Right, and I guess I meant to imply that it is some of the unwashed non-elite masses that notice that stuff. Our product is for people who are bad at software and want an easier way to do task X, but they still know to look for the green lock. I don't have strong data but I'd just say-- don't underestimate the web knowledge of people who are mostly making cat pictures.


Main problem still is Google: They consider HTTPS and HTTP links the same. When switching a site to HTTPS you lose all your incoming links. Redirects only transfer a small amount of juice. You're toast.

We tried migrating several times to HTTPS only, every time got a huge penalty from Google.

So Google is the main driver for HTTP websites.


> They consider HTTPS and HTTP links the same. When switching a site to HTTPS you lose all your incoming links.

Do you mean that they don't consider HTTPS and HTTP the same? Otherwise, I don't understand your point here.


You're right.


Just enabled Chrome to show the little crosses by default for http:// and I already like having this showing. If you wish to be an early adopter go to:

chrome://flags/#mark-non-secure-as

It is good to see how sites that matter are mostly https:// already for me. The http:// tabs I have open such as this article actually are insecure when you think about the amount of trackers on them, so the 'x' is very apt.


They should do something useful for the web and remove most if not all the current root certificates. There are so many places that have what is essentially a master key to the internet - and that master key is only going to be more important as more and more sites become SSL.


So is there already a solution for https on Github Pages with a custom domain?


Stumbled upon Kloudsec here on HN a couple days ago [1] and gave it a go. The dashboard is a bit clunky in that you kinda have to figure out what to do, but HTTPS works without needing to move the DNS to them, as in the case of Cloudflare (which costs $20 when moving from Gandi).

Basically: register an account, enter your domain, and update your DNS records with an A record (replacing the GitHub Pages IP) and a TXT record (for verification).

While the DNS change took effect within a couple minutes on Gandi, Kloudsec's DNS took an hour or two to register it. After that, go into the "Security plugin" and enable it. If you're using an apex domain, you can remove the www HTTPS request, since you won't get a cert for that (if you do have an apex domain then you probably know about the CNAME trick on Pages, unless your DNS provider supports ANAME or ALIAS records for the apex domain - Gandi doesn't). It took a couple hours again to get the cert.

When it's done, click the "Settings" cog icon for the desired HTTPS domain, enable the HTTP -> HTTPS redirect and HTTPS rewrite, and then you're set.

[1] https://kloudsec.com


I'm not sure what you mean about Gandi charging $20 to move the DNS to Cloudflare? I'm using Cloudflare to add HTTPS to a website on a domain registered with Gandi, and it hasn't cost me anything above the usual domain registration fee.


Hm yeah, after reviewing the transfer policies, I guess I mistakenly thought the price for transfers TO Gandi applied to transfers FROM as well. It's still a bit more hassle than what Kloudsec offers.


Check out netlify (https://www.netlify.com) - we're like GitHub Pages on steroids (integrated continuous deployment, proxying, redirect and rewrite rules + lots of other features) and we launched free SSL on custom domains a couple of weeks ago :)


In addition to CloudFlare, you can also use AWS CloudFront for this. We just implemented this to get https working on our custom-domain Github Pages site [1] this week.

You first have to upload your SSL certificate to AWS IAM [2] (you only have to do this once, or you can just purchase your certificate from the AWS console now too). Then, all you have to do is create a new CloudFront distribution and point the origin to your subdomain.github.io URL and select your SSL certificate from the drop-down, then point your CNAME record to the CloudFront distribution.

[1] https://os.alfajango.com/

[2] https://bryce.fisher-fleig.org/blog/setting-up-ssl-on-aws-cl...


CloudFlare works best.


Yes! Funnily enough, the site this story is hosted on is using HTTP.



All of this raises the question: why does the new default state require action, while the non-default state requires none?

Is that more or less bass-ackwards?


Should static content be encrypted over HTTPS? I think it's fair for Chrome to call sites out with an x - I've literally seen local lunch joints take orders with credit card info over HTTP. But serving mostly static pages like The New Yorker over HTTP only means that the user's privacy is compromised, in that people can see what you're reading - does that warrant down-ranking searches? I'm just curious - I work mostly on platforms, so I'm not too aware of all of the incentives for trying to move everyone to HTTPS; it's not really my problem domain.


One issue is content injection. You never know what transparent proxies sit between you and the server, and any one of them can add or remove content, scripts, tracking stuff, etc. from the static pages. You can't even be sure your current DNS server resolved the name to the actual server and not some shady proxy.

I believe Comcast has been accused of doing something shady like that but I don't live in US and have no idea. Just read the news.


Because mobile carriers are given broad discretion to do whatever they want to do to your traffic.

They cheerfully modify content, and have built infrastructure to do it even more.


hmm ya this is a good point.


How will this impact page-speed?

I recall switching the product pages of an e-comm site, which had up to 50 small images per page, from HTTPS to HTTP, and the change very significantly increased page-load speed for the end user.


I'd guess that the browser opened several connections to fetch all of the content (to work around broken HTTP/1.1 pipelining) and needed to complete many TLS handshakes. HTTP/2 probably would have done a better job.


Several weeks ago I installed a certificate for my website on NGINX, and it wasn't hard. It was fun to do. I also got an A+ from Qualys SSL Labs. What I mean is, it's easy to deploy an HTTPS site.
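For anyone curious what that roughly looks like, here's a minimal sketch of an NGINX TLS setup (the domain and certificate paths are placeholders; a good SSL Labs grade also depends on cipher choices and certificate details not shown here):

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    # placeholder paths - point these at your real cert chain and key
    ssl_certificate     /etc/ssl/certs/example.com.fullchain.pem;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    # drop legacy protocols; SSL Labs penalizes SSLv3 and old TLS versions
    ssl_protocols TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
}

server {
    # redirect plain HTTP to HTTPS
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}
```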


Deploying TLS in simple environments isn't overly complicated. It's just cost prohibitive.

