In short: Google will penalize me because I use Google.
The universe has a sense of humor.
Edit: Typically, when a service tells me "no, you can't use this service until you view a full-page ad" I just give up and don't bother continuing to the service. But the same is not true for Google. I reluctantly click through the full-page ad every single time. It's incredibly annoying that I let them get away with this and still use the services. They are so outrageously arrogant about it and it bothers me greatly, but still, I don't change.
Going to calendar.google.com: http://i.imgur.com/fNRhhYx.png
First results for searching 'calendar': http://i.imgur.com/l3A5Wlh.png
They all prefer users to use apps rather than web pages. Google and others want to get full control of users and make money...
Well in your screenshot it seems like you scrolled down on the "calendar" search results. I get some other random thing ahead of Google Calendar, in incognito or not.
It is really annoying (I too hate those things, I would have installed the app if I wanted the app), but the click through thing only happens once in my testing. Are you clearing your cookies regularly?
I can erase Google ads at the DNS level. Users can never reach any Google ad at all.
What do you think?
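For what it's worth, a quick way to sanity-check that kind of DNS-level blocking is to resolve a few known ad hostnames and see whether they come back sinkholed. A minimal sketch, assuming you've pointed the (illustrative, not exhaustive) domains below at 0.0.0.0 in your hosts file or resolver:

    import socket

    # Illustrative ad-serving hostnames to spot-check; swap in whatever your blocklist covers.
    AD_HOSTS = ["pagead2.googlesyndication.com", "googleads.g.doubleclick.net"]

    for host in AD_HOSTS:
        try:
            addr = socket.gethostbyname(host)
        except socket.gaierror:
            print(host, "does not resolve (blocked)")
            continue
        blocked = addr in ("0.0.0.0", "127.0.0.1")
        print(host, "resolves to", addr, "(blocked)" if blocked else "(NOT blocked)")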
They also said to use valid HTML etc., while they didn't do it themselves for cost-saving/performance reasons. Not sure this one is still true.
My guess is that this list of preaching water and drinking wine is pretty long for Google. I think their view is that they know what they are breaking, so it is OK in that particular case. The rest of us have to suck it up.
Do you mean `itself`? Since when are tech companies assigned genders?
You could try to convince people to use the word that way, but at present it's just not done. Companies are 'it' or you can talk about the people that make up the company as 'they'.
Also Sapir-Whorf is dumb.
I wonder if this also happens in German; the only examples I can think of offhand are feminine (die Schweiz, die Türkei), but now I'm not at all sure that there isn't a masculine one too!
It also happens with countries like the UK, the US, the Czech Republic and so on, but obviously for the same reasons as in English.
I can't actually think of a country that's feminine in German. The "die" you often see is actually indicating plural (e.g. "die vereinigten Staaten", the United States; or "die Niederlande", "the Netherlands").
For example: Vor drei Monaten waren meine Mutter und ich in der Schweiz; wir haben _____ wirklich schön gefunden. ("Three months ago my mother and I were in Switzerland; we found _____ really beautiful.")
Would you accept "sie" here as a reference to Switzerland (because it was referred to as "die Schweiz"), or "es", or both? My intuition is "es", but I'm not a native speaker, and non-native German speakers notoriously over-apply "es" to inanimate things.
But Switzerland is another example of a country that is typically used with an article. Consider the sentence "Ich fahre nach ____" with a country name. It doesn't work for countries like Switzerland ("nach Schweiz" sounds wrong, you'd instead say "in die Schweiz" -- same as "nach Kongo" vs "in den Kongo").
- Was meinen Sie über die Schweiz? ("What do you think about Switzerland?")
- ____ ist schön. / Ich finde ____ schön. ("____ is beautiful." / "I find ____ beautiful.")
Some feminine countries: Switzerland, the Dominican Republic, Mongolia, Slovakia, Turkey, Ukraine, Central African Republic.
Masculine, in addition to your own list: Niger (!= Nigeria), Sudan, Vatican.
Neuter: the UK (because "kingdom" is a neuter noun in German), potentially others.
The concept of "some larger entity that spawns smaller entities" seems to generally lend itself to the mother/daughter terminology if you want to be poetic about it.
That said, whatever happened to artistic liberties?
I understand not wanting an application for a news website or something like that, but for something you use often, like Google Calendar, it seems like the application would be better than the mobile page.
If they want to show full-page ads and be super annoying, then fine, I'll deal with it. But don't pretend to be against it when you do the same practice yourself.
It really just shows the sad state of mobile advertising when they're showing you ads for an app you already have.
However, if I'm on a specific app publisher's website, I wouldn't mind letting them know (through some mechanism) that I've already installed their specific app.
How do you expect them to know all the apps installed on your phone? And if they DID know this information, people would be up in arms about privacy, or the lack thereof.
I don't expect them to know all the apps installed on my phone, nor do I think they need that much information to solve this particular problem.
Showing that they rank above "timeanddate.com" doesn't mean a lot.
> Our analysis shows that it is not a good search experience and can be frustrating for users because they are expecting to see the content of the Web page.
This would imply that they know the experience is bad for users, they know that the penalty won't hurt their ranking, and so they will continue to show the bad experience regardless? That's just as bad as them not penalizing themselves for the full page ad.
I assume different departments run calendar and search.
This is missing from the android app. So you have to browse to the calendar on the web and ... this.
One of the larger difficulties for publishers is that many of the 3rd party SSPs aren't ready to go full HTTPS, and so publishers are reluctant to make the switch because it reduces demand sources.
Disclaimer: I work for Google in advertising.
> One of the larger difficulties for publishers is that many of the 3rd party SSPs aren't ready to go full HTTPS
It would be easy to proxy http-only ads through a CDN that added encryption. Or to charge a premium to http-only ad networks, and ramp the premium up over time.
I don't understand your next part at all. Who would charge a premium to http only networks? Those sites don't actually rely on Google for delivery.
>> In short: Google will penalize me because I use Google
Are there others where that's not practical? Also, yes. Maybe not things that you need, but this problem does exist.
At the moment, there's not a model, outside of ads, that works very well for that sort of thing. There are some subscription/micropayment schemes that seem promising, but nothing that works as well as ads do.
For example, https://support.google.com/dfp_sb/answer/4515432?hl=en
So, G is rationalizing their slow pace with the same reason that's not good enough for others :)
I can't remember what service it was now, but there was some Google service that was deranked because it broke some Google search ranking policy.
It shows some integrity for the company that they're (sort of) operating their search engine objectively.
I presume that google doesn't uprank sites that specifically use Adsense versus other competing ad services?
 - http://googleadsdeveloper.blogspot.com/2015/08/handling-app-...
Nevertheless, quite convincing security arguments aside, I feel this also has a very authoritarian side to it: they are effectively saying that your site, if it is not given a "stamp of approval" by having a certificate signed by some central group of authorities, is worthless. Since CAs also have the power to revoke certificates, enforced HTTPS makes it easier to censor, control, and manipulate what content on the Web users can access, which I certainly am highly opposed to. I can see the use-case for sites like banks' and other institutions which are already centralised, but I don't think such control over the Web in general should be given to these certificate authorities.
With plain HTTP, content can be MITM'd and there won't be much privacy, but it seems to me that getting a CA to revoke a certificate is much easier than trying to block a site (or sites) by other means, and once HTTPS is enforced strongly by browsers, would be a very effective means of censorship. Thus I find it ironic that the article mentions "repressive government" and "censor information" --- HTTPS conceivably gives more power to the former to do the latter, and this is very much not the "open web" but the centralised, closed web that needs approval from authorities for people to publish their content in.
There's a clear freedom/security tradeoff here, and given what CAs and other institutions in a position of trust have done in the past with their power, I'm not so convinced the security is worth giving up that freedom after all...
Not to mention that, of course, access to most websites is already gated by a central group of authorities - the domain registries - which can and do seize domains. Using raw IPs is one alternative, but if you're in that kind of position, chances are you want to be a Tor hidden service anyway.
They could easily alter encrypted communications to effectively censor too, thanks to the all-or-nothing nature of encryption with authentication. Because, by design, the certificate is presented in cleartext, it would be pretty easy to blacklist CAs and then cut off the connection if one of those is detected. Alternatively, whitelist CA(s). Analysing plaintext takes more computational resources, especially if things like steganography are used.
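To make the "presented in cleartext" point concrete: the issuing CA can be read straight off the handshake without trusting anything, which is what a middlebox keying a CA blacklist would do on TLS 1.2. A minimal client-side sketch, assuming the Python cryptography package (the host is a placeholder):

    import socket, ssl
    from cryptography import x509
    from cryptography.hazmat.backends import default_backend

    host = "example.com"  # any HTTPS host
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE          # grab the cert without trusting it
    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)

    cert = x509.load_der_x509_certificate(der, default_backend())
    print(cert.issuer.rfc4514_string())      # the CA a censor could match against a blacklist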
 Related article: https://news.ycombinator.com/item?id=10663843
It's obvious that censorship by western governments is never considered "censorship".
Only the evil enemy censors, we just have to enforce laws.
If one accepts this argument, it makes sense to argue that giving CAs more power is good — because, obviously, they don't censor, they just protect the interests of our economy.
And going by the high issuance/maintenance fees the CAs charge for issuing certificates, the industry is a sitting duck for disruption by a blockchain DNS/CA app.
I, as a site owner, can just sign my 'certificate' myself and put it on the blockchain DNS/CA app. The certificate will have my domain name and public key, and also an additional field, 'ownership sign', which is something like https://<my domain>.com/ownership_sign.pem (signed by my private key).
So if I am the true owner, I can self issue as many certificates to myself as I please. Or there could be some forced limitation to prevent any scalability (cough) challenges.
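The self-issuance half of that idea is already easy today; only the publishing/lookup side needs the hypothetical blockchain DNS/CA. A rough sketch with the Python cryptography package (the domain, challenge string, and published record format are all made up for illustration):

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048,
                                   backend=default_backend())
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"example.com")])
    cert = (x509.CertificateBuilder()
            .subject_name(name).issuer_name(name)    # self-signed: subject == issuer
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(datetime.datetime.utcnow())
            .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=90))
            .sign(key, hashes.SHA256(), default_backend()))

    # The "ownership sign" served at https://example.com/ownership_sign.pem would be
    # roughly this: a signature by the site's private key over a well-known challenge.
    ownership_sign = key.sign(b"i-control-example.com", padding.PKCS1v15(), hashes.SHA256())

    # What would get published to the hypothetical blockchain DNS/CA:
    print({"domain": "example.com",
           "fingerprint": cert.fingerprint(hashes.SHA256()).hex()})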
So, the problem you have pointed out is not really with enforcing/encouraging HTTPS, but with the entrenched CA bureaucracy. And I am really surprised, why is it not being disrupted already?
Maybe HTTPS makes it easier to censor in theory, but in practice it helps fight censorship by enabling Tor.
That was a major (really the major) basis of the fight against SOPA--it would have required ISPs to interfere with DNS resolution as a way of shutting down serial copyright infringers.
And the U.S. federal government can already seize domain names for some reasons.
So, the question is: does the value of pervasive over-the-wire encryption outweigh the risk of additional centralization via CAs? Right now I think it does, but that is in part because I believe that the CA infrastructure itself will improve over time.
It's pretty cool, but it's not production-ready.
GNUnet has multiple layers and does bottom-up encryption on the lower levels.
- Squarespace doesn't support SSL (other than on their ecommerce checkout pages) 
- Weebly only allows it on their $25/mo business plan 
- Wordpress.com doesn't support SSL for sites with custom domains 
- If you've never experienced the process of requesting, purchasing, and then installing an SSL certificate using a hosting control panel like Plesk or cPanel, let me tell you–it's a nightmare.
All that to say, this is an interesting development that will leave a large % of small business websites with a red mark in their browser.
Of course, that's hardly as secure as end-to-end HTTPS, but still, I trust the path between CF and SquareSpace much more than between the user's browser and SquareSpace.
But yeah, this NSA slide is extremely relevant to cloudflare: http://cdn01.androidauthority.net/wp-content/uploads/2014/06...
That is correct. I have not been able to get past a Cloudflare captcha over Tor for any website.
On the other hand, it doesn't protect against government spying, but then again, I think some governments straight-up MitM HTTPS traffic anyway. For instance:
I wonder if SquareSpace is going to finally fix their shit or if I'm going to have to move elsewhere which is going to be a pain (I went with SquareSpace because I didn't want to be assed with dealing with much of anything for a personal site).
If a web developer was large enough to have its own marketing department it could maintain its own site.
In any event, I'm busy developing apps for clients all day - I don't get paid to work on my own stuff so it has lower priority.
It took all of 5 seconds for me to do - it's all automated via the admin panel now. Just tick the box to make your site secure. Looks like DH has resolved any initial issues they had.
It was mind boggling that mixed content was "insecure" but HTTP was "secure." HTTP is and always has been insecure and should be marked as such.
I know there are a few people who will moan and groan about how overkill HTTPS is, but this isn't about banning HTTP; it is just about reminding users that they shouldn't be entering sensitive information into an HTTP site.
Even phishing sites should be DV secure.
Mixed content was marked insecure because there were assets on the page that might not be from where you think they were from. It was an indicator that the little https lock in the URL bar wasn't telling you the whole story.
Which is fair, given that I bet you'd get about a 5% or less recognition rate if you polled a random sampling of people on whether they could define "HTTPS" / "SSL" / "TLS" / "That lock thingie" to any degree of accuracy.
A server shouldn't have the opportunity to serve an insecure connection to the user without the user being made explicitly aware of that fact.
However, the biggest challenge is that internal traffic is almost always over HTTP, and the reason is almost always "because the self-signed cert is invalid." In some ways this is OK-ish since internal traffic is a darknet, but as proper tooling makes Let's Encrypt available, more people should consider deploying full SSL support for internal traffic as well. At this point, the toolchain to actually make Let's Encrypt simple and useful is still, ugh, a little hackish. A cron job here and there. A sort of complicated process to get started...
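As a concrete example of the cron glue involved, here's about the smallest renewal wrapper you end up with -- assuming the certbot ACME client and an nginx behind systemd, which is my assumption, not necessarily anyone's actual setup:

    #!/usr/bin/env python3
    import subprocess, sys

    # Renew anything inside its renewal window; certbot is a no-op otherwise.
    if subprocess.run(["certbot", "renew", "--quiet"]).returncode != 0:
        sys.exit("renewal failed; keeping the old certificates in place")

    # Reload the web server so it picks up any renewed certificates.
    subprocess.run(["systemctl", "reload", "nginx"], check=False)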
Why is it mind boggling?
Content served over HTTP is obviously less sensitive than content served over HTTPS; mixed content breaks HTTPS.
Breaking HTTPS where it's deliberately used is something that certainly deserves a warning.
For us, sure. For the other 95% of the population, not really, which is why Google is doing this.
I'm absolutely not arguing against such warnings for HTTP.
If they don't then they're not keeping up with hosting on Amazon's S3, which does support it.
Perhaps at least to customers of Google Domains. I wouldn't mind switching from Namecheap to Google Domains in the latter case.
You can get a domain through google without switching your google identity to it. You can also sign up for google apps on a non-google domain. google domains and google apps are not the same thing.
The one feature that was missing and painful for me was Contacts photos with a resolution higher than 96x96 pixels. On a latest-generation Android with a good-resolution screen it sucks, and I would have preferred if Contacts photos weren't synchronized at all. I ended up switching to a CardDAV provider and in the end I gave up on Google Apps for other reasons as well. And for the record, Google accounts had this resolution increased in 2012 ;-)
That said, "delayed schedule" above means 1+ years.
There's a lot of misinformation out there about certificates and HTTPS, but don't let it stop you from encrypting your site. Regardless of Google's move, there is no excuse for any site not to be served encrypted anymore.
 Here's a 30s demo: https://www.youtube.com/watch?v=nk4EWHvvZtI
The most legitimate reason I've heard is for privacy. I don't believe the gov't is going to lock someone up for learning how to serve web pages.
It'd be slightly nice if we were able to have integrity-protected HTTP without encryption (lower overhead, easier debugging with packet dumps), but the advantages are minimal (ciphers are not really the overhead, SSLKEYLOGFILE is a thing) and it's a lot of complexity to the web platform, which is a downside for web developers like you and me: the rules for mixed content between HTTP, HTTPI, and HTTPS are going to be much more involved and confusing.
Which makes sense, since they'd have the exact same problems as an explicit HTTPI protocol, just even more confusing: you'd want to not send things like secure cookies across those ciphers, you'd have to handle mixed content with actual-HTTPS carefully, etc.
That's essentially the same as not locking your car doors because you feel your car isn't worth breaking into.
Serious question, what are my options?
If the former, you can stick those on HTTPS too just fine. CloudFlare will be an entire SSL-enabled CDN for you for free. Amazon Cloudfront will serve SSL for you for free (though you still have to pay for Cloudfront itself, and get a cert on your own, though you can do that for free).
* Make use of edge CDNs with https termination
Honest question: are you willing to indemnify your users when the next Heartbleed-like attack comes out for the underlying SSL library you are using in your product?
If you are willing to do that, and will offer me a no-cost wildcard domain certificate, I will switch to your product and start using HTTPS.
Seriously though. If secure is the default from now on, why can't it actually be the default?
Which could just even become a default but optional dependency of your distro's web server package, or part of your Docker container, or whatever.
1. Still WAY too complicated (look at all the stuff you have to know and type)
2. Doesn't seem to support my preferred OS (Windows) or web server (IIS) whatsoever. Which is strange since, from my experience, installing certs in IIS is already far easier than in Apache and Nginx. (Although maybe that's why they perceive it as less of a priority?)
We've had hundreds of people remark that they found Let's Encrypt faster and easier to use than other CA offerings (though most of those people were using Apache on Debian-based systems), so I think we are getting somewhere. But we definitely hope that upstream web server projects and hosting environments will integrate ACME clients of their own, like Caddy has done, so that eventually most people won't need to run an external client at all and won't have to worry about compatibility or integration problems.
The website mentions at the bottom that they're intending to get all of this automated, but they're not at that point yet; they're still in public beta. Certainly all those commands look automatable, just with enough integration with lots of distros / web servers, testing, and debugging. The Let's Encrypt protocol (ACME) is very much designed so that a web server can acquire a certificate with just about no human interaction besides telling it to do so, and keep it up-to-date with no human interaction.
I certainly agree that the instructions on that website are still way too complicated for general use, though far, far simpler than the status quo ante Let's Encrypt.
I didn't realise that people getting SSL certs and administering servers don't know how to read a literally one-page rundown of what to run. They also have helper scripts to make it much simpler.
> Which is strange since, from my experience, installing certs in IIS is already far easier than in Apache and Nginx. (Although maybe that's why they perceive it as less of a priority?)
nginx literally takes less than 10 minutes to set up not only SSL, but also CSP and several other very important security features.
Do you actually own hundreds of personal websites? (And you could still desync them, anyway.) Or is this a use case where wildcards would be useful. I sort of disagree with LE's decision to not care about wildcards for now, though I understand that it's simpler, at least while it's in beta.
Still, with enough automation, you can request 5 per week in a cronjob, which will let you get at least 40-something websites, even with the recommended 60-day renewal cycle. :-P
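The "40-something" is just arithmetic on the advertised limits, assuming one certificate per site and no SAN bundling (which would stretch it much further):

    certs_per_week = 5
    renewal_cycle_days = 60                       # recommended renewal point for 90-day certs
    weeks_per_cycle = renewal_cycle_days / 7      # ~8.6 weeks before the first cert comes due again
    print(int(certs_per_week * weeks_per_cycle))  # -> 42 distinct sites sustainable in steady state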
Huh, I'm pretty sure I used more than that when I was first setting it up with no problems.
> Certificates/Domain you could run into through repeated re-issuance. This limit measures certificates issued for a given combination of Public Suffix + Domain (a "registered domain"). This is limited to 5 certificates per domain per week.
Maybe that's how you managed to get more than 5.
(Yes I should move to another host but that is too much hassle for me right now.)
Or you can go a more manual approach via https://gethttpsforfree.com/ but you will need to manually renew your certificate every 90 days.
I'm not going to switch production to it yet, but it's looking like it'll go on my home server pretty soon.
It's not clear that the certificate authority system was or is the best solution to this problem, but it is a problem that calls for some solution. In the case of Domain Validation, we only try to confirm that the key is appropriate to use with the domain name, which is the smallest possible kind of confirmation that can be done to address the crypto problem. There's no attempt to validate or verify anything else about the site.
Having the browser be able to track and tell me that "Though we aren't sure this is actually google.com, we do know that the exact same cert has been used the last 50 times you visited this website" is something I'd consider to be useful. (Actually, telling me if it changes would be the useful bit).
That would at least be useful for self-signed certs (though those aren't really needed in light of Let's Encrypt...)
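A browser doesn't do this today, but the trust-on-first-use bookkeeping being described is small; a rough sketch outside the browser (the pin-store path and host are placeholders):

    import hashlib, json, os, ssl

    PIN_FILE = os.path.expanduser("~/.cert_pins.json")   # hypothetical local pin store

    def fingerprint(host, port=443):
        pem = ssl.get_server_certificate((host, port))    # fetched without validation
        return hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()

    def check(host):
        pins = json.load(open(PIN_FILE)) if os.path.exists(PIN_FILE) else {}
        seen = fingerprint(host)
        if host not in pins:
            print(host, "- first visit, pinning", seen[:16] + "...")
        elif pins[host] == seen:
            print(host, "- same certificate as every previous visit")
        else:
            print(host, "- CERTIFICATE CHANGED: could be a re-key, could be a MITM")
        pins[host] = seen
        json.dump(pins, open(PIN_FILE, "w"))

    check("example.com")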
I'm curious. Has anyone ever encountered that scary warning you get when an SSH host key changes, and thought "oh man, I'm getting MITMed, I'd better not connect to this server!", instead of thinking "oh right, I guess they reconfigured the server, now what command do I type to make the warning go away"?
On the server side it's better for each server to have its own private key and certificate which is valid for a short period of time and frequently renewed. So the compromise of one server does not compromise certificates on any other servers, and the useful lifetime of a compromised key is very limited.
I think DNSSEC and DANE are the best solution. Allow the certificate thumbprints to be published securely in DNS. At least then we reduce the number of trusted authorities to the TLDs, and the scope of authority for each one is automatically restricted to its own TLD.
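For reference, the DNS side of DANE is just a hash of the certificate published under a service-specific name. Given a local PEM copy of a site's cert (filename is a placeholder), the record looks like this -- usage 3, selector 0, matching type 1, i.e. "this exact end-entity certificate, identified by its SHA-256":

    import hashlib, ssl

    pem = open("example.com.crt").read()                 # hypothetical local copy of the cert
    digest = hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()

    # The TLSA record a DANE-aware client would look up and compare against the handshake:
    print("_443._tcp.example.com. IN TLSA 3 0 1 " + digest)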
> Having the browser be able to track and tell me that "Though we aren't sure this is actually google.com, we do know that the exact same cert has been used the last 50 times you visited this website" is something I'd consider to be useful. (Actually, telling me if it changes would be the useful bit).
Isn't that what you do when you make a security exception for a self-signed certificate? Having that enabled by default lulls people into a false sense of security.
Because you have to do DH and all of the key negotiation anyway (at which point you already have a key, so why not encrypt and HMAC at the same time?). If you had two systems for this, it would be pointlessly inefficient (why have two DH key exchanges for the same channel).
You need a way to verify that the site you're connecting to really is who it claims to be before you can trust even an encrypted connection to that site. Otherwise you don't know whether you just established an encrypted connection to the website, or an encrypted connection to a malicious attacker.
See eg https://bugzilla.mozilla.org/show_bug.cgi?id=220240#c6
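To see why authentication can't be skipped, here's a toy illustration: with plain, unauthenticated Diffie-Hellman, an active attacker simply runs the exchange once with each side and sits in the middle. (Tiny made-up parameters for readability; real TLS uses large groups or elliptic curves.)

    import secrets

    p, g = 0xFFFFFFFB, 5   # toy parameters, NOT secure

    def dh_keypair():
        priv = secrets.randbelow(p - 2) + 1
        return priv, pow(g, priv, p)

    c_priv, c_pub = dh_keypair()   # client
    s_priv, s_pub = dh_keypair()   # server
    m_priv, m_pub = dh_keypair()   # Mallory, intercepting and substituting public values

    client_key = pow(m_pub, c_priv, p)        # client unknowingly keys with Mallory
    server_key = pow(m_pub, s_priv, p)        # so does the server
    assert client_key == pow(c_pub, m_priv, p)
    assert server_key == pow(s_pub, m_priv, p)
    # Mallory can now decrypt, read, and re-encrypt everything in both directions.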
I challenge everyone to find in their extended group of friends and colleagues, and their friends and colleagues, a single person who consistently checks the fingerprint* on every first SSH connection.
I'd personally have a hard time finding someone who even knows it matters.
And if you don't? A MITM can get your password, or tunnel your key to another host, barring some crazy ~/.ssh/config which nobody has.
WiFi's WPA2 actually does this better than SSH; the passphrase authenticates both parties to each other, not just one way. I can't set up a hotspot with your home SSID and intercept your PSK---even on initial connection.
SSH: nice in a cryptographic utopia, not better than self signed SSL certs when applied to human beings.
SSH is just not suitable for humans. Apparently.
* a significant part of it, not just the security-through-obscurity random 2 letters in the middle and the last four.
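For anyone taking up the challenge, the out-of-band check itself is only a couple of commands; a small wrapper (host name is a placeholder) that prints a host's key fingerprint so it can be compared against what your first interactive connection shows:

    import subprocess, tempfile

    host = "example.com"   # hypothetical host, checked from a network you already trust
    scan = subprocess.run(["ssh-keyscan", "-t", "ed25519", host],
                          capture_output=True, text=True, check=True)
    with tempfile.NamedTemporaryFile("w", suffix=".pub") as f:
        f.write(scan.stdout)
        f.flush()
        subprocess.run(["ssh-keygen", "-lf", f.name], check=True)   # prints the fingerprint

Of course, running this over the same untrusted network defeats the point; it only helps if you run it from somewhere the MITM can't reach.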
Being able to make the statement "Either you've been consistently MitM'ed by the same entity for the past three years, or your little cloud-based Debian box is actually secure" is a lot more useful than not tracking SSH fingerprints at all. I certainly wish my browser would track my self-signed certs in this way.
Without going into the question of how many bits of entropy that actually has when used with human beings in real settings, and just assuming it's a perfect check, my question stands: how many people can you find who use this?
Many SSH clients don't even support it, at all. PuTTY and almost anything that uses SSH for tunneling.
When they do: how many of your hosts do you know the image of?
Again: nice idea, but utterly impotent in our universe.
Compare to the efficiency of e.g. WPA2 keys: less theoretically beautiful, but much more efficient with humans.
Probably not very many, but it's really only useful for people that ignore basic security features anyway. (Key auth)
>When they do: how many of your hosts do you know the image of?
None, I use key auth like any reasonable person would.
That is, key auth as reasonable people use it, as you said.
> but it's really only useful for people that ignore basic security features anyway. (Key auth)
is precisely the point: that's a lot of people. SSH doesn't work for those people. We can play the blame game, but at the end of the day, clearly something is "not right".
And these are people who use SSH to begin with. Not typically technologically illiterate, I would guess. If they can't even be arsed to use "basic security features", what good is this system, then?
Again: there is a way to use SSH properly, yes. But rare is the person who does this.
(But key auth is orthogonal to host fingerprinting anyway, this is kind of a red herring)
Yes. Key auth will protect you from your SSH connection being listened to, and will make credential theft reliant on social engineering. However, someone could still pretend to be the server (potentially stealing your commands), but there really doesn't exist any way to solve that.
>is precisely the point: that's a lot of people. SSH doesn't work for those people. We can play the blame game, but at the end of the day, clearly something is "not right".
Nothing works for those people, at least generally with SSH users you can assume that they should know better.
>Again: there is a way to use SSH properly, yes. But rare is the person who does this.
I'd hardly consider SSH key auth users rare.
>(But key auth is orthogonal to host fingerprinting anyway, this is kind of a red herring)
But it almost completely fixes the main problem caused by MitM, someone gaining access to the server you're logging into.
No, it doesn't.
When was the last time you verified a host key out of band?
And if you're using SSH, you know well enough to know why you should do the damn legwork to verify the key. What do you expect for end users?
Furthermore, if nobody is doing out of band verification on the first pass, how do you expect users to distinguish between an attack and legit host key change?
Public key authentication must be used for authentication.
If it's possible to perform the attack passively (e.g. on pcaps), it doesn't qualify.
This attack has to affect setups using both the latest OpenSSH client and server with default configuration.
This attack has to be able to be performed in realtime using the processing power of a 2015 macbook model of your choosing.
This attack cannot rely on the attacker having any other access but the ability to tamper with the connection however much he wants.
This attack cannot rely on known flaws in the encryption algorithms.
With full MitM I am referring to the ability to at least access the plaintext communications between the client and server. Eg if the user runs 'sudo', the ability to see the password entered.
Please consider this offer legally binding, if you have any questions I will answer them and you can consider the answers binding too.
The reason I'm wondering is because with AMP, there seems to be a clear strategic benefit from having all of that ad serving data running through them even if the advertisers and publishers are not using the DoubleClick stack or Google Analytics.
By bringing this to market from the standpoint of "improving" the mess publishers have brought upon themselves and speeding everything up, there's definitely a clear win for consumers here. That said, it leaves the door open for something similar to the mobilepocalypse, where Google updated their ranking signals on mobile to significantly favor mobile-friendly sites. I could easily see this going a similar route, where it is a suggestion...until it's not, because if you don't implement it you'll lose rankings and revenue (and coincidentally feed Google all of your ad serving data in the process).
To be clear, I don't knock them for taking this approach, because if it works it is a very smart business move that will be beneficial to a lot of parties (not just Google). Just looking for other insights into the business strategy behind something like pushing for encryption, and AMP.
The two common reasons for MitM are spying and inserting/replacing advertisements. The latter is stealing from Google, so they want to stop it before it grows too common.
The only way they can MITM me is if they compromise my PC as well and install their root CA.
No reason to compromise when you can force the user.
Is VeriSign going to refuse a certificate to AT&T?
Verisign will not issue a certificate to AT&T for google.com--no matter how nicely AT&T asks.
Here's what happened when Symantec issued fake Google certificates last year:
"Therefore we are firstly going to require that as of June 1st, 2016, all certificates issued by Symantec itself will be required to support Certificate Transparency. After this date, certificates newly issued by Symantec that do not conform to the Chromium Certificate Transparency policy may result in [annoying certificate warnings, just like self-signed certs]."
And that was just the work of a couple of employees who were inappropriately testing their issuance system and weren't even intending to attack anything. They got fired, which I expect is also a big part of why Google's response was so light.
I certainly hope so.
When the SERP loads, all the results link to the real webpages, so that you see their address in the browser status bar when hovering over a link. Clicking any result link triggers a script that replaces the URL with https://google.com/url?url=the_real_webpage_url.
When you click through, you're clicking a link from google.com to another link on google.com, which redirects to the webpage you intended to visit. The referrer the webpage sees is the intermediate google.com/url page, instead of the search result page. This prevents websites from getting search term data from the SERP URL, if it was present, by removing that URL from referrer headers entirely.
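A hypothetical example of one of those intermediate links and what can (and can't) be read out of it:

    from urllib.parse import urlparse, parse_qs

    # Illustrative intermediate link of the kind described above.
    link = "https://www.google.com/url?url=https%3A%2F%2Fexample.com%2Farticle"
    print(parse_qs(urlparse(link).query)["url"][0])   # -> https://example.com/article

    # Note what's absent: the search terms. As a referrer, this URL tells the
    # destination site nothing about what the user searched for.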
Not related to HTTPS at all. This happened completely independently. It happened because Google went from having search URLs like this
As a website owner you're basically being coerced into letting Google snoop on your users, at least if you want to know how they entered your site. And the fact of the matter is that most (all?) companies are willing to make that trade-off.
All in all pretty sad and very creepy.
As to the feature itself, I don't think it's a big deal at all. We all know that the average internet denizen doesn't understand HTTPS at all and would just as likely ignore it as anything. The only people that would see and understand this new red X for what it represents would know that it doesn't really matter that the lolcat meme they just downloaded came through an unsecured channel.
Chrome and Firefox have both had to take extreme measures for very similar things, such as web sites using expired (or even unvalidated/spoofed) SSL certificates. Google even reported that using a giant red page with warning labels didn't stop people from clicking through!
We tried migrating to HTTPS-only several times, and every time we got a huge penalty from Google.
So Google is the main driver for HTTP websites.
Do you mean that they don't consider HTTPS and HTTP the same? Otherwise, I don't understand your point here.
It is good to see that the sites that matter are mostly https:// already for me. The http:// tabs I have open, such as this article, are actually insecure when you think about the amount of trackers on them, so the 'x' is very apt.
Basically: register an account, enter your domain, and update your DNS records with an A record (replacing the GitHub Pages IP) and a TXT record (for verification).
While the DNS change took effect in a couple of minutes on Gandi, Kloudsec DNS took an hour or two to register the change. After that, you go into the "Security plugin" and enable it. If you're using an apex domain, you can remove the www. HTTPS request, since you won't get the cert for that (if you do have an apex domain then you probably know about the CNAME trick on Pages, unless your DNS provider supports ANAME or ALIAS records for the apex domain - Gandi doesn't). It took a couple of hours again to get the cert.
When it's done, click on the "Settings" cog icon for the desired HTTPS domain and enable HTTP -> HTTPS redirect and HTTPS rewrite, then you're set.
You first have to upload your SSL certificate to AWS IAM  (you only have to do this once, or you can just purchase your certificate from the AWS console now too). Then, all you have to do is create a new CloudFront distribution and point the origin to your subdomain.github.io URL and select your SSL certificate from the drop-down, then point your CNAME record to the CloudFront distribution.
They should have it force https.
Is that more or less bass-ackwards?
I believe Comcast has been accused of doing something shady like that but I don't live in US and have no idea. Just read the news.
They cheerfully modify content, and have built infrastructure to do it even more.
I recall switching the product pages of an e-commerce site, which had up to 50 small images per page, from https to http, and the change very significantly increased page load speed for the end user.