There should really be a standard for dealing with this, like a flag on DHCP saying "O HAI you need to log in here", and providing a REST endpoint that will tell the OS the status of the connection at any given time.
Since an auto-upgrade to https would break a considerable number of examples, and there's no compelling business need on their part to promote https-only, it's much more likely to stay available through http than just about any other website run by commercial interests.
Host example.com not found: 5(REFUSED)
Google uses http://connectivitycheck.gstatic.com/generate_204.
Apple uses http://captive.apple.com/hotspot-detect.html and http://www.apple.com/library/test/success.html.
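Those probe URLs are easy to check by hand. A minimal sketch, assuming Google's probe returns an empty 204 and Apple's returns a 200 page containing the word "Success" (the classification logic and function names here are my own, not any vendor's API):

```python
import urllib.request

# Probe URLs from the comments above.
PROBES = {
    "google": "http://connectivitycheck.gstatic.com/generate_204",
    "apple": "http://captive.apple.com/hotspot-detect.html",
}

def classify(probe: str, status: int, body: bytes) -> str:
    """Decide whether a probe response looks like open internet or a portal.

    Google's probe should come back 204 with an empty body; Apple's comes
    back 200 with a page containing "Success". Anything else suggests a
    captive portal has intercepted the request.
    """
    if probe == "google":
        return "open" if status == 204 and not body else "captive-portal"
    if probe == "apple":
        return "open" if status == 200 and b"Success" in body else "captive-portal"
    raise ValueError(f"unknown probe {probe!r}")

def check(vendor: str = "google") -> str:
    """Fetch the probe URL and classify the live response."""
    resp = urllib.request.urlopen(PROBES[vendor], timeout=5)
    return classify(vendor, resp.status, resp.read())
```

A portal that 302-redirects the Google probe to its login page, or rewrites the body, fails the 204/empty check and gets classified as "captive-portal".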
Both happen in Chrome, hence the same browser context.
(As of writing it still works if you try it in a fresh browser. However, once the browser has the HSTS info cached, I suppose it will always attempt an HTTPS connection.)
If they continue to support this use case, it may be hard to do without introducing bugs - one exposure to a 'real' service which spits out an HSTS header (or the preload list), and the machine loses the ability to conduct Google searches.
I think they'll either have to use some nasty workarounds, or they'll need to use a different domain - which isn't necessarily something you want to do when you are trying to provide simple rules which allow users to identify phishing.
More likely they'll simply force sites that want to continue to MITM to load their own CA roots.
Although I don't think this is their motivation, it also has the neat side-effect of making Google's Chromebook & device management services more useful.
Those wishing to spy on their users without SSL could just disable HSTS in the browsers they provide.
> Turn on SafeSearch VIP
> To force SafeSearch for your network, you’ll need to update your DNS configuration. Set the DNS entry for www.google.com (and any other Google ccTLD country subdomains your users may use) to be a CNAME for forcesafesearch.google.com.

> We will serve SafeSearch Search and Image Search results for requests that we receive on this VIP.
You can enforce SafeSearch over https now: https://support.google.com/websearch/answer/186669?hl=en
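Whether that CNAME trick is actually in effect on a given network can be checked by resolving both names and comparing addresses. A sketch with an injectable resolver so it can be exercised offline (the heuristic and function names are mine; real deployments would inspect the CNAME record directly):

```python
import socket

def _addrs(name, resolver):
    # getaddrinfo entries are (family, type, proto, canonname, sockaddr);
    # sockaddr[0] is the IP address string.
    return {entry[4][0] for entry in resolver(name, 80)}

def safesearch_enforced(resolver=socket.getaddrinfo):
    """Heuristic: if the local resolver CNAMEs www.google.com to the
    SafeSearch VIP, both names resolve to an overlapping address set.
    """
    return bool(_addrs("www.google.com", resolver)
                & _addrs("forcesafesearch.google.com", resolver))
```

Calling `safesearch_enforced()` with the default resolver queries whatever DNS the machine is configured to use, which is exactly the thing the network operator rewrote.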
In the US, schools still have a legal obligation to censor the Internet or they lose federal funding.
Edit: a whole lot of money! https://en.wikipedia.org/wiki/E-Rate#Modernization
School boards, no way. I can't think of an easier way to torpedo a school board career (or indeed a "pillar of the community" parent's status in the neighborhood) than by mentioning that they think children should have easier access to pornography at school.
By context I can guess what you mean, but by itself it's a very ambiguous statement :)
But "just" doing either of these things turns out not to be simple for many organizations. It's bad enough needing to update your system image/deployment scripts (if you have any!). You also need to figure out what to do about all the devices you don't own. BYOD is a thing.
In my reading, it doesn't say that a compliant user-agent must not set HSTS policies for self-signed certs; rather, it says a compliant user-agent will show an un-remediable error upon attempt to visit the website.
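That reading can be stated as a small decision table. A sketch of my interpretation of RFC 6797 section 8.4 (the outcome labels are mine): for a known-HSTS host, any certificate problem is a hard fail with no click-through, while a non-HSTS host may get the usual warning with an override.

```python
def connection_outcome(hsts_known: bool, cert_trusted: bool) -> str:
    """My reading of RFC 6797 section 8.4: for a host with a cached HSTS
    policy, any secure-transport error (a self-signed cert included) is
    fatal with no user recourse; otherwise the UA may offer an override.
    """
    if cert_trusted:
        return "proceed"
    return "fatal-error" if hsts_known else "warn-with-override"
```

So nothing stops the server from *sending* the HSTS header over a self-signed connection; the UA just never gets a trusted channel over which it would honor it.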
HSTS preloading is just distorting the market towards the already large sites.
A smaller site now will get ads injected by your ISP, but their larger competitor doesn’t.
Yet, you obviously can’t add every single private blog to the preloading list.
I tried getting my personal blog added; it’s just impossible.
So your solution is only good for the already huge websites.
What happened to democratizing the internet and giving every single personal blog the same tools as every huge site?
Why? https://hstspreload.appspot.com/ didn't work?
I wonder what will happen once the preload lists are several billion entries long?
I got my personal blog, which gets less than 100 unique visitors a month, added with no problems at all.
Last I tried was a few years ago, when the official stance was "only the top 100 will be added".
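For what it's worth, the header-side rules the submission form checks can be sketched like this (the one-year minimum and the required directives are as I recall them from hstspreload.org, so treat the constants as assumptions; the form also requires an HTTP-to-HTTPS redirect, which a header check alone can't see):

```python
import re

PRELOAD_MIN_AGE = 31536000  # one year, per hstspreload.org's stated rules

def preload_eligible(hsts_header: str) -> bool:
    """Rough check of the header-side preload requirements:
    max-age of at least a year, plus the includeSubDomains and
    preload directives.
    """
    h = hsts_header.lower()
    m = re.search(r'max-age="?(\d+)', h)
    return (m is not None
            and int(m.group(1)) >= PRELOAD_MIN_AGE
            and "includesubdomains" in h
            and "preload" in h)
```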
2. Bootstrap MITM vulnerability (RFC section 14.6)
3. NTP attacks (RFC section 14.7) 
4. Send RSTs on all HTTPS traffic. Eventually the user will get fed up with the website not working and try plain HTTP on a different browser/client which doesn't have the HSTS policy cached.
4.1. After the new client connects, force some kind of error or warning on the connection, but still allow it to continue, and use SSL stripping for any initial plaintext connections. HSTS will not be set if any warnings or errors are found on the connection. (RFC section 14.3)
5. Phishing: Send non-secure URLs to your target with instructions that if the connection fails, try the same non-secure link on a different browser. (It may be easier to just register a fake-but-similar-sounding domain and give it a valid cert, and send links to that)
5.1. Phishing with a non-secure root domain cert (RFC section 14.8)
6. Develop a method to determine when a client's policy will expire - or just wait - and wait to attack the moment of the next request, which will [potentially] be in plain HTTP.
7. Invalid configuration, such as forgetting to include the 'includeSubDomains' flag.
8. Internationalized domain names (RFC section 14.10)
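Item 7 is easy to demonstrate: parse the policy and check which hosts it actually covers. A sketch (directive handling simplified from RFC 6797 section 6.1; the `covers` helper is my own illustration, not anything from the RFC):

```python
def parse_hsts(header: str) -> dict:
    """Parse a Strict-Transport-Security header value (simplified from
    RFC 6797 section 6.1)."""
    policy = {"max_age": None, "include_subdomains": False}
    for directive in header.split(";"):
        name, _, value = directive.strip().partition("=")
        name = name.strip().lower()
        if name == "max-age":
            policy["max_age"] = int(value.strip().strip('"'))
        elif name == "includesubdomains":
            policy["include_subdomains"] = True
    return policy

def covers(policy: dict, requested_host: str, known_host: str) -> bool:
    """Does a policy learned for known_host protect requested_host?"""
    if requested_host == known_host:
        return True
    # Item 7 above: without includeSubDomains, login.example.com stays
    # open to SSL stripping even though example.com set a policy.
    return (policy["include_subdomains"]
            and requested_host.endswith("." + known_host))
```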
Pre-loaded HSTS lists are affected by some of the above attacks, and public-key pin lists are eventually affected too, because at some point you need to rotate a key.
These are just the ten attacks identified so far, and they all stem from the design itself, which allows a MITM on first connection or after policy expiration. If your whole security feature is designed to allow MITM even once, it is guaranteed to happen at least once. This is why it's a well-intentioned bad hack.
The solution was - and still is - to design a protocol which will never allow insecure connections. This could be accomplished by forking SPDY or HTTP/2 and naming the new protocol "SECURE", passing out URLs like 'secure://google.com/'. That way it would be totally obvious to both the user and the browser that the connection should only ever be made with at least a valid signed cert, and HSTS would never be needed (though public-key pinning still would be). Mandatory DNS-based security would improve this further.
This almost happened with HTTP/2, but nobody proposed the new protocol name. Terrible missed opportunity.
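The dispatch rule such a client would follow depends on nothing but the scheme. A sketch (the 'secure' scheme here is the hypothetical one from this comment, not a registered URI scheme):

```python
from urllib.parse import urlsplit

def allowed_transport(url: str) -> str:
    """Sketch of the dispatch rule a 'secure'-aware client might use."""
    scheme = urlsplit(url).scheme.lower()
    if scheme == "secure":
        return "tls-only"    # never fall back to plaintext, no override UI
    if scheme == "https":
        return "tls"         # today's behavior; HSTS rules apply
    if scheme == "http":
        return "plaintext"   # strippable; the attacks above apply
    raise ValueError(f"unhandled scheme: {scheme!r}")
```

The point of the proposal is that the first branch needs no cached state, no preload list, and no expiry: the URL itself carries the policy.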
The only way your proposed solution (a secure-only protocol) avoids the design flaw you identified (non-secure connections are sometimes possible) is if everyone immediately switches to it and retires HTTP. That's not going to happen.
Considering a number of your attacks rely on the user not looking at the site security level (or in some cases, even at the domain name spelling), I find it curious that you think another visual indicator will make a difference.
And no, you don't have to retire HTTP. And no, we don't have to switch to it immediately. You can simply add it as a new protocol and begin educating users.
No user that I personally know has ever been trained to identify markers on a browser, or the difference between 'http' and 'https'. It's all too complicated... What, is that a padlock? A check mark? Green? Blue? Yellow? What does "ache tee tee pee ess" mean anyway?
But what is very simple and intuitive is the word "secure". Just make sure the first word is "secure". It even rhymes!