Okay, great, Chrome has a place where you can delete HSTS pins.
Now how do I delete a cached Accept-CH value? It's been two months, and any computer I accessed the test server through while it was sending the bad Accept-CH value still chokes on literally any request made to the test domain. (And no, clearing site data doesn't do anything.)
(If you're wondering what I did that screwed things up so bad: I had the server send an Accept-CH response header in response to an OPTIONS request, in the hopes that it would be delivered in the CORS preflight and therefore get me high-entropy client hints sent in the actual [same-site] XHR. It was delivered in the CORS preflight alright...)
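Roughly what the exchange looked like (the path, origin, and the particular hints below are placeholders, not the real ones):

    OPTIONS /api/data HTTP/1.1
    Origin: https://app.test.example
    Access-Control-Request-Method: GET

    HTTP/1.1 204 No Content
    Access-Control-Allow-Origin: https://app.test.example
    Accept-CH: Sec-CH-UA-Model, Sec-CH-UA-Platform-Version

And evidently once Chromium accepts that Accept-CH, it stores the preference for the whole origin, which would explain why every later request to the test domain is affected.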
Sounds like you'd need to delete your browser user profile (which holds all the stored info for your user), so it starts a new "empty" one like a new user would have.
I mean, "whoopsie" bad pins are the main reason that most headers that indicate caching have max enforced cache lifetimes. According to a previous draft of the Client Hints spec, Accept-CH's cache lifetime was supposed to only be 10 minutes, for exactly that reason! But apparently the only implementation so far (in Chromium) didn't have any internal TTL-based eviction logic for these entries at all...
You can also bypass Chrome security warnings that don't have the "Continue anyway" link by typing `thisisunsafe` on the page (yes, blind). You'll know if it worked by the last `e`.
If you ask around (like on Reddit) they'll tell you "hurr durr but the rfc", "hurr durr security". Thank you, but I am an adult, and this is a computer I bought with the money I earned. I want to be able to tell my computer what to do. /rant
I thought you were kidding until I looked it up. Apparently this is also changed from time to time!
> The chrome developers also do change this periodically. They changed it recently from badidea to thisisunsafe, so everyone using badidea suddenly stopped being able to use it.
Seattle.gov started serving with HSTS `includeSubDomains; preload` over a month ago, broke all sorts of subdomains, and are still picking up the pieces.
City council ordinances and resolutions are hosted at http://clerk.seattle.gov/, not that it matters, since you can't view the site.
To those saying you can view the site: it's presumably because you didn't visit seattle.gov while it was setting HSTS on subdomains. Presumably the parent commenter did, and is now unable to access http://clerk.seattle.gov/ because of that.
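For reference, the header seattle.gov would have been sending looks roughly like this (the max-age value here is a guess):

    Strict-Transport-Security: max-age=31536000; includeSubDomains; preload

Once a browser sees that on any seattle.gov response, includeSubDomains makes it upgrade every request to every subdomain, clerk.seattle.gov included, to HTTPS, whether or not that subdomain actually serves HTTPS.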
There's a line in the fwd proxy config for these rare situations where I need to use HTTP without TLS:
use_backend b407-notls if { hdr(host) -m str -f /somedir/hosts-notls.txt }
Normally the proxy will use TLS for any request from any application, not just a web browser, regardless of whether the URL begins with http:// or whether it's sent to port 80.
To avoid HSTS supercookies^1 I can also add a response header.
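Presumably something along these lines, assuming the header in question is Strict-Transport-Security with a zero max-age (which, per RFC 6797, tells the browser to drop any HSTS entry it has stored for the host):

    http-response set-header Strict-Transport-Security "max-age=0"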
If the site is on some pre-approved list compiled into the browser, I can remove it and recompile. But I am not interested in such user-hostile browsers for frequent use. Good for online shopping, banking, other commercial transactions, but not for non-commercial purposes.
The ubiquity of TLS is of course a relatively recent phenomenon.
It seems like Internet Archive still does lots of crawls of port 80; I wonder if that's also true for Common Crawl
If so, it's interesting to think about how much of the data used to train ChatGPT and other AI may have come from crawls over port 80
Somewhat relatedly: a site with HSTS is not supposed to let you click through an invalid cert warning. The browser should also ignore HSTS with invalid (self-signed) certs. But there are bugs, and thus you can find yourself in the position where you're unable to ignore the cert error on a site that never had a valid cert.
HSTS will go the way of password rules and password change requirements. And by that I mean people that don’t really know anything about security will ask for it because it checks a box and someone will add it because they’re paid to not have opinions and just write code and in 20 years we’ll still be fighting a stupid battle against HSTS because we can’t have nice things. HSTS was dead in the water from day one.
The hard part, I believe, is to spoof the DNS entry and then get a cert from a CA that is in the browser's root store. Were there cases where HSTS actually stopped something after that? (Serious question - I always wondered how peripheral a problem HSTS actually solves.)
Once you have managed to poison DNS, so your server is contacted instead of the right one, without HSTS you could potentially⁰ serve your responses using plain HTTP with no in-your-face warning to the user¹. With HSTS that initial request won't be plain HTTP if the user has been to the site before or the name is present in their browser's HSTS preload list.
Chrome will default to HTTPS when given a typed URL that doesn't specify protocol these days, falling back to HTTP if that connection fails, but that doesn't protect you from plain HTTP links in other pages or stored as bookmarks. In fact this doesn't protect you as much as you think it might: IIRC this fallback to HTTP happens for any connection error including being served an invalid certificate, so a DNS-poisoning based MitM attack could still work for some users² meaning HSTS would still be useful even if all browsers used the same HTTPS-default-HTTP-fallback procedure.
> I always wondered how much of a peripheral problem HSTS solves
HSTS, especially with preload, solves a potentially serious but likely-to-be-rare problem. Even if the circumstances where it saves the day are rare, it is so easy to implement it is worth (IMO) the small amount of admin for that protection.
--
[0] if the initial request is plain HTTP
[1] some browsers will display an “insecure” flag when a page delivered by HTTP contains a form, or at least a form with a password box, but a user focusing on just what they are typing may not notice that
[2] as the fallback to HTTP, if not blocked by HSTS, will happen without warning
I'm confused by your question. The entire point is to force an attacker to do "the hard part" that you list there, which is genuinely very hard to do. So it won't do anything "after" that.
Sorry for not having been clear. To put it another way: are there hard stats about HSTS's actual value? How many orgs that had their DNS defaced and new certs issued were saved by HSTS?
Since a mistake with HSTS is catastrophic, setting it must make sense risk wise.
Are you confusing HSTS with HPKP like the other poster?
HSTS says the site has to be https, nothing else. It does nothing if someone gets a valid cert. It exists to prevent every attack weaker than that, such as local MitM.
And mistakes are not catastrophic at all unless you have some horrible legacy setup that can't do https.
oh crap, I was thinking HPKP and reading/writing HSTS. Sorry for the entropy.
Yes, HSTS (this time HSTS :)) is useful (though there are problems with scaling for the initial seeding of pages; maybe that has already been solved).
You know what, I was thinking of HPKP, which is obsolete. HSTS, while I doubt it’s actually prevented a single adversarial MITM, isn’t a terrible idea.
> Your anonymity is preserved because we handle the query on your behalf. Don’t trust us? Erm, we have root. You do trust us with your data already. You trust us not to screw up on your machine with every update. You trust Debian, and you trust a large swathe of the open source community. And most importantly, you trust us to address it when, being human, we err.
If I had a nickel for every out-of-touch South African billionaire with an interest in spaceflight, I'd have two nickels... which isn't a lot, but it's weird that it happened twice, right?
By using Ubuntu I’m trusting their binaries. By using any distro I’m trusting their binaries, or even if I compile everything from scratch I certainly haven’t read and understood every line of code.
He wasn't wrong until he made the statement, at which point I stopped trusting Ubuntu's binaries and sought them from other distros instead :)
In any case, there's a vast difference between trusting binaries running on a local machine v. trusting someone to competently (and not maliciously) administer a remote machine. Shuttleworth's statement would've been less unreasonable in the Before Times when cybercriminals breaking into servers and getting their hands on PII to sell on the Dark Web was an occasional and exceptional thing instead of just Tuesday; that time had been long gone even by then (let alone now).
The problem is that Chrome treats all of localhost, regardless of port, as within the scope of HSTS. Firefox scopes this per port, so you don't run into these issues.
Happened to me many times, and chrome://net-internals/#hsts is not very user-friendly (no feedback on whether the deletion succeeded or whether the domain was even there).
Perhaps, but I have personally gone back all in with Firefox. I've gained the ability to not have a wanker tell me how https works and be a total pain. Cr browsers will never enter saved creds into a site that isn't "fully trusted". FF will respect your choice after quite a lot of admonishment.
I've been using SSL and TLS longer than some of the knobends programming these fucking things have been alive. There is a difference between being opinionated and a dick. FF is opinionated and Cr is a dick.
To be fair: FF and Cr have an equally awful "show me the fucking cert and stop making my life more miserable" workflow. Why is it such a drawn out routine to see the cert details? I personally spend a lot ... a lot of my time with SSL/TLS - and you fuckwits literally make my life harder by hiding it away in some silly "don't worry your pretty head" thing.
My first browser was telnet and I am mildly irritated.
FF is the least irritating of a bad lot. I'd really like a "programmer's build" or something, with knobs for this stuff - FF used to be better about exposing them in preferences, even if it was a bit obscure.
I can usually get by with curl and/or wget for troubleshooting purposes, but dealing with broken things you actually need to use to fix is far more annoying than it should be, "for my own good".
The thing that irrationally irritates me the most about FF is the uncounted minutes of my life spent waiting for the countdown buttons to let me click them.
Perhaps if we whine about this in an environment populated with a lot of like minded folk, the message might get through.
I can't help but think that there are people in Google, Mozilla and co. who simply put up with all this nonsense.
Has anyone developing a browser ever bothered to think that the audience is quite diverse, that there are multiple uses for a browser, and that different people use the bloody things? My wife has rather different use cases for her browser than I do.
The lack of imagination from web browser developers - or at least their directors or specifiers is absolutely breathtaking.
You will have to do rather better than "Here be dragons" etc as a UI for this sort of thing.
> FF is the least irritating of a bad lot. I'd really like a "programmer's build" or something, with knobs for this stuff - FF used to be better about exposing them in preferences, even if it was a bit obscure.
Yes they are ... hidden away in about:config. I assert there is more than one way to rig a browser and the current config nonsense ... is nonsense and wankery. OK I am a bit peeved.
No one is served properly with the current model of one browser config fits all. Why can't I have a browser that accepts that I know what I am doing wrt SSL/TLS?
Kind of tangential: I recently wanted to move the hosting of my hobby/side-gig site to an S3 static site (generated by me). I thought it would be a few configuration details in S3 and good to go.
A full day later, just to be usable in Chrome (since it defaults to https), I had to get a cert and stand up a CDN, just to make sure your average dingus web user wouldn’t try to navigate to my page and hit nothing.
To be clear, my site doesn’t even have Javascript, and certainly no form submission. HTTPS is complete overkill and now it’s even more overkill overkill since I could be bombarded by half the world population and my site would likely stand up to it.
In a way, they killed the hobbyist website with that bullshit.
So firstly, I agree - browsers should default to https not http if the protocol is not specified. It's a pain that they do http because it means I have to open port 80 and issue a redirect.
That in itself is no big deal, since I usually have port 80 open anyway (for LetsEncrypt support.)
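For what it's worth, the redirect itself is a one-liner in most servers; in haproxy (to stay with the config syntax used upthread) it would be something like this on the port-80 frontend:

    http-request redirect scheme https code 301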
That said, you raise a point I see a lot - your site is too "plain" to need https. Personally I think this argument is outdated.
The key issue with plain sites being HTTP is that -additional- content can be injected into them. In other words, people might be reading text, or seeing images, that are not what you built.
Advertising, sure. ISPs have done that, but that's the least of the problems. What about injecting a political endorsement? What about altering any links to include an affiliate id that isn't yours? Once you start thinking about HTTP sites in this way, you start to appreciate the many ways unencrypted sites can be exploited.
Back in the day "amateur" meant better (because the creator had time to do it right) not worse. Being "professional" was code for "you get what you pay for, no more than that".
I encourage all "hobbyist" Web producers to embrace that, to make it excellent, rather than to simply treat it as a "waste of time".
Probably because 99.985% of people who use a browser call it "the internet", and of these a tiny fraction "knows" that if there is a padlock nothing bad can happen (yes, I know the padlock is now gone, because everything wasn't that safe, apparently).
Then you have the 0.0-something fraction of the population who heard that a certificate ensures the site is the one you want to go to. And since that is not always the case, they gave up.
Then there is the sub-Planck-number fraction of those who understand what a cert is and what it is for.
That's my problem, not "yours". Start writing perfect code and then I will delegate. The current browser UI for SSL/TLS is actively punishing people like me who wish to get SSL working. Why not follow the example of Lets Encrypt and make it easier instead of harder?
The current UI is crap for both me and my wife and we are at the polar extremes of the audience for it. Perhaps we need a faster DNS implementation or a rewrite in Rust.
A non-technical user often concludes "this app is broken" if the screen shows anything more than a one-liner along the lines of "website/internet connection is bad".
A user who actually reads the screen is sadly a scarce resource today.
At least you're not using Firefox for Android; that doesn't even let you view cert data on sites with proper certs (like this one). Chrome on Android does.
I do, but do you have any idea how many of the bloody things I have to add?
I have lots of customers with VMware - the vCentres each have a CA.
... all their switches, routers, other stuff ... a lot of stuff.
Each one is secure, or at least I decide if it is - I know where it is and I keep its firmware up to date. I communicate with each device over https and I know all is good.
Where this nonsense goes wrong is that a browser programmer thinks that they know better than me and impose their policy on me, with no recourse.
My tenon saw does not tell me how to use it. If I bark my knuckles or cut off my fingers that is my fault. My browser (maintained by fucking children) thinks it knows best for me.
I suggest that the cool kids have a major rethink about who knows what's best in quite a few scenarios they never even considered, and basically grow up. That's what I had to do back in the day, and it is pretty much what all adults eventually confess to, because that is what growing up means.
That sort of stuff sits on an isolated VLAN. My proxy (with a decent trusted cert) talks to them, either via http or self-signed https. The proxy also handles the authentication (via OIDC or x509) and logging.
I do that at home, but you still have annoyances, like Android forever warning you that you're impure because your cert hasn't been kissed by a real CA. Also, some random things are a real PITA to put your signed cert on, like Ubiquiti Unifi, where you have to mess around with the Java keystore.
I wouldn't bother with any of it if it weren't for Chrome thinking I shouldn't save passwords for HTTP, even though they could clearly make an exception for domain names that resolve to private IP blocks.
And it's not unreasonable to test a new HSTS implementation on localhost first, so I'm glad it works as it normally does. But there should be better tooling in devtools for this.
It's certainly not unreasonable to test software locally with HSTS (and other environmental configuration), sure. It just isn't good practice to do it on localhost itself rather than in a jail or container with its own local network IP.
Do you also test your cron jobs, secure integrations and terminal configurations directly in your local environment? That just seems tedious, arduous and/or polluting.
What do you mean by secure integrations and terminal configurations?
As for scheduled jobs, you would just test them during development by running them. It would be weird if the time span itself was inherent to your scheduled job behaving correctly.
The Firefox subreddit took part in that blackout nonsense when I was trying to find details about a flag; forget anything to do with Mozilla if they're for hiding information on a whim; the API changes didn't even involve Mozilla in any manner.
I was also a little irked at a Bugzilla response to a request to allow importing local bookmark files in Firefox for Android, asking who would really use it. For real, a privacy-respecting browser can't figure out why someone would want not to use their half-broken online sync? If Chromium had that, I'd have been off Firefox that same day.
It looks like I was wrong to assume the subreddit had any sort of official status from Mozilla. It'd be nice if they came right out and disowned any association with that subreddit drama, though.
Then, it would seem that deleting "Help" from the front of the headline would make a better HN title. (And are we in the business of retaining misspellings?)
Come on, nobody who knows what HSTS is would need help turning it off for localhost, especially if they can string that sentence together; I got that from the title and expected a funny story of stuff breaking. The fix being mentioned is nice too.