Help I accidently enabled HSTS on localhost (bartwullems.blogspot.com)
133 points by freedude on Aug 1, 2023 | 103 comments



Okay, great, Chrome has a place where you can delete HSTS pins.

Now how do I delete a cached Accept-CH value? It's been two months, and any computer I accessed the test server through while it was sending the bad Accept-CH value still chokes on literally any request made to the test domain. (And no, clearing site data doesn't do anything.)

(If you're wondering what I did that screwed things up so bad: I had the server send an Accept-CH response header in response to an OPTIONS request, in the hopes that it would be delivered in the CORS preflight and therefore get me high-entropy client hints sent in the actual [same-site] XHR. It was delivered in the CORS preflight alright...)
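
For the curious, a minimal sketch of the footgun (domain, path, and hint names here are made up for illustration): the preflight response itself carried Accept-CH, which Chromium then cached for the origin.

    # Hypothetical repro: the CORS preflight (OPTIONS) response carries
    # Accept-CH, which Chromium then caches for the origin with no TTL eviction
    curl -si -X OPTIONS https://test.example/api/data \
      -H 'Origin: https://app.test.example' \
      -H 'Access-Control-Request-Method: GET' | grep -i '^accept-ch'
    # Accept-CH: Sec-CH-UA-Full-Version-List, Sec-CH-UA-Platform-Version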


Sounds like you'd need to delete your browser user profile (which holds all the stored info for your user), so it starts a new "empty" one like a new user would have.


That's the nuclear option though. It shouldn't be necessary.


It shouldn't be, but this is such a corner case that I would assume that the developers never considered it.


I mean, "whoopsie" bad pins are the main reason that most headers that indicate caching have max enforced cache lifetimes. According to a previous draft of the Client Hints spec, Accept-CH's cache lifetime was supposed to only be 10 minutes, for exactly that reason! But apparently the only implementation so far (in Chromium) didn't have any internal TTL-based eviction logic for these entries at all...


You can also bypass Chrome security warnings that don't have the "Continue anyway" link by typing `thisisunsafe` on the page (yes, blind). You'll know if it worked by the last `e`.


I like this and Firefox has nothing like it.

If you ask around (like on reddit) they'll tell you "hurr durr but the rfc", "hurr durr security". Thank you, but I am an adult, and this is a computer I bought with the money I earned. I want to be able to tell my computer what to do. /rant


I thought you were kidding until I looked it up. Apparently this is also changed from time to time!

> The chrome developers also do change this periodically. They changed it recently from badidea to thisisunsafe, so everyone using badidea suddenly stopped being able to use it.


They had to change "badidea" because in some places typing it into error pages had become part of instruction manuals.

Now people ignoring HTTPS errors will at least know why it's a bad idea!


Could be worse!

Seattle.gov started serving with HSTS `includeSubDomains; preload` over a month ago, broke all sorts of subdomains, and are still picking up the pieces.

City council ordinances and resolutions are hosted at http://clerk.seattle.gov/, not that it matters, since you can't view the site.
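
You can see the policy for yourself; the header values below are illustrative, not a live capture:

    # The HSTS policy that bricked the plain-HTTP subdomains for anyone
    # whose browser had already seen it on seattle.gov
    curl -sI https://www.seattle.gov/ | grep -i strict-transport-security
    # Strict-Transport-Security: max-age=31536000; includeSubDomains; preload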


To those saying you can view the site: it's presumably because you didn't visit seattle.gov while it was setting HSTS on subdomains. Presumably the parent did, and is now unable to access http://clerk.seattle.gov/ because of that.


   x=clerk.seattle.gov
   printf 'GET / HTTP/1.1\r\nHost: '$x'\r\nConnection: close\r\n\r\n' \
   |nc -vvn 156.74.251.24 80
There's a line in the fwd proxy config for these rare situations where I need to use HTTP without TLS

   use_backend b407-notls if { hdr(host) -m str -f /somedir/hosts-notls.txt }
Normally the proxy will use TLS for any request from any application, not just a web browser, regardless of whether the URL begins with http:// or whether it's sent to port 80.

To avoid HSTS supercookies^1 I can also add a response header

   http-response add-header Strict-Transport-Security max-age=0
but I have not needed to do that.

1. https://nakedsecurity.sophos.com/2015/02/02/anatomy-of-a-bro...

If the site is on some pre-approved list compiled into the browser, I can remove it and recompile. But I am not interested in such user-hostile browsers for frequent use. Good for online shopping, banking, other commercial transactions, but not for non-commercial purposes.

The ubiquity of TLS is of course a relatively recent phenomenon.

It seems like Internet Archive still does lots of crawls of port 80; I wonder if that's also true for Common Crawl

If so, it's interesting to think about how much of the data used to train ChatGPT and other AI may have come from crawls over port 80

On ChatGPT data sources:

https://www.searchenginejournal.com/how-to-block-chatgpt-fro...


After getting a popup about it not being secure, I can visit this site just fine in latest Chrome.


I can view the HTTP version of the site just fine, but I seem to get stuck when trying it over HTTPS.


That site loads fine for me in Firefox.

I like how Seattle has their own ordinance site; I don't know how other places do it, but the town I'm in uses eCode360


So now all of their public sites will have to implement https. Seems like a win to me.


Somewhat relatedly: a site with HSTS is not supposed to let you click through an invalid cert warning. The browser should also ignore HSTS with invalid (self-signed) certs. But there are bugs, and thus you can find yourself in the position where you're unable to ignore the cert error on a site that never had a valid cert.


> Somewhat relatedly: a site with HSTS is not supposed to let you click through an invalid cert warning.

A good user agent SHOULD ignore parts of the spec that go against the user's wishes.


HSTS will go the way of password rules and password change requirements. And by that I mean people that don’t really know anything about security will ask for it because it checks a box and someone will add it because they’re paid to not have opinions and just write code and in 20 years we’ll still be fighting a stupid battle against HSTS because we can’t have nice things. HSTS was dead in the water from day one.


> HSTS was dead in the water from day one.

No, it has served and is serving a useful purpose (reducing MITM risk) for a large number of websites.


Could you explain briefly what this prevents?

The hard part, I believe, is to spoof the DNS entry and then get a cert from a CA that is in the browser root. Were there cases where HSTS actually stopped something after that? (Serious question - I always wondered how peripheral the problem that HSTS solves really is.)


Once you have managed to poison DNS, so your server is contacted instead of the right one, without HSTS you could potentially⁰ serve your responses using plain HTTP with no in-your-face warning to the user¹. With HSTS that initial request won't be plain HTTP if the user has been to the site before or the name is present in their browser's HSTS preload list.

Chrome will default to HTTPS when given a typed URL that doesn't specify protocol these days, falling back to HTTP if that connection fails, but that doesn't protect you from plain HTTP links in other pages or stored as bookmarks. In fact this doesn't protect you as much as you think it might: IIRC this fallback to HTTP happens for any connection error including being served an invalid certificate, so a DNS-poisoning based MitM attack could still work for some users² meaning HSTS would still be useful even if all browsers used the same HTTPS-default-HTTP-fallback procedure.

> I always wondered how much of a peripheral problem HSTS solves

HSTS, especially with preload, solves a potentially serious but likely-to-be-rare problem. Even if the circumstances where it saves the day are rare, it is so easy to implement it is worth (IMO) the small amount of admin for that protection.

--

[0] if the initial request is plain HTTP

[1] some browsers will display an “insecure” flag when a page delivered by HTTP contains a form, or at least a form with a password box, but a user focusing on just what they are typing may not notice that

[2] as the fallback to HTTP, if not blocked by HSTS, will happen without warning
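
(Tangentially: if you want to check whether a name is already on the baked-in preload list, hstspreload.org exposes a status endpoint. A quick sketch; treat the exact API path as an assumption and verify before relying on it.)

    # Query the Chromium preload-list service for a domain's status
    curl -s 'https://hstspreload.org/api/v2/status?domain=seattle.gov'
    # returns JSON describing whether the name is preloaded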


I'm confused by your question. The entire point is to force an attacker to do "the hard part" that you list there, which is genuinely very hard to do. So it won't do anything "after" that.


Sorry for not having been clear. To put it another way: are there hard stats about HSTS's actual value? How many orgs that had their DNS hijacked and new certs issued were saved by HSTS?

Since a mistake with HSTS is catastrophic, setting it must make sense risk-wise.


Are you confusing HSTS with HPKP like the other poster?

HSTS says the site has to be https, nothing else. It does nothing if someone gets a valid cert. It exists to prevent every attack weaker than that, such as local MitM.

And mistakes are not catastrophic at all unless you have some horrible legacy setup that can't do https.


oh crap, I was thinking HPKP and reading/writing HSTS. Sorry for the entropy. Yes, HSTS (this time HSTS :)) is useful (though there are problems with scaling the initial preload seed, but maybe that was already solved).


You know what, I was thinking of HPKP, which is obsolete. HSTS, while I doubt it’s actually prevented a single adversarial MITM, isn’t a terrible idea.


well at least your localhost is secured


How can you be sure that someone isn't MITM? Did you get the cert signed by an authority?


I guess I'm the authority when it comes to localhost.


Hold on, localhost is my computer. Why are you the authority?


Maybe you're running Ubuntu and he's Mark Shuttleworth: https://www.markshuttleworth.com/archives/1182


> Your anonymity is preserved because we handle the query on your behalf. Don’t trust us? Erm, we have root. You do trust us with your data already. You trust us not to screw up on your machine with every update. You trust Debian, and you trust a large swathe of the open source community. And most importantly, you trust us to address it when, being human, we err.

What a gem!


If I had a nickel for every out-of-touch South African billionaire with an interest in spaceflight, I'd have two nickels... which isn't a lot, but it's weird that it happened twice, right?


By using Ubuntu I’m trusting their binaries. By using any distro I’m trusting their binaries, or even if I compile everything from scratch I certainly haven’t read and understood every line of code.

He’s not wrong in his statement.


He wasn't wrong until he made the statement, at which point I stopped trusting Ubuntu's binaries and sought them from other distros instead :)

In any case, there's a vast difference between trusting binaries running on a local machine v. trusting someone to competently (and not maliciously) administer a remote machine. Shuttleworth's statement would've been less unreasonable in the Before Times when cybercriminals breaking into servers and getting their hands on PII to sell on the Dark Web was an occasional and exceptional thing instead of just Tuesday; that time had been long gone even by then (let alone now).


Open the link to see what else I know about you:

    file:///


Exactly


Better go reread reflections on trusting trust for the 80th time to be sure?


By the authority of myself.


The problem is that Chrome considers all of localhost, regardless of port, within scope of HSTS. Firefox scopes this per port, so you don't run into these issues.


So many things require HTTPS now that I always use HTTPS during development, with local certs via mkcert.
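
For anyone who hasn't tried it, the whole mkcert flow is two commands (the output filenames vary with the names you pass):

    # Install mkcert's local CA into the system/browser trust stores (once),
    # then mint a cert covering localhost by name and loopback by IP
    mkcert -install
    mkcert localhost 127.0.0.1 ::1
    # writes ./localhost+2.pem and ./localhost+2-key.pem for your dev server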


For Windows/PowerShell users:

    New-SelfSignedCertificate -CertStoreLocation Cert:\CurrentUser\My -DnsName "localhost"
https://learn.microsoft.com/en-us/powershell/module/pki/new-...


It's a good practice, but worth noting that almost all web platform features that require a secure context work on localhost without HTTPS.


Happened to me many times, and chrome://net-internals/#hsts is not very user-friendly (no feedback on whether the deletion was successful or whether the domain was even there).

Anyway, golden HN thread here


You have to leave the Internet now.


We are now in an era where the only browsers people think about are Chromium reskins and forks.


Perhaps, but I have personally gone back all in with Firefox. I've gained the ability to not have a wanker tell me how https works and be a total pain. Cr browsers will never enter saved creds into a site that isn't "fully trusted". FF will respect your choice after quite a lot of admonishment.

I've been using SSL and TLS longer than some of the knobends programming these fucking things have been alive. There is a difference between being opinionated and a dick. FF is opinionated and Cr is a dick.

To be fair: FF and Cr have an equally awful "show me the fucking cert and stop making my life more miserable" workflow. Why is it such a drawn out routine to see the cert details? I personally spend a lot ... a lot of my time with SSL/TLS - and you fuckwits literally make my life harder by hiding it away in some silly "don't worry your pretty head" thing.

My first browser was telnet and I am mildly irritated.


Heard.

FF is the least irritating of a bad lot. I'd really like a "programmer's build" or something, with knobs for this stuff - FF used to be better about exposing them in preferences, even if it was a bit obscure.

I can usually get by with curl and/or wget for troubleshooting purposes, but dealing with broken things you actually need to use to fix is far more annoying than it should be, "for my own good".

The thing that irrationally irritates me the most about FF is the uncounted minutes of my life spent waiting for the countdown buttons to let me click them.


Perhaps if we whine about this in an environment populated with a lot of like-minded folk, the message might get through.

I can't help but think that there are people in Google/Mozzie 'n Co that simply put up with all this nonsense.

Has anyone developing a browser ever bothered to think that the audience is quite diverse, that, say, there are multiple uses for a browser? Also that there are different people using the bloody things? My wife has rather different use cases for her browser than I do.

The lack of imagination from web browser developers - or at least their directors or specifiers is absolutely breathtaking.

You will have to do rather better than "Here be dragons" etc as a UI for this sort of thing.


> FF is the least irritating of a bad lot. I'd really like a "programmer's build" or something, with knobs for this stuff - FF used to be better about exposing them in preferences, even if it was a bit obscure.

Better than Konqueror?


I haven't used Konqueror for 20 years; is it still a thing?

The web today is very different to how it was in 2003


Yes it is still around and still very blue. Mine says:

"Konqueror is a web browser, file manager and universal document viewer. Starting Points Introduction Tips Specifications"


The programmer's builds are already there, e.g. LibreWolf.

Firefox's defaults are terrible, but most unhelpful behavior can be disabled in about:config.


Oh, the knobs are all there, in about:config heh.


With a sign on the door saying Beware of the Leopard


Yes they are ... hidden away in about:config. I assert there is more than one way to rig a browser and the current config nonsense ... is nonsense and wankery. OK I am a bit peeved.

No one is served properly with the current model of one browser config fits all. Why can't I have a browser that accepts that I know what I am doing wrt SSL/TLS?


Kind of tangential: I recently wanted to move the hosting of my hobby/side-gig site to an S3 static site (site generated by me). Thought it would be a few configuration details in S3 and good to go.

A full day later, just to be usable in Chrome (since it defaults to https), I had to get a cert and stand up a CDN just to make sure your average dingus web user wouldn't try to navigate to my page and hit nothing.

To be clear, my site doesn't even have JavaScript, and certainly no form submission. HTTPS is complete overkill, and now it's even more overkill since I could be bombarded by half the world population and my site would likely stand up to it.

In a way, they killed the hobbyist website with that bullshit.


So firstly, I agree - browsers should default to https not http if the protocol is not specified. It's a pain that they do http because it means I have to open port 80 and issue a redirect.

That in itself is no big deal, since I usually have port 80 open anyway (for LetsEncrypt support.)
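
So the port-80 listener only ever answers with a redirect (plus the ACME challenge path). A quick sanity check, with example.com standing in for the real host:

    # Port 80 should only redirect (and serve /.well-known/acme-challenge/)
    curl -sI http://example.com/ | head -n 3
    # HTTP/1.1 301 Moved Permanently
    # Location: https://example.com/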

That said, you raise a point I see a lot - your site is too "plain" to need https. Personally I think this argument is outdated.

The key issue with plain sites being HTTP is that -additional- content can be injected into them. In other words, people might be reading text, or seeing images, that are not what you built.

Advertising, sure. ISPs have done that, but that's the least problem. What about injecting a political endorsement? What about altering any links to include (not your) affiliate id? Once you start thinking about the value of HTTP sites in this way you start to appreciate the many ways unencrypted sites can be exploited.

Back in the day "amateur" meant better (because the creator had time to do it right) not worse. Being "professional" was code for "you get what you pay for, no more than that".

I encourage all "hobbyist" Web producers to embrace that, to make it excellent, rather than to simply treat it as a "waste of time".


> That said, you raise a point I see a lot - your site is too "plain" to need https. Personally I think this argument is outdated.

In the early days, sites would often be mainly http, then the payment part would switch to https. That soon died and I think you are correct.


I don't post affiliate links, I don't blog, and I don't really care about some shitty ISP injecting shitty ads on my site, truth be told.

However if it means that it makes the user experience of browsing my site better, I can get with that.


> To be clear, my site doesn’t even have Javascript, and certainly no form submission.

And when Comcast injects their own JavaScript into your site, what good will that do?


Let them? If the user has an unscrupulous ISP, there's precious little I can do to guard against that anyway.


> If the user has an unscrupulous ISP, there's precious little I can do to guard against that anyway.

Well, you could use HTTPS, which trivially defeats this attack.


That's a failure on S3's part. It should only take a few clicks to add a certificate, not setting up a separate CDN.


Fair point. It really shouldn’t be a hassle at the end of the day.


Recently I opened Firefox and it said "Thank you for loving Firefox"... Geezus, maybe I should fork it and add the Overly Attached Girlfriend memes all over the place: https://knowyourmeme.com/memes/overly-attached-girlfriend


Yes, that too is getting on my tits. What on earth passes for rational thought these days?


Probably because 99.985% of people who use a browser call it "the internet" and of these a tiny fraction knows that if there is a padlock nothing bad can happen (yes I know that the padlock is now gone because everything wasn't that safe, apparently).

Then you have the 0.0something of the population who heard that a certificate ensures that the site is the one you want to go to. And since that is not always the case, they gave up.

Then there is the sub-Planck number of those who understand what a cert is and what it is for.


Because we have data showing that anything less leads to easy impersonations of sites. Browser security UI is hard.


"Browser security UI is hard."

That's my problem, not "yours". Start writing perfect code and then I will delegate. The current browser UI for SSL/TLS is actively punishing people like me who wish to get SSL working. Why not follow the example of Lets Encrypt and make it easier instead of harder?

The current UI is crap for both me and my wife and we are at the polar extremes of the audience for it. Perhaps we need a faster DNS implementation or a rewrite in Rust.


Where can I review this data?


Showing certificate details on the "OMFG SOMETHING IS WRONG" page makes it easier to impersonate sites?

Why not show the certificate details, explain what is wrong, and then still have the annoying click through flow?


Non-technical users often conclude "this app is broken" if the screen shows anything more than a one-liner in the tune of "website/internet connection is bad".

Users who actually read the screen are sadly a scarce resource today.


At least you're not using Firefox for Android; that doesn't even let you view cert data on sites with proper certs (like this one). Chrome on Android does.


If you have admin/root, it's usually easier to just add the leaf cert to the CA trust store.
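
On Debian/Ubuntu that's roughly the following (the cert filename is illustrative; the .crt extension is required for update-ca-certificates to pick it up):

    # Add a cert to the system trust store on Debian/Ubuntu
    sudo cp myserver.crt /usr/local/share/ca-certificates/myserver.crt
    sudo update-ca-certificates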


I do but do you have any idea how many of the bloody things I have to add?

I have lots of customers with VMware - the vCentres each have a CA.

... all their switches, routers, other stuff ... a lot of stuff.

Each one is secure, or at least I decide if it is - I know where it is and I keep its firmware up to date. I communicate with each device over https and I know all is good.

Where this nonsense goes wrong is that a browser programmer thinks that they know better than me and impose their policy on me, with no recourse.

My tenon saw does not tell me how to use it. If I bark my knuckles or cut off my fingers that is my fault. My browser (maintained by fucking children) thinks it knows best for me.

I suggest that the cool kids have a major rethink about who knows what's best in quite a few scenarios that they never even considered and basically grow up. That's what I had to do, back in the day and is pretty much what all adults will eventually confess to because that is what growing up means.


That sort of stuff sits on an isolated VLAN. My proxy (with a decent trusted cert) talks to them, either via http or self-signed https. The proxy also handles the authentication (via OIDC or x509) and logging.


I do that at home, but you still have annoyances, like Android forever warning you that you're impure because your cert hasn't been kissed by a real CA. Also, some random things are a real PITA to put your signed cert on, like Ubiquiti Unifi, where you have to mess around with the Java keystore.

I wouldn't bother with any of it if it wasn't for Chrome thinking I shouldn't save passwords for HTTP, even though they could clearly make an exception for domain names that resolve to private IP blocks.


absolutely. add cache behavior to that as well, nothing is cached from non "fully trusted" pages either...


I agree it's retarded how the important features keep getting buried under deeper sequences of clicks.


More importantly: if you need HSTS on localhost, you're doing something wrong.

Add an exception in your code for HSTS on localhost, if it's required. Or, use a container and set up HSTS on its mounted IP.
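
However you implement the exception, it's easy to verify from the shell that the header is absent locally (the host names and port here are assumptions):

    # HSTS should appear on the real host but never on localhost
    curl -sI https://example.com/ | grep -i strict-transport-security
    curl -skI https://localhost:8443/ | grep -i strict-transport-security \
      || echo "no HSTS on localhost (as intended)"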


It's not that anyone needs HSTS on localhost, just that it's damn easy to have your web app's backend accidentally pass out an HSTS header.


And it's not unreasonable to test a new HSTS implementation on localhost first, so I'm glad it works as it normally does. But there should be better tooling in devtools for this.


It's certainly not unreasonable to test software locally with HSTS (and other environmental configurations), sure. It just isn't good practice to do so on localhost; do it instead in a jail or container with its own local network IP.

Do you also test your crons, secure integrations and terminal configurations directly on your local environment as well? That just seems tedious, arduous and/or pollutative.


You don't need a jail or a container to change the origin, you can just add an entry in your hosts file that points to 127.0.0.1
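
e.g. something like this (the name is made up); HSTS state then attaches to that host name rather than to localhost:

    # /etc/hosts: give the app its own origin for testing
    127.0.0.1   myapp.localtest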


What do you mean by secure integrations and terminal configurations?

As for scheduled jobs, you would just test them during development by running them. It would be weird if the time span itself was inherent to your scheduled job behaving correctly.

Of course in addition to automated testing.


The Firefox subreddit took part in that blackout nonsense when I was trying to find details about a flag; forget anything to do with Mozilla if they're for hiding information on a whim. The API changes didn't even involve Mozilla in any manner.

I was also a little irked at a Bugzilla response to a request to allow importing local bookmark files on Firefox for Android, asking who would really use it. For real, a privacy-respecting browser can't figure out why someone would want to not use their half-broken online sync? If Chromium had that, I'd have been off Firefox that same day.


You're blaming Mozilla for what the Firefox subreddit did, which is not run by Mozilla?


Unless Mozilla takes a similar stance in accordance with their own guidelines, yeah: https://www.mozilla.org/en-US/about/governance/policies/part...

At the least I expected some public disassociation with the subreddit, and subsequent forks of it.


I see no mention of Reddit on that page.


It looks like I was wrong to assume the subreddit had any sort of official status from Mozilla. It'd be nice if they came right out and disowned any association with that subreddit drama, though.


I thought that was funny too... Chrome, Edge, and Brave.

Hummm, I wonder why the net-internals URL is identical!


I mandated Firefox on all the office machines. I'm doing my part


Back in the day it was only IE. So… it could be worse.


Pushed by the same folks that used to complain about IE monopoly.


HN is a weird place to ask for technical support.

edit: come on guys isajoke ;-;


They're not asking for support. They encountered a problem and wrote a post about how they fixed it.


Then, it would seem that deleting "Help" from the front of the headline would make a better HN title. (And are we in the business of retaining misspellings?)


HN has a rule not to editorialize titles [0], which is why you technically can't make a better title.

[0] "Otherwise please use the original title, unless it is misleading or linkbait; don't editorialize." https://news.ycombinator.com/newsguidelines.html


Come on, nobody who knows what HSTS is would need help turning it off for localhost, especially if they can string that sentence together. I got that from the title and expected a funny story of stuff breaking. The fix being mentioned is nice too.


the brightest minds of our generation, all sitting around with nothing to do? It's the best place



