
The Insites GDPR checker[1] does this.

[1] https://insites.com/free-website-gdpr-check/


I had one of the predecessors to this (the Jelly 2) as one of my strategies to cut down on smart phone usage.

It was a nice idea (as it introduced pain points of using it as a distraction device), but the battery life ended up being abysmal to the point that I couldn't trust it to last a day. Having a tiny smart phone and then having to carry a battery pack for it was absurd.

It also led me to appreciate how much I'd come to rely on always having a good camera in my pocket. The picture quality was poor enough that the photos would leave me feeling genuinely quite sad.


If you're reading a PR diff on GitHub you can get it to hide whitespace-only changes by adding ?w=1 to the URL - a complete lifesaver in this kind of situation.
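For example (the repo and PR number here are made up, not from the thread):

    https://github.com/example-org/example-repo/pull/123/files        <- normal diff
    https://github.com/example-org/example-repo/pull/123/files?w=1    <- whitespace-only changes hidden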


Also, the whitespace diffs shouldn't be significant as long as all the files were already properly formatted before the change. If your diffs are including unrelated formatting changes, then you should do a single commit that formats all your files. You can also point git blame at an ignore-revs file (conventionally .git-blame-ignore-revs) so that this formatting commit won't show up in git blame - see the sketch below.
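A minimal sketch, assuming the big formatting commit is the one you've just made (the file name is just the common convention):

    # record the hash of the one-off "format everything" commit
    git rev-parse HEAD >> .git-blame-ignore-revs
    # tell git blame to skip the commits listed in that file
    git config blame.ignoreRevsFile .git-blame-ignore-revs

GitHub's blame view also picks up a .git-blame-ignore-revs file at the repository root, so the commit disappears from blame there too.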


MSN Spaces had this feature back in 2006! Makes me feel nostalgic!


It still remains a mystery to me why browsers felt they should "fix" this server misconfiguration.

It's particularly vexing to me, as the main reason that people end up with misconfigured servers at all is that after they've configured their new cert (incorrectly), their web browser gives them a tick and they think they've done it right - after all, why wouldn't they?


A common way that these things come about is that one browser does it, and then, if the others don't copy it, they appear "broken" to users.

IDK what happened in this case, but it is pretty easy to imagine Chrome accidentally allowing validation against certificates in its local cache. Maybe it added some sort of validation cache to avoid re-checking revocation lists and OCSP or similar, and it would use intermediates seen on other sites. Then people tested their site in Chrome and it seemed to work, so now Firefox seems broken if it doesn't support this. So they decided to implement it, and to do something more robust by preloading a fixed list rather than relying on whatever happens to be in the cache.

Basically no browser wants to be the first to stop supporting this hack.


The mechanism for caching seen certs dates back to Internet Explorer / Netscape times

https://bugzilla.mozilla.org/show_bug.cgi?id=399324#c16


This is ultimately an application of the "robustness principle", or Postel's law, which was how people built stuff in the early Internet.

Plenty of people believe these days that this was never a wise guideline to begin with (see https://www.ietf.org/archive/id/draft-iab-protocol-maintenan... which unfortunately never made it to an RFC). However, one of the problems is that once you've started accepting misconfigurations, it's hard to change your defaults.


It actually did end up as RFC 9413, albeit somewhat softened.

https://datatracker.ietf.org/doc/rfc9413/


It's Postel's Law being bad advice yet again. No, you should not be liberal in what you accept, because being liberal in what you accept causes even more malformed data to appear in the ecosystem.


That battle is long lost.

For me the revelatory moment was in the mid-00s, when everyone screamed anathema at XHTML, saying it was bad because it required people to write well-formed documents, when everyone just wanted to slap down random tags and somehow have that steaming mess still work.

There must be some sort of law that says that in tech the crudest pile of hacks wins over any formally elegant solution every single time those hacks let one do something that would otherwise require extra effort, even if it works only by the wildest chance.


> There must be some sort of law that says that in tech the crudest pile of hacks wins over any formally elegant solution

This is called 'Worse is better'.

https://en.wikipedia.org/wiki/Worse_is_better


The biggest objection I and many others had at the time was that writing XHTML forced one to deal with the hell that is XML namespaces, which many tools at the time barely supported.


> bad advice ... being liberal in what you accept causes even more malformed data to appear in the ecosystem.

This is one perspective. Another is to be robust and resilient. Resiliency is a hallmark of good engineering. I get the sense you have not worked on server-side software that has thousands or millions of different clients.


Postel's Law should be called the "Hardness Principle", not the "Robustness Principle". Much like how hardening a metal makes it take more force to break, but results in it being brittle & failing catastrophically when it does, so Postel's law makes systems harder to break initially, but results in more damage when they do fail. It also makes the system harder to maintain, thus adding a pun to the name.


Where do you draw the line? Usually there's exactly one intended, standard way of communicating with another system, while there are infinite opportunities to deviate from that standard and infinite opportunities for the other party to try to guess what you really meant. This results in a combinatorial explosion of unintended behaviors that lead to bugs and critical security vulnerabilities.


I absolutely have. And I've never modified a server to accept bullshit from an incorrect client. I have, on the other hand, told several people how to fix their clients when they complain it doesn't work with my service. I actually rather enjoy improving the ecosystem, even if it's not strictly my job. It's better for everyone.


Because it wasn’t actually a server misconfiguration, nor was it, as others have speculated, about Postel’s Law.

The way X.509 was designed - going back to the very first version - was built around the notion that you have your set of CAs you trust, I have my set, and they're different. Instead of using The Directory to resolve the path from your cert to someone I trust, PKIX (RFC 2459 et al.) defined AIA.

So the intent here was that there’s no “one right chain to rule them all”: there’s _your_ chain to your root, _my_ chain to my root, all for the same cert, using cross-certificates.

Browsers adopted X.509 before PKIX existed, and they assumed just enough of the model to get things to work. The standards were developed afterwards, and the major vendors didn't all update their code to match them. Microsoft, Sun, and many government-focused customers did (and used the NIST PKITS test to prove it); Netscape/later Mozilla and OpenSSL did not: they kept their existing "works for me" implementations.

https://medium.com/@sleevi_/path-building-vs-path-verifying-... discusses this a bit more. In modern times, the TLS RFCs better reflect that there's no "one right chain to rule them all". Even if you or I aren't running our own roots that we use to cross-sign CAs we trust, we still have different browsers/trust stores taking different paths, and even in the same browser, different versions of the trust store necessitating different intermediates.

TLS has no way of negotiating what the _client’s_ trust store is in a performant, privacy-preserving way. https://datatracker.ietf.org/doc/draft-kampanakis-tls-scas-l... or https://datatracker.ietf.org/doc/draft-davidben-tls-merkle-t... are explorations of the problem space above, though: how to have the server understand what the client will trust, so it can send the right certificate (… and omit the chain)
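To make the AIA bit above concrete: the pointer lives as an extension in the leaf certificate itself, so a client that's missing an intermediate knows where it could fetch one. Something like this shows it (OpenSSL 1.1.1+; the certificate file name is hypothetical):

    # print the Authority Information Access extension of a leaf certificate
    openssl x509 -in leaf.pem -noout -ext authorityInfoAccess
    # typically prints an "OCSP - URI:..." line and a "CA Issuers - URI:..." line;
    # the latter is where a missing intermediate can be downloaded from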


TLS implementations for Linux IMAP email back in the day would fail over to unencrypted credentials if the TLS handshake was unsuccessful. Not sure if that was somebody's Postelian interpretation or if it was just the spec. We had to actually block the unencrypted ports in the firewall, because there was no way to tell from the client side whether you had automatically been downgraded to in-the-clear or not.
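A sketch of the firewall side of that, assuming the standard port assignments (plaintext IMAP/POP3 on 143/110, their TLS counterparts on 993/995):

    # drop the plaintext mail ports so clients can't silently fall back to them
    iptables -A INPUT -p tcp --dport 143 -j DROP   # IMAP
    iptables -A INPUT -p tcp --dport 110 -j DROP   # POP3
    # 993 (IMAPS) and 995 (POP3S) stay open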


In my hosting days, we relied on the SSL checker that SSL Shopper has. The browser was never considered a valid test for us: it was the final validation, but a proper SSL checker was the real test.


As @kevincox says, there's a problem where if one browser does it, users complain that a site "works in this generation's IE", forcing the other browsers to duplicate the behaviour.

But one other problem that happens isn't necessarily "browser 1 fixed this configuration" and browser 2 copied them. It can be (and often was) "browser 1 has a bug that means this broken configuration works", and now, for compatibility, browser 2 implements the same behaviour. Then browser 1 finds the bug, goes to fix it, and discovers that there are sites that depend on it and that it also works in other browsers, so even though it started off as a bug they can no longer fix it.

That's why there's an increasing amount of work put into ensuring new specifications are free of ambiguities before they're actually turned on by default. Even now, though, you still have places where the spec has gaps or ambiguities that allow different behaviour (which these days is considered a specification bug), and people will go "whatever IE/Chrome does is correct" even if other browsers agree with each other; it's super easy for a developer to say "the most common browser is definitionally the correct implementation".

Back when I worked on engines and in committees I probably spent, cumulatively, more than a year doing nothing but going through specification gaps, working out what behaviour was _required_ to ensure sufficiently compatible behaviour between different browsers. I spent months on key events and key codes alone, trying to work out which events need to be sent, which key codes, and how IM/IME (input method [editor], the mechanism used for non-Latin text) systems interact with it, etc. As part of this I added the ability to create IMEs in JavaScript to the WebKit test infrastructure, because otherwise it was super easy to break random IMEs given they all behave completely differently in response to single key presses.


It's very difficult in practice to shift the blame to the website. Even though the browser would be right in refusing the connection, the net effect is that the user would just use another browser to access that website. The proper workaround (Firefox shipping intermediate certificates) doesn't actually damage security; it just means more work for the maintainers. That's a fair tradeoff for achieving more market share.

It's the same reason why browsers must be able to robustly digest HTML5 tagsoup instead of just blanking out, which is how a conforming XML processor would have to react.


Do browsers actually do this, or is this another OpenSSL Easter egg we all have to live with?

I remember that OpenSSL also validates certificate chains with duplicates, despite that obviously breaking the chain property. That's wasteful, but also very annoying because TLS libraries like BearSSL don't (I guess you could hack around it by remembering the previous hash and staying fixed-space).


The chain "property" was never enforced anywhere of consequence and is gone in TLS 1.3

In practice, other than the position of the end entity's certificate, the "chain" is just a set of documents which might aid your client in verifying that this end entity certificate is OK. If you receive, in addition to the end entity certificate, certs A, B, C and D, it's completely fine as far as you're concerned if certificate D has expired, certificate B is malformed and certificate A doesn't relate to this end-entity certificate at all, so long as you're able (perhaps with the aid of C) to conclude that yes, this is the right end entity and it's a trustworthy certificate.

Insisting on a chain imagines that the Web PKI's trust graph is a DAG, and it is not. So since the trust graph we're excerpting has cycles and is generally a complete mess, we need to accept that we can't necessarily turn a section of that graph (if it even were one graph, which it isn't: each client possibly has a slightly different trust set) into a chain.


You are overthinking it. Some sysadmin copying the same cert into the chain twice, because AWS is confusing and doesn't care and OpenSSL doesn't care, isn't resolving the grand problem of the trust graph; it's just a loss overall, for everyone. Nobody wins here.

(Of course the 1.3 approach of throwing a bunch of certificates and then asking to resolve over all of them breaks BearSSL comprehensively)


Yes, it's useless to include the CA cert, and to include extra copies, and all those other things.

But requiring the cert chain to be exactly correct is also useless if you need to address clients with different root cert packages. If some clients have only root A and some have only root B, but B did a cross-sign for A, you're OK if you send the end entity signed by the intermediate, the intermediate signed by A, and A signed by B: clients with only A short-circuit after they see an intermediate signed by A, and the clients with only B should be fine too. Of course it gets really weird when the B root has expired and clients often have both A and B, but some don't check whether their roots have expired, and some won't short-circuit to validating with A, so they fail the cert because B is expired.
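A rough way to see that scenario from the command line (all file names here are hypothetical): verify the same leaf against two different trust stores, giving each one the extra certificates it needs as untrusted input.

    # client that only trusts root A: the short path via the ordinary intermediate
    openssl verify -CAfile rootA.pem -untrusted intermediate.pem leaf.pem
    # client that only trusts root B: also needs the cross-sign of A by B in the untrusted pile
    cat intermediate.pem rootA-signed-by-rootB.pem > untrusted.pem
    openssl verify -CAfile rootB.pem -untrusted untrusted.pem leaf.pem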

Oh, and TLS handshakes in the wild don't give you explicit information about what roots they have or what client / version they are. Sometimes you can get a little bit of information and return different cert chains to different clients, but there's also not a lot of support for that in most server stacks.

I don't necessarily like TLS 1.3's approach of "end entity cert comes first, then just try all the permutations and accept any one that works", but at least it presents a way to get to success given the reality we live in. I'd also love to see some way to get your end entity cert signed by multiple intermediates, but that's a whole nother level of terrible.

#notbitter


Maybe in the modem days the smaller certificate chain was considered worth it for the reduced connection overhead?


It wasn't that long ago that TLS was not the norm and many, many sites were served over plain HTTP, even when they accepted logins or contained other sensitive data. There's a good chance this decision was a trade-off to make TLS simpler to get working, in order to get more sites using it.

Browsers have a long history of accepting bad data, including malformed headers, invalid HTML, and maintaining workarounds for long-since-fixed bugs. This isn't really that different.


Really? You receive two files from your CA. One of them is the leaf, the other one is the chain. You just have to upload the latter (not the former) into the server's config directory. That doesn't sound that hard.

If it actually is, I am ready to eat my words, but the actual blame would be on the webserver developers then. Default settings should be boring, but secure; advanced configuration should be approachable; and dangerous settings should require the admin to jump through hoops.
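A minimal sketch of that boring path on nginx, say, under this reading (the "chain" file being the leaf followed by the intermediates; paths are assumptions, not from the thread):

    # fullchain.pem = leaf certificate first, then the intermediate(s)
    ssl_certificate     /etc/nginx/ssl/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;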


My understanding is that Postman unfortunately pulled the same thing recently, so that's a no-go for you too.

I've previously had good experiences with Paw (now RapidAPI) - https://paw.cloud/, but given that they are closed source and have started giving away the software rather than charging for it, I'm not filled with optimism they won't do a similar rug pull.


I’ve actually tried Paw but I found it tricky because it doesn’t have a quick fuzzy search for existing API requests, which is something I use a lot in Insomnia.


Thanks here go to the work done by the awesome-archive[0] project, without which this day would have been much more frustrating.

[0] https://github.com/awesome-archive/Wappalyzer/pull/1

