Browsers have refused to implement DANE for the last ten years. In the meantime, the major email players came up with MTA-STS, an alternative to DANE that cites lack of DNSSEC adoption as one of its rationales.
If you send email today, it's vanishingly unlikely that any DNSSEC will happen. Email is complicated, and email infrastructure tends to shut people's brains off (I know it does for me), but you can just look at the tiny slice of domains that are actually DNSSEC-signed and see that there's no meaningful adoption.
I don't understand why you are being downvoted, since you are right.
For example: Microsoft has recently adopted DANE validation for outbound email, and is planning to add TLSA records for its inbound email 'by the end of 2022'. [0]
I take it that "server" in this context includes the remote party in a "serverless" transfer. I mean, I take it this isn't particular to the rsync daemon.
It sounds like a very serious defect, very easy to exploit. It needed to be addressed quickly. I'm not surprised they skipped the code review.
Exactly. I'm usually worried about the opposite scenario (the client infecting the server). If you're copying files from a personal computer it might have all sorts of random software running on it that's accumulated over the years. Whereas on a remote server you tend to have a better security posture, and aren't installing random software and apps. Admittedly, that might just be my personal use case though.
There is a book about IBM's involvement in the Holocaust. I have no illusions that Apple, Google, and co. wouldn't do the same in the circumstances.
This is a good comment. There are I believe a couple more cases where HTTPS is difficult:
- You use a dynamic subsubdomain scheme. E.g. abc.xyz.example.org. A wildcard certificate for *.example.org only covers xyz.example.org, not abc.xyz.example.org. Requesting a certificate as the page is requested is possible, but will cause a lot of latency, and you will probably hit the Let's Encrypt rate limit;
- You embed resources that are only available over HTTP and cannot be proxied, either for technical or legal reasons;
- You request resources from a local IP address, e.g. a website hosted on GitLab Pages that shows you the data from your own DIY weather station which runs in your local network.
These cases (and there are probably others I have not even considered) are not that common, but that does not make them nonexistent. 99% of websites don't fall under any of them, and should probably support HTTPS.
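The sub-subdomain point is easy to trip over, because a `*` in a certificate name matches exactly one DNS label (per RFC 6125), not "anything". A minimal sketch of that matching rule (the function name and domains are just for illustration):

```python
def wildcard_matches(pattern: str, hostname: str) -> bool:
    """Check whether a certificate wildcard pattern covers a hostname.

    A '*' matches exactly one DNS label, so '*.example.org' covers
    'xyz.example.org' but NOT 'abc.xyz.example.org' or 'example.org'.
    """
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    # A differing label count can never match: '*' does not span dots.
    if len(p_labels) != len(h_labels):
        return False
    return all(p == "*" or p == h for p, h in zip(p_labels, h_labels))

print(wildcard_matches("*.example.org", "xyz.example.org"))      # True
print(wildcard_matches("*.example.org", "abc.xyz.example.org"))  # False
```

So to cover dynamic sub-subdomains you'd need a `*.xyz.example.org` certificate per customer, which is exactly where the issuance latency and rate-limit problems come from.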
I see HackerNews chewed up your formatting. An extra newline between bullet points is necessary.
> You use a dynamic subsubdomain scheme
Good point. If you run a site that creates a new subdomain per customer, and uses subsubdomains, you might end up making a high volume of cert requests. I don't know a lot about this stuff but presumably there are paid CAs that offer a more generous rate limit than Let's Encrypt?
> You embed resources that are only available over HTTP and cannot be proxied, either for technical or legal reasons
When is this a problem?
> You request resources from a local IP address, e.g. a website hosted on GitLab Pages that shows you the data from your own DIY weather station which runs in your local network
I don't follow here. If for some reason you need to present that data as a local HTTP service, that service could just act as a proxy to GitLab Pages over HTTPS, no?
Do you have a source for that? Coming from a country where there are 16 different parties in parliament (the Netherlands), neighboured by two countries which each have at least five parties in parliament (Belgium and Germany), I find that statement difficult to believe.
There are many names that do not follow your rule, and generally speaking I'd find it very rude if I told you my name and you went "let me fix that to the _correct_ spelling". Take names like Angus MacGyver or Armand de la Cour: neither follows your rule. I don't see why that wouldn't apply to brand names as well.
I like this, because it's beautifully simple. I have been collecting radio stations since 2016 now, and something I have noticed is that streaming URLs tend to break rather often. It might be worth putting them behind a simple redirect so you can change the streaming URL remotely.
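The redirect idea can be sketched in a few lines with the Python stdlib. The station path and stream URL here are made up; the point is that clients bookmark your stable path, and when a stream URL breaks you only update the table:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical mapping from stable local paths to each station's current
# stream URL; edit this table when a station moves its stream.
STREAMS = {
    "/station/example-fm": "https://streams.example.net/example-fm.mp3",
}

class StreamRedirect(BaseHTTPRequestHandler):
    def do_GET(self):
        target = STREAMS.get(self.path)
        if target is None:
            self.send_error(404, "unknown station")
            return
        # 302 (temporary) keeps clients re-resolving through us,
        # so we can repoint the stream later.
        self.send_response(302)
        self.send_header("Location", target)
        self.end_headers()

    def log_message(self, *args):  # keep the sketch quiet
        pass

# To run it: HTTPServer(("", 8000), StreamRedirect).serve_forever()
```

Most players follow the redirect transparently, so the bookmarked URL keeps working even after the upstream stream moves.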