That's not a great idea, for three different reasons: filesystems have to do ugly things when they're almost full, like splitting files into many small fragments and storing extra metadata to keep track of them all; SSDs get slower and have compromised wear leveling when they're almost full; and it makes you more likely to actually run out of space, which can cause random, non-temporary problems even if the full condition itself was only temporary.
A good way to do this is to create a swap file, both because you can use it as swap until you need to delete it to reclaim the space, and because swap files are required not to be sparse.
If you create the file with 'mkswap --file' it allocates the blocks. Running 'swapon' against an existing sparse file won't fill in the holes for you, but it does notice them and refuses to use the file.
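The difference swapon is checking for is visible directly in the filesystem: a file created by seeking past the end has holes with no blocks allocated, while a file written with real zeros is fully backed by storage. A small sketch (sizes and temp paths are arbitrary):

```python
import os
import tempfile

size = 1024 * 1024  # 1 MiB; a real swap file would be much larger

# Sparse file: seek past the end and write a single byte. Most of the
# file is holes with no blocks allocated, which is exactly what
# swapon refuses to use.
with tempfile.NamedTemporaryFile(delete=False) as f:
    sparse_path = f.name
    f.seek(size - 1)
    f.write(b"\0")

# Fully allocated file: write real zeros so every block is backed by
# storage, which is what a swap file requires.
with tempfile.NamedTemporaryFile(delete=False) as f:
    full_path = f.name
    f.write(b"\0" * size)

# st_blocks counts 512-byte blocks actually allocated on disk.
sparse_blocks = os.stat(sparse_path).st_blocks
full_blocks = os.stat(full_path).st_blocks
print(sparse_blocks, full_blocks)

os.remove(sparse_path)
os.remove(full_path)
```

On a filesystem that supports holes, the first number is far smaller than the second.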
There are different governments and different subdivisions within any given government. The only thing you need to get a government that has been pushing Chat Control to do some trust busting instead is more votes.
> C and C++ are usually stuck in that antiquated thinking that you should build a module, package it into some libraries, install/export the library binaries and associated assets, then import those in other projects. That makes everything slow, inefficient, and widely dangerous.
It seems to me the "convenient" options are the dangerous ones.
The traditional method is for third party code to have a stable API. Newer versions add functions or fix bugs, but existing functions continue to work as before. API mistakes get deprecated and alternatives offered, but newly-deprecated functions remain available for 10+ years. The result is that you can link every application against any sufficiently recent version of the library, e.g. the latest stable release, which can then be installed via the system package manager with a manageable maintenance burden, because only one version needs to be maintained.
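The "deprecate but keep working" discipline looks something like this in code (function names are made up for illustration):

```python
import warnings

def frobnicate(x):
    """The current, supported API."""
    return x * 2

def frob(x):
    """Old name, kept as a working alias for years rather than removed.

    Callers get a warning pointing at the replacement, but their
    existing code keeps running exactly as before.
    """
    warnings.warn("frob() is deprecated; use frobnicate()",
                  DeprecationWarning, stacklevel=2)
    return frobnicate(x)
```

Everyone linking against the latest release still works; the warning gives downstream users years of runway to migrate.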
Language package managers have a tendency to facilitate breaking changes. You "don't have to worry" about removing functions without deprecating them because anyone can just pull in the older version of the code. Except the older version is no longer maintained.
Then you're using a version of the code from a few years ago, because you didn't need any of the newer features and it hadn't had any problems, until it picks up a CVE. Suddenly you have vulnerable code running in production, but fixing it isn't just a matter of "apt upgrade", because no one else is going to patch the version only you were using, and the current version has several breaking changes you have to integrate into your code before you can switch.
This is all wishful thinking disconnected from practicalities.
First you confuse API and ABI.
Second there is no practical difference between first and third-party for any sufficiently complex project.
Third you cannot have multiple versions of the same thing in the same program without very careful isolation and engineering. It's a bad idea and a recipe for ODR violations.
In any non-trivial project there will be complex dependency webs across different files and subprojects, and humans are notoriously bad at packaging pieces of code into sensible modules, libraries or packages with well-defined and maintained boundaries. Maintaining ABI compatibility, deprecating things while introducing replacements, etc. is massive engineering work, and it simply makes people much less likely to change the way things are done, even when they are broken or not ideal. That's an effort you'll make for a kernel (and only on specific boundaries) but not for the average program.
I'm not confusing API with ABI. If you don't have a stable ABI then you essentially forfeit the traditional method of having every program on the system use the same copy (and therefore version) of that library, which in turn encourages them to each use a different version and facilitates API instability by making the bad thing easier.
> Second there is no practical difference between first and third-party for any sufficiently complex project.
Even when you have a large project, making use of curl or sqlite or openssl does not imply that you would like to start maintaining a private fork.
There are also many projects that are not large enough to absorb the maintenance burden of all of their external dependencies.
> Third you cannot have multiple versions of the same thing in the same program without very careful isolation and engineering.
Which is all the more reason to encourage every program on the system to use the same copy by maintaining a stable ABI. What do you do after you've encouraged everyone to include their own copy of their dependencies and therefore not care if there are many other incompatible versions, and then two of your dependencies each require a different version of a third?
> In any non-trivial project there will be complex dependency webs across different files and subprojects, and humans are notoriously bad at packaging pieces of code into sensible modules, libraries or packages, with well-defined and maintained boundaries.
This feels like arguing that people are bad at writing documentation so we should reduce their incentive to write it, instead of coming up with ways to make doing the good thing easier.
> Use HTTP (secure is not the way to decentralize).
This doesn't seem like useful advice. If you're going to use HTTP at all there is essentially zero practical advantage in not using Let's Encrypt.
The better alternative would be to use new protocols that support alternative methods of key distribution (e.g. QR codes, trust on first use) instead of none.
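Trust on first use is simple enough to sketch. The scheme: pin the key's fingerprint the first time you see a host, then require the same key on every later connection (the storage format here is hypothetical):

```python
import hashlib

def tofu_check(host: str, public_key: bytes, pins: dict) -> bool:
    """Trust-on-first-use check: pin a host's key fingerprint on first
    contact, and reject any later connection presenting a different key."""
    fp = hashlib.sha256(public_key).hexdigest()
    if host not in pins:
        pins[host] = fp          # first contact: trust and remember
        return True
    return pins[host] == fp      # key changed -> refuse (possible MITM)

pins = {}
print(tofu_check("example.net", b"key-A", pins))  # True: first use, pinned
print(tofu_check("example.net", b"key-A", pins))  # True: same key
print(tofu_check("example.net", b"key-B", pins))  # False: key changed
```

This is how SSH has worked for decades; it needs no CA at all, at the cost of having to verify the key out of band (e.g. the QR code) if you care about the first connection.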
> Selfhost DNS server (hard to scale in practice).
If your DNS port is closed by your ISP, you can't have people use your DNS server from the outside and then you need Google or Amazon which are not decentralized.
Also to be selfhosted you can't just forward what root DNS servers say, you need to store all domains and their IPs in a huge database.
The root certificates are pretty decentralized. There isn't just one and you can use whichever one you like for your certificate. The browsers or other clients then themselves choose which roots to trust.
The main thing that isn't very decentralized here is Google/Chrome being the one to de facto choose who gets to be root CA for the web, but then it seems like your beef should be with people using Chrome rather than people using Let's Encrypt.
> If your DNS port is closed by your ISP, you can't have people use your DNS server from the outside and then you need Google or Amazon which are not decentralized.
It's pretty uncommon for ISPs to close the DNS port and even if they did, you could then use any VPS on any hosting provider.
> Also to be selfhosted you can't just forward what root DNS servers say, you need to store all domains and their IPs in a huge database.
I suspect you're not familiar with how DNS works.
Authoritative DNS servers are only required to have a database of their own domains. If your personal domain is example.com then you only need to store the DNS records for example.com. Even if you were hosting a thousand personal domains, the database would generally be measured in megabytes.
Recursive DNS servers (like 1.1.1.1 or 8.8.8.8) aren't strictly required to store anything except the root hints file, which is tiny. In practice they cache responses to queries for the TTL (typically up to a day) so they can answer from the cache instead of making another recursive query for each client request, but they aren't required to cache any specific number of records. A lot of DNS caches are designed with a fixed-size cache that LRU-evicts records when it gets full. A recursive DNS server with a 1GB cache will have reasonable performance even under high load, because the most commonly accessed records will be in it and the least commonly accessed records are likely to have expired before they're requested again anyway. A much larger cache gets you only a small performance improvement.
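The fixed-size cache with LRU eviction and TTL expiry described above can be sketched in a few lines (a toy, not a real resolver):

```python
import time
from collections import OrderedDict

class DnsCache:
    """Fixed-capacity DNS cache: entries expire at their TTL, and when
    the cache is full the least recently used entry is evicted."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = OrderedDict()  # name -> (records, expiry_time)

    def get(self, name):
        entry = self.entries.get(name)
        if entry is None:
            return None
        records, expires = entry
        if time.monotonic() >= expires:
            del self.entries[name]       # TTL expired; drop the entry
            return None
        self.entries.move_to_end(name)   # mark as recently used
        return records

    def put(self, name, records, ttl):
        if name in self.entries:
            self.entries.move_to_end(name)
        self.entries[name] = (records, time.monotonic() + ttl)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
```

Because rarely-used records tend to expire before they're asked for again, the LRU policy and the TTLs reinforce each other: the hot set stays resident and the long tail evicts itself.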
DNS records are small so storing a very large number of them can be done on a machine with few resources. A DNS RRset is usually going to be under 100 bytes. You can fit tens of millions of them in RAM on a 4GB Raspberry Pi.
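The back-of-the-envelope arithmetic, using the 100-bytes-per-RRset estimate from above:

```python
# Rough capacity estimate; 100 bytes per RRset is the assumption above.
ram_bytes = 4 * 1024**3      # 4 GiB, e.g. a Raspberry Pi
rrset_bytes = 100            # typical small RRset
capacity = ram_bytes // rrset_bytes
print(f"{capacity:,} RRsets fit in RAM")  # about 43 million
```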
I have two younger brothers. They have the same last name, first initial, a history of having lived at the same address, and the same birth date, because they're twins.
Every time one of them goes to a particular medical facility, he has to explicitly decline having them merge their charts.
With Linux distros they typically put the web link right on the main page and have a torrent available if you go look for it, because they want you to try their distro more than they want to save some bandwidth.
Presumably HF does the opposite because the bandwidth saved is greater and they're not as concerned that you might download a different model from someone else.
The second one doesn't seem excessively complicated and the latency could be mitigated by caching the CA for a reasonable period of time.
But if you're going to modify the protocol anyway then why not just put it in the protocol that a "server" certificate is to be trusted even if the peer server is initiating rather than accepting the connection? That's effectively what you would be doing by trusting the "server" certificate to authenticate the chain of trust for a "client" certificate anyway.
The complication of (2) is that it requires a server with a completely different protocol and port, which may or may not already be claimed by server software other than the XMPP server, to act in a specific way (e.g. use a compatible certificate).
The technical term for such cross-service requirements is "a giant pain in the ass".
That's assuming you're requiring the ordinary HTTPS port to be used. For that matter, why would it even need to use HTTPS? Have the peer make a TLS connection to the XMPP server to get the CA.
But it still seems like the premise is wrong. The protocol is server-to-server and the legacy concept that one of them is the "client" and needs a "client certificate" is inapplicable, so why shouldn't the protocol just specify that both peers are expected to present a "server certificate" regardless of which one initiated the connection?