Yep! Just to clarify, this is entirely hypothetical! I want to know the value-add people would put on such a database if it were to exist.
From what I've seen, there's little insight in the database market into what people actually want most out of a database, or what makes a database "appealing" to people beyond its reputation for being time-tested.
Yeah, no, the database being replicated "in a masterless fashion" doesn't help. The problem is that under some network partition conditions, you can't perform some writes, or any writes at all, unless you're willing to sacrifice consistency.
Wouldn't there be failures only under a complete network partition? So long as one node in one partition can communicate with a node in the other, writes can still be performed.
Availability is what would be sacrificed in the event that a node is partitioned away from the main network.
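To make the tradeoff concrete, here's a minimal sketch of a masterless cluster doing quorum writes (N=3, W=2). The `Cluster` class and node names are invented for illustration, not from any real database:

```python
# Hypothetical sketch: quorum writes in a masterless cluster (N=3, W=2).

class Cluster:
    def __init__(self, nodes, write_quorum):
        self.nodes = set(nodes)
        self.write_quorum = write_quorum

    def write(self, reachable):
        # A write succeeds only if the client can reach a write quorum.
        return len(self.nodes & set(reachable)) >= self.write_quorum

cluster = Cluster({"a", "b", "c"}, write_quorum=2)

# Partial partition: the client still reaches a majority, so writes proceed.
assert cluster.write({"a", "b"})

# A node cut off alone can't assemble a quorum: it must either refuse writes
# (sacrificing availability) or accept them anyway (sacrificing consistency).
assert not cluster.write({"c"})
```

The point is that "masterless" only moves the decision around: the minority side of a partition still has to pick between refusing writes and diverging.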
A fork can be created just by taking the existing longest chain and appending a block to the frontier block's parent, or even to the frontier block itself. It's pretty simple. The time between blocks is non-deterministic, and realistically there often isn't a clear winner.
Having n forks of the same height (even three or more) is very common, especially when you consider how many miners there are across the entire Bitcoin network.
I tried to find details about the current mini-forks but couldn't. Do you have a link? I'd guess the number is small because the mining pools are well connected and none of them want to waste resources (since they'd lose money).
Three forks is probably a good estimate, but I'd like to see some data. Anyway, it's a linear problem that the difficulty adjustment handles automatically; it's not an exponential problem that makes the protocol impractical at scale.
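The fork-then-resolve dynamic described above can be sketched in a few lines. The `Block` type and its fields are invented for this example (real Bitcoin nodes compare cumulative proof-of-work, which this sketch approximates with height):

```python
# Illustrative sketch of a fork and longest-chain resolution.
from collections import namedtuple

Block = namedtuple("Block", ["height", "parent"])

genesis = Block(0, None)

# Two miners extend the same parent at the same height: a fork.
fork_a = Block(1, genesis)
fork_b = Block(1, genesis)
tips = [fork_a, fork_b]  # equal height, so no clear winner yet

# The tie breaks once one fork gains the next block first.
fork_a2 = Block(2, fork_a)
tips = [fork_a2, fork_b]

# Nodes follow the tip of the longest chain.
winner = max(tips, key=lambda b: b.height)
assert winner is fork_a2
```

Since block discovery is probabilistic, ties like `fork_a` vs `fork_b` are expected, and they resolve one block later with high probability, which is why the orphan rate stays small in practice.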
DAGs aren't very scalable, because, as with git, you have to store the whole history.
In some scenarios, you can rebase/snapshot to clean up history, but these usually require a type of centralization or consensus, which defeats the point of using something like git.
As a result, DAGs can only be used to replace a subset of apps/tools out there.
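Here's a minimal sketch of why a hash-linked DAG retains full history: each node's hash commits to its parents, so older entries can't be dropped without breaking the links. The `append` helper and store layout are invented for illustration:

```python
# Minimal hash-linked append-only store (git-like DAG, simplified).
import hashlib

store = {}  # hash -> (payload, parent hashes)

def append(payload, parents=()):
    h = hashlib.sha256((payload + "".join(parents)).encode()).hexdigest()
    store[h] = (payload, tuple(parents))
    return h

root = append("v1")
child = append("v2", parents=(root,))

# Even after "v2" supersedes "v1", v1 must stay: v2's hash commits to it.
assert store[child][1] == (root,)
assert len(store) == 2  # history only grows
```

Squashing `v1` away would change `v2`'s hash, which is exactly why rebasing/snapshotting needs everyone to agree on the rewritten history, i.e. the centralization or consensus mentioned above.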
The most important/used apps, though, are indexed lists. Things like:
- Google rankings
- Reddit homepages
- AirBnB listings
- Ubers nearby
If you're updating a geo-index hundreds of times a second, as with cars' GPS locations, then with a DAG you're just accumulating history you'll never read, wasting resources and bottlenecking the system so it can't scale beyond a certain threshold. I've dealt with this in practice, and it was no fun.
We switched off DAGs, and our biggest production deployment now serves 15 million monthly users, which is far, far beyond the scale of any of our previous systems.
Halfway through, the paper describes a new approach to creating trustless, decentralized cloud computing markets via cryptographic resource attestation.
To simplify: the resource attestation model allows one to bind a virtual currency to some amount of computational time and resources.
It allows users to securely rent out their idle smart devices to developers, researchers, startups, and enterprises that need large amounts of compute power at low prices.
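As a rough sketch of the "bind a currency to compute" idea: the device signs a claim tying tokens to CPU-seconds, and anyone holding the device's key material can check it. The fields and the HMAC scheme here are invented for illustration, not taken from the paper:

```python
# Hypothetical attestation binding tokens to compute time on a device.
import hashlib
import hmac
import json

DEVICE_KEY = b"device-secret"  # stand-in for the device's attestation key

def attest(tokens, cpu_seconds):
    claim = json.dumps({"tokens": tokens, "cpu_seconds": cpu_seconds},
                       sort_keys=True).encode()
    sig = hmac.new(DEVICE_KEY, claim, hashlib.sha256).hexdigest()
    return claim, sig

def verify(claim, sig):
    expected = hmac.new(DEVICE_KEY, claim, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

claim, sig = attest(tokens=10, cpu_seconds=3600)
assert verify(claim, sig)           # a well-formed attestation checks out
assert not verify(claim, "0" * 64)  # a forged signature is rejected
```

A real scheme would presumably use asymmetric keys and hardware-backed attestation rather than a shared secret, but the shape is the same: a verifiable claim that these tokens entitle the holder to that much compute.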