Uh! Thanks for mentioning it. I also wasn't aware that Plausible is blocked; thanks for the hint, though I do wonder why it breaks the search. I'll figure it out. Thanks again.
Simplyblock sponsors the domain, yes. Good point, I should have mentioned that. It's not on the simplyblock GitHub account though, and as you can see on GitHub, I built it over the weekend :)
Disclaimer: My full-time job involves developing a CSI driver that is on this list (not simplyblock, but I won't say more than that to avoid completely doxxing myself).
A lot of this chart seems weird - is it somehow autogenerated?
For example, what does it mean for a driver to support ReadWriteOncePod? On Kubernetes, all drivers "automatically" support RWOP if they support normal ReadWriteOnce. I then thought maybe it meant the driver supports the SINGLE_NODE_SINGLE_WRITER CSI capability (which basically lets a CSI driver differentiate RWO from RWOP and treat the latter specially) - but the AliCloud disk driver is marked as supporting RWOP on this chart despite not doing that (https://github.com/search?q=repo%3Akubernetes-sigs%2Falibaba...).
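For context, on the Kubernetes side RWOP is just another access mode on the claim; whether the driver does anything special with it is exactly the part that's unclear. A minimal sketch of such a PVC (names are made up):

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: rwop-example            # hypothetical name
  spec:
    accessModes:
      - ReadWriteOncePod          # vs. the usual ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
    storageClassName: some-csi-sc # hypothetical StorageClass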
I bet there are still plenty of mistakes. It's not really automatically generated, but I started from the list at https://kubernetes-csi.github.io/docs/drivers.html and split the table into a YAML file, marking the features that are mentioned in the docs.
I fixed a few where I saw things in their respective docs, and added features like file or object storage. I also added a few that weren't mentioned.
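To give an idea of the data behind the chart, a driver entry looks roughly like this (the field names here are simplified for illustration, not the exact schema):

  - name: example-csi-driver
    docs: https://example.com/csi-docs               # placeholder URL
    storage: [block, file]
    accessModes: [ReadWriteOnce, ReadWriteOncePod]
    features: [snapshots, expansion, cloning]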
Somewhat related: can anyone recommend a simple solution to share each node’s ephemeral disk/“emptyDir” across the cluster? Speed is more important than durability; this is just for a temporary batch-job cluster. It’d be ideal if I could stripe across nodes and expose one big volume to all pods (JBOD style).
I guess your biggest issue may be the multiple-writer problem, but you'd have the same issue on a local disk. The moment multiple writers are supposed to update the same files, you'll run into issues.
Have you thought about TCP sockets between the apps to share state, or something like a Redis database?
...pods could somehow mount node{1..5} as a volume, which would have 5 * 200 GB (~1 TB) of space to write to... multiple pods could mount it and read the same data.
My experience is that OpenEBS and Longhorn are cool and new and simplified, but that I would only trust my life to Rook/Ceph. If it's going into production, I'd say look at https://rook.io/ - Ceph can do both block and filesystem volumes.
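If it helps, replicated block storage in Rook boils down to a CephBlockPool plus a StorageClass that points at it; a sketch along the lines of the Rook examples (check their docs for the full version):

  apiVersion: ceph.rook.io/v1
  kind: CephBlockPool
  metadata:
    name: replicapool
    namespace: rook-ceph
  spec:
    failureDomain: host   # spread the copies across hosts
    replicated:
      size: 3             # keep three copies of each object

A StorageClass using the rook-ceph.rbd.csi.ceph.com provisioner then references that pool; CephFS (filesystem volumes) works similarly with its own provisioner.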
What problem are you trying to solve with replicated storage?
A lot of times, finding a solution further up the stack or settling for backups ends up being more robust and reliable. Many folks have been burnt by all the fun failure scenarios of replicated filesystems.
I'm building a platform that hosts a few thousand services on tens of thousands of nodes. Currently, each of those services has to manage data replication internally and be aware of failure domains (host, rack, data center).
What I would like to do is develop a system where applications just request replicated volumes that span a specific failure domain, pushing that logic down into the platform.
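Conceptually, I'd like requesting such a volume to be as simple as picking a StorageClass, something like this (the provisioner and parameters are placeholders, not a real driver):

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: replicated-across-racks   # hypothetical
  provisioner: csi.example.com      # placeholder driver
  parameters:
    replicas: "3"                   # hypothetical driver-specific parameter
    failureDomain: rack             # hypothetical driver-specific parameter
  volumeBindingMode: WaitForFirstConsumer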
Longhorn, hands down. It’s dead simple to set up and works well with production workloads. We’ve had disks fail, nodes fail, etc., and it has handled everything brilliantly. It also runs at near-native speed, which is really nice.
I guess it depends on how you measure it! If you compare it to running on RAID 5, it is. We run it on RAID 0 and let Longhorn handle the replication instead of the RAID, using the striping to get more throughput.
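For reference, the replica count is just a StorageClass parameter, roughly like the stock Longhorn example (double-check the Longhorn docs for the current fields):

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: longhorn-replicated
  provisioner: driver.longhorn.io
  allowVolumeExpansion: true
  parameters:
    numberOfReplicas: "3"        # Longhorn replicates across nodes instead of the RAID
    staleReplicaTimeout: "2880"  # minutes before a failed replica is cleaned up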
Additionally, https://plausible.io/js/script.js is blocked by ad blockers, and the search then breaks completely.