Clicking 'get started' merely leads to a 'please let me waste hours talking with your enterprise sales reps' form.
I should be able to evaluate and try a mature product on my own, and I would rather not waste time doing so if I don't even know what it costs.
That said, Quobyte is in production for business-critical workloads.
Edit: From the product page the main advantages seem to be end-to-end checksums and an adaptive placement engine.
Architecture and performance comparisons would be very interesting, though I understand if you can't disclose that yet.
Quobyte is a new implementation and shares only the architectural blueprint with XtreemFS.
* XtreemFS has POSIX file system semantics and split-brain-safe quorum replication for file data. With Quobyte, we pushed that further and now have full fault tolerance for all parts of the system, at high performance for both file and block workloads (Quobyte also does erasure coding). GlusterFS replication is not split-brain safe, and there are many failure modes that can corrupt your data.
* XtreemFS and Quobyte have metadata servers. This lets them place data for each file individually, since the metadata server stores the location of the data. With Quobyte we pushed this quite far and have a policy engine that lets you configure placement (see the sketch below). When the policy changes, the system can move file data transparently in the background. This way you can configure isolation, partitioning, and tiering. GlusterFS has a fairly static assignment of file data to devices.
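To make the placement point concrete, here is a toy Python sketch. This is not Quobyte's actual API; the device attributes and policy fields are made up. It only illustrates the idea that a metadata service which stores per-file locations can apply a per-file policy when choosing devices, and can re-place data later when the policy changes:

    from dataclasses import dataclass

    @dataclass
    class Device:
        name: str
        tier: str        # hypothetical attributes, e.g. "ssd" or "hdd"
        rack: str

    @dataclass
    class PlacementPolicy:
        required_tier: str
        replicas: int

    def place_file(policy: PlacementPolicy, devices: list[Device]) -> list[Device]:
        """Pick target devices for one file according to its policy."""
        candidates = [d for d in devices if d.tier == policy.required_tier]
        # Spread replicas across racks for fault isolation.
        seen_racks: set[str] = set()
        chosen: list[Device] = []
        for d in candidates:
            if d.rack not in seen_racks:
                chosen.append(d)
                seen_racks.add(d.rack)
            if len(chosen) == policy.replicas:
                return chosen
        raise RuntimeError("not enough devices satisfy the policy")

    devices = [Device("dev1", "ssd", "rack-a"), Device("dev2", "ssd", "rack-b"),
               Device("dev3", "hdd", "rack-a")]
    print(place_file(PlacementPolicy("ssd", 2), devices))

The point is that because placement is a per-file decision recorded in metadata, changing the policy just means computing new targets and migrating the data in the background, rather than rehashing everything as with a static layout.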
Also, I actually tried not to make any value judgments. I am using split-brain safety as a technical term, i.e. the P in CAP. My understanding is that GlusterFS does not have this in its system model, and both you and the documentation seem to support this: "This prevents most cases of "split brain" which result from conflicting writes to different bricks."
Quobyte generally (and XtreemFS for file data only) does quorum replication based on Paxos, where split brain is part of the system model. They are CP: data is not always available, but reads are always consistent as long as a quorum is reachable. Like Ceph.
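For anyone unfamiliar with what CP means in practice, here is a deliberately simplified Python sketch. It is not real Paxos (no prepare/accept phases, no log), just the majority-quorum intuition: two network partitions can never both hold a majority, so you never get conflicting writes, and the system becomes unavailable instead of inconsistent when the quorum is gone:

    class Replica:
        def __init__(self):
            self.version = 0
            self.value = None
            self.up = True

    def write(replicas, value):
        live = [r for r in replicas if r.up]
        if len(live) <= len(replicas) // 2:
            raise RuntimeError("no quorum: write rejected (CP, not AP)")
        new_version = max(r.version for r in live) + 1
        for r in live:
            r.version, r.value = new_version, value

    def read(replicas):
        live = [r for r in replicas if r.up]
        if len(live) <= len(replicas) // 2:
            raise RuntimeError("no quorum: read rejected")
        # Return the value with the highest version seen in the quorum.
        return max(live, key=lambda r: r.version).value

    replicas = [Replica() for _ in range(3)]
    write(replicas, "v1")
    replicas[0].up = False          # one failure: a 2-of-3 quorum still works
    write(replicas, "v2")
    print(read(replicas))           # -> "v2"
    replicas[1].up = False          # second failure: no majority left
    try:
        read(replicas)
    except RuntimeError as e:
        print(e)                    # -> "no quorum: read rejected"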
I am sorry that I missed the progress on placement. It seems like I need to catch up on what happened after the volume types.
FWIW, I do think the current Gluster approach to replication is not sufficiently resistant to split-brain in the all-important edge cases. That's why I've been working on a new approach, much more like Ceph and many other systems - though few of them use Paxos in the I/O path. That's wasteful. Other methods such as chain or splay replication are sufficient, with better performance, so they're more common.
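To illustrate the contrast with running consensus on every write, here is a rough Python sketch of chain replication (illustrative only; real implementations handle failures and chain reconfiguration through a separate coordination service, which is where the consensus lives). A write enters at the head, is forwarded replica to replica, and is acknowledged only once the tail has applied it; reads go to the tail, so anything readable is already on every replica:

    class ChainNode:
        def __init__(self, name):
            self.name = name
            self.store = {}
            self.next = None   # successor in the chain

        def write(self, key, value):
            self.store[key] = value
            if self.next is not None:
                return self.next.write(key, value)   # forward down the chain
            return "ack"                             # tail acknowledges

    head, mid, tail = ChainNode("head"), ChainNode("mid"), ChainNode("tail")
    head.next, mid.next = mid, tail

    print(head.write("/file1", b"data"))   # -> "ack" once the tail has it
    print(tail.store["/file1"])            # reads are served from the tail

This keeps the data path to plain forwarding and acknowledgment, which is why chain-style schemes tend to perform better than putting a full consensus round on every I/O.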
Google "Byzantine fault tolerant file systems" to find quite a few more.
If not already loaded, load the FUSE kernel module:
> modprobe fuse
But more importantly, how are you using any of those? :)