Problem is, I run a dedicated server hosting company, and the majority of my customers want either CentOS 6.x or Debian Stable. Neither can install XFS as the root filesystem (but both can use it for other filesystems during install, strangely enough; a separate ext2/3/4 /boot doesn't fix the issue).
At home I have a mini-server with 2x M550 128GB SSDs and 2x ES.2 2TB HDDs. The SSDs are partitioned as 16GB md RAID1 XFS for /, 256MB for the ZFS ZIL, and the rest for ZFS L2ARC; the 2x 2TB drives are a ZFS mirror. /tank and /home are on ZFS, and / is pretty much empty.
The only thing that would improve XFS at this point is optional checksumming and LZ4 compression on root filesystems; otherwise it's basically perfect.
By the way, said mini-server? Dual-core Haswell at 3.x GHz, 16GB of DDR3-1600. From pressing the power button, going through the BIOS, and hammering enter to get past the grub menu as fast as possible, it takes about 7 seconds to reach the login prompt; less than 3 of that is between leaving grub and getting to the prompt.
FYI, I've got many machines running Debian Stable (wheezy) with xfs root filesystems, no /boot partition. No problems.
My home server I described? Runs Debian, has XFS root. It clearly works.
Thanks for getting me to test again. One less OS I have to deal with that can't do XFS properly.
I'd have expected them to be beating down the CentOS 7 door by now.
What sort of problems have you encountered?
Basically, everything they've said boils down to package deps being broken with no easy way to fix them, plus random segfaults and kernel panics (though I have no idea why any distro would ever have that issue).
Ignoring the limitations of physics, 8EB at $0.1/GB comes out to $858,993,459. I don't think there will ever be enough of a market to support the mass production of billion-dollar disks.
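For anyone who wants to check that figure, here's the arithmetic as a quick Python sketch (it assumes binary gigabytes, i.e. 1 GB = 2^30 bytes, which is what reproduces the number above):

    # Price of an 8EB disk at $0.10/GB, assuming binary units (1 GB = 2**30 bytes).
    PRICE_PER_GB = 0.10
    capacity_gb = 8 * 2**30          # 8 EB expressed in (binary) GB
    print(f"${capacity_gb * PRICE_PER_GB:,.0f}")   # -> $858,993,459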
And given that XFS is the strongest of the Linux filesystems for large systems, it will undoubtedly be expected to handle the high-capacity drives and arrays too (high-capacity relative to today).
You can find verification and many more examples in The Innovator's Dilemma.
Consistent throughput is far and away our top-ranked requirement, so XFS has ruled for over a decade.
That one most definitely is at least partially true; I experienced it several times using XFS on my home Linux machine in the early 2000s. I use JFS now, but supposedly this bug/misfeature was fixed some time ago, thankfully.
Edit: indeed, the linked slides themselves back me up on this; just three slides after the one quoted by the article, we have "Null files on crash problem fixed!"
Ext3/4 are from a previous generation of filesystems; their performance on very large volumes is not acceptable. And you don't want your production system to be down for hours because of a boot-time fsck.
ReiserFS had some promise years ago, but the project leader is in prison and nobody has picked up the effort. I don't think it could close the gap with other filesystems even if Mr. Reiser returned to develop it after his sentence.
If you mean, "why didn't it win earlier?", it's probably a combination of ext3 being good enough (particularly when multi-spindle, large-partition environments were less common for Linux users), Reiser3 doing a great PR job (pity about the crappy filesystem...), and ext3 having better data integrity because Ted Ts'o didn't understand how his filesystem implementation worked (he fixed that with ext4).
I wouldn't be surprised if XFS was the second most deployed filesystem after ext4.
Where it's really losing is to the "block layering violations" of btrfs. Being able to manage storage pools at the filesystem level like ZFS is a major feature advantage.
Something along these lines, although I never used RHEL: https://access.redhat.com/solutions/54544
> From Btrfs, GlusterFS, Ceph, and others, we know that it takes 5-10 years for a new filesystem to mature.
Those are bad examples - they are all significantly more complex than XFS/ext. Two of the three are distributed filesystems that aren't solving any of the same problems.
However, their inclusion in the article is worth noting, even if the author put them in the wrong paragraph.
Increasingly, large volumes are becoming distributed over lots of individual servers, with technologies like glusterfs and ceph. Both of these, and some of their competitors too (xtreemfs is also really good, despite the silly name), use traditional filesystems on the underlying server volumes that are presented as a large distributed FS. XFS is generally used for this task - and unless an individual node got larger than ~ 8EB, there's currently no reason to change this.
The real question, then, becomes - will a single server need a local FS larger than 8EB by 2025-2030? Possibly not. It's very dangerous to say "X is all anyone will ever need", but I think that we're going to increasingly see bulk storage go the same way that CPUs did - instead of a single huge local FS (analogous to increasing single-core clockspeed), we'll see an increasing number of storage nodes combined into one, via a distributed filesystem (analogous to higher core count).
Part of the reason for this is that there are quite a few disadvantages to having very large volumes in one place. If you use RAID, rebuilds become unreasonably long, and RAID rebuild speed is not currently keeping pace with storage growth (rough numbers in the sketch below). If you don't, and instead solve redundancy with multiple nodes, then the bigger the individual node is, the larger the impact when it fails. At some point you're also going to need to shift data off that box, and while we're likely to have 100GbE server ports around then, even 100GbE is going to take an unreasonably long time to move 8EB anywhere.
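To put rough numbers on the rebuild problem, here's a back-of-the-envelope sketch; the ~150MB/s sustained write speed and the drive sizes are illustrative assumptions of mine, nothing more:

    # Best-case RAID rebuild time: the replacement disk has to be written end
    # to end, so rebuild time >= capacity / sustained write speed.
    def rebuild_hours(capacity_tb, write_mb_per_s=150):
        return capacity_tb * 1e12 / (write_mb_per_s * 1e6) / 3600

    for tb in (2, 8, 20):
        print(f"{tb} TB drive: ~{rebuild_hours(tb):.0f} hours minimum")
    # 2 TB -> ~4 h, 8 TB -> ~15 h, 20 TB -> ~37 h: capacity grows much faster
    # than sequential write speed, so rebuild windows keep getting longer.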
EDIT: Just did some quick calculations. Assuming a 100 Gigabit server port, that's 12.5 Gigabytes/s. At 12.5 GB/s, it would take over 20 years to transfer 8EB anywhere. The idea that we're going to have 8EB of data on an individual server in that timeframe is starting to look a bit silly, with that in mind - how's it going to get there? What on earth would you do with it once it is there?
Even if by some magic we invent 1TbE and make it cheap enough to use on servers (and invent 10+TbE for the network core) by 2025, that would still take over 2 years to fill the disk. Yes, sorry, but this is just silly. 8EB across lots of individual servers? Sure. But all on one server in a regular filesystem? Not going to happen any time soon.
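Same back-of-the-envelope method for the transfer times, in case anyone wants to reproduce them (decimal units, link fully saturated, zero protocol overhead, so this is the best case):

    # Time to move 8 EB over a saturated link, ignoring protocol overhead.
    CAPACITY_BYTES = 8e18            # 8 EB, decimal

    def years_to_transfer(link_gbit_per_s):
        bytes_per_s = link_gbit_per_s * 1e9 / 8
        return CAPACITY_BYTES / bytes_per_s / (3600 * 24 * 365.25)

    print(f"100 GbE: ~{years_to_transfer(100):.1f} years")    # ~20.3 years
    print(f"1 TbE:   ~{years_to_transfer(1000):.1f} years")   # ~2.0 years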