Why are there no true cross-platform filesystems? (kuncheff.com)
154 points by Daegalus 1611 days ago | hide | past | web | 105 comments | favorite

I think the real, but unhelpful answer is "supply and demand". Most filesystems are used by only one operating system at a time.

If two operating systems need to share files, they're usually on different machines, so it's simpler to copy the files or provide a network interface to them, rather than share disk-level access.

Getting different OS makers to standardize (and therefore slow innovation) on something as performance-impacting as the file system isn't worth the benefit.

  provide a network interface to them
If you're trying to use partitions off a single disk, implement this using minimal virtual machines: OSX in single-user mode with only the necessary kernel modules loaded, Windows Server Core, a suitably tiny Linux. Not as good as native FS drivers (in particular, VMs are memory hogs), but cheaper than extra hardware and a lot faster than going through a cable.

This makes you happy until it fails. Windows, Mac OS X and even real-world Linux write state information onto the FS (hibernate etc.). If that goes wrong, you may not be able to boot into your native installation anymore.

Another thing is evolving features, and Windows is particularly good at this: more and more features are added to the FS, and if you don't keep the mini VM and the native installation in sync, weird stuff can happen.

I have been dual-booting for many years, but I have stopped sharing partitions completely. If I need interoperability I use the cloud, USB sticks and "normal VMs". It's fool-proof, safe and low-maintenance.

How about a little backwards-compatible patchkit for UDF that makes it more friendly for HDD storage use? That would require a lot less work to port than something like ext4, and if it wasn't available for a certain OS, it wouldn't be -too- big of a deal.

UDF already has an HDD mode; it isn't just a CD filesystem. It even supports POSIX permissions, hardlinks, softlinks, and most other features of modern filesystems. I've had Windows Vista, 7, Linux and XP (read-only) use it successfully on a USB stick (though you need to be more careful than with FAT to always unmount it). I've never got my UDF disks to work on OSX, however.

Is the innovation fast now?

Right! So imagine how much slower it would be if the process involved getting the Linux community to agree with Microsoft on a standard. I realize it's not quite the bad old days anymore, but still.

Why does it have to be one or the other? Why can't we standardize on one file system that has to be supported by all OSes out of the box and provides fairly good read/write speed with no artificial file size limits like FAT32's, and let innovation take place inside other file systems?

FAT32 doesn't have an artificial file size limit - it has a file size limit which was perfectly reasonable when it was developed.

All filesystems have file size limits - it's not an artificial constraint, it's a design tradeoff in terms of the size of the metadata that the file system requires.

UDF is pretty much that

>it's simpler to copy the files or provide a network interface to them

Ding ding ding. Much easier to run a Samba server than to try to make a FS work across multiple OSes. That having been said, I haven't really had much issue with using NTFS.

Generally, Linux has pretty great support for reading and writing NTFS (I want to say ntfs-3g is a default on most distros?), but Windows can't natively read ext3/ext4/btrfs. It's doable, though, with programs like DiskInternals Linux Reader and Explore2fs. However, these programs tend to be read-only, forcing you to copy files across to a Windows-readable FS.

I ran into yet another consequence of this situation today: In a Linux/Windows setup, I lost two hours of minor Photoshop work (i.e. stuff that was simple but annoying to recreate).

I have generally found it safe to read from my NTFS drives in Linux (I avoid writing if at all possible). This time, however, when I went to load the result of my work back in Linux, it could not read the folder (Thunar reported some generic IO error). Going back into Windows to investigate, Windows' opinion was "It smells like Linux has been here! I can't STAND that guy or anything he touches, so I burned the folder just to be safe. You can thank me later."

As best I can tell, it had something to do with a combination of:

1. Windows Update needing to do things upon shutdown and startup, but then not booting back into it for a month.

2. Linux having been in hibernation rather than halted. Might have had some access bit set on the NTFS drive when it hibernated? I don't actually have the domain knowledge to claim that is what might happen, so it's a complete hunch.

This was really just a rant to get out the anger at lost work. This article was quite timely, and while I'm not glad to see people share my pain (because it's still pain in the end), I hope we one day see improvements.

While technically hibernate (to disk at least) should unmount all filesystems - I'd be very wary of using hibernate (on either os) in a dualboot setup.

Other than that - my experience of late has been that ntfs is safe both to read and write from linux -- but as I now encrypt all my filesystems, that doesn't really help (no bitlocker support). If I actually needed to dualboot -- I suppose a separate disk/partition with truecrypt might do the trick.

My solution for the past several years has been to have all "interesting" files on a separate Linux file server -- and access it with cifs/samba and/or ssh.

> While technically hibernate (to disk at least) should unmount all filesystems

Not a chance. Hibernate is just like Suspend to RAM: freeze all processes but the core kernel, blit RAM to disk (usually stored in swap) and send the poweroff command straight to the CPU. Of course it doesn't unmount anything, hence there are live FS data structures in RAM. Booting an alternate system and accessing a filesystem that is still open on the other side is like mounting the same filesystem twice and writing to it. It will blow up on you.

Ah, I was thinking of the Debian package for hibernate, that has custom scripts for unmounting selected file systems, from the help text:

  "If you have network shares or external devices that should be unmounted before suspending, list them here."
  "Unmounts any filesystems of the given types. This is most useful for network filesystems such as smbfs and nfs."
And afaik it is perfectly possible to add ntfs mounts to that list - but I don't think any program with open files on that mount would handle it gracefully (I believe the script will figure out which processes (if any) have open files and kill them prior to hibernation).

You can't change Windows code to 'add ntfs mounts to that list'.

Windows 8 will in fact hibernate when you think you have turned off the computer.

Bottom line: You can't dual boot safely with Windows 8.

Yeah, this bit me recently: I was hibernating my Linux system but sometimes booting into Win7. I ended up having to reformat the Linux partitions; some inode information got messed up, and e2fsck found and 'corrected' some errors, which really didn't help at all. Lesson learned: as long as I'm dual booting, I will stick with s2ram!

You can suspend-to-disk just fine, as long as you remember to always boot back into the OS you suspended from. I guess suspend-to-ram forces you to do that (and my preferred option anyway), but I just wanted to make sure it was clear that there's nothing unsafe about suspend-to-disk as long as you don't let another OS get its grubby fingers on your mounted disk.

Linux (ntfs-3g actually) won't mount your NTFS drive if it is hibernated. So I'd really avoid booting the other OS after hibernating one.
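For reference, ntfs-3g's behavior here can be worked around, at the cost of the hibernated Windows session. The commands below are a sketch (the device name and mount point are examples):

```
# Mount read-only without touching the hibernation image:
sudo mount -t ntfs-3g -o ro /dev/sda2 /mnt/windows

# Or discard the hibernated Windows session entirely (unsaved state in
# that session is lost, but the filesystem becomes writable again):
sudo mount -t ntfs-3g -o remove_hiberfile /dev/sda2 /mnt/windows
```

The `remove_hiberfile` option deletes the hiberfile, so Windows will do a cold boot afterwards.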

"It smells like Linux has been here! I can't STAND that guy or anything he touches, so I burned the folder just to be safe. You can thank me later."

This pretty much sums up Microsoft's reaction to Linux in a nutshell. It's precisely why articles like the one we are commenting on exist. The answer to the question that the article poses is "Because Microsoft hates competition and acts like a petty spoiled brat. Get back to us when they grow up."

Microsoft's relationship with Linux is actually a bit more complicated. It was a top-5 [1] contributor to the Linux kernel in 2011 and a top-20 [2] contributor in 2012. The reason is that Linux virtualization is a big deal for Microsoft Azure.

And the rationale here is fairly clear: having good Linux support for Azure helps Microsoft sell its cloud service, while having good Linux support for Microsoft filesystems helps people move away.

[1] http://www.zdnet.com/blog/open-source/top-five-linux-contrib...

[2] http://www.geekwire.com/2012/surprise-microsoft-list-top-lin...

I'm aware of Microsoft's Linux kernel contributions; I've also been around long enough to remember the woes of NTFS compatibility, CIFS compatibility, the Halloween documents, the "virus" accusations, the SCO debacle, and not least of all relevant to this current conversation, the fact that exFAT is basically banned from an FLOSS implementation due to onerous licensing and software patents.

What the author really wants is network-attached storage.

Find a cheap/old box, put your favourite flavour of Linux on it, add a few TB hard drives, and run Samba. If performance is an issue, use a mobo with SATA 3 controllers and a few gigE ports, and add additional network daemons for more "native" networked filesystems (NFS, AFS, etc.) as necessary.
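A minimal Samba share for such a box might look like the following (share name, path and user are examples, not from the thread):

```ini
; /etc/samba/smb.conf -- minimal sketch of a single shared folder
[global]
   workgroup = WORKGROUP
   server string = Home file server
   security = user

[shared]
   path = /srv/shared
   read only = no
   valid users = youruser
```

After `smbpasswd -a youruser` and a service restart, the share is reachable from Windows, OS X and Linux alike.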

If he doesn't want to spend any money on hardware, perhaps he could do the same with a Virtualbox VM? That can be hosted on all three of his OSes. I don't know about performance, though.

No, that's silly, because it presents a chicken-and-egg problem -- where would you put the Virtualbox VM if you don't have a filesystem that all three OSes can read?

I was very intentional in suggesting it be a separate physical device.

I've been using Paragon's native NTFS for OSX and it's been fast and robust enough under the modest loads of small office file sharing.

The author of this article seems to have mistakenly dismissed their stuff as FUSE based, which it doesn't seem to be. It's also cheap.

They also have HFS for Windows, haven't tried it even though it came free with the NTFS for OSX driver.

Not associated with Paragon in any way, just a customer.

I think the author's use case is actually quite rare these days. I personally don't dual boot any more, much preferring to run other OSes in a virtual machine. Then I don't care what filesystems they use internally, all shared data is shared via the VM's interface anyway.

I think that's the way of the future, not cross-platform filesystems.

I try to do this too; my MacBook even has Win8 and Ubuntu as VMs on it. But running Xcode or Visual Studio in a VM can get pretty laggy, even when I give the guest OS a considerable chunk of my host resources.

I have an i7 920 and 16 gigs of RAM, I give about 50% to the guest, and Visual Studio and Xcode, along with the simulators, are still pretty messy. Running a VM within a VM can do that.

It's the only reason I do a triple boot like this. Otherwise, I would definitely go pure VMs.

Though Parallels on my laptop does a bangup job with Visual Studio.

But then again, the main problem is huge files and lots of writes.

VM's IO performance is usually pretty bad. Try an SSD.

I still dual boot, because I still can't stand the overhead of VMs. However, the tasks I need to do on Windows don't overlap with Linux, so it is not a problem, and Linux support for NTFS seems to be good enough (otherwise I could still make a dedicated share partition in FAT).

Especially when you do 3D stuff, the overhead of a VM is no longer acceptable. So I'm also dual booting. I wasn't sure which FS would work well and made partitions with ext2, NTFS and FAT32. But I'm using NTFS now, as it just works well enough (no file size limit; with ext2 I had the trouble that it wants fs checks once in a while, and if I booted into Windows just then I couldn't access it... also VS once had trouble with files on ext2 for some unknown reason).

> Especially when you do 3D stuff the overhead for a VM is no longer acceptable.

I don't really find this true anymore. I run a retina MBP with VMware Fusion and VS2012 in Win7 (Boot Camp partition running in VMware) runs great, with very solid perf when running my OpenGL-based engine. Games run very well, too--the Steam selection on OS X is pretty poor, but I hate dual-booting, and the perf/quality tradeoff is very acceptable to me under virtualization.

Interesting. I have to admit I haven't tried it for a while, because it was always so bad in the past. But I'm also on Linux and not on MacOSX, maybe the situation there is better.

exFAT [0] is a great option. It's FAT32 updated with all the modern requirements. Surprised there was no mention of it! It's by Microsoft and is supported on XP and above (out of the box on Vista, update available for XP and Server 2003). It's surprisingly fast, robust, and all around a great filesystem for storing shared data.

There's a free implementation available for Linux [1] (actually in the default apt/yum/whatever repos for many distros) and it ships out-of-the-box on recent OS X installs [2].

0: https://en.wikipedia.org/wiki/ExFAT

1: https://code.google.com/p/exfat/

2: http://www.macrumors.com/2010/11/11/mac-os-x-10-6-5-notes-ex...


Yes, it is mentioned in the article. The article sucks: it does not define what the "modern" features of a filesystem are, and the author clearly sets off with his end goal in mind ("there is no cross-platform FS"), removing FSes from consideration for vague and hand-wavy reasons. This applies to several of the entries in his list. I mean, the guy thinks UDF will do the trick, but exFAT is not good enough? Come on!

I think a lot of people would contend exFAT is NOT a great option by virtue of the fact that Microsoft holds patents on the filesystem and has (in the past) enforced those patents. So it's pretty unlikely that it's ever going to have great support on Linux.

Yeah, I'm sure Linus, Theo, et al. will be totally willing to pay those patent license fees. Really quite small in the overall scheme of things when you get such a great file system in return -- yeah, NOT.

I wouldn't trust any data I particularly cared about to exFAT. It's a non-journaled filesystem with only a single FAT and free-space bitmap in most implementations, i.e. it's fairly easy to corrupt and very difficult to recover.

[Anecdote] Seagate and ExFAT; I will avoid these two till the day I die.

I've had at least 3 (I'm not sure, maybe more) Seagate internal HDDs fail on me in the past several years (I don't do anything remotely crazy with my MacBook Pro or other machines). And I've had a 1TB external HDD and 3 USB disks ruined by exFAT (yes, I always 'eject' the volume before yanking it out of the USB port).

I don't know what the hell is wrong with either of them - Maybe I've been unlucky. But that's the truth, and I'm sure the problem wasn't from my side.

Actually, exFAT is mentioned in the article.

I've been in a similar situation as the OP. I was trying to share an external USB drive with Linux, OS X and Windows 8, with my git-annex archive on it. Turns out exFAT doesn't support soft links (git-annex needs them to work), which is a feature I think should be supported by any modern filesystem. In general, you're shit out of luck with git-annex OS interoperability :/.

The first thing that comes to mind is that we shouldn't expect something new and valuable in the near future, because technology tends to focus on cloud filesystems rather than on developing new portable filesystems where the OS doesn't matter and everything is accessible. A common cloud filesystem might be the best option for such problems.

I do agree that exFAT is probably one of the better solutions out of what's available.

The author mentions journaling and extents (particularly important for his torrenting use-case to prevent fragmentation), which I feel are the way he defines "modern". exFAT also supports neither transparent encryption nor transparent compression.

Unfortunately, since it's a closed filesystem, I can't trust the Linux implementation until it's quite mature. It also won't be available in the kernel, which really hurts performance (important in an age of fast devices like USB3). As such this is not an option for Linux users.

FUSE only? Proprietary with no spec released? I wanted to like it, but it's also easy not to.

About HFS: You seem to dismiss it a bit quickly. Trial versions of 'MacDrive' on Windows have always worked very solidly for me - if I were still using bootcamp I would pay for it.

However, the bigger question IMO is: do we really need native Windows nowadays? If we banish Windows to VMs and only use OSX + Linux natively, HFS is indeed a solid option. The only things hindering us so far from going Windows VM-only were gaming and audio/video rendering software. Audio is very niche, and the rest should soon be solved with increasing support for PCIe passthrough.

HFS hard links don't even work readonly on Linux but are used extensively in Time Machine backups, so there is at least one common use case of HFS partitions that isn't cross platform.

I prefer things vice versa. I know it's very hipster to hate Windows, but if I stick OS X and Linux into VirtualBox, I can then use Samba on Linux and OS X already supports SMB. Now everybody can talk to each other.

> I know it's very hipster to hate Windows

I'm going to raise the level of meta-discussion here and say that it's even more hipster (my subjective assessment) to use that subjective assessment (hipster) of other subjective assessments (hate Windows).

Well, the thing about OSX is that it is heavily reliant on in-house media codecs and drivers, which I find haven't been virtualized very successfully so far. Furthermore, I find that the (paid) virtual host desktop solutions on OSX are much better in terms of performance and integration than their Windows counterparts - probably because of higher demand, possibly also for architectural reasons.

But maybe you have found a way and can prove me wrong: does iWork work in your VirtualBox VM? Does Xcode, including the iOS simulator, work well? Is the performance acceptable? How about the iLife suite? How about Unity development? How about the clipboard - does it keep its 'PDF snippet' functionality (no rasterization of fonts, original-quality images)?

See, there's tons of OSX exclusive software that's quite demanding in terms of media and hardware acceleration, simply because of the OS's highly integrated character. Unless someone can show me a solid OSX-in-a-box I doubt that it's a good experience.

> Does XCode including iOS simulator work well?


> Is the performance acceptable?

It is for the above. Not so much for anything demanding.

You're kinda missing my point. If I wanted to work in OS X, I certainly would never virtualize it. If I occasionally need to check a website or see how it would look on an iPhone, then virtualization is a good option.

What's wrong with Windows in a VM? Linux can share the files using samba as well.

Maybe he plays performance demanding games in his spare time or wants full performance out of Windows only 3d tools and IDEs (Visual Studio)? Just a guess. Only reasons I would want to use Windows touching hardware versus a VM.

One can do GPU passthrough with Xen[1] if the hardware allows, but that's a lot of footwork to set up and configure. Performance is near native, from reports of those I know who have done it. Ubisoft gave a demonstration a while ago of Crysis running through Xen and GPU passthrough[2].

[1] http://wiki.xen.org/xenwiki/XenVGAPassthrough

[2] http://www.youtube.com/watch?v=Gtmwnx-k2qg

? It's super duper easy to run a Samba server in Linux.

Let's clarify the author's use case: a multi-boot system for which he's looking for a universally compatible filesystem.

There are various reasons why this doesn't exist, at least not beyond compromise solutions such as vfat. A transfer partition and use of archive formats would largely suffice.

The larger problem though is expecting to be able to manage the same storage on a multi-boot system from multiple operating systems. It's one of the reasons that multi-boot is a poor alternative to virtualized systems.

The author could configure his alternative OSes as VMs, and transfer files via virtual networking, either via transfers (scp, etc.), or via mounted networked filesystems (CIFS/sambafs, NFS). Compatibility issues are handled by the networking protocols (or by using archive formats), and simultaneous access to multiple environments is available.

UDF is pretty cool. It is an open standard supported by Windows and Mac OS X, but for some reason linux doesn't support versions higher than 2.01

Maybe somebody here has the skill and time to fix this.


To be fair, the only versions higher than 2.01 listed on that page are 2.50 (which adds the ability to store all metadata on a separate partition... not that useful) and 2.60 (which adds support for "partial overwrites" on sequentially-recorded media).

Neither of those seem terribly useful for the "UDF filesystem on a shared hard-disk partition" use-case we're talking about, so I don't think Linux's limit of UDF 2.01 is very important in practice.

It's important if it will refuse to mount a later version. I don't know if that's the case, just guessing here.

So I actually have a 16GB usb formatted as UDF. It seems to work perfectly in Linux, Windows vista+, and read only support in XP. I need to explicitly unmount it in Windows though, otherwise it becomes read-only the next time I plug it in to a Windows machine (mounting it on linux fixes this).

Now, I've never gotten it to work on OSX, which I know from trying my friends' machines. I've always used the Linux udf utility to format it, and have tried both with and without a partition table. It seems OSX, unlike Windows and Linux, actually respects the partition id.
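For anyone wanting to try this, the Linux formatting tool is mkudffs from udftools. A sketch (the device name is a placeholder -- double-check it before formatting, as this destroys all data on the device):

```
# Format a whole USB stick as UDF in hard-disk mode:
sudo mkudffs --media-type=hd --blocksize=512 /dev/sdX
```

A 512-byte block size is what hard disks and most USB sticks report, and matching it matters for cross-OS compatibility.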

This is a problem, but triple booters are numerically an edge case.

Rather, what filesystem do you use on your portable drives when you use all three operating systems on different machines in different scenarios (say Windows at work, OS X at home, Linux on servers)?

I'm sure that starting a virtual machine to share EXT4 drives over the host-only network works (if you have the option of installing VirtualBox everywhere), but I'd be wary of network file system limitations and issues and overall bad performance.

The two popular platforms are mostly closed to external contributions, that's why there aren't any "true cross-platform file systems" that aren't decades out of date like FAT32.

My use case is external USB drives. A lot of comments in this discussion have said "use a network drive!", which works in some scenarios, but brings a lot of complications over a USB drive.

A $85 USB 2.5" drive carries a terabyte of data, fits in your pocket, needs only a single cable to use (and no external power), and transfers data faster than 100Mbps Ethernet. Or if you really care about speed, $40 gets you a 64GB USB3 flash drive that's faster than gigabit Ethernet. The problem is there's no filesystem you can write on these drives that every machine can read.

Link to the $4 64GB USB3 drive?

Oops, typo, updated to say $40.

I'm more upset there are no filesystems where I can organize files by applying labels to them, similar to how one can apply labels to email in Gmail.

Not quite sure if you really want that: open your favorite shell and count the number of files in your data directory; then multiply it by, say, 2 seconds each - would you like to spend that time on organization, or rather on making money? (Obviously you can apply the same label to multiple files, but you still need to figure out what belongs where.)

Personally, I gave up on manual organization of my file system last year, and learned how to love built-in search. This works great, as the rough index is in your head anyways, so you can just take a single token of the filename, type it in, and open it up; massively reducing both access time, and eliminating organization entirely.

I want to manually apply labels only to some files/folders.

I have some parts of the same project spread out through 3 folders: not in Dropbox, in Dropbox, in Public subfolder of Dropbox. It'd be really nice to have a unified view on those things. As in apply 'Dropbox' or 'Dropbox/Public' labels to some files.

I know Dropbox has shared links, but I don't want to have to create and manage those all the time. It's just an example.

Finally, I think a great way to organize files is by time created/modified, if you exclude system- or program-generated files. Quite often I just want to find a file that I know I saved a few days ago (but have no idea in which folder).
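That "newest first, regardless of folder" view is a one-liner with GNU find; a sketch (the directory here is a throwaway example, not anyone's real data):

```shell
# Stand-in for a real documents directory:
dir=$(mktemp -d)
mkdir -p "$dir/sub"
touch -d '10 days ago' "$dir/old.txt"
touch "$dir/sub/new.txt"

# %T@ prints the modification time as an epoch timestamp, so sorting
# on it lists the most recently changed files first, wherever they live.
find "$dir" -type f -printf '%T@ %p\n' | sort -rn | head -n 5
```

On a real system you would point find at your home directory and add `-name` filters to skip generated files.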

Symlinks. Put all three folders under a common parent. Then, put symlinks inside your Dropbox pointing to the relevant parts of the project.
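Concretely, the suggestion looks something like this (all paths are hypothetical; a temp directory stands in for $HOME so the sketch is self-contained):

```shell
base=$(mktemp -d)    # stand-in for your home directory

# The canonical copy of the project lives in one place...
mkdir -p "$base/projects/myproject/public"
mkdir -p "$base/Dropbox/Public"

# ...and Dropbox just sees a symlink into it.
ln -s "$base/projects/myproject/public" "$base/Dropbox/Public/myproject"

ls -l "$base/Dropbox/Public"
```

Note that the Dropbox client follows symlinks and syncs the linked content, so edits in either location stay in step.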

Save everything in one folder. Optionally set up (symbolic) links in other locations.

The fact that file systems force us to organize data into a set of named files in a fairly rigid hierarchy has been lamented since the 1970s by Ted Nelson (the man who, among other things, coined the term "hypertext") and others. However, it goes even further back than that with the critique of library indexing systems by Vannevar Bush in his seminal 1945 article As We May Think [1] that described the "Memex" as an alternative: a system that to today's eyes might look something like a PDF viewer with chained bidirectional links (think '90s web rings, only for book and magazine pages), which were called "trails".

There have been attempts to remedy those problems by both individuals like Nelson [2] and corporations like Be, Inc. [3] and Microsoft (which may or may not be related to the fact that Nelson's book Computer Lib / Dream Machines was very influential at early Microsoft and Microsoft Press even republished it). It truly is lamentable that Microsoft didn't bring one into the mainstream with WinFS [4].

[1] http://www.theatlantic.com/magazine/archive/1945/07/as-we-ma...

[2] http://www.google.com/patents?vid=6262736, http://www.xanadu.com/

[3] http://www.nobius.org/~dbg/practical-file-system-design.pdf, Chapter 5

[4] https://en.wikipedia.org/wiki/WinFS

In Bill Gates' recent Reddit Ask Me Anything, he said his biggest regret was not doing WinFS.

Q: What one Microsoft program or product that was never fully developed or released do you wish had made it to market?

Gates: We had a rich database as the client/cloud store that was part of a Windows release that was before its time. This is an idea that will remerge since your cloud store will be rich with schema rather than just a bunch of files and the client will be a partial replica of it with rich schema understanding.

-- http://www.reddit.com/r/IAmA/comments/18bhme/im_bill_gates_c...

I wonder how an implementation of this using extended attributes (xattr) would scale? I couldn't find much information on anyone doing this before, except one command line/emacs tool:


  I suppose adding a gtk-gui on top of this would be possible.
As well as an explanation (of sorts) for why there doesn't appear to be support for (x)attrs in GNU find:

"find" can filter based on -exec -- but there doesn't appear to be a simple command that will take a filename, an attribute (and optional value) and simply return 0 if found, other if not found (not that it should be hard to write, based off the attr-utilities code, for instance).

Well, it isn't really a matter of the file system, since it is possible to implement "tags" on any FS using links (i.e. by creating directories named after tags and populating these directories with links to the actual files).

Therefore what you need isn't a new file system but a file system browser that allows you to specify a (set of) file(s) using tags.

Considering how easy it would be to do just this using fuse, I'd guess some Linux folks already tried this, does anyone know about projects in this line?
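The directories-of-links scheme described above is only a few commands; a sketch with made-up names (a temp directory stands in for a real home directory):

```shell
root=$(mktemp -d)    # stand-in for a real home directory
mkdir -p "$root/docs" "$root/tags/invoices"
echo "dummy" > "$root/docs/acme-2013.pdf"

# "Tagging" a file = linking it into the tag's directory:
ln -s "$root/docs/acme-2013.pdf" "$root/tags/invoices/"

# Listing a tag = listing its directory:
ls "$root/tags/invoices"
```

A file system browser would then only need to present these tag directories as labels; hardlinks work too, but symlinks keep the canonical copy obvious.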

Not really the approach you describe, but quite some time ago I came across an experimental project that used a C implementation of Apache Lucene (I think it was called Lucy) to implement a new approach to file management based on full-text search: you don't store files in a hierarchy, but use concepts like tags and what a file is about, and so on. I thought it was pretty cool, and the demo I tried worked well for me. Unfortunately I can't find it anymore and can't remember the name. I think this is a very interesting approach.

In BFS (the Be File System) you can add any kind of attribute to a file, as far as I know.

Unfortunately, this isn't very helpful, since you need to run either BeOS or Haiku. If I'm not mistaken, there's a FUSE port of BFS to Linux as well.

You can do this on a mac (Finder labels are stored in HFS+ metadata).

To some degree, yes, but the UI for it is too awkward to be usable. It's not as much of a first-class citizen as creating folders and putting files in them.

Google Docs used to do that (maybe it still does). It was absolute chaos and very confusing.

OSX supports this today with metadata and spotlight driven smart folders.

I'm surprised that more people haven't mentioned ZFS! It's a stable, cross-platform filesystem, but that's just the tip of the iceberg.

There are stable implementations for Linux, Mac OS X, Solaris, and BSD (at least). You can export a filesystem from any one of these, unplug the hard drive, plug it into another computer, import the filesystem, and go. Doesn't even have to have the same CPU endianness.

Besides being designed from the ground up to be operating system and CPU endianness independent, it does a ton of other stuff that no other filesystem does. It's a total rethink of the entire filesystem / volume manager / RAID pool concept. All of these things are merged into ZFS, with a user interface that's both radically simpler and far more powerful than managing all of those things separately.

You can easily add capacity to existing datasets (up to 256 quadrillion zettabytes, it's 128 bit!), do mirroring and striping at the filesystem level, and add hot standby drives. You can combine drives of different sizes into storage pools. You can do rolling drive size upgrades in the future to expand the filesystem size when drives become cheaper, without taking the filesystem offline. It has built-in pervasive checksumming and self-healing data corruption detection, there's a strong focus on data integrity despite flaky hardware, writes are atomic, you can take snapshots and roll back to previous filesystem states, there's an advanced adaptive memory caching system that's far beyond simple least-recently-used caches..

Is that not enough? How about built-in encryption, data compression with your choice of algorithms, deduplication of file data... You can add a small SSD drive to your huge traditional hard drive based dataset to act as an L2 cache for fast, durable writes, fully online maintenance and upgrades, builtin SMB and NFS sharing, you can stream updates...

As someone having a little over 6 TB of ZFS storage in my Linux home server:

How does that fit the 'One file system on Linux, OS X and _Windows_' requirement? ZFS is useless for this scenario. No need to sell it if it cannot even start in this competition.

Edit: On top of that, some of the features you're selling aren't completely available cross-platform (e.g. built-in NFS/SMB support on Linux; encryption is(?) Solaris-only iirc).

Do you have experience using this through Windows? I haven't heard of many successful stories of doing so.

What about network filesystems?

They can be quite fast if you aren't running them over a physical network, such as between host and VM operating system on a single physical machine.

My preferred solution for your use case would be to run Linux in a virtual machine, configure the VM platform separately in each host OS to give the guest direct access to a dedicated raw partition formatted as ext4 or any other Linux-compatible filesystem, then serve Samba or some other network filesystem to the host from the guest. (Of course, if Linux is your host, you can just mount the dedicated partition directly from the command line or fstab.)
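With VirtualBox, for example, the raw-partition part looks roughly like this (disk and partition numbers, the guest IP, and the share name are all illustrative assumptions; getting the device wrong can destroy data, so double-check before running anything):

```shell
# On the host: create a VMDK that passes partition 5 of the host disk
# straight through to the guest (needs read/write access to the device)
VBoxManage internalcommands createrawvmdk \
    -filename ext4part.vmdk -rawdisk /dev/sda -partitions 5

# Inside the Linux guest: format once, mount, and export over Samba
mkfs.ext4 /dev/sdb1
mount /dev/sdb1 /srv/shared

# Back on the host: mount the guest's Samba share
mount -t cifs //192.168.56.101/shared /mnt/shared -o user=me
```

The guest pays the filesystem-driver cost once, and every host OS just speaks SMB.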

Back when Windows was my main OS, I liked Colinux [1], a port of user-mode Linux [2] which allows you to run the Linux kernel as a native Windows binary.

But nowadays my first recommendation would be VirtualBox. You could also try QEMU; it might have better Mac support.

[1] http://www.colinux.org/

[2] http://en.wikipedia.org/wiki/User_mode_linux

A friend of mine uses ZFS on OS X. Apparently it works quite well. I imagine Linux support is good too, but I'm not sure about Windows. At any rate, I'm surprised the author didn't mention it.

I thought ZFS on OS X was read-only.

Linux ZFS is still quite young, only supports a very early version of the fs, and can't be updated to the newest without serious reverse-engineering.

As for ZFS on windows, I definitely would not hold my breath.

I ran into the same frustration as the author and gave up. My notebook triple-boots Linux, Solaris 11, and OS X (I have no use for windows) and would be beyond thrilled to have ZFS shared among them.

One other problem with the idea though... you'd have to export the shared fs each time you shut down and import it into the next OS you booted, which would be kind of a pain (of course you could force the imports, but that's ugly and it only saves one step).

> Linux ZFS is still quite young, only supports a very early version of the fs

Which implementation are you talking about? See https://launchpad.net/~zfs-native/+archive/stable . Zpool version 28 is old now? The only implementation of 31-33 is in Solaris! See http://en.wikipedia.org/wiki/ZFS#Comparisons

http://zfsonlinux.org/ They no longer list the versions on the front page. The last time I saw that page, a few months ago at most, you couldn't even create a filesystem yet (you could create a zpool and zvols). I remember the page listing the zpool and zfs versions implemented at the time, and they were way back.

And yes, I am using Solaris for comparison... I mentioned that I'm triple-booting with Solaris in the post you replied to.

I'm glad to see the development has caught up, but it is still quite young.

Assuming you have a separate zpool for shared storage, you could easily(?) script export/import on shutdown/startup?
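Something along these lines could be hooked into each OS's shutdown and startup scripts. This is only a sketch: the pool name "shared" and the ZPOOL override (which lets you dry-run it with echo on a machine without ZFS) are illustrative assumptions.

```shell
# Export the shared pool at shutdown, import it at the next boot.
zfs_handover() {
  ZPOOL="${ZPOOL:-zpool}"   # set ZPOOL=echo to dry-run without ZFS installed
  POOL="${POOL:-shared}"
  case "$1" in
    stop)  "$ZPOOL" export "$POOL" ;;      # run from the shutdown hook
    start) "$ZPOOL" import -f "$POOL" ;;   # run at boot; -f forces the import
    *)     echo "usage: zfs_handover start|stop" >&2; return 1 ;;
  esac
}
```

Each OS would call `zfs_handover stop` on the way down and `zfs_handover start` on the way up, which removes the manual export/import step the grandparent complains about.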

Filesystems these days are entangled with virtual memory.

That joins them to the kernel at the hip.

The main reason, I suspect, is covered below: the only time this is useful is dual booting; the rest of the time, network-attached storage is the 'right' answer.

Personally, I'm glad to have left the time of dual booting operating systems behind me - it's a complete PITA. A Linux box with a bunch of disks under samba/ext4/lvm/mdadm is much more practical IMHO.

I respectfully disagree with the author's conclusion.

There are at least two filesystems, intended for situations where several computers share block-level access to the same disks over Fibre Channel or iSCSI, that are truly cross-platform, allowing volumes to be used, simultaneously if desired, from computers with different operating systems.

Quantum's StorNext File System is available for Linux, for Microsoft Windows, and for several UNIX systems, including OS X, where it's bundled with the OS and known as Xsan. SGI's CXFS ("clustered XFS") is similarly compatible.

Both of these filesystems are modern, in the author's sense that the filesystem does not impose a practical limit on file or volume sizes.

I triple-boot my desktop and laptop (Linux, W7, XP), and have a pair of large NTFS partitions to share data (including my Steam library using symlink trickery) between all three operating systems. The downside is that the open source version of ntfs-3g is slow (still faster than using a VM), and its allocation strategy seems to cause extensive fragmentation.

To those wondering why VMs are insufficient, if you have work (or gaming) to be done that requires maximum performance in multiple operating systems, a VM is not fast enough. I develop software for my startup in Linux, write client software in XP, and do video work and game in all three.

This is kinda the situation I am in. But I stated that in the first few paragraphs of my post.

As others have mentioned, the cost of standardizing filesystems across OSes does not seem worth it. But is there any way to define a standard interface so that a filesystem driver written for one OS can be easily ported to another? Similar to how Nvidia ships the same binary blob regardless of the OS. We even have ndiswrapper, which can run many Windows wireless drivers on Linux without modifying the binary.

We will obviously run into issues where different OSes have different concepts of what the filesystem needs to do (such as differing security models), but at the very least we should be able to get the common interfaces standardized.

The e2tools command line utilities should work just fine on OSX. I'm surprised no one has bothered to make OSXFUSE work with an ext3/4 driver, but it should be pretty simple. The libext2fs library which ships with e2fsprogs is pretty easy to make work with userspace applications. The e2tools package uses libext2fs, and there have been FUSE drivers which use libext2fs. I can ask around and see if there are some FUSE developers who could update an ext2/3/4 driver to work with OSX and Windows.
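There is in fact a fuse-ext2 project built on libext2fs; assuming it builds against OSXFUSE, usage would look something like this (the device path is illustrative, and write support is generally considered experimental, hence the explicit flag):

```shell
# Mount an ext2/3/4 partition read-only on OS X (the safe default)
fuse-ext2 /dev/disk2s2 /Volumes/linux

# Force read/write -- experimental, so back up first
fuse-ext2 /dev/disk2s2 /Volumes/linux -o rw+
```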

Daring to use and test all these different drivers, on different systems, from different vendors, to read and write a single disk holding valuable data is really brave. Use a NAS.

Indeed, very brave. Filesystems ARE NOT, and never have been, an API to exchange data between multiple OSes - they are a means to persist data efficiently for a single OS. Now, if you don't mind breaking a bunch of the FS rules, sure, compatibility hacks can always be written - but are you really sure each implementation is FULLY honoring the rules? I can guarantee you, for anything but the simplest filesystems, the answer will be "no". Hell, even FAT has implementation stragglers out there in the wild, and it's brain-dead simple.

Use an API for cross-platform sharing layered on top of a filesystem: either a virtual machine sharing protocol (e.g. 9p over virtio) or a real network protocol like AFP/SMB/NFS. Anything else is taking your data into your own hands and is antithetical to the design of modern OSes.

I use a small console-only install of VirtualBox and use Samba to communicate back with Windows. I think I only have it using one CPU and 256 MB of memory. I can easily run AutoCAD or SolidWorks in the foreground.

I find that I only lose about 5% efficiency compared to booting up Linux and copying without overhead.

One thing I love about Unix is the option to mount a filesystem read-only. I would love to have the option for my Windows box, when noticing a new filesystem, to present me a dialog box asking me how I want to mount the filesystem.
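On Linux this is a one-liner, and you can even flip a mounted filesystem between modes without unmounting it (device and mount point are illustrative; both commands need root):

```shell
# Mount the filesystem read-only
mount -o ro /dev/sdb1 /mnt/usb

# Later, allow writes without unmounting first
mount -o remount,rw /mnt/usb
```

On Windows the closest equivalents are per-volume write-protection settings rather than a mount-time choice, which is exactly the gap the parent is wishing away.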

One solution that hasn't come up but might be worth pursuing: ext4, with a umlwin32 ftp server to export it to Windows.

What about the native APIs? NTFS and HFS have their roots in non-POSIX worlds. The basic operations are somewhat similar on all platforms (open file, read directory, ...), but the "modern" stuff is not standard - otherwise it would not be modern.

This is probably the reason only FAT fits the bill: it's a lowest common denominator that has so few features that it can appear "native" on all platforms.

Look how most cross platform GUI toolkits turned out :/

Because files are "cross-platform". :)

Just a side-note, if he's not using Apple hardware to run OSX, there could be some licensing issues here.


There's a pipe dream. It'd be great but I can't possibly imagine it happening. I have to think that any spare FS effort is being spent on ZFS (OS X) or ReFS (Win8/Server2012). I'm still rocking out with a 12TB RAID10 array and another 6TB RAID10 array that I gave to my parents. Haven't had any issues yet.
