Third-party file system support would be one of the last things on a list of all the software I'd ever want a third party writing instead of the OS maker.
It would be a nightmare to evaluate the options, pure stress to test them, and difficult to know whether one had quietly broken something.
> Third-party file system support would be one of the last things on a list of all the software I'd ever want a third party writing instead of the OS maker.
Given how many people use FUSE, Paragon NTFS for Mac, and similar tools, you're hardly representative.
Third party read-write NTFS drivers took FOREVER to become really robust. I remember hearing horror stories not infrequently up until maybe a decade or less ago.
macOS users' awareness of a mostly Linux-centric piece of tech is pretty damn irrelevant here. The point is that FUSE is a mature piece of technology, and we know it can be used productively without the nightmare scenario you described. There is no reason Apple's FSKit can't be equally successful.
How many people use software like this because they have no choice? I used Paragon NTFS, but the entire time I thought it was ridiculous that macOS can't write to NTFS on its own.
Like 99% of the computer-using world until less than a decade ago, when almost all drivers were kernel extensions and things like kexts were very much in use?
That being said: FSKit is a userspace API. In that respect, it's a lot better than filesystem code running in the kernel - it can't crash your computer or corrupt data on other filesystems, and it's much more tightly sandboxed if it gets exploited.
Exactly! Third-party file system support in user space is just what I want to see. Third-party kernel code has always seemed to cause me problems. With FSKit in user space, I'm quite happy to try something, knowing that it won't affect the rest of my system.
What would Apple's incentive be to support Btrfs, Ext4, XFS or ZFS ?
Btrfs, Ext4 and XFS are all under GPLv2 (as part of the Linux kernel), which may or may not be a problem for Apple, but presumably they'd rather not risk it.
They tried with ZFS, but couldn't strike a deal with Sun/Oracle, so instead invented APFS.
Apple already delivers a stable filesystem. It may not be "best of breed", but it works, as billions of devices run on it every day with zero problems.
I'd be happy for VeraCrypt not to have to rely on macFUSE, which requires me to turn off some very low-level protections just to use it. It sounds like this makes that possible.
I don't really understand your objection, to be honest. Drivers for storage are common on other platforms.
This article describes a new disk image format (on which a filesystem can be put, APFS in the article), not a filesystem, or did I misunderstand?
edit: added the word "image", which I apparently forgot to type. Mentioning the edit because otherwise an answer to this comment would be difficult to understand.
IMHO, only Mac users are really familiar with working with disk images. They are not as diverse or well supported on other OSes, while nearly every Mac app (prior to the App Store) was installed by dragging it out of a mounted disk image.
The post only benchmarks against UDIF. Whether this brings something new is a very good question that is not answered by the post.
How complex is the format? Did they do anything clever? Did they engage in NIH? For all I know they switched the AES mode and made the blocks bigger and that's the entirety of the performance boost.
That's of course also a reasonable question to ask! We all hope by asking these questions some Apple employee using a throwaway account will provide an answer on HN. HN is that magical place where such questions have the best chance of getting answered.
I don't think the question would make much sense. Why did Apple do X and not Y (unrelated to X) is not an interesting question. It would seem close to whataboutism. They were willing to spend money on X. They are not willing to spend money on Y. Or, they didn't think about doing Y. Or, they needed X. They haven't needed Y. Apple want fast VMs and also they haven't needed Btrfs. What insight can you get? There's no relation between the two. You'd get the same insight by asking "Why didn't Apple implement Btrfs?", but by linking the two in the same question, you are kinda implying there's a link.
But the question would have been highly relevant had Apple developed a new FS, and disk images and FS do seem related at first. I didn't want to assume whataboutism, so I figured OP was possibly confused because this is likely, and I wanted to give them a hint without bluntly asking whether they confused things. I could have, really there's no harm in being confused, nor in asking whether someone was confused.
ext4 and Btrfs are only well supported on Linux; they are not universal standards.
NTFS was only well supported on Windows until recently, and extensions like EFS (NTFS encryption) and BitLocker are still Windows-only. macOS still does not let you write to an NTFS volume.
APFS and HFS+ are obviously Apple file systems.
FreeBSD does not support ext4 or Btrfs well; but instead prefers UFS2 or ZFS despite also being an open-source Unix-inspired OS.
The world runs on proprietary or non-universal file systems with CDFS (ISO 9660), FAT, and exFAT being the sole exceptions.
Is there a single filesystem in the world (besides "simple" ones like FAT) that both has an open standard AND has a usable codebase under a non-copyleft license (MIT or similar)?
AFAIK that's an incorrect meme that just won't die. The performance issues you're thinking of have nothing to do with the filesystem itself, but with the I/O subsystem in Windows more generally. If you have evidence otherwise please share.
I'm on my phone and this is a longer discussion than I can have here, but the performance problems they're thinking of and the ones people usually rant about on these forums are not the same ones (or same magnitude). Before being so confident it's NTFS itself that's the issue, try ReFS (and FAT32 for that matter) and tell me if you see the performance problems you've encountered actually improve a lot. And then narrow down the cause. You might be surprised how much of it had to do e.g. with filter drivers or other things. And don't forget you're still testing one particular implementation on one OS, which doesn't say anything about a different one for the same filesystem on a different OS.
There are tools like voidtools' Everything and WizTree that read NTFS structures directly from the disk device, bypassing the Windows FS APIs, and are blazingly fast (faster than find/du on ext4 in Linux).
Correct, but it's also a volume management system, so perhaps they only want a file system? No idea why more don't use ZFS, especially after the auto-expansion update earlier in the year.
That's a major benefit of ZFS for sure, but I think being copy on write is another major benefit for single disk systems. ZFS is the only one I know of that has a full feature set and is supported on every major OS.
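For readers unfamiliar with the idea, copy-on-write can be sketched in a few lines. This is a toy illustration of the semantics only, nothing like real ZFS internals (names like `ToyCowFS` are made up for the example): a snapshot just keeps references to the same blocks, and writes allocate fresh blocks instead of mutating shared ones.

```python
# Toy illustration of copy-on-write semantics (NOT how ZFS actually
# works): a "filesystem" maps names to block ids, and a snapshot is a
# cheap copy of that mapping -- the blocks themselves stay shared.

class ToyCowFS:
    def __init__(self):
        self.blocks = {}   # block id -> data
        self.files = {}    # file name -> block id
        self._next = 0

    def write(self, name, data):
        # Never mutate an existing block: always allocate a new one.
        bid = self._next
        self._next += 1
        self.blocks[bid] = data
        self.files[name] = bid

    def snapshot(self):
        # O(number of files), not O(data): only the mapping is copied.
        return dict(self.files)

    def read(self, name, snap=None):
        table = snap if snap is not None else self.files
        return self.blocks[table[name]]


fs = ToyCowFS()
fs.write("a.txt", "v1")
snap = fs.snapshot()
fs.write("a.txt", "v2")           # allocates a new block
print(fs.read("a.txt"))           # v2 (live view)
print(fs.read("a.txt", snap))     # v1 (snapshot still sees the old block)
```

This is why CoW snapshots are nearly free to take and why a crash mid-write can't corrupt already-committed data: the old blocks are never touched in place.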
If GPL is a non-starter for you, you're missing the point of the open standard. Apple already discloses a litany of GPL software it ships. XFS would be no different.
I intentionally excluded unofficial or third party software, as almost anything is supported if you’re brave enough. The quality of said drivers also wildly varies.
Until 4 years ago, nothing was good enough for the upstream Linux kernel.
I mean, yeah, you could say that. Something being in the kernel is a good benchmark of quality. But IMO open source is different. For instance, Terraform had no stable release from 2014 till 2021, and that didn't stop enterprises from using it at scale.
I just ran into a use case yesterday. I wanted to copy some files from either my Mac or my Windows machine onto the MicroSD card for my SteamDeck, which is ext4.
I wanted to just plug in the card and copy files, but couldn’t.
Even FreeBSD can't be bothered to do a good job supporting Linux file systems. They're basically proprietary siloed file systems like the rest, even if the code is available. Linux, meanwhile, can't be bothered to support either of FreeBSD's file systems officially. UFS2 is hardly patent-encumbered anymore, but Linux doesn't care beyond read-only support.
Being open source, even being part of a popular open-source project, does not make something a standard, nor does choosing not to implement it imply inferiority.
There might be some license issues, but the dirty secret is that filesystem portability isn't terribly important, and for the users for whom it is, exFAT and friends are usually good enough.
exFAT is widely used, but its lack of journaling has led to thousands (if not more) of people losing tons of data, many of whom wouldn't have lost so much had they used a journaled filesystem (or even one with redundant file tables).
If you need to connect a portable drive to machines running different OSes, there is no safe filesystem that supports read and write on both Windows and macOS.
Alternatively, cloud storage works until the files are larger than the space you have left on Drive/Dropbox/OneDrive/etc., and local network sharing (on certain networks at least) is more complicated than the average user is willing to put up with. In practice, many use USB flash drives or external HDDs/SSDs with exFAT. Yeah, people should have more than one backup, but we know that in the real world many don't, because that requires spending more time (e.g. configuring local network sharing or setting up an old machine as a NAS) or money (more cloud storage). In practice, a cross-platform, journaled filesystem would lead to a lot less data loss.
Aside from exFAT, the only alternative with native cross-platform R/W capability is FAT32, but while it has a redundant file allocation table (unlike exFAT), it has a max file size of 4GB, which limits its usefulness for many workflows.
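For what it's worth, the 4GB ceiling is easy to work around in a script, if not for ordinary users: split large files into parts below FAT32's limit (the exact maximum is 2^32 - 1 bytes, i.e. 4 GiB minus one byte) before copying them to the volume. A rough sketch, with made-up function names:

```python
import os

FAT32_MAX = 2**32 - 1      # FAT32's max file size: 4 GiB minus one byte
BUF = 1024 * 1024          # copy in 1 MiB buffers to keep memory flat

def split_for_fat32(src, dst_dir, chunk_size=FAT32_MAX):
    """Split `src` into numbered parts no larger than `chunk_size` bytes.

    Reassemble with `cat name.000 name.001 > name` on Unix, or
    `copy /b name.000+name.001 name` on Windows.  Returns part paths.
    """
    parts = []
    with open(src, "rb") as f:
        index = 0
        while True:
            first = f.read(min(BUF, chunk_size))
            if not first:
                break
            part = os.path.join(dst_dir,
                                f"{os.path.basename(src)}.{index:03d}")
            with open(part, "wb") as out:
                out.write(first)
                written = len(first)
                # Keep streaming until this part reaches chunk_size
                # or the source runs out.
                while written < chunk_size:
                    buf = f.read(min(BUF, chunk_size - written))
                    if not buf:
                        break
                    out.write(buf)
                    written += len(buf)
            parts.append(part)
            index += 1
    return parts
```

Of course this is exactly the kind of chore end users shouldn't have to think about, which is the thread's point.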
A. You're flagged for graphic, inappropriate, and triggering language.
B. Oracle sued Google for a decade over whether a small number of Java APIs were copyrightable; do you think Apple is eager to embrace their technology regardless of license? Heck, even Ubuntu wasn't willing to make that bet.
C. Guess how much official licenses for those fancy file systems cost.
D. ZFS, and other fancy file systems, are not known for low RAM and CPU usage. In a server, this does not matter; but it's pretty important for anything on a battery.
Does anybody else think that it would make sense for Apple and Microsoft to just get in a room and horse trade a few things like this, if they cared about user experience? Cross-license both APFS and NTFS, and share any internal documentation under NDA so that external drives can use modern formats with safety features like journaling without locking users in.
I suspect there wouldn't be an agreement on a minimum set of features for a modern filesystem, even just for external disks, even if you limited it to flash storage devices to avoid all the complexities of spinning platter latency.
After all, there's no such agreement on Linux either - we just have all the Linux vendor options available.
Let me clarify, I don't expect two vendors like that to merge filesystem specs. I just think they should have first-class support for reading and writing on each others' default (NTFS and APFS) filesystems because the alternative is that a user who has a hard drive full of their important documents can't switch between Mac and PC without buying 100% more storage with which to do a complicated filesystem-migration exercise -- and let's be honest, only an absolute nerd (like us) would even understand what that is. Others would just plug in the NTFS disk to the Mac, see the popup saying "unreadable," shrug and give up (or worst case, format it not understanding that means erasing).
This is the kind of thing a reasonable government would just tell them they had to do by virtue of being a fully locked-in duopoly, just like they should tell Apple and Google that users should be able to choose to install an alternative app store.
This is a disk image format, so why not VHD? It seems open enough and supports what a virtual disk needs. What do we gain from yet another disk-image file format?
I'm guessing you are coming at it from the perspective of a laptop user and likely a power user. The majority of the population just needs to scroll social media, message some friends, send an email or two, do a little shopping, maybe write a document or two. For this crowd an iPad is plenty. When I was a software developer - yeah, I had a Mac Pro on my desk and a MBP I carried when I traveled. Now as a real estate agent, an iPad is plenty for when I'm on the go.
Maybe this is a consequence of the Frutiger Aero trend, and users miss the time when user interfaces were designed to be cool instead of merely useful.
Current interfaces are not aimed at being optimally useful. Padding everywhere means more time scrolling and wasted screen space. Animations everywhere mean a lot of time wasted watching pixels move instead of the computer/phone giving us control immediately after it did the thing we (maybe) asked for. Hiding scrollbars is a nightmare in general on desktop OSes, yet it's the default (I once lost half an hour setting up a proxy because the "save" button was hidden behind a scrollbar).
Usability feels like it has only gone downhill since Windows 7. (On the other hand, Windows has plenty of accessibility features that help a lot in restoring usability.)
Having only the generated .scad would be great! I mean, I see two use cases that would be helpful for me:
1. Asking for a base model, downloading it as .scad, and then improving it in OpenSCAD according to my needs
2. Starting modeling in OpenSCAD, then asking the AI for some boring task (e.g. generating honeycomb patterns, hooks, hinges, and so on)
> also would love to know what your use case is and why you are more interested in parametric
Most of my use cases for 3D printing are tools, household utilities, spare parts, etc. Because of that, my favorite tool is OpenSCAD, and I use it a lot.
But I reckon it is sometimes really tedious. Sometimes I need to spend a lot of time on trigonometry and other math tricks and less on the modeling itself. For example, with the aforementioned honeycomb patterns, I've spent hours of my life playing with sines, cosines, apothems, etc., while I think it's a job an AI could do for me.
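As a concrete example of the trigonometry involved: laying out a honeycomb mostly comes down to computing hexagon centers from the circumradius and apothem. A rough sketch of that math in Python (the function name is made up; in OpenSCAD the equivalent would be roughly a nested loop of translate() around circle($fn=6)):

```python
import math

def honeycomb_centers(cols, rows, r):
    """Centers of flat-top hexagons with circumradius r, packed with
    no gaps.  The apothem (inradius) is a = r*sqrt(3)/2; columns
    advance by 1.5*r in x, rows by 2*a in y, and every odd column is
    offset down by one apothem so the hexagons interlock.
    """
    a = r * math.sqrt(3) / 2
    return [(col * 1.5 * r,
             row * 2 * a + (a if col % 2 else 0.0))
            for col in range(cols)
            for row in range(rows)]

# Neighboring centers in adjacent columns sit exactly sqrt(3)*r apart,
# i.e. twice the apothem, so hexagon edges touch without overlapping.
pts = honeycomb_centers(3, 3, r=5.0)
```

It's this kind of derivation (the 1.5*r column pitch, the odd-column apothem offset) that eats the hours, and that an AI assistant could plausibly hand you ready-made.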
And that's probably not enough: for example, you'd likely need to reuse whatever Git uses to generate patch formats. It's not necessarily _hard_, but it's not "just" a language translation.
> I can understand English and don't need these automatic translations
I think it is far worse than that:
1. If I don't understand a language, that video is probably not for me. Most videos targeted at an international audience are in English, or at least the author translated them themselves.
2. Titles are small sentences, and they don't have enough context to be translated. Once I saw a video called something like "Vamos assistir uma conexão com o passado", which in Portuguese means "Let's watch a connection to the past". I needed to de-translate it in my brain to understand that the original title was "Let's play A Link to the Past"
3. Online resources are a great way to exercise a second language. So, please, don't underestimate my capabilities. At least let me try to read the original language by myself; if I need a translation, I know how to use Google Translate or a dictionary.
I recognize that this feature makes access to online content more democratic, and that's fine. But at least let me disable it, since it makes the experience worse.
There's a video that YouTube keeps sending me with the translated title "O segredo das lavadouras" (which translates to "the secret of washing machines") that is actually about picking screw washers...
But the real problem is when it decides to translate the titles of perfectly watchable English videos into something in the Cyrillic alphabet, which has no relation to my accepted languages and is only used halfway across the world from where I am.
Every time I see something in C for Windows, I see people using MinGW, GCC and friends, just like they would on a Unix-like system. But I doubt those are the tools Microsoft recommends for developing on Windows.
So, an honest question from a Linux/Mac guy: what is the Windows-y way to do that?
A cursory browse suggests there are no Linux-isms in the code base, so the Windows-y way to build it (without going into licensing) would be to use the Visual Studio Build Tools. They're the CLI toolchain you get with Visual Studio, but free when compiling open source projects (as of recently: https://devblogs.microsoft.com/cppblog/updates-to-visual-stu...)
They still notionally need to run on a Windows machine, although I recall people have managed to run them under Wine before.
EDIT: It took me a few reads to parse what the link is saying, so: using the toolchain to compile open source dependencies is fine, even if your codebase is closed source, so long as the closed source part isn't being built with the Build Tools.
The most Windows-y way to do that is to get Visual Studio (Community Edition is free for non-commercial use). It still has project templates for pure Win32 apps even, although they are C++ rather than C.
I remember Pelles C being the first full C11 implementation, which I felt was impressive for such an unpopular toolchain.
I guess they can't switch to a FOSS licence because of the licence of LCC? How much of the original code even remains after more than 20 years and several C standards later?
I have a printer, but I've lived without one before. I simply went to Kinkos or Staples and printed what I needed.
Can't do that as readily when you need something 3d printed, and it's a very capable tool.