If you haven't read the book "The Design and Implementation of the FreeBSD Operating System" (https://www.amazon.com/Design-Implementation-FreeBSD-Operati...) you should. It's possibly the best "applied" OS book. It also contains some interesting code samples, which is surprisingly uncommon in comparable books.

FreeBSD has for years had features that are only now reaching 'mainstream' popularity (e.g. jails, which containers are based on). The book explains all of this in quite a bit of detail, staying high-level enough not to get bogged down while still giving you enough detail for a solid understanding.

It's also one of the few books that talks about networking in very practical terms (not OSI layers and whatnot), which makes it possibly one of the best books on networking too - you learn quite a bit about networks from seeing how OSes work with them.

It also has the hands-down best file system around (the book covers that as well).

And unlike large parts of the Linux kernel, the FreeBSD kernel source is actually pretty legible.




> FreeBSD has for years had some features that are only now reaching 'mainstream' popularity

It's also lacking a bunch of features that reached "mainstream popularity" a long time ago.

Example: ASLR, the most important exploit mitigation technique

  - 2001 - PaX Linux
  - 2003 - OpenBSD
  - 2005 - Upstream Linux
  - 2007 - Windows
  - 2007 - OS X
  - 2011 - iOS
  - 2016 - NetBSD
FreeBSD is still lacking ASLR support despite a number of attempts and an actively maintained fork.

Linux even has Kernel ASLR (even though its effectiveness is debatable and it's much better with the Grsecurity patchset).
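
(A quick way to see ASLR in action, or its absence, is to print a few addresses across runs. Here is a minimal sketch in C; the exact output format is system-dependent, and whether the code address moves depends on PIE compilation:)

    /* aslr_check.c - print stack, heap, and code addresses.
       With ASLR, the addresses change between runs;
       without it, they stay constant. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int stack_var;
        void *heap_var = malloc(16);
        printf("stack: %p\n", (void *)&stack_var);
        printf("heap:  %p\n", heap_var);
        printf("code:  %p\n", (void *)&main); /* only moves if built as PIE */
        free(heap_var);
        return 0;
    }

(Compile with something like cc -fPIE -pie aslr_check.c and run it a few times; a non-PIE binary will show a fixed code address even where the stack and heap are randomized.)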

I'm a FreeBSD user and run it on a number of firewalls, but FreeBSD's approach to security worries me, and I'm considering porting everything to OpenBSD or Linux.

It did not even have signed packages until recently!


As a lifelong Windows user, I've always been surprised by the obsession Unix users seem to have with filesystems.

I mean, as an end user (developer, but I assume that doesn't matter), what I expect from a filesystem is:

    * let me save a file in a directory
    * let me open it again
I also appreciate how many tools can use stats (e.g. filemtime) to good effect, and how fine-grained permissions help keep my computer secure. All filesystems I've ever used let me do these things, and have for a long time.

This probably makes me a Blub programmer in filesystem terms. What am I missing? Why should I be dissatisfied with NTFS? I mean, in all honesty I don't even know which filesystems my various hard drives and SD cards use - I just save and open files.


Windows puts up a lot of abstractions to shield users from the nitty-gritty details (OS X does the same thing). One of those abstractions is hiding as much of the file system as possible.

Unix usually has users engaged at a lower level: adding a startup daemon means putting a file in a particular place. The filesystem is much more of a direct interface to the operating system.

Also, it is possible to run a Unix operating system on top of a completely different filesystem. That option just doesn't exist for Windows users, so there is no choice to make, and it's nothing Windows users think about in general.


>> let me open it again

This is the tricky part you should worry about as a user.

Will it open exactly the same? How likely is data corruption? Will it work if I mount it on another OS or another OS version? Is versioning supported? Can it record all modifications to a file, and can I roll back to a previous version if I want to? Can I take periodic backups without copying the entire drive, or even entire files? Will my choice of filesystem change my disk's lifetime?


NTFS is quite a decent filesystem – even more so when it was introduced – and there is a lot to be said for having one standard.

But would you really want FAT or even exFAT as your main filesystem? It's about 20% slower and doesn't support ACLs or journaling, so you can lose data in a crash.

So filesystems matter, and Linux still doesn't have a standard choice that's as good as what other OSes had 5 years ago.


The extra features and differences in how filesystems work make practical certain workflows which, while theoretically possible on older ones, are too inefficient or need task-specific userspace tooling to achieve.

Sparse files are one example. They weren't in the original NTFS, although I'm not sure when they were introduced. If your workflow involves lots of files that are mostly zeroes (which is surprisingly common), you can save a lot of space and time by simply not writing those zeroes to disk. You could achieve the same result by compressing the files, but then everything you might want to use to manipulate those files has to be aware of that compression, and you've got added CPU overhead you might not want.
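
(If you want to see this concretely, here's a minimal sketch of creating a sparse file on a Unix-y system: seek past the end, write one byte, and the filesystem allocates nothing for the hole. The filename is made up, and this assumes a filesystem with sparse-file support:)

    /* sparse.c - create a ~1 GiB file that occupies almost no disk. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("sparse.dat", O_CREAT | O_WRONLY | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        /* Everything before the final byte is a "hole": logically
           zeroes, but no blocks are allocated for it. */
        if (lseek(fd, 1024L * 1024 * 1024, SEEK_SET) < 0 ||
            write(fd, "x", 1) != 1) { perror("sparse write"); return 1; }
        close(fd);

        struct stat st;
        stat("sparse.dat", &st);
        printf("logical size:   %lld bytes\n", (long long)st.st_size);
        printf("allocated size: %lld bytes\n", (long long)st.st_blocks * 512);
        return 0;
    }

The logical size reports just over 1 GiB while the allocated size is a few KB at most; that gap is exactly what the filesystem is saving you.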

Copy-on-write snapshots are another. If you've taken a point-in-time backup and want to update it, you can take a snapshot and just copy the changed files across. That's not the clever bit; you can obviously do that with things like rsync. The clever bit is that both the snapshot and the new version are fully working file trees, which any of your other tools will work with transparently, while you still get the space saving of the shared, unchanged parts. You could mimic this by fully copying the file tree first (no space saving, and extra time to duplicate the files) or with hardlinks (which rsync can use, but which not all tools cope with well, especially on Windows).
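
(On Linux, CoW-capable filesystems - Btrfs, and XFS with reflink enabled - expose this to programs through the FICLONE ioctl; cp --reflink uses the same mechanism. A rough sketch, with error handling kept minimal:)

    /* reflink.c - clone src into dst, sharing the same on-disk blocks.
       On filesystems without CoW support the ioctl fails with EOPNOTSUPP. */
    #include <fcntl.h>
    #include <linux/fs.h>   /* FICLONE */
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(int argc, char **argv) {
        if (argc != 3) { fprintf(stderr, "usage: %s src dst\n", argv[0]); return 1; }
        int src = open(argv[1], O_RDONLY);
        int dst = open(argv[2], O_CREAT | O_WRONLY | O_TRUNC, 0644);
        if (src < 0 || dst < 0) { perror("open"); return 1; }

        /* The clone is near-instant and consumes no extra space; blocks
           get physically copied only when one side is later modified. */
        if (ioctl(dst, FICLONE, src) < 0) { perror("ioctl(FICLONE)"); return 1; }
        return 0;
    }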

I think one of the reasons Unix users seem to be more interested in filesystems than Windows users is that it's far, far harder to develop a new filesystem (or new filesystem features) on Windows than elsewhere. The Linux kernel interface was specifically designed to make it easy to plug new filesystem implementations in, so people have. On Windows, you're pretty much reliant on MS doing it for you, so you might not even notice when significant new NTFS features go in which would be headline news in other ecosystems. NTFS doesn't have to compete, and its users are in a sense locked in.
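
(To give a feel for how pluggable that interface is, here's the skeleton of a Linux filesystem module - a hedged sketch against the older mount-callback API, with "myfs" a made-up name and all the real work stubbed out:)

    /* myfs.c - illustrative skeleton of a Linux filesystem module. */
    #include <linux/fs.h>
    #include <linux/init.h>
    #include <linux/module.h>

    static int myfs_fill_super(struct super_block *sb, void *data, int silent)
    {
        /* A real filesystem would set the block size, magic number,
           superblock operations and root inode here. */
        return -ENOSYS; /* stub: not actually mountable */
    }

    static struct dentry *myfs_mount(struct file_system_type *fs_type,
                                     int flags, const char *dev_name, void *data)
    {
        return mount_nodev(fs_type, flags, data, myfs_fill_super);
    }

    static struct file_system_type myfs_type = {
        .owner   = THIS_MODULE,
        .name    = "myfs",
        .mount   = myfs_mount,
        .kill_sb = kill_anon_super,
    };

    static int __init myfs_init(void)  { return register_filesystem(&myfs_type); }
    static void __exit myfs_exit(void) { unregister_filesystem(&myfs_type); }

    module_init(myfs_init);
    module_exit(myfs_exit);
    MODULE_LICENSE("GPL");

Once loaded, the new name shows up in /proc/filesystems and mount -t myfs would just work (if fill_super did anything). There's no comparable entry point for third parties on Windows without writing a full kernel-mode file system driver, which is famously hard.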


It's about how the FS does this stuff.

Take for instance the old FAT filesystem from MS-DOS. If you use it on an SD card or SSD (or even a USB stick, which is more or less the same tech), you're likely to wear it out fast unless the device does wear-leveling internally. Journaling filesystems like Ext4 avoid that by not overwriting the same sector again and again, even when the system rewrites the same file again and again.

Another aspect is resilience to system crashes and power outages. A sudden power outage can leave data half-written, including the file allocation table, which may result in severe data corruption. This is especially important for embedded devices, but not only for them. Some filesystems are designed with that possibility in mind.
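
(Applications have to cooperate too. The classic crash-safe way to update a file on a Unix filesystem is write-to-temp, fsync, then an atomic rename - a sketch with made-up filenames and abbreviated error handling:)

    /* atomic_replace.c - after a crash you see either the old or the
       new contents of "config.dat", never a half-written mix. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int atomic_replace(const char *path, const char *tmp,
                       const void *buf, size_t len)
    {
        int fd = open(tmp, O_CREAT | O_WRONLY | O_TRUNC, 0644);
        if (fd < 0) return -1;
        if (write(fd, buf, len) != (ssize_t)len) { close(fd); return -1; }
        if (fsync(fd) < 0) { close(fd); return -1; } /* force data to stable storage */
        close(fd);
        return rename(tmp, path); /* atomic on POSIX filesystems */
    }

    int main(void) {
        const char msg[] = "new contents\n";
        return atomic_replace("config.dat", "config.dat.tmp",
                              msg, sizeof msg - 1) ? 1 : 0;
    }

Strictly speaking you should also fsync the containing directory so the rename itself survives the crash, but the point stands: the filesystem provides the atomic primitive, and the application decides how careful to be.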

Then there are performance considerations. FAT had to group sectors into clusters because of the limited number of FAT entries, so a little 1 KB file could actually use something like 8 KB on disk (Windows still displays both sizes, BTW). There are also caching policies: if you want something crash-proof, you don't want any write cache, but if you want something fast, you want a big cache. Some filesystems pick resilience, others pick performance, and others let you set the parameters yourself.
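
(To make the cluster arithmetic concrete, assuming FAT16 for the sake of the example: the table can address at most 2^16 = 65,536 clusters, so a 2 GB volume is forced into 2 GB / 65,536 ≈ 32 KB clusters, and that little 1 KB file then occupies a full 32 KB on disk. The 8 KB figure above corresponds to a smaller volume, around 512 MB.)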


I think you have confused journaling with log-structured filesystems.


Yes, I had.


Are you talking about high-performance high-reliability server applications, or more about end-user front-end applications?

There's a bit of a difference between running servers with thousands of users, where you want to squeeze every drop of performance out of your hardware without ever losing data, and developing an end-user POS interface or other GUI application. FreeBSD might be picked more often for back-end applications and Windows more for user-facing ones. It's the difference between keeping one user happy and keeping many as happy as possible.

It probably doesn't matter much what kind of car you drive in the city centre – the requirements and limits are very different from those on a race track, where every little detail matters in the performance equation.



