DragonflyBSD 5.0 (dragonflybsd.org)
184 points by joeschmoe3 on Oct 16, 2017 | 33 comments



This is a big release for a few reasons:

* major new filesystem (Hammer2)[1]

* OpenBSD might even adopt Hammer2 as a replacement for its legacy filesystem [2]

* huge work on network performance. DragonflyBSD is arguably the fastest BSD for network-intensive tasks [3]

* IPFW has been rewritten to be multi-threaded which has resulted in huge performance improvements [4]

[1] https://gitweb.dragonflybsd.org/dragonfly.git/blob_plain/HEA...

[2] https://marc.info/?l=openbsd-tech&m=142755452428573

[3] https://leaf.dragonflybsd.org/~sephe/perf_cmp.pdf

[4] http://lists.dragonflybsd.org/pipermail/commits/2017-Septemb...

UPDATE:

To give some context on what DragonFly BSD is: it was forked from FreeBSD 4.8 in June 2003 by Matthew Dillon, over a difference of opinion on how to handle SMP support in FreeBSD. DragonFly is generally considered to have a much simpler (and cleaner) SMP implementation, which has allowed the core team to maintain SMP support more easily, yet without sacrificing performance (numerous benchmarks show DragonFly outperforming FreeBSD [5]).

The core team of DragonFly developers is small but extremely talented (e.g., they have repeatedly found hardware bugs in Intel/AMD CPUs that no one else in the Linux/BSD community had found [6]). The project's stated goals are correctness of code, ease of maintainability (e.g., supporting only a single architecture, x86-64), and performance.

If you haven't already looked at DragonFly, I highly recommend you do so.

[5] https://www.dragonflybsd.org/performance/

[6] http://www.zdnet.com/article/amd-owns-up-to-cpu-bug/


FWIW, [2] is a 2.5-year-old mailing list post. Nothing came of it, IIRC.


The HAMMER2 docs talk about replication in a cluster, but not about RAID5/RAID6/ZRAID-style redundancy (e.g., n+r disks working together so that up to r disks can fail without losing data or the ability to write).

Does anyone here know how/if this cheaper-but-slower kind of redundancy is addressed in DflyBSD?


Very good overview; perfect for this crowd – or at least me. Thank you for giving us non-Dragonfly users something to hang our hats on.


https://leaf.dragonflybsd.org/~sephe/perf_cmp.pdf

Gotta love an undated presentation that compares DragonFlyBSD version 719bf70a37139bc3bedc84ab0975df7107155714 with FreeBSD version r314268. It can't be super old because Linux 4.9 was released in Dec 2016, but still.



Hammer has always looked cool on paper but it's a hard sell to switch away from the battle-tested options like ZFS for a v1.0 codebase. Filesystems are not really an area of the system that is amenable to flavor-of-the-month, and the dustbins of history are littered with one-man-show filesystems that didn't make the cut.

btrfs is fighting much the same battle for adoption, and frankly has a much better chance of success since it's not tied to a niche OS like DragonflyBSD. ZFS-on-Linux finally becoming available and stable was a massive milestone for adoption, since it broke away from the tight ties to the Solaris/BSD ecosystems.

So yeah, regardless of how amazing Hammer is, I do have to ask whether there's really space for another filesystem out there. Kudos to Dillon for going ahead and doing it anyway though ;)


One thing Hammer2 has going for it is its license. It's the first next-gen filesystem release that isn't marred in some way by licensing (marred from the perspective of adoption, that is).

Because it uses the BSD license, it can be adapted to more mainstream OSes without problems. If OpenBSD adopted it, for example, that would really help it cover a few more use cases. I'd also expect some efforts from NetBSD, FreeBSD, and Linux. Even Apple and Microsoft could adopt Hammer2 and just use the original source code, although they have no particular reason to do so.


   Filesystems are not really an area of the system that is amenable to flavor-of-the-month and running v1.0 of a new codebase doesn't sound appealing.
Just as a counterpoint to the risk of running a v1.0 filesystem: Apple just rewrote its entire filesystem from scratch (APFS) and force-updated its entire user base of hundreds of millions of iOS and macOS devices to a v1.0 of APFS.

They don't seem to have had any problems running a v1.0 filesystem, and I have to imagine far more people are now running APFS than have ever run ZFS.

https://en.wikipedia.org/wiki/Apple_File_System


Apple must have invested incredible resources in testing that internally, because it was a very risky move, and well outside the norm for filesystem development.


Apple did some pretty brilliant stuff with testing. E.g., in several releases prior to the switch, they had the update silently test-install APFS on random user devices, verify everything was fine, and then revert (presumably they picked users with sufficient free space). The actual rollout was stunningly issue-free.

As an aside, this is one of the several unbelievably slick updates Apple has pulled off in its history, including:

- the switch to 32-bit clean addresses only (which was probably the most painful switch Apple ever did!) in System 7.6

- the switch to PowerPC (which caused fewer problems than the 32-bit clean switch!)

- the switch to Mac OS X

- the switch to Intel

- the switch to 64-bit on desktop

- the switch to 64-bit on iOS

Microsoft, (in most cases rightly) vaunted for backwards compatibility, has had horrific snafus, such as DOS 5 being the first version of DOS whose RESTORE program could read BACKUP files from the previous version of DOS.

(It's also worth noting that NeXT itself managed to support four runtime architectures with "quad fat binaries".)


> they had it silently test install itself on random user devices

I suppose that meant doing a small install on some virtual disk? I can't imagine Apple put anyone's data at risk.

But then, that would not have been the same as actually testing the real hardware interface, would it?


APFS landed in iOS 10.3. One cool thing Apple did was a planet-wide dry run of the HFS+ to APFS migration in a previous iOS update: they performed the whole conversion except the very final step, and presumably collected metrics about failures.


Judging by how nightmarishly buggy the upgrade to iOS11 has been, I wouldn't be surprised if they didn't test it at all. My phone actually stopped functioning as a phone upon upgrade [0]!

[0] https://discussions.apple.com/thread/8073670?start=0&tstart=...


While I'm sorry you had bad experiences upgrading to iOS 11, I'm fairly confident it hasn't been "nightmarishly buggy" for everyone. (Neither of my two iOS devices, nor my roommate's two devices, nor my BFF's two devices, had any issues during upgrading, although I'm consistently finding an irritating-but-not-showstopping bug in bluetooth keyboard support on the iPad.)

More relevant to the parent, though: the upgrade to APFS happened in iOS 10.3, not iOS 11. So whatever bugs you may have hit with iOS 11 probably have nothing to do with the filesystem.


If the upgrade to iOS 11 was "nightmarishly buggy," we wouldn't only be hearing about it buried in a thread about DragonflyBSD.



I'm not saying it's the be-all end-all of validation, but I don't see HN in there. As an Android user, it was not on my radar at all. This is the first I've heard. Sux2Bme, sure, but I think I'd have seen it somewhere.


> "littered with one-man-show filesystems"

ReiserFS springs to mind, but what other one-man-show filesystems have there been?


Tux2/Tux3 and bcachefs come to mind.


> Hammer has always looked cool on paper but it's a hard sell to switch away from the battle-tested options like ZFS for a v1.0 codebase

It is strange: Linux users would do anything for ZFS, and the BSDs are moving away from it.


Linux has ZFS, and DragonflyBSD never had ZFS. Linux users want ZFS in kernel, and everyone wants cool new filesystems that are already stable.


When installing with Hammer2 (EFI, encrypted root and swap), after rebooting and decrypting the root I get an error saying `mount_hammer2` isn't found. Has anyone else had this issue and figured out if there's a fix?


What Elhana said: mount_hammer2 isn't included in /usr/share/initrd, so it's not copied into the image used there.

You can build a new initrd: cd into src/share/initrd/sbin, add mount_hammer2 to the Makefile, and type 'wmake install', I think, and you may be good. I'm guessing, because I don't have an encrypted disk to test with.
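Roughly, something like this (untested; the /usr/src location and the exact Makefile edit are assumptions on my part):

    # Assumes the DragonFly source tree is checked out at /usr/src.
    cd /usr/src/share/initrd/sbin

    # Edit the Makefile here by hand to add mount_hammer2 alongside the
    # other programs that get built into the initrd image.

    # Rebuild and reinstall the initrd image.
    wmake install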

Otherwise, wait and there will be an updated 5.01 image soon, I'm sure - the recent KRACK vulnerability will prompt that, if nothing else.


Thanks for the info! I'm fine with waiting for a 5.01; I just wasn't sure if I had installed it wrong or something


mount_hammer2 is probably not in initrd


Sorry, I'm still a relative beginner with regards to BSD; is this something I can fix during/post installation?


I'd suggest trying the more popular flavors of BSD first. Especially if you're installing on physical hardware, DragonFlyBSD may be a hard place to start.

FreeBSD has decent laptop support, and in a couple of weeks you'll learn enough about initrd and the like to be able to work with DragonFlyBSD.

If you want to learn more about system internals, you might want to learn how to use bhyve (FreeBSD's awesome hypervisor) on FreeBSD. By trying to use bhyve, you'll learn how to extract a kernel from an image build, load it, and so on.
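As a rough starting point, here's the usual bhyveload/bhyve dance for booting a FreeBSD guest, written from memory, so treat the exact flags as assumptions and check the handbook; 'guest.img' and 'fbsdvm' are placeholder names:

    # Load the vmm kernel module (bhyve's hypervisor backend).
    kldload vmm

    # Load the guest's FreeBSD kernel from the disk image into the VM
    # named 'fbsdvm' (bhyveload only understands FreeBSD guests).
    bhyveload -m 1G -d guest.img fbsdvm

    # Boot the VM: 1 vCPU, 1 GB RAM, a virtio block device backed by
    # the image, and the serial console wired to stdio.
    bhyve -c 1 -m 1G -A -H -P \
        -s 0:0,hostbridge -s 1:0,lpc \
        -s 2:0,virtio-blk,guest.img \
        -l com1,stdio fbsdvm

    # Clean up the VM instance after it exits.
    bhyvectl --destroy --vm=fbsdvm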

Have fun and good luck on your studies!


How does DragonFlyBSD compare to OpenBSD in terms of security?


About the same as FreeBSD. If you google, you'll find a few OpenBSD vs. FreeBSD comparisons.


Is anyone using DragonflyBSD in production due to its advantages?

Are these advantages so significant that it is worth it?


The companies listed at https://www.dragonflybsd.org/commercial/ provide commercial DragonflyBSD packaging or support.

Of course it's worth it. It's significantly faster and better.




