DragonFly BSD 4.8 released (dragonflybsd.org)
167 points by ceratopisan on March 27, 2017 | 34 comments



In case people are wondering what makes DBSD so interesting, DragonflyBSD:

- is focused only on the x64 architecture

- has an extremely small but exceptionally talented team of developers (e.g. Matt Dillon of DICE and Amiga fame)

- has its own unique filesystem called Hammer (and work is being done on Hammer2, which is a complete rewrite)

- has particularly good network performance, even better than FreeBSD, which is known as the golf standard for network performance [1]

[1] https://leaf.dragonflybsd.org/~sephe/perf_cmp.pdf

Edit: formatting

Edit2: it should also be noted that the release notes refer to detailed NVMe disk performance testing the Dragonfly team performed. These results are largely agnostic of which OS you run. Really interesting to see the Samsung NVMe device come out on top and Intel in last place. This is a good read even if you don't run Dragonfly.

http://apollo.backplane.com/DFlyMisc/nvme_randread.txt


> - has its own unique filesystem called Hammer (and work is being done on Hammer2, which is a complete rewrite)

You said "interesting", but superficially a "complete rewrite" that is still a work in progress doesn't sound like a plus when choosing an OS for a production system.

If anyone is interested, this is what the DBSD man page says about hammer:

     HAMMER file systems are designed for large storage systems, up to 1
     Exabyte, and will not operate efficiently on small storage systems.  The
     minimum recommended file system size is 50GB.  HAMMER must reserve 512MB
     to 1GB of its storage for reblocking and UNDO/REDO FIFO.  In addition,
     HAMMER file systems operating normally, with full history retention and
     daily snapshots, do not immediately reclaim space when files are deleted.
     A regular system maintenance job runs once a day by periodic(8) to handle
     reclamation.

     HAMMER works best when the machine's normal workload would not otherwise
     fill the file system up in the course of 60 days of operation.

And what appears to be the original design doc for Hammer by Dillon: https://www.dragonflybsd.org/hammer/hammer.pdf

[& p.s.] Hammer2: http://gitweb.dragonflybsd.org/dragonfly.git/blob/b93cc2e081...


In all fairness to Dragonfly, Apple themselves just today released an entirely new file system that was a complete rewrite as well.

With the advent of SSDs and NVMe, how you achieve maximum performance and ensure long-term "disk" endurance has radically changed in recent years. You no longer write data to a physical platter. That radically changes fundamental assumptions underlying how legacy file systems were designed 30-40 years ago.

So don't view a rewrite as a bad thing. It's Dragonfly being proactive and keeping up with the times.


> Apple themselves just today released an entirely new file system that was a complete rewrite as well.

Without CRCs or checksums on the blocks. Grr...

"Silent data corruption is real" https://news.ycombinator.com/item?id=13851349


> That radically changes fundamental assumptions underlying how legacy file systems were designed 30-40 years ago.

Actually, it doesn't. It makes the ones that you haven't heard of interesting again. Consider the BSD 4.4 LFS, for example. The disc is written to as a circular log, with all writes going to the head of the log, which gradually works its way across the whole disc, and a cleanup mechanism emptying the tail of the log. That is global wear levelling in the file system ... in a design from 1990.
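
To make the circular-log idea concrete, here is a toy sketch of the mechanism (my own illustration, not the 4.4BSD LFS code; the fixed-size in-memory "device" and names like log_append/log_clean are made up for this comment): every write appends at the head, a rewrite just marks the old copy dead, and a cleaner copies whatever is still live out of the tail segment so the head keeps sweeping across the whole device, which is where the global wear levelling falls out.

    /* Toy circular-log sketch: all writes append at the head; a cleaner
     * frees segments at the tail. Hypothetical illustration only. */
    #include <stdio.h>

    #define NSEG  8                       /* segments on the "device" */
    #define SEGSZ 4                       /* blocks per segment */

    struct log {
        char blocks[NSEG * SEGSZ][16];
        int  live[NSEG * SEGSZ];          /* 1 = block still referenced */
        int  head;                        /* next block to write */
        int  tail;                        /* oldest uncleaned segment */
    };

    /* Append at the head; a rewrite just marks the old copy dead, which is
     * what spreads writes evenly across the whole device. */
    static void log_append(struct log *lg, const char *data, int old_block)
    {
        if (old_block >= 0)
            lg->live[old_block] = 0;
        snprintf(lg->blocks[lg->head], sizeof lg->blocks[lg->head], "%s", data);
        lg->live[lg->head] = 1;
        lg->head = (lg->head + 1) % (NSEG * SEGSZ);
    }

    /* Cleaner: copy live blocks out of the tail segment to the head so the
     * segment can be reused -- the "emptying the tail" step. */
    static void log_clean(struct log *lg)
    {
        int base = lg->tail * SEGSZ;
        for (int i = 0; i < SEGSZ; i++)
            if (lg->live[base + i])
                log_append(lg, lg->blocks[base + i], base + i);
        lg->tail = (lg->tail + 1) % NSEG;
    }

    int main(void)
    {
        struct log lg = { .head = 0, .tail = 0 };
        log_append(&lg, "inode 1", -1);
        log_append(&lg, "data A", -1);
        log_append(&lg, "data A v2", 1);  /* rewrite of block 1 */
        log_clean(&lg);
        printf("head=%d tail=%d\n", lg.head, lg.tail);
        return 0;
    }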

This is what you miss when you adopt the mindset that misuses the word "legacy" like that.


Only history proves stability.


Hammer1 is a log-structured snapshottable data filesystem with near-live master:slave replication, similar to e.g. zfs/btrfs, with very low memory requirements.

Hammer2 will, last I checked, enable multi-master clustered volumes plus replicated fanout mirrors and caching clients at the base system level (i.e. not an overlay application like other systems).

While there is a focus on keeping releases and the tip of the source tree as stable as possible, it is heavily under development, so some amount of low-level expertise is probably a good idea, which isn't to say you can't use it day to day.

Systems are used for different things - one being heavily under development doesn't necessarily mean it isn't suitable for some use cases.


I know you probably meant "gold standard" but I was amused thinking about what a golf standard might mean ;)


I just assumed "golf standard" was some QA term I hadn't heard of before.


It could be part of MIL-SPEC evaluation. They'd be concerned with how many times you could hit the evaluated product with a golf club before its effectiveness degraded. Also how many times before it stopped working. These numbers would provide actionable intelligence to acquisitions officers in the military on how to order just enough replacements for given scenarios while minimizing the amount of unnecessary orders.

Long story short, applying the golf standard in your QA process can both increase longevity of the product and reduce replacement costs. Many government organizations and enterprises running mission-critical applications might find DragonflyBSD servers attractive if they passed the golf standard. They could combine it with their Five 9's middleware.


Why do you assume he meant "gold standard"? Have you ever seen how fast a golf ball flies just after a strong hit?


Never from the point of view I'd like to see it from.


HAMMER2 has been worked on for at least 6 years; AFAICT by one guy. Every year it is supposed to be happening Real Soon Now, but never appears to make any evident progress.

Matt Dillon does very impressive work, but the critical mass just isn't there.


It's possibly too big a task for one person, I agree. If he manages to get something usable in even 10-15 years, though, I'd be impressed. HAMMER2 is a clustering file system with POSIX semantics, and by the standards of that category, I don't think it's going all that slowly, to be honest. Ceph has had probably hundreds of person-years of development put into it, and only very recently have people started claiming CephFS is production-ready (and still many don't trust it).


In $dayjob, I'd still go for gluster over ceph, with zfs on my storage nodes -- this obviously depends on what you're doing, but in my case this would be shared storage for everything from images and logs to dataflow between some chunky legacy apps.

Why this stack? For no reason really other than that it works perfectly, has caused me almost no pain in the 6+ years I've used it for this kind of DFS work (across several clients), and there are no features I need in ceph that would make me take the less mature option.

I've also been using LeoFS a little lately, again on top of ZFS, and it's working reasonably well (S3-compatible stuff).

Currently why anyone would use HAMMER2 or BTRFS for anything important escapes me.


You might enjoy the discussions at https://news.ycombinator.com/item?id=13929692 .


> Really interesting to see the Samsung NVMe device come out on top and Intel in last place. This is a good read even if you don't run Dragonfly.

The Intel device tested was their absolute lowest end consumer part. The 750 and above drives are completely different beasts, especially at the server level.


The 750 series also costs a lot more, while the 600p is comparable in price to Samsung's 960 EVO. The numbers are still interesting :-).


I appreciate that there is so much operating system innovation coming from the *BSDs, like the Hammer file systems in DragonFly and security concepts like the new pledge() system call from OpenBSD, which informs the operating system that an executing program pledges never to use certain system calls (i.e. if you say you'll never do something, the operating system has the privilege of killing your application should there be some kind of buffer overflow/compromise).
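
For anyone who hasn't seen it, here's roughly what that looks like in a program (a minimal sketch; the "stdio rpath" promise strings and the file read are just an example of what a small read-only tool might request, not taken from any particular OpenBSD utility):

    /* Minimal pledge(2) sketch: after the call, the process promises to use
     * only stdio plus read-only filesystem syscalls; anything outside those
     * promise sets gets the process killed by the kernel. */
    #include <err.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        if (pledge("stdio rpath", NULL) == -1)
            err(1, "pledge");

        FILE *f = fopen("/etc/myname", "r");   /* allowed by "rpath" */
        if (f != NULL) {
            char buf[128];
            if (fgets(buf, sizeof buf, f) != NULL)
                fputs(buf, stdout);
            fclose(f);
        }

        /* A socket(2), fork(2) or exec call here would violate the pledge
         * and the kernel would terminate the process. */
        return 0;
    }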


Re: pledge() in OpenBSD:

FreeBSD has had Capsicum for quite some time, and CloudABI has been included since FreeBSD 11. Both are definitely worth a look, especially the second, which is also present in DragonFly. (A minimal Capsicum sketch follows the links below.)

https://wiki.freebsd.org/Capsicum

https://nuxi.nl/cloudabi/freebsd/
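
Since Capsicum looks quite different from pledge() in practice, here's a minimal sketch of the usual pattern (my own toy example, not from the FreeBSD docs; the file path is arbitrary): open and rights-limit what you need up front, then enter capability mode, after which global namespaces are off-limits.

    /* Minimal Capsicum sketch: acquire descriptors first, restrict them to
     * explicit rights, then cap_enter(); from then on the process cannot
     * reach global namespaces (paths, PIDs, ...) at all. */
    #include <sys/capsicum.h>
    #include <err.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/etc/hostid", O_RDONLY);   /* arbitrary example file */
        if (fd == -1)
            err(1, "open");

        cap_rights_t rights;
        cap_rights_init(&rights, CAP_READ, CAP_SEEK);
        if (cap_rights_limit(fd, &rights) == -1)
            err(1, "cap_rights_limit");

        if (cap_enter() == -1)                    /* enter capability mode */
            err(1, "cap_enter");

        /* From here, open("/etc/passwd", O_RDONLY) would fail with ECAPMODE;
         * only the pre-opened, rights-limited descriptor is usable. */
        char buf[64];
        ssize_t n = read(fd, buf, sizeof buf);
        if (n > 0)
            write(STDOUT_FILENO, buf, (size_t)n);
        close(fd);
        return 0;
    }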


Anyone using DragonFly BSD in production? Or as a desktop system? What is your experience? I love the BSDs, but never used DragonFly.


I have used it on 1 server as a test. Worked pretty well, did a few NFS exports from it to other servers. Has been up 546 days, running v4.3 of the OS. No complaints, but I haven't tested it under high load.


I wanna know that too. Never heard of anyone using it in production.


The Hammer FS might be its main improvement. Right now, I think it's mostly an exercise in keeping options open.


I run a little VPS on DragonFlyBSD that I use for misc hobby projects and personal file hosting. Mostly it runs Postgres. It's quite fast. I had redis running on there as a message queue for a distributed web crawler for a while. I benchmarked it against Arch Linux (which is what I was using on that server before DragonFly) and it was somewhat faster.

HAMMER is a really nice filesystem. It's not one of the fancy COW filesystems (HAMMER2 is, though), but it still has built-in versioning and snapshots and a host of other features. It's great for storing personal projects on.

It's also rock solid and easy to tinker with. I've recompiled my kernel several times with various configs and patches. It is probably the easiest production-ready OS to hack around with.

Also #dragonflybsd on efnet is super helpful.


What benchmarks showed an improvement between Arch and DragonFlyBSD?


redis's redis-benchmark utility. Sorry if that wasn't clear.


If you have a copy of the results, that would be awesome! Thanks!


Been looking for Skylake GPU support for BSD. Does it support Skylake OpenCL?

How compatible is this with FreeBSD? Can I test it alongside a FreeBSD distribution with minimal changes? Does it use the same Ports/Packages system? Do I need to recompile/reinstall all applications? Is there ZFS support?


Yes, Skylake is supported.

See https://www.dragonflybsd.org/docs/supportedhardware/

Re: FreeBSD. Dragonfly forked from FreeBSD around 11 years ago (v4.x), so the two have diverged quite a bit over the years: different file systems, different kernel approaches to SMP, etc.

The package management system is the same though.


to expand -

Though the package systems are based on the same code, syscall/binary-level compatibility has diverged, so it does entail a separate installation on another partition/disk/etc. and a separate set of application packages.


Checking the release notes, it does include the recent fix mentioned at https://news.ycombinator.com/item?id=13882171 .


Awesome news, and props to the DBSD team for all the consistent hard work they put into it, usually without much fanfare. I'm a pretty hardcore GNU/GPL guy, but I have said before that if I were starting an ISP, I would probably be doing it with DBSD. The networking stack alone is top-notch, and once HAMMER2 rolls out I honestly expect it to gain the momentum to compete with ZFS and BTRFS (not much traction now though, so as others have said, probably years down the road).


Strange that there is another dragonfly, a framework for newlisp.



