The entire premise behind DragonFly was wrong. FreeBSD correctly predicted future hardware trends with 5.x, and it is bearing great fruit with FreeBSD 8 and 9.
FreeBSD 9 will parallel Linux in terms of scalability: NUMA, multi-core granularity, advanced scheduling.
Meanwhile, FreeBSD also has stable and current ZFS support.
HAMMER seemed interesting in theory but grandiose for such a small developer community and userbase. Now Dillon admits that the design was flawed and proposes an even more grandiose filesystem with even more shit that really needs to be in userland for any hope of sanity.
All I see is an unnecessary fork, a lack of resources, and struggle for relevance. Thanks but no thanks.
I'm not asking you to disagree; I'm just curious about the nature of their choices.
If you're interested in reading a paper about the execution model, Jeffrey Hsu's 2004 "The DragonFlyBSD Operating System" (http://www.dragonflybsd.org/presentations/dragonflybsd.asiab...) describes the LWKT and port abstractions.
The DragonFly network stack is a pretty interesting subsystem through which to understand the SMP model; it uses a form of connection-oriented parallelism rather than fine-grained locks. The same approach was taken by Solaris in their FireEngine system (they call it 'vertical partitioning'). A useful reference for understanding how the netstack works is "An Evaluation of Network Stack Parallelization Strategies in Modern Operating Systems", along with the second part of the earlier paper. FreeBSD chose to build a fine-grain-locked netstack in the style that paper calls 'Message Parallelism'.
The DragonFly kernel memory allocator is another interesting subsystem through which to look at the SMP design. The kernel allocator (kmalloc) is a slab allocator, like most other kernel allocators. The DragonFly slab allocator differs from Bonwick's classic by using fixed-size, fixed-alignment slabs, so the traditional reverse-mapping hash table is not required; the slab headers are always at the slab ... head. The SMP strategy, however, was not to put a per-CPU cache in front of each zone. Instead, the slab allocator itself is duplicated across CPUs (each slab for a given size is CPU-private), and remote frees are handled via passive IPIs (as described in the first paper). The only lock in the allocator is at the bottom, to allocate kernel address space and frames for slabs. Other systems (Solaris, say) build per-CPU caches of objects in front of the allocator and lock the slab layer.
The pattern you might see is that DFly chose to replicate resources across each CPU in an SMP system where reasonable...
Throughout the rest of the kernel, DFly uses a curious lock called a 'token'. A token is automatically released when the thread holding it blocks, and reacquired when it becomes runnable again; think of a token as having the semantics of the older *BSD MPLOCK, except that there can be many tokens where there was one MPLOCK. Tokens can't deadlock (sleep-and-hold is not possible), though they introduce a new class of error: a lock "holder" may have slept, so assumptions made before blocking no longer hold. They probably won't work as well as conventional mutexes once token chains grow past a few elements. Most importantly, they allowed the MPLOCK to be broken up very quickly, mostly over the 2.8 and 2.10 release cycles.
I don't know that there is much else out there to read about tokens; there might be something in the XNU kernel notes about 'funnels', which have similar semantics.
I know FreeBSD is better than DragonFly in many, if not most, areas, but certainly not when it comes to the file system.
Don't take another project's success as some sort of personal insult. You aren't helping DragonFly, FreeBSD, or even the idea of BSD operating systems in general.