It relies on two distinct bugs: an info-leak that yields a pointer to an allocation in the kalloc.1024 zone, and a memory corruption primitive (derived from a NULL pointer dereference in IOKit) that lets me OR the constant 0x10 into any address in kernel memory.
To break kASLR I corrupt the size of a vm_map_copy struct, which lets me read past the struct into the adjacent allocation, a C++ object. The first 8 bytes of that object are a pointer to its vtable, which resides in the __TEXT segment of some kernel extension. Since the unslid vtable address can be computed from userland without any issue, subtracting the computed address from the leaked one gives you the kASLR slide.
Just to clarify: the code execution stage is 100% reliable. The kASLR leak does involve some chance, but empirical evidence indicates the failure rate is extremely low.
If OS X swapped XNU for a Minix 3-style full microkernel, exploiting IOKit running as a least-privileged process would only compromise IOKit itself, leaving the attacker limited to IOKit's ACLs. Still bad, but it likely wouldn't have the rights to exec a root shell.
XNU kexts have way too much authority, and the syscalls each one tacks on compound the attack surface to the combined codebases of all Apple and third-party kexts. Once you've found and symbolicated the not-really-hidden call table, you're pretty much able to do whatever you want. And with a memory-mutating kext bug ...
Expecting a finished product from a new project would be unreasonable. Minix 3 is still early on, and it's not the only full microkernel out there. They will probably find an approach to reducing context switches if that's a mature optimization to make. An Android-like mobile/embedded platform would be a sensible research-to-real-world use case, minus Java.