"Unlike X11, where the graphics primitives were rather low-level and all input event handling involved round-trips to the client, NeWS was advanced enough that simple widgets, such as scroll bars and sliders, could be implemented entirely server-side, only sending high-level state changes to the client, more along the lines of “slider value is now set to 15” than “mouse button 2 released”. Similarly, the client could ask the server to render a widget in a given state, rather than repeatedly transmitting sequences of graphics primitives."
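To make that contrast concrete, here is a minimal sketch (in Python, standing in for both protocols; none of these names are real X11 or NeWS APIs) of the difference between forwarding every raw input event to the client and keeping the widget logic server-side, reporting only state changes:

```python
# Illustrative sketch, not real X11/NeWS code.

def x11_style(events, send_to_client):
    # X11-style: every raw input event crosses over to the client,
    # which must implement the slider logic itself.
    for ev in events:
        send_to_client(ev)  # e.g. ("button-press", 2), ("motion", 15), ...

class NewsStyleSlider:
    # NeWS-style: the slider lives in the server; the client only
    # hears about meaningful, high-level state changes.
    def __init__(self, send_to_client, minimum=0, maximum=100):
        self.value = minimum
        self.minimum, self.maximum = minimum, maximum
        self.send_to_client = send_to_client

    def handle(self, ev):
        kind, *args = ev
        if kind == "motion":            # dragging moves the thumb locally
            self.value = max(self.minimum, min(self.maximum, args[0]))
        elif kind == "button-release":  # only now does the client hear anything
            self.send_to_client(("slider-set", self.value))
```

Driving both with the same event stream of a drag ending at position 15, the X11-style path delivers three raw events while the NeWS-style slider delivers a single `("slider-set", 15)` message.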
The kernel-side component does even less than the X server, as all drawing is done either by writing directly to the framebuffer from userspace or through a DRI-like direct rendering mechanism. The kernel-side "display server" then only handles message queues and input, and tracks which screen region belongs to which window (and, in theory, sets up the shared memory accordingly).
For 16-bit Windows, one might say that the userspace libraries are in some sense part of the kernel (as there is no user/kernel split in Win16, and one of the core modules is even called KERNEL), but in the Win32 case these libraries are userspace.
On the other hand, the NeWS principle is that the server implements a Turing-complete virtual machine (based on PostScript) that executes arbitrary code uploaded from clients, which can then define the behavior of widgets, specify new ones, or even create windows whose complete behavior is server-side.
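A toy analogue of that principle (Python here standing in for NeWS's PostScript VM; the `register`/`notify` interface is invented for illustration) might look like a server that executes client-uploaded programs, letting them install server-side event handlers:

```python
# Illustrative analogue only: Python exec() standing in for the
# PostScript interpreter a NeWS server would actually run.

class DisplayServer:
    def __init__(self):
        self.handlers = {}   # event name -> server-side handler
        self.outbox = []     # high-level messages destined for the client

    def upload(self, program_text):
        # The client ships a program; the server executes it and lets it
        # register behavior that will later run without client round-trips.
        env = {"register": self.handlers.__setitem__,
               "notify": self.outbox.append}
        exec(program_text, env)

    def dispatch(self, event, *args):
        if event in self.handlers:
            self.handlers[event](*args)

server = DisplayServer()
server.upload("""
# "Uploaded" widget code, now running inside the server:
state = {"value": 0}
def on_drag(x):
    state["value"] = x          # handled entirely server-side
def on_release():
    notify(("value-changed", state["value"]))
register("drag", on_drag)
register("release", on_release)
""")
```

After the upload, `server.dispatch("drag", x)` is absorbed server-side, and only `server.dispatch("release")` produces a message for the client, mirroring the "slider value is now 15" granularity described above.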
Since NT 4.0, though, GDI has lived in kernel space (win32k.sys), so it was a bit higher-level than that, no? Old-style GDI apps would send state-of-the-1980s graphics primitives to the server, while newer apps would just write directly to the framebuffer, right?