Hacker News

https://medium.com/@slavapestov/yesterdays-news-c52f2be95205...

"Unlike X11, where the graphics primitives were rather low-level and all input event handling involved round-trips to the client, NeWS was advanced enough that simple widgets, such as scroll bars and sliders, could be implemented entirely server-side, only sending high-level state changes to the client, more along the lines of “slider value is now set to 15” than “mouse button 2 released”. Similarly, the client could ask the server to render a widget in a given state, rather than repeatedly transmitting sequences of graphics primitives."
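The difference in event granularity the quote describes can be modeled as two message streams. This is a hypothetical sketch, not real NeWS or X11 code: the event names, the `NewsStyleSlider` class, and its geometry are all invented for illustration — the point is only that the server-side widget absorbs raw input and forwards far fewer, higher-level messages.

```python
def x11_style(raw_events):
    """X11-style: every raw input event is forwarded to the client."""
    return list(raw_events)

class NewsStyleSlider:
    """NeWS-style: the widget lives server-side; the client only hears
    about high-level state changes."""
    def __init__(self, length=100, lo=0, hi=20):
        self.length, self.lo, self.hi = length, lo, hi
        self.value = lo
        self.outbox = []          # messages actually sent to the client

    def handle(self, event):
        kind, x = event
        if kind in ("button-press", "motion"):
            new = self.lo + x * (self.hi - self.lo) // self.length
            if new != self.value:
                self.value = new
                self.outbox.append(("slider-value", new))
        # button-release etc. are absorbed entirely server-side

raw = [("button-press", 10), ("motion", 40), ("motion", 42),
       ("motion", 75), ("button-release", 75)]

slider = NewsStyleSlider()
for ev in raw:
    slider.handle(ev)

print(len(x11_style(raw)))   # every raw event crosses the wire
print(slider.outbox)         # only the semantic value changes do
```

Under the X11-style model all five raw events would be round-tripped; the sketch's slider sends three semantic updates, the last one literally "slider value is now set to 15".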




...as it was always done in Windows with the native controls.


Windows native controls could be remoted? The point of the comment above is that NeWS communicated events at a higher semantic level than X between the display server and the application. This communication was network transparent.


Not remoted, but the communication between the kernel and the application happens at the semantic event level for native controls, as in the scrollbar example mentioned above.


That's not how the Windows GUI works; to some extent it's the other way around, with all controls being implemented in the client's user-space code. The difference is that for Windows there is one standard client toolkit that is part of the OS API. BTW, even window decorations are drawn by the client.

The kernel-side component does even somewhat less than an X server, as all drawing is done either by writing directly to the framebuffer from userspace or through a DRI-like direct rendering mechanism. The kernel-side "display server" then only handles message queues and input and tracks which screen region belongs to which window (and sets up the shared memory accordingly, in theory).
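The minimal job ascribed to the kernel-side component above — per-window message queues plus tracking which screen region belongs to which window so input can be routed — can be sketched abstractly. This is a hypothetical model with invented names (`DisplayServer`, `post_input`), not the real win32k interfaces:

```python
from collections import deque

class Window:
    def __init__(self, name, rect):          # rect = (x, y, w, h)
        self.name, self.rect = name, rect
        self.queue = deque()                 # per-window message queue

class DisplayServer:
    def __init__(self):
        self.windows = []                    # topmost window is last

    def hit_test(self, x, y):
        for w in reversed(self.windows):     # topmost window wins
            wx, wy, ww, wh = w.rect
            if wx <= x < wx + ww and wy <= y < wy + wh:
                return w
        return None

    def post_input(self, x, y, msg):
        w = self.hit_test(x, y)
        if w is not None:
            w.queue.append((msg, x, y))      # drawing is NOT done here

server = DisplayServer()
a = Window("a", (0, 0, 100, 100))
b = Window("b", (50, 50, 100, 100))
server.windows += [a, b]

server.post_input(60, 60, "click")   # overlaps both; b is topmost
server.post_input(10, 10, "click")   # only inside a
```

Note what the sketch deliberately omits: no drawing happens here at all, matching the claim that rendering is the client's job.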

For 16-bit Windows, one might say that the userspace libraries are somehow part of the kernel (as there is no user/kernel split in Win16, and one of the libraries is even called KERNEL), but in the Win32 case these libraries are userspace.

On the other hand, the NeWS principle is that the server implements a Turing-complete virtual machine (based on PostScript) that executes arbitrary code uploaded from clients, which can then define the behavior of widgets, specify new ones, or even create windows whose complete behavior lives server-side.
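The "server executes arbitrary uploaded code" idea can be illustrated with a toy interpreter. This is a hypothetical sketch, not PostScript: a tiny RPN evaluator standing in for the server-side virtual machine, where the client ships behavior rather than pixels.

```python
def run(program, stack=None):
    """Evaluate a tiny RPN program: numbers push, words operate."""
    stack = stack if stack is not None else []
    ops = {
        "add": lambda s: s.append(s.pop() + s.pop()),
        "mul": lambda s: s.append(s.pop() * s.pop()),
        "dup": lambda s: s.append(s[-1]),
    }
    for token in program.split():
        if token in ops:
            ops[token](stack)          # an operator manipulates the stack
        else:
            stack.append(int(token))   # anything else is pushed as data
    return stack

# The "client" uploads a program; the "server" evaluates it locally
# and only the high-level result would need to cross the network.
uploaded = "3 4 add dup mul"        # computes (3 + 4)^2
print(run(uploaded))
```

A real NeWS server ran full PostScript with imaging operators, but the round-trip economics are the same: one upload, arbitrarily much server-side computation.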


> all drawing is done either by direct writing to frame buffer from userspace

Before Windows 10, though, GDI was in kernel space, so it was a bit higher level than that, no? Old-style GDI apps would send state-of-the-1980s-style graphics primitives to the server, while newer apps would just write directly to the framebuffer, right?


If you're not remoting, it's not difficult to work at a high level. The API presented to the application is easy to control. The interesting thing about NeWS was that a high level event could be defined by the application. The protocol needn't be designed to accommodate it a priori. The example mentioned sliders, not scrollbars. The display server wouldn't even know what a slider was. It would have been defined by the application.
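The distinction drawn above — that the application, not the protocol, defines the high-level event — can be sketched as follows. Everything here is invented for illustration (`Server`, `upload`, the event names): the "protocol" only knows how to deliver opaque (name, payload) pairs, and what a "slider-moved" event means comes entirely from uploaded code.

```python
class Server:
    def __init__(self):
        self.handlers = {}       # event name -> uploaded handler
        self.to_client = []      # high-level messages sent onward

    def upload(self, name, handler):
        """The client teaches the server a new event type."""
        self.handlers[name] = handler

    def raw_input(self, name, payload):
        h = self.handlers.get(name)
        if h:
            result = h(payload)
            if result is not None:
                self.to_client.append(result)

server = Server()
# The application defines what a slider is; the server had no
# a-priori notion of one, matching the point above.
server.upload("drag", lambda x: ("slider-moved", min(100, max(0, x))))
server.raw_input("drag", 150)    # clamped by application-defined code
server.raw_input("drag", 15)
```

The server's dispatch loop is generic; every slider-specific decision (here, clamping to 0..100) lives in code the application supplied.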



