
Interesting insight! Do you happen to know what model Windows uses right now, with WPF or UWP or whatever they call it today?





Win32 is still alive and kicking, so any desktop Windows app has at least one "traditional" window (HWND) - the top-level one - and it still runs a WndProc that periodically receives WM_PAINT, telling it which chunks to redraw. There's a compositor sitting above all that now, so much of the model's complexity is redundant: since the window no longer renders directly to the screen, it rarely needs to handle partial refreshes.

Most modern GUI frameworks don't use the tree-of-HWNDs anymore, though - which is to say, the entire visual element tree is handled internally by the framework's own compositor, and the top-level WM_PAINT just renders the resulting bitmap. WPF and Qt both do it that way. That said, there's still no shortage of apps implemented in terms of native Win32 widgets - pretty much all the non-UWP apps that ship with Windows are like that. So when you're looking at, say, Notepad or Explorer, they still fundamentally work the way the article linked above describes.
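To make that concrete, here's a minimal sketch (C++ against the plain Win32 API, not production code) of the shape being described: one top-level HWND, a WndProc, and a WM_PAINT handler that gets the "chunk to redraw" via PAINTSTRUCT::rcPaint. The FillRect is just a stand-in - a WPF/Qt-style framework would blit the matching region of its internally composed bitmap there instead.

    #include <windows.h>

    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam) {
        switch (msg) {
        case WM_PAINT: {
            PAINTSTRUCT ps;
            HDC dc = BeginPaint(hwnd, &ps);
            // ps.rcPaint is the invalidated region - the "chunk to redraw".
            // A framework with its own compositor would BitBlt the matching
            // part of its already-rendered bitmap here; we just fill it.
            FillRect(dc, &ps.rcPaint, (HBRUSH)(COLOR_WINDOW + 1));
            EndPaint(hwnd, &ps);
            return 0;
        }
        case WM_DESTROY:
            PostQuitMessage(0);
            return 0;
        }
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }

    int WINAPI WinMain(HINSTANCE hInst, HINSTANCE, LPSTR, int) {
        WNDCLASSA wc = {};
        wc.lpfnWndProc = WndProc;
        wc.hInstance = hInst;
        wc.lpszClassName = "DemoWindow";
        RegisterClassA(&wc);

        CreateWindowA("DemoWindow", "WM_PAINT demo",
                      WS_OVERLAPPEDWINDOW | WS_VISIBLE,
                      CW_USEDEFAULT, CW_USEDEFAULT, 640, 480,
                      nullptr, nullptr, hInst, nullptr);

        MSG msg;
        while (GetMessageA(&msg, nullptr, 0, 0) > 0) {
            TranslateMessage(&msg);
            DispatchMessageA(&msg);
        }
        return 0;
    }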


Modern UI frameworks mostly try to offload rendering to the GPU, where every pixel is simply re-rendered into texture buffers every frame.

But of course that can be combined with diff&patch as well...


Not every pixel! Mozilla posted recently about how they have gone to great lengths to not redraw every pixel every frame because it saves battery. Apparently Chrome and Safari already do that.

> But of course that can be combined with diff&patch as well...

Which they are, through extensions such as https://www.khronos.org/registry/EGL/extensions/KHR/EGL_KHR_...
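For reference, here's a rough sketch of how that extension is typically used (not tied to any particular toolkit; `dpy` and `surface` are assumed to be an already-initialized EGLDisplay and EGLSurface, and a real program would also check for the extension in the EGL_EXTENSIONS string): the entry point is resolved via eglGetProcAddress and passed a list of damaged rectangles, so the compositor knows that only those regions changed since the last swap.

    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    void swap_with_damage(EGLDisplay dpy, EGLSurface surface) {
        // Extension entry points are resolved at runtime.
        PFNEGLSWAPBUFFERSWITHDAMAGEKHRPROC eglSwapBuffersWithDamageKHR =
            (PFNEGLSWAPBUFFERSWITHDAMAGEKHRPROC)
                eglGetProcAddress("eglSwapBuffersWithDamageKHR");

        // One damaged rectangle: x, y, width, height (origin at bottom-left).
        EGLint damage[] = { 10, 10, 200, 120 };

        if (eglSwapBuffersWithDamageKHR) {
            // Only this region changed since the previous swap, so the
            // compositor can limit recomposition to it.
            eglSwapBuffersWithDamageKHR(dpy, surface, damage, 1);
        } else {
            eglSwapBuffers(dpy, surface);  // fallback: full swap
        }
    }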


Yes, I'm especially curious how they manage state in desktop GUIs - whether it's event- and callback-based or some other, more functional kind of architecture.

It's usually event callbacks for actions, although these are often wrapped in first-class "action" or "command" abstractions to allow routing different events to the same handler - e.g. both the menu item and the toolbar button.
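As a bare-bones, framework-agnostic illustration (the widget types here are stand-ins, not any real toolkit's API): both the menu item and the toolbar button forward their click to the same command object, so the handler lives in one place.

    #include <functional>
    #include <iostream>
    #include <string>

    struct Command {
        std::string name;
        std::function<void()> execute;
        std::function<bool()> canExecute;
    };

    // Stand-in widgets: each just routes its event to the shared command.
    struct MenuItem {
        Command* cmd;
        void click() const { if (cmd->canExecute()) cmd->execute(); }
    };
    struct ToolbarButton {
        Command* cmd;
        void click() const { if (cmd->canExecute()) cmd->execute(); }
    };

    int main() {
        Command save{
            "Save",
            [] { std::cout << "document saved\n"; },
            [] { return true; }   // e.g. "is there an open document?"
        };

        MenuItem fileSave{ &save };        // File > Save
        ToolbarButton saveButton{ &save }; // toolbar disk icon

        fileSave.click();   // both paths end up in the same handler
        saveButton.click();
    }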

For views, you either get some form of MVP, with explicitly implemented model interfaces that provide the glue between the views and the object tree they are representing, or data binding that effectively creates that same glue for you. Here's an example from UWP:

https://docs.microsoft.com/en-us/windows/uwp/data-binding/da...

So no, it's not really functional. Quite the opposite - the state is global and mutable, and UX actions that purport to change things really do change them. That also makes it all very intuitive, though.
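A rough, framework-agnostic sketch of what that binding glue boils down to (this is not the UWP API from the link above - WPF/UWP generate the equivalent wiring for you via INotifyPropertyChanged and dependency properties): a mutable model exposes an observable property, and the view subscribes so it gets updated whenever the value changes.

    #include <functional>
    #include <iostream>
    #include <string>
    #include <utility>
    #include <vector>

    template <typename T>
    class Observable {
        T value_;
        std::vector<std::function<void(const T&)>> observers_;
    public:
        explicit Observable(T initial) : value_(std::move(initial)) {}
        const T& get() const { return value_; }
        void set(T v) {                           // mutate, then notify
            value_ = std::move(v);
            for (auto& obs : observers_) obs(value_);
        }
        void subscribe(std::function<void(const T&)> obs) {
            observers_.push_back(std::move(obs));
        }
    };

    struct PersonModel {                          // the mutable "object tree"
        Observable<std::string> name{ "Ada" };
    };

    int main() {
        PersonModel model;

        // "View" side: a text label bound to model.name.
        model.name.subscribe([](const std::string& n) {
            std::cout << "label now shows: " << n << "\n";
        });

        // A UX action really mutates the state; the bound view follows.
        model.name.set("Grace");
    }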


As far as WPF and UWP are concerned (you can do this with Forms as well, although support is more primitive), it's done via data bindings.

The concept is somewhat complex to master - I compare it to getting monads to click - but once you understand it, you can envision how to build the full UI architecture as if you had a Lego box at your disposal.

For everything: views, stylesheets, event handlers, data models.




