
I'm not sure I'd weigh latency as "far more" important than throughput (a more intelligent tradeoff, rather), but I do think an exploration of low-latency systems for day-to-day use would be very interesting. I've thought about trying this myself, starting simple with something like RT Linux and trying to present a low-latency UX for viewing books, media, and certain webpages. Almost every step of the computation pipeline has been optimized for throughput over latency, so it would take some "radical" rework, but I think it would be a very fun and informative experiment.

I've spent a lot of time archiving and scraping my own content and optimizing my home network (and ingress points, routing, etc.) to access all of these things with minimal (< 3 ms) retrieval latency, and the effect is joyful. The UX is rough, since it's something I wrote for low-latency use and my UX skills aren't very good, but it makes me feel like I have "instant" access to content, which feels like a superpower. I'd love to explore topics like this for a broader audience, beyond just the nerdy media I consume.
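For anyone curious what measuring this looks like, here's a minimal sketch in Python. A throwaway temp directory stands in for the local content archive - the item names and layout are invented for illustration:

```python
import statistics
import tempfile
import time
from pathlib import Path

# Hypothetical stand-in for a locally archived content cache.
archive = Path(tempfile.mkdtemp())
for i in range(50):
    (archive / f"item_{i}.txt").write_text("content " * 100)

def fetch(name: str) -> tuple[str, float]:
    """Read one cached item, returning (content, retrieval latency in ms)."""
    start = time.perf_counter()
    text = (archive / name).read_text()
    return text, (time.perf_counter() - start) * 1000.0

latencies = [fetch(f"item_{i}.txt")[1] for i in range(50)]
print(f"median retrieval latency: {statistics.median(latencies):.3f} ms")
```

Against a warm local disk cache this tends to land well under a millisecond per item, which is the kind of number that makes content feel "instant."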




In human-facing systems, responsiveness is everything.

Somehow games can run at 240fps on a modern PC, but desktop user interfaces (not to mention web apps) are still sluggish.


This is it exactly. Opening File Explorer in Windows can sometimes take 1-2 seconds from click to usable window. Or worse, the context menu on a file can also take multiple seconds. It's absolutely amazing.

Still, it could be worse, at least desktop environments aren't as bad as the mobile world where it seems like everything has to have a transition and animation.


On some old computers, the OS was simple, lived in ROM, and responded/loaded immediately. An upgrade just meant swapping the ROM chip. HDDs and FDDs were then just for user data or application programs.


The games are not doing much OS interaction at all. That's where most of the delays come from (when it's not just poor internal architecture). The games do physics, logic and video rendering. They typically have a relatively direct path to the GPU, bypassing most of the kernel.

This is not a model for applications in general.

That doesn't mean that the sluggish desktop apps should not be fixed, but "be more like games" is not really the right advice.


Games and applications aren't that far apart. Most titles are developed on existing engines that are predisposed to certain genres/constraints and have to work within that, the same way that UI frameworks limit what an application can do (which is quite a lot).

I've spent a decent amount of time in both spaces in my career. It is totally possible to build performant, fluid applications, but it requires a degree of care and investment. Usually "good enough" suffices and that's where things land, but there are also developers who prioritize responsiveness, and short of very restrictive environments you can build responsive applications.

Besides, at the end of the day all UI is doing is rendering, with a direct path to the GPU just like games do :).


> They typically have a relatively direct path to the GPU

That makes me wonder... how much of an OS could be offloaded to a GPU? Especially for GPUs built into the processor that share the same memory bus, the data wouldn't need to go through PCIe to get to the GPU and back to main memory.

Does any modern OS offer a standard and architecture-neutral way to run workloads on GPUs?

> That doesn't mean that the sluggish desktop apps should not be fixed, but "be more like games" is not really the right advice.

If we can offload as much of the GUI rendering to the GPU, that's still a win because it frees the CPU for other things.


> The games are not doing much OS interaction at all. That's where most of the delays come from

...really? My PC is orders of magnitude more powerful than the 166 MHz Pentium PC my family used in the 90s, yet the interface is no faster. Was interacting with the OS somehow orders of magnitude faster back then? I can run Win98 in a virtual machine with all the associated overhead... hell, I can do that on top of f'ing javascript in a web browser, and it is still more responsive than modern OSes.


There are certainly times when a heavy weighting in favour of latency makes sense and times when it doesn't - and interactivity is the defining factor for me. If I'm synthesizing an FPGA core or rendering a video, I don't want latency optimisation to slow it down if I've wandered away from the computer and left it to do its thing. If, on the other hand, I want to get on with something else while the long-winded task is progressing, I want what I'm doing now to have priority, to be regarded as the "foreground" task, and I don't mind if the long-winded process takes longer as a result (within reason!). And, crucially, if that long-winded task eats all the RAM and I decide to cancel it, I want the user interface to remain responsive so I can do that.

I'm glad you've recognised the delights of a low-latency interface. The ideal interface is invisible, unnoticed by the user - my own thoughts are that an unpredictable response time is jarring, causing the interface to be noticed in a way that it shouldn't be - I've heard it called "jank" in the context of cellphone apps. So I believe people do recognise it, but probably underestimate how much it detracts from a pleasant user experience.


I remember a cute trick Windows did (at least in the ~XP days) where if a window had focus, it got more CPU cycles. You could, for instance, create two identical console windows running a program that just does math in a loop, and if you give focus to one of them and let them churn, it will start to outstrip the other.

Of course, the underlying problem is that it's not always easy to know where the user's attention is (or if they have walked away, as you mention), but it was a simple "somewhat right" trick.
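The general shape of the trick - demoting background work so the favored task wins more CPU - can be sketched on POSIX systems with niceness. This is just an illustration of the scheduling idea, not how Windows actually implemented its focus boost, and it assumes a POSIX platform where `os.nice` exists:

```python
import os

# A "background" task demotes itself so foreground work wins the CPU.
# os.nice(increment) adds to the process's niceness and returns the
# new value; os.nice(0) just queries the current niceness.
before = os.nice(0)   # current niceness, unchanged
after = os.nice(5)    # demote this process by 5 niceness levels
print(f"niceness: {before} -> {after}")
```

Raising your own niceness never needs privileges; lowering it back down does, which is why this only works as a one-way "step aside" gesture.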


An RT kernel is not going to have much impact on UX latency.

I run an RT kernel all the time. Plenty of things are still janky. Improving UX latency has more to do with UI toolkit and application design than the kernel.


Of course. You'd need to start at more important latencies, like UX callback latency.
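A toy illustration of why callback latency dominates here: in a single-threaded event loop, one slow handler delays every callback queued behind it, no matter what the kernel does. The queue, handlers, and timings below are all invented for the sketch:

```python
import time
from collections import deque

# Minimal event loop: each entry records when it was posted, so we can
# measure callback latency (post time -> handler start time).
queue: deque = deque()

def post(handler):
    queue.append((time.perf_counter(), handler))

def run():
    latencies = []
    while queue:
        posted, handler = queue.popleft()
        latencies.append((time.perf_counter() - posted) * 1000.0)
        handler()
    return latencies

post(lambda: time.sleep(0.02))   # a sluggish handler hogging the loop
post(lambda: None)               # a cheap UI callback stuck behind it
lats = run()
print(f"second callback waited {lats[1]:.1f} ms")
```

The second callback is essentially free to execute, yet it still waits ~20 ms - which is why fixing handler cost and queue design moves UX latency far more than swapping kernels.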


And you'd need to properly prioritize the various GUI tasks, otherwise they will simply get blocked by higher-priority stuff. Having a real-time OS and then running interactive tasks at a low priority defeats the purpose, so there is that extra bit of configuration to keep in mind.
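On Linux, that configuration check looks roughly like this (a sketch using Python's `os` scheduling wrappers; promoting a task to SCHED_FIFO normally requires root or CAP_SYS_NICE, so the attempt is expected to fail for a regular user):

```python
import os

# Inspect the scheduling policy of the current process. On Linux,
# 0 is SCHED_OTHER, the default time-sharing class - meaning this
# "interactive" task gets no real-time priority at all.
policy = os.sched_getscheduler(0)
print("current policy:", policy, "(0 == SCHED_OTHER)")

try:
    # Try to promote ourselves into a real-time class.
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(10))
    print("now running under SCHED_FIFO")
except PermissionError:
    print("not privileged; still in the normal time-sharing class")
```

The shell equivalent is `chrt -f 10 <command>`, with the same privilege caveat - an RT kernel does nothing for a GUI task that was never given an RT class.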


If you're looking for an easy-to-install version of Linux that gives you 99% of RT Linux then Ubuntu Studio is worth looking at.



