
> On a system running primarily trusted open source applications

I'm not sure how releasing the source code to a program alters its security state or makes the bugs disappear. Security audits typically rely on binary analysis and runtime inspection rather than source code review, regardless of source model. This is because doing so accounts for things that can violate the contract that is source code: the larger runtime, toolchain bugs, and source-level bugs that slip past flawed human eyes unnoticed. Assuming your binaries aren't obfuscated, binary analysis is incredibly difficult to hide from. The whole field is actually moving towards techniques like black-box fuzz testing; I can guarantee you can thank that for some of the most significant security fixes you've benefited from in the past couple of years.
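To make the black-box idea concrete, here's a minimal fuzzing sketch. Everything in it is hypothetical (the `parse_header` target and its planted bug are mine, invented for illustration): the fuzzer knows nothing about the target's internals, it just throws random bytes at it and records anything that blows up unexpectedly.

```python
import random

# Toy binary "header parser" with a planted bug: the opcode table only
# covers opcodes 0x00-0x7F, so any first byte >= 0x80 raises IndexError.
OPCODES = ["op%d" % i for i in range(128)]

def parse_header(data: bytes) -> str:
    if not data:
        raise ValueError("empty input")  # a documented, expected error
    return OPCODES[data[0]]              # planted bug: no bounds check

def fuzz(target, iterations=200, max_len=16, seed=0):
    """Black-box fuzzing: feed random byte strings to `target` and
    record every input that triggers an undocumented exception."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(max_len + 1)))
        try:
            target(data)
        except ValueError:
            pass                         # documented error, not a finding
        except Exception as exc:         # anything else counts as a crash
            crashes.append((data, exc))
    return crashes

if __name__ == "__main__":
    findings = fuzz(parse_header)
    print(f"{len(findings)} crashing inputs found")
```

Note that the fuzzer never reads the target's source; coverage-guided tools like AFL++ and libFuzzer refine this loop with instrumentation feedback, but the contract is the same.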

FLOSS has many merits, but security is lower on that list every year. Proprietary software does not have a monopoly on vulnerabilities.

Also, there is no such thing as a "trusted program" unless you employ formal proofs; Project Everest is one example, and its progress pales in comparison to that of less rigorous TLS implementations. Everyone writes buggy code, and some bugs are more exploitable than others.

Finally, most users do regularly run hostile programs in the form of remote JavaScript apps in runtimes that we call Web Browsers. Browsers use all the OS features they have access to in order to provide sandboxing. On Linux, this includes seccomp for syscall filtering, multi-process isolation, and user namespaces; however, it typically lacks robust GUI isolation on Linux and BSD due to the lack of support in X11. Chromium has a (very partial) mitigation that limits access to the GPU process, while Firefox does no such thing; Webkit2GTK browsers leave you even more exposed, as they often disable sandboxing. If open source were perfectly secure, Firefox wouldn't regularly lead Pwn2Own's browser exploits.

This isn't limited to browsers: PDF readers, ebook readers, email clients, media players (ASS subtitles might as well be programs of their own), and countless other programs exist primarily to handle untrusted content.

> This means it is impossible to have lower latency if you force vsync (especially if you have unpredictable events like user input).

Handling user input in 2022 is complex work, far more involved than just printing characters one after the other. Characters interact with each other, modify each other with context, overlap, change each other's direction, and switch meanings; libraries like HarfBuzz are quite advanced for a reason. With all this work taking place, vsync does reduce latency by reducing the amount of rendering: it lets the renderer skip intermediate changes that would never line up with a display refresh anyway.
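A toy simulation makes the "less rendering" point visible (numbers and names are mine, not from any real renderer): bursts of input events land at random times over one second; rendering per event redraws for every change, while vsync coalesces everything within a refresh interval into one redraw of the latest state.

```python
import random

REFRESH_HZ = 60
FRAME = 1.0 / REFRESH_HZ  # duration of one refresh interval, in seconds

def simulate(event_times, vsync):
    """Count renders triggered by events over a 1-second window.

    Without vsync: one render per event.  With vsync: all events
    falling inside the same refresh interval are coalesced into a
    single render, so intermediate states are never drawn at all.
    """
    if not vsync:
        return len(event_times)
    # Bucket events by refresh interval; one render per dirty interval.
    dirty_frames = {int(t / FRAME) for t in event_times}
    return len(dirty_frames)

rng = random.Random(42)
events = sorted(rng.random() for _ in range(500))  # 500 events in 1 second

print("no vsync:", simulate(events, vsync=False))  # one render per event
print("vsync:   ", simulate(events, vsync=True))   # at most 60 renders
```

Less work per second means each frame that does get drawn has more headroom to finish before its deadline, which is where the latency argument comes from.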

> Besides, all the weird "timing solutions" and "offloading to the GPU" you are proposing can be implemented on X11 as well.

Wayland has excellent support for frame timing; X11 simply does not. You'd have to break compatibility or layer a more advanced compositor on top, and the latter wouldn't follow any standard that programs can adopt. Wayland standardizes the process so that programs can actually reap the benefits; terminal emulators like Alacritty and Foot are two examples.
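The standardized mechanism here is Wayland's frame callback (`wl_surface.frame`): instead of drawing on its own timer, a client marks its state dirty, asks the compositor to be told when drawing is worthwhile, and renders exactly once when that callback fires. Below is a toy model of that pattern, not real libwayland code; the class and loop are mine for illustration.

```python
# Toy model of Wayland frame-callback pacing: the client never draws on
# its own schedule; it requests a callback and draws only when the
# compositor fires it, once per refresh at most.
class Client:
    def __init__(self):
        self.dirty = False              # is there anything new to show?
        self.callback_requested = False
        self.draws = 0

    def input_event(self):
        self.dirty = True
        self.callback_requested = True  # "tell me when it's worth drawing"

    def frame_done(self):
        """Compositor fires the frame callback for the next repaint."""
        if self.dirty:
            self.draws += 1             # render the latest state only
            self.dirty = False
        self.callback_requested = False

# Simulate 10 refresh cycles, each with a burst of input between repaints.
client = Client()
for cycle in range(10):
    for _ in range(25):                 # 25 input events per interval
        client.input_event()
    if client.callback_requested:
        client.frame_done()             # one draw per refresh with damage

print(client.draws)                     # prints 10: 250 events, 10 draws
```

Because every Wayland client can rely on this same callback, toolkits and terminals get the pacing benefit portably, which is exactly what an ad-hoc compositor bolted onto X11 couldn't offer.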



