Fedora 40 Eyes Dropping Gnome X11 Session Support (phoronix.com)
22 points by HieronymusBosch on Sept 19, 2023 | hide | past | favorite | 17 comments


The first thing that comes to my mind when reading this article is: "Does Wayland have something like `ssh -X`?" After a bit of looking, it seems to me that Waypipe[0] is the successor to remote Xorg sessions. I'd be interested to hear from anyone who has used it whether the functionality is truly comparable.

[0] https://gitlab.freedesktop.org/mstoeckl/waypipe


I'm looking forward to the same. The fact is that `ssh -X` and remote X in general were really shitty with modern applications (basically sending uncompressed bitmaps, with two round-trips for every input and output event), but they do work and are easy to set up thanks to SSH.

Things like waypipe and VNC work much better, but they aren't as easy to get set up yet.

Edit: I just tried waypipe again and it worked perfectly. I couldn't even tell the other device was remote (although it was on a fast local connection). Other than needing a separate tool rather than being built into ssh it seems pretty seamless.
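For anyone who wants to compare the two approaches, the invocations look roughly like this (hostname and applications are placeholders; waypipe must be installed on both ends):

```shell
# Classic X11 forwarding: tunnels the X protocol over SSH
ssh -X user@remote-host xterm

# Waypipe: proxies the Wayland protocol over the same SSH connection,
# with compression and damage tracking instead of raw protocol traffic
waypipe ssh user@remote-host some-wayland-app
```

The waypipe wrapper transparently starts its own proxy on the remote side, which is why it feels almost as seamless as plain ssh.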


Hey thank you for giving it a try, I'm happy to hear it works that well. Here's to the past in the future.


This has probably been asked dozens of times, but honest question: how much of a technological dead end is the X11 architecture? Perhaps this ship has sailed, but what were the impediments to modernization? Lack of developer interest? Lack of consensus between stakeholders to make significant changes? I worry that other critical open source software projects will go through a similar lifecycle as they mature and the original developers retire. Considering the lineage of virtual consoles going back to physical teletype machines, it makes sense to preserve some form of feature compatibility with current software.


The problem is that the X11 architecture has been dead for a decade. Its core idea was network-transparent drawing primitives, but almost all of modern X is done via extensions: drawing goes through OpenGL or a similar interface, buffers are passed using shared memory, and lots of display metadata and configuration is handled via extensions as well. The actual rendering is done by the window manager (assuming you are using compositing).

As Daniel Stone said, all that is left of "core X11" is really bad IPC: the client talks to X, which talks to the window manager, which talks back to X, which talks to the client, requiring two round-trips for every request.

https://youtu.be/RIctzAQOe44?t=1037

So saving Core X11 is a dead end. All you have is a basic IPC system with unnecessary round-trips.

So all that is left to save is the extensions. You could save GLX and its Vulkan equivalent, but tearing them away from the IPC leaves not much behind.

So really, no. The things that could be saved are so basic (like feeding input events) that it is better to just re-implement them using a better IPC protocol. For clients that need backwards compatibility there is XWayland (although it is limited for security reasons).


Everyone defending the X11 protocol nowadays should be forced to write at least one GUI app using just Xlib and Xt.
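To illustrate the point, here is roughly the minimum Xlib program that opens a blank window and waits for a keypress (a sketch only; build with `gcc hello.c -lX11` and it needs a running X server):

```c
#include <X11/Xlib.h>
#include <stdio.h>

int main(void) {
    /* Every Xlib program starts by connecting to the X server */
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

    int screen = DefaultScreen(dpy);
    /* A bare 200x100 window with a 1px border, nothing drawn in it */
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, screen),
                                     10, 10, 200, 100, 1,
                                     BlackPixel(dpy, screen),
                                     WhitePixel(dpy, screen));
    XSelectInput(dpy, win, ExposureMask | KeyPressMask);
    XMapWindow(dpy, win);

    /* Classic blocking event loop; exit on the first keypress */
    XEvent ev;
    for (;;) {
        XNextEvent(dpy, &ev);
        if (ev.type == KeyPress) break;
    }
    XCloseDisplay(dpy);
    return 0;
}
```

And this is before any text rendering, toolkits, or window-manager hints, which is where the real verbosity starts.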

I don't believe people are invested in the X protocol per se, but more in the Xorg implementation, which still offers a few interesting features not yet ported to Wayland. But letting people choose means the urgency of porting those features diminishes, and you don't get Wayland feedback from all the people who stick with X11. Linux users seem to be overrepresented at both ends of the conservative/progressive spectrum.


Well, if they can make CUDA and Wayland work simultaneously...

(Baseline Nvidia drivers without CUDA already work fine with Wayland).


From "Wayland does not support screen savers" (2023) https://news.ycombinator.com/item?id=37385627 :

> the NVIDIA proprietary Linux module for NVIDIA GPUs hardware video decode doesn't work on Wayland; along with a number of other things: "NVIDIA Accelerated Linux Graphics Driver README and Installation Guide > Appendix L. Wayland Known Issues" https://download.nvidia.com/XFree86/Linux-x86_64/535.54.03/R...

What is NVIDIA's annual developer salary commitment to non-HPC Linux compared to AMD with ROCm and Intel?

(EDIT)

Most nvidia driver Linux kernel module re-packaging projects are not on GitHub, which supports FUNDING.yml for specifying how to donate.


Trusted builds of the GPU modules are necessary. Running with Secure Boot on under the Fedora Silverblue ostree distribution requires locally re-signing the GPU modules after every upgrade and enrolling an additional local key in the Secure Boot firmware keystore.

I don't think this is a https://SLSA.dev -compliant GPU module software supply chain:

  NVIDIA Src -> RPMfusion pkg & builder -> rpm-ostree toolbox dnf install && Local_module_signing_with_local_secureboot_key
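The local-signing step in that chain looks roughly like this (a sketch; key filenames and the module path are examples, and the `sign-file` location varies by distribution):

```shell
# One-time: create a Machine Owner Key and enroll it in the Secure Boot
# keystore (mokutil prompts for a password; enrollment completes on reboot)
openssl req -new -x509 -newkey rsa:2048 -nodes -days 36500 \
    -keyout MOK.priv -outform DER -out MOK.der \
    -subj "/CN=Local module signing/"
sudo mokutil --import MOK.der

# After every kernel or driver upgrade: re-sign the rebuilt module,
# since the local build is not covered by the distro's signing key
sudo /usr/src/kernels/$(uname -r)/scripts/sign-file sha256 \
    MOK.priv MOK.der /lib/modules/$(uname -r)/extra/nvidia.ko
```

It is exactly this manual re-signing loop that a trusted, centrally signed build chain would eliminate.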


I don’t know, but ROCm is a joke and unusable. 1200 lines of some very complicated C++ for a simple FFT... that’s ridiculous. And nobody seems to be doing anything about it, and then they wonder why they have a 1% or so market share in AI.

It’s not a hardware problem, it is an API design problem.


It has nearly the same interface design as cuFFT, admittedly with less documentation. I’m not sure I understand this complaint versus the competition. https://rocm.docs.amd.com/projects/hipFFT/en/latest/api.html

If you’re complaining about cuFFT’s design, how BLAS-like interfaces are outdated, and the lack of a proper hypothetical heterogeneous array programming language, sure. But it’s not much better in Nvidia land.


https://www.amd.com/en/graphics/servers-solutions-rocm-hpc :

> ROCm™-optimized libraries currently include BLAS, FFT, RNG, Sparse, NCCL (RCCL) and Eigen

... and Numba, which you can compile a symbolic expression for with sympy.utilities.lambdify. https://docs.sympy.org/latest/modules/numeric-computation.ht...

AMD ROCm does do HPC. Which desktop/workstation AMD GPUs are now supported with optimization?


There are libraries, yes. I was talking about using it directly.


No, I mean for writing new code. I know that FFT is already in the libraries; I was using it as an example. About 200 lines of code in CUDA versus 1200 lines of code in ROCm. ROCm is boilerplate-ridden and hardly usable for new code.


That's a contrived example, then. Also because there's already an optimized version of FFT in their libraries.

At least with open-source AMD code, it can be fixed with Pull Requests.

FWIU, OpenCL is insufficient, CUDA is the closed-source fanboy favorite that the industry can't move away from, and Intel OneAPI may be the most portable but not the most performant.

Impact-wise, contributing to the ROCm and OneAPI tools to help them be more competitive is in consumers' interest.


The industry can't move away from CUDA because you can easily write anything in CUDA, as opposed to ROCm. I once needed to write a lattice Boltzmann CFD simulation; it is so much easier in CUDA compared to ROCm that I wouldn't even start the latter unless forced to. Everything takes 5x the amount of code and 5x the amount of time.

ROCm is bad API design, and no amount of gradual tinkering will save it. I wonder why AMD can't design something better. HIP exists, but it is a "lesser CUDA" (CUDA is actually a mediocre API, we can design things much better than that now).


What was the difference in runtime performance, and did you try CuPy?

https://github.com/cupy/cupy :

> CuPy is a NumPy/SciPy-compatible array library for GPU-accelerated computing with Python. CuPy acts as a drop-in replacement to run existing NumPy/SciPy code on NVIDIA CUDA or AMD ROCm platforms.

Projects using CuPy: https://github.com/cupy/cupy/wiki/Projects-using-CuPy
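The "drop-in" claim can be sketched in a few lines. The NumPy fallback below is an assumption for machines without a CUDA/ROCm GPU; with CuPy installed, the identical code runs on the GPU:

```python
import numpy as np

try:
    import cupy as xp   # GPU-accelerated, NumPy-compatible array library
except ImportError:
    xp = np             # CPU fallback with the same API (assumed for demo)

# An FFT in two lines, versus the hand-rolled C++ discussed above
signal = xp.arange(8, dtype=xp.float64)
spectrum = xp.fft.fft(signal)

# The DC bin of the FFT is the sum of the samples: 0+1+...+7 = 28
print(abs(spectrum[0]))  # -> 28.0
```

The same array code targeting either backend is exactly the portability that the lower-level ROCm/CUDA APIs don't give you.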



