Hi, I'm the release manager responsible for this X release seeing the light of day.
This release would not have happened if an effort to improve touchpad support in Linux had not been funded for the past year and a half. X server 21.1 makes touchpad gesture support universal, so it's much easier to offer a consistent user experience for everyone, and downstream developers are less reluctant to accept contributions.
By the way, I wonder if there is demand for long-term maintenance of X server specifically. If you think you could contribute, maybe write here, and if there's enough interest it might be possible to crowdfund something.
In my view there's absolutely space for Xorg in the future. Not every OS can run Wayland well and Wayland lacks some X features like network transparency.
I kinda hope it'll be backed by an organisation that's really keen on taking it forward. Augmenting the security model, for example. And going back to some client-rendering-enabling features, like anti-aliasing. I really think there's a lack of a viable remote display tool in Linux, where Windows has RDP. Block pushers are just too slow. VDI use cases are picking up again heavily, as are other security-based techniques that could benefit, like remote browser isolation. Filling up my desktop with windows running on a bunch of other computers is just so powerful.
So in other words, I'd love to see an X12 that really moves towards the future. As I understand it, it's currently with Red Hat, which only sees a future in Wayland, so X is just on life support. I regret that because I see a lot of concepts in it that are still very valid and that have been lost in the current alternatives.
> Wayland lacks some X features like network transparency.
This, IMHO, is the most tragic loss of functionality desktop systems have suffered.
It's hard to justify being unable to do something we've done since the mid-90s. The fact that I could sit in front of a computer running GUI software from different machines with different OSes as if they were local, and that now the best I can hope for is what VNC gave me in the early 2000s, is nothing short of depressing.
X forwarding "just works" in Wayland via XWayland. None of that has changed significantly.
But I'm disappointed to see this myth keep popping up. X in general isn't "network transparent." Some X11 clients are, but only if they're built in a very specific way, which also tends to degrade performance locally.
Thanks for the effort you are putting into comments here lately! Maybe you have ideas on improving the "Are we Wayland yet?" site: https://github.com/mpsq/arewewaylandyet Yesterday I sent a PR for a "Missing" section.
There isn't much for me to say. To be honest, I would probably not contribute to that website. Wayland isn't that exciting of a technology and I don't think end users should even need to care about it or care that any of those utilities supports Wayland, if distro developers are doing their jobs then it will all "just work" and nobody has to worry about it. Also that website is misleading, many of those utilities are wlroots specific and not really useful on other implementations.
It may be an Ubuntu issue then. I've tried to start gnome-terminal on the Linux laptop from the Mac running XQuartz, and what happened was a terminal starting on the Linux box's screen, totally ignoring that I was SSHing into the Linux machine from the Mac.
Maybe you were using the Wayland backend; you might have to force X11 with the environment variable GDK_BACKEND=x11.
Though part of this weirdness is due to the clunkiness of the way X forwarding is implemented, I wish there was a way to do it that didn't involve kludging around with environment variables. But that is what we are stuck with.
I can relate, as I'm one of the few people I personally know who has used that feature in the last twenty years. In the last ten years or so, however, I've used some variation of NX whenever I needed a remote X application, since then I typically need to preserve the session as well. I think the last time I used the plain X11 protocol across the network was when I still had Sun Rays (between the Sun Ray server and the application's host).
> I really think there's a lack of a viable remote display tool in Linux, where Windows has RDP. Block pushers are just too slow. VDI use cases are picking up again heavily, as are other security-based techniques that could benefit, like remote browser isolation. Filling up my desktop with windows running on a bunch of other computers is just so powerful.
I think it's just a tooling problem; Apache Guacamole or Xpra can trivially stick windows in a browser window, VNC is... okay, not amazing, but passable, and even RDP can be made to work well on Linux, it's just that setting up any of those is a horrible pain on Linux (in my experience at least).
For Wayland there is waypipe for single-window forwarding over SSH, and there is wayvnc for wlroots-based compositors (Phosh, sway) which can run in headless mode with llvmpipe rendering, and weston has an RDP backend (but I recommend wlroots with wayvnc over weston).
> And going back to some client-rendering-enabling features, like anti-aliasing.
The XRender extension already supports server-side drawing: anti-aliasing, transparency, gradients and so on. If you use, for example, Cairo with the XRender backend, almost everything except the tessellation of splines is rendered server-side, with a rather efficient wire protocol that can beat even RDP.
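To make that concrete, here's a minimal, hedged sketch (an illustration, not the parent's exact setup) of a client drawing an anti-aliased, translucent shape through cairo's Xlib surface, which gets translated into RENDER requests when the server advertises the extension; error handling is omitted and the compile line is approximate:

    /* Translucent, anti-aliased drawing sent to the X server as RENDER
     * operations rather than as a pre-rendered bitmap.
     * Build roughly with: cc xr.c $(pkg-config --cflags --libs cairo-xlib x11) */
    #include <X11/Xlib.h>
    #include <cairo/cairo-xlib.h>

    int main(void) {
        Display *dpy = XOpenDisplay(NULL);              /* connect to the server */
        int scr = DefaultScreen(dpy);
        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0,
                                         300, 200, 0, 0, WhitePixel(dpy, scr));
        XSelectInput(dpy, win, ExposureMask);
        XMapWindow(dpy, win);

        cairo_surface_t *surf =
            cairo_xlib_surface_create(dpy, win, DefaultVisual(dpy, scr), 300, 200);
        cairo_t *cr = cairo_create(surf);

        XEvent ev;
        for (;;) {
            XNextEvent(dpy, &ev);
            if (ev.type != Expose) continue;
            cairo_set_source_rgba(cr, 1.0, 0.0, 0.0, 0.5);  /* 50% opaque red */
            cairo_arc(cr, 150, 100, 80, 0, 2 * 3.14159265);
            cairo_fill(cr);                                  /* composited server-side */
            cairo_surface_flush(surf);
        }
    }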
The problem with XRender (and Cairo) is that it is still inherently an old-school immediate mode API. It's never going to be as performant as a fully parallelized GPU implementation.
All XRender operations are already GPU-accelerated via Glamor. The spline tessellation step of drawing is notoriously hard to do on GPUs, hence the separation at that point was actually a smart choice.
The GPU acceleration in Glamor doesn't really help; the issue here is that everything still needs to be serialized by the X protocol rather than being drawn in parallel as it would be on a GPU.
> The GPU acceleration in Glamor doesn't really help
According to benchmarks at the time, it gave a 700-800% speedup.
> everything still needs to be serialized by the X protocol
Drawing in general needs to be serialized. As soon as the object tree gets too big, parallelized rendering on the GPU will be slower because of the huge number of branches. The link you posted solves this problem somewhat but introduces a huge amount of complexity into the renderer (also, the benchmarks are done on Windows, which obviously has no XRender backend). XRender, on the other hand, is a simple, standardized solution that works today.
A 7-8x speedup is not really that much when parallelized GPU algorithms can often get 100x or more. I would suggest looking at benchmarks against GPU-based solutions such as piet-gpu, or even a parallelized 2D renderer like Blend2D, and seeing how that compares. IIRC the way you really want to do it is to build up the entire frame as a command buffer, as you would in Vulkan, and then submit that.
The recent developments here came about precisely because of advances showing that drawing didn't need to be serialized. The whole point of a GPU is to avoid that. I really don't know what to tell you; this is a complex problem, and it's not solvable with simple solutions. The practice of the X server implementing the simplest possible solution has only really resulted in the complexity being moved into other projects, hence the existence of Wayland...
Last I checked, the BSDs and most other Unix-likes (and genuine Unix™ variants) don't support Wayland. In fact, Wayland is pretty much Linux-only in practice.
I can't readily find anything to indicate that this claim is out of date. I am aware that Wayland has been ported, but I have not heard of it being widely used, or being "production ready", per se.
The FreeBSD wiki is a bit notorious for being out of date, and that particular page is marked "CategoryStale". Anyway, the first result on DuckDuckGo is someone using Wayland on FreeBSD successfully [1].
Hi, I am crowdfunding the Barrier effort [0] that would finally make Wayland usable for me.
In the meantime, I do need continuous Xorg updates.
It's not by choice; Wayland just doesn't do the trick and is unusable on my machines until workflows like Synergy/Barrier are supported.
I really hope that there is an awareness in the DE communities that, for some use cases, Wayland is still not able to replace X, and that users left behind are stuck with X until then.
> By the way, I wonder if there is demand for long-term maintenance of X server specifically.
Hello and thank you for your work!
I'd love to see Xorg maintained at least.
It seems to me there is still dust to settle around Wayland, and Xorg is getting little to no development.
Honestly, I'd just like xorg to keep working until the rest of the ecosystem has native wayland support.
Taking myself as an example: I just like Xfce. Xfce is not on Wayland yet. I don't care how buggy Xorg is or how much better Wayland is; I won't switch if I can't run Xfce.
At least I would not have been driving this release.
Maybe someone else would have stepped up; it's hard to say. However, there was no release manager for the last several years, so the likelihood of this release happening would have been small.
You would be hard-pressed to find any current or former Xorg developer who actually wants to work on Xorg anymore, regardless of their employer. The intersection between long-time core Xorg maintainers and the people who started the Wayland project is nearly 100%.
The fact that Red Hat doesn't see much future in Xorg reflects this, rather than the other way around.
The other historically important corporate sponsor of Xorg development is Intel, and they seem to have reached the same conclusion.
(I work for Red Hat, but not on anything graphics related)
Hi, I was recently trying to make some improvements to the build for XQuartz, which in turn was so I could make some changes to XQuartz itself and fix some issues caused by recent macOS changes.
While I made some progress, it took much longer than I anticipated. I wonder if there's enough demand for an X server on macOS to justify this work. Additionally, I'd have to find some funds to support the work if I were to continue.
> though you're not clear whether it's time, money or community you're asking for
All of these, but perhaps the most important would be money, as it's the only thing that clearly indicates the level of community support. Once there's funding, it's much easier to prioritize the work.
What kind of money are you looking for? You mention crowdfunding, but it's not clear to me if you're looking for tens or perhaps hundreds of dollars from individual users, or tens or hundreds of thousands from institutions/corporations.
If it's the former, then yes, I'd back it, depending on the price and what you're planning. If it's the latter, I'm not in a position to influence my employer to participate.
I think something around $4000 monthly would be enough for release management duties and minimal maintenance. Of course, just one person part-time is not enough for large-scale changes, but it's enough for resolving regressions and making sure other people's contributions get merged. I've been doing exactly that for the last half year.
I think it does not make sense to involve X.Org, as we all want as little bureaucracy as possible; that time is better spent on software development.
> By the way, I wonder if there is demand for long-term maintenance of X server specifically. If you think you could contribute, maybe write here, and if there's enough interest it might be possible to crowdfund something.
Are we talking money, time, tooling, etc.? I'm rather attached to Xorg and would like it to keep going; what would be the most helpful?
All of these are great, but perhaps the most important would be money, as it's the only thing that clearly indicates the level of community support. Once there's funding, it's much easier to prioritize the work.
As a BSD user, glad to see the release, but I am concerned about the future of X since Wayland has plenty of Linuxisms in it, making it very hard to port.
As someone who is still not using Wayland by default (I tried, but it regresses my setup), I am thankful there are still people involved in this. Good job, guys, you are appreciated. Wayland looking sexier [0] doesn't make your work on the thing most people actually use any less appreciated.
[0]: considering Wayland is more than a decade old by now and is still a huge mess, I guess I'll keep trusting old lady Xorg's experience to handle my workflow.
Xorg/XFree86 served us very well for decades, and will continue to do so as XWayland. It feels like saying goodbye to a good employee who is retiring but will be replaced by someone more adequate for the job.
Wayland solves problems that I don't have, while introducing new ones (my Nvidia GPU isn't well supported, remote X doesn't exist, and none of the WMs I love, like Openbox, support it).
So I'll stay on Xorg for as long as I can.
The lack of support for Nvidia is entirely Nvidia's own fault. Basically, they decided to go their own way on buffer management for userspace, and the development community at large rejected them.
It'll be fixed in about a year; Nvidia finally got on board and built the APIs they needed to build. IIRC, KDE forced one of the Nvidia devs to implement and then fix all the problems with Nvidia on KDE Wayland; basically the dev went back to Nvidia and was like "just implement the GBM APIs".
So far there is no viable replacement around that is "more adequate for the job" (Wayland is fundamentally flawed at the conceptual level), so Xorg will stick around much longer, probably decades.
As a consumer/power user with a fairly bespoke window manager, I'm worried that the minimalist approach to Wayland will fragment functionality depending on compositor / DE.
Wanna use Firefox? Gotta use the GNOME compositor for that! Chromium only works on the KDE compositor, etc.
Is this a misguided fear, or do you anticipate it moving that way?
I think these worries are unfounded atm, because compatibility is good for most applications, even though there are some tools specific to a compositor (e.g. waypipe for wlroots based compositors).
For the case of minimal window managers, there is wlroots, which is designed to be used to write a compositor. The first line of the wlroots repo[1] states: »Pluggable, composable, unopinionated modules for building a Wayland compositor; or about 60,000 lines of code you were going to write anyway.«
And there are at least three [2] actively maintained compositors based on wlroots, Sway being the most popular and the one for which wlroots was initially developed.
I believe that there won't be too many incompatibilities, because there are three (four, including Weston) Wayland compositor stacks in development (GNOME's, KDE's, and the wlroots-based ones), each with its own use cases. Using protocol extensions only usable on one compositor would limit an app to a subset of Linux desktops (and mobile), so where possible, most apps won't use them.
PS: I just remembered one case of fragmentation: ActivityWatch, a tool that keeps track of which apps are used for how long and tracks idle time, won't be able to support GNOME (Mutter), because the GNOME devs won't implement the necessary freedesktop protocol extension. Sway (and KDE?) does, so it works there. One solution would be to write a GNOME Shell extension, but yeah, it's not as easy as on X.org (the reason being privacy, in this case).
The biggest problem with Wayland is that it has the wrong philosophy ("Every frame is perfect") and the wrong abstractions ("Everything in the universe is an array of RGBA values"). If your foundations are flawed like that, you are doomed from the start. What remains is a very expensive and cumbersome way of blitting bitmaps.
People with decades of experience, each, in the Linux and Unix graphics stack, have determined that the most efficient thing to provide, from a display server perspective, is a way of blitting bitmaps (specifically, of compositing bitmaps representing viewports into a final screen image) and that, if there is rendering to be done, it should be done client side. This is the consensus of those with domain knowledge, and it is what is supported by the broader community (toolkits, DEs, etc.). It's also how the major toolkits have been doing things anyway, irrespective of how X11 actually works: draw everything client side and send it to the X server for display. So, your X server is really effectively nothing more than a shitty Wayland compositor.
And people wonder why Linux lags behind Windows and macOS in terms of desktop smoothness and quality.
When all is said and done, you may as well remove the legacy cruft (drawing and filling primitives, the fucking X font architecture, etc.), delegate remote display to a protocol better designed to support it, such as RDP or PipeWire, which only gets loaded when necessary (which it isn't for 90% of users in 90% of use cases), and streamline the display server itself to just what modern clients actually need, and that's Wayland. Wayland just brings Linux, barely, up to the state of the art set by Windows, macOS, iOS, and Android -- which have prioritized perfect frames and smooth compositing and presentation since forever ago.
Sure, it has better performance if there is not a networking protocol in the middle.
What I like about X, and what is missing in Wayland, is that you can run the X server on one machine and connect to clients (applications) on another.
And if you think about it... these days, with the cloud, it could have been the killer feature! Nowadays it's normal to have programs that run in a VM in the cloud, accessed with some sort of remote desktop client. X11 would have been more performant even over a not-so-fast network connection, because the rendering is done on your local PC and only commands pass over the network.
And if you think about it for a moment, isn't that how web browsers work? You have a server that sends some HTML to the browser (nowadays JS and other stuff), and the browser does the job of rendering. If you compare the X server to a web browser and the X client to a website, isn't it very similar? And this is the model that is winning nowadays.
Meanwhile, we're designing a new graphical server that cannot be used over a network, in a period when we are returning to the era of mainframes, the era when X was created (granted, nowadays it's not a big computer in a room but a multitude of VMs in the cloud).
The problem is not X; the problem is that X was never used to its full potential, because we are so used to the (to me, suboptimal) model that Microsoft/Apple proposed to us.
You could have had monitors with an integrated X server, like the serial terminals of the past: no fans, cables, and stuff on your desk, just a small ARM processor with a GPU to render things on screen, and a single network cable going to a computer sitting anywhere else in the house/company, or in the cloud.
No need for HDMI or other display interfaces: just run a network cable to each monitor, plug the keyboard and mouse into the monitor, and you are done. The GPU is in the monitor; doesn't that make more sense? How much money could that have saved a company? Wouldn't it have been more efficient than thin clients connecting to an RDP Windows machine that has to have a GPU just to render the remote session? Or large installations, like displays used for public information: why use a full-screen web browser (it's overkill) when you could have simply launched an X server on that particular display, launched an application on a server, and connected to the IP of that particular monitor to display everything you wanted?
The architecture of X was modern 40 years ago when it was invented. In some ways it predicted the future, and now we are discarding it for something that tries to imitate Windows/macOS (badly, because those operating systems just work well and Wayland doesn't). Is it worth it?
> these days, with the cloud, it could have been the killer feature!
My impression is that latencies kill X performance. Having a server under my desk is one thing, but running even something like xedit from halfway across the world is painful. It may be possible to update these tools to be less synchronous, but I'm not sure this is a great use case, certainly not for new applications.
> No need for HDMI or other display interfaces: just run a network cable to each monitor, plug the keyboard and mouse into the monitor, and you are done.
This would be great. Every monitor has at least a frame buffer, and there is no need to push every pixel to the monitor 60 times a second. The worst-case scenario is something like a sporting event, where you need to push every pixel to the screen 60 or more times per second, but most of the time there's no such need.
Latency kills it, but it's mainly design decisions (made with local networking in mind, not the internet) that can be improved upon. NX has shown that, and that's only a fix on top of the old protocol.
I'd love to see a new design that puts network transparency in the current internet age front and center.
I remember working in a room with 20 big-screened X terminals connected to one server over one shared 10 Mbit 10Base-T. And it worked amazingly well.
This was when Windows was in its infancy. I'd love to see that kind of vision again.
Depends on the kind of application. The advantage of X compared to RDP/VNC/whatever remote desktop protocol is that you transfer only drawing commands, meaning that if you are looking at a static screen, no data is transmitted, and if only a part of the screen updates, only that data is transmitted. The load on the network is lower.
Nowadays, with modern video compression, sending video the way RDP does doesn't require a lot of bandwidth either and has acceptable latency, to be fair, but with X it would be even better (and it is: if you've ever used X over SSH, it works great).
> The advantage of X compared to RDP/VNC/whatever remote desktop protocol is that you transfer only drawing commands, meaning that if you are looking at a static screen, no data is transmitted, and if only a part of the screen updates, only that data is transmitted.
This is true of RDP also. The initial versions of RDP were basically GDI over the wire. Of course it's been expanded since then to include DirectX calls, etc.
> The load on the network is lower.
That is incorrect. RDP is actually usable over an internet link; X11 is far too chatty and ridden with roundtrip latency for that use case. Even VNC does better over the wire than X.
Interesting; in fact, I've always wondered how RDP performs so well.
> That is incorrect. RDP is actually usable over an internet link; X11 is far too chatty and ridden with roundtrip latency for that use case. Even VNC does better over the wire than X.
Yes I use it over an internet link and it works reliably.
But the main difference between RDP and X11 forwarding is that RDP forwards the entire session, while with X11 you forward a single application. With X11 forwarding you can indeed run different X clients on the same X server. That can be an advantage in an era of microservices, because you can see an X client as a microservice, and thus have a workstation (an X server) run plenty of X clients, each one on a different machine/VM/container.
The people who write and maintain X decided that life is too short to write or maintain X. Wayland is their way forward.
Web browsers (and Electron) are replacements for NeWS, not X really. That's part of the reason why Electron won't go away; the architectural advantages are too great: have the server push running code to the local display to take advantage of local acceleration. NeWS was awesome in its day, and it kind of lives on in the form of browsers and Electron.
But they are still maintaining it. Also, the improvements that Wayland brings are not really specific to it, because they are mostly due to a newer Linux kernel feature, KMS, and you can use libinput with X as well.
I don't really see the point of Wayland; to me X was just fine. Sure, maybe it was time for X12, but was a completely new display server needed?
Only just barely. And they told us it could stop at any time. All the developer energy and support is behind Wayland, so that's what you should be using.
> I don't really see the point of Wayland; to me X was just fine. Sure, maybe it was time for X12, but was a completely new display server needed?
The developers closest to the graphics stack say yes, so I defer to their expertise.
Really, this has been discussed to death many, many times. The arguments for X (or an X-like architecture) invariably come from people who are ignorant of the actual issues involved. Here's a video by Daniel Stone that addresses the main points; note that it was made 8 years ago and people are still arguing the points: https://youtu.be/RIctzAQOe44
As far as the graphics stack maintainers are concerned the debate is pretty much over, and Wayland won. X will get little to no developer attention going forward. Unless you want to take responsibility for the X server, your choices are to get with the program and get on Wayland, or find your use case completely unsupported.
(Note that when it comes to large projects like Xorg, corporate sponsorship is critical. The corporations are putting their money behind Wayland, not X.)
> Sure, it has better performance if there is not a networking protocol in the middle.
That's not even true. The DRI3 X11 extension uses the exact same buffer-swap mechanism as most Wayland compositors. Running locally, you already get the best of both worlds with X11.
> So, your X server is really effectively nothing more than a shitty Wayland compositor.
A couple of things: 1) If you use DRI3, you lose network transparency. KDE apps run like a pig stuck in shit over network links because Qt on X was built to take advantage of the massive speed of DRI3.
2) Having the X server, window manager, and compositing engine in separate processes introduces latency due to context switches on the hot path. Wayland fixes this by making the display server, window manager, and compositor all one process.
Wayland solves problems you do have, you just don't know you have them.
> 2) Having the X server, window manager, and compositing engine in separate processes introduces latency due to context switches on the hot path. Wayland fixes this by making the display server, window manager, and compositor all one process.
That also means you have a single process which, if it crashes, takes your entire desktop session down with it. In the old X days my compositor used to crash a lot (good old days of Compiz with a ton of effects), but everything else was still functional, and I could simply restart it without losing my entire desktop session. Or if you changed settings on GNOME, in the past you could just restart the WM; with Wayland, of course, you have to log out and log in again.
Also, things that were simple on X (for example, having an application that captures the screen) are difficult in Wayland, and the only way to do them is to incorporate that functionality into the window manager itself, which becomes a monolith very quickly.
Speaking of latency, to me this argument is stupid. Like I said, in the old days of Debian 6, with a Core 2 and integrated graphics, I used to run GNOME 2 with Compiz; I had smooth graphical effects and it ran fine, with memory usage of less than 100 MB at idle.
Nowadays we have hardware that is orders of magnitude faster, and yet we have problems that back in the day we didn't.
> So, your X server is really effectively nothing more than a shitty Wayland compositor.
No, it is a much better Wayland compositor, because unlike XWayland it provides full backwards compatibility. You even need much less boilerplate code to get a file descriptor on X11 compared to Wayland.
> Having the X server, window manager, and compositing engine in separate processes introduces latency due to context switches on the hot path.
This only makes a difference for rare events like moving/resizing windows. For static window positions it is the exact same path as Wayland. In practice, Wayland historically had much worse latency than X11; this was only fixed a few years ago. Latency has low priority for the Wayland developers, and as such Wayland does not beat uncomposited X11 in latency to this day.
The stuff that's not backwards compatible in XWayland is mostly already broken by X compositors anyway and needed to be reimplemented, so that's why there wasn't much incentive to fix that. I've seen some of your comments around and I'm really struggling to figure out what your use case is.
"This is only makes a difference for rare events like moving/resizing windows."
This is actually wrong; that type of split affects the rendering of every single frame. Also, uncomposited X11 is a bad idea for a number of other reasons; the only reasonable comparison to make is composited X versus Wayland, which is what all the benchmarks I've seen aim for.
"What I like about X, that is missing in Wayland, is that you no longer can run the X server on a machine and connect to clients (application) on another."
You actually can do this with Waypipe. Also, I'll mention this again: most Wayland implementations include the X server (as XWayland) and support this as a backwards compatibility option.
"If you compare the X server to a web browser and the X client to a web site, isn't very similar? And this is the model that is winning nowadays. ... Why use a full screen web browser, it's overkill, when you could have simply launched a X server on the particular display, then launched an application on a server and connected to the IP of that particular monitor to display everything you wanted?"
It's really not overkill, though; people build web apps because the stack actually works well. Developers seem to really want to use those browser features. X, on the other hand, is very old and hasn't kept pace. The most obvious example I can think of is OpenGL: indirect GLX doesn't really work any more and hasn't for quite some time. It can't really be updated either without complicating everything, because any performant use is going to require loading code into the server. If you want remote GPU rendering, the best way to achieve that currently is to use WebGL and WASM (and later WebGPU when that's ready).
"we are discarding it for something that tries to imitate Windows/macOS (badly, because these operating systems just works well, Wayland doesn't). Is it worthed?"
Well, worst-case scenario, Wayland is just a minimal way to get your browser window on the screen. I think this is another big misconception that people have. Wayland generally sits at a lower level in the stack than network applications, and it really has to be this way if you want to support hardware-accelerated clients. Wayland is made for the case where you've already decided you have a GPU buffer and you want to put that on the screen. You could build something else on top of it that runs over the network (and this is what web browsers do, it's how XWayland works, it's how your VNC/RDP client works, etc.), but somewhere in the pipeline those will need that fast local path where they render to a GPU buffer. And that's where Wayland comes in. It's not that the developers are trying to copy Windows/macOS; it's that you have to do this if you want to get a good experience out of a modern GPU.
> people build web apps because the stack actually works well
Or because there is no alternative. Building graphical software is difficult because you have to interact with graphics. But what if we had a protocol where you simply open a network socket and write messages on it that say "write this text here", "draw a line from X to Y"? Well, that is how X clients work.
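For what it's worth, that simplicity is still visible through Xlib, the thin C binding over the wire protocol. A rough sketch (error handling omitted, compile line approximate); each call below becomes one or more protocol requests written to the server's socket, and the server does the actual drawing:

    /* Build roughly with: cc line.c -lX11 */
    #include <X11/Xlib.h>
    #include <string.h>

    int main(void) {
        Display *dpy = XOpenDisplay(NULL);   /* opens the socket named by $DISPLAY */
        int scr = DefaultScreen(dpy);
        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0, 320, 240,
                                         0, BlackPixel(dpy, scr), WhitePixel(dpy, scr));
        GC gc = DefaultGC(dpy, scr);
        XSelectInput(dpy, win, ExposureMask);
        XMapWindow(dpy, win);

        XEvent ev;
        for (;;) {
            XNextEvent(dpy, &ev);
            if (ev.type != Expose) continue;
            const char *msg = "write this text here";
            XDrawString(dpy, win, gc, 20, 40, msg, (int)strlen(msg)); /* "write this text here" */
            XDrawLine(dpy, win, gc, 20, 60, 300, 200);                /* "draw a line from X to Y" */
            XFlush(dpy);                     /* push the buffered requests onto the socket */
        }
    }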
> Wayland generally sits at a lower level in the stack than network applications
That to me is not a good thing. One of the main advantages of Linux/UNIX systems in the past was that X was a userspace application, and if X crashed the entire computer didn't crash like Windows does; you restarted the X server and you didn't lose your work. Of course modern desktop environments crash along with X, and that is a bad thing (but an X client can, and should, if the X server crashes, just try to reconnect to the new X instance and redraw its window, like you reconnect to any other socket, or, worst-case scenario, keep running just without the GUI!)
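For the curious, the hook Xlib gives a client for noticing a dead server is the IO error handler. A very rough, non-production sketch of the idea; Xlib documents the handler as not returning, so the sketch just longjmps back to its own setup path and leaks the old connection:

    #include <X11/Xlib.h>
    #include <setjmp.h>

    static jmp_buf reconnect_point;

    /* Called by Xlib when the socket to the X server is lost. */
    static int on_server_lost(Display *dpy) {
        (void)dpy;                        /* the old connection is unusable now */
        longjmp(reconnect_point, 1);      /* jump back to the setup path below */
    }

    int main(void) {
        XSetIOErrorHandler(on_server_lost);
        setjmp(reconnect_point);          /* a reconnect attempt lands here */
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) return 1;               /* the new server isn't up (yet) */
        /* ... recreate windows, redraw everything, run the event loop ... */
        XEvent ev;
        for (;;) XNextEvent(dpy, &ev);
    }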
"But, what if we have a protocol where you simply open a network socket and write some messages on it that says 'write this text hear', 'draw a line from X to Y'? Well, that is how X clients work."
I mean, that is also how the Web Canvas API works. If you want to control this with some messages on a TCP-ish socket then you can use websockets. The web browser already appears to be a superset of all the networked functionality of X.
"One of the main advantages of Linux/UNIX systems in the past was that X was a userspace application"
Wayland is still a userspace application too. The protocol itself is what sits at a lower level of userspace than X did; of course, the compositor can also implement high-level features as it sees fit.
> I mean, that is also how the Web Canvas API works. If you want to control this with some messages on a TCP-ish socket then you can use websockets. The web browser already appears to be a superset of all the networked functionality of X.
Yes, in a very inefficient way: you end up using 1 GB of RAM to do something that could have been done with 1 MB. Not important for desktops, but what about embedded applications?
> Wayland is still a userspace application too. The protocol itself is what sits at a lower level of userspace than X did; of course, the compositor can also implement high-level features as it sees fit.
Wayland is based on KMS, which is indeed a graphics implementation in the kernel itself.
I'm not sure where the 1 GB figure comes from. An empty tab on Chromium seems to take about 30 MB. Minimal "Hello world" GTK and Qt programs are around the same size.
Modern Xorg also uses the KMS API. If you don't want any fancy features and just want to draw some lines, then you probably want to skip Xorg and Wayland altogether and use KMS directly.
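If anyone wants to poke at that route, the entry point is the kernel's DRM/KMS API via libdrm. A minimal, hedged sketch that only enumerates the connected outputs and their first advertised mode (the /dev/dri/card0 path is an assumption; actually putting pixels on screen additionally needs a dumb buffer plus drmModeSetCrtc):

    /* Build roughly with: cc kms.c $(pkg-config --cflags --libs libdrm) */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    int main(void) {
        int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);  /* assumed device node */
        if (fd < 0) return 1;

        drmModeRes *res = drmModeGetResources(fd);
        if (!res) { close(fd); return 1; }

        for (int i = 0; i < res->count_connectors; i++) {
            drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[i]);
            if (!conn) continue;
            if (conn->connection == DRM_MODE_CONNECTED && conn->count_modes > 0)
                printf("connector %u: %ux%u @ %u Hz\n", conn->connector_id,
                       (unsigned)conn->modes[0].hdisplay,
                       (unsigned)conn->modes[0].vdisplay,
                       conn->modes[0].vrefresh);
            drmModeFreeConnector(conn);
        }
        drmModeFreeResources(res);
        close(fd);
        return 0;
    }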
For some applications you want "every frame is recent" and can tolerate some glitches as a trade-off for not having perfect frames. Also, displays have different pixel pitches, subpixel arrangements, color spaces, etc. A line from A to B might look completely different on different monitors. By offering only RGB arrays as an abstraction, Wayland pushes the need to care about those differences down to the clients, which is a mistake.
Additionally, a desktop system in general is much more than a bunch of bitmaps blitted together. Applications need to interact, and such functionality has to be tightly integrated into the compositor with standardized protocols, otherwise it will be impossible to have it at all. Taking screenshots, drawing to the root window, or knowing the coordinates of other programs' windows are some examples.
I tried to come up with issues (out-of-gamut colors? optical properties other than opacity, like turning a portion of the screen into a mirror? not correctly modelling color filter arrays?). The best I could come up with was that it's not VR-ready, which might be an issue in 15+ years.
Wayland makes the wrong abstractions and does not offer vital functionality usually needed on a desktop system. As a result, every toolkit/compositor has to implement that additional functionality itself, and most of the time it is incompatible with the competition for no apparent reason.
As an exercise, I recommend trying to write a native, universally applicable Wayland application that takes screenshots.
Wayland as a display protocol (correctly) started out with a tiny core, which can display the contents of a window and manage available extensions by version.
This has grown into a really great abstraction, where multiple implementations can later decide on new protocols/extensions to support. And they have already defined plenty of such extensions, and they are compatible with each other. Remember, these are protocols: there is no need to implement them yourself; they can be put into a library, like wlroots, and you can build your compositor on that base without any repetition. Also, having multiple implementations just means there won't be implementation-specific quirks to rely on.
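To make the registry mechanism concrete, here's a minimal, hedged sketch (wayland-client assumed installed; compile line approximate) that connects to the compositor and lists every global it advertises; this is how a client discovers which extensions, and which versions of them, are actually available:

    /* Build roughly with: cc globals.c $(pkg-config --cflags --libs wayland-client) */
    #include <stdio.h>
    #include <wayland-client.h>

    static void on_global(void *data, struct wl_registry *reg,
                          uint32_t name, const char *interface, uint32_t version) {
        (void)data; (void)reg;
        printf("%u: %s (version %u)\n", name, interface, version);
    }

    static void on_global_remove(void *data, struct wl_registry *reg, uint32_t name) {
        (void)data; (void)reg; (void)name;   /* not interesting for a one-shot listing */
    }

    static const struct wl_registry_listener listener = {
        .global = on_global,
        .global_remove = on_global_remove,
    };

    int main(void) {
        struct wl_display *display = wl_display_connect(NULL);   /* $WAYLAND_DISPLAY */
        if (!display) return 1;
        struct wl_registry *registry = wl_display_get_registry(display);
        wl_registry_add_listener(registry, &listener, NULL);
        wl_display_roundtrip(display);     /* block until all globals are announced */
        wl_display_disconnect(display);
        return 0;
    }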
As for screenshots specifically, while the base use case is trivial enough, recording the screen is most definitely not (it needs proper synchronization). PipeWire is the project that can solve the issue properly, and it has indeed been working correctly for about a year already, with screen sharing even in proprietary apps like Teams.
> As for screenshots specifically, while the base use case is trivial enough, recording the screen is most definitely not (it needs proper synchronization). PipeWire is the project that can solve the issue
If I understand the OP correctly, this sort of philosophy is deeply troubling to some people. For simplicity-oriented folks, trivial things should be trivial, as a matter of principle. If some trivial thing becomes slightly less convenient so that complex stuff may be possible, this is an unacceptable compromise, or at least a very worrying one. When taking static screenshots requires an entire "project" that is "just ready this year", the threshold of unacceptability has long since been surpassed.
I get that the Wayland folks do not share this worldview and that they want to do the right thing, even at the expense of sacrificing things that were previously easy. But this rubs other people very much the wrong way.
"If I understand the OP correctly, this sort of philosophy is deeply troubling to some people. For simplicity-oriented folks, trivial things should be trivial, as a matter of principle."
I suspect those people don't matter to this conversation and would not be working on things related to Linux graphics at all. Once you start involving DRM and Mesa (or the proprietary nvidia drivers...), everything gets really complicated, and it's not easy to come up with a one-size-fits-all approach.
In particular: the lack of GBM/dmabuf support prevented having any kind of consistent API for efficient screen capture when using the nvidia drivers, but I think that is changing slowly.
wlroots does implement a very simple screenshot protocol; the reason I didn't mention it is that it is not cross-implementation. But there is nothing inherent in Wayland that would prevent this functionality — they just try to find a good abstraction they can all stick to (as these APIs will likely be used for a long time), and the GNOME and Plasma people have a different view on that.
(Also, if the choice is between one complex program that does all the things we need, and a simple program plus that complex one, there is more total complexity in the latter case, so the former may be preferred.)
> Is the issue that compositors don't implement the `screencopy` protocol themselves or use the implementation provided by wlroots?
In general, when your diagnosis of a problem is "the people using my product are all just too lazy to use it right", the problem is almost always the product itself.
> As a result, every toolkit/compositor has to implement that additional functionality itself, and most of the time it is incompatible with the competition for no apparent reason.
Or, compositors can use these wonderful things we have called dynamic libraries, and simply link against implementations of that desktop functionality, such as wlroots -- which implementations, by the way, will be far less crufty and ad hoc than the equivalent X11 solution!
Much of the reason why X is the way it is is that, back in the day, Unix lacked dynamic libraries in general, so in order to share an implementation of, say, graphics primitives, the best way was to write a server that implemented them and have clients communicate with that server. Now that we have dynamic libraries, we can share a single implementation of graphics rendering across multiple programs and still have all the speed advantages of doing all the rendering client side!
You're mistaken; that's not the role of Wayland. If you're looking for a universal way to take screenshots, screencast, etc., you're looking for xdg-desktop-portal. There are implementations for wlroots, GNOME, Qt, and others.
A side note, but I really hate this ideology of 'x does one thing' outside of command-line tools. This is a display server we're talking about. It should be able to enable normal desktop usage: maybe not everything, but core features like brightness control, screen sharing, screenshots, etc. Otherwise every WM that comes along is going to have to reinvent the wheel. There are tons of really great window managers out there whose authors have looked at all the work it would take to get running on Wayland and have simply thrown their hands up in frustration.
It seems to me the only people who really benefit from the switch to Wayland are the Wayland devs themselves.
I, as a user, do not give a rat's ass whether it's a protocol or a server. I don't want to lose functionality just because some group of devs prefers to do a rewrite of something.
I've used GNOME Wayland and Sway. It's pretty damning that so few other window managers support Wayland, due to the sheer difficulty of the task.
Wayland is a protocol specifically for compositing window managers; the point of it was to simplify the implementation of window managers that already existed as X compositors, e.g. Mutter, KWin, Enlightenment. If other window managers weren't doing that, then it wouldn't really make sense to port them to Wayland, because that also entails redesigning them as compositors. I know many older X window managers never bothered to implement a compositor and just recommended people use Compiz, compton, picom, etc.
Someone could build a compatibility layer based on picom or something, and that has been talked about for a while, but I don't think it will happen unless somebody funds it with some real dollars. And I'm skeptical of whether people using those window managers would even pay for it at all; based on the comments here, it seems they would rather put the money towards keeping Xorg alive.
wlroots is an implementation of Wayland protocols — if GNOME and Plasma implement the same protocol according to the specification, you only have to write code against that protocol once.
Xorg was designed decades ago. The architects of Xorg are dead, retired, or overwritten with their stupider future selves; you're talking about maintainers.
The life of a package distributor is one of constant build changes. Among them, the switch from autotools to meson is probably not an especially disruptive one. For the ecosystems I've helped maintain, it certainly wouldn't be.
Packaging frameworks like Debian and RPM coevolved with autotools, especially when it comes to things like cross-compilation, standard build flags, etc. Anything that moves away from autotools is, AFAIU, generally considered a PITA for maintainers of the big distributions. Modern build tools optimize for the problems of massive corporate centralized repositories and build pipelines, which often bundle dependencies and otherwise approach most problems from an entirely different angle. Nuances related to portability, cross-compilation, filesystem hierarchies, dynamic library versioning, etc, are typically afterthoughts if they're considered at all.
OTOH, while it's a PITA, it's certainly one to which package maintainers are accustomed. Autotools began to lose mindshare among younger programmers and younger projects many years ago.
IME, as someone who has done a lot of portability work, both for my own projects and others', including writing more than my fair share of Debian and RPM builds (though not as an official package maintainer for a distro), boilerplate stuff is rarely the issue. It's all the random, niche problems that invariably crop up at least once or twice with every project, even smaller ones. Every build is broken somehow. Given that basic reality, what matters is how often you can route around brokenness without patching the upstream build, and, when you must patch, how easy it will be to implement and maintain those patches.
Building X was already complex and annoying. While I hate autotools, I didn't find Meson to be particularly better. If anything, this is a lateral change. Instead of m4, shell, and makefiles, Meson has Python, some Python packages (managed with pip), and Ninja. I guess Meson is faster, but I'm not convinced that justifies changing the build system, especially since the speedup isn't too significant.
In my own experience working on the X server, a clean ccache-based rebuild is a couple of minutes faster with Meson compared to autotools. These minutes add up quickly; switching my workflow to Meson paid for itself the same day.
The last time I wanted to build anything Xorg-related, I found the fragmented repos to be a barrier as well.
Both mean the project loses out on casual development -- ideally I would check out the "latest" code for the ecosystem, build it easily, verify my bug/problem still exists, make a change.
I'm not sure any added friction at this stage is a good idea, so I hope those with experience of the codebase are making a wise decision by changing the build process to one that is less widely available.
But that's not because of Meson. It's just because Mesa authors wrote a code generator in Python; you'd need Mako even if the build system used autotools or hand-written makefiles.
Okay, my bad. I just tried a pre-Meson version of Mesa and you're right. I thought that Mako was some Meson extension...I didn't really look into it much. I was just annoyed when the build broke and Meson printed a message that looked like it was missing a dependency. Then I was doubly annoyed when pip told me Homebrew (or Linuxbrew) would break in the future. It seemed like Meson was a PITA, but that was my first experience with it. I was wrong and I apologize.
If you're packaging RPMs, what you need is approximately to change a few %configure macros to the corresponding %meson macros. Though X.org is large enough that there will probably be some weirdness.
If your software is distributed as a source tarball, and the build system follows established conventions like honouring $DESTDIR and doesn't do anything weird like downloading extra stuff from the internet, then creating packages is super easy, at least with RPM.
Most of the time you just write out a bit of metadata, list your dependencies, use standard macros for your build system to do the actual building and then list the files that you want to use from the build (potentially putting them in subpackages), and that's it. RPM even supports specifying dependencies like pkgconfig(library) or perl(Thing) so that the actual package name that provides it does not matter.
I haven't done Debian packaging in ages so I don't know how it compares.
On Debian it's the same. I use Debian packages as the means to ship code to test devices, and the most time-consuming thing when converting the X server from autotools to Meson was the default configure flags (Meson and autotools use different formats).
Two: Python and ninja/samurai (samurai doesn't have any dependency other than a C compiler). Almost any program the size of Xorg will likely already depend on Python anyway.
> Almost any program the size of Xorg will likely already depend on Python anyway.
The last time I built X by hand, there was one... one single X component whose build depended on Python amongst the many, many components of X: XCB. So I had to install Python just for that one module; let me tell you, I wasn't happy at all with that choice. The icing on the cake: the Python scripts were not compatible with Python 3 and also mixed spaces and tabs for indentation, so I had to write patches to get the stuff to compile.
I had to deal with this recently, though with Homebrew on macOS. The Meson build failed. I had to manually install Mako with pip, which also installed MarkupSafe. Then pip bitched that Homebrew (or Linuxbrew) will break in the future for some reason. So next time I build X, I can look forward to that. :/
My understanding (which could be very wrong) is that Meson on its own has no external dependencies like Mako, but that (by virtue of being Python) anybody can write a terrible build that introduces additional dependencies. Is that correct?
If so, that's unfortunate, but it isn't that different from what happens in autotools land -- I've had plenty of builds fail because the configure step fails to check for `$tool` and then expects it to be present at build-time.
For people convinced that X is irreparably insecure, I would direct you to the implementation in Qubes.
On Qubes, a secure management environment runs as a VM called dom0 on Xen, a hypervisor. dom0 (or, on the upcoming 4.1 release, another VM) runs a Linux and manages the physical display and input devices, and an X server that only it connects to, for desktop operations. User-level applications are always run in other VMs that have no direct access to hardware. Each such appVM runs its own X server, headless. So, the only programs that talk to the physical X server are desktop widgets like the XFCE "panel". appVMs have no access to those.
When an app opens a window in its X server, a memory mapping shared with dom0 is provided. The app's X server writes its pixels into that shared memory, and dom0 copies from that shared memory to a corresponding window on the physical display. dom0 delivers input events to an appVM's X when they occur within a window the appVM controls.
Importantly, each appVM has no access to any other appVM's X server, window contents, or the GPU, or input; everything it does other than making and deleting windows is via raw bits copied in memory without interpretation. Thus, appVMs are wholly isolated from one another except via (virtual) network routing.
You may object that this would make interaction very slow and laggy. Perhaps surprisingly, it does not, at least on modern hardware, and when running non-time-critical programs. Certainly, browsers (including youtube pages) and similar programs -- wireshark, transmission-gtk, gitk, evince, system-config-printer -- work fine. Even mpv does fine with movies at 2880x1620 resolution. (4K is just out of reach, on my 5y-old laptop.)
I don't know what the brave new world of Wayland will look like on Qubes. The same, I expect. Ways to securely virtualize access to the GPU are, to my knowledge, still a research topic. Maybe Vulkan operations can be forwarded safely? Each shader's memory would need to be protected from others', or operations sequenced with mappings swapped in and out.
Spectrum-OS is a re-imagining of Qubes with much lighter-weight app VMs that each host just one app, just while it is running, without each having a whole Linux kernel and systemd in it. Spectrum-OS is still under development.
I found a neat tool recently called x2x. It lets me forward keyboard/mouse movement from my laptop to my RPi-connected TV over SSH. So my laptop ends up doubling as a very fancy remote control. :)
E.g., I just mouse over to the right edge of my screen so I can move the mouse on the TV, and then type in a search string to bring up a video to watch in the browser on the TV.
1. Can this be done with Wayland currently? (Currently = works in some LTS version of a popular distro like Debian or Ubuntu.)
2. Glad to see people still working on fixing bugs and making improvements in Xorg-Server!
Thanks a lot to all the sponsors: https://github.com/sponsors/gitclear