I don't have the knowledge to judge these specific issues, but the transition from X11 to Wayland was the worst I've ever seen. Worse than the Python 2 to 3 debacle, and it seems even worse than the Perl 5 to 6 transition (which was resolved by not doing the transition).
I just tried out Wayland again on my latest install, about 1.5 years ago, because I really thought I should give it another chance since people keep saying it's finally supposed to work.
My attempt did last longer than previous attempts. But in the end, I gave up due to all the bugs, again. Broken input, broken rendering, broken shortcuts, broken accessibility (not just a thing for those losers who chose to be born disabled) ... it's like Wayland isn't even trying. It's hit-or-miss whether native Wayland programs or XWayland-using programs are more broken.
If you want stability and/or features, X11 remains the only choice, even in $CURRENTYEAR. ... actually, it's quite possible that all the X11 devs moving toward Wayland is responsible for the significant decrease in X11 bugs I've seen recently ... or it might just be that there's more RAM.
In theory, Wayland is beneficial for the small fraction of users with a high DPI monitor. In practice, if you try to enable it, you get a small fraction of programs that obey it correctly, a moderate fraction of programs that badly mix scaled and unscaled elements, a small fraction of programs that end up double-scaling somehow so everything is too large, and a large fraction of programs that remain tiny. Your machine is much more usable if you just leave everything small and sit closer to the monitor, occasionally using per-app controls that work just as well on X11 anyway.
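The per-app controls I mean are mostly just the toolkit scaling environment variables; a rough example (the app names are arbitrary, use whatever you actually run):

    # GTK apps take an integer scale; GDK_DPI_SCALE can adjust fonts fractionally on top
    GDK_SCALE=2 gedit &
    # Qt apps accept a fractional scale factor directly
    QT_SCALE_FACTOR=1.5 okular &

Both work fine on plain X11, which is the point.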
I have not actually seen any other benefits of Wayland, people just tell me they allegedly exist. Sandboxing windows away from each other is pointless since all clients are unsandboxed with the same UID anyway. "All in one process" is exactly the opposite of a feature; it just means that all crashes are fatal instead of usually recoverable like they are on X11.
Maybe Wayland should spend a decade or so running its protocol to a server inside X11 first to shake out the bugs?
Sure, which are completely solved by Xwayland. If your app doesn't work under Wayland, then fine. Lots of apps don't, because they're old and difficult to update. But X11 apps work completely seamlessly under Wayland. Every major desktop right now is running many X11 apps under Wayland.
I am a KiCad user, not just a random speculator. The problems are not solved by XWayland. For example, KiCad uses different windows to represent different views of the circuit and circuit board and warps the cursor according to the view you are looking at. XWayland doesn't solve this, because it only allows warping within a single window. I know there is new warping code coming out, but I don't know if it will ever get into the LTS OS we use at my work.
The obvious right choice if you're on an LTS OS where Wayland isn't in a good enough shape is to continue using X11 sessions. Very few things are dropping support for X11 right now, and on an LTS OS you presumably would be insulated from that. Obviously you can't benefit from anything Wayland improves on, but I suspect that's not a huge problem.
I'd guess an LTS release x years from now would be a different story. Even next year possibly, based on the pace things are going lately.
Arch will presumably pick up GNOME 49 as soon as it's released, and so GNOME on Arch will also drop X11 session support this (northern hemisphere) autumn.
I know that most Ubuntu users run LTS versions, but still, those are probably the 3 most widely-used Linux distros in the Western world, and as such, I think the statement that "very few" things are dropping X11 is false where it applies to Linux distros.
When I said that, I meant applications and UI toolkits are not dropping X11 support. Non-LTS OSes will definitely be dropping X11 support pretty soon, since yes, KDE and GNOME are both throwing in the towel. To me the timeline seems about right for that too.
> Obviously you can't benefit from anything Wayland improves on, but I suspect that's not a huge problem.
Indeed, that's not a problem at all, because there are zero such things I care about. The problem is that a lot of distros and DEs are dropping support for X even though Wayland still isn't viable at all for so many people, and LTS isn't forever.
That seems like a solved problem, though. LTS distros will drop X11 when the Wayland session is viable for most people, which is very nearly, but not quite, the case today. When it does happen, it really shouldn't come as a sudden surprise.
You're always free to step up and support continued X development. Nobody else wants to, because the code is truly terrible, and that's been the main driver for distributions dropping support.
KiCad kicked this press release out because Red Hat broke Mutter, which broke XWayland, which caused a whole bunch of us to file bugs against KiCad (and other applications) that were the fault of Wayland.
This caused a whole bunch of application developers to have to waste a bunch of time running down a bug, one that made it completely obvious that Red Hat does ZERO testing of whether their changes break anything in the XWayland space.
That's not the fault of Wayland, it's a GNOME problem. Wayland being just a protocol has the annoying effect that you get DE-specific bugs. It also means that after switching DEs, you might escape some GNOME-specific bugs.
I've been happily using KiCad under XWayland on Sway, I've had no complaints from relatives about the stability of KDE under Wayland, and I hear Hyprland is nice too.
Xwayland has been slow and buggy in my experience. It's very far from seamless. I know one user who stuck with GNOME for decades (he was an "I don't care, just draw the stuff on the screen" kind of guy) until LibreOffice became unusable because of the Wayland transition, and he actually switched back to FVWM over it.
Does Xwayland support running firefox on a remote server and opening it on my local Wayland machine?
I'm hoping the answer is yes, because every single time I've tried in the last few years, it failed and it's most definitely NOT a niche feature of X11 for me: I use it all the time.
I think overall it's a niche feature, and I'm honestly not sure if XWayland allows X forwarding. I know it preserves pretty much all X behavior, including grabbing attention from other windows and such if you give it permission. I don't know how the implementation works, specifically, like whether each application is under its own X screen.
But, I think if you really rely on X forwarding you should keep using X. The last time I used X forwarding the performance was significantly worse than newer protocols like RDP. X is a very chatty protocol, which performs poorly on a lot of networks.
See my comments about Wayland team not listening to their users.
It is probably a niche feature FOR THEM.
Now, I most certainly agree that X over network is horribly inefficient.
However:
1. RDP is a freaking mess on Linux. It may be much better at the protocol level, but all the clients are horrible, and it misbehaves horribly across WANs and/or firewalls. And setting the whole fucking thing up is certainly *way* harder than typing the casual one-liner "export DISPLAY=<ip>:0; firefox&"
2. There is therefore no alternative to X11-over-the-net, and in this 21st-century world of everything-in-the-browser enshittification, using X11 over the network is something I hit and absolutely rely on about once a week when configuring remote servers (sketch below).
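(For the record, the ssh variant of that one-liner is about as casual, assuming sshd on the server has X11Forwarding enabled; the hostname here is just a placeholder:

    # -X enables X11 forwarding; ssh sets DISPLAY on the remote end (e.g. localhost:10.0)
    ssh -X admin@some-remote-server
    # then, in that remote shell:
    firefox &

Same effect, but tunnelled, and without opening the local X server to the network.)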
Some of them are not really "open" issues anymore:
- The dockable widget use case is covered by xdg-toplevel-drag[1], which has been implemented by multiple libraries including Qt 6 (this is blatantly obvious if you try to drag a tab in Chrome on KDE or GNOME). You can also fall back to normal drag-and-drop, with the downside that the user can't drag the window by the handle.
- A new protocol was just added for pointer warping[2], too.
- Not going to lie and say the surface occlusion hanging GL thing isn't annoying, but it is indeed solved by using fifo-v1[3] and commit-timing-v1[4], or the equivalent stop-gap protocols that existed before those were merged, since not everything is there yet.
Some of these problems even had workarounds prior to the proper protocols being merged; SDL3 already had a way to warp the pointer by (ab)using the existing pointer constraints API.
KiCad for some reason chose to paint the picture that all of these are status quo issues that are not getting solved... but the rate of progress has been pretty good actually.
Some of the issues they list are also very blatantly KiCad/wxWidgets issues and not really anything inherently to do with Wayland. Seriously, "Graphical glitches" and "Application freezes and crashes" are not problems caused by your Wayland compositor.
You seem knowledgeable in the domain, so I hope you don't mind me picking your brains about the two things stopping me switching to Wayland.
1. AutoKey. I use this to expand abbreviations for long words and phrases I use often. It relies on being able to insert itself between the keyboard and all userland apps, and this is apparently impossible under Wayland.
2. SimpleScreenRecorder. This relies on being able to access the window contents of all userland apps, and again this is apparently impossible.
Would I be right in thinking that both trip over because Wayland enforces application privacy, preventing applications from accessing other applications' resources? And if so, why isn't there a "user root" level for apps that do need to interact with other apps?
The first Google result for 'autokey wayland' is someone recommending https://espanso.org/ , which looks like it has good Wayland support. And you need only look at OBS to see that screen video capture is perfectly possible on Wayland.
Who is saying those are impossible use cases? I think your two apps have just not updated, that happens often with software.
It's not impossible, but it requires extensions to the protocol. Historically the main headache has been that support across different compositors was inconsistent, and they may not even use the same extension, which means a developer looking to make such a tool would in practice need to implement many different interfaces to make it work. That tended to mean it didn't happen (I think most devs kinda get as far as looking at the interfaces, seeing a bunch of drama between different compositors about what the correct way to do it is and whether it should even be done at all (GNOME), then decide to go for a nice walk instead). I think screen recording is now reasonably well supported and standard; I don't know about input interception and simulation.
The quick answer is that Wayland, while it has certain design provisions for privacy and security, doesn't really enforce anything on its own, it's just a set of protocols that applications can use to talk to a display server. The display server is free to do whatever it wants. Unfortunately this is poorly understood due to it being generally poorly explained.
I'll start with screen capture because it is easier. This one can be done on basically any Wayland compositor by using desktop portals + PipeWire, with the downside that applications must ask permission before they can capture the screen or a window. On KDE, XWayland apps can also capture the screen or a window, but it will also require a permission prompt. On some wlroots-based compositors, there are special protocols that allow Wayland clients to see other Wayland top-level windows and capture their contents directly without any permission prompts; for example, with OBS you can use the wlrobs plugin.
In fact, screen capture in OBS will be more efficient than it was in X11, as it will work by passing window buffers through dma-bufs, allowing zero-copy, just like gpu-screen-recorder on X11. OBS is a bit overkill for screen recording, but I still recommend it as it's a very versatile tool that I think most people wouldn't regret having around.
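To give a concrete feel for the wlroots side of that, a rough sketch with wf-recorder and slurp (both assumed installed; on GNOME/KDE you'd go through the portal-backed picker instead):

    # record the whole output to recording.mp4; stop with Ctrl+C
    wf-recorder
    # or record just a region you select with the mouse
    wf-recorder -g "$(slurp)" -f region.mp4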
Now for AutoKey. This one is possible, but I'm not 100% sure what the options are yet. Typing programmatically is certainly possible in a variety of ways; wlroots provides virtual input protocols, and there are other provisions for accessibility tools and the like. However, it seems the main approach people have taken right now is to use something like ydotool, which uses uinput to inject input events at the kernel level. It's a bit overkill, but it definitely will bypass any security in your Wayland compositor :)
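A rough sketch of that route, assuming ydotool is installed and ydotoold is allowed to open /dev/uinput (which in practice means root or a udev rule):

    # start the daemon that owns the virtual uinput device
    ydotoold &
    # inject keystrokes at the kernel level; to the compositor this looks like a real keyboard
    ydotool type 'Kind regards,'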
The more proper way to support this sort of use case would actually be to interpose yourself into the input method stack somewhere. I don't know if anyone has done this, but someone DID try another route, which is to implement it on top of accessibility technology. I haven't tried it so YMMV, but I believe it is relatively close to what you are looking for.
Though, it has the caveat that it only works with applications that properly support accessibility technology (I would hope most of them...)
> why isn't there a "user root" level for apps that do need to interact with other apps?
Truth be told, Wayland being inherently capability-based, all of this could be implemented. wlroots implements all of the protocols you'd imagine would be in that group, but they're just passively available to all applications by default. (I think they may support lower-privilege applications now, too; there are protocols that convey security context etc. for sandboxed apps.) The wlroots protocols are very useful, so they're also being implemented elsewhere.
If you wanted, you could grant just as many capabilities to a Wayland app as before, and give apps the ability to interpose themselves between everything else, get and set absolute window positions, etc.; it's all up to the compositors. Personally I think over time we'll see more provisions for this, since it is useful even if it's not needed 99% of the time. Just don't expect too much stuff to work well on GNOME; they can be... challenging.
> The quick answer is that Wayland, while it has certain design provisions for privacy and security, doesn't really enforce anything on its own, it's just a set of protocols that applications can use to talk to a display server. The display server is free to do whatever it wants.
I love having to detect at runtime the compositor I'm using (and its version) and have bespoke code paths to work around their various bugs and omissions.
Definitely a recipe for reliable, usable, maintainable software.
> Unfortunately this is poorly understood due to it being generally poorly explained.
This reads like a "missing missing reason"[1]. People do understand it, and they explain why it's a dealbreaker. Wayland has had a decade and a half to grow some consensus and make these very basic things that work under X11 work. Instead of doing that, they're now relying on the main distributions just giving up on (the hardware) X11 servers instead of fixing this. I don't care if only one or two compositors that I don't run support the one thing that I need. That doesn't help me because those compositors don't implement other functionality I need. Having a stable, agreed upon, universally consistently implemented base of functionality that application developers and toolkits can rely on is a good thing.
This is a complete clusterfuck, and that's why there's user feedback. Trying to frame it as "people just don't understand" isn't productive. They do, and their criticisms have some validity. It's up to the Wayland devs to see if they care, and historically, they haven't.
> I love having to detect at runtime the compositor I'm using (and its version) and have bespoke code paths to work around their various bugs and omissions.
> Definitely a recipe for reliable, usable, maintainable software.
Honestly, tough. I didn't like dealing with Xlib/xcb, the weird error handling, or broken ICCCM implementations, but all of that stuff is typically lower level than most developers need to care about, since most developers are going to be using toolkits. If you need or want to go lower level, Wayland is what you get now. You don't have to do anything, you can always take your stuff and go home; the free desktop, though, is not going to be moving back towards X11. The rest of us will keep working to improve Wayland until it stops hurting.
Modern users have high DPI displays, variable refresh rates, multiple monitors, and panels with HDR color space support. X11 was just never going to work well for this. KDE Plasma 6.3, on the other hand, handles any of these situations quite well and I've been using it on a daily basis.
There are many users who can't even do real work without proper color management; it's so bad that for Blackmagic users the typical workaround is additional hardware. Retrofitting good color management into X11 is just not something anyone wants to try to do, it's not fit for the task. We have to move on, and Wayland is the best path forward for the foreseeable future.
> This reads like a "missing missing reason"[1]. People do understand it, and they explain why it's a dealbreaker. Wayland has had a decade and a half to grow some consensus and make these very basic things that work under X11 work. Instead of doing that, they're now relying on the main distributions just giving up on (the hardware) X11 servers instead of fixing this. I don't care if only one or two compositors that I don't run support the one thing that I need. That doesn't help me because those compositors don't implement other functionality I need. Having a stable, agreed upon, universally consistently implemented base of functionality that application developers and toolkits can rely on is a good thing.
Not a big fan of people throwing out psychology terms half-heartedly, but whatever.
The key misunderstanding is thinking that Wayland stops you from doing anything, but really from an end user's standpoint, the right question is "Can I do x on KDE?" because it is your compositor that really defines what your desktop is capable of in the Wayland world.
Developers have to deal with fragmentation by design. Wayland is not a superset or subset of X11, it's an entirely different thing altogether that happens to also cover the use case of applications talking to a display server. It also covers things like kiosks and embedded systems that X11 could be jammed into but was never a great fit for. Technically you're not guaranteed xdg-shell but that never seems to bother anyone.
This is not an accident at all, though; it's an intentional strategic choice. Giving application developers fewer low-level tools was a logical choice when looking at the kinds of hacks X11 applications resorted to in order to implement features the desktops did not nominally support. This acts as a forcing function to make desktop systems properly support the features that applications need, so that those features can properly integrate into the system. There's no equivalent to layer-shell in X11.
It is not expected to be a thing application developers are always happy about. People are rarely happy when agency is taken away from them.
> This is a complete clusterfuck, and that's why there's user feedback. Trying to frame it as "people just don't understand" isn't productive. They do, and their criticisms have some validity. It's up to the Wayland devs to see if they care, and historically, they haven't.
This was not a response to user feedback, it was a response to a question that revealed a misunderstanding. It's just that simple.
> Wayland is not a superset or subset of X11, it's an entirely different thing altogether that happens to also cover the use case of applications talking to a display server.
This is the core issue, it seems. The Wayland transition is kinda like if Volkswagen said "we're now going to stop making cars, and focus entirely on making the best gearbox possible". Well what are people that just want to get around going to do?
> Developers have to deal with fragmentation by design. [...] This is not an accident at all though, it's an intentional strategic choice.
And this is what I referred to in my original post, when I said that I think this decision was a very poor decision. People developing applications have enough on their plate, they don't need more fragmentation to contend with. Especially cross-platform application developers. I know, I've been one.
In my view it would have been much better if Wayland had sought to reduce fragmentation of the Linux desktop. Instead it seems the Wayland developers decided to turn fragmentation up to 11 by just implementing a minimal core set of protocols and let the DE folks figure out the rest for themselves, and application developers deal with the fallout of that.
At least that's how it looks from the sidelines. On Linux I'm just a user that wants the applications I want to use to work. I've been running a Linux desktop as a secondary machine for two decades now, and based on that experience my feeling is that it robbed at least 10 years of development from the community.
I get that X11 just didn't have a future, but to me it seems the choices the Wayland developers made meant the replacement happened much slower than it could have. That is, we could have had a much better Linux desktop today than we do, had they made different choices.
Alas, we'll never find out. Meanwhile my primary desktop remains Windows, at least for another 5 or so years. Perhaps the Wayland-based desktop is ready then.
> This is the core issue, it seems. The Wayland transition is kinda like if Volkswagen said "we're now going to stop making cars, and focus entirely on making the best gearbox possible". Well what are people that just want to get around going to do?
Even if it is a bit annoying, this pain is a temporary one. Users didn't really need to know what an X server is or how to use it, and likewise they don't really need to know what Wayland is... except for one thing, which is that during this transition they need to know what the trade-offs are for "GNOME X11 Session" vs "GNOME Wayland Session" and so forth. But otherwise, neither of these things was meant to be marketed to end users in the first place. Unfortunately, though, users' intuition is failing them: there is a much bigger difference between "GNOME Wayland Session" and "Plasma Wayland Session" than there is between "GNOME X11 Session" and "Plasma X11 Session", and it's not clear or obvious at all.
> I get that X11 just didn't have a future, but to me it seems the choices the Wayland developers made meant the replacement happened much slower than it could have. That is, we could have had a much better Linux desktop today than we do, had they made different choices.
To be honest, I don't know the answers. I think a lot of people see me routinely defending Wayland and think the reason is that I personally think it's wonderful, but actually I think Wayland is ugly and made many bad decisions. The wire protocol has no native 64-bit integers. I think that the GNOME folks are dumb for not accepting server-side decorations as a reality and figuring out how to make Mutter handle them, even if it involved working out out-of-process stuff for it. Dealing with the GTK/GNOME developers while trying to work on a problem was so frustrating that I flat-out deleted my GNOME GitLab account. I also think it is unfortunate that we already have a fair amount of cruft in the form of unstable, staging, ext, etc. protocols that are sometimes available and sometimes not. Even with all of that in mind, it's so blatantly obvious to me that Wayland is the future for Linux, at least right now. There's no better option emerging, and frankly there are no true deal-breakers with Wayland... just challenges.
Admittedly, part of the reason why the case for Wayland feels relatively weak now is just because the X.org server got a lot more usable in the last decade or two. When I first jumped on Linux, the X11 server we used was XFree86. There was no hotplugging; if you wanted a new monitor or keyboard to work, you had to restart the X server. Once hotplugging was added, it was unstable for a long time and pretty routine for plugging in a monitor to crash. Multi-GPU situations are still pretty hackish though they do sort of work. In 2013, X11 worked on touchscreens, but it was extremely unstable, and it wasn't uncommon for things to get "stuck", or the X server to simply segfault when touching the screen. Until 2021, variable refresh rate was basically unusable for multi-head configurations, since the primary head drove the buffer flipping. The Linux graphics stack improvements with DRM and KMS became a part of the X11 experience, as did the work done on libinput to improve touchscreens and pen tablets. It worked so well that many people wonder, why not just keep doing X11 then?
I just think there's no future in it. It's not that it's physically impossible; obviously, Microsoft is able to make all of its old systems cope reasonably well with a modern desktop compositing system, including complex things like DPI virtualization. It's not literally impossible for X11 servers to fix these problems. But a lot of those improvements, including the work on DRM, KMS, and libinput, were investments in the future, and bringing them to X.org happened to improve it quite a lot along the way. Trying to retrofit something like DPI virtualization or color management into X11 would be a massive, unthinkable time investment, mostly in service of a protocol and codebase that already felt dated by the late 90s. It took years just to make a good color management protocol for Wayland, where it's actually something that slots well into the design! And it's not just "ha ha Wayland developers dumb"; the thread is huge and sprawling, with legitimate concerns mostly surrounding how to do color management right. Spending that time on X.org just feels like a waste. If you try to clean up the unbelievably vast amounts of legacy cruft, you will lose compatibility with a lot of older software, somewhat defeating the point of keeping X11 around. If you try to work around it, you will surely wind up spending an awful lot of time trying not to break things that haven't been updated since George Bush was in office. (In some cases, Senior.)
Maybe I can't make the case to everyone for why X11 is such a bad investment. I know a lot of people have a negative knee-jerk reaction to the idea that all legacy code must be thrown away and rewritten in Rust and other trendy things, and even something as old and weird as the X.org codebase just isn't enough to convince them that this is not one of those cases. Even in that case, I don't think the case for Wayland is really as weak as it seems based on its flaws. To be fair, I think most people complaining about Wayland are fundamentally right, but often frame things in a fundamentally poor way. Very few of the limitations of Wayland compositors that exist today are inherent, and many of them are getting resolved on a month-to-month basis. When developers claim "this can never be done because I don't have something like SetWindowPos", that's just annoying. Any feature can be done without direct access to these sorts of facilities, and I think that Wayland pushing window management tasks back to the window manager is something that will age very well even if it is painful at first.
What I will definitely acknowledge is that for application developers, it's annoying that you can do one thing on Windows, macOS, X11, BeOS, basically every other desktop system on Earth, and then you have to special-case that thing on Wayland. That's definitely going to be painful for developers who want to write software that runs directly atop Wayland. However: open source software does not have the benefit of being able to move fast and jump on emerging trends as they start to become popular. I think that the developers of Wayland saw all the way back in 2008 that the longer-term future was going to be different when it comes to window management. Obviously, they saw that desktop compositing would eventually be everything. But I think they also saw, for both privacy and security reasons and for the benefit of better window management, that giving applications direct access to position windows is an artifact of features that really ought to be provided by the window manager. If they wanted Linux to have a desktop system that felt "modern" in 2030, they needed to start making the moves toward that in 2008, because it would take an enormous amount of time not only to make Wayland happen, but also for the ecosystem to adjust to it. I think they also reasoned, correctly, that the vast majority of developers would not be using Wayland directly, but would be using GTK, Qt, SDL, etc., which I think is still true.
Also true is that some software and definitely some UI toolkits will never fully be able to cope with the windowing model that Wayland presents them, but this isn't actually the dealbreaker everyone seems to think it is.
First of all, I believe XWayland is here to stay for a long time. We're really not all that close to even being able to remove XWayland, and I don't think anyone really wants to soon. If developers are unable or unwilling to work to make their apps work on Wayland, that is fine. X11 is still here. Some developers will bravely start huge efforts, and it will wind up benefiting the whole ecosystem by possibly introducing new protocols, providing tons of experience reports, and probably making compositors more robust all at the same time. A good example is Godot. Other software is just waiting for now and forcing XWayland; an example of that is Krita. A lot of eager enthusiasts may be bugging developers to start getting things going, but I think by and large there's no real reason to rush. For most users, XWayland is good enough.
Secondly, though, I think that even old toolkits that rely on concepts like the ability to directly introspect and modify window geometry can be made to roughly work. Obviously, Qt 5 and later generally work on Wayland, with some limitations (well enough that most apps don't really have to change anything). The winewayland.drv driver makes Wine generally work on Wayland, with some quirks (probably a lot fewer than you're imagining, especially as of late). While you obviously can't do everything the same way, it is possible to virtualize old APIs and use workarounds. I don't think all of the old apps are forever doomed to XWayland, as it may seem today.
Will the Wayland desktop be "ready" in 5 years? I think it will be "ready" in closer to 1-2 years, at least if the current rate of progress keeps up. Rather than being worried about the Wayland desktop being ready from the compositor/protocol/application end, I'm more concerned about the continuing progress on the NVIDIA end, which has been promising from multiple angles but still a bit slow moving.
> Modern users have high DPI displays, variable refresh rates, multiple monitors, and panels with HDR color space support. X11 was just never going to work well for this. KDE Plasma 6.3, on the other hand, handles any of these situations quite well and I've been using it on a daily basis.
Everything from your list except HDR worked in X11 years before it started to work in Wayland compositors. Maybe not perfect, but better than any Wayland compositor could do the same until only a year or two ago.
The amount of time it took to get at least feature parity with X11 in Wayland stack is ridiculous. I'm not really sure it wouldn't be better if all that time went into X11.
> Everything from your list except HDR worked in X11 years before it started to work in Wayland compositors. Maybe not perfect, but better than any Wayland compositor could do the same until only a year or two ago.
- X11 has no DPI virtualization or any way for windows to communicate their DPI awareness. DPI is all handled by toolkits and desktops. Poorly. With multiple monitors, scaling is often buggy or entirely incorrect, only really working for one of the monitors. (How much you notice this obviously depends on what your desktop is like. Inside GNOME and KDE, with "native" GNOME and KDE apps only, it should work decently most of the time. If you're on i3wm or anything like that, though, you're pretty much guaranteed to have a bad time.)
- X11 indeed does have support for variable refresh, but it doesn't really work very well. In my experience it has basically always been buggy and weird and caused things to misbehave and go choppy for no reason. Before they introduced AsyncFlipSecondaries in 2021, the behavior with multiple displays was essentially unusable. No matter what you do, you pretty much just have to live with tearing on VRR+multi monitor. Much better off disabled.
- And yep, X11 just doesn't have any form of HDR/color management.
> Maybe not perfect, but better than any Wayland compositor could do the same until only a year or two ago.
As far as DPI scaling and VRR goes, this is false. I adopted SwayWM 7 years ago (from i3wm) specifically because I wanted my laptop to actually work with DPI scaling when I docked it. I actually don't know when VRR started to work right in SwayWM because I didn't yet have a VRR display, I just know that it already worked right when I tried it.
Note that Wayland didn't have proper fractional scaling support for much of that time, yet I had my laptop at 1.5x scale the whole while. You'd be hard pressed to notice, because apps could still just render at 2x scale, and then it would get virtualized and downsampled. That might sound bad, but that's also exactly what you get with macOS's display scaling, and people usually consider macOS display scaling to be pretty good. (To be more particular, macOS renders apps at either 1x or 2x, then scales the entire framebuffer to get fractional scales. Clever... I guess.)
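For reference, the per-output scale knob in Sway is all I ever needed; a rough sketch (the output name is just an example, check what yours is actually called first):

    # list outputs with their current mode and scale
    swaymsg -t get_outputs
    # set the laptop panel to 1.5x; the same line also works in the sway config file
    swaymsg output eDP-1 scale 1.5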
As far as color management goes... well, X11 definitely couldn't do it better than any Wayland compositor, since it can't do it at all.