Long time ago, I think around 2010, there was an experimental HTML renderer that would open a GTK app in a browser, rendering its UI with plain HTML+CSS.
For the time, it was just jaw-dropping. For context, I think it was before Atom, VS Code, or Electron (or possibly even Node.js?) was a thing.
Don't know if that HTML renderer is still around or not.
Oh yes, that's it. Thanks for posting. I think it's so beautiful. It is one of those things that have pure artisan value, of craftsmanship. Whether it is used widely or not, I wish for this backend to stay around.
Back then, I ran OpenOffice in Firefox and it amazed me.
If nothing’s changed, the Gtk 4 design tool Cambalache[1] by the former maintainer of the now-defunct Glade project uses[2] Broadway to render its design view, because it was there and an embedded Wayland compositor widget wasn’t. Broadway is awesome, but this just makes me sad.
That the easiest way for a UI design tool using a native toolkit and targeting that same toolkit to display a preview of the UI being edited turned out to be embedding the 800-pound gorilla called a webview.
The only reason the tool does that is to support multiple versions of GTK. But the alternative you suggest, embedding a Wayland compositor in a widget, doesn't exactly seem to be the lightest-weight of options.
Not my suggestion, for what it’s worth, but the author’s own (in ref. 2 above). And yes, it’s not precisely fantastic, but on the other hand “embedded Wayland compositor” is for the most part a fancy way to say “dumb RPC passthrough and a single blit”. (That doesn’t mean that it wouldn’t take a whole lot of work to make, just that the eventual result would not be huge.)
Although it uses more HTML and CSS than some similar things, I don’t think it’s fair to call it “plain HTML + CSS”, because it displays more functional characteristics of the pure canvas approach (where you throw away just about everything the browser gives you and start from scratch). My three primary heuristics for proper behaviour are:
(a) using browser scrolling (the web doesn’t expose the right primitives to make a scroll-event-based reimplementation anything but bad, barely noticeable in some configurations but painfully broken in some of the most common configurations);
(b) using browser text rendering (strongly preferably by DOM nodes, but even canvas plus fillText can be acceptable); and
(c) handling links with real <a> elements (no alternative can provide the same functionality).
Broadway fails all three of these (reimplementing scrolling, rendering text on the server and sending images, and I think actually locking up when you try to click on a link). It also fails my fourth and fifth tests, which are (d) handling text input properly (it looks like it just uses key events, not even a backing <input>, <textarea> or contenteditable, so IME composition fails completely and keyboard navigation will presumably be GTK rather than native); and (e) having a meaningful accessibility tree (preferably backed by normal DOM stuff, but not necessarily; I place this one last despite its importance because it's likely to be more retrofittable than the others, though it'll still generally be hard).
I’d count Broadway as suitable for tech demos and for personal use where you know you don’t mind its limitations, but not for any sort of public deployment. It’s basically just an RDP/VNC sort of thing, with a smattering of DOM use. (Also remember that’s how it works—the code is all running on the server.)
> no alternative can provide the same functionality
I agree that real links should always be used to represent links, but if you were to emulate links, surely it wouldn't be that hard? Just allow middle-clicking and Ctrl-clicking and you should have the basic functionality. Is stuff like the ability to drag a problem?
You can’t even emulate middle-click or Ctrl+click correctly! The best you can do in-page is open(), which will typically open a new foreground tab; but in typical desktop configurations, middle-click and Ctrl+click will open a new background tab. Users will notice you disrupting their normal workflow of opening a bunch of things at once and then going through them one by one, and they will get extremely frustrated.
And then beyond that, well. Right click or long press, browser context menu with things like copy, bookmark, open in new tab, open in private window, share… you can’t emulate most of this at all. And hover to see href in the browser’s status bar. And so it goes on.
No, the only acceptable technique is to have the user interact with a real HTML <a>, HTML <area> or SVG <a> element.
I generally agree with you, but I don’t think c) is true. It’s trivial, and most people won’t go the extra mile to make it “native”, but you can properly manage history from JS, and can also add accessibility attributes to make it focusable, etc., so any well-implemented DOM element with a click handler could be just as good as an <a>. Or do you have any other requirements?
Edit: I replied from an older state of HN and hadn’t yet seen your other comments. Fair enough, I didn’t think hard enough about the other edge cases, thanks!
Calling it an "HTML renderer" is a bit of a stretch. At least in GTK3 it just streams pixel data into a canvas element, effectively being the same as VNC with a web viewer.
Lol yeah. In a way, most people kinda got where the industry was moving to, but desktop toolkits moved too slowly and effectively got left in the dust.
There are some fancy bridging modes to run apps in a browser, but the author has also been working on a way to make WASM Wayland apps run directly in the browser too.
Not really. qBittorrent is built on Qt (thus the prefix), and has a hand-rolled web UI in pure HTML + CSS + JS (with a couple of helper libraries, but no heavy frameworks):
I was talking about the web interface, not the GUI; the web interface uses only web technologies of course. I mean I guess they could do a wasm version too but ehhhhhh
I wish GTK hadn't followed the trend of allowing widgets in the titlebar. Some areas can be dragged, some cannot. There's less room for the app and file names. This is not a GTK-specific complaint.
After over a decade of GTK claiming fractional scaling is "impossible", and GTK devs vetoing any fractional scaling in the Wayland protocol, they finally got feature parity with Qt in this area.
Now we just need proper support in Wayland and we'll finally have support for HiDPI in all major Linux DEs.
Both fractional scaling and thumbnails did require massive background work.
The term „impossible“ is therefore incorrect. It required huge changes in the backend which weren’t feasible in the short term. The thumbnails are done already, and the new renderers are approaching.
Keep in mind, developers strive for good solutions. Users need good solutions. Companies are often happy with less reliable solutions because they are forced to be fast. And this less reliable stuff tends to stick around for a long time.
I know how much work this is, and I genuinely appreciate that Gtk has finally reached this milestone. It was the last missing puzzle piece.
It's just sad that Gtk used to have this functionality and dropped it when Gtk3 was envisioned, as it meant the ecosystem was stuck waiting for over a decade.
* what comes preinstalled
* a weird application
* the end
Regarding features - for users - Linux with GNOME has been delivering features for decades which Windows lacks:
* Tabs in the file browser
* fast and modern terminal (TTF fonts, UTF-8, OpenGL…)
* clear concise settings
* maintained application repositories
* no „desktop“ which forces me to arrange icons (WTF?)
Others consider Cortana or ChatGPT „required“. If they think so…
You seem to be very much out of date with the current state of Windows.
> tabs in file browser

Built in for the last 2 years; tabs can be rearranged, moved between Explorer windows, etc.

> fast and modern terminal

Windows Terminal (released 2019 and built in by default) is one of the best terminal apps on any platform. Tabs, GPU acceleration, fonts, seamless multi-shell support (including Linux shells, local via WSL or remote).

> maintained application repositories

The winget repository (released 2020 and built in by default) is now the backend of the Windows Store and can maintain externally installed software; just run 'winget upgrade --all' in your shell of choice.
Also, you can now install any Linux distro you want with a single click using WSL, with full X11 and Wayland support. If there’s a Linux app you prefer, you can use it seamlessly. You can even run a full desktop. I’d almost go as far as to say that Nvidia hardware is supported better through WSL, where the latest drivers “just work” and I’ve never experienced things breaking, than on bare-metal Linux (although that says more about Nvidia than any platform).
The point is that most of these have been available on Linux since 2002, and only recently on Windows. And here it is argued that the special thumbnail feature had been missing on Linux since around 2002.
I’m glad about the thumbnails (it was overdue by a decade) but there are many features and every new feature available is missing somewhere else[1]. Next year we will discuss either missing Flatpak support on Windows (cgroups, namespaces) or missing Vulkan on MacOS or missing built-in Android support on Linux.
[1] or not - because it doesn’t fit well. For example the weird database/library kind of stuff in Explorer which teases users on Windows. I think EXPLORER.EXE has become awfully hard to use. It was usable back in NT4 and NT5.
>After over a decade of GTK claiming fractional scaling is "impossible" and
So it's just as impossible as thumbnails in the file picker? The insanity of GTK, and the resilience of people willing to put up with it, is baffling to me.
An honest „thank you“ to the people who made the thumbnails possible would be helpful. As would getting involved oneself, or spending money. Gtk and GNOME are not companies. People need motivation - from their work or from outside.
These are large entities made up of people, same as companies, not individual devs making a side project. Does that mean their work should not be criticized because their collective is not a registered publicly listed company?
>People need a motivation - from their work or from outside.
Also feedback. And that was my feedback, candid as it may be.
Like I said below, FOSS work doesn't mean it should automatically be excluded from criticism, especially if that FOSS work is big and influential enough that it ends up having an impact on other products and on people's work and lives, like GNOME being the default on most free but also commercial distros.
So please let's stop attacking and gaslighting critics, as if GNOME were some teenager developing his first app for free in his bedroom, who should only be defended and praised for encouragement. It is a giant mammoth project with private and public funding[1], developed by vested professionals of the software industry who know very well what they're doing. They should be criticized when they do something wrong or are being needlessly petty and stubborn with their decisions and arguments, besides being praised when doing something right; there's already enough praise in this topic if you read it.
Constructive criticism is helpful. And yes, some people get it wrong or are stubborn. People don’t always act perfectly, despite good intentions.
I struggle often to find the right tone and interpret it properly.
And I’m happy about Linux, GCC, Vim, Coreutils and Gtk in general. GNOME’s keyboard-centric usage is wonderful.
That said, I’m usually sad about public long-term forks, because they are usually not about learning or improvement but about people failing to communicate. Yes, sometimes their targets simply differ.
Anyone have good news?
Maybe Mutter will support VRR itself. And maybe battery thresholds in a user friendly way. Woohooo
Now we just need to convince the right people to add Type-Ahead-Find back to Nautilus and background transparency officially back to gnome-terminal. Both „back“. There is existing code for both.
"No we just need convince the right people to add Type-Ahead-Find back in Nautilus and background transparency officially back in gnome-terminal. Both „back“. There is existing code for both."
People are willing and ready to spend money on quality software. It is a trillion dollar industry. Open source is an endless circle of misery that needs to be ended as soon as possible.
Possibly, but it wouldn't be the first time that the first category of devs feel like they're the target of such comments. And it's a well-known phenomenon that one negative comment outweighs ten positive ones.
So from where I sit, one might as well not make them. (And I'm pushing back a bit to hopefully support the contributors who I'm grateful to.)
Thank you for moving the goalposts for the chance of a cheap shot comment, but you know very well that's not what I meant.
GTK doing something right doesn't suddenly absolve it of all the wrongs. And just because something is FOSS doesn't mean it's without fault and therefore beyond criticism.
Am I the one being rude? Maybe. Was GTK also being rude for claiming something everyone else was doing at the time, like fractional scaling, was "impossible"? Maybe.
Sorry, maybe I was rude, but someone needs to call out BS claims like this as they probably have enough "yes men" tooting their horn.
> someone needs to call out BS claims like this as they probably have enough "yes men" tooting their horn.
Do they, though? Does it have any effect at all, other than potentially demotivating volunteer contributors? And if so, do those effects outweigh the negative effects?
Gnome's problems are not those volunteers contributing code, but the leadership who decide what gets merged to mainline.
Devs could have implemented thumbnails in file picker over a decade ago, but Gnome leadership was always against it.
Toxic and poor leadership of such large projects should be openly criticized and pointed at, not allowed to hide behind the "poor small volunteer devs" while gaslighting critics.
What have years and years of criticising and pointing-at brought us, other than a couple of burnt-out devs?
And is there anywhere where we can see that this elite leadership suddenly changed their minds on allowing this, rather than a developer seeing an opportunity when a bunch of things got deprecated, which had been postponed for a long time because it creates a lot of work for downstreams [0]?
Or is the fact that they didn't want to haphazardly break those downstreams the poor leadership you speak of?
Please spare us this tired trope. People who don't contribute to FOSS are also allowed to have an opinion on FOSS if they're users of it. Otherwise, how would you know what to improve if you don't allow feedback from users? What you're saying is like car companies thinking their customers' opinions are irrelevant because their customers never contributed to designing parts of a car.
If you're designing a mainstream product that's not just targeting professional SW developers who can contribute back with code, but also average users who will only use it but never contribute, then you will need to be open to mainstream levels of feedback that represent the average user and not just programmers.
Otherwise, let's keep complaining that Linux and FOSS alternatives have only ~2% market share, as if it's somehow the users' fault for only buying Microsoft/Apple and not the projects' for being stubborn about feedback and out of touch with their mainstream users.
I have contributed to and improved more than a few pieces of FOSS software that I used, when I thought they could use an improvement or a bug fix, or even just documentation of the bug. FOSS isn't a company, and a huge portion of contributors aren't paid a dime. It bothers me when people criticize the developers with vehemence; what I'm expressing an opinion on is the nature of belligerent complaints vs. helpful ones. Have a nice day.
Everyone has their opinions. There are so many opinions out there, it isn’t really clear what to do with them. Put them out there, fine, now everybody else just has to decide if they care.
One way to tell if you should care about an opinion is to check and see if the person professing it has taken any concrete actions toward getting the world to align with it.
I remember my cousin telling me how horrible it was to work at a hotel - you really found out how horrible people could be when they thought they were owed something. Staff there obviously had to put up with it because that was their job.
Regarding FOSS, I think some users forget that they're at a free lunch and are unpleasant and insistent about their complaints as if they were paying. They're not entitled to service and don't seem to realise it.
As to whether everyone else should listen - sure - but nobody is forced to work overtime just to right some wrong in OSS software because person A was very upset about it.
In this case, PRs were made for close to a decade, tons of people implemented the feature in forks, etc., but upstream just didn't want it. You can't tell people to just write the code since it's open source, when that's irrelevant here; the code itself wasn't what was missing. Though I still think it's great that it is now done, regardless of the weird history that specific feature had.
There are gtk file picker forks (like Nemo, I think?). But I guess you could argue that they didn't just add the file picker thumbnails. So here is an example of a patch to make thumbnails work:
> Also, why isn't that patch in an MR to GTK (if the author wanted to upstream it)?
These MRs do a minimal change to just add thumbnails. The current file manager uses outdated widgets. Gnome/GTK expects MRs to also replace deprecated widgets in all files touched by that MR. Replacing those deprecated widgets would require a massive refactor.
Okay, sure? I never said otherwise; you asked me for forks and I gave them to you. What's your point? I get that open source needs people to complain less and contribute more; my point is just that while that might be valid in tons of situations, it just doesn't make sense for the file-picker-thumbnails saga.
It's even worse, it's devs (& some users) plainly dismissing fractional scaling as "imperfect" and claiming fullscreen downscaling as the only sensible option with very little justification other than "it's what Apple does".
I've had multiple discussions here on HN with users who wouldn't believe that, given fractional scale ratios, _anything_ other than direct fractional rendering produces significant blurriness. (And note fractional rendering can also produce blurriness, e.g. through misalignment.)
> If your 1200 × 800 window is set to be scaled to 125 %, with the unified renderers, we will use a framebuffer of size 1500 × 1000 for it, instead of letting the compositor downscale a 2400 × 1600 image.
The explanation is a bit confusing to me. I think they are saying that a window that should be drawn as 1500 × 1000 pixels on the screen (because of 125% scaling) measures 1200 × 800 in application pixels. Since OpenGL and Vulkan use floats to render, they might as well render directly into a buffer that can be displayed 1:1 on the screen, by transforming the coordinates in the drawing instructions.
If that's what they are doing, that sounds like sanity finally.
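To make the arithmetic concrete, here is a minimal sketch (a hypothetical helper, not GTK's actual code) contrasting the two paths for the article's example:

    /* Hypothetical helper (not GTK code): sizes for a 1200x800 window at 125%. */
    #include <math.h>

    typedef struct { int width, height; } Size;

    /* New path: render straight at the physical size. */
    static Size direct_fractional(Size logical, double scale) {
        return (Size){ (int)ceil(logical.width  * scale),
                       (int)ceil(logical.height * scale) };
    }

    /* Old path: render at the next integer scale, then let the
     * compositor downscale (which is what blurs). */
    static Size integer_then_downscale(Size logical) {
        return (Size){ logical.width * 2, logical.height * 2 };
    }

    /* direct_fractional((Size){1200, 800}, 1.25)  -> 1500 x 1000
     * integer_then_downscale((Size){1200, 800})   -> 2400 x 1600 */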
Does anyone have a grasp/understanding of how desktop environments work on Linux? I don't. To me, everything seems to get more and more convoluted and tacked-on.
The X Window System was basically the wrong bet on how GUIs and computer hardware would evolve. Its client/server architecture was the opposite of the highly integrated graphics processing model we ended up with.
Instead of cutting its losses early and dumping X11, both the Unix vendors and open source spent far too long trying to make lemonade out of a truckload of rotten lemons. And that’s why Linux GUIs are so far behind.
It’s notable how rapidly Apple was able to evolve their Unix GUI because they were not tied to X11 and instead embraced the integrated model, even designing their own GPUs nowadays.
Isn't this a bit of a non sequitur? The desktop environments could have been made coherent regardless of the underlying window system. Indeed, I think you could say that there were many coherent desktop environments in the past. The catch comes from the fact that there were a ton of competing ideas, with no clear reason to want one over the other.
Apple rapidly evolved their GUI because they are trying to support just the one. And, frankly, they are starting to buckle on the idea that they are fully coherent.
People keep saying this, but I never really had any complaints about KDE vs. Windows vs. macOS. All 3 are perfectly usable; it's only in the past 8 years or so that Windows has gone crazy with the obtuse spying and ads and "MICROSOFT" everywhere. Even that isn't too hard to remove with a download or two.
And polish. With X you can sometimes feel the “layers” separating and slip-sliding around a bit in places that you don’t with a Wayland DE or in macOS. This was especially true in the late 00s when there was less computing power available to brute-force these issues into being less apparent.
As someone not very familiar with Linux, which desktop environment would you recommend with most “polish”? I’m thinking of switching from windows 10 on my new laptop but I do appreciate all the animations in windows. (I could also consider a hackintosh or BSD if they are prettier, especially in movement/animations.)
BSDs aren’t going to be functionally different from Linux in terms of desktop prettiness/animation as the two run the same DEs.
GNOME is probably the most polished in terms of aesthetics, animations, and gestures. The downside is that it’s much more a tablet OS experience with a few desktop affordances — kind of like iPadOS if it were turned into a desktop OS — than it is a traditional DE. As such expect for various power user features to be diminished or absent compared to macOS or Windows. Pantheon also ranks highly here, with similar limitations.
KDE comes in second. Compared to GNOME it still has a number of rough edges in terms of aesthetics but is comparable to Windows in terms of overall capabilities and design.
Third would be Cinnamon, which attempts to blend a more Windows-like desktop with GNOME-like polish, but last I knew this project is understaffed which has resulted in it falling behind a bit. Also requires X11, unlike GNOME and KDE which work well with Wayland.
>The X Window System was basically the wrong bet on how GUIs and computer hardware would evolve.
I disagree. Other platforms did similar things, where clients would command the OS what to draw and the OS would handle all apps' rendering, which allowed for good performance on primitive hardware. Like other display servers, X did evolve to also pass around framebuffers of what the app drew once hardware became capable. X's problems come from being hard to maintain, being a monolith of unrelated concepts, having poor security, etc.
The hardware only became capable in the wrong place. Datacenters are full of heavily shared servers without GPUs, and the days of running everything on your own desktop are over.
I agree that the client/server approach was the wrong way in the end, but saying that the Unix GUI was far behind the others is a bridge too far. The first time I saw composited desktops using the GPU was on Linux. And current KDE Plasma is light years better than the inconsistent, ad-ridden shit hole that Windows has become.
To be fair, the secondary technologies available to support the preferred model now are far more advanced than what Quartz Extreme (in macOS) and DWM (in Win. Vista) had to use, and there's no longer a need to provide backward compatibility with software rendering.
The socket is fine for messages, but not for rendering. Here’s a simplified version of how 3D rendering apps usually work in different environments I have used.
Direct3D on Windows: app renders something and calls IDXGISwapChain.Present. The implementation communicates with the desktop compositor running in the dwm.exe process, dwm.exe renders the entire desktop composed of multiple windows, then communicates with physical GPU to wait for next vertical blank event.
Bare metal Linux with DRM: app renders something, calls drmModePageFlip(), then waits for next vertical blank event with poll() and drmHandleEvent() functions. Embedded Linux developers did an amazing job about DRM/KMS, in my experience the API is both reliable and efficient.
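For readers unfamiliar with that API, a simplified sketch of the loop described above (error handling omitted; the fd and the crtc/fb ids are assumed to come from earlier mode-setting calls):

    /* Present one frame with DRM/KMS, then block until the flip
     * completes on the next vertical blank. */
    #include <poll.h>
    #include <stdint.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    static void page_flip_handler(int fd, unsigned int frame,
                                  unsigned int sec, unsigned int usec,
                                  void *data) {
        /* Called once the new framebuffer is actually on screen. */
    }

    void present_frame(int fd, uint32_t crtc_id, uint32_t fb_id) {
        drmModePageFlip(fd, crtc_id, fb_id,
                        DRM_MODE_PAGE_FLIP_EVENT, NULL);

        struct pollfd pfd = { .fd = fd, .events = POLLIN };
        poll(&pfd, 1, -1);               /* wait for the vblank event */

        drmEventContext ev = {
            .version = DRM_EVENT_CONTEXT_VERSION,
            .page_flip_handler = page_flip_handler,
        };
        drmHandleEvent(fd, &ev);         /* dispatch the flip event */
    }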
The X Window System has multiple different methods (glXSwapIntervalEXT, glXSwapIntervalMESA, glXSwapIntervalSGI), but in practice none of them is reliable, and on some systems none of them is even supported. Unfortunately, this makes rendering tear-free 3D content on X11-based desktops hard to impossible.
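The mess is visible in code: each of the three extensions has a different signature, and any of them may be missing at runtime. A sketch of the usual probing dance (illustrative, no error handling):

    /* Probe the three X11 swap-interval extensions in turn. */
    #include <GL/glx.h>

    typedef void (*SwapExtFn)(Display *, GLXDrawable, int);
    typedef int  (*SwapMesaFn)(unsigned int);
    typedef int  (*SwapSgiFn)(int);

    void set_swap_interval(Display *dpy, GLXDrawable win, int interval) {
        SwapExtFn ext = (SwapExtFn)
            glXGetProcAddress((const GLubyte *)"glXSwapIntervalEXT");
        if (ext) { ext(dpy, win, interval); return; }

        SwapMesaFn mesa = (SwapMesaFn)
            glXGetProcAddress((const GLubyte *)"glXSwapIntervalMESA");
        if (mesa) { mesa((unsigned int)interval); return; }

        SwapSgiFn sgi = (SwapSgiFn)
            glXGetProcAddress((const GLubyte *)"glXSwapIntervalSGI");
        if (sgi) sgi(interval);  /* SGI variant can't even set 0 */
    }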
The originally posted article is about new 3D GPU-based backend in the GTK, which is unrelated to games.
Hardware accelerated 3D graphics is the best way to render pretty much everything on modern hardware, games or not games. High-resolution displays are omnipresent. Even cell phones often have FullHD or more pixels, like 2796×1290 or 2556×1179 in the iPhones.
>Hardware accelerated 3D graphics is the best way to render pretty much everything on modern hardware, games or not games.
A counterexample is video playback. The best way for video is to use the hardware decoder and hardware compositor. You don't want to render the video texture to a quad. Using the hardware compositor requires less computation and less power than using 3D graphics.
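On Linux, for instance, that hardware-compositor path is a KMS overlay plane: the decoder's buffer goes straight to scanout and never touches the 3D pipeline. A hedged sketch (all ids assumed to come from earlier KMS setup; many desktop GPUs expose no usable overlay plane, as discussed further down the thread):

    #include <stdint.h>
    #include <xf86drmMode.h>

    /* Hand a decoded video framebuffer to a hardware overlay plane;
     * the display engine composites it, no textured quad is drawn. */
    int show_video_frame(int fd, uint32_t plane_id, uint32_t crtc_id,
                         uint32_t video_fb, int w, int h) {
        return drmModeSetPlane(fd, plane_id, crtc_id, video_fb, 0,
                               0, 0, w, h,               /* dst rect */
                               0, 0, w << 16, h << 16);  /* src, 16.16 fixed */
    }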
> Microsoft strongly recommends that new code use MediaPlayer or the lower level IMFMediaEngine APIs to play video media in Windows instead of the EVR, when possible. Microsoft suggests that existing code that uses the legacy APIs be rewritten to use the new APIs if possible.
That lower level IMFMediaEngine API they recommend for Win10+ delivers uncompressed video frames in D3D11 textures.
Yes, but note no hardware composition is involved.
The composition is done by the 3D GPU like I said. Either in your app if you render that video yourself, or if you supply a swap chain’s back buffer to IMFMediaEngine.TransferVideoFrame method, dwm.exe will do it.
It's the compositor's job to composite in the most efficient way possible, which in the best case means using hardware compositing layers; if the hardware does not have enough layers, then the compositor will use 3D graphics to combine layers together. Just because a GPU includes a 3D graphics pipeline, that doesn't mean that everything has to go through it.
> Just because a GPU includes a 3D graphics pipeline that doesn't mean that everything has to go through it.
That’s how modern Windows does that in practice. Everything does go through the 3D graphics pipeline, and starting from Win8 it’s impossible to disable.
I’m not even sure modern PC hardware supports these hardware layers, except for one small extra layer for hardware mouse cursor. On my computer D3DCAPS_OVERLAY and DDCAPS_OVERLAY flags are unset. The driver reports to the OS the hardware doesn’t support any hardware overlay surfaces.
>I'm not even sure modern PC hardware supports these hardware layers
Yes, they do. I'm guessing you may be using an older Nvidia GPU. In the case where no overlay is available, the video can go fullscreen and take the only layer from DWM. For PCs the benefit is less about performance/power and more about lower latency.
Tested an AMD Vega 7 inside a Ryzen 5 5600U, and an Nvidia 1080 Ti — dxcapsviewer.exe shows the same result: no overlays are supported. Only the D3DCURSORCAPS_COLOR hardware cursor is there.
> the video can become fullscreen and to take the only layer from dwm
Indeed, exclusive D3D full-screen mode allows bypassing the dwm.exe compositor, even on modern Windows. But I don’t think that’s evidence of any hardware composition being used.
> it doesn't require using 3D graphics to put the texture on the display
In practice, 3D graphics is the best way to put pixels on the display. At this point, I believe other ways are remnants of the old GPUs which had 2D blitting hardware.
GDI is only used by legacy apps. It’s going to stay there for quite a while for backward-compatibility reasons, but modern Windows apps use better GPU-centric APIs to render their GUI.
Specifically, WPF is based on DirectX 9, Direct2D is based on Direct3D 11, UWP and WinUI are based on Direct2D. On my computer, both Chromium and Firefox browsers are using Angle on top of D3D11, Firefox uses Direct2D for canvas only.
Also, even legacy apps that render their GUI with GDI are still using 3D GPUs to an extent. Some GDI operations like BitBlt are accelerated internally by the OS. And the OS composes windows on the desktop with D3D11; the dwm.exe process does that.
This is utter horseshit. As someone who writes cross-platform native (desktop) GUI code, there's nothing about Apple's GUI system that is substantively superior to the one on Linux that has any relationship to XWindow being the underlying implementation or not. Apple has also had to continually evolve their own GUI toolkit(s) because they also failed to adequately target certain new expectations that arose as time passed (animation being a good example).
I'm sorry, but Xorg still doesn't have a way to correctly handle mismatched DPI monitors. I will also take Cocoa over any linux framework any day of the year.
I'm currently using Windows 10, and it handles mismatched DPI just fine. The same with macOS.
There are some old applications on Windows that aren't DPI-aware and look weird, but usually fixable. It's still a million times better than whatever X11 does.
IMO macOS is the only OS where graphics and fonts work the exact way one would expect reliably.
The rest, not so much. Applications work only if they do the right thing. Applications could do the right thing on X11 too; they just don't, like the "old" applications on Windows.
Anyway, you've dragged this away from the general points: issues with Linux GUI toolkits have little to do with XWindow, and Apple's GUI toolkits have had to evolve just as the Linux ones have.
I may have outdated info, but last time I checked, X11 apps, while they could be DPI-aware, didn't handle mismatched DPI well - I had to choose between everything being small on my 4K screen or everything being blurry on my 1440p screen.
I've hard-coded some DPI settings for certain apps that I always display on the same monitor.
Meanwhile, I have zero issues with the same pair of monitors on Windows and macOS. I daily drive all 3 of them.
But it's so fundamental to the design that it's inescapable. The history of Linux GUI is a series of heroic workarounds to hide the misaligned X underpinnings.
Trying to separate X from the client/server model would be like saying: "I like Unix just fine, except files and processes and the shell and the software tool philosophy, really." — You just don't have much left.
I would say X was a success. It could be modernized via extensions, while maintaining backwards compatibility and providing network transparency. And it still works well, while the replacement, which was said to have a much cleaner design and has been under development for 15 years, still causes problems and has many limitations.
I also disagree that the client/server design is a problem. Modern graphics hardware is remote from the CPU for all intents and purposes. So remote buffer manipulations protocol is exactly what is needed.
Before a few days ago, I had never used Wayland. As you said, it causes problems and has many limitations. The solution that works is infinitely preferable to the one that doesn't.
But, a few days ago, I finally switched over, on a modern machine (~ 6 months old), with a modern OS (more than modern -- I'm running a pre-release version). And now, finally, I see what all the fuss is about. Wayland is absolutely beautiful. The rendering and animations are gorgeous, and so so smooth. The whole system is far faster, far more robust, and far more efficient both in computing resources and workflow than anything I was ever able to achieve on X.
I have been using Linux on X11 displays as my primary machine since 1996. I do development work, but not with graphics, where I'm just a regular user. I don't really care about how good the design is or how hard it is to write programs using X / Wayland / whatever. All I care about is how usable my system is. And for me, having reached the Wayland promised land, I now understand what I was missing with X, in a way that I could never have before.
Tear-free rendering, all the time, every time. Frame perfect windows, all the time, every time. A desktop that finally feels immediately responsive, all the time, every time. Under X11, all those round trips to and from the X server add milliseconds to response time, hard to benchmark, but very easy to feel. Worse yet, X11 responsiveness is variable. If the system is loaded, your desktop can lag so badly under X. This doesn't happen with Wayland.
I now have OSD volume controls that work even when the screen is locked. This was never possible with X, and can never be possible.
I now have a lockscreen that, guaranteed, will not leave my desktop unlocked if someone crashes it by feeding it malicious input. X11 is architecturally incapable of providing such guarantees. There were several instances of security bugs where lock screens in X11 crashed in exactly this way.
Wayland has now reached the point where it is a must-have for me. If it doesn't run Wayland, I won't buy it, and I won't install it. Yes, X11 works well after all these years, but Wayland is enjoyable to use in a way that X11 never was.
Funny that you say "gnome only", cuz currently there are "non-gnome only" apps due to a specific protocol not being implemented by GNOME (drm-leasing for VR, at least on main; there is a PR that is explicitly marked not-to-merge, since they want to handle it via an xdg-portal). They are very wary of exposing anything in a way that could very much backfire in the long run, be it as a general implementation or by exposing something they would be stuck with.
The xdg-portal attempt was misguided and I don't believe anyone is pursuing it at this point. Ideally drm-leasing would be managed by the login manager, allowing multiple compositors to lease connectors and run independently on other monitors, as well as being used for VR headsets. https://github.com/systemd/systemd/issues/29078
Sidenote: I hacked the Wayland protocol implementation for GNOME into working, at least for SteamVR, but at least with AMD GPUs there is some serious bug preventing the card from performing properly. It basically throttles itself for no reason and never hits the refresh rates needed for smooth VR, especially since there is no asynchronous reprojection at the moment. So while ideally the drm-leasing problem would be solved already, there are other, even more important problems to solve with Linux VR for now.
I’m not blaming GNOME; I just see it as a big problem in the future: ‘oh sorry, just have to restart Wayland so I can boot into $app’. Any display renderer that has this limitation and expects each individual project to cross-support other renderers is a poor design, at least in my books.
Nowadays most GTK apps look very similar: a sidebar, some actions in the titlebar, a details view. (Same story for Mac apps, 'modern' Windows apps, mobile apps.)
I wonder if a UX toolkit could be completely declarative and semantic: "I need a master/detail view, a list view with the following fields, some actions, ...". At the high level you don't give any positioning or styling; it would automatically use the appropriate system widgets. Then on top you would add some polish by using a bit of CSS, or maybe an escape hatch to get at the native widgets.
Almost everything that is not a browser, a WYSIWYG editor, or a media viewer would fit that mould. The kicker is that from such a description you could easily generate a TUI.
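As a thought experiment, here is a sketch of what such a purely semantic description could look like (all of these types are hypothetical; no real toolkit ships them):

    /* Hypothetical purely semantic UI description -- no positions, no
     * styling, just intent. A backend could map this to GTK widgets
     * or, per the comment above, to a TUI, unchanged. */
    typedef enum { FIELD_TEXT, FIELD_DATE, FIELD_NUMBER } FieldKind;

    typedef struct { const char *name; FieldKind kind; } Field;

    typedef struct {
        const char *title;
        Field fields[4];          /* columns of the list view */
        const char *actions[4];   /* semantic actions, not buttons */
    } MasterDetailView;

    static const MasterDetailView contacts = {
        .title   = "Contacts",
        .fields  = { {"Name", FIELD_TEXT}, {"Born", FIELD_DATE} },
        .actions = { "new", "delete" },
    };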
I'm a bit late to answer but... I've used a bit of XAML and it is not at all what I have in mind. XAML, SwiftUI and QML all have the same problem, in that they mix structure and presentation. A typical XAML app is not easily themeable. Let alone having the ability to adapt to different UI paradigms or toolkits.
A symptom is that you have tags like "Rectangle" and attributes like "Margin". The high-level description shouldn't say anything about presentation, just what you want to accomplish. The building blocks should be like "CRUD Editor" or "List-Detail view".
XAML-based WPF and Silverlight were really nice and eliminated so much boilerplate and plumbing code. Unfortunately, Microsoft management/marketing didn't want to sell an opinionated framework, among other blunders.
I've tried to convince the fledgling GUI toolkit world of rust to take a look at XAML. I'm not going to rehash my latest round of comments [0] just yet, but suffice to say that XAML gives application developers and end users such freedom and enables very nice architectural splits that other options do not.
But this is not really usable, is it? I was expecting a look "native" to a terminal, i.e. a text-based user interface with the common conventions of the medium.
Looks like so much fun working on this :) cool stuff.
When I read about the anti-aliasing I thought: nice, maybe signed distance fields will work just as well for font rendering at arbitrary scales as they do in game engines... (Valve had a nice paper out there on this). There's lots of cool trickery in game renderers, in UI code and for things like rendering decals, that might be nice in GUI code too.
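For the curious, the core of the Valve-style SDF trick is tiny: sample the glyph's distance field and smooth around the 0.5 iso-contour. A CPU-side sketch of what normally runs in a fragment shader:

    /* SDF glyph coverage, per Valve's 2007 technique (CPU sketch). */
    static float smoothstep_f(float e0, float e1, float x) {
        float t = (x - e0) / (e1 - e0);
        t = t < 0.0f ? 0.0f : (t > 1.0f ? 1.0f : t);
        return t * t * (3.0f - 2.0f * t);
    }

    /* d: sampled distance in [0,1], 0.5 on the glyph outline;
     * w: filter width, roughly one texel in screen space. */
    static float glyph_alpha(float d, float w) {
        return smoothstep_f(0.5f - w, 0.5f + w, d);
    }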
I don't get why performance degradations are accepted, though. I do most of my computing on old hardware, and these are features I would turn off if I could, and perhaps they are not even supported by my GPUs.
Please keep in mind that it is unlikely that Microsoft, Apple or Google would discuss these. It would probably be „forget the old API, here is another API“ (Microsoft), „enforced from ${WEIRD_NAME}“ (Apple) or „you won’t get that update“ (Google).
Yeah, reminds me of Grinding Gear Games (Tencent for a few years now): if you read the update notes for Path of Exile 1, you can see "performance improvements" mentioned many times, yet, mysteriously, the game today has effective minimal requirements order(s) of magnitude (if multiplied together) higher than on release!
In GL it's quite easy to accidentally miss the "fast path" and get surprising slowdowns (and conversely, GL can be surprisingly fast when hitting the fast path). What's disappointing though is that the Vulkan renderer only just about reaches the same performance as the old GL renderer; this seems to indicate that the problem sits on the caller side of the 3D API.
It would probably have been a good idea to track performance throughout the implementation and iterate on that instead of "architectural purity".
I don't understand half of the post, but claiming that they're doing this for "architectural purity" sounds ungenerous to me. The post lists a couple of tangible benefits, most of which do, I believe, relate to performance:
> Proper color handling (including HDR)
> Path rendering on the GPU
> Possibly including glyph rendering
> Off-the-main-thread rendering
> Performance (on old and less powerful devices)
(Which is not to say that tracking performance isn't a good idea. It's just that "architectural purity" sounds needlessly dismissive to me.)
This is actually the only case where I tolerate performance regressions: where the previous implementation was actually incorrect, rather than merely old or in need of a rewrite in whatever framework or technology is currently in vogue.
Performance is part of the implementation. Therefore, if it regresses, it is silly to claim it was incorrect before. Seems more likely to claim that tradeoffs were made for performance.
I'm OK with performance leaning on advances in the hardware. I'm also OK with performance dropping if you are pushing more pixels. But we had high-resolution displays years ago, so that is a tough hill to defend.
Performance is a combination of a) required behavior, b) implementation. If an implementation does not correctly meet the requirements, its performance cannot be compared to one that does.
Sure. But I'm approaching this from the assumption that we're actually talking "observed/resultant performance" and not "baseline required performance," i.e. talking about "'excess' performance over the required threshold," and if that regresses due to a correctness issue, I don't see the problem. Now what is "the minimum required performance threshold," I cannot say, but it is surely at least the ability to render on "CPU" at 60Hz.
(which CPU? render what? using how much of the available CPU power? with how many frames of latency allowed?)
Note that I don't fundamentally disagree with you and I cringe when I hear issues dismissed as "premature optimizations" myself.
I think that is fair. I was approaching this from the perspective of "my old machine was responsive, it is becoming less so." Back on the example of Doom, I understood why my machine at the time couldn't run Quake. It would be frustrating to force everyone to that paradigm.
These renderers are not on by default and likely never will be. I have never seen it work that an immediate mode rendering API translated to retained mode becomes faster. It is probably somehow possible, but will require a ton of work and probably changes on the API client side to fix some pathological cases.
> I have never seen it work that an immediate mode rendering API translated to retained mode becomes faster.
I don’t think I get your point here. Gtk 4 is retained-mode whatever renderer you use; Vulkan and OpenGL are immediate-mode (well, kinda) whatever renderer you use. Whatever problems that forces in the new renderers would be just as present in the old ones, wouldn’t they?
Oh, you are right about GTK4! It has switched to a scene graph and retained mode drawing. Somehow, I had never heard of that before. Presumably, it was done to be able to use 3D graphics APIs that all work in retained mode.
Vulkan and OpenGL, though, are in no practical sense immediate mode: You need to draw the whole frame every frame (barring a few exotic extensions for compositors and such), so you need to retain the state of everything in the frame so you can draw it.
Your usage of these terms is completely unconventional and wacky.
> You need to draw the whole frame every frame
This isn’t even true at a high level. You can composite buffers. That’s not exotic. That’s a fundamental operation.
But insofar as needing to actually do all your draw calls at once - that is literally what "immediate" in immediate mode means.
> Vulkan and OpenGL
They don’t specifically retain state - they’re immediate. That’s what immediate mode means. What something higher up does has nothing to do with it.
What would you even consider immediate mode by your definition?
> so you need to retain the state of everything in the frame so you can draw it.
That state can be a procedure and a handful of variables (which in the extreme is all a shader). The point is Vulkan and OpenGL have no say over the nature of that state.
I guess I'm looking at it from a GUI application programmer POV. That OpenGL or Vulkan are, themselves, immediate mode, doesn't matter all that much there. Drawing happens on the GPU, which wants to process large buffers without changing its state in between. An overview of all drawing operations (scene graph, proper use of the term retained mode) with batching and merging is needed to cater to that.
Examples: Qt QML scene graph, GTK4 GSK
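To illustrate the distinction, here is a hypothetical minimal retained-mode node in the spirit of the scene graphs named above (not GSK's or Qt Quick's actual types):

    /* The toolkit keeps this tree alive between frames; the renderer
     * walks it and batches draws by GPU state (shader, texture) so
     * the GPU sees large uninterrupted buffers. */
    typedef enum { NODE_RECT, NODE_TEXTURE, NODE_TEXT } NodeKind;

    typedef struct Node {
        NodeKind kind;
        float x, y, w, h;              /* layout, kept frame to frame */
        struct Node *first_child;
        struct Node *next_sibling;
    } Node;

    /* Each frame: patch only the nodes that changed, then re-emit
     * the (merged, sorted) draw calls for the whole tree. */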
> You can composite buffers
Compositing buffers is still drawing, though
> What would I consider immediate mode?
Basically, a canvas-type thing. You just scribble wherever whenever (limited by convention and correctness, of course) and what you don't touch stays like it is. Drawing happens on the CPU, which is fairly happy to do small operations at the drop of a hat. No need to remember any state for the benefit of the graphics API.
Examples: Qt QPainter, GTK < 4 using Cairo
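And the contrasting immediate-mode style, sketched with Cairo as used in GTK 3's "draw" handlers (cr is assumed to be a ready cairo_t):

    #include <cairo.h>

    /* Immediate mode in the sense used above: pixels change on the
     * spot, and nothing but the pixels is retained. */
    void draw_badge(cairo_t *cr) {
        cairo_set_source_rgb(cr, 0.2, 0.4, 0.8);
        cairo_rectangle(cr, 10, 10, 120, 40);
        cairo_fill(cr);                  /* drawn immediately */

        cairo_set_source_rgb(cr, 1, 1, 1);
        cairo_move_to(cr, 20, 35);
        cairo_show_text(cr, "OK");       /* no scene graph remembers this */
    }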
The GNOME dev team (and by extension GTK) probably don't care that much; IIRC most of them use expensive MacBooks, so many issues get ignored because of "works on my machine" (e.g. several issues with font rendering that don't affect Retina displays).
The font rendering thing is more complicated than that. Subpixel antialiasing doesn't work with animations and I'm pretty sure it interacts poorly with fractional scaling too. The fact that you don't need subpixel antialiasing if you have a HiDPI display just lets them make the tradeoff be "you need a good monitor" instead of having to choose between jaggy text or weird shimmering on animations.
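The mechanics make the conflict visible: subpixel AA samples glyph coverage at 3x horizontal resolution and maps each sample column onto a physical R, G or B subpixel, which only lines up when glyphs sit at integer pixel positions. A bare-bones sketch (real implementations also run an LCD filter to tame color fringing):

    /* cov3x: glyph coverage sampled at 3x horizontal resolution.
     * Move the glyph by a fraction of a pixel (animation, fractional
     * scaling) and this column-to-subpixel mapping no longer matches
     * the panel, producing the fringes/shimmering mentioned above. */
    void subpixel_row(const float *cov3x, int width_px,
                      float *out_r, float *out_g, float *out_b) {
        for (int x = 0; x < width_px; x++) {
            out_r[x] = cov3x[3 * x + 0];
            out_g[x] = cov3x[3 * x + 1];
            out_b[x] = cov3x[3 * x + 2];
        }
    }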
I hope I don't sound bitter, but most decent graphics engine developers have created renderers that are a couple of generations ahead of the open source GUI toolkit renderers. There are several of us who could truly bring next-gen rendering to the open source desktop; however, we're working for gamedev companies (they pay our bills), and we have no time to contribute to open source stacks. If the community could organise a regular budget to pay for such devs, then you'd see significant renderer and toolkit updates. Same with other open source apps.
2D graphics has very little in common with game engines. The problem is very different in many regards. In 2D, you generally have Bezier and other splines as input, a large amount of overdraw, and textures coming from users that complicate VRAM memory management. OTOH, game engines are solving hard problems which are irrelevant to 2D renderers, like dynamic lighting, volumetric effects, and dynamic environments.
I gotta disagree that PixiJS is fast. I’ve worked with a bunch of 2D graphics engines going back to the Flash days, and I found it’s really easy to hit a wall with PixiJS performance, especially if you are using text and filters. I wouldn’t have much issue with it if the cacheAsBitmap feature was reliable, but I found it buggy as heck and it didn’t help performance as much as you would expect. There is no way I would use PixiJS for a full screen game or mobile game.
OK, I've also found that performance can drop in unexpected ways, requiring workarounds to get it flowing again.
And I don't use cacheAsBitmap, but do my own caching. (Not because I knew of flaws, but because I was already doing it.)
And for text I really recommend BitmapText. That is fast. (But not possible with all use cases, sure)
Also Pixi8 with WebGPU will be stable soon, looking forward to it.
But all in all I am really impressed; I used quite a few other JS graphics engines before, and Pixi was by far the best. Or which one did you find better?
What is needed for performance of traditional GUI app rendering? I'm particularly interested in table rendering. Glide and Perspective are both canvas based renderers, but I haven't dug into the internals.
PixiJS is the easy problem, blitting a bunch of premade textures to the screen. The much harder problem is getting curveTo() and text and gradients and strokes, etc.
That is all or mostly possible with PixiJS or an extension to my knowledge.
Edit: but then again, Pixi uses the HTML canvas element for text drawing, which uses the browser's text capabilities. So yes, at some point and somewhere those functionalities need to be implemented.
Sure, canvas can render text, but it's filled with problems and limitations. It doesn't do line wrapping or any of the font rendering correctly. Doesn't do anti-aliasing correctly. It needs so much hand-holding and manual handling to render semi-decently that basically all canvas apps just opt out and use HTML to render text on top of the canvas.
Canvas is almost always the wrong choice if you want to do layouts.
I am a bit skeptical about this. I feel like game UI toolkits and desktop GUI frameworks live in two separate worlds with different expectations; at least, that's my humble experience, having used both in my career.
GTK/Qt are usually good or very good in their integration with the OS: accessibility features, keyboard navigation, handling of features like copy/paste, ... These are the kinds of things that game UI toolkits tend to completely forgo, since they don't need them, and they focus instead on performance, theming, integration with a game engine, ...
Theoretically, you could say that the renderer is agnostic to this, but in practice it is not completely true. There's also the simple fact that you have a limited budget to work on features, and the two camps prefer to work on different features. Having a very fast and accurate renderer is just not as important for a desktop GUI framework as for a game UI toolkit.
Text is also the bane of renderers--there is a reason why we have exactly 3 text shaping engines--Windows, Apple, HarfBuzz. Text is a beast to deal with and is often ill-specified.
Text is also the bane of GPUs--we don't have good GPU-only algorithms for taking a string of text, handing that to the GPU, and having the GPU render that directly to a buffer.
Text is also something that games suck at rendering. SDF (signed distance fields) are considered a good rendering of text in the 3D world and they are blurry as hell.
I do think that the modern GUI world is going in a lot of wrong directions, but you must deal with text accurately to call yourself a real GUI.
How many of these game engines are abstracted at a level where you can swap in a PDF or SVG backend? How many of them support CMYK and print units? I'm only scratching the surface of things a GUI renderer needs that a game engine doesn't.
I'm very skeptical that a bunch of game developers are going to whip together something that crushes Skia in performance without sacrificing a ton of capabilities.
I am not sure if I understand you right, but do you mean rendering to PDF or SVG instead of to the GPU?
If so, are there real world use cases?
"crushes Skia in performance without sacrificing a ton of capabilities"
Same question: which of those capabilities are really in use and needed? Linux GUIs in general are really not a beacon of light in terms of performance or usability. I strongly suspect things could be better if some bloat were removed.
I am amazed by what is possible with PixiJS, a renderer for the Web using WebGL and soon WebGPU. Having something simple but powerful as the base would be my way to go.
Printers often want PDF or PostScript, or if not, some other format that isn't a regular GPU-supported thing. Games rarely support printing; the main use is taking a screenshot, which uses the OS rather than the game engine, and that's great for games. However, if you are writing something where printing is important, you want more control over the output than a screenshot can give. Printers tend to be much higher resolution, and you have to deal with paper size - two things that should be passed back to the application, as you can often make adjustments to better show things when you know those limits. (There are other compromises in printing too that may matter.)
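For what it's worth, the classic 2D stacks do ship exactly this kind of swappable backend. Cairo, for instance, can retarget the same drawing calls at a PDF surface for printing (sizes in points, 1/72 inch; 595x842 is A4):

    #include <cairo.h>
    #include <cairo-pdf.h>

    int main(void) {
        cairo_surface_t *s = cairo_pdf_surface_create("out.pdf", 595, 842);
        cairo_t *cr = cairo_create(s);

        cairo_move_to(cr, 72, 72);
        cairo_show_text(cr, "Hello, printer");

        cairo_show_page(cr);          /* emit the page */
        cairo_destroy(cr);
        cairo_surface_destroy(s);
        return 0;
    }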
Ah, right. Does not sound too complicated, but it is an entirely separate render path. And apparently it is supported now? But I have never seen it put to use.
But is there a real world use case, where this actually was put to use?
(Sorry, I am having flashbacks to the debates about X and Wayland, where it was argued that X is network transparent - except that it hadn't been for a long time, and, or because, no one used it.)
Well, it just so happens that I built a UI toolkit over the last decade, but yes, with performance in mind and not featuritis. But apparently I have no clue; well, sad to hear that. I'll show myself out.
I got the impression from above that this is something possible now, yet I have never seen such a capability. And I think the effort would be way greater than the reward to implement it from scratch..
("Easy" assumed, there is already something there)
The "community" here is people like you - those who also have to pay bills but contribute anyway, in whatever way they can, by using the software and occasionally contributing code.
While yes it would be great if a community could raise funds, that coordination job itself would have to become someone's not-paying-the-bills work.
As much as I love Open source/Free/Libre software and am grateful it exists (and contributing to it, when possible), I've a long held belief that it is the pursuit of the privileged. You need to have the privilege of free time and then the privilege of being able to choose to spend that free time on something that doesn't improve your standard of living and then the privilege of being able to do it consistently.
Sadly I totally agree: Open Source is the playground of people who can afford it.
I benefited a lot from Open Source in my career and life, so I am very thankful to all contributors (and try to give back in money/time when I can afford one or the other).
What really annoys me is that my government does not mandate that software built with tax money must be Open Source.
That would go a long way to fund Open Source and improve the quality.
(I wasn't trying to make a point.)
As far as I know, that's what initiatives like PMPC¹ are for.
I think in Switzerland a law recently passed that seems to go in that direction² (Open Source by default, but with some leniency, as far as I can interpret the text).
According to this³ OSOR report, something similar happened in Italy in 2019.
So, I think we're slowly going in that direction in Europe.
>The "community" here is people like you - those who also have to pay bills but contribute anyway
In principle, yes. In practice, much of the "community" consists of developers paid by companies like Red Hat. So while they do have to pay the bills, they do so through those FOSS contributions.
True. I recognize that my statement was a bit simplistic.
That said, in the context of what the OP was saying, this is unfortunately a chicken-and-egg situation. For someone like the OP, who would like to get paid to do OSS, they'd need to have a reasonably active OSS presence prior to being hired at places like Red Hat. Which is another aspect of my original point about contributing to FLOSS being the pursuit of the privileged.
I'd be very skeptical about that. I work in the game industry, and while our 3D renderers are very good, I have not seen a 2D UI renderer I would consider at all competitive. What are you using for path and pattern rendering?
Game engines can afford to be generations ahead: they might even be targeting specific hardware (a single console release); they typically assume a sandboxed environment that they are in total control of (at best, they'll make a limited and tightly controlled GUI framework for modders); they might have restricted themselves to a limited number of inputs/outputs (a single screen with a specific resolution, gamepad only); and they don't have to worry about unknown unknown uses by other developers...
None of these apply to a generalist renderer, therefore it can only "lag behind" the game ones. (Unless maybe we're talking about the "human side of the question": what are the best designs and layouts for a generalist human/machine interface? Here it's the generalist GUIs that I would expect to be a couple of generations ahead: Xerox's labs, Apple's Macintosh, IBM's Common User Access standard, CERN's World Wide Web...)
You may be surprised to know that the open source Godot game engine has also been adopted by some developers as a GUI toolkit (See Standalone Tools and Applications Made with Godot GUI - https://alfredbaudisch.com/blog/gamedev/godot-engine/standal... ).
I can't think of a segment of user interfaces worse for accessibility than game development. Generally speaking, the game-centric UI toolkits and immediate-mode GUIs are very pretty but completely lack any accessibility hooks, including the ability for a screen reader to even know there is text present. If GNOME switched to gamedev-style UIs, I would probably just go buy a Macbook.
What do you want the community to do? Pitch in with money from their day jobs so a gamedev savior can come in and make things render slightly faster? At the expense of hideous, unmaintainable code that only a gamedev genius could understand?
Game devs are already relatively underpaid for the value they bring to companies. I don't know how an open source initiative could afford even a single professional graphics developer.
Also, you say they are a couple of generations ahead, but do these kinds of software need to be bleeding edge? Even many games don't; and the kinds of software in research labs that do, pay even better than gamedev (and of course require PhDs and whatnot).
> If the community can organise a regular budget to pay for such devs, then you’d see significant renderer and toolkit updates. Same with other open source apps.
Erm, if the Linux community comes together, then it can happen. But the employer of a game engine dev is not part of that community and probably simply does not care.
So if you care and feel the call, go organize something. I care, but have other duties.
That sounds a lot like modern extortion. I sincerely hope this attitude does not creep into FLOSS and Open Source more than it already has. Imagine a volunteer firefighter who has found a well-paying job announcing: "If you'd paid me more, your house wouldn't have burnt down."
A better analogy is volunteer firefighters not cutting it, houses burning down left and right, and a professional firefighter saying "I'd like to come work in your district and help with firefighting, and I could do a much better job than the volunteer guys, but I need to get paid to work".
> I sincerely hope this attitude does not creep into FLOSS and Open Source more than it already has
If you took away the people who work on FOSS because they're paid to (people who otherwise wouldn't contribute, or would contribute at a tenth of the rate), you'd be removing the most prolific and important maintainers and contributors of lots of huge FOSS projects.
Since I just recently completed my certification as a Firefighter I/II (also Wildland FF 2), and have been an open source developer for more than 35 years, with the last 25 years of that being full time and the last 15 having FLOSS as my actual income source, I'd like to comment.
The primary difference between professional firefighters and their volunteer counterparts is hours on the job. When I graduated from the state academy, I knew as much about firefighting as any of my fellow graduates and had been through precisely the same training requirements. However, the gap is going to open up very rapidly, since the career guys will be doing regular shifts every week, whereas I will be answering 1-3 calls a month on average, most of which will not be fire-related. A year from now, the career guys will be even more familiar with everything we learned during our certification training and more, while I will be working hard to remember any of it.
So the question is: how well does this analogy hold for s/w development?
It doesn't.
First of all, the gap between most proprietary development outcomes and their FLOSS equivalents has more to do with UI/UX design questions than actual coding skills. At the source code level, it's generally proprietary projects that are "burning down left and right" (shoddily and quickly built, with inadequate attention to engineering and insufficient caring about one's work due to marketing deadlines).
Secondly, the difference between proprietary developers and their FLOSS equivalents in terms of hours of experience is not deterministic. It's going to be a function of employers, personalities, and life situation. Plenty of (typically younger) FLOSS developers squeeze in more quality hours on their FLOSS work than their proprietary cousins do.
Thirdly, a firefighter only gets to put out the fires that actually happen. A software developer can pick their own problems and goals and work on them at any time. There's no relationship between the outside world and your ability to advance your skills and knowledge.
If you actually cared about that bug, you would have followed it, and seen that it has been closed.
But you just want to shit on GTK for no reason. It's an open source project, so they owe you nothing, and nothing was stopping you from contributing your own improvements.
I have been following it for almost a decade now. I idle in #gtk all the time. I use gtk3- and gtk4-based desktop environments. The dozen times the bug has been submitted it may have been closed: but closed as wontfix, with the actual bug still remaining.
The first instance of the bug report was in 2014, when mclassen introduced it and refused to fix it. The most recent instance was 7 months ago, when I proved to them they'd ported the bug from gtk3 to gtk4 too: https://gitlab.gnome.org/GNOME/gtk/-/issues/5872 I even showed them a partial patch against gtkfilechooserwidget.c to restore default text entry input, but the filechooser is such spaghetti that everyone is afraid to change anything.
So, yeah, the bug still exists. And it really is more important than another renderer. This is basic functionality that's been missing for a decade.
I believe you haven't tried to make contributions to something like gtk, especially ones they have no roadmap item for. Modern big open source projects are an entangled mess of subsystems, dependencies, opinions and sometimes plain ego, so even if you're able to program the library, there's little chance of getting the change accepted into mainstream, and there's no chance you'll be able to maintain a whole parallel distribution that would make the change effective at least for you. Over the years I've learned that this claim:
they owe you nothing, and nothing was stopping you from contributing your own improvements
lands inside the [ignorance..rudeness..hypocrisy] triangle most of the time. It would be more agreeable if stated as:
they owe you nothing, and nothing was stopping you from contributing approved improvements from their backlog
The paid gtk devs refuse to accept patches to gtkfilechooserwidget.c (it's "frozen"); you'd know this if you had tried to make contributions to gtk yourself. They owe us nothing, but the fact remains that they have been removing features, like respecting the gsetting (org.gtk.Settings.FileChooser location-mode) that would let users work around this bug. And they refuse to fix it themselves or look at patches from others. I ask once per year. Even just 2 lines in gtk/gtk/gtkfilechooserwidget.c swapping in priv->location_mode = LOCATION_MODE_FILENAME_ENTRY; would help.
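To make that concrete, here is a rough sketch of the kind of two-line change being described. Only priv->location_mode and LOCATION_MODE_FILENAME_ENTRY come from the discussion above; the function shown, the private-struct access, and the helper alluded to in the comments are my assumptions, not the real gtkfilechooserwidget.c:

    /* Hypothetical sketch, not actual GTK source: default the file
     * chooser to the typeable location entry instead of the path bar.
     * priv->location_mode and LOCATION_MODE_FILENAME_ENTRY are quoted
     * from the comment above; everything else here is assumed. */
    static void
    gtk_file_chooser_widget_init (GtkFileChooserWidget *impl)
    {
      GtkFileChooserWidgetPrivate *priv = impl->priv;  /* assumed accessor */

      /* ... existing initialisation ... */

      priv->location_mode = LOCATION_MODE_FILENAME_ENTRY;
      /* The second of the "2 lines" would presumably also switch the
       * visible widget, via whatever internal helper does that today. */
    }

And for comparison, the gtk3-era gsetting named above could (while it was still respected) be flipped from the command line with gsettings set org.gtk.Settings.FileChooser location-mode 'filename-entry', which is exactly the default this sketch would hard-code.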
I'm a little confused whether your comment is a reply to mine or someone else's, because we seem to agree on all points. In any case, I left G* lands ages ago (gtk2->3) and can only speak for that time. It coincided with the issues around their development policies.
If this is a characteristic of "modern big software projects" then it has nothing specifically to do with whatever happens or doesn't happen in open source development.
At the end of the day we're the ones that have to put up with the rough edges of this software. File picker bug closed or not, the GTK file picker sucks to use. I don't have the ability to just swap it out in programs that use it, at least not within a reasonable amount of time or without significant maintenance of a forked codebase.
People aren't shitting on GTK for no reason. We/I do it because new releases are flawed, and GTK is often what you get if you buy a laptop with Linux preinstalled and hardware-certified. Even several months ago, if I scrolled to the bottom of Nautilus with an MX Master mouse, I couldn't scroll back up.
The "nothing" stopping me from contributing is that GNOME source is in an inconvenient location, is more difficult to understand coming from non-GUI microcontroller programming than for the core devs, and is a hassle to build. When I try to install or compile dependencies, I seem to encounter a new error every time, like GNOME Builder not respecting a corporate proxy while using WSL even though my environmental variables are set correctly. (So now, you're learning about "curl -vvv" and maybe installing MSYS instead of figuring out why your mouse doesn't scroll up. Make sense yet?)
Beyond that, I don't want to turn my computer into a machine full of Flatpak and multiple GB of devel files just so I can fix a scroll bar that should already work, and whose breakage might even originate in libinput or the kernel.
Something is fundamentally wrong if the GTK scrollbar, one of just a handful of available widgets and one that appears in countless GTK programs, doesn't work in a GNOME app.
The last 3 Linux devices I've owned have been an overpriced garbage heap of Certified hardware: running too hot, GPU glitches, OS hangs, everyone's special take on application packaging (aka No More Ubuntu for me), awful sleep and power management, terrible peripheral support, and having to learn about journalctl and a laundry list of kernel parameters...
What do those terrible modern devices have in common that wasn't around on my much-loved XPS 13 9370?
Wayland
GTK 4
Pipewire
S0ix
Kernel 6+
I'm not necessarily blaming any one new technology, but just today I took a video of my laptop (2500 euros, shipped with Linux by a major manufacturer) evidently showing graphics glitches merely from opening Firefox (on a Radeon, just to make clear that not every Linux GPU problem is Nvidia's, and not everyone using AMD is enamoured). I have no idea if it's a software or hardware problem (it also happens if I change the UI scale to 125% or plug in a dock), but it's not even the first of ten major problems this laptop has had.
I'd gladly RMA the thing, but then I don't know what to develop on except maybe an Android tablet cross-compiling to x86 plugged into a dock.
My grandpa's 286 was more reliable than an off-the-shelf Linux or Windows laptop today, and the only MBP I ever used left a sour taste.
> If you actually cared about that bug, you would have followed it, and seen that it has been closed.
You mean that the GTK/GNOME people declare they will never fix their file chooser? I've pretty much figured that out. That in itself is more of a problem than the state of the file chooser, because a faulty design is one thing to fix; dogmatic insistence on poor UI is another thing altogether.
> But you just want to shit on GTK for no reason.
I disparage the GTK file chooser because it's horrible to use. I don't know anybody who works on GTK. I don't write software which competes with GTK (I'm a GPU guy). I have better things to do with my time than bad-mouth software projects I have no personal stake in, for or against.
What's actually happening is that some people are holding their hands over their eyes, refusing to see that (some of the) GTK UI elements have been utterly and intentionally broken for years or decades. And while GNOME apps have alternatives, GTK is, unfortunately, popular among apps which don't really have alternatives, so I'm stuck with it.
> It's an open source project, so they owe you nothing
Huh? It's a project of human society, meant to serve people's needs. So they owe it to users and developers to do a decent job and address those needs. You seem to be suggesting that if I don't personally pay them, then I should just shut up and accept their choices.
I would never, ever say something like that to a user of my FOSS - and I do owe my users a lot.
> and nothing was stopping you from contributing your own improvements.
1. _Everything_ is stopping me from contributing an improvement. The idea is rejected on principle.
2. I take it you are offering to volunteer to take up maintenance of the FOSS work I'm doing while I go off and start getting into GTK, with which I have no experience? Or perhaps you want to take over my day job so I can have more spare time?
And not that damn subpath-searching-on-keypress behaviour.
For a long time I've joked that the Gnome/Gtk design decisions are made on mushrooms; it's quite sad that that's not an unrealistic explanation for this bizarre behaviour.