New Renderers for GTK (gtk.org)
355 points by Decabytes 7 months ago | hide | past | favorite | 245 comments



Long time ago, I think 2010-ish, there was an experimental HTML renderer that would open up a GTK app in a browser, with its UI rendered as plain HTML+CSS.

For the time, it was just jaw dropping. For context, I think it was before Atom, VS Code or Electron (or possibly even NodeJS?) was a thing.

Don't know if that HTML renderer is still around or not.


You mean Broadway?

https://docs.gtk.org/gtk4/broadway.html

https://www.phoronix.com/news/GTK4-Broadway-Being-Used

I would not consider it a main/official backend, but it is still there and was ported to GTK 4.


Oh yes, that's it. Thanks for posting. I think it's so beautiful. It is one of those things that have pure artisan value, of craftsmanship. Whether it is used widely or not, I wish for this backend to stay around.

Back then, I did run Open Office in Firefox and it amazed me.


If nothing’s changed, the Gtk 4 design tool Cambalache[1] by the former maintainer of the now-defunct Glade project uses[2] Broadway to render its design view, because it was there and an embedded Wayland compositor widget wasn’t. Broadway is awesome, but this just makes me sad.

[1] https://gitlab.gnome.org/jpu/cambalache

[2] https://blogs.gnome.org/xjuan/2021/05/18/merengue-cambalache...


I don't understand. What exactly makes you sad?


That the easiest way for a UI design tool using a native toolkit and targeting that same toolkit to display a preview of the UI being edited turned out to be embedding the 800-pound gorilla called a webview.


The only reason the tool does that is to support multiple versions of GTK. But the alternative you suggest, embedding a Wayland compositor in a widget, doesn't exactly seem to be the lightest-weight of options.


Not my suggestion, for what it’s worth, but the author’s own (in ref. 2 above). And yes, it’s not precisely fantastic, but on the other hand “embedded Wayland compositor” is for the most part a fancy way to say “dumb RPC passthrough and a single blit”. (That doesn’t mean that it wouldn’t take a whole lot of work to make, just that the eventual result would not be huge.)


Although it uses more HTML and CSS than some similar things, I don’t think it’s fair to call it “plain HTML + CSS”, because it displays more functional characteristics of the pure canvas approach (where you throw away just about everything the browser gives you and start from scratch). My three primary heuristics for proper behaviour are:

(a) using browser scrolling (the web doesn’t expose the right primitives to make a scroll-event-based reimplementation anything but bad, barely noticeable in some configurations but painfully broken in some of the most common configurations);

(b) using browser text rendering (strongly preferably by DOM nodes, but even canvas plus fillText can be acceptable); and

(c) handling links with real <a> elements (no alternative can provide the same functionality).

Broadway fails all three of these (reimplementing scrolling, rendering text on the server and sending images, and I think actually locking up when you try to click on a link). It also fails my fourth and fifth tests, which are (d) handling text input properly (it looks like it just uses key events, not even a backing <input> or <textarea> or contenteditable, so IME composition fails completely and keyboard navigation will presumably be GTK rather than native); and (e) having a meaningful accessibility tree (preferably backed by normal DOM stuff, but not necessarily; I place this one last despite its importance because it’s likely to be more retrofittable than the others, though it’ll still generally be hard).

I’d count Broadway as suitable for tech demos and for personal use where you know you don’t mind its limitations, but not for any sort of public deployment. It’s basically just an RDP/VNC sort of thing, with a smattering of DOM use. (Also remember that’s how it works—the code is all running on the server.)


> no alternative can provide the same functionality

I agree that real links should always be used to represent links, but if you were to emulate links surely it wouldn't be that hard? Just allow middle-clicking and ctrl-clicking and you should have the basic functionality. Is stuff like the ability to drag a problem?


You can’t even emulate middle-click or Ctrl+click correctly! The best you can do in-page is open(), which will typically open a new foreground tab; but in typical desktop configurations, middle-click and Ctrl+click will open a new background tab. Users will notice you disrupting their normal workflow of opening a bunch of things at once and then going through them one by one, and they will get extremely frustrated.

And then beyond that, well. Right click or long press, browser context menu with things like copy, bookmark, open in new tab, open in private window, share… you can’t emulate most of this at all. And hover to see href in the browser’s status bar. And so it goes on.

No, the only acceptable technique is to have the user interact with a real HTML <a>, HTML <area> or SVG <a> element.


I generally agree with you, but I don’t think c) is true. It’s trivial, and most people won’t go the extra mile to make it “native”, but you can properly manage history from JS, can also add accessibility attributes to make it focusable, etc, so any well-implemented DOM elem with a click handler could be just as good as an <a>. Or do you have any other requirements?

Edit: I have replied from an older state of HN, and haven’t yet seen your other comments. Fair enough, didn’t think hard enough of other edge cases, thanks!


Calling it an "HTML renderer" is a bit of a stretch. At least in GTK3 it just streams pixel data into a canvas element, effectively being the same as VNC with a web viewer.

https://imgur.com/a/2EDZ2Ti
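(For the curious, the trick that keeps such pixel-streaming backends usable is damage tracking: diff successive frames and ship only the changed regions instead of full screenshots. A toy Python sketch of the row-level idea, purely illustrative and not Broadway's actual code:)

```python
def dirty_rows(prev, curr):
    """Return (start, end) row spans that changed between two frames.

    Each frame is a list of rows (e.g. lists of pixel values). A real
    implementation would track 2D rectangles; rows keep the sketch short.
    """
    changed = [i for i, (a, b) in enumerate(zip(prev, curr)) if a != b]
    if not changed:
        return []
    # Coalesce consecutive dirty rows into spans.
    spans, start = [], changed[0]
    for i, j in zip(changed, changed[1:]):
        if j != i + 1:
            spans.append((start, i + 1))
            start = j
    spans.append((start, changed[-1] + 1))
    return spans

frame_a = [[0] * 4 for _ in range(6)]
frame_b = [row[:] for row in frame_a]
frame_b[1][2] = 255   # one pixel changed in row 1
frame_b[2][0] = 255   # and one in row 2
print(dirty_rows(frame_a, frame_b))  # [(1, 3)]
```

Only the pixels inside those spans would then be encoded and pushed over the websocket to the canvas.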


OK, I'm not so sure canvas was around back then, so I might have misremembered it. But I did see div upon div with inline styles.

Maybe later they moved to canvas. Or like I said, I might be mistaken.

Recently however, Flutter had two backends for web. One HTML based and other canvas based.

Some WebGL etc might be in the works.



Impressive. Web was very primitive back then compared to how it is today. I think there was no flex, no grids, no ESM modules even.

It might not sound impressive in 2024 but back then, it was a huge deal. Very daring to even attempt something like that IMHO.

It was a time of YUI, jQuery, ExtJS, script.aculo.us and such.


QML pre-dates it by a year or so, it's basically in the same vein. Everyone wanted to co-opt web-devs for desktop development at that point.


Haha, they got quite a bit more than they bargained for. Hello Electron.


Lol yeah. In a way, most people kinda got where the industry was moving to, but desktop toolkits moved too slowly and effectively got left in the dust.


It literally just rendered to a Canvas framebuffer. It's really not that crazy.

I mean cool, yes....but it wasn't generating DOM elements/complex HTML or anything like you're trying to sell it as.


Broadway - a while back I created a little POC for running it inside Docker, which worked really well.

https://github.com/moondev/gtk3-docker

My use case was running a browser inside a browser to easily interact with Kubernetes clusterip services without needing to port forward or proxy.

Another awesome example: running virt-manager to run a vm via the gtk virt-viewer (and interacting with it through the browser)

https://github.com/m-bers/docker-virt-manager


There's Greenfield, an HTML5 Wayland compositor. https://github.com/udevbe/greenfield

There are some fancy bridging modes to run apps in a browser, but the author has also been working on a way to make wasm Wayland apps run directly in the browser too.


Reminds me of $something that allowed Delphi desktop applications to run in the browser. I think it was CGI-based and really fragile.


I think qbittorrent still uses that for its web ui


Not really. qBittorrent is built on Qt (thus the prefix), and has a hand-rolled web UI in pure HTML + CSS + JS (with a couple of helper libraries, but no heavy frameworks):

https://github.com/qbittorrent/qBittorrent/tree/master/src/w...


Kudos to them then. It's really well done


and it's actually quite responsive


As Qt usually is.


I was talking about the web interface, not the GUI; the web interface uses only web technologies of course. I mean I guess they could do a wasm version too but ehhhhhh


...which is true, but completely irrelevant to the above-mentioned Web UI.


I wish GTK didn't follow the trend of allowing widgets in the titlebar. Some can drag, some can not. There's less room for the app and file names. This is not a GTK specific complaint.


didn't gtk/gnome invent this trend?


AFAIR it was Chrome with the tabs in the titlebar thing.


> I wish GTK didn't follow the trend of allowing widgets in the titlebar.

It's bad enough this happens in GNOME (ugh). Not another faux pas in GTK please.


You may not be aware that gtk is a core gnome project.


Pixel-perfect fractional scaling, baby, woohoo!


After over a decade of GTK claiming fractional scaling is "impossible" and GTK devs vetoing any fractional scaling in the Wayland protocol, they have finally reached feature parity with Qt in this area.

Now we just need proper support in Wayland and we'll finally have support for HiDPI in all major Linux DEs.


Both fractional scaling and thumbnails required massive background work. The term „impossible“ is therefore incorrect. It required huge changes in the backend which weren’t feasible in the short term. The thumbnails are done already and the new renderers are approaching.

Keep in mind, developers strive for good solutions. Users need good solutions. Companies are often happy with less reliable solutions because they are forced to be fast. And this less reliable stuff tends to stay for a long time.


I know how much work this is, and I genuinely appreciate that Gtk has finally reached this milestone. It was the last missing puzzle piece.

It's just sad that Gtk used to have this functionality and dropped it when Gtk3 was envisioned, as it meant the ecosystem was stuck waiting for over a decade.


The trade-off is that other platforms have had this support for a decade. Stuff like this keeps people from switching.


I think there are other reasons which matter.

    * what comes preinstalled
    * a weird application
    * the end
Regarding features - for users - Linux with GNOME is delivering features for decades which Windows lacks:

    * Tabs in file-browser
    * fast and modern terminal (ttf-fonts, utf8, OpenGL…)
    * clear concise settings
    * maintained application repositories
    * no „desktop“ which forces me to arrange icons (WTF?)

Others consider Cortana or ChatGPT as „required“. If they think so…


You seem to be very much out of date with the current state of Windows.

> tabs in file browser

Built in for the last 2 years; tabs can be rearranged, moved between Explorer windows, etc.

> fast and modern terminal

Windows Terminal (released 2019 and built in by default) is one of the best terminal apps on any platform. Tabs, GPU acceleration, fonts, seamless multi-shell support (including Linux shells, local via WSL or remote).

> maintained application repositories

winget (released 2020 and built in by default) is now the backend of the Windows Store and can maintain externally installed software; just run 'winget upgrade --all' in your shell of choice.

Also, you can now install any linux distro you want with a single click using WSL, with full x11 and Wayland support. If there’s a Linux app you prefer, you can use it seamlessly. You can even run a full desktop. I’d almost go as far as to say that nvidia hardware is supported better through WSL, where latest drivers “just work” and I’ve never experienced things breaking, than on baremetal Linux (although that says more about nvidia than any platform).


The point is that most of these have been available on Linux since 2002, and only recently on Windows. And here it is argued that the special thumbnail feature has been missing on Linux since around 2002.

I’m glad about the thumbnails (it was overdue by a decade) but there are many features and every new feature available is missing somewhere else[1]. Next year we will discuss either missing Flatpak support on Windows (cgroups, namespaces) or missing Vulkan on MacOS or missing built-in Android support on Linux.

[1] or not - because it doesn’t fit well. For example the weird database/library kind stuff in Explorer which teases users on Windows. I think EXPLORER.EXE has become awful hard to use. It was usable back in NT4 and NT5.


>After over a decade of GTK claiming fractional scaling is "impossible" and

So it's just as impossible as thumbnails in the file picker? The insanity of GTK and the resilience of people willing to put up with it is baffling to me.


As Vinni pointed out with irony.

An honest „thank you“ to the people who made the thumbnails possible would be helpful. As would getting involved yourself. Or spending money. Gtk and GNOME are not companies. People need motivation - from their work or from outside.


>Gtk and GNOME are not companies.

These are large entities made up of people, same as companies, not individual devs making a side project. Does that mean their work should not be criticized because their collective is not a registered publicly listed company?

>People need a motivation - from their work or from outside.

Also feedback. And that was my feedback, candid as it may be.

Like I said below, FOSS work doesn't mean it should automatically be excluded from criticism, especially if that FOSS work is big and influential enough that it ends up having an impact on other products and on people's work and lives, like GNOME being the default on most free but also commercial distros.

So please let's stop attacking and gaslighting critics, as if GNOME were some teenager developing his first app for free in his bedroom, who should only be defended and praised for encouragement. It is instead a giant mammoth project with private and public funding[1], developed by vested professionals of the software industry who know very well what they're doing. They should be criticized when they do something wrong or are being needlessly petty and stubborn with their decisions and arguments, besides being praised when doing something right - and there's already enough of the latter in this topic if you read it.

[1] https://www.omgubuntu.co.uk/2023/11/gnome-sovereign-tech-fun...


Constructive criticism is helpful. And yes - some people get it wrong or are stubborn. People don’t act always perfect despite good intentions. I struggle often to find the right tone and interpret it properly.

And I’m happy about Linux, GCC, Vim, Coreutils and Gtk in general. GNOME's keyboard-centric usage is wonderful.

That said, I’m usually sad about public long-term forks. Because they are not about learning or improvement, but about people failing to communicate. Yes, sometimes their targets just differ.

Any good news? Maybe Mutter will support VRR itself. And maybe battery thresholds in a user-friendly way. Woohooo

Now we just need to convince the right people to add Type-Ahead-Find back in Nautilus and background transparency officially back in gnome-terminal. Both „back“. There is existing code for both.


"No we just need convince the right people to add Type-Ahead-Find back in Nautilus and background transparency officially back in gnome-terminal. Both „back“. There is existing code for both."

Or just make a fork if you want to.


It can be criticized. But that criticism won't necessarily help anyone get what they want, least of all you.


I don't use GNOME :)


> Gtk and GNOME are not companies.

"Company" doesn't mean "business". It means a bunch of companions. Like the company in `The Fellowship of the Ring`.


> Or spending money.

People are willing and ready to spend money on quality software. It is a trillion dollar industry. Open source is an endless circle of misery that needs to be ended as soon as possible.


At least the devs can find some motivation in the fact that they'll get comments like these every time they do something good...


The devs who actually implemented should be celebrated.

But they're not the same people as the ones that blocked specs and PRs by stating this was "impossible".

Blocking valuable improvements with an absolute statement like that makes contributing to such projects feel like tilting at windmills.


Possibly, but it wouldn't be the first time that the first category of devs feel like they're the target of such comments. And it's a well-known phenomenon that one negative comment outweighs ten positive ones.

So from where I sit, one might as well not make them. (And I'm pushing back a bit to hopefully support the contributors who I'm grateful to.)


Thank you for moving the goalposts for the chance of a cheap shot comment, but you know very well that's not what I meant.

GTK doing something right doesn't suddenly absolve it of all the wrongs. And just because something is FOSS doesn't mean it's without fault and therefore beyond criticism.


Likewise, just because you are right in principle doesn't mean your actual comment is anything but rude and pointless.


Am I the one being rude? Maybe. Was GTK also being rude for claiming that something everyone else was doing at the time, like fractional scaling, was "impossible"? Maybe.

Sorry, maybe I was rude, but someone needs to call out BS claims like this as they probably have enough "yes men" tooting their horn.


> someone needs to call out BS claims like this as they probably have enough "yes men" tooting their horn.

Do they, though? Does it have any effect at all, other than potentially demotivating volunteer contributors? And if so, do those effects outweigh the negative effects?


Gnome's problems are not the volunteers contributing code, but the leadership who decides what gets merged to mainline.

Devs could have implemented thumbnails in file picker over a decade ago, but Gnome leadership was always against it.

Toxic and poor leadership of such large projects should be openly criticized and pointed at, and not allowed to hide and gaslight behind the "poor small volunteer devs".


What have years and years of criticising and pointing-at brought us, other than a couple of burnt-out devs?

And is there anywhere where we can see that this elite leadership suddenly changed their minds on allowing this, rather than a developer seeing an opportunity when a bunch of things got deprecated, which had been postponed for a long time because it creates a lot of work for downstreams [0]?

Or is the fact that they didn't want to haphazardly break those downstreams the poor leadership you speak of?

[0] https://blog.gtk.org/2022/12/15/a-grid-for-the-file-chooser/


"Devs could have implemented thumbnails in file picker over a decade ago, but Gnome leadership was always against it."

Please stop this bullshit.


Please cite where a GTK developer said that fractional scaling was impossible.


Usually it's from someone who has never written a single line of code to improve the situation either.


Please spare us this tired trope. People who don't contribute to FOSS are also allowed to have an opinion on FOSS if they're users of it. Otherwise how would they know what to improve if you don't allow feedback from users? What you're saying is like car companies thinking their customers' opinions are irrelevant because their customers never contributed to designing parts of a car.

If you're designing a mainstream product that's not just targeting professional SW developers who can contribute back with code, but also average users who will only use it but never contribute, then you will need to be open to mainstream levels of feedback that represent the average user and not just programmers.

Otherwise let's keep pretending that Linux and FOSS alternatives having only ~2% market share is somehow the users' fault for only buying Microsoft/Apple, and not the projects' fault for being stubborn to feedback and out of touch with their mainstream users.


I have contributed to and improved more than a few pieces of FOSS software that I used, when I thought they could use an improvement or a bug fix, or even just documentation of the bug. FOSS isn't a company, and a huge portion of contributors aren't paid a dime. It bothers me when people criticize the developers with vehemence; what I'm expressing an opinion on is the nature of belligerent complaints vs helpful ones. Have a nice day.


No, they're right. It's not a requirement to sling code to put your opinions out there about open source.


Everyone has their opinions. There are so many opinions out there, it isn’t really clear what to do with them. Put them out there, fine, now everybody else just has to decide if they care.

One way to tell if you should care about an opinion is to check and see if the person professing it has taken any concrete actions toward getting the world to align with it.


> There are so many opinions out there, it isn’t really clear what to do with them.

Thankfully large projects generally have feedback loop systems to figure out how to parse that feedback.

The rest of your comment isn't particularly contributing to the discussion here, short of trying to re-cement a broken take.


I remember my cousin telling me how horrible it was to work at a hotel - you really found out how horrible people could be when they thought they were owed something. Staff there obviously had to put up with it because that was their job.

Regarding FOSS, I think some users forget that they're at a free lunch and are unpleasant and insistent about their complaints as if they were paying. They're not entitled to service and don't seem to realise it.

As to whether everyone else should listen - sure - but nobody is forced to work overtime just to right some wrong in OSS software because person A was very upset about it.


In this case, PRs were made for close to a decade, and tons of people implemented the feature in forks etc., but upstream just didn't want it. You can't tell people to just write the code since it's open source when that's irrelevant here; the code itself wasn't what was missing. Though I still think it's great that it is now done, regardless of the weird history that specific feature had.


Please show any fork where the features you (don't) describe were available.


There are GTK file picker forks (like Nemo's, I think?), but I guess you could argue that they didn't just add the file picker thumbnails. So here is an example of a patch that makes thumbnails work:

https://aur.archlinux.org/cgit/aur.git/tree/gtk3-filechooser...


Use the forks if you want to then.

Also, why isn't that patch in an MR to GTK (if the author wanted to upstream it)?


> Also, why isn't that patch in an MR to GTK (if the author wanted to upstream it)?

These MRs do a minimal change to just add thumbnails. The current file manager uses outdated widgets. Gnome/GTK expects MRs to also replace deprecated widgets in all files touched by that MR. Replacing those deprecated widgets would require a massive refactor.


Okay, sure? I never said otherwise; you asked me for forks and I gave them to you. What's your point? I get that open source needs people to complain less and contribute more; my point is just that, while that might be valid for tons of situations, it just doesn't make sense for the file picker thumbnails saga.

As for the MRs, again there were a few of them and they're very easy to find: https://gitlab.gnome.org/GNOME/gtk/-/issues/233#note_106497

The whole thread is full of people actively writing, fixing, rebasing code, creating MRs for years


I think it was impossible with the old rendering engine, hence the excitement in this blog post.


Wayland has fractional-scale-v1 merged.


It's even worse, it's devs (& some users) plainly dismissing fractional scaling as "imperfect" and claiming fullscreen downscaling as the only sensible option with very little justification other than "it's what Apple does".

I had multiple discussions here on HN with users who wouldn't believe that, given fractional scale ratios, _anything_ other than direct fractional rendering produces significant blurriness. (And note fractional rendering can also produce blurriness, e.g. through misalignment.)


From the blog post:

    If your 1200 × 800 window is set to be scaled to 125 %, with the unified renderers, we will use a framebuffer of size 1500 × 1000 for it, instead of letting the compositor downscale a 2400 × 1600 image.
The explanation is a bit confusing to me. I think they are saying that a window that should be drawn as 1500 × 1000 pixels on the screen because of 125 % scaling has 1200 × 800 application pixels. Since OpenGL and Vulkan use floats to render, they might as well render directly into a buffer that can be displayed 1:1 on the screen by transforming the coordinates in the drawing instructions.
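For what it's worth, the buffer sizes work out like this (a toy Python sketch of the arithmetic only, not GTK code; the function names are made up):

```python
import math

def old_path(logical_w, logical_h, scale):
    # Old approach: render at the next integer scale factor (2x for 125 %),
    # then let the compositor downscale the result to the on-screen size.
    factor = math.ceil(scale)
    rendered = (logical_w * factor, logical_h * factor)
    on_screen = (round(logical_w * scale), round(logical_h * scale))
    return rendered, on_screen

def new_path(logical_w, logical_h, scale):
    # Unified renderers: draw straight into a framebuffer of the final
    # on-screen size, so no downscale pass is needed.
    size = (round(logical_w * scale), round(logical_h * scale))
    return size, size

# The 1200 x 800 window at 125 % from the blog post:
print(old_path(1200, 800, 1.25))  # ((2400, 1600), (1500, 1000))
print(new_path(1200, 800, 1.25))  # ((1500, 1000), (1500, 1000))
```

So the old path pushed 2.56x as many pixels through the renderer as actually reach the screen, plus a lossy resampling step.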

If that's what they are doing, that sounds like sanity finally.


I believe he's saying exactly that: they'll render at the target size instead of rendering 2x and scaling it down.


What was so hard about doing that in the past?


Does anyone have a grasp/understanding of how desktop environments work on Linux? I don't. For me, everything feels to get more and more convoluted and tacked on.


The X Window System was basically the wrong bet on how GUIs and computer hardware would evolve. Its client/server architecture was the opposite of the highly integrated graphics processing model we ended up with.

Instead of cutting its losses early and dumping X11, both the Unix vendors and open source spent far too long trying to make lemonade out of a truckload of rotten lemons. And that’s why Linux GUIs are so far behind.

It’s notable how rapidly Apple was able to evolve their Unix GUI because they were not tied to X11 and instead embraced the integrated model, even designing their own GPUs nowadays.


Isn't this a bit of a non-sequitur? The desktop environments could have been made coherent regardless of the underlying window system. Indeed, I think you could say that there were many coherent desktop environments in the past. The catch comes from the fact that there were a ton of competing ideas, with no clear reason to want one over the other.

Apple rapidly evolved their GUI because they are trying to support just the one. And, frankly, they are starting to buckle on the idea that they are fully coherent.


People keep saying this, but I never really had any complaints about KDE vs Windows vs macOS. All 3 are perfectly usable; it's only in the past 8 years or so that Windows has gone crazy with the obtuse spying and ads and "MICROSOFT" everywhere. Even that isn't too hard to remove with a download or two.


They're not talking about usability, they're talking about the technical underpinnings. ;P


And polish. With X you can sometimes feel the “layers” separating and slip-sliding around a bit in places that you don’t with a Wayland DE or in macOS. This was especially true in the late 00s when there was less computing power available to brute-force these issues into being less apparent.


As someone not very familiar with Linux, which desktop environment would you recommend with most “polish”? I’m thinking of switching from windows 10 on my new laptop but I do appreciate all the animations in windows. (I could also consider a hackintosh or BSD if they are prettier, especially in movement/animations.)


BSDs aren’t going to be functionally different from Linux in terms of desktop prettiness/animation as the two run the same DEs.

GNOME is probably the most polished in terms of aesthetics, animations, and gestures. The downside is that it’s much more a tablet OS experience with a few desktop affordances — kind of like iPadOS if it were turned into a desktop OS — than it is a traditional DE. As such expect for various power user features to be diminished or absent compared to macOS or Windows. Pantheon also ranks highly here, with similar limitations.

KDE comes in second. Compared to GNOME it still has a number of rough edges in terms of aesthetics but is comparable to Windows in terms of overall capabilities and design.

Third would be Cinnamon, which attempts to blend a more Windows-like desktop with GNOME-like polish, but last I knew this project is understaffed which has resulted in it falling behind a bit. Also requires X11, unlike GNOME and KDE which work well with Wayland.


Thanks a lot! I think I might stick with gnome or pantheon in that case.


Don't use GNOME. Try KDE/Plasma first at least, especially if you like doing things like easily customising your desktop.


Thanks, I’ll probably try all of them out then haha


>The X Window System was basically the wrong bet on how GUIs and computer hardware would evolve.

I disagree. Other platforms did similar things, where clients would command the OS what to draw and the OS would handle all apps' rendering, which allowed for good performance on primitive hardware. Like other display servers, X did evolve to also pass around framebuffers of what the app drew once hardware became capable. X's problems come from being hard to maintain, being a monolith of unrelated concepts, having poor security, etc.


The hardware only became capable in the wrong place. Datacenters are full of heavily shared servers without GPUs, and the days of running everything on your own desktop are over.


Tangential - I just watched this and found it very interesting: https://www.youtube.com/watch?v=R-N-fgKWYGU


I agree that the client/server approach was the wrong way in the end, but saying that the Unix GUI was far behind others is a bridge too far. The first time I saw composited desktops using the GPU was on Linux. And current KDE/Plasma is light years better than the inconsistent, ad-ridden shit hole that Windows has become.


To be fair, the secondary technologies available to support the preferred model now are far more advanced than what Quartz Extreme (in macOS) and DWM (in Win. Vista) had to use, and there's no longer a need to provide backward compatibility with software rendering.


Is running events through a domain socket really so different from Windows message ports?


The socket is fine for messages, but not for rendering. Here’s a simplified version of how 3D rendering apps usually work in different environments I have used.

Direct3D on Windows: app renders something and calls IDXGISwapChain.Present. The implementation communicates with the desktop compositor running in the dwm.exe process, dwm.exe renders the entire desktop composed of multiple windows, then communicates with physical GPU to wait for next vertical blank event.

Bare metal Linux with DRM: app renders something, calls drmModePageFlip(), then waits for next vertical blank event with poll() and drmHandleEvent() functions. Embedded Linux developers did an amazing job about DRM/KMS, in my experience the API is both reliable and efficient.

The X Window System has multiple different methods (glXSwapIntervalEXT, glXSwapIntervalMESA, glXSwapIntervalSGI), but in practice none of them is reliable, and on some systems none of them is even supported. Unfortunately, this makes rendering tear-free 3D content on X11-based desktops hard to impossible.


Didn't realize that this discussion was in the context of games.


The originally posted article is about new 3D GPU-based backend in the GTK, which is unrelated to games.

Hardware accelerated 3D graphics is the best way to render pretty much everything on modern hardware, games or not games. High-resolution displays are omnipresent. Even cell phones often have FullHD or more pixels, like 2796×1290 or 2556×1179 in the iPhones.


>Hardware accelerated 3D graphics is the best way to render pretty much everything on modern hardware, games or not games.

A counterexample is video playback. The best way for video is to use the hardware decoder and hardware compositor. You don't want to render the video texture to a quad. Using the hardware compositor requires less computation and less power than using 3D graphics.


> The best way for video is to use the hardware decoder and hardware compositor.

Not according to Microsoft, see that page https://learn.microsoft.com/en-us/windows/win32/medfound/how...

Microsoft strongly recommends that new code use MediaPlayer or the lower level IMFMediaEngine APIs to play video media in Windows instead of the EVR, when possible. Microsoft suggests that existing code that uses the legacy APIs be rewritten to use the new APIs if possible.

That lower level IMFMediaEngine API they recommend for Win10+ delivers uncompressed video frames in D3D11 textures.


>That lower level IMFMediaEngine API they recommend for Win10+ delivers uncompressed video frames in D3D11 textures.

Which in the best case is done via hardware decoding as I said.


Yes, but note no hardware composition is involved.

The composition is done by the 3D GPU like I said. Either in your app if you render that video yourself, or if you supply a swap chain’s back buffer to IMFMediaEngine.TransferVideoFrame method, dwm.exe will do it.


It's the compositor's job to composite in the most efficient way possible, which in the best case means using hardware compositing layers; if the hardware does not have enough layers, then the compositor will use 3D graphics to combine layers together. Just because a GPU includes a 3D graphics pipeline, that doesn't mean that everything has to go through it.
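A toy sketch of that plane-assignment logic (the function and plane count are hypothetical, not any real compositor's API):

```python
# Hypothetical compositor pass: put as many surfaces as possible on hardware
# overlay planes; whatever doesn't fit is composited by the 3D pipeline.
def assign_planes(surfaces, hw_planes):
    overlay = surfaces[:hw_planes]         # scanned out directly by the display engine
    gpu_composited = surfaces[hw_planes:]  # fallback: combined via 3D graphics
    return overlay, gpu_composited

hw, gpu = assign_planes(["cursor", "video", "ui"], hw_planes=2)
print(hw, gpu)  # ['cursor', 'video'] ['ui']
```

Real compositors also weigh per-plane format and scaling constraints before committing, but the fallback structure is the same.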


> Just because a GPU includes a 3D graphics pipeline that doesn't mean that everything has to go through it.

That’s how modern Windows does that in practice. Everything does go through the 3D graphics pipeline, and starting from Win8 it’s impossible to disable.

I’m not even sure modern PC hardware supports these hardware layers, except for one small extra layer for hardware mouse cursor. On my computer D3DCAPS_OVERLAY and DDCAPS_OVERLAY flags are unset. The driver reports to the OS the hardware doesn’t support any hardware overlay surfaces.


>I'm not even sure modern PC hardware supports these hardware layers

Yes, they do. I'm guessing you may be using an older nvidia gpu. In case no overlay is available, the video can go fullscreen and take the only layer from DWM. For PCs the benefit is less about performance / power and more about lower latency.


> Yes, they do.

Why are you so certain?

> you may be using an older nvidia gpu.

Tested AMD Vega 7 inside Ryzen 5 5600U, and nVidia 1080Ti — dxcapsviewer.exe shows the same result, no overlays are supported. Only the D3DCURSORCAPS_COLOR hardware cursor is there.

> the video can become fullscreen and to take the only layer from dwm

Indeed, exclusive D3D full-screen mode allows bypassing the dwm.exe compositor, even on modern Windows. But I don’t think it’s evidence of any hardware composition being used.


>Why are you so certain?

Look at documentation from GPU manufacturers; look at GPUs that support multiplane overlay on Windows.

>But I don’t think it’s evidence of any hardware composition being used.

It's not, but it doesn't require using 3D graphics to put the texture on the display.


> GPUs that support multiplane overlay on windows

According to Microsoft, MPO is an optional feature: https://learn.microsoft.com/en-us/windows-hardware/drivers/d...

According to random people on the internets, the feature is broken for both nVidia and AMD: https://www.reddit.com/r/AMDHelp/comments/yr5dda/can_someone...

nVidia agrees and recommends disabling MPO with a registry setting: https://nvidia.custhelp.com/app/answers/detail/a_id/5157/~/a...

> it doesn't require using 3D graphics to put the texture on the display

In practice, 3D graphics is the best way to put pixels on the display. At this point, I believe other ways are remnants of the old GPUs which had 2D blitting hardware.


AFAIK, Windows applications typically use GDI instead of DirectX.

https://learn.microsoft.com/en-us/windows/win32/direct2d/com...


GDI is only used by legacy apps. It's gonna stay there for quite a while for backward compatibility reasons, but modern Windows apps are using better GPU-centric APIs to render their GUI.

Specifically, WPF is based on DirectX 9, Direct2D is based on Direct3D 11, UWP and WinUI are based on Direct2D. On my computer, both Chromium and Firefox browsers are using Angle on top of D3D11, Firefox uses Direct2D for canvas only.

Also, even legacy apps that render their GUI with GDI are still using 3D GPUs to an extent. Some GDI operations like BitBlt are accelerated internally by the OS. And the OS composes windows on the desktop with D3D11; the dwm.exe process does that.


This is utter horseshit. As someone who writes cross-platform native (desktop) GUI code, there's nothing about Apple's GUI system that is substantively superior to the one on Linux that has any relationship to XWindow being the underlying implementation or not. Apple has also had to continually evolve their own GUI toolkit(s) because they also failed to adequately target certain new expectations that arose as time passed (animation being a good example).


I'm sorry, but Xorg still doesn't have a way to correctly handle mismatched DPI monitors. I will also take Cocoa over any linux framework any day of the year.


But neither does Windows, so the issue here isn't XWindow specifically.


I'm currently using Windows 10, and it handles mismatched DPI just fine. The same with macOS.

There are some old applications on Windows that aren't DPI-aware and look weird, but usually fixable. It's still a million times better than whatever X11 does.

IMO macOS is the only OS where graphics and fonts work the exact way one would expect reliably.


Last line I agree with.

The rest, not so much. Applications work only if they do the right thing. Applications could do the right thing on X11 too. They don't, just like the "old" applications on Windows.

Anyway, you've dragged this away from the general points: issues with Linux GUI toolkits have little to do with XWindow, and Apple's GUI toolkits have had to evolve just as the Linux ones have.


I may have outdated info, but last time I checked on X11 apps, while they could be DPI-aware, they don't handle mismatched DPI well - I had to choose between everything being small on my 4k screen or everything being blurry on my 1440p screen.

I've hard-coded some DPI settings for certain apps that I always display on the same monitor.

Meanwhile, I have zero issues with the same pair of monitors on Windows and macOS. I daily drive all 3 of them.


Can you elaborate the superiority of Apple GUI system?


I said there wasn't any.


Nothing that you said is a real criticism of X in general though, just its client/server model.


But it's so fundamental to the design that it's inescapable. The history of Linux GUI is a series of heroic workarounds to hide the misaligned X underpinnings.

Trying to separate X from the client/server model would be like saying: "I like Unix just fine, except files and processes and the shell and the software tool philosophy, really." — You just don't have much left.


I would say X was a success. It could be modernized via extensions, while maintaining backwards compatibility and providing network transparency. And it still works well, while the replacement, which was said to have a much cleaner design and which has been under development for 15 years, still causes problems and has many limitations.

I also disagree that the client/server design is a problem. Modern graphics hardware is remote from the CPU for all intents and purposes. So remote buffer manipulations protocol is exactly what is needed.


Before a few days ago, I had never used Wayland. As you said, it causes problems and has many limitations. The solution that works is infinitely preferable to the one that doesn't.

But, a few days ago, I finally switched over, on a modern machine (~ 6 months old), with a modern OS (more than modern -- I'm running a pre-release version). And now, finally, I see what all the fuss is about. Wayland is absolutely beautiful. The rendering and animations are gorgeous, and so so smooth. The whole system is far faster, far more robust, and far more efficient both in computing resources and workflow than anything I was ever able to achieve on X.

I have been using Linux on X11 displays as my primary machine since 1996. I do development work, but not with graphics, where I'm just a regular user. I don't really care about how good the design is or how hard it is to write programs using X / Wayland / whatever. All I care about is how usable my system is. And for me, having reached the Wayland promised land, I now understand what I was missing with X, in a way that I could never have before.

Tear-free rendering, all the time, every time. Frame perfect windows, all the time, every time. A desktop that finally feels immediately responsive, all the time, every time. Under X11, all those round trips to and from the X server add milliseconds to response time, hard to benchmark, but very easy to feel. Worse yet, X11 responsiveness is variable. If the system is loaded, your desktop can lag so badly under X. This doesn't happen with Wayland.

I now have OSD volume controls that work even when the screen is locked. This was never possible with X, and never can be.

I now have a lockscreen that, guaranteed, will not leave my desktop unlocked if someone crashes it by feeding it malicious input. X11 is architecturally incapable of providing such guarantees. There were several instances of security bugs where lock screens in X11 crashed in exactly this way.

Wayland has now reached the point where it is a must-have for me. If it doesn't run Wayland, I won't buy it, and I won't install it. Yes, X11 works well after all these years, but Wayland is enjoyable to use in a way that X11 never was.


With gnome being the front runner on that one.

I’m curious to see how much of an impact the Wayland architecture will have, and if ‘gnome only’ apps become a thing.


Funny that you say "gnome only", cuz currently there are "non-gnome only" apps due to a specific protocol not being implemented by gnome (drm-leasing for VR, at least on main; there is a PR that is explicitly marked not to be merged, since they want to handle it via an xdg-portal). They are very wary of exposing anything in a way that could very much backfire in the long run, be it as a general implementation or exposing something they would be stuck with.


The xdg-portal attempt was misguided and I don't believe anyone is pursuing it at this point. Ideally drm-leasing would be managed by the login manager, allowing multiple compositors to lease connectors and run independently on other monitors, as well as being used for VR headsets. https://github.com/systemd/systemd/issues/29078.

Sidenote: I hacked the wayland protocol implementation for gnome into working at least for SteamVR, but at least with AMD gpus there is some serious bug preventing the card from performing properly. It basically throttles itself for no reason and never hits the refresh rates needed for smooth VR, especially since there is no asynchronous reprojection at the moment. So while ideally the drm-leasing problem would be solved already there are other even more important problems to solve with linux VR for now.


I’m not blaming gnome, I just see it as a big problem in the future: ‘oh sorry, just have to restart wayland so I can boot into $app’. Any display renderer that has this limitation and expects each individual project to cross-support other renderers is a poor design, at least in my book.


I'd love to see an ANSI text renderer, to be able to run GTK programs inside my xterm (optionally, with some sixel thrown in...).


Nowadays most GTK apps look very similar. A sidebar, some actions in the titlebar, a details view. (Same story for Mac apps, 'Modern' Windows apps, mobile apps.)

I wonder if a UX toolkit could be completely declarative and semantic - "I need a master/detail view, a list view with the following fields, some actions, ...". At the high level you don't give any positioning or styling. It would automatically use the appropriate system widgets. Then on top you would add some polish by using a bit of CSS, or maybe an escape hatch to get the native widgets.

Almost everything that is not a browser, a WYSIWYG editor, or a media viewer would fit that mould. The kicker is that from such a description you could easily generate a TUI.
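A minimal sketch of the idea in Python (every name here is invented for illustration): the description carries only semantics, and a backend decides how to present it; here, a trivial TUI renderer.

```python
# Hypothetical semantic UI description: no positions, no styling, just intent.
from dataclasses import dataclass

@dataclass
class ListDetail:
    items: list   # the records to browse
    fields: list  # which fields the detail pane shows

def render_tui(view, selected):
    """One possible backend: generate a text UI from the same description."""
    left = "\n".join(("> " if i == selected else "  ") + item["title"]
                     for i, item in enumerate(view.items))
    right = "\n".join(f"{f}: {view.items[selected][f]}" for f in view.fields)
    return left + "\n---\n" + right

view = ListDetail(items=[{"title": "Inbox", "count": 3},
                         {"title": "Sent", "count": 9}],
                  fields=["title", "count"])
print(render_tui(view, selected=0))
```

A GTK backend could map the same ListDetail onto a sidebar-plus-details window; nothing in the description ties it to either medium.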


You’re basically talking about the underpinnings of XAML. Terribly underappreciated and completely alien to the FOSS world.


I'm a bit late to answer but... I've used a bit of XAML and it is not at all what I have in mind. XAML, SwiftUI and QML all have the same problem, in that they mix structure and presentation. A typical XAML app is not easily themeable. Let alone having the ability to adapt to different UI paradigms or toolkits.

A symptom is that you have tags like "Rectangle" and attributes like "Margin". The high-level description shouldn't say anything about presentation, just what you want to accomplish. The building blocks should be like "CRUD Editor" or "List-Detail view".


XAML based WPF and Silverlight were really nice and eliminated so much boilerplate and plumbing code. Unfortunately, Microsoft management-marketing didn't want to sell an opinionated framework, among other blunders.


I've tried to convince the fledgling GUI toolkit world of rust to take a look at XAML. I'm not going to rehash my latest round of comments [0] just yet, but suffice to say that XAML gives application developers and end users such freedom and enables very nice architectural splits that other options do not.


broadway (discussed in other comments) + carbonyl is kind of similar https://github.com/fathyb/carbonyl

https://i.imgur.com/pIQ4K7Q.png


But this is not really usable, is it? I was expecting a look "native" to a terminal, i.e. a text-based user interface with the common conventions of the medium.


If they used https://wgpu.rs/ they would get directx and metal for free (:


Looks like so much fun working on this :) Cool stuff. When I read about the anti-aliasing I thought: nice, maybe signed distance fields will work just as nicely for font rendering at arbitrary scales as in game engines (Valve had a nice paper out there on this). There's lots of cool trickery in game renderers' UI code and for things like rendering decals that might be nice in GUI code too.


I don't get why performance degradations are accepted though. I do most of my computing on old hardware and these are features I would turn off if I could and perhaps are not even supported by my gpus.


    > No, the new renderers are not faster (yet).
I think yet is the important word here.

If you see a noticeable performance decrease:

    > GSK_RENDERER=gl

Please keep in mind that it is unlikely that Microsoft, Apple or Google would discuss these. It would probably be „forget the old API, here's another API“ (Microsoft), „enforced from ${WEIRD_NAME}“ (Apple) or „you won’t get that update“ (Google).


Yeah, reminds me of Grinding Gear Games (Tencent for a few years now) where if you read the update notes for Path of Exile 1, you can see "performance improvements" mentioned many times, yet, mysteriously, the game today has effective minimal requirements order(s) of magnitude (if multiplied together) higher than on release !


In GL it's quite easy to accidentally miss the "fast path" and get surprising slowdowns (and in reverse, GL can be surprisingly fast when hitting the fast path). What's disappointing though is that the Vulkan renderer only nearly reaches the same performance as the old GL renderer; this seems to indicate that the problem sits on the caller side of the 3D API.

It would probably have been a good idea to track performance throughout the implementation and iterate on that instead of "architectural purity".


I don't understand half of the post, but claiming that they're doing this for "architectural purity" sounds ungenerous to me. The post lists a couple of tangible benefits, most of which do, I believe, relate to performance:

> Proper color handling (including HDR)
> Path rendering on the GPU
> Possibly including glyph rendering
> Off-the-main-thread rendering
> Performance (on old and less powerful devices)

(Which is not to say that tracking performance isn't a good idea. It's just that "architectural purity" sounds needlessly dismissive to me.)


This is actually the only case where I tolerate performance regressions: where the previous implementation was actually incorrect, rather than merely old or in need of a rewrite in whatever framework or technology is currently in vogue.


Performance is part of the implementation. Therefore, if it regresses, it is silly to claim it was incorrect before. Seems more likely to claim that tradeoffs were made for performance.

I'm ok with performance leaning on advances in the hardware. I'm also ok with performance dropping if you are pushing more pixels. But we had high resolution displays years ago, so that is a tough hill to defend.


Performance is a combination of a) required behavior, b) implementation. If an implementation does not correctly meet the requirements, its performance cannot be compared to one that does.


Close. Performance is also part of your "a", there. And you have to adjust implementation accordingly.

That's why nobody says that old "3d" games like Doom were poorly implemented. Even if they were not fully 3d.


Sure. But I'm approaching this from the assumption that we're actually talking "observed/resultant performance" and not "baseline required performance," i.e. talking about "'excess' performance over the required threshold," and if that regresses due to a correctness issue, I don't see the problem. Now what is "the minimum required performance threshold," I cannot say, but it is surely at least the ability to render on "CPU" at 60Hz.

(which CPU? render what? using how much of the available CPU power? with how many frames of latency allowed?)

Note that I don't fundamentally disagree with you and I cringe when I hear issues dismissed as "premature optimizations" myself.


I think that is fair. I was approaching this from the perspective of "my old machine was responsive, it is becoming less so." Back on the example of Doom, I understood why my machine at the time couldn't run Quake. It would be frustrating to force everyone to that paradigm.


These renderers are not on by default and likely never will be. I have never seen it work that an immediate mode rendering API translated to retained mode becomes faster. It is probably somehow possible, but will require a ton of work and probably changes on the API client side to fix some pathological cases.


> I have never seen it work that an immediate mode rendering API translated to retained mode becomes faster.

I don’t think I get your point here. Gtk 4 is retained-mode whatever renderer you use; Vulkan and OpenGL are immediate-mode (well, kinda) whatever renderer you use. Whatever problems that forces in the new renderers would be just as present in the old ones, wouldn’t they?


Oh, you are right about GTK4! It has switched to a scene graph and retained mode drawing. Somehow, I had never heard of that before. Presumably, it was done to be able to use 3D graphics APIs that all work in retained mode.

Vulkan and OpenGL, though, are in no practical sense immediate mode: You need to draw the whole frame every frame (barring a few exotic extensions for compositors and such), so you need to retain the state of everything in the frame so you can draw it.


Your usage of these terms is completely unconventional and wacky.

> You need to draw the whole frame every frame

This isn’t even true at a high level. You can composite buffers. That’s not exotic. That’s a fundamental operation.

But insofar as needing to actually do all your draw calls at once - that is literally what "immediate" in immediate mode means.

> Vulkan and OpenGL

They don’t specifically retain state - they’re immediate. That’s what immediate mode means. What something higher up does has nothing to do with it.

What would you even consider immediate mode by your definition?

> so you need to retain the state of everything in the frame so you can draw it.

That state can be a procedure and a handful of variables (which in the extreme is all a shader). The point is Vulkan and OpenGL have no say over the nature of that state.


OK, so sorry, I was misusing the terms.

I guess I'm looking at it from a GUI application programmer POV. That OpenGL or Vulkan are, themselves, immediate mode, doesn't matter all that much there. Drawing happens on the GPU, which wants to process large buffers without changing its state in between. An overview of all drawing operations (scene graph, proper use of the term retained mode) with batching and merging is needed to cater to that. Examples: Qt QML scene graph, GTK4 GSK

> You can composite buffers

Compositing buffers is still drawing, though

> What would I consider immediate mode?

Basically, a canvas-type thing. You just scribble wherever whenever (limited by convention and correctness, of course) and what you don't touch stays like it is. Drawing happens on the CPU, which is fairly happy to do small operations at the drop of a hat. No need to remember any state for the benefit of the graphics API. Examples: Qt QPainter, GTK < 4 using Cairo
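A toy illustration of the retained-mode side described above (names invented; real scene graphs such as GSK's do far more): the toolkit keeps the node list between frames, and a renderer pass batches neighbouring nodes that share GPU state into single draw calls.

```python
# Hypothetical scene-graph node, retained between frames by the toolkit.
class Node:
    def __init__(self, kind, texture):
        self.kind, self.texture = kind, texture

def batch(scene):
    """Merge consecutive nodes sharing a texture into one draw call."""
    calls = []
    for node in scene:
        if calls and calls[-1][0] == node.texture:
            calls[-1][1] += 1                # same GPU state: extend the batch
        else:
            calls.append([node.texture, 1])  # state change: start a new call
    return calls

scene = [Node("glyph", "atlas"), Node("glyph", "atlas"), Node("image", "photo")]
print(batch(scene))  # [['atlas', 2], ['photo', 1]]: three nodes, two draw calls
```

This is the overview a GPU wants and a canvas-style API cannot provide: without the retained graph there is nothing to batch.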


GNOME dev team (and by extension GTK) probably don't care that much, iirc most of them use expensive macbooks so many issues get ignored because it "Works on my machine" (e.g. several issues with font rendering that don't affect retina displays)


The font rendering thing is more complicated than that. Subpixel antialiasing doesn't work with animations and I'm pretty sure it interacts poorly with fractional scaling too. The fact that you don't need subpixel antialiasing if you have a HiDPI display just lets them make the tradeoff be "you need a good monitor" instead of having to choose between jaggy text or weird shimmering on animations.


Also, it is impossible to know which subpixel structure the user has.


asking them doesn't count? https://i.stack.imgur.com/g2auK.png


Users might like the "wrong" look. Also, what happens if you connect a new screen or just rotate your current screen?
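A tiny sketch of why rotation breaks subpixel rendering (the 3× horizontal sampling and the RGB-stripe assumption are illustrative, not any toolkit's actual code):

```python
# Hypothetical subpixel rasterizer for an RGB-stripe panel: coverage is
# sampled at 3x horizontal resolution and each triple of samples becomes one
# pixel's (R, G, B) intensities.
def subpixel_row(coverage):
    return [tuple(coverage[i:i + 3]) for i in range(0, len(coverage), 3)]

# A sharp vertical edge, sampled at 3x across three pixels:
cov = [1, 1, 1, 1, 1, 0, 0, 0, 0]
print(subpixel_row(cov))  # [(1, 1, 1), (1, 1, 0), (0, 0, 0)]
# The middle pixel lights only its R and G stripes, tripling edge resolution.
# Rotate the panel 90 degrees and the stripes run vertically, so this same
# mapping paints color fringes: the renderer must know the physical layout.
```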


I hope I don't sound bitter, but most decent graphics engine developers have created renderers that are a couple of generations ahead of the open source GUI toolkit renderers. There are several of us who could truly bring next-gen rendering to the open source desktop; however, we're working for gamedev companies (they pay our bills), and we have no time to contribute to open source stacks. If the community could organise a regular budget to pay for such devs, then you'd see significant renderer and toolkit updates. Same with other open source apps.


A couple of times in the past I have implemented GPU-targeted GUI renderers; here’s an example: https://github.com/Const-me/Vrmac?tab=readme-ov-file#vector-... https://github.com/Const-me/Vrmac/blob/master/Vrmac/Draw/VAA...

2D graphics have very little in common with game engines. The problem is very different in many regards. In 2D, you generally have Bezier and other splines on input, a large amount of overdraw, and textures coming from users that complicate VRAM memory management. OTOH, game engines are solving hard problems which are irrelevant to 2D renderers, like dynamic lighting, volumetric effects, and dynamic environments.


While 3D games are the most common ones these days, there's still plenty of development done on 2D games too :

https://www.factorio.com/blog/post/fff-251

https://www.factorio.com/blog/post/fff-264

https://www.factorio.com/blog/post/fff-281


2D games are mostly rendering sprites. GUI renderers do that too for raster images and font glyphs, but the harder part for GUI is vector 2D graphics.


There is PixiJS, a 2D web graphic engine. It is really, really fast. I can imagine something like this, as the base.


I gotta disagree that PixiJS is fast. I’ve worked with a bunch of 2D graphics engines going back to the Flash days, and I found it’s really easy to hit a wall with PixiJS performance, especially if you are using text and filters. I wouldn’t have much issue with it if the cacheAsBitmap feature was reliable, but I found it buggy as heck and it didn’t help performance as much as you would expect. There is no way I would use PixiJS for a full screen game or mobile game.


Ok, I also found that performance can drop in unexpected ways, requiring workarounds to get it flowing again.

And I don't use cacheAsBitmap, but do my own caching. (Not because I knew of flaws, but because I was already doing it.)

And for text I really recommend BitmapText. That is fast. (But not possible with all use cases, sure)

Also Pixi8 with WebGPU will be stable soon, looking forward to it.

But all in all I am really impressed, I used quite a few other js graphic engines before and Pixi was by far the best. Or which one did you find better?


Can you give examples of better JS renderers?

What is needed for performance of traditional GUI app rendering? I'm particularly interested in table rendering. Glide and Perspective are both canvas based renderers, but I haven't dug into the internals.

[1] https://github.com/glideapps/glide-data-grid

[2] https://github.com/finos/perspective


I am the author of GDG. If you have any questions feel free to let em rip.


PixiJS is the easy problem, blitting a bunch of premade textures to the screen. The much harder problem is getting curveTo() and text and gradients and strokes, etc.


That is all or mostly possible with PixiJS or an extension to my knowledge.

Edit: but then again, Pixi uses the HTML canvas element for text drawing, which uses the browsers text capabilities. So yes, at some point and somewhere those functionalities need to be implemented


Sure canvas can render text, but it's filled with problems and limitations. It doesn't do line wrapping or any of the font rendering correctly. Doesn't do antialiasing correctly. It needs so much hand-holding and manual handling to render semi-decently that basically all canvas apps just opt out and use HTML to render text on top of canvas.

Canvas is almost always the wrong choice if you want to do layouts.


I am a bit skeptical about this. I feel like game UI toolkits and desktop GUI frameworks live in two separate worlds with different expectations. At least that's my humble experience, having used both in my career.

GTK/Qt are usually good or very good with their integration with the OS, accessibility features, keyboard navigation, handling of features like copy/paste, ... These are the kinds of things that game UI toolkits tend to forgo completely since they don't need them, focusing instead on performance, theming, integration with a game engine, ...

Theoretically, you could say that the renderer is agnostic to this, but in practice that is not completely true. There is also the simple fact that you have a limited budget to work on features, and the two camps would rather work on different ones. Having a very fast and accurate renderer is just not as important for desktop GUI frameworks as it is for game UI toolkits.


Text is the sine qua non of GUIs.

Text is also the bane of renderers--there is a reason why we have exactly 3 text shaping engines--Windows, Apple, Harfbuzz. Text is a beast to deal with and is often ill-specified.

Text is also the bane of GPUs--we don't have good GPU-only algorithms for taking a string of text, handing that to the GPU, and having the GPU render that directly to a buffer.

Text is also something that games suck at rendering. SDF (signed distance fields) are considered a good rendering of text in the 3D world and they are blurry as hell.

I do think that the modern GUI world is going in a lot of wrong directions, but you must deal with text accurately to call yourself a real GUI.
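For reference, the SDF trick mentioned above reduces to thresholding a stored distance with a smooth edge; a sketch with a circle standing in for a glyph outline:

```python
# Signed distance to a circle "glyph": negative inside, positive outside.
def sdf_circle(x, y, r=1.0):
    return (x * x + y * y) ** 0.5 - r

def smoothstep(e0, e1, x):
    t = max(0.0, min(1.0, (x - e0) / (e1 - e0)))
    return t * t * (3.0 - 2.0 * t)

def alpha(dist, edge=0.1):
    """Antialiased coverage: soft transition across the outline."""
    return 1.0 - smoothstep(-edge, edge, dist)

print(alpha(sdf_circle(0.0, 0.0)))  # 1.0 (deep inside)
print(alpha(sdf_circle(2.0, 0.0)))  # 0.0 (far outside)
print(alpha(sdf_circle(1.0, 0.0)))  # 0.5 (right on the edge)
```

The blurriness complained about above comes from that fixed `edge` band: at high magnification the stored distance field is too coarse to place the transition precisely, so small text goes soft.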


How many of these game engines are abstracted at a level where you can swap in a PDF or SVG backend? How many of them support CMYK and print units? I'm only scratching the surface of things a GUI renderer needs that a game engine doesn't.

I'm very skeptical that a bunch of game developers are going to whip together something that crushes Skia in performance without sacrificing a ton of capabilities.


"swap in a pdf or svg backend"

I am not sure, if I understand you right, but do you mean rendering to pdf or svg instead of the GPU?

If so, are there real world use cases?

"crushes Skia in performance without sacrificing a ton of capabilities"

Same question, what of those capabilities are really in use and needed? Linux GUIs in general are really not a beacon of light, in terms of performance or usability. I strongly suspect things could be better, if some bloat would be removed.

I am amazed with what is possible with PixiJS, a renderer for the Web, using WebGL and soon WebGPU. Having something simple, but powerful as the base, would be my way to go.


Printers often want PDF or PostScript, or if not, some other format that isn't a regular GPU-supported thing. Games rarely support printing; the main use is taking a screenshot, which uses the OS rather than the game engine, which is great for games. However, if you are writing something where printing is important, you want more control over the output than a screenshot can give (printers tend to be much higher resolution, but you have to deal with paper size - two things that should be passed back to the application, as you can often make adjustments to better show things when you know those limits - there are other compromises in printing too that may matter).


Saving websites as PDF comes to mind. If taking screenshots of windows as SVG was possible I would definitely use it.


A screenshot of a rasterized image saved in SVG is not something I see any use for. It will be a bloated monstrosity.

SVG is vector graphics, when you already have pixels - there is no clear way going back.


SVG files can refer to images for bitmap graphics, only the rectangles, text, etc would be specified as vectors. See https://www.joachim-breitner.de/blog/494-Better_PDF_screensh... for a demo.
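To make the hybrid concrete, here is a hedged sketch (element layout invented for illustration) of what such an SVG screenshot could contain, built as a string:

```python
# Hypothetical "SVG screenshot": window chrome stays vector and real text,
# while raster canvas content is embedded as an <image> (e.g. a data URI).
def svg_screenshot(width, height, canvas_png_uri):
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}">'
        f'<rect width="{width}" height="30" fill="#ddd"/>'   # title bar: vector
        '<text x="8" y="20">Untitled</text>'                 # label: selectable text
        f'<image y="30" width="{width}" height="{height - 30}" '
        f'href="{canvas_png_uri}"/>'                         # raster content
        '</svg>'
    )

doc = svg_screenshot(640, 480, "data:image/png;base64,AAAA")
```

Only the pixels that were genuinely raster to begin with stay raster; everything the scene graph knows to be a rectangle or a glyph run survives as scalable vectors.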


I think he means taking the scenegraph and saving it directly to SVG. Pretty much how it would be done for PDF.


Ah, right. Does not sound too complicated, but it is an entirely separate render path. And apparently it's now supported? But I have never seen it put to use.


Cairo and Skia can do this.


But is there a real world use case, where this actually was put to use?

(Sorry, I am having flashbacks of the X vs. Wayland debate, where it was argued that X is network transparent - except that it hadn't really been for a long time, and no one used it anyway.)


The "Save to PDF" feature in Firefox's print settings uses the Skia-to-PDF renderer on various platforms, I believe.


And for SVG?

Having features is nice, but not if they are not really necessary and come at the cost of the core feature (performant screen renderer).


The problem here, is that you have no clue what the core features of a UI toolkit need to be.

If performance was the primary concern, GTK and QT, would not be generations behind (a claim I am skeptical of.)

UI toolkits have a much larger audience than the tiny corner of the industry you work in.


Well, it just so happens that I built a UI toolkit over the last decade, but yes, with performance in mind and not featuritis. But apparently I have no clue; well, sad to hear that. I'll show myself out.


The use case is usually sending the screen to the printer or saving a document.


It seems super complicated to me :-) cool idea though


I got the impression from above that this is something possible now, yet I have never seen such a capability. And I think the effort would be far greater than the reward of implementing it from scratch..

("Easy" assumed, there is already something there)


Um, isn't that an application feature?


How do you expect applications to implement it without a renderer?


The ^community^ here are people like you - those who also have to pay bills but contribute anyways, in whatever way they can by using the software and occasionally contributing code.

While yes it would be great if a community could raise funds, that coordination job itself would have to become someone's not-paying-the-bills work.

As much as I love Open source/Free/Libre software and am grateful it exists (and contribute to it, when possible), I've long held the belief that it is the pursuit of the privileged. You need to have the privilege of free time, and then the privilege of being able to choose to spend that free time on something that doesn't improve your standard of living, and then the privilege of being able to do it consistently.


Sadly I totally agree: Open Source is the playground of people who can afford it.

I benefited a lot from Open Source in my career and life, so I am very thankful for all contributors (and try to give back in money/time, when I can afford one or the other).

What really annoys me is that my government does not mandate that software built with tax money must be Open Source.

That would go a long way to fund Open Source and improve the quality.


What government is that?


I dunno, all of them? Which governments mandate open source software when calling for bids?


(I wasn't trying to make a point.) As far as I know, that's what initiatives like PMPC¹ are for. I think in Switzerland a law recently passed that seems to go in that direction² (Open Source by default, with some leniency, as far as I can interpret the text). According to this³ OSOR report, something similar happened in Italy in 2019. So I think we're slowly going in that direction in Europe.

¹: https://download.fsfe.org/campaigns/pmpc/PMPC-Modernising-wi...

²: https://www.admin.ch/gov/fr/accueil/documentation/communique...

³: https://joinup.ec.europa.eu/sites/default/files/inline-files...



I do not think it should be mandated. But it should be promoted


>The ^community^ here are people like you - those who also have to pay bills but contribute anyways

In principle, yes. In practice, most of the "community" is paid developers for companies like RedHat. So while they do have to pay the bills, they do so by those FOSS contributions.


True. I recognize that my statement was a bit simplistic.

That said, in the context of what the OP was saying, unfortunately, this is a chicken-and-egg situation. For someone, like the OP, who would like to get paid to do OSS, they'd need to have a reasonably active OSS presence prior to being hired at places like Red Hat. Which is another aspect of my original point of contributing to FLOSS being the pursuit of the privileged.


I'd be very skeptical about that. I work in the game industry, and while our 3D renderers are very good, I have not seen a 2D UI renderer I would feel is at all competitive. What are you using for path and pattern rendering?


I am using RmlUi in my own small game. Not sure how good it is.


> [M]ost decent graphics engine developers have created renderers that are a couple of generations ahead of the open source GUI toolkit renderers.

Step one: get the knowledge out there. Have those developers at least, I don’t know, talked about what makes those toolkits better at GDC?


Game engines can afford to be generations ahead: they might even be targeting specific hardware (a single console release); they typically assume a sandboxed environment that they are in total control of (at best, they'll make a limited and tightly controlled GUI framework for modders); they might have restricted themselves to a limited number of inputs/outputs (a single screen with a specific resolution, gamepad only); they don't have to worry about unknown unknown uses by other developers...

None of these apply to a generalist renderer, therefore it can only "lag behind" the game ones. (Unless maybe we're talking about the "human side of the question": what are the best designs and layouts for a generalist human/machine interface? Here it's the generalist GUIs that I would expect to be a couple of generations ahead - Xerox's labs, Apple's Macintosh, IBM's Common User Access standard, CERN's World Wide Web...)


There are several companies employing devs for this kind of free software work.

If you think you could contribute, you can apply to them.

I guess you will probably find there are already brilliant and competent people working on this stuff and the problem is not so easy as you believe.


You may be surprised to know that the open source Godot game engine has also been adopted by some developers as a GUI toolkit (See Standalone Tools and Applications Made with Godot GUI - https://alfredbaudisch.com/blog/gamedev/godot-engine/standal... ).


Some of the people working on GTK are funded by Red Hat or otherwise. If you're really interested, you could try talking to people.

Edit: In particular, Matthias Clasen, the guy blogging here, has been with Red Hat for many years.


I can't think of a segment of user interfaces worse for accessibility than game development. Generally speaking, the game-centric UI toolkits and immediate-mode GUIs are very pretty but completely lack any accessibility hooks, including the ability for a screen reader to even know there is text present. If GNOME switched to gamedev-style UIs, I would probably just go buy a Macbook.


What do you want the community to do? Pitch in with money from their day jobs so a gamedev savior can come in and make things render slightly faster? At the expense of hideous unmaintainable code that only a gamedev genius could understand?


Some open source projects hire full time developers with donations / sponsor money (e.g. python, zig).

I'm not saying that's the correct approach for GTK, just noting it's not an absurd idea.


Yeah, because there aren't well maintained pro rendering engines /s

And FOSS devs always create non-hideous maintainable code that everybody understands /s


game devs are already relatively underpaid for the value they bring to companies. I don't know how an open source initiative can even afford a single professional graphics developer.

Also, you say they are a couple of generations ahead, but does this kind of software need to be bleeding edge? Even many games don't, and the kinds of software in research labs that do pay even better than gamedev (and of course require PhDs and whatnot).


Why don't you get your company to pay for it?


Why should they, if they are interested in releasing games or a game engine?


Then why should this happen?

> If the community can organise a regular budget to pay for such devs, then you'd see significant renderer and toolkit updates. Same with other open source apps.


Erm, if the Linux community comes together, then it can happen. But the employer of a game engine dev is not part of that community and probably simply does not care.

So if you care and feel the call, go organize something. I care, but have other duties.


That sounds a lot like modern extortion. I sincerely hope this attitude does not creep into FLOSS and Open Source more than it already has. Imagine a volunteer firefighter who has found a well-paying job announcing, "If you'd paid me more, your house wouldn't have burnt down."


A better analogy is volunteer firefighters not cutting it, houses burning down left and right, and a professional firefighter saying "I'd like to come work in your district and help with firefighting, and I could do a much better job than the volunteer guys, but I need to get paid to work".

>I sincerely hope this attitude does not creep into FLOSS and Open Source more than it already has

If you took away the people who work on FOSS because they're paid to - people who otherwise wouldn't contribute, or would at a tenth of the rate - you'd remove the most prolific and important maintainers and contributors of lots of huge FOSS projects.


Since I just recently completed my certification as a Firefighter I/II (also Wildland FF 2), and have been an open source developer for more than 35 years, with the last 25 years of that being full time and the last 15 having FLOSS as my actual income source, I'd like to comment.

The primary difference between professional firefighters and their volunteer counterparts is hours on the job. When I graduated from the state academy, I knew as much about firefighting as any of my fellow graduates and had been through precisely the same training requirements. However, the gap is going to open up very rapidly, since the career guys will be doing regular shifts every week, whereas I will be answering 1-3 calls a month on average, most of which will not be fire-related. A year from now, the career guys will be even more familiar with everything we learned during our certification training and more, while I will be working hard to remember any of it.

So the question is: how well does this analogy hold for s/w development?

It doesn't.

First of all, the gap between most proprietary development outcomes and their FLOSS equivalents has more to do with UI/UX design questions than actual coding skills. At the source code level, it's generally proprietary projects that are "burning down left and right" (shoddily and quickly built, with inadequate attention to engineering and insufficient caring about one's work due to marketing deadlines).

Secondly, the difference between proprietary developers and their FLOSS equivalents in terms of hours of experience is not deterministic. It's going to be a function of employers, personalities, life situation. Plenty of (typically younger) FLOSS developers squeeze in more quality hours on their FLOSS work than their proprietary cousins do.

Thirdly, a firefighter only gets to put out the fires that actually happen. A software developer can pick their own problems and goals and work on them at any time. There's no relationship between the outside world and your ability to advance your skills and knowledge.


Not a volunteer fire department, but this comes to mind: https://www.npr.org/sections/thetwo-way/2010/10/08/130436382...


The recent "The things nobody wants to pay for in open source (lwn.net)"

https://news.ycombinator.com/item?id=39151000

comes to mind...


[flagged]


If you actually cared about that bug, you would have followed it, and seen that it has been closed.

But you just want to shit on GTK for no reason. It's an open source project, so they owe you nothing, and nothing was stopping you from contributing your own improvements.


I have been following it for almost a decade now. I am in #gtk idling all the time. I use gtk3 and gtk4 based desktop environments. The dozens of times the bug has been submitted may be closed, but it's a closed wontfix, despite the actual bug still remaining.

The first instance of the bug report happened in 2014 when mclassen introduced it and refused to fix it. The most recent instance of the bug report happened 7 months ago when I proved to them they'd ported the bug from gtk3 to gtk4 too: https://gitlab.gnome.org/GNOME/gtk/-/issues/5872 I even showed them a partial patch for the problem for gtkfilechooserwidget.c to restore default text entry input but the filechooser is such spaghetti that everyone is afraid to change anything.

So, yeah, the bug still exists. And it really is more important than another renderer. This is basic functionality that's been missing for a decade.


I believe you haven't tried to make contributions to something like GTK, especially ones they have no roadmap item for. Modern big open source projects are an entangled mess of subsystems, dependencies, opinions, and sometimes plain ego, so even if you're able to program the library, there's little chance of getting the change accepted into mainstream, and there's no chance you'll be able to maintain a whole parallel distribution that would make this change effective at least for you. Over the years I learned that this claim:

they owe you nothing, and nothing was stopping you from contributing your own improvements

lands inside the [ignorance..rudeness..hypocrisy] triangle most of the time. It would be more agreeable if stated as:

they owe you nothing, and nothing was stopping you from contributing approved improvements from their backlog


The paid gtk devs refuse to accept patches to gtkfilechooserwidget.c (it's "frozen"), you'd know this if you yourself tried to make contributions to gtk. They owe us nothing, but the fact remains they have been removing features (like respecting the gsettings (org.gtk.Settings.FileChooser location-mode) that would allow users to fix this bug). And they refuse to fix it themselves or look at patches from others. I ask once per year. Even just 2 lines in gtk/gtk/gtkfilechooserwidget.c swapping in priv->location_mode = LOCATION_MODE_FILENAME_ENTRY; would help.

The same bug exists in gtk4 too and they also refuse to fix it there. https://gitlab.gnome.org/GNOME/gtk/-/issues/5872

So, to conclude, everything you said is wrong re: this specific instance even if it may apply generally.


I'm a little confused whether your comment is the answer to mine or someone else's, because we seem to agree on all points. In any case, I left G* lands ages ago (gtk2->3) and can only speak for that time. It coincided with the issues around their development policies.


Ah, I didn't realize the italics were you quoting parent. woops.


> Modern big open source projects are an entangled mess of subsystems, dependencies, opinions and sometimes plain ego

Do they differ in this respect from "Modern big proprietary projects" ?


No, why?


Then why single out "open source" ?

If this is a characteristic of "modern big software projects" then it has nothing specifically to do with whatever happens or doesn't happen in open source development.


But you can't send patches to closed source projects by definition. So I see it as the scope of this discussion, not as singling something out.


At the end of the day we're the ones that have to put up with the rough edges of this software. File picker bug closed or not, the GTK file picker sucks to use. I don't have the ability to just swap it out in programs that use it, at least not within a reasonable amount of time or without significant maintenance of a forked codebase.


People aren't shitting on GTK for no reason. We/I do it because new releases are flawed and GTK is often what you get if you buy a laptop with Linux preinstalled and hardware-certified. Even several months ago, if I scrolled to the bottom of Nautilus with a MX Master mouse, I couldn't scroll back up.

The "nothing" stopping me from contributing is that GNOME source is in an inconvenient location, is more difficult to understand coming from non-GUI microcontroller programming than for the core devs, and is a hassle to build. When I try to install or compile dependencies, I seem to encounter a new error every time, like GNOME Builder not respecting a corporate proxy while using WSL even though my environment variables are set correctly. (So now you're learning about "curl -vvv" and maybe installing MSYS instead of figuring out why your mouse doesn't scroll up. Make sense yet?)

Beyond that, I don't want to turn my computer into a machine with Flatpak and multiple GB of devel files just so I can fix the scroll bar that should already work and might be caused by libinput or kernel source.

Something is fundamentally wrong if the GTK scrollbar---one of just a handful of available widgets that would be put in countless GTK programs---doesn't work in a GNOME app.

The last 3 Linux devices I've owned have been an overpriced garbage heap of Certified hardware---running too hot with GPU glitches, OS hanging, everyone's special take on application packaging (aka No More Ubuntu for me), awful sleep and power management, terrible peripheral support, having to learn about journalctl and a laundry list of kernel parameters...

The thing in common with those terrible modern devices that weren't around on my much-loved XPS 13 9370?

Wayland

GTK 4

Pipewire

S0ix

Kernel 6+

I'm not necessarily blaming one specific new technology, but I just today took a video of my laptop (2500 euros, shipped with Linux by a major manufacturer) evidently having graphics glitches just from opening Firefox (on a Radeon, just to clear up that not every GPU problem on Linux is Nvidia, and that not everyone using AMD is enamored). I have no idea if it's a software or hardware problem - it also happens if I change the UI scale to 125% or plug in a dock - but it's not even the first of ten major problems this laptop has had.

I'd gladly RMA the thing, but then I don't know what to develop on except maybe an Android tablet cross-compiling to x86 plugged into a dock.

My grandpa's 286 was more reliable than an off-the-shelf Linux or Windows laptop today, and the only MBP I ever used left a sour taste.


> My grandpa's 286 was more reliable than an off-the-shelf Linux or Windows laptop today

Perhaps, but your grandpa's 286 couldn't have animated emojis made out of gamma-correct antialiased vectors in its GUI's button captions :-P


> If you actually cared about that bug, you would have followed it, and seen that it has been closed.

You mean that the GTK/GNOME people declare they will never fix their file chooser? I've pretty much figured that out. That in itself is more of a problem than the state of the file chooser, because fixing a faulty design is one thing; dogmatic insistence on poor UI is another thing altogether.

> But you just want to shit of GTK for no reason.

I disparage the GTK file chooser because it's horrible to use. I don't know anybody who works on GTK. I don't write software which competes with GTK (I'm a GPU guy). I have better things to do with my time than bad-mouth software projects I have no personal interests for or against.

What's actually happening is that some people are holding their hands over your eyes, refusing to see that (some of the) GTK UI elements have been utterly and intentionally broken for years or decades. And while GNOME apps have alternatives, GTK is, unfortunately, popular among apps which don't really have alternatives, so I'm stuck with it.

> It's an open source project, so they owe you nothing

Huh? It's a project of human society to serve people's needs. So they owe users and developers to do a decent job and address those needs. You seem to be suggesting that if I don't personally pay them then I should just shut up and accept their choices.

I would never ever say something like that to a user of my FOSS - and I do owe my users, a lot.

> and nothing was stopping you from contributing your own improvements.

1. _Everything_ is stopping me from contributing an improvement. The idea is rejected on principle.

2. I understand you are offering to volunteer to take up maintenance of the FOSS work I'm doing while I go off and start getting into GTK, with which I have no experience. Or perhaps you want to take over my day job so I can have more spare time?


The 20 year old issue where thumbnails weren't shown in the file chooser?

I believe that's fixed: https://blog.gtk.org/2022/12/15/a-grid-for-the-file-chooser/


It would be enough if they restored the "recently used folders" functionality.


And not that damn subpath searching on key press behaviour.

For a long time I have joked that the Gnome/Gtk design decisions are made on mushrooms; it's quite sad that that's not an unrealistic cause of this bizarre behaviour.


On the other hand, maybe they should introduce mushrooms into the design decisions. Maybe it'd be more reasonably designed.


Why isn't a lot of this just handled by the GPU?


GPUs are pretty low level so it takes a lot of code to render anything.
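As an illustration of how low-level this gets: even "just a rounded rectangle" typically means uploading geometry and then evaluating a signed-distance function per pixel in a fragment shader, with anti-aliasing done by blending near distance zero. Below is the standard rounded-box SDF, written in Python rather than GLSL so it's easy to poke at (function and parameter names are mine, not from any toolkit):

```python
from math import hypot

def rounded_rect_sdf(px, py, cx, cy, half_w, half_h, radius):
    """Signed distance from point (px, py) to a rounded rectangle
    centered at (cx, cy): negative inside, zero on the edge, positive outside."""
    # Fold the point into the first quadrant and shrink the box by the corner radius.
    qx = abs(px - cx) - (half_w - radius)
    qy = abs(py - cy) - (half_h - radius)
    outside = hypot(max(qx, 0.0), max(qy, 0.0))  # distance when outside the shrunken box
    inside = min(max(qx, qy), 0.0)               # distance when inside it
    return outside + inside - radius

# A fragment shader would run this per pixel and feed the distance into
# something like smoothstep over ~1px around zero for anti-aliasing.
d_center = rounded_rect_sdf(50, 20, 50, 20, 40, 15, 5)  # well inside -> negative
d_far = rounded_rect_sdf(200, 20, 50, 20, 40, 15, 5)    # far outside -> positive
```

And that is just one shape: a general toolkit renderer also needs pipelines for glyphs, gradients, clips, blurs, and so on, which is where all that code goes.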


Just seems odd that it can't handle, for example, the fractional scaling and anti-aliasing.

Thanks for at least replying, everyone else was just downvoting.



