So you are at the first mile of that road...
On the second mile you will discover that X11 is just a protocol for presenting bitmaps on monitor surfaces, plus a generator of events coming from input devices.
So on the third mile you will decide that you need a system of widgets, and you will get a tree of HWNDs or GtkWidget*s - a UI DOM element tree as the ultimate abstraction.
Somewhere further along the way you will come up with the bright idea that the look and feel of those UI bricks in the tree needs to be customizable. So you will try to invent a style definition system/language. And CSS will be born again.
As a result, at N-th mile, you will end up with something close to Sciter/Go : https://github.com/sciter-sdk/go-sciter that already works not only on X11 but on Windows and Mac too.
Motif widgets on System 7 and Windows NT/2000? No thanks.
Back when Tk still mattered, even its attempt to emulate the Win32 look and feel was only partially successful.
I very much doubt that it has improved.
Solid tech, but c'est la vie, software still follows fashion, as you so amusingly showed :)
If he arrives at QML then he will be doing something awesome.
Naturally with such culture one gets Electron instead.
Want to earn money with the work of Qt devs? Give them something back so they can pay their bills.
That's absolutely not mandatory under Qt's license (though a nice thing to do).
Besides, a non-negligible part of the contributions to Qt aren't made by The Qt Company but by others such as KDAB, which won't see a cent.
My point was about those that want to use Qt as free beer, don't contribute anything back and even feel entitled to complain on Qt Creator and Qt bug reports.
I found that the model is good but the problem is that the Qt commercial licenses are quite expensive. You would have to reach really far to find an example of a product with a greater per-developer cost.
What are they "pretty cheap" when compared to?
Qt more expensive than Visual Studio Ultimate?!?
In my naivete I would have said a decade ago that a single author of traditional shareware-type stuff should just include their source code. After seeing what happened to the Paint.Net guy, people taking his work and just renaming it (as well as removing credit and license info) and reselling it, I'm afraid closed source is still the way to go if you want to not get ripped off while distributing Windows consumer software. So the high cost of the Qt license becomes relevant there.
It's fine, it's their right to price their software however they like, etc etc. It's just a scary commitment if you want to experiment with putting a small product out there.
> Qt more expensive than Visual Studio Ultimate?!?
They're just calling the top VS tier "Enterprise" now. The subscription version of VS Enterprise (which includes Office and Windows all the way back to XP) is $3k a year now.
Most of it is licensed under LGPLv3, some under GPLv3, and there are commercial licenses available for almost all of it.
They need to make money from somewhere; supermarkets never give away stuff for free, whereas programmers do.
Want open source stuff? Fine, but then pay the developers that created the software for their efforts.
Note I say the ones that created the software, not the ones that came after, forked the repository, and get the money instead of the real authors.
That is why FOSS tools for RAD desktop development are so behind enterprise tooling.
> include – C and C++ include files that define the public Sciter engine API: window level functions, DOM access methods and utilities.
You may get the usual offhand opinions when golang is involved - "Rust ...", "Yeah, but Go doesn't ...", etc.
I do wish you had a timestamp of your article for reference in the future.
Well, there are third party HN reply notifiers, sounds like someone should make Remind-Me-HN. Then again, now that I've said that, I suspect there are probably plenty of services that allow you to supply a note and email you at a later date (other than just a calendar app...)
"it by-design blocks, for inane security reasons" or "for inane security reasons, my favourite feature of sdtui, which is X11 selection stealing". Either way, ik curious what the inane security reasons are. Also, why is blocking bad in Wayland?
It seems like one of the biggest complaints about Wayland is that it doesn't work over a network.
Does anyone actually work that way? I've tried it a few times, and every time I tried it it turned out to be unusably slow.
Having said that, I don't know if there are working VNC implementations for Wayland. Due to Wayland's security architecture, a VNC server (or any other thing that grabs screen contents) requires permission from (and thus cooperation with) the compositor.
OS X replaced Display PostScript with Quartz, which uses a PDF-like architecture.
Wayland is pretty much a project whose unstated goal is not only to raze X to the ground, but also to salt the earth where it once stood so that none of its core principles survived the transition. That's why the security reasons are "inane".
It would be possible, and desirable, to design a secure X server with zero alterations to the X protocol itself, and most clients wouldn't know the difference. But again, security isn't really the goal. The goal is to destroy X so thoroughly that no one builds anything on top of X, even from a conceptual standpoint. Otherwise they wouldn't have gone to the trouble of deliberately eliminating network transparency, server side rendering primitives, window managers, etc.
What I'm stuck on is the whyyyy. X has its faults, most definitely, but... X works for quite a lot of people, too.
I've found that one of the most helpful additions to my worldview has been a sense of implementational balance - in particular, the understanding that reacting to perceived extremes by counterbalancing all the way toward the opposing extreme does not produce balance; it just drowns out the old extreme with all the noise made by the new one, unless the counterbalancing effort incorporates an embrace-and-extend model.
I get the impression Wayland is only being worked on by people who've been heavily bitten by X, been influenced by those who have, or been influenced by a rumor-amplification chain of toxicity about X.
The 3rd is both the saddest and likeliest part, but understandably so - there was a nontrivial amount of X11 vitriol back in the 90s and 00s.
I think this is primarily because the hardware ecosystem and the various vendor implementations were all indeed atrociously broken, so X had to be a bit broken too in order to make everything work. Graphics drivers (PCI access!) did their work in userspace, so X had to run as root. The kernel had no idea about the video mode; it was just told "if you try to print anything to the screen right now you'll confuse the GPU badly, so please just don't", and X handled the rest. GPUs were going through formative times (circa 2006) with the introduction of fundamental OpenGL functionality (now used for compositing). X wasn't modular, _and_ there were two vendors. And of course 256-color paletting was still something that got talked about and needed to be maintained through the whole stack.
These were simply growing pains.
Today, X.Org is modular, which is a small mercy for everyone's sanity; KMS simplifies driver development (as in, X doesn't have to be involved at all anymore); GPUs are somewhat (!) more stable and offer a more consistent GL API; nobody really complains when the display doesn't support <16bpp (i.e. requiring a paletted display) anymore. The list goes on.
Things are absolutely not perfect... who knows, there may be an XInput3 :)... but things are on fire significantly less.
So, it is IMO utterly unreasonable to lump the good parts of X in with the bits that maybe made a few core devs gnash their teeth, and to kill the whole thing out of guilt by association!
Sadly it looks like this is what we're stuck with.
So, X11 will live on - but as a kind of negative space that future software historians will puzzle over and go "why did this Wayland thing state that they avoided such an oddly specific set of functionality?" This kind of negates the devs' apparent desire to make everyone forget it, haha. If they'd just taken the good parts and made something 11X better than X11...
For more details I recommend Daniel Stone's LCA talk that was linked by emersion elsewhere in this thread.
I’m glad people decided to put their heads together to try to collaborate on a new standard, and horrified they decided to make it Linux-only.
Hard to believe that 15 years have already passed since I heard about it.
On the one hand, VNC suddenly just became obsolete. On the other hand... Ubuntu Core is playing with getting Webkit-GTK running directly on top of DRI (via Wayland), and I hope like mad there's never an alternate reality where Chromium takes the place of X11... eep
(I mean it already kind of has, with Electron :S)
 The terminology might seem weird to you with the server-side being the user's side, or what you'd usually term the "client", but it makes sense - you run the X11 server on your PC, then connect over the network to a remote host using e.g. SSH, so now you have an X11 server and an SSH client running on your PC, connected to an SSH server on the remote. Then you open a socket on the remote side connected to the X11 socket on the local side via the SSH tunnel and remote programs initiate connections to your X11 server. You type at your terminal, that gets picked up by your SSH client and sent to the SSH server, the SSH server relays your typing to the remote shell, which execs the remote X11 client, which connects to your X11 server. Simple!
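The round trip described above boils down to a couple of commands (hypothetical host names, for illustration):

```shell
# On your local machine, which runs the X11 server:
ssh -X user@remote.example.com    # -Y for "trusted" forwarding

# In the remote shell SSH opens, DISPLAY is already set
# to point back through the tunnel:
echo "$DISPLAY"                   # something like localhost:10.0

# Remote X11 clients now render on your local screen:
xclock &
```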
I use it pretty much daily at work to run interactive programs on beefy data-reduction machines (just ssh in with X11 forwarding turned on and it just works). Over the local network it works pretty much as well as locally - light-years ahead of VNC or RDP - at least with programs that don't use UI toolkits built around the Wayland model.
I spend most of my time in Emacs on Linux. At an old job I had to use a Windows desktop, so I used a full screen Cygwin X server to display an Emacs window running from one of our Linux test servers. This also had the advantage that Emacs would keep running in Screen, even after my dev machine was turned off.
Note that you can't do this using GTK, due to a long standing bug https://gitlab.gnome.org/GNOME/gtk/issues/221 . I avoid this by compiling Emacs to use the "lucid" toolkit.
Note that Emacs also works in a terminal, so I could just use SSH or MOSH rather than X11, but I much prefer the GUI version.
If I understand correctly that is the next gen protocol for this type of stuff, correct me if I'm wrong.
The intended meaning is almost certainly "It blocks X11 selection stealing for inane security reasons."
Of course, the elephant in the room is Electron, which in a way showed how much a new take was required in this area but also has a lot of limitations wrt app size, processing power etc.
The one I have been following (or rather waiting eagerly) is Scenic by Boyd Multerer which was incidentally announced / released yesterday after being shown in a very early state last year at ElixirConf. I find the approach of leveraging Elixir and OTP for GUI very interesting and the video should give you a good idea 
Don't forget the quite popular https://github.com/andlabs/ui, albeit missing several needed widgets (but work is active there and on the base C lib at https://github.com/andlabs/libui). Granted, it uses Cgo, but that should not be a deal breaker for interfacing with the native OS libs.
> First, people have naturally written bindings for Qt, GTK+, ui, ...
All these toolkits look out of place on any platform.
This appears to be X11-specific, and X11 essentially does not have native platform controls. Also if you're in X11 you've quite likely already given up any real hope of all your windows looking the same. You also seem to be casually assuming this would be cross-platform, but this would be the equivalent of implementing something in raw Win32 API calls. There's zero cross-platform concerns here, on the grounds it'll never be cross-platform. In fact, given that it's pretty likely the move to Wayland would be completed by the Linux community before this project could reach a point where it would be something you might consider using seriously for a non-trivial project, it's debatable whether it's even good for one platform.
Sometimes there's violating your own rules to push the State of the Art forward and this will eventually become the new standard. Then there's violating your own rules because you don't value consistency.
Most organizations with enough software are pretty guilty of that latter one, but destroying any semblance of consistency because it is economically convenient for the development team doesn't seem like a step forward to me.
Also, a big part of an individual’s platform preference boils down to things like conventions, look + feel, workflow, etc. By using electron one is tossing these choices made by the user in the garbage bin in favor of whatever the developer feels like supporting or what the designer thinks would be fun to design.
Besides, I wasn't talking specifically about electron-based apps, I was talking about apps in general which don't use the native widget toolkits.
Also, I can't speak for everyone, but my preference for a platform boils down to whether it provides a Unix shell and how pushy it is about forcing me to download updates or upload personal information. Consistent widget look and feel just doesn't matter when I spend most of my time in a browser or shell anyways.
Other multi-platform toolkits do no better. They all violate the platform's conventions. Qt, Swing, GTK and co. have their own look and feel and conventions that do not match the platform they run on, so I don't see how the web is worse in that aspect. Don't single out web tech on that specific point.
That said, I favor projects that have per-platform true native clients with a platform agnostic core over those that chase the “dream” of write once run everywhere. Transmission is a great example of how well this can work.
More and more Windows applications are theming/drawing their own look and feel onto their UIs, aiming for consistency across the organization's applications rather than consistency with Windows.
Try this (the two mentioned commands come with the gtk3 package):
- In one terminal, run 'broadwayd'
- In another terminal, run 'GDK_BACKEND=broadway BROADWAY_DISPLAY=:0 gtk3-demo'
- In _your Web browser_, go to http://localhost:8080/
- Reattach jaw
What could be a real use case for that
This has already happened for ALSA with some applications. If you want to use BlueZ or Firefox, your options are either PulseAudio or no sound at all under Linux.
Keep in mind that Wayland has been designed by people who've been maintaining X11 for years.
>it also offers no mode that would work over a network
X11 has a huge number of issues when it comes to network transparency. tl;dr is: it's not designed for this, you should use a proper remote desktop protocol instead. More info:
>Of that, EGL even requires Mesa and thus Cgo.
That's not true anymore. You can use the DMA-BUF protocol instead of wl_drm. I'm not sure how to export a DMA-BUF without Cgo though… But that's a different issue. If you're going to use the GPU you'll probably need Cgo anyway.
X is very much designed for network transparency. The modern extensions that sling around pixmaps for every little graphical operation are what isn't. I can still remotely run X11R5 clients thanks to X and its network support.
 I guess we're supposed to ignore the fact that the X11 server is a massive C application.
Might be best to port this to Go?
Go is maybe past peak hype, but it dominates in the cloud/containers space and is massively popular.
I don't like it, but this statement still seems odd.
I wonder if it's the little things about the experience that add up to that - e.g. the usual approach of statically linking all Go code into a single binary leads to no deployment dependencies, and the Go community considers that a good thing; but the moment you start invoking C code, you need to drag the corresponding .so along. Or maybe it's the part where the weird stack structure necessary to enable goroutines also impedes FFI performance, and Go is naturally fast enough that the penalty is noticeable.
Either way, the Go ecosystem feels... deliberately insular, for the lack of better term?
Keeping parameters straight between Cgo and Go is another fine kick in the nuts. That's true with almost every FFI of course, but with FFI from Python to C, there's usually a little performance boost to make up for the pain.
Go however, is fast enough to begin with. FFI is all pain there.
>Yeah, but your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should.
Sorry couldn't resist. I'm a Systems Admin. An old-fart at that. Every single encounter I have had with Go has been a burning pile of unpleasantness. Most recently, the Go-Ethereum client.
So, referring back to the quote; could we just not shoehorn this into everything? It may be fast but it's fragile FWIW.
The ironic thing is what you described ("fast but fragile") better describes the traditional systems languages that are used to build X11 components: C and C++. If I had to describe Go as a systems language I'd say it was the exact opposite: slow but stable. Sadly it's also pretty inflexible for any real low level stuff (in my experience).
Which is what the poster above you meant by fragile. C and C++ rarely break in most spaces, these days. I can take a C library and expect it to build and just work 99% of the time, and I can build something in those languages and expect it to be flexible enough to work most of the time as well.
You cannot imagine how many times I've heard that GCC upgrades broke builds. I've never heard of Go upgrades breaking builds.
Go does have its shortcomings, I'd be the first to admit that, but the actual reason I started writing Go was that it was far less brittle for writing portable code than C++. At the time I was doing a fair amount of cross-compilation for other CPU architectures as well as different host OSs (Rust wasn't mature at that point, .NET wasn't open source, and I've never enjoyed working with Java, so I thought I'd give Go a try).
I can't really speak very much for C++ systems, as I understand the dominant build system there is CMake, which personally I have found completely abhorrent from a user perspective. (I once tried to build a 32-bit program on a 64-bit system, and CMake completely and totally refused to do so! There was no override flag that actually affected the build, and it didn't look as if there was any standard place to inject flags to the compiler; I still have no idea how one goes about doing it, to be quite honest.)
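For what it's worth, the usual (if under-documented) way to inject such flags is through CMake's cache variables on the configure line - a sketch, assuming a GCC toolchain with 32-bit multilib support installed:

```shell
# Pass -m32 to the compilers and linker via cache variables
# (needs multilib packages, e.g. gcc-multilib on Debian/Ubuntu):
cmake -DCMAKE_C_FLAGS=-m32 \
      -DCMAKE_CXX_FLAGS=-m32 \
      -DCMAKE_EXE_LINKER_FLAGS=-m32 ..
make
```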
Can you elaborate?
> It may be fast but it's fragile FWIW.
Go explicitly focuses on language stability over fancy new features.
I think that might be the problem... C is probably the most stable language around in terms of features, but it is probably the language that is most prone to bugs and security flaws that is in common use.
I'm left to wonder what you imagine would result in accidentally not handling errors in Go.
I deploy Go code all day long to various clusters consisting of hundreds of machines, and I'll take whatever Go deployment issues you can throw at me all day long, any day, over "simple" python/ruby issues.
It's all been moot since I went with Docker anyway.
I've worked on systems that allocated and freed hundreds of GB every 10 minutes, written in Go with no such issues.
It really sounds like there was something not-quite-right in there. Might not be too hard for someone with time + clue to spot it. :)
I've seen this happen in Go multiple times when people forget to close the response body after making an HTTP request.
They come with no associated bits to hook them into the rest of the system: e.g. no launch/process-monitoring pieces (start-at-boot and similar), and no metadata or "package info" for being able to pull down upgrades along with other system packages.
Go stuff is fine for a lot of deployment cases, but in production situations the rest of the er... crap that normal packages come with does serve some important purposes. At least on *nix systems. :)