This is a great positive example of the (often negatively presented) double-edged sword that is extreme backwards compatibility.
X11 is an incredibly backwards compatible system, but this comes at the cost of positively absurd code complexity. I remember reading some release notes from a new version a few years ago that said 100,000+ lines of deprecated code were removed, and that was barely a drop in the bucket.
And, of course, this is why projects like Wayland and Mir (Canonical's new display server) have been born — it's simply too cumbersome (and in some cases, impossible) to innovate on a behemoth like X11.
But people forget why X11 has such a giant codebase. It's not just bloat — a lot of it exists for a very good reason, and this article is an excellent illustration of what that is.
I will never understand why people insist on making new clients and servers that are backward-compatible with terribly old, deprecated protocols, instead of just preserving the old clients and servers that spoke those protocols natively within some sort of sandboxed emulation system.
Take Trident (IE's rendering engine), for example. Why can only one version of IE be installed at a time? Why does IE11 come with IE7/8/9/10 "compatibility modes" (which, much as they might claim, aren't faithful recreations of IE7/8/9/10 behavior)? Wouldn't it make more sense to just ship an IE "browser chrome" that could load up the actual rendering engine DLLs from IE6-through-11 (similar to how Google made IE connect to Chrome Frame), with increasing levels of virtualization and sandboxing as you go further back to cope with the difference in environments?
Apple and Microsoft have both got this right a few times: Apple had Rosetta for PowerPC emulation; Windows had WOW16 and WOW64. But besides these isolated instances, it really isn't a common idea.
EDIT: I didn't see your edit which gave Apple's Rosetta and X11 as examples. Those are good counter-examples to my point. But then again, those strategies were not done by X11 itself, rather by compatibility layers as you suggested. And in fact this is precisely what a lot of companies are doing, for example the XMir project (http://mjg59.dreamwidth.org/26254.html).
I agree that that's a great strategy in general, but I don't think it translates well to display servers.
You can't just "put it in a sandbox". You need driver support, and you'd need a sandboxed environment sophisticated enough to deal with all the idiosyncrasies of the X protocol.
Even running an old version of X11/UNIX in a virtual machine is of limited benefit, because X11's core functionality is often tightly coupled with drivers and other things that would need a lot of special wrappers and hacks to work properly.
I'm not saying it wouldn't work (and I am not an X11 developer), but I do think there are extremely compelling and legitimate reasons for people to not follow your suggestion (at least in this case).
Why not make a clean break? None of this sandbox stuff.
X11 2: Electric Boogaloo - The list of major graphical Linux programs in active use is very finite, and most of them don't talk to X directly but rely on toolkits like GTK+ or Qt, which are libraries with frequent churn anyway. No?
I bet if you could get the top 30 projects to target your clean, shiny X replacement in one fell swoop, you'd end up most of the way there in terms of adoption within a few years. Linux is command-line-centric and frequently updated, after all, with users who expect things to break anyway. "Old" X odds and ends that won't be updated can be thought of much like Java applets are thought of nowadays.
Oh, you need that abandoned Chemistry application from 2006 that depends on X11? Spin up a VM. We can afford to be more pragmatic nowadays. Nobody expects to run 5-year-old apps on their smartphones, but desktop Linux can't afford a clean break once every two decades?
You're looking for the Wayland project. Excellent implementation. They reuse lots of ancillary X11 code too, like the input code from X.org, which has the virtue of working with a billion weird language/input/hardware combinations.
You can run X applications on top of it, transparently. They perform just as well.
GTK+ and Qt have been ported to it, and GNOME and KDE are working hard at getting full ports over.
Canonical, of Ubuntu fame, has recently started working on a competitor, following the same strategy, called Mir. They've ported Qt to it, but are generally lagging. Wayland has the advantage of being crewed almost entirely by old X.org hands, who know a lot about what they're doing.
> Oh you need that abandoned Chemistry application from 2006 that depends on X11? Spin up a VM. We can afford to be more pragmatic nowadays.
That was exactly what I was suggesting with "sandbox stuff"--except that the OS can provide a stub library that recognizes what that "abandoned Chemistry application from 2006" is asking for, and spin up the VM itself, in the background, making it look as if it was just a compatibility layer. See, for example, Windows 7/8's "Windows XP Mode" -- or, as I mentioned before, Rosetta.
That's not as easy as it sounds. In a desktop GUI, customers expect seamless integration between such sandboxes. Making the clipboard and drag and drop work two-way between such VMs can be a lot of work. Even for simple text, that may involve encoding conversions. That, in turn, means that the host must be able to infer what encoding the guest expects. Styled text and graphics are way harder.
The VMs also must react to changes in the parent OS such as a change in the keyboard layout, two apps running in different VMs may both want to have some control over hardware, etc.
The approach works well in Windows because Microsoft spends lots of time on it and because the compatibility changes aren't that great.
Making the clipboard and drag-and-drop work between any two X11 programs running on the same server is a hell of a lot of work, and many times simply impossible. So why are your goals for sandboxing so impossibly high?
Because I want my system to work, unlike that example you give. And impossibly? It worked fine in Apple's 68k-to-PPC and PPC-to-x86 transitions, across the various shims Microsoft has for all kinds of old software, and may even work fine across the various VM hosts running on Mac OS X (disclaimer: I have little experience with those).
> Oh you need that abandoned Chemistry application from 2006 that depends on X11? Spin up a VM.
No, just load a compatibility library, possibly in the form of a wrapper program. If Wine can run Windows programs on Linux by translating Win32 to POSIX and X11, which it can to a very usable extent, something equivalent can translate X11 library calls to whatever the new library speaks, especially since it will be able to use whatever X11 code it needs.
Sandboxing doesn't remove the need to provide ongoing security maintenance for a Web browser engine. For one, sandboxes can be attacked through whatever IPC mechanism you provide (e.g. Pwnium). But more importantly, the Web browser engine enforces security mechanisms that OS-level sandboxing does not address (e.g. the same-origin policy and history sniffing countermeasures).
> Back to that two page function. Yes, I know, it's just a simple function to display a window, but it has grown little hairs and stuff on it and nobody knows why. Well, I'll tell you why: those are bug fixes. One of them fixes that bug that Nancy had when she tried to install the thing on a computer that didn't have Internet Explorer. Another one fixes that bug that occurs in low memory conditions. Another one fixes that bug that occurred when the file is on a floppy disk and the user yanks out the disk in the middle. That LoadLibrary call is ugly but it makes the code work on old versions of Windows 95.
This is so absolutely true. What nobody recognizes is that most of these really fundamental systems we run on - UNIX system calls, our window managers, our shells, etc. - were designed and implemented in an era when supercomputers couldn't match your Nexus 5 on most computing performance metrics.
They didn't have the luxury of loading a few thousand kilobytes of API into memory just to act as a single layer of indirection over the underlying implementation. They worked in the confines of bytes of memory, not gigabytes.
We can afford to be generic, to make extensible, runtime-programmable interfaces and runtime-evaluation-style dynamism, because we have the performance necessary. But our core APIs are still written like it's 1980.
My only concern is that over the years I've seen everything go in cycles. So maybe 8 MB is not a lot of RAM for a computer nowadays; a few years ago it was a lot for a router. I like my OpenWRT router that runs Linux. Before that, the same could be said for mobile computers with GSM modules (cellphones).
I'd like to think that Linux will continue to run on machines with at most 4 MB of RAM and not much processing capacity, because if history keeps repeating itself we're going to keep on inventing new devices with those constraints.
> But our core APIs are still written like it's 1980.
Exactly what level of abstraction is good for the "core APIs"? Something actually has to send a series of bytes to be written to the disk, even if a bunch of serialization and encoding abstractions are written on top of that. If the latter is the "core", what's the less abstract stuff inside that?
Amusingly enough, Wayland is already developing backwards-compatibility cruft even though it's not shipping anywhere yet. The core Wayland support for window creation and management cannot support minimizing windows, so it's being essentially obsoleted by an extension called xdg_shell, but xdg_shell isn't mandatory so applications and Wayland compositors will wind up having to support both.
This is funny. You know that window minimization is not in the X11 spec either, right? It's something that X window managers handle internally (just like on Wayland, where this is an internal compositor thing).
This is part of the "wm spec", and the thing corresponding to it in Wayland is called "xdg_shell". Both are produced by "xdg" (the X Desktop Group), and both are optional things used by "traditional desktop environments".
It is a good thing that Wayland does not enforce the existence of window minimization, because it is not something that makes sense in all use cases of Wayland - for instance, on a phone like the Jolla.
>The core Wayland support for window creation and management cannot support minimising windows
One of the problems with Big Rewrite projects is that, inevitably, early releases will never get feature-parity with the latest version of whatever it is that's being rebuilt. Managing expectations then becomes critical to avoid this sort of legacy-support nightmare.
I thought things like xdg_shell grow either as extensions or inside Weston, and then get merged into the stable Wayland protocol once they're battle-tested and ready. Is this not the case for xdg_shell?
There's really no point to doing that, since X is a protocol, and a pretty flexible one at that. Your new shiny can have an X server running in it to act as a go-between and talk something more modern with the new stuff instead.
The OP would have been able to do the same stuff with Exceed on Windows or the X server that comes with OS X, after all. X11's survival on Linux seems to be largely a matter of momentum and the fact that it's the least common denominator in a fragmented landscape.
But now Ubuntu has approached the level of ubiquity necessary to push for a real change (thus Mir), and the rest of the Linux world is rallying behind Wayland.
I remember I started to use Linux in the early-to-mid nineties. Back then home networking equipment was a bit too expensive for me (basically I was student-broke), so I set up a local network between my "big" Linux desktop computer and my small laptop using PLIP (that is, IP over the parallel line interface, IIRC). Then I'd launch an X11 server on the laptop but display on it a session from a user on the desktop. So both my brother and I could surf simultaneously from the "fast" machine. As I remember it the network was slow, but I clearly remember that it worked. We had our first Internet connection (dial-up) and we were "sharing it" and surfing simultaneously (using Mosaic?) for hours and hours.
Later on I configured similar setups so that several devs could run fat IDEs from the one fast machine on older PCs (it was funny the day I yanked the power cord of the fast machine, interrupting everybody ; )
Up to this day I still love the fact that it's trivial for one local user (if you allow it) to run programs in the visual X11 session of another user. I'm using several browsers from unrelated user accounts: one only for my personal GMail / Google Docs, one only for browsing, one only for my professional GMail account, etc. That's a feature I use daily and, for my use, I think it's easier (and faster) than running several VMs.
You can also run several X11 sessions simultaneously, for example at different sizes (say one at 1920x1200, another at 1280x800, etc.).
I'd really miss these features if they were to go away: I hope the newer Wayland and Mir etc. will still allow one "user" to display graphical apps on the display of another user.
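For anyone who hasn't tried these tricks, here is a minimal sketch of both. The user name and display numbers are placeholders, and the commands are printed rather than executed so the sketch runs without an X server; you'd paste them into a real session:

```shell
# Placeholder commands for the two setups: a second local user drawing on
# your display, and a second nested X session at a different resolution.
cmds='
# let a second local user onto this display, then run their browser here:
xhost +SI:localuser:mailonly
sudo -u mailonly env DISPLAY=:0 firefox &

# a nested X session at a different size (needs the Xephyr nested server):
Xephyr :2 -screen 1280x800 &
DISPLAY=:2 xterm &
'
printf '%s\n' "$cmds"
```

The `+SI:localuser:` form grants access per-user rather than the old wide-open `xhost +`, which is the part that makes this safe enough to use daily.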
For bandwidth reasons, I had a headless VNC X server on the box, and executed X apps locally. Then I'd also start the usual X server and client and run a VNC client full screen. So I had the first VNC X server showing on the screen.
Then on another box, I had the usual X server + client combo, and I could open a VNC client and have fast remote access and a shared screen :)... VNC was much faster than remote X, as it only sent input and graphics. X sends all sorts of metadata that isn't really useful in most cases.
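A rough sketch of that setup, assuming an Xvnc-style server (e.g. TigerVNC) is installed; the commands are printed rather than executed so the sketch runs headlessly:

```shell
# Placeholder recreation of the headless-VNC-X-server arrangement.
vnc_setup='
Xvnc :1 -geometry 1024x768 &        # headless X server with VNC attached
DISPLAY=:1 xterm &                  # X apps run locally against it
vncviewer -fullscreen localhost:1   # view it, locally or from another box
'
printf '%s\n' "$vnc_setup"
```

The key point is that the X apps render into the local Xvnc server, so only framebuffer updates and input cross the wire, not the full X protocol chatter.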
If you're willing to dick around with the kernel config options, you can still get x86 Linux to run binaries from the early 90s. I vaguely recall reading a mailing list thread where Alan Cox, I think it was, commented that he had a bunch of a.out binaries from that era still kicking around for testing.
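The option in question is CONFIG_BINFMT_AOUT. A quick sketch for checking whether your running kernel was built with it (the config path below is the usual spot, but it varies by distro):

```shell
# Check the running kernel's config for a.out binary support.
cfg="/boot/config-$(uname -r)"
result=$(grep -i 'CONFIG_BINFMT_AOUT' "$cfg" 2>/dev/null \
         || echo "CONFIG_BINFMT_AOUT not set (or no config at $cfg)")
echo "$result"
```

On kernels built with /proc/config.gz support, `zgrep BINFMT_AOUT /proc/config.gz` works too.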
That is the least impressive thing about X11 backwards compatibility.
And this is the sole reason why everyone hates X11, sadly.
X11 always has a server (which shows things on the screen) and a client (which handles windows and sends what to show on screen to the server... or something like that).
So any X11 instance on a desktop is a server + client. Hence, anywhere you have any X graphical interface, you can either receive windows from some other client, or send your windows to another server (since you have both server and client running, remember). So the only thing that device did was send the X client's windows to some TCP socket instead of the local one. I'm not taking away any credit for it having X at all... that is awesome. But what this post describes couldn't be more banal.
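Concretely, "some TCP socket instead of the local one" is just the DISPLAY variable: `:0` means the local server over a UNIX socket, while `host:0` means a remote server over TCP on port 6000 plus the display number. A small sketch (the address is a made-up example):

```shell
# DISPLAY selects which X server a client draws on.
#   ":0"      -> local server, via a UNIX domain socket
#   "host:0"  -> remote server, via TCP port 6000 + display number
DISPLAY="192.168.1.20:0"
display_num=${DISPLAY##*:}          # strip everything up to the last colon
port=$((6000 + display_num))
echo "clients would connect to ${DISPLAY%%:*} on TCP port $port"
```

So the classic invocation is just `DISPLAY=remotehost:0 xterm`; whether the server accepts the connection is governed by xhost/xauth.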
Sure, remote display is no big deal if you're already using X11. But most embedded devices were not farsighted enough to use X11 in the first place; it's much more common to see the wheel being reinvented poorly.
The Tektronix logic analyzer from the time used an embedded copy of Windows and had no 'remote' capability. No doubt if you hooked it up to a modern network with Internet access it would become compromised immediately.
I agree and don't doubt that it did, but few things were both as vulnerable and as targeted as an unpatched Win95/98 system attached to the Internet (capital I). The reason was simply that embedded versions rarely got patched, and when a vulnerability was found that was in the embedded version as well, malware could find it and use it long after everything else was safe. There were a couple of EMC storage management consoles that got hit by this problem and ATM terminals as well. The scary part would be having your 15 year old piece of test equipment be rendered unusable by such an event. The odds were small but decidedly non-zero.
It didn't have DHCP, so I had to configure my laptop with its MAC address, acquire an IP, and then quickly move the cable to the UNIX PC. It didn't have DNS, so I had to manually find the IP of the CS webserver. And it obviously didn't have an HTTP client, so I had to use telnet. But it worked just fine!
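For anyone who hasn't fetched a page by hand: after `telnet webserver 80` (hostname found manually, as above), you type the request line yourself. A sketch of the request, using HTTP/1.0 since a server of that era may predate HTTP/1.1 (and 1.0 needs no Host header):

```shell
# The raw request typed into the telnet session, followed by a blank line;
# the server replies with headers and the page, then closes the connection.
request='GET / HTTP/1.0'
printf '%s\r\n\r\n' "$request"
```

In a pinch you can skip telnet entirely on a modern bash with its `/dev/tcp/host/port` redirection, but telnet is the classic tool for this.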
This is also a testament to the old HP. Around the time this logic analyser was built I used a fair amount of HP kit - oscilloscopes, EPROM programming kit, the plotter printer where you could have multiple ink colours and the original Deskjet printer. All of it was fantastic stuff and very expensive - we joked that 'HP' was an acronym for 'Highly Priced'!
What do I mean by fantastic stuff? Specialist items such as this logic analyser were built to an exceptionally high standard. To all intents and purposes HP equipment back then was bullet-proof. It cost a lot but it was an investment rather than an expense. The documentation was also superb. You felt quite privileged to use it.
Moving on to the products HP make now, I have a few of their laptops and a printer. The laptops are collecting dust and the printer does not have any ink in it. I have no intention of ever buying any of their kit ever again. It is all consumer stuff that does a lot compared to what was possible in 1992 but none of it has 'wow' factor.
I cannot think of anything they make nowadays that is sophisticated in a 'rocket science' kind of way. Sure I might not be exposed to their finer and more sophisticated products, but I should be through marketing, reading 'tech' news websites and so on.
'HP' really should have held onto the pioneering/cutting-edge-technology image and made their PCs the kit of choice for anyone doing difficult maths, scientific stuff, interfacing or UNIX.
Companies change focus but hang on to their brands for consumer recognition, despite radically different strategies. The "HP" of back then is the "Agilent" of today. Similarly, yesterday's "Motorola" is today's "Freescale".
...and sometimes they do something completely different. One company that changed focus and held onto their brand name is the UK's Whitbread. They had a virtual monopoly on beer and public houses in southern England up until some time in the 1980's, then, for no obvious reason, they sold all of the brewing interests, bought a coffee chain, a few hotels and some gyms. They kept onto the Whitbread name which was once synonymous with beer even though they moved so far away from the stuff that they most definitely were not the same company. There is no 'consumer recognition' as they trade on the High Street with different names - 'Costa Coffee' etc. - but they do trade on the stock market as Whitbread.
Back to the point in hand, Apple have somehow managed to make themselves de-facto for musicians, video editors and people that draw pretty pictures in Photoshop. These are the 'halo' users, and mere mortals that think of themselves as possibly being creative one day buy Apple in part because that is what the creative professionals use. The fact that they get no further than 'Crazy Birds' on their iPad Air is neither here nor there.
HP should have worked on a similar strategy, but in the 'technical/scientific' sphere rather than the 'creative' sphere. They should have listened to and looked after the customer base they had built up from selling things like 'scopes, so that the de-facto kit to buy for anyone doing anything vaguely technical was HP. This too could have had a halo effect, so anyone studying something like engineering would instinctively want to buy HP rather than some other cheap Chinese junk.
> mere mortals that think of themselves as possibly being creative one day buy Apple in part because that is what the creative professionals use. The fact they get no further than 'Crazy Birds' on their iPad Air is neither here nor there.
I keep hearing this sentiment bandied around, but I have seen no evidence of it in real life. Most of the "mere mortals" I know who use Macs don't make any pretensions beyond "it's much nicer and I can afford it." Nobody is saying to me "I bought it because I fantasise that one day I'll run a recording studio from it." And who on earth buys iPhones and iPads to be "creative"? What a load of nonsense.
Ah yes, the inevitable fall away from the power users towards the oh-so profitable average-consumer market. It's a sad(ish) story that's seen in too many companies.
Something I've noticed a bit of though is that this "shift" is generally only a perceived shift - it's usually not the case that the high-end is abandoned entirely, usually it's just an expansion to include crappy, cheap, consumer stuff.
Take Dell, for example; Dell has one foot in the horrible world of consumer computers, with loads of crap. However, they still produce the totally excellent (at least the last time I used them) OptiPlex line of workstations for businesses. Those things are nicely laid out internally, with good support and pretty solid reliability.
Dell also produces some of the nicest monitors you can buy; I've lusted after one of those beautiful 30-inch 2560x1600 monitors for pretty much my entire life.
I don't have much experience with HP, but I've owned their inkjet printers, and they're total garbage. However, I also know that their larger business-oriented printers are durable workhorses that plenty of people will swear by.
So, yeah, often we think the companies we know and love go to crap, but in reality it's just us not spending the money on a worthwhile product (from them or any other company).
I especially hate the smugness of some of the developers of the X replacements; of course you can design a leaner architecture when you drop 80% of the features that make X so great. Congratulations, you are true geniuses!