X11 is an incredibly backwards compatible system, but this comes at the cost of positively absurd code complexity. I remember reading some release notes from a new version a few years ago that said 100,000+ lines of deprecated code were removed, and that was barely a drop in the bucket.
And, of course, this is why projects like Wayland and Mir (Canonical's new display server) have been born — it's simply too cumbersome (and in some cases, impossible) to innovate on a behemoth like X11.
But people forget why X11 has such a giant codebase. It's not just bloat — a lot of it exists for a very good reason, and this article is an excellent illustration of what that is.
Take Trident (IE's rendering engine), for example. Why can only one version of IE be installed at a time? Why does IE11 come with IE7/8/9/10 "compatibility modes" (which, much as they might claim, aren't faithful recreations of IE7/8/9/10 behavior)? Wouldn't it make more sense to just ship an IE "browser chrome" that could load up the actual rendering engine DLLs from IE6-through-11 (similar to how Google made IE connect to Chrome Frame), with increasing levels of virtualization and sandboxing as you go further back to cope with the difference in environments?
Apple and Microsoft have both got this right a few times: Apple had Rosetta for PowerPC emulation; Windows had WOW16 and WOW64. But besides these isolated instances, it really isn't a common idea.
I agree that that's a great strategy in general, but I don't think it translates well to display servers.
You can't just "put it in a sandbox". You need driver support, and you'd need a sandboxed environment sophisticated enough to deal with all the idiosyncrasies of the X protocol.
Even running an old version of X11/UNIX in a virtual machine is of limited benefit, because X11's core functionality is often tightly coupled to drivers and other components that would need a lot of special wrappers and hacks to work properly.
I'm not saying it wouldn't work (and I am not an X11 developer), but I do think there are extremely compelling and legitimate reasons for people to not follow your suggestion (at least in this case).
X11 2: Electric Boogaloo - the list of major graphical Linux programs in active use is fairly small, and most of them don't talk to X directly but rely on stuff like GNOME or Qt, which are libraries with frequent churn anyway. No?
I bet that if you could get the top 30 projects to target your clean, shiny X replacement in one fell swoop, you'd end up most of the way there in terms of adoption within a few years. Linux is command-line centric and frequently updated, after all, with users who expect things to break anyway. "Old" X odds and ends that won't be updated can be thought of much like Java applets are thought of nowadays.
Oh, you need that abandoned chemistry application from 2006 that depends on X11? Spin up a VM. We can afford to be more pragmatic nowadays. Nobody expects to run five-year-old apps on their smartphones, but desktop Linux can't afford a clean break once every two decades?
You can run X applications on top of it, transparently. They perform just as well.
GTK+ and Qt have been ported to it, Gnome and KDE are working hard at getting full ports over.
Canonical, of Ubuntu fame, has recently started working on a competitor, following the same strategy, called Mir. They've ported Qt to it, but are generally lagging. Wayland has the advantage of being crewed almost entirely by old X.org hands, who know a lot about what they're doing.
That was exactly what I was suggesting with "sandbox stuff" -- except that the OS can provide a stub library that recognizes what that "abandoned Chemistry application from 2006" is asking for and spins up the VM itself, in the background, making it look as if it were just a compatibility layer. See, for example, Windows 7/8's "Windows XP Mode" -- or, as I mentioned before, Rosetta.
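As a toy sketch of that stub idea (purely illustrative Python with invented names -- the "VM" here is a stand-in object, and a real implementation would live at the dynamic-linker level and boot an actual virtual machine):

```python
# Illustrative only: LegacyAppStub and its backend are invented for this
# example; nothing here starts a real VM.

class LegacyAppStub:
    """Looks like an ordinary local library, but lazily boots a backend
    (a placeholder for a VM) the first time any call comes in."""

    def __init__(self):
        self._backend = None  # nothing started yet

    def _ensure_backend(self):
        if self._backend is None:
            # A real system would launch and configure the VM here;
            # we just record that it happened.
            self._backend = {"booted": True, "calls": []}
        return self._backend

    def __getattr__(self, name):
        # Any unknown method call is transparently forwarded through
        # the (lazily started) backend.
        def forward(*args):
            backend = self._ensure_backend()
            backend["calls"].append((name, args))
            return f"{name} handled by sandbox"
        return forward

stub = LegacyAppStub()
# The old application thinks it is calling a local library:
result = stub.open_window("molecule.dat")
```

The point of the lazy start is that users who never touch the legacy app pay nothing; the VM only exists once the first legacy call arrives.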
The VMs must also react to changes in the host OS, such as a change in the keyboard layout; two apps running in different VMs may both want some control over the same hardware; and so on.
The approach works well in Windows because Microsoft spends lots of time on it and because the compatibility changes aren't that great.
Of course, it also works well in systems where there is no need for the VMs to interact (other than via the network). VM (http://en.wikipedia.org/wiki/VM_(operating_system)) is a nice example there.
No, just load a compatibility library, possibly in the form of a wrapper program. If Wine can run Windows programs on Linux by translating Win32 to POSIX and X11, which it can to a very usable extent, something equivalent can translate X11 library calls to whatever the new library speaks, especially since it will be able to use whatever X11 code it needs.
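To make the shape of that concrete, here is a toy sketch: the `X*` and `new_*` names below are invented stand-ins (a real shim, like Wine's Win32-to-POSIX layer, covers thousands of entry points), but the structure is the same -- implement the old API's calls entirely in terms of the new one.

```python
# The modern display library we actually have (stand-in implementation):
def new_create_surface(width, height):
    return {"kind": "surface", "w": width, "h": height}

def new_commit(surface):
    surface["committed"] = True
    return surface

# The legacy-looking API the old application calls, implemented
# entirely in terms of the new library:
def XCreateSimpleWindow(display, width, height):
    # Translate the legacy call into the new API's vocabulary.
    return new_create_surface(width, height)

def XMapWindow(display, window):
    # "Mapping" a window roughly corresponds to committing a surface.
    return new_commit(window)

# The old application's code runs unchanged:
win = XCreateSimpleWindow(":0", 640, 480)
XMapWindow(":0", win)
```

The hard part in practice is not the happy path sketched here but the idiosyncratic corners of the old protocol, which is exactly the objection raised upthread.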
Maybe it's worth it. Maybe it's not. Just keep in mind that if you're keeping an emulated version alive, someone has to mind the emulator, and that's not free.
Back to that two page function. Yes, I know, it's just a simple function to display a window, but it has grown little hairs and stuff on it and nobody knows why. Well, I'll tell you why: those are bug fixes. One of them fixes that bug that Nancy had when she tried to install the thing on a computer that didn't have Internet Explorer. Another one fixes that bug that occurs in low memory conditions. Another one fixes that bug that occurred when the file is on a floppy disk and the user yanks out the disk in the middle. That LoadLibrary call is ugly but it makes the code work on old versions of Windows 95.
If hair regarding low-memory conditions, Internet Explorer, and other unrelated concerns all appears in the same function, some abstractions are definitely missing.
Also, if the reason for these things isn't clear, that's what comments are for.
In my experience, rewriting code has had only great results.
They didn't have the luxury of an API a few thousand kilobytes in size loaded into memory to act as a single layer of redirection over the underlying implementation. They worked within the confines of bytes of memory, not gigabytes.
We can afford to be generic, to build extensible, runtime-programmable interfaces and runtime-evaluation-style dynamism, because we have the performance to spare. But our core APIs are still written like it's 1980.
I'd like to think that Linux will continue to run on machines with at most 4 MB of RAM and not much processing capacity, because if history keeps repeating itself we're going to keep inventing new devices with those constraints.
Exactly what level of abstraction is good for the "core APIs"? Something actually has to send a series of bytes to be written to the disk, even if a bunch of serialization and encoding abstractions are written on top of that. If the latter is the "core", what's the less abstract stuff that's inside that?
A) autoconf is not similar at all to an API layer hiding hairiness in its domain
B) autoconf becoming terrible does not mean that portability abstractions are necessarily terrible
Of course, desktop apps wanted to interact with minimization (read current state, etc), so the WM authors joined forces to create a spec that allowed this. See how it specs minimization here:
This is part of the "wm spec", and the thing corresponding to it in Wayland is called "xdg_spec". Both are produced by the XDG (X Desktop Group) and are optional pieces used by "traditional desktop environments".
It is a good thing that Wayland does not enforce the existence of window minimization, because it is not something that makes sense in every use case of Wayland -- for instance, on a phone like the Jolla.
One of the problems with Big Rewrite projects is that early releases inevitably lack feature parity with the latest version of whatever is being rebuilt. Managing expectations then becomes critical to avoiding this sort of legacy-support nightmare.
(My hope is inspired by ZFS which, due to its "rampant layering violations", had more features and less code than the UFS+LVM stack it replaced.)
The OP would have been able to do the same stuff with Exceed on Windows or the X server that comes with OS X, after all. X11's survival on Linux seems to be largely a matter of momentum and the fact that it's the least common denominator in a fragmented landscape.
But now Ubuntu has approached the level of ubiquity necessary to push for a real change (thus Mir), and the rest of the Linux world is rallying behind Wayland.
1 - http://wayland.freedesktop.org/qt5.html
2 - http://wayland.freedesktop.org/gtk.html
Qt has less trouble in this regard.
What do I mean by fantastic stuff? Specialist items such as this logic analyser were built to an exceptionally high standard. To all intents and purposes HP equipment back then was bullet-proof. It cost a lot but it was an investment rather than an expense. The documentation was also superb. You felt quite privileged to use it.
Moving on to the products HP make now: I have a few of their laptops and a printer. The laptops are collecting dust and the printer does not have any ink in it. I have no intention of ever buying any of their kit again. It is all consumer stuff that does a lot compared to what was possible in 1992, but none of it has 'wow' factor.
I cannot think of anything they make nowadays that is sophisticated in a 'rocket science' kind of way. Sure, I might not be exposed to their finer and more sophisticated products, but if they existed I should be hearing about them through marketing, 'tech' news websites and so on.
'HP' really should have held onto the pioneering, cutting-edge-technology image and made their PCs the kit of choice for anyone that does difficult maths, scientific stuff, interfacing or UNIX.
And using that high-quality stuff could give some semblance of order and comfort while debugging tough hardware problems. ("At least I know the problem is not with the 'scope.")
I used to use HP oscilloscopes and logic analyzers from that era, and I still have and use a DMM and a couple of calculators.
Back to the point in hand, Apple have somehow managed to make themselves de facto for musicians, video editors and people that draw pretty pictures in Photoshop. These are the 'halo' users, and mere mortals that think of themselves as possibly being creative one day buy Apple in part because that is what the creative professionals use. The fact that they get no further than 'Crazy Birds' on their iPad Air is neither here nor there.
HP should have worked on a similar strategy, but in the 'technical/scientific' sphere rather than the 'creative' sphere. They should have listened to and looked after the customer base they had built up from selling things like 'scopes, so that the de facto kit for anyone doing anything vaguely technical was HP. This too could have had a halo effect, so that anyone studying something like engineering would instinctively want to buy HP rather than some other cheap Chinese junk.
I keep hearing this sentiment bandied around, but I have seen no evidence of it in real life. Most of the "mere mortals" I know that use Macs don't make any pretensions beyond "it's much nicer and I can afford it." Nobody is saying to me "I bought it because I fantasise that one day I'll run a recording studio from it." And who on earth buys iPhones and iPads to be "creative"? What a load of nonsense.
Now, on my Mac, I do exactly zero multimedia related things. So I agree that the multimedia thing isn't necessarily true, but that's what I hear about Macs from not-technicals.
BTW, as long as it can run X, compile Python from source and run Emacs, I'm perfectly fine.
Something I've noticed, though, is that this "shift" is generally only a perceived shift - it's usually not the case that the high end is abandoned entirely; usually it's just an expansion to include crappy, cheap, consumer stuff.
Take Dell, for example; Dell has one foot in the horrible world of consumer computers, with loads of crap. However, they still produce the totally excellent (at least the last time I used them) OptiPlex line of workstations for businesses. Those things are nicely laid out internally, with good support and pretty solid reliability.
Dell also produces some of the nicest monitors you can buy; I've lusted after one of those beautiful 30-inch 2560x1600 monitors for pretty much my entire life.
I don't have much experience with HP, but I've owned their inkjet printers, and they're total garbage. However, I also know that their larger business-oriented printers are durable workhorses that plenty of people will swear by.
So, yeah, often we think the companies we know and love go to crap, but in reality it's just us not spending the money on a worthwhile product (from them or any other company).
DisallowTCP = false
AllowTCP = true
The issue is: what happens if the line is missing entirely?
If the default is to allow, then adding the line "AllowTCP = true" hasn't actually changed your config. And deleting the line "AllowTCP = true" doesn't stop allowing TCP, which could be confusing.
What the addition of a "DisallowTCP = true" line does, on the other hand, is actually disallow TCP.
The above is the kind of reasoning that causes configs to end up with double negatives.
These cases seem to have happened the other way around: values/meanings were picked for the right-hand side first, which then required contorting the left-hand side to match intentions.
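The default-driven reasoning above can be made concrete with a small sketch (hypothetical parser; Python is just for illustration):

```python
# Sketch of why the default drives the option's spelling. Assume the
# server historically allowed TCP when nothing was configured.

def tcp_allowed(config, default_allow=True):
    """config is a dict of only the key/value lines actually present."""
    if "DisallowTCP" in config:
        return not config["DisallowTCP"]
    return default_allow

# With no line at all, TCP is allowed by the default:
assert tcp_allowed({}) is True

# Writing "DisallowTCP = false" just restates the default (a double
# negative), but deleting the line later doesn't silently change anything:
assert tcp_allowed({"DisallowTCP": False}) is True

# The only line that changes behaviour is the one that flips the default:
assert tcp_allowed({"DisallowTCP": True}) is False
```

With an "AllowTCP" spelling over the same allow-by-default behaviour, a present-but-redundant line and a deleted line would be indistinguishable, which is exactly the confusion described above.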
Later on I configured similar setups so that several devs could use older PCs to run fat IDEs from the one fast machine (it was funny the day I yanked the power cord of the fast machine, interrupting everybody ; )
Up to this day I still love the fact that it's trivial for one local user (if you allow it) to run programs in the visual X11 session of another user. I'm using several browsers from unrelated user accounts: one only for my personal GMail / Google Docs, one only for browsing, one only for my professional GMail account, etc. That's a feature I use daily and, for my use, I think it's easier (and faster) than running several VMs.
You can also run several X11 sessions simultaneously, for example at different sizes (say one at 1920x1200, another at 1280x800, etc.).
I'd really miss these features if they were to go away: I hope the newer Wayland and Mir etc. will still allow one "user" to display graphical apps on the display of another user.
Then on another box, I had the usual X server+client combo, and I could open a VNC client and have fast remote access and a shared screen :) ... VNC was much faster than remote X, as it only sent input and graphics. X sends all sorts of metadata that isn't really useful in most cases.
"However it's not an Open Source disease its certain projects like Gnome disease - my 3.6rc kernel will still run a Rogue binary built in 1992. X is back compatible to apps far older than Linux."
(IIRC Alan Cox maintained a version of Rogue for linux for a while.)
It's easier to run the Windows version of early Linux games like Unreal through wine than getting the native version to work.
And this is the sole reason why everyone hates X11. Sadly.
X11 always consists of a server (which shows things on the screen) and a client (which handles windows and sends what to show on screen to the server... or something like that).
So any X11 instance on a desktop is a server+client. Hence, anywhere you have an X graphical interface, you can either receive windows from some other client or send your windows to another server (since you have both server and client running, remember). So the only thing that device did was send the X client's windows to some TCP socket instead of the local one. Not to take away credit for it having X at all... that is awesome. But what this post describes couldn't be more banal.
Also, remember this device probably cost more than 7x an average PC, and it was already the '90s...
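As a rough illustration of the "send windows to a TCP socket instead of the local one" part: an X client just parses $DISPLAY to decide where to connect, using the well-known convention of TCP port 6000 plus the display number. This is a simplified sketch -- real libX11 handles many more forms (protocol prefixes, IPv6 literals, and so on):

```python
# Simplified sketch of how an X client turns $DISPLAY into an endpoint.
# The hostnames below are made up for the example.

def parse_display(display):
    host, _, rest = display.partition(":")
    number = int(rest.split(".")[0])  # drop the optional ".screen" suffix
    if host == "":
        # Empty host: local connection over a unix-domain socket.
        return ("unix", f"/tmp/.X11-unix/X{number}")
    # Otherwise: TCP to port 6000 + display number.
    return ("tcp", (host, 6000 + number))

assert parse_display(":0") == ("unix", "/tmp/.X11-unix/X0")
assert parse_display("hp712:0.0") == ("tcp", ("hp712", 6000))
assert parse_display("bigbox:1") == ("tcp", ("bigbox", 6001))
```

So "sending your windows elsewhere" really is just a one-line change of environment variable from the client's point of view.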
So an off-the-shelf Ethernet switch will do. Contemporary 10GBase-T products are generally compatible with ancient 10Base-T. Another fine example of extreme backwards compatibility.
It didn't have DHCP, so I had to configure my laptop with its MAC address, acquire an IP, and then quickly move the cable to the UNIX PC. It didn't have DNS, so I had to manually find the IP of the CS webserver. And it obviously didn't have an HTTP client, so I had to use telnet. But it worked just fine!
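For the curious, the "HTTP client via telnet" part is just a few lines typed by hand. Sketched here in Python (example.com is a placeholder, not the actual CS webserver from the story):

```python
# A minimal HTTP/1.0 request -- exactly what you'd type into telnet.
# HOST and PATH are placeholders for this example.

HOST = "example.com"
PATH = "/"

request = (
    f"GET {PATH} HTTP/1.0\r\n"
    f"Host: {HOST}\r\n"
    "\r\n"  # a blank line ends the headers
)

# To actually send it (requires network access):
# import socket
# with socket.create_connection((HOST, 80)) as s:
#     s.sendall(request.encode("ascii"))
#     reply = s.recv(4096)
```

HTTP/1.0 doesn't strictly require the Host header, but sending it keeps virtual-hosted servers happy.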
It's RJ-45 and BNC, aka 10BASE-T and 10BASE-2.
On one hand, they're justified because most people don't use X as intended.
On the other hand, maybe people should be.