Window System X (1984) (talisman.org)
263 points by marcodiego on Jan 17, 2022 | 163 comments



The implementation of Scheifler's X window system for Intel 386 PCs was called X386, which was eventually forked and freed. It was renamed XFree86, as a pun. Then was renamed Xorg.

There were a few implementations of X for different Unixes, mostly closed source. They were very buggy, and patches were posted online in the form of LD_PRELOAD-ed libraries. IIRC, I had to preload a few libraries to work around the HP-UX X implementation before it was capable of running the Mozilla Suite.

Fun times :)


> It was renamed XFree86, as a pun. Then was renamed Xorg.

It wasn't simply renamed Xorg; that was yet another fork, after XFree86 development became stagnant and the license changed to something GPL-incompatible.

https://en.wikipedia.org/wiki/XFree86#2002:_Growing_dissent_...


> XFree86, as a pun

I have been using it (and its derivatives) for more than 20 years and only got the pun now!


Whooaaaaa, like replacing the "3" in X386 with "Free"? Uhh, today I learned! I can't believe I've gone this long without knowing this haha

It is mentioned on the Wikipedia page[0], though unfortunately with no citation to validate the story. I did find it discussed in Linux Magazine's "The History of XFree86" article[1] (under "The Rebirth" section).

[0] https://en.wikipedia.org/wiki/XFree86#Early_history_and_nami...

[1] http://www.linux-mag.com/id/922/


Ha! It's fascinating when revelations like this happen. I used it for all those years with absolutely no idea of the pun in the title.


One issue is that while some vendors went out of their way to update and modernize the internals of their X11 implementations (most famously Xsgi, which AFAIK ripped out all the graphics driver code as unsuitable), XFree86 in many ways stuck with the original code, even if certain advances were made (pluggability, loadable drivers, etc.). Which unfortunately led to more and more problems that were papered over with XShm and drawing on client side, finally leading to Wayland...


> Which unfortunately led to more and more problems that were papered over with XShm and drawing on client side, finally leading to Wayland...

But that ship hasn't totally sailed, right? It's not as if Wayland was the amazing tech everybody hoped for and jumped to. Wayland is old by now; it's only half a success, and many of us are still using Xorg.

I'm not saying that in two decades Xorg may not be gone for good, but at the moment Wayland and Xorg still coexist.


There was additional fun when XFree86 started showing up in that new Linux thing around '97, roughly around Bo in Debian. Around that time you needed to do some math to figure out the dot clock rate for your CRT. If you got it a little wrong, you'd need to Ctrl-Alt-F1 into a text terminal to try again; more wrong, and the deflection coils would buzz louder; really wrong, and (allegedly) you could burn them out.

https://wiki.debian.org/DebianBo


uh, xfree86 was in linux in '93 (yes I edited dot clocks to get 1024x768 x 16 bit color)


I can't recall where, but I remember reading a super-interesting blog post by an old-time Linux user complaining about his video card not being able to drive his displays.

He "solved" the issue by lowering the video card refresh rate (60hz -> 30hz) -- iirc.

Can't find that post anymore, but I always wondered if that approach would have allowed me to drive more displays with my integrated Intel GPU. I was unable to go further than three 1920x1200 external displays (I had to sacrifice the fourth display, the laptop's built-in one).


You could do a lot with weird modelines in Xfree86.

I once had not quite enough VRAM in a computer to do 16-bit 1024x768, but the shortfall was tiny. I think the card needed a few tiny buffers for the mouse cursor and the drag-and-dropped icon, which was advanced 2D hardware acceleration at the time (Onion, belt, ...), and whatever memory was left just did not make it by a hair.

So I wrote a custom modeline to create an absolutely nonstandard videomode. I think the only requirement was that vert and horiz resolution were divisible by 4, so I did 1000 by 752 or something like that.

You had to make sure the dot clock was acceptable, or you could physically damage the CRT. Scary. So I did the calculations and triple-checked everything, then started the custom config with my thumb on the power button. It worked.
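The arithmetic behind those calculations is simple: refresh rate = dot clock / (horizontal total x vertical total), where the totals include the blanking intervals around the visible resolution. A hypothetical XF86Config modeline for such a 1000x752 mode (all numbers are illustrative, not the original poster's):

```
# Modeline "name"  dotclock(MHz) hdisp hsyncstart hsyncend htotal  vdisp vsyncstart vsyncend vtotal
Modeline "1000x752"  59.9         1000  1048       1152     1280    752   753        756      780

# refresh = 59.9 MHz / (1280 * 780) = ~60 Hz
# hsync   = 59.9 MHz / 1280         = ~46.8 kHz
# The sync frequencies had to stay within the monitor's rated ranges;
# that is what you triple-checked before hitting Enter.
```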


Later, xvidtune was added to ease the process of monitor destruction :)


I remember setting it up on a 386SX laptop with 3 megs RAM, back in late 93 or so. I could barely get X to load. Massive amounts of swapping. Fortunately I got a 486 a few months later.


I didn’t think Linux ran on an SX. Thought it required the mmu in the DX for 32-bit protected mode. Might be misremembering though.


Yes, you were misremembering. The 386SX had an MMU. It had 32-bit internals, like the DX, but a 24-bit address space (16 MB limit) and 16-bit memory access.


There was only one variant of the 386 that did away with much of the MMU functionality - the 80376. It was largely overshadowed by the 386EX.

https://en.wikipedia.org/wiki/Intel_80376


yes, my original computer was a 486/66 with 4 MB of RAM. It could not run emacs + g++ + X11 and a terminal all at the same time without swapping. Eventually, I upgraded it to 32MB and it "flew". Was a great lesson in swap performance and RAM upgrades.


Fellow dotclock editor checking in.


Thank you, that was entirely new history for me. From Wikipedia:

> MGR featured overlapped [...] windows and [...].

Took a while to see the strength of tiling window managers.


MGR was an interesting non-X window system. I ran it for a while (on Minix, IIRC), although unfortunately it wasn't free software. One thing about it was that you created windows, drew lines, etc. entirely by printing special escape sequences to stdout. It was possible to write small graphical C programs using basically just printf. This also made it network-independent.

https://en.wikipedia.org/wiki/ManaGeR


Reminds me of RIP graphics, a format which briefly existed as a way to add vector graphics to bulletin board systems. It was usable over a telnet or dialup modem connection if you had a terminal that supported the RIP escape codes.

https://en.wikipedia.org/wiki/Remote_Imaging_Protocol


There was also NeWS:

https://en.wikipedia.org/wiki/NeWS

which had the interesting idea of using PostScript to drive the display.


Which was then also used in another successor (name-wise, W => X, S => T): NeXT


Fascinating!! What an interesting idea!


That was how all graphics terminals were operated, back then.

Tektronix had one with a "storage" scope that would retain (for a while, fading) a picture drawn once with a vector-steered electron beam. There was a graphics board to plug into a DEC VT100. HP had such a terminal. I think the VT100 card took the same codes as the Tek, but the picture would stay until erased.

Programs to draw graphics would need a driver layer to work on different terminals. If you were clever, the same code could drive an X-Y pen plotter, typically from HP. Good times!

You could buy a knife-pen and put rubylith in the plotter, and cut out circuit board optical masks, usually at 4x or more magnification. Rubylith was two layers of plastic, dark red and soft on clear and hard. The knife would cut just through the red layer. Peel off the stretchy red stuff where you didn't want traces, and send the rest to the PC board shop.


Might have been technical limitations. When you allow overlap, you can't use the screen as _the_ backing store for pixels anymore, and either memory usage goes up (a lot, if you have several large windows open), or you have to invent redraw events (which may be slow even for relatively simple content, if you have to swap in the drawing code).

If you keep the screen buffer as (part of) the backing store for the pixels of windows, you also have to make drawing code clip to any overlapping windows. If you don’t, memory usage goes up again.

And memory could be tight. As an extreme example, the screen memory on an original Mac was about the same size as the amount of memory available to the running application.


For reference, the original Mac was 512x342, monochrome: 21,888 bytes. It had 128K RAM total, including that. It was essential for programs to rely on code in the ROM so as not to waste precious RAM. Running your program and also the "Finder" (desktop) at the same time was novel.

It did overlapping windows, and the program was expected to re-draw the part exposed if you moved one.

The 512k version, or "Fat Mac" came out soon after, and was actually useful.


Overlapped windows are great, until someone invented raise-on-click


Anybody here remember https://en.wikipedia.org/wiki/ManaGeR ? That was what I used on Unix before I got my grubby little hands on an X Windows source distribution.

It was quite elegant and super lightweight compared to X, but at the same time it was somewhat limited. Monochrome only (for me, at least) and a bit sparse, it got the job done with a minimum of fuss.


I saw Stephen Uhler give a demo of it at Usenix. Yes, it was quite elegant! It worked via in-band escape codes, so you could transparently use it remotely over rlogin or telnet.

On his web site, he says "it was (and probably still is) the largest post ever made on comp.sources.unix".

https://sau.homeip.net/


I never forgot it, and in most respects it was far more elegant than X, which always seemed a very poor design.

What I especially liked about MGR was that remoting was dead simple; it worked as long as your terminal worked, so you could ssh somewhere and start an app, and it just worked (AFAIR).

X has remoting as a primary feature, but it's hackish and awkward and filled with issues (you really don't want to know what happens under the hood of 'ssh -Y').

However, neither really works for applications that need massive updates at a fast framerate, so hopefully Wayland will work better there.


>ssh somewhere

SSH was introduced in 1995, so back in the '80s you just used good old insecure unencrypted rlogin or telnet, and maybe Rick Adams's SLIP over a modem in the late '80s.

https://en.wikipedia.org/wiki/Serial_Line_Internet_Protocol



Sure, ssh didn't exist, but as best I recall rsh did at the time I used MGR, and it worked much the same way, sans the security.


If the sysadmin was any good, at least there might have been S/Key for initial authentication.


No, you can just tell X clients (applications) to present themselves on a remote X server, either with no security or with an access key. It’s an unencrypted connection managed by the application itself and the X server. Sounds like a helpful feature until you realize that any X client can act as a keylogger for the whole session.

If you use SSH, it presents a fake X server local to the client that works as a proxy and tunnels the communication through the SSH connection. But as far as the client sees, it’s connected to a local server.


I was referring to running MGR clients over rlogin/telnet/slip, not X clients.

Yes I know X clients talk to the server via unencrypted connections over the network -- I developed a multi player X11 version of SimCity whose client connected to multiple players' servers at the same time. But as FullyFunctional rightly pointed out, X11 networking is "hackish and awkward and filled with issues". So I removed the multi player networking feature when I ported SimCity to the OLPC XO-1 Children's Computer for kids to use, since it was unthinkable to expect kids to deal safely with X-Windows network security using xauth or setting up ssh tunnels.

https://www.youtube.com/watch?v=_fVl4dGwUrA

https://www.youtube.com/watch?v=EpKhh10K-j0

David Chapman describes some of the problems with X-Windows "MIT-MAGIC-COOKIE-1" authentication in the book "Unix-Haters Handbook" chapter "X-Windows Disaster" section "Myth: X Makes Unix 'Easy to Use'":

https://donhopkins.medium.com/the-x-windows-disaster-128d398...

Date: Wed, 30 Jan 91 15:35:46 -0800

From: David Chapman <zvona@gang-of-four.stanford.edu>

To: UNIX-HATERS

Subject: MIT-MAGIC-COOKIE-1

For the first time today I tried to use X for the purpose for which it was intended, namely cross-network display. So I got a telnet window from boris, where I was logged in and running X, to akbar, where my program runs. Ran the program and it dumped core. Oh. No doubt there’s some magic I have to do to turn cross-network X on. That’s stupid. OK, ask the unix wizard. You say “setenv DISPLAY boris:0”. Presumably this means that X is too stupid to figure out where you are coming from, or unix is too stupid to tell it. Well, that’s unix for you. (Better not speculate about what the 0 is for.)

Run the program again. Now it tells me that the server is not authorized to talk to the client. Talk to the unix wizard again. Oh, yes, you have to run xauth, to tell it that it’s OK for boris to talk to akbar. This is done on a per-user basis for some reason. I give this ten seconds of thought: what sort of security violation is this going to help with? Can’t come up with any model. Oh, well, just run xauth and don’t worry about it. xauth has a command processor and wants to have a long talk with you. It manipulates a .Xauthority file, apparently. OK, presumably we want to add an entry for boris. Do:

xauth> help add

add dpyname protoname hexkey add entry

Well, that’s not very helpful. Presumably dpy is unix for “display” and protoname must be… uh… right, protocol name. What the hell protocol am I supposed to use? Why should I have to know? Well, maybe it will default sensibly. Since we set the DISPLAY variable to “boris:0”, maybe that’s a dpyname.

xauth> add boris:0

xauth: (stdin):4 bad “add” command line

Great. I suppose I’ll need to know what a hexkey is, too. I thought that was the tool I used for locking the strings into the Floyd Rose on my guitar. Oh, well, let’s look at the man page.

I won’t include the whole man page here; you might want to man xauth yourself, for a good joke. Here’s the explanation of the add command:

add displayname protocolname hexkey

An authorization entry for the indicated display using the given protocol and key data is added to the authorization file. The data is specified as an even-lengthed string of hexadecimal digits, each pair representing one octet. The first digit gives the most significant 4 bits of the octet and the second digit gives the least significant 4 bits. A protocol name consisting of just a single period is treated as an abbreviation for MIT-MAGIC-COOKIE-1.

This is obviously totally out of control. In order to run a program across the fucking network I’m supposed to be typing in strings of hexadecimal digits which do god knows what using a program that has a special abbreviation for MIT-MAGIC-COOKIE-1?? And what the hell kind of a name for a network protocol is THAT? Why is it so important that it’s the default protocol name?

Fuck this shit.

Obviously it is Allah’s will that I throw the unix box out the window. I submit to the will of Allah.
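For the record, the incantation Chapman was groping for boils down to a few lines. A sketch using his hostnames (the cookie-generation line is illustrative; mcookie(1) is the usual tool, but any 32 hex digits work):

```shell
# Mint a 128-bit cookie as an even-lengthed hex string (16 octets),
# exactly the "hexkey" the man page above is talking about:
cookie=$(od -An -N16 -tx1 /dev/urandom | tr -d ' \n')
echo "$cookie"    # 32 hex digits

# With that in hand, the steps are (not run here, needs a live server):
#   on boris:  xauth add boris:0 . "$cookie"   # "." = MIT-MAGIC-COOKIE-1
#   on akbar:  xauth add boris:0 . "$cookie"
#              export DISPLAY=boris:0
# Both sides then hold the same shared secret, which is all the
# "authorization" amounts to.
```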


> X11 networking is "hackish and awkward and filled with issues"

It doesn’t just allow random people to pop up a window on your display and annoy you. It allows them to see everything you do and everything you type and control your applications without you seeing it.

It’s not that it’s awkward and has some issues. It’s a security hole by design and that’s what makes it unusable.

And that is aside from the security risks posed to the client by an untrusted server (and all the other untrusted clients connected to it) and to the server by an untrusted client.

I’m not sure, though, what the problem is with X through SSH; it’s 99% transparent. The biggest issue is that if you ssh to an untrusted server, you allow that server to connect to your X server and see what you are doing and what you type, and to control all your applications.


> It doesn’t just allow random people to pop up a window on your display and annoy you.

Oh, the joy of being a university student with time to spare, in a lab full of people clueless about the mighty powers of xhost -.


Tell them to type "xhost +" to make their windows go faster! ;)


And then pop up a copy of xeyes on their screen.


Actually it was mostly xv with interesting content.


> Oh, the joy of being a university student with time to spare, in a lab full of people clueless about the mighty powers of xhost -.

Or a large investment bank in NYC with Suns on everyone’s desk. Trading floors are hotbeds of middle school behavior. NSFW pics popping up on your screen randomly. Screenshots shared with the entire office. Ahhhh, those were the days.


No, you exported $DISPLAY to a remote listening X server and then you launched your application.


ssh -Y came late. For a long time, as I recall, you just let your X server listen on the network, and any other machine could send it windows to draw. You could just manually set the environment variable on the remote machine. That was still how stuff worked at some banks in 2004, because they were slow to adopt ssh. Wide-open connections to X servers enabled both impressive hacks and impressive trolling.


I remember one troll-your-friends app that would (at random intervals) watch as the mouse pointer approached a window, and then warp it into a random spot.


ssh didn't exist when either X or MGR were created.


s/ssh/rsh/


Yes! I compiled it a few years ago for Xenix and had it running quite happily on a 386


"Anyone who wants the code can come by with a tape."


80s was the golden era of hackers.


How many actually went and got the code on tape?


Since I was at Stanford then, I downloaded it over SUNet.


Oh, look at this guy everyone, with his fancy "downloading".


My current (air gapped) development environment includes CDE 1.6.2 on Solaris 10. At some point some devs found this to be a bit... dated, and decided to roll WM flavour of the day: Fluxbox. Sadly, one of the X components was missing from Solaris. No problem! One dev hacked together his own version in Perl based on the spec. There's something both beautiful and insane about the situation, which persists to this day.


It is interesting that something 'not the ultimate window system' can last almost 40 years already, though it also seems like Wayland is getting close to critical mass of adoption?


Wayland is getting close, and is very good indeed, but X is still the default window system. I feel like there are a lot of technologies that are this way. They were not designed to be the best, but good enough, and they have passed the test of time.


i have a hard time liking wayland because they're taking away remoting, which was one of my favorite parts of x.


That's not where it stops... Wayland was designed without giving any thought to color management at all. Beware, long read ahead, and no happy ending: https://discuss.pixls.us/t/wayland-color-management/10804

I will abandon Linux on desktop after ~20 years if that happens. Once upon a time we had a browser with real color management (firefox)... and lost it. Now it seems that time has come to other things as well.


https://www.collabora.com/news-and-blog/blog/2020/11/19/deve...

Not sure about the current state but there was some initiative to implement it. And as far as I know, the deliberately lean protocol will easily allow for this extension, that’s the cool part of Wayland.


Wayland was released in 2008. In 2008, color management should not have been an afterthought for something whose entire purpose is to ensure pixels end up on the screen; it should have been an integral component.

Having a bunch of pixels with no knowledge of which color space they're in is useless.


You think that color management belongs in the window manager / compositor.

Wayland thinks color management belongs in the app.

The main difference, though, is not this disagreement, but your righteous insistence that your opinion is not an opinion but a very simple, obvious fact. I can tell you from having been part of the dev teams of many major apps that do color management that such apps really, really like to be in control.

So no, the lack of color management in wayland is not such an obvious wrong decision. It's debatable, with pros and cons.


My point is, regardless of who does what when it comes to color management Wayland simply cannot be agnostic about it.

At least since it wants to touch pixel data. Alternatively they could have given up on that and never introduced wl_buffer, leaving all the buffer management and drawing up to the client.

edit: At least HDR has finally made Wayland devs wake up and smell the coffee. A good write-up here[1], with ongoing talk in the merge request[2].

Lots of good work going on there, but it should be part of core and not an extension. Maybe it's time for Wayland2 soon?

[1]: https://ppaalanen.blogspot.com/2020/11/developing-wayland-co...

[2]: https://gitlab.freedesktop.org/wayland/wayland-protocols/-/m...


Why does it matter that it is merged into an extension that will become core, or it was already core at inception?

It is hard to get everything right at once, so starting small is not bad, imo. Wayland has a really sane way of extension querying so it is not like clients will suffer due to these — they are already written in a way to query the capabilities.


> Why does it matter that it is merged into an extension that will become core, or it was already core at inception?

> Wayland has a really sane way of extension querying so it is not like clients will suffer due to these

A client shouldn't be able to use Wayland without considering the color space of the pixels it's providing. By not having it in core from day one, there's already apps out there which will need patching to work properly.

Here's[1] a random Wayland hello world example I found. It memcpy's the image it wants to display into a wl_buffer. What color space is that pixel data in? How can this work correctly without changes when the Wayland color management protocol lands?

[1]: https://github.com/emersion/hello-wayland/blob/master/main.c...


I am not knowledgeable enough on the topic, but as far as I know X didn’t have a wide-spread way to manage colors either, so I guess clients like that would default to “don’t care about it”, which will translate to some default color space.

Clients that query for this yet-theoretical color space extension can then ask the server for the color balance of the given screen and fill their buffers accordingly. The existing buffer protocol could also be extended to carry information about the color space used, for server-side handling, though as the other commenter mentioned, it is best handled by the client; the protocol only has to display it.


> as far as I know X didn’t have a wide-spread way to manage colors either

On the other hand X was created ages ago. Wayland was released in 2008.

> Clients that query for this yet-theoretical color space extension can then ask the server for the color balance of the given screen and fill their buffers accordingly.

Ok, let's say that's how it's done. I set my display to AdobeRGB or BT.2020. My legacy hello world application knows nothing of this and just copies its data into the buffer, blissfully ignorant. If Wayland does nothing more than display the pixels, the hello world image will look like crap.

Ok, let's say we patch the hello world application, and all the other existing Wayland applications, to respect the output color space. Now I plug in a second monitor which is sRGB, and drag the hello world application over on it. If the hello world application does nothing more, it looks like crap again.

For this to work, it has to be notified it's on a new screen, and fill in new pixel values for the new screen. And every application must handle this or it will look like crap.


I tried looking at how is it done in Apple’s Quartz compositor but couldn’t find it.

But more asking than telling, why wouldn’t defaulting to sRGB for unlabeled buffers work? The compositor knows that this monitor is in color space C, and it knows how to convert between the two, so a simple shader can trivially convert the sRGB hello world app to the monitor’s color space, even in the case of multiple monitors with different spaces.

Of course that might not be the intention of the client, but if they didn’t bother with it, then they are likely okay with whatever the default is.
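The per-channel conversion that shader would start from is cheap and well specified. A sketch of the standard sRGB decoding step (the function name is just illustrative; a real compositor would then convert between primaries in linear light and re-encode for the monitor):

```shell
# Decode one 8-bit sRGB channel value to linear light, per the sRGB spec:
#   linear = c/12.92            for c <= 0.04045
#   linear = ((c+0.055)/1.055)^2.4  otherwise
srgb_to_linear() {
  awk -v v="$1" 'BEGIN {
    c = v / 255.0
    if (c <= 0.04045) print c / 12.92
    else print ((c + 0.055) / 1.055) ^ 2.4
  }'
}

srgb_to_linear 128   # mid-grey byte is only ~21.6% linear light
```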


> But more asking than telling, why wouldn’t defaulting to sRGB for unlabeled buffers work?

If Wayland is to do color management, that's a reasonable approach. Doesn't work at all if the client is supposed to do it.

But are you sure Wayland uses sRGB now? I couldn't find any information regarding the implicit color space currently used by Wayland...

> but if they didn’t bother with it, then they are likely okay with whatever the default is

Not like they had a choice, is there?


I don’t think it has any implicit color space in use right now; I mentioned it to ask for confirmation whether this approach could work.


Yeah it's the best approach given the current state, and that seems to be what they're going for in the MR.

However you're still in a bit of a mess due to lack of an explicitly documented implicit color space. And it's still a bit of a footgun for developers who don't know they need to care about this.

It's a bit like XML files without the encoding attribute, except that for text you can try to analyze the bytes and find a likely encoding; there's no such thing for pixel data.


> the deliberately lean protocol will easily allow for this extension, that’s the cool part of Wayland.

That's a strange way to look at it. It's more like the Wayland folks decided that basically everything of import is out of scope of the protocol. This way any graphical system built on top of Wayland is ridiculously inelegant, but hey, at least there is no responsibility on Wayland itself.


> Once upon a time we had a browser with real color management (firefox)...

I remember that episode... at some point it was basically impossible to display an image that used the exact same color as some color used in your CSS. Lots of logos in just the wrong color. I'm sure the web platform could be extended to properly support color management in some way, but what Firefox did back then wasn't it.


Personally I like the modularity of X11 because it has a lot of standardized interfaces. For example, you can exchange the window manager and the compositing manager at runtime without affecting any running programs.

In my opinion X12 should be even more modular and should have even more standardized interfaces. E.g. window decorations should not be tied to a window manager. There should be a standardized way to do toolkits/widgets where the backend and rendering can be swapped at runtime. It should be possible to run headless GUI programs that can be attached and reattached to any running instance of a display server.

So kind of the opposite of what Wayland is doing.


"X: The First Fully Modular Software Disaster"

https://donhopkins.medium.com/the-x-windows-disaster-128d398...

John Steinhart thought X-Windows could have been a hell of a lot more modular and less complex, and much more simply designed and better implemented, with "less than a dozen API calls":

https://news.ycombinator.com/item?id=17056516

DonHopkins on May 12, 2018 | parent | context | favorite | on: Build your own X: project-based programming tutori...

Hasn't somebody reimplemented X11 in JavaScript/canvas/websockets yet?

There was an X11 server for Lisp Machines! Not sure who wrote it, but it was probably written inside or at least nearby the X Consortium, and I remember Robert Scheifler used it regularly.

https://news.ycombinator.com/item?id=6864364

"For example the TI Explorer Lisp Machine came with an X11 server written in Lisp. On my Symbolics Lisp Machine I used the usual MIT X11 server written in C - this was possible because the Symbolics Lisp machine had a C compiler." -lispm

John Steinhart wrote XTool, a nice snappy reimplementation of X11 on top of SunView! ;)

https://minnie.tuhs.org//pipermail/tuhs/2017-September/01047...

https://news.ycombinator.com/item?id=15325226

https://web.archive.org/web/20171028110659/https://minnie.tu...

>XTool was very small and fast compared to the X sample server because I wrote the server from scratch. I think that I'm the only person to write an X server outside of the X Consortium. One of the things that I learned by doing it was that the X Consortium folks were wrong when they said that the documentation was the standard, not the sample server. There were significant differences between the two.

>The only really worthwhile thing about X was the distributed extension registration mechanism. All of the input, graphics and other crap should be moved to extension #1. That way, it won't be mandatory in conforming implementations once that stuff was obsolete. As you probably know, that's where we are today; nobody uses that stuff but it's like the corner of an Intel chip that implements the original instruction set. As an aside, I upset many when working on OpenDoc for Apple and saying the same thing there.

>The atom/property mechanism allows clients to allocate memory in the server that can never be freed. Some way to free memory needs to be added.

>The bit encodings should be part of a separate language binding, not part of the functional description.

[...]

>X suffers from the same problems as the original Mac API. Scheifler et. al. didn't really do any system level design and modelling. I know this because I discussed it with Scheifler at an ANSI meeting in Tulsa, the only place that I have travelled to on business that had no redeeming qualities. He said "I don't believe in models because they predispose the implementation."

>Had he done some real design work and looked at what others were doing he might have realized that at its core, X was a distributed database system in which operations on some of the databases have visual side-effects. I forget the exact number, but X includes around 20 different databases: atoms, properties, contexts, selections, keymaps, etc. each with their own set of API calls. As a result, the X API is wide and shallow like the Mac, and full of interesting race conditions to boot. The whole thing could have been done with less than a dozen API calls.

[...]


How is that the opposite? Anyone is free to extend the protocol with whatever they want — e.g. wlroots-based compositors have exactly this “standardized way of doing toolkits/widgets”.


The only thing that Wayland standardizes is multi-process GPU memory access. Neither GNOME nor KDE (which are far more popular Wayland implementations) uses any of the wlroots "standards", and thus they can't really be called that. Also, wlroots does not define one bit about toolkits and widgets.

> Anyone is free to extend the protocol with whatever they want

The protocol itself is very opinionated and strict in some places (especially concerning things like vsync) and completely undefined in other areas where standardization is essential (access control).


AFAIK Wayland in a datacenter still falls back to software rendering and running images through a video codec. I think X is more likely to be replaced by Javascript apps, because those are cross platform and a browser running locally can take advantage of the end user’s GPU.


Gee, that sounds kind of like the whole point of NeWS. ;)

https://en.wikipedia.org/wiki/NeWS

NeWS was architecturally similar to what is now called AJAX, except that NeWS coherently:

- used PostScript code instead of JavaScript for programming.

- used PostScript graphics instead of DHTML and CSS for rendering.

- used PostScript data instead of XML and JSON for data representation.

https://donhopkins.medium.com/the-x-windows-disaster-128d398...


> - used PostScript code instead of JavaScript for programming.

I'm no fan of JavaScript, but even I would prefer it to PostScript.

And I actually like FORTH.


That's interesting! I love Forth, but I love PostScript even more, because it's so much like Lisp. What is it about PostScript that you dislike, that doesn't bother you about Forth?

Arthur van Hoff wrote "PdB" for people who prefer object oriented C syntax to PostScript. I wrote some PdB code for HyperLook, although I preferred writing directly in PostScript.

https://news.ycombinator.com/item?id=25432748

DonHopkins on Dec 15, 2020, on: Source of the famous “Now you have two problems” q...

Leigh Klotz has written more PostScript than Jamie too, while working at Xerox! But "KLOTZ IS A LOGO PRIMITIVE [BEEP BEEP BEEP]". He wrote a 6502 assembler in Logo!

https://news.ycombinator.com/item?id=13524588

Leigh Klotz's comment on the regex article:

>OK, I think I’ve written more PostScript by hand than Jamie, so I assume he thinks I’m not reading this. Back in the old days, I designed a system that used incredible amounts of PostScript. One thing that made it easier for us was a C-like syntax to PS compiler, done by a fellow at the Turing Institute. We licensed it and used it heavily, and I extended it a bit to be able to handle uneven stack-armed IF, and added varieties of inheritance. The project was called PdB and eventually it folded, and the author left and went to First Person Software, where he wrote a very similar language syntax for something called Oak, and it compiled to bytecodes instead of PostScript. Oak got renamed Java.

>So there.

>And yes, we did have two problems…

>— comment by Leigh L. Klotz, Jr. on June 7th, 2008 at 3:22am JST (12 years, 6 months ago) — comment permalink

Arthur van Hoff (the author of PdB and the original Java compiler written in Java) has also written more PostScript than Jamie, especially if you count the PostScript written by programs he wrote, like PdB and GoodNeWS/HyperNeWS/HyperLook.

Here's the README file (and distribution) of PdB, Arthur van Hoff's object oriented C to PostScript compiler:

https://github.com/IanDarwin/OpenLookCDROM/blob/master/NeWS/...

Also a paper by Arthur van Hoff about "Syntactic Extensions to PdB to Support TNT Classing Mechanisms":

https://www.donhopkins.com/home/archive/NeWS/PdB.txt

Some before and after examples, like menu.h menu.pdb menu.PS:

https://www.donhopkins.com/home/archive/HyperLook/Turing/hn3...

menu.h: https://www.donhopkins.com/home/archive/HyperLook/Turing/hn3...

menu.pdb: https://www.donhopkins.com/home/archive/HyperLook/Turing/hn3...

menu.PS: https://www.donhopkins.com/home/archive/HyperLook/Turing/hn3...

GoodNeWS/HyperNeWS/HyperLook:

https://medium.com/@donhopkins/hyperlook-nee-hypernews-nee-g...

pvg on Dec 15, 2020:

> Arthur van Hoff (the author of PdB and the original Java compiler written in Java)

And the original AWT, if this is to be a full Airing of Sins.

DonHopkins on Dec 15, 2020:

Agreed, AWT was a horrible compromise in an impossible situation! But he made up for it by creating "Bongo" at Marimba.

Bongo is to Java+HyperCard as HyperLook is to PostScript+HyperCard.

https://medium.com/@donhopkins/hyperlook-nee-hypernews-nee-g...

>Arthur van Hoff [...]

>Marimba Castanet and Bongo

>Eventually Arthur left Sun to found Marimba, where he developed the widely used Castanet push distribution technology, and the under-appreciated Bongo user interface editing tool: a HyperLook-like user interface editor written in Java, that solved the runtime scripting extension problem by actually calling the Java compiler to dynamically compile and link Java scripts.

>Nobody else had ever done anything remotely like Bongo before in Java. Dynamic scripting with Java was unheard of at the time, but since he had written the compiler, he knew the API and how the plumbing worked, so had no qualms about calling the Java compiler at runtime every time you hit the “Apply” button of a script editor.

>Danny Goodman’s “Official Marimba Guide to Bongo”

https://www.amazon.com/Official-Marimba-Guide-Bongo-Goodman/...

>Danny Goodman, the author of the definitive HyperCard book, “The Complete HyperCard Handbook”, went on to write the “Official Marimba Guide to Bongo”, a great book about Bongo, described as the “reincarnation of HyperCard on the Internet”.

>[TODO: Write about Bongo’s relationship to HyperCard, HyperLook and Java.]

>Java applets are everywhere on Web pages these days, but if you’ve made the move to Java from a contemporary programming environment you’ve probably been dismayed by its relative immaturity. The Official Marimba Guide to Bongo covers Marimba’s Bongo environment, which is designed to allow rapid development of Java user interfaces. The book shows you how to use the large library of graphics “widgets” supplied with Bongo, how to wire them together with simple scripting, and how to integrate other Java applets. It also explains how Bongo can be used to build channels for Marimba’s Castanet system. -Amazon.com Review

>Java users should be rejoicing at the promise of programming aid Bongo, which is the reincarnation of HyperCard on the Internet. It is fitting that the first major book about Bongo comes from Goodman, the author of the definitive HyperCard book of days gone by (The Complete HyperCard Handbook, Random, 1994). His background is as a journalist, not a technologist, and readers will make good use of this first-rate introduction. This book will circulate. -Library Journal Review

Unfortunately Marimba's Bongo got overshadowed by Sun's announcement of "Java Beans" which Sun was pushing with much fanfare and handwaving as an alternative to "ActiveX", but which eventually turned out to actually be just a server side data modeling technology, not a client gui framework.

https://news.ycombinator.com/item?id=21784027

[...]

Marimba developed Bongo, a Java-based gui toolkit / user interface editor / graphical environment, inspired by HyperCard (and HyperLook), which they used to develop and distribute interactive user interfaces over Castanet.

https://people.apache.org/~jim/NewArchitect/webtech/1997/10/...

>Feel the Beat with Marimba's Bongo, By Chris Baron

>In 1996, four programmers from the original Java-development team left Sun to form Marimba and produce industrial-strength Java-development tools for user interface and application administration. Bongo, one of Marimba's two shipping products, allows developers to create either a Java-application interface or a standalone Java-based application called a "presentation." A Bongo presentation resembles a HyperCard stack -- it allows developers to quickly create an application with a sophisticated user interface, but without the tedious programming of directly coding in Java or C/C++. Bongo's nonprogramming, visual approach makes it ideal for producing simple applications that don't involve a lot of processing, such as product demonstrations, user-interface prototypes, and training applications. Bongo is fully integrated with Castanet, Marimba's other product, a technology for remotely installing and updating Java applications.

Bongo was unique at the time in that it actually let you edit and dynamically compile scripts for event handlers and "live code" at run-time (in contrast with other tools that required you to recompile and re-run the application to make changes to the user interface), which was made possible by calling back to the Java compiler (which Arthur had written before at Sun, so he knew how to integrate the compiler at runtime like a modern IDE would do). Without the ability to dynamically edit scripts at runtime (easy with an interpreted language like HyperTalk or PostScript or JavaScript, but trickier for a compiled language like Java), you can't hold a candle to HyperCard, because interactive scripting is an essential feature.
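That runtime-compile trick is trivial in an interpreted language, which is exactly the point. A Python sketch of the equivalent (the handler code here is hypothetical, not anything from Bongo):

```python
# Runtime "script editing": compile and run user-supplied handler code,
# the capability Bongo had to bolt onto Java by invoking javac at runtime.
handler_source = """
def on_click(count):
    return f"clicked {count} times"
"""

namespace = {}
exec(compile(handler_source, "<script editor>", "exec"), namespace)
print(namespace["on_click"](3))  # clicked 3 times
```

An interpreted system gets this for free; Bongo had to reach back into the Java compiler's API to pull off the same "hit Apply and it just works" loop.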

Danny Goodman, who wrote the book on HyperCard, also wrote a book about Bongo. Arthur later founded Flipboard and JauntVR, and now works at Apple.

Here's a paper I wrote comparing Bongo with IFC (Netscape's much-ballyhooed Java Internet Foundation Classes). (Notice how IFC = Internet Foundation Classes was Netscape's answer to MFC = Microsoft Foundation Classes. Never define your product's name in terms of a reaction to your widely successful competitor's name. cough SunSoft cough)

NetScape's Internet Foundation Classes and Marimba's Bongo

https://donhopkins.com/home/interval/ifc-vs-bongo.html

>In summary, I think it was too early to write a Java toolkit before JDK 1.1, so IFC has gone and done a lot of its own stuff, which will have to be drastically changed to take advantage of the new stuff. Bongo is not as far down the road of painting itself into a corner like that, and if some effort is put into it, to bring it up to date with the new facilities in Java, I think it will be a better framework than IFC. Java Beans remains a big unknown, that I don't have a lot of faith in. Arthur says Java Beans does too much, and I'm afraid it may try to push competing frameworks like IFC and Bongo out of the limelight, instead of just providing a low level substrate on top of which they can interoperate (like the TNT ClassCanvas). If Bongo can pull off ActiveX integration with style and grace, then it wins hands down, because I doubt IFC can, and I don't trust Sun to deliver on their promises to do that with Java Beans.

More:

https://news.ycombinator.com/item?id=19837817

>Wow, a blast from the past! 1996, what a year that was. [...]


> What is it about PostScript that you dislike, that doesn't bother you about Forth?

I think it's what we were trying to accomplish. My work with Forth was on Apple II computers while my contact with PostScript was to make printers do things they weren't supposed to do - such as printing fractals - because, for some time, they were the most powerful computers we had at the office.

Apart from that, the syntax seemed awkward for things more complicated than rendering text in predefined positions, but, then, I didn't have any development tools similar to what was available with NeWS (when NeWS was still hot).


I learned FORTH on the Apple ][ too! 6502 FORTHs (and others) have a great RPN assembler that is nice for writing the performance-sensitive and hardware-accessing parts of your program. It was perfect for writing stuff like terminal emulators.

The original LaserWriter was actually a more powerful computer than the Macs that used it, at the time of its release.

Debugging on a laser printer definitely was a challenge, and used lots of paper.

You're right that NeWS made it a lot easier to debug PostScript code interactively without killing trees. It had a command line debugger, but was great for making visual interfaces too! Here's a visual PostScript programming and debugging environment I made with NeWS:

The Shape of PSIBER Space: PostScript Interactive Bug Eradication Routines — October 1989

Abstract

The PSIBER Space Deck is an interactive visual user interface to a graphical programming environment, the NeWS window system. It lets you display, manipulate, and navigate the data structures, programs, and processes living in the virtual memory space of NeWS. It is useful as a debugging tool, and as a hands on way to learn about programming in PostScript and NeWS.

https://donhopkins.medium.com/the-shape-of-psiber-space-octo...


It was explained to me by one of Arthur's colleagues at the Turing Institute that PdB stood for "Pure Dead Brilliant" - which indeed it was and the Institute being in Glasgow.

HyperNeWS was definitely one of the neatest systems I have ever worked with.

Edit: I also really liked PostScript.


That's right! ;)

http://www.scotranslate.com/translate/scottish/pure-dead-bri...

>pure dead brilliant - Scottish to English : The English translation of "pure dead brilliant" is 1. exceptional 2. fantastic


Was Marimba "push" the inspiration for the expression "This is like TV. I don't like TV" going around at the time? I never found out.

"Before co-founding Marimba, [Kim] Polese spent more than seven years with Sun Microsystems and was the founding product manager for Java when it launched in 1995. She also influenced the transition of its internal name of "Oak" to "Java".[11]

"Prior to joining Sun, Polese worked on expert systems at IntelliCorp Inc., helping Fortune 500 companies apply artificial intelligence to solving complex business challenges."

Anybody remember "expert systems"?


Yeah, I keep saying that Wayland is the replacement for X, but browsers and Electron are the replacement for NeWS.


That's a good metaphor. I just wish there were one "official" stable version of Electron on all platforms that you could depend on everybody having installed, so instead of shipping huge apps with their own copy of Electron, you could just ship little applets in one file (or just a few), like was possible with NeWS.


Or the Decode-Encode Language:

https://datatracker.ietf.org/doc/html/rfc5


most webapps have to be updated constantly due to browser changes though, don't they?

if not, would be interesting if there were a protocol for programs to securely serve up front end resources and api endpoints for javascript guis along with a protocol for telling your machine to pop a window and display that gui.

so like the equivalent of an xserver that programs can connect to that pops browser windows and secure plumbing to support it.

WEBDISPLAY=localhost:0 myjavascriptterminalemulator &


Waypipe is one solution in that realm (I haven't used it myself though). But it's not like X servers and clients will stop working.


Once wayland gets good adoption, people will gradually stop supporting X. That is how it always works in software.


X is not “network transparent” for any practical definition. Most modern applications don’t use X render commands to draw, so you will get a shitload of bitmaps moved over the network, which will result in really bad performance.

Wayland just realized that remoting is not an operation that has to be integrated into the protocol. It can be done by third-party applications better through proper compression techniques.
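To put "a shitload of bitmaps" into numbers, a quick back-of-envelope (the screen size, depth, and frame rate below are illustrative assumptions, not measurements):

```python
# Back-of-envelope: cost of shipping raw, uncompressed bitmaps over a network.
# All figures are illustrative assumptions.

width, height = 1920, 1080   # one full-HD screen
bytes_per_pixel = 4          # 32-bit RGBA
fps = 60

bytes_per_frame = width * height * bytes_per_pixel
raw_mbit_per_s = bytes_per_frame * fps * 8 / 1e6

print(f"{bytes_per_frame / 1e6:.1f} MB per frame")           # 8.3 MB
print(f"{raw_mbit_per_s:.0f} Mbit/s uncompressed at {fps} fps")
```

Nearly 4 Gbit/s for one screen, which is why any serious remoting layer leans on damage tracking and compression rather than pushing raw pixels.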


While common X use evolved away from the network (modern apps passing bitmaps around instead of primitives), the network architecture still has one huge advantage over Wayland: it is a client-server architecture, so one can restart things without losing state. That is, under X I can crash/restart the DE without losing any application state; open apps continue to run and simply connect to the new instance. But under Wayland any crash anywhere in the stack means all apps are killed and all work lost.


This is not mandated by the protocol — one is free to create a separate server and window manager program under Wayland just as well. But I really don’t find it meaningful — such core functionality should just not crash (just as the Xserver itself is not prone to crashing, because then your work is also lost).


Not really; there is not a lot of support in the Wayland protocol for multiple clients sharing objects. See for example how difficult it is to emulate XEmbed, systray, wine, etc.


How is that relevant? A client and the server are different processes already. Sharing it between 2 processes or 3 is at that point meaningless. (Also, not everything has to be shared with a dumb window manager module)


No, it's not really meaningless. Basically, you assume from the server that each relevant object is owned and accessible by 1 client only. If 2 clients want to "own" the same object, they have to coordinate between themselves using an entirely different protocol so that only one client owns the object from the PoV of the compositor (you also can't do it through Wayland, since it lacks generic IPC support, which X11 has, even if poor). See https://www.winehq.org/pipermail/wine-devel/2021-December/20...

If you want to make a separate window manager, it is likely that the result is going to be incompatible with most existing clients. But this seems par for the course for almost every Wayland extension so far (see Gnome...).


Having more than one mutable reference to the same thing already begets advanced synchronization if that’s what you mean, but I don’t see how is that relevant.

And beside gnome and kde there is a third big group of wayland compositors based on wlroots which does support some form of common set of extensions not in core, effectively making you write only the wm part.


> but I don’t see how is that relevant.

Because this prevents creation of a separate window manager program. You can do all of this with the core protocol in X11. You can't do any of it in Wayland, almost by design. The entire idea of a client getting simultaneous access to the objects of another client is not in the protocol. You cannot render into another process's surface. You cannot even iterate over its windows! The role of the compositor is hardcoded in the protocol design.

The closest thing you can do is to write some type of modular compositor which speaks _yet another_ protocol with the window manager module (or even exists within the same process). But this is not like in X11, where you plug new things (e.g. a new WM, a new pager) without having to restart the X server, much less having to change it.


that's bullshit. I use Qt apps and they are still in 2022 able to use X11 paint commands to render over the network in a way that is much, much, much more fluid than e.g. rdp, vnc or whatever other bullshit like that. Many GTK apps also seem to work fine (though not all, Geany is snappy while Gimp isn't for instance)

see how it looks: https://streamable.com/t9foij


> much more fluid than e.g. rdp, vnc or whatever

RDP also sends draw calls, and as such cannot be compared to VNC or similar. It's not even a remotely close comparison in terms of fluidity. I haven't compared it to Qt over X11, but I wouldn't be surprised if RDP wins over an internet link.

The lack of a proper RDP alternative is the primary reason I don't use Linux as my main driver. I keep checking every 12 months or so, but alas.


> RDP also sends draw calls

on linux with native linux apps ?


I haven't checked in a few years, but I imagine they haven't implemented that yet given how tightly RDP draw calls map to the GDI.

Point was that RDP in its natural environment can forward draw calls, and as such is more like X than VNC.


Don't forget the NX protocol. It's far faster and more lightweight than VNC/RDP.


As far as I know NX is just a layer of compression on top of the X11 protocols though, e.g. it uses zlib for compressing paths, jpeg for compressing pixmaps, etc.. Not sure it'd actually help that much on gigabit ethernet.


One part of it (the NXagent) is centered on reducing round-trips rather than compressing the protocol. May not be worth on low-latency links but it is a deal-killer in high-latency ones.
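A toy model of why round-trip reduction beats compression on high-latency links (every number below is a made-up assumption for illustration):

```python
# Toy model: time-to-first-window over a remote link.
# Synchronous protocol round-trips serialize, so latency dominates;
# an NX-style proxy that answers most requests locally wins big.
# All numbers are illustrative assumptions, not measurements.

def startup_time(round_trips, rtt_ms, payload_bytes, mbit_per_s):
    latency_s = round_trips * rtt_ms / 1000              # serialized waits
    transfer_s = payload_bytes * 8 / (mbit_per_s * 1e6)  # bulk transfer
    return latency_s + transfer_s

naive = startup_time(round_trips=200, rtt_ms=100,
                     payload_bytes=500_000, mbit_per_s=10)
proxied = startup_time(round_trips=10, rtt_ms=100,
                       payload_bytes=500_000, mbit_per_s=10)

print(f"chatty client:  {naive:.1f} s")    # almost all of it spent waiting
print(f"NX-style proxy: {proxied:.1f} s")
```

On a LAN with sub-millisecond RTT the two are indistinguishable, which matches the observation above: round-trip elimination only pays off when latency is high.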


RDP is far, far more advanced than X11. In addition to doing "GDI over the wire" much like X11's draw calls, RDP supports compressed bitmaps and streaming video, remote audio, remote file system, remote USB, clipboard sharing, and more recently remote GPU. (Yes, I know about GLX, it's incredibly old and hasn't been updated.)

RDP is to X as PowerShell is to bash: better than the Unix standard thing because it isn't weighed down by Unix community baggage.


Eh, no. PowerShell is trash. And instead of Bash I prefer ksh, and for anything more complex, Perl, which curb-stomps PowerShell by far.

Also, you were right on RDP, but sometimes less is more.

If any, I prefer plan9/9front's drawterm/cpu which runs circles over Unix/Linux VNC and RDP. By a huge mile.

And 9p>>>>>>>> NFS/SMB. For anything else, use FS' permissions FFS, and choose wisely what you share.


> If any, I prefer plan9/9front's drawterm/cpu which runs circles over Unix/Linux VNC and RDP. By a huge mile.

How can this [1] code possibly be faster than modern hardware accelerated blitting? I don't see a line of SIMD in there.

[1]: https://github.com/9fans/drawterm/blob/master/libmemdraw/dra...


Call me odd, but 9front on native machines looks smoother than most composited desktops here.


composited WMs are so frustrating to use when you're used to non-composited


Powershell is not at all trash. Being object oriented makes it very easy to do on the fly inspection of your data and allows extremely powerful pipelining. If you want even more functionality, you can load in .NET libraries at runtime. Working with Powershell really feels like working with Python but using a different syntax.


You have Perl REPLs like that with Task::Kensho, leaving PSH and .NET libraries in the dust.

What you can do today with PSH and IPython, Perl folks did before, with far more modules thanks to CPAN and C bindings.


GLX is not much older than RDP and has also been updated several times.


I assume you use it through a local network where it will be reasonably fast. But then please compare it with any sort of “modern” video streaming approach on the same network as well.


here's the exact same case with VNC, on the same network: even with the smaller resolution and lossy image compression, VNC manages to be quite a bit slower than X11

https://streamable.com/50fdss

I tried Steam Link on the same network to put games on my TV, and it is barely usable, with absolutely horrendous image compression (I've got a GTX 1080 GPU, so that's likely not the culprit)


Note that Qt5 does not send paint commands by default; it sends bitmaps. You need a special env var to revert to drawing commands.


VNC outperforms it, which is saying a lot since VNC is terrible. Remote X apps were always this selling point in Linux culture, and I always thought I was doing something wrong because they were so incredibly slow, until I realized it's the protocol that's just slow.


It was fine until apps ended up doing everything on their end and pushing pixmaps to the X server. Back in the Xaw and Motif days, it was fine.


Well yeah, but then you could just as well use an ncurses app through ssh for the same “experience”


Debian uses Wayland by default these days.

"As [GNOME] is the default Debian desktop environment, Wayland is used by default in Debian 10 and newer, older versions use Xorg by default." -- https://wiki.debian.org/Wayland

I personally use Arch and GNOME seemed to use Wayland by default without any intervention from me.


Gnome still doesn't support adaptive sync in the Wayland session? KDE does, but last time I tried the Wayland session it had a major bug with display sleep, so I'm waiting for Plasma 5.24 to try it again since it should have some fixes for that.


What's pretty wild is it took 3 years to reach X11, and that's where we're at, although there have been a bunch of revisions.


I invite anyone interested by the subject to watch Keith Packard's amazing presentation on this at LCA 2020:

A Political History of X - https://www.youtube.com/watch?v=cj02_UeUnGQ

(can be listened without the video if you want to do other things while listening to it)


I wonder how those email addresses worked. Who or what is window@athena? Project Athena is from MIT, so maybe window is a shared account for the group, or window is a proto-newsgroup?

Also it's so odd to think that the code couldn't be emailed because of size limitations of the time and you'd have to bring a tape. Weird that he didn't ask for a floppy, because floppies were common by then too. Let's remember that in 1984 you had the original Mac, with the Amiga a year out, so not exactly ancient history; both those machines supported the newer 3.5" disks, and 5.25" disks were very common by then. I'm guessing the Unix big-iron culture of the time was primarily tape based.

Also, talk about a missed opportunity. I'd love to see a screenshot of this email on the original X. I'm guessing there was no email client for X back then, but it could at least be read in a terminal window. I'm assuming X supported a terminal window this early if it had a working window manager, but maybe vt100 emulation wasn't in the cards just yet. Still, what a neat piece of computer history. Stuff like this always gives me warm feelings, like in a past life I was somehow active in the culture then and feel nostalgia for it.


> I wonder how those email addresses worked. Who or what is window@athena? Project Athena is from MIT, so maybe window is a shared account for the group, or window is a proto-newsgroup?

The history of email standards is interesting, to say the least!

https://en.wikipedia.org/wiki/Non-Internet_email_address


The source wouldn't fit on a single floppy disk.
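It really wouldn't have. A rough sketch of the arithmetic (the 5 MB source size is an assumption for illustration; I don't know the actual size of the 1984 distribution):

```python
# How many floppies would an early X source tree have needed?
# The 5 MB figure is an assumed, illustrative size, not the real one.
import math

source_bytes = 5 * 1024 * 1024

floppies = {
    "360 KB 5.25-inch":           360 * 1024,
    "1.2 MB 5.25-inch HD":        1200 * 1024,
    "400 KB 3.5-inch (Mac 1984)": 400 * 1024,
}

for name, capacity in floppies.items():
    print(f"{name}: {math.ceil(source_bytes / capacity)} disks")
```

A single 9-track tape reel, by contrast, held tens of megabytes, so the whole tree fit with room to spare.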


It doesn't mention which kind of tape. I'd be interested to know what they used.

Also it looks like they're referring to the VS100.

https://en.m.wikipedia.org/wiki/VAXstation

I never had access to those, but I'd conjecture from the post that they might've used tape there, at least.


> I'm guessing the unix big iron culture of the time was primarily tape based.

pretty much any 'real' workstation and for sure minis+ had tapes


Interesting to see the origins of X. Were there really 10 versions before it got to X11, and why did the versioning stop there?


Yes. There were a number of rapidly developed versions up to X10 in 1986, which was the first one released under a free license (and thus the first one widely adopted). X11 was an overhaul of X10 intended to increase portability/compatibility, and also the first version developed as a community open source project rather than internally to MIT. That came out in 1987.

As for why X11 was the last major version: the protocol has remained backwards compatible since then. Clients created decades ago can still operate with modern X servers. It's pretty cool!

http://www.theresistornetwork.com/2013/12/a-testament-to-x11...


It happened really fast. X1 appeared in May 1984, X6 in 1985.

See https://en.wikipedia.org/wiki/X_Window_System#Origin_and_ear...


Yes. 11 is the version of the protocol, which didn’t change fundamentally after X11, so they moved to X11R2, R3, etc., adding extensions.


11 is the protocol version. With multiple implementations.

I think once extensions were introduced, they probably didn't need to extend the protocol.


Similar to HTML: a lot of early evolution in the early releases. Then a long wait for version 5. Version 4 had a lot of neat stuff like math typesetting. HTML balkanized during the commercial browser wars. Tim made a valiant effort to hold it together these past decades.


Will never forget fumbling around with my NVIDIA TNT2 graphics card in X config files only to end up in a console with X failing to start

Waste of my goddamn time.


The most fun part of the PC ports of the X server (probably XFree86, memory is getting a little spongey) was that getting the modeline wrong when you were using a CRT could cause physical damage to the monitor.

Back then, I always used to buy fancy, expensive monitors (Sony trinitron based things) and there was always that moment of doubt, chewing your fingernails before you made changes to the modeline and kicked off the X server.

It's all automatic now of course, but it was a genuine halt and catch fire scenario.
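For anyone who never had the pleasure: the scary part was that a modeline directly set the sync frequencies the monitor had to swallow. Decoding the standard VESA timing for 1024x768 at 60 Hz shows where those figures come from:

```python
# Decode an XF86Config modeline into the frequencies the CRT must handle.
# Standard VESA timing for 1024x768 @ 60 Hz:
#   Modeline "1024x768" 65.00  1024 1048 1184 1344  768 771 777 806 -hsync -vsync

pixel_clock_hz = 65_000_000  # 65.00 MHz dot clock
h_total = 1344               # total pixels per scanline, incl. blanking
v_total = 806                # total lines per frame, incl. blanking

h_freq_khz = pixel_clock_hz / h_total / 1000
v_refresh_hz = pixel_clock_hz / (h_total * v_total)

print(f"horizontal: {h_freq_khz:.1f} kHz")  # 48.4 kHz
print(f"vertical:   {v_refresh_hz:.1f} Hz") # 60.0 Hz
```

Push the horizontal rate past what the label on the back of the monitor promised and the deflection/flyback circuitry could genuinely overheat, hence the fingernail chewing.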


Selecting the option "Monitor that can do <highest resolution of your monitor> at <vertical scan frequency listed on the back>" from Xconfigurator has never steered me wrong -- and I have used some grotty monitors.

This kind of malarkey was only necessary because, at the time, unlike sensible systems like the Macintosh, PC monitors did not report their identification and capabilities to the host system. Eventually the EDID system would be implemented, allowing automatic configuration -- but it would be a few years before X got decent support for it.


I never heard of a monitor damaged by a bad modeline. Doubting now it happened.


Did anyone ever get around to writing documentation?


Tim O'Reilly published some books on it. /s


The real cognoscenti bought the DEC Press books instead, actually. The O'Reilly things looked great on a bookshelf (I still have my set!), but Scheifler/Gettys was what we actually used when we wanted to look something up.

And indeed, these were all really great documentation. XFree86 itself didn't have much on its driver layer or integration interface, and that was a real problem. But the protocol references were a thing of beauty.


Has anyone here used FreeNX? I had read good things about its performance over the network (and even cell networks), but I've never encountered anyone who's actually used it. I guess I'm going to have to be the person to try it so I can be the "person who's used it" to others. And to those wondering, I think this is relevant to this link, as I believe FreeNX is more-or-less the X11 protocol, but does much more caching and compression.
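For intuition on why the caching and compression help so much: X-style wire traffic is highly repetitive, so even plain zlib gets dramatic ratios. A toy stand-in (the fake requests below are an assumption for illustration, not real X11 wire data):

```python
# Toy illustration: repetitive protocol-like traffic compresses extremely well.
import zlib

# Fake a burst of near-identical drawing requests.
stream = b"".join(
    b"PolyLine gc=42 x=%04d y=%04d\n" % (i, 2 * i) for i in range(1000)
)

compressed = zlib.compress(stream, level=9)
print(f"{len(stream)} -> {len(compressed)} bytes "
      f"({len(stream) / len(compressed):.0f}x smaller)")
```

NX adds caching and round-trip elimination on top of this kind of stream compression, which is where the rest of its speed comes from.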


I used to use it back in the day. It had amazing performance — the first Remote Desktop that showed me what was possible, and far and away the best way to do remote X.


My first encounter with graphics on UNIX was in 1994 or 1995. It was CDE (https://en.wikipedia.org/wiki/Common_Desktop_Environment). Before that I'd used GEOS (https://en.wikipedia.org/wiki/GEOS_(8-bit_operating_system)) on the Commodore 64 :)


Mine was in '95 on NCD X terminals, machines that would only have an X server. ISTR they automatically got a window manager and a terminal from a central VMS server. We'd then telnet to another VMS server, where we'd access a mailbox via CLI, or the web with Mosaic. There were also other SunOS or Linux servers for various purposes.

The rooms where the terminals were had something like 20 terminals each, and sometimes we fooled around by making things appear in sequence on each of them, as though they were moving from one to the next. You could do this from one of the servers, which would connect to each terminal in sequence (the good old days of all ports being freely accessible).


I've used VMS just a few times. I didn't have an account on a server running it, so I could only use it when someone would let me.


I used it for a few years, but honestly I don't remember much of it. I remember files were versioned, the shell was called DCL or something like that and involved a lot of dollar signs and brackets... and that's about it. To think that I wrote entire scripts in that language...


There was also a local WM but I don't remember seeing anybody use it.


For me it was OpenWindows, I believe running on SunOS 4.x


I remember that when I got my first gig as a sysadmin we had two SGI workstations. I think they were SGI Indigo2 workstations.


We had some SGI workstations running in a prepress print environment because Leonardo was the only software that could handle the amount of data. Amazing little machines.


Ah, back in the day.

I got X11R3 on tape to build locally on a Sun3 as part of a project. 4MB RAM - huge!

What was the network game? Maze war?


I don't miss having to type xf86config and answer a lot of questions, some of which required reading the manual for the monitor and the video card.


You realize all that tuning[1] still happens today, right? The difference is that your Linux box with a display is now a boxed product from Google or Samsung or whoever and they did it for you. In the 90's we were sharing a big communal integration effort.

[1] And more, though the details have changed. No one cares about "mode timings" since all that synchronization happens across the link in the panel hardware. Now it's all about panel self-refresh drivers and backlight control and DMA threshold tuning.


I still remember dialling into my university account back in the very early 90's and running X windows over the connection at a friend's house.


So what's Argus?


Seems to be a programming language created at MIT around the same time.

https://en.wikipedia.org/wiki/Argus_(programming_language)


Wow! I'm the same age as X!


Love that it was built on W - which I had never heard of. Some good trivia there.


The timelines were changed as of this publication.


(1984)



