
CAD and Simulation Apps were written in those specific ways.

You are right.

Where that was done, performance was excellent.




It just seems wrong in that case to say that X is network transparent. The real concern is that OpenGL 1.0 was capable of running over the network, but to use it effectively application developers had to take network operation into consideration, and the server had to support the non-standard extension required to use it correctly. In some circumstances using display lists locally could actually reduce performance, so the application may not have wanted to take that path in all cases: https://www.opengl.org/archives/resources/faq/technical/disp...

Generally if your application has any code that does this:

    if (client is remote)
        ...
    else if (client is local)
        ...
Then I wouldn't say the protocol you're using is network transparent.
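
For concreteness, here is a minimal sketch (not from the thread) of what that branch often looked like in GLX terms: glXIsDirect() reports whether the context renders directly on local hardware or indirectly through the X server (e.g. over the wire), and the app picks a path accordingly. The list id and the immediate-mode body are just placeholders:

    /* Hypothetical sketch: choose a rendering path based on whether the
     * GLX context is direct (local) or indirect (routed through the X
     * server, possibly over the network). */
    #include <GL/gl.h>
    #include <GL/glx.h>

    static GLuint scene_list;           /* display list compiled at startup */

    static void draw_immediate(void)
    {
        /* placeholder geometry for the local path */
        glBegin(GL_TRIANGLES);
        glVertex3f(0.0f, 0.0f, 0.0f);
        glVertex3f(1.0f, 0.0f, 0.0f);
        glVertex3f(0.0f, 1.0f, 0.0f);
        glEnd();
    }

    void draw_scene(Display *dpy, GLXContext ctx)
    {
        if (!glXIsDirect(dpy, ctx))
            glCallList(scene_list);     /* remote: replay the server-side list */
        else
            draw_immediate();           /* local: immediate mode is fine */
    }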


OpenGL could run over the network because X could.

SGI used X fully. They wrote the book on all that, including GLX, and it was flat-out awesome.

The apps I used all ran display and pick lists. They worked very well over the wire, and that use case got used a lot.
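
For anyone who never touched old-school GL: a display list is compiled once and its commands are retained on the server side of the connection, so redrawing a big model from a remote client costs one tiny glCallList per frame instead of resending all the geometry. A rough sketch, with placeholder geometry:

    /* Rough sketch of the display-list pattern those apps leaned on. */
    #include <GL/gl.h>

    GLuint build_model_list(void)
    {
        GLuint list = glGenLists(1);

        glNewList(list, GL_COMPILE);    /* commands are stored, not executed */
        glBegin(GL_QUADS);
        glVertex3f(-1.0f, -1.0f, 0.0f);
        glVertex3f( 1.0f, -1.0f, 0.0f);
        glVertex3f( 1.0f,  1.0f, 0.0f);
        glVertex3f(-1.0f,  1.0f, 0.0f);
        glEnd();
        glEndList();

        return list;
    }

    /* Per frame, a remote client only sends this one small request;
     * the geometry itself never crosses the wire again. */
    void draw_model(GLuint list)
    {
        glCallList(list);
    }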

The quibbles are really getting away from the basic idea of running an app remotely on one's local display.

That happened and worked well and had some advantages. I personally built some pretty big systems for modeling and simulation that were cake to admin and very effective on many other fronts.

Notice I did not say "network transparent" in my comments above.

Multi-user graphical computing is much more accurate when it comes to how X worked and what it delivered to people.


BTW, display performance on those was great locally. A small hit locally is not that big of a deal. It never has been.

Users will employ detail limits, model scope limits, whatever to get the UX they need.

Developers can dual-path it, or provide options. And they will provide options, because not all users have the latest and greatest. They never do.

In the end, it is mostly a wash for most things.

The big gains were had in other areas.

In the CAD space, sequential CPU is far more of a bottleneck. Mid- to lower-grade GFX subsystems perform more than well enough for a ton of cases today. You can't get a fast enough sequential-compute CPU. And while there is serious work to improve multi-threaded geometry, the fact is the most important data is running on crazy complex software that needs the highest sequential compute it can get.

Big data actually sees a gain with the X way of doing things.

Huge model data running over shared resources is a hard problem, and it continues to be one. Mix in multi-user access and it takes serious tools to manage it all and still perform.

In the '90s, many of us were doing those things: multi-user access, revision control, concurrent access, you name it, on big models and fast, local file systems. There was software with all of that well integrated. We did not have the cloud yet. Not really.

The app server model rocked!

X made all that pretty easy. One setup, and users just connect running whatever they want to; so long as a respectable X server is available, they are good to go.
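
A minimal Xlib sketch of why the app server model needs nothing special from the application: XOpenDisplay(NULL) honors whatever DISPLAY the user set, so the same binary draws on whatever X server the user points it at (the display name below is just an example):

    #include <stdio.h>
    #include <stdlib.h>
    #include <X11/Xlib.h>

    int main(void)
    {
        /* NULL means "use $DISPLAY", e.g. DISPLAY=workstation:0 */
        Display *dpy = XOpenDisplay(NULL);

        if (!dpy) {
            fprintf(stderr, "cannot open display %s\n",
                    getenv("DISPLAY") ? getenv("DISPLAY") : "(unset)");
            return 1;
        }

        printf("connected to %s\n", DisplayString(dpy));
        XCloseDisplay(dpy);
        return 0;
    }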

One OS, the whole box dedicated to one app, fast storage, big caches, multiple CPUs, all tuned to get that job done well and perform.

Once that work is done, doing things the X way means it stays done, and users just run the app on the server. Bonus is they can't get at that data directly. Lots of silly problems just go away.

And, should that system need to be preserved over a long period of time? Just do that, or package it all up and emulate it.

In all cases, a user just connects to run on whatever they feel like running on.

Those of us still talking about how X does things see many advantages. Like anything, it is not the be-all and end-all. It is a very nice capability, and a "unixey" way of doing things that is being lost.



