OpenVMS State of the Port to x86_64 [pdf] (vmssoftware.com)
113 points by emersonrsantos 4 days ago | 99 comments

For those interested in DEC history, HP have put the text of their Digital Technical Journal articles online at http://www.hpl.hp.com/hpjournal/dtj/past.htm (no figures, alas).

Volume 4 issue 4 introduced the Alpha architecture and includes an article describing the effort to port OpenVMS: http://www.hpl.hp.com/hpjournal/dtj/vol4num4/vol4num4art7.pd...

There's also the later (since 2003 or so) OpenVMS technical journal: http://h41379.www4.hpe.com/openvms/journal/

About a year ago, all of these were scanned from the paper originals. Available as PDF - WITH figures - from http://www.dtjcd.vmsresource.org.uk

OpenVMS supports nine programming languages, six of which use a DEC-developed, proprietary back-end code generator on both Alpha and Itanium. We are creating a converter to internally connect these compiler front-ends to the open source LLVM back-end code generator, which targets x86_64 as well as many other architectures. (The other three compilers have their own individual pathways to the new architecture.)

This should be interesting when they get done. OpenVMS had a reputation of not going down and having amazing storage clustering. I found it a bit odd when I used it in college and years later on a job. I wish it had been open sourced, but I guess it makes me feel good to know it’s still going regardless.

If you worked for one of their bigger customers, chances are someone paid for source access and it was available on an internal server :)

I browsed through a copy a few jobs ago and it's honestly kind of boring.

Boring in this context sounds correct.

VAX/VMS was always open sourced and available, first on microfiche and later on CD. I don't know if this continued after Compaq and then HP came to own the code, but that's how it started life. The only censored code was the license management facility, which, realistically, was trivially subvertible if you knew VAX assembly language.

This isn't quite correct. What was - and still is - made available to customers willing to pay for it were listings generated by the compilers, not the original sources. There are more bits and pieces besides LMF that are excluded, sometimes for security reasons, sometimes for legal reasons. Also not made available are any of the build procedures. What was never done was to release the sources under an open-source license. The source listings are intended as a debugging aid, not as a way to rebuild the OS.

While a factually correct description of what was made available, I don't think that's quite the intended meaning of "open source".

"Open source" historically meant just that: that you could look at the source, in contrast to machine code-only software.

Only later did the term get conflated with the "free software" idea from Stallman and the "preferred form for making modifications," as he put it.

While "source listings as an aid to debugging" was definitely done, and still is (e.g. Microsoft Windows source is available to researchers), Wikipedia disagrees with the use of the term "open source" as ever meaning this.

The term "open source" was first proposed by a group of people in the free software movement who were critical of the political agenda and moral philosophy implied in the term "free software"


Interesting. I'm probably confusing some things. I was under the impression that the term "open source" was older than both the free software movement and "open source" as defined by Eric Raymond et al. Not sure where I remember that from. Thanks for correcting me.

That's an interpretation that I had not seen before. You would probably need to provide references or evidence for that claim in order to counter the downvoting.

> (The other three compilers have their own individual pathways to the new architecture.)

Is one of them for VAX or is binary support for that architecture being dropped?

Based on some of the prior documents, I believe they are:

MACRO-32 (The compiler for VAX assembly), C++, Ada

MACRO-32 works at a lower level than the other languages, and needs to generate assembly directly instead of an IR. The C++ compiler is being replaced by Clang, and previous VMS releases used AdaCore's compilers, so AdaCore will need to be convinced to port its compiler to x86 VMS.

Close. Macro-32 on x86 will use a lower-level interface to the same LLVM backend used by the other languages.

Adding more OpenSource to OpenVMS is __NOT__ a good thing. Especially adding it at the compiler level.

Moving to a common back end, especially for BASIC, would be a truly wretched decision. It would also be the final nail in the coffin of the interactive BASIC environment which the VAX had and the industry begged for on Alpha. When it came to the Itanium (Itanic) hardware, we were all too busy begging for a processor that wouldn't combust like a Samsung Galaxy or one of those hoverboards.

The interactive BASIC environment wasn't just for students. It provided a great sandbox to test out algorithms quickly, much like the modern IDEs today's script kiddies reach for.

I see lots of questions about who uses OpenVMS. In general, places where downtime and security breaches are frowned upon, as well as markets where VMS has had a long-time presence, and where moving away doesn't solve any problems they have and just costs money (in some industries, particularly the cost of re-certifying systems).

Without giving away any specific customers, I have personally dealt with VMS customers in the following markets: nuclear industry (power plants, research reactors, reprocessing plants, research labs), military (weapons systems, logistics), law enforcement, railroads, airports, healthcare, manufacturing (process control, logistics), retail, and I may have overlooked some markets.

In the x86_64 server space (which is, what, 90% of the server market today?), the choice of operating systems is basically between Windows and some variant of Unix (Solaris and its open source offspring, numerous GNU/Linux distros, and *BSD).

Adding another system to the mix, especially one that has been in development for ~40 years, is an interesting proposition in itself. Add to that the reputation OpenVMS has for reliability and security, and I could certainly see a bright future for it on commodity hardware.

Whether that actually comes to pass is another question, but there is at least potential.

If the licensing fees on x86_64 aren't ridiculous, I will be tempted to make a serious evaluation of it at work.

This may be a fun anecdote or a cool story bro but I attended the 1982 National Computer Conference (aka NCC) in Houston, TX. At the time I was attending a high school with a PDP-11/05 and a PDP-11/34A which along with a PLATO terminal which were the center of any student's life interested in computers.

Needless to say I was beyond excited to see Digital's booth with actual Vaxen. They were mystical, super powerful systems we could only dream of one day touching at college. One of the floor walkers was demonstrating VMS and let me login and play around.

At school we had learned all sorts of commands by a mix of monkey command line silliness (we discovered SUBMIT when someone literally typed SUBMIT TO MY WILL at the prompt) and reading the hallowed documentation which were normally locked away in the IT director's office. You had to be especially good (straight A student) or a toady to get access to the docs.

Anyway we learned RSTS/E would let you claim resources with the allocate command. You could ALLOC a printer or disk but at NCC Digital had networked their systems to demonstrate global VAX clusters and VMS supported the same command across the network! You could ALLOC HOST::DEVICE and it would work.

Out of curiosity I ALLOC'd an entire VMS node (ALLOC RED::). It crashed. Unable to believe it when they got the server up I tried it again and it crashed again. I slunk away nervously sweating the rest of the day, convinced somehow that the Digital police were going to descend on me and kick me out of the conference. I went back the next day and confessed and told them what I had done to crash the VAX, figuring they would no longer be angry and that they should know what happened.

Unrelated to crashing the VAX I ended up getting a job offer from a Digital client to write Pascal code for MRI machines connected to VAXen strictly from one of those right place right time conversations. Those were the days. (Sadly as a 10th grader with no car I had to turn down the job.)

I found this picture of the team. Not sure how old the picture is. https://yellow.place/file/image/cover/0/0/204/ulriejwjzgbqti...

That's a fairly old one, IIRC it dates from the 8.4-1H1 release, or May 2015. The team is considerably larger today.

For those who have never heard of OpenVMS they might want to get themselves a copy of this book:


VMS was the OS of choice when it posolutely absitively had to work. It was even banned from the Black Hat conferences because it never gave up the flag. THEN they started adding OpenSource components and it became just as insecure and unstable as Windows and Linux. It is now, once again welcomed at Black Hat conferences because every piece of OpenSource on it is the same 8-lane wide security breach the other platforms have.

I worked on OpenVMS at the start of my career. I heard that the customers didn't want to port away from OpenVMS, and this prevented HP from killing it off. One famous story in VMS folklore is of a building containing a datacenter catching fire: the Unix admins ran into the building to retrieve the backup disks, while the OpenVMS admins did not, since clustering gives hot backup by default. Another story is of banks with datacenters near the twin towers. The datacenters went down when the air conditioners failed due to dust from the collapse of the twin towers, but the OpenVMS clusters continued to operate because the other half of each cluster was in New Jersey.

It was ported to Itanium, while the x86 port and the POSIX compatibility project were never completed.

The Commerzbank case (datacenter near twin towers, contingency site in Rye, NY) is well documented: http://www.availabilitydigest.com/public_articles/0407/comme...

Really hoping the Hobbyist License program continues on to the x86 port.

These dudes are insane in a good way. Also, their earlier slide decks had more diagrams and even screenshots.

Thanks! I have some videos online of presentations I did at last year's OpenVMS Boot Camp. Some with diagrams :-) My apologies for the sometimes poor audio quality.


For those unfamiliar with OpenVMS clusters, here's an excellent write-up: http://www.availabilitydigest.com/public_articles/0306/openv...

Is there a VAX emulator/VM for x86 that would allow running VMS?

SIMH emulates a lot of things, including VAX. You used to be able to get a Hobbyist license for OpenVMS from HP. I'm not sure whether you still can.


Yes, you can still get hobbyist licenses from HP for OpenVMS: https://www.hpe.com/h41268/live/index_e.aspx?qid=24548

Note that some fields that are not marked mandatory should be treated as mandatory, esp. the one about the intended use.

Besides VAX emulators, there are also Alpha emulators you can run OpenVMS on; of interest to hobbyists are FreeAXP - free as in free beer, not open source, and Windows only - (www.freeaxp.com), and for those who want to be able to change it, the open source ES40 Emulator (www.es40.org) (disclaimer - I am the author of both of these emulators)

Who, and why?

There are still a non-trivial pile of organizations out there that rely upon VMS. I still can hardly believe it myself when I run into them, but as I understand the situations when I ask out of curiosity, these are relatively lower-margin organizations, and the cost of rewriting their VMS applications for them greatly exceeds the cost of adopting a platform that is a port to a major chip architecture and emulating the older chip architecture on that new chip when needed.

Generally, a rewrite places the organization's ongoing processes at huge, disruptive risk, because the existing application and its configuration accumulated all the corner cases earned painfully in the past (often stretching into decades of accreted domain knowledge). Forget the cost of the technical effort to rewrite: even if that were zero, the business impact to take on all the costs of relearning even just a quarter of those lessons in even just a five year project is excessive.

I worked with it for 7 years and the software ran car dealerships. That codebase is 40 years old now and still had remnants from the 70s where it had been originally built for the VAX.

Migrating it to a new architecture would not be trivial. That software is extremely tied into the underlying inter-process messaging protocols and data file formats provided by the operating system. You're not just working with a single monolith and flat files or any known database system for example. Though I think some were available much later.

The company did split and build a new piece of software from the ground up to replace it. That effort took nearly a decade and millions of dollars - way longer than anticipated - and so long that it was sold off to private investors for continued funding long before it ever became marketable. It does well now though and bears little resemblance to the original system except in the UI concepts and screens.

OpenVMS was a pleasure to use though I was more on the support side and didn't have anything to do with the admin level maintenance. I did manage to accidentally shut down a production timeshare instance in the middle of the day once though and impact thousands of users.

Which reminds me it had its own networking, and networking hardware, so a dealership would have VT terminal green screens and the DECnet protocol would route it over IP directly into the server. It was pretty awesome.

You don't really hear of systems that "small" these days serving thousands of simultaneous connections and tonnes of back end processing.

It had nightly batching for accounting which was a common source of problems when something went wrong because nobody really understood how it worked anymore and THAT code was mostly the very old shit.

I am always curious to understand what corporate process on earth can take a decade to be rebuilt. It can be complex but not that complex. Particularly something like car dealership. We are not talking about some super diverse supply chain and interfacing with lots of different hardware, etc.

I think you'd be surprised at what goes into running a dealership. The software is complex with a lot of dependencies.

You have a full parts/warehouse inventory system and invoicing internally and to other mechanics. A service system (including rosters and timesheets) with its own invoicing system to customers. You need a vehicle module to manage all the local laws and taxes as well as interfacing with manufacturer systems for models and options (which are God-awful, undocumented, weird binary formats over long-dead networks - sometimes hardwired through parts of the phone companies that even they don't know they have anymore) so you can sell to customers.

So that's 3 different invoicing systems to start. It all needs to be underpinned with an extremely flexible accounting system in the back end because every dealer does stuff differently but all of those things are related. The parts department sells to the service department at a discount; the service department prepares new vehicles for sale (called pre-delivery) and bonuses and rebates occur through each step.

You need to import a lot more data covering log book service information and parts catalogs. And lots of reports for those car companies and online sellers and more.

Some of that isn't exactly a programmers problem but the breadth of the issue and lack of generalised knowledge sources (and how every car manufacturer does things completely differently), it's a minefield to keep one going let alone building a new one.

Consequently most of the world runs on only 2 or 3 systems and they're all ancient and generally the companies that develop them are awful and treat customers like garbage. There's a lot of vendor lock in and manufacturers fighting each other.

Car dealerships have a tough job. Also their accountants are near God level skills wise.

Anyway I don't think I answered your question. Yes I also don't know exactly what happened but yeah it was a disaster that only happened to work out because of millions of dollars. Once it WAS ready though money started flowing in because dealers are desperate to get away from the existing awful vendors. On the other hand each wanted so much custom functionality that it led to work pipelines planned years in advance by the time I left - and that doesn't make for happy customers either.

The key point in the GP is "nobody really understood how it worked anymore." No matter how accessible the domain generally might seem, once a system becomes a mysterious black box sans documentation or continuing expertise, no CIO is prepared to sign off the migration, for fear of missing a business rule or special case.

I've been the guy that left taking all the knowledge with him, years later learning that they've lost the documentation & the source code but the system is still running.

I've seen this occur in other domains (finance and education sectors) and basically the black box hangs on until either the unit, or the entire business, ceases to exist.

I did see one project that attempted a like-for-like reimplementation via a carve-out process and it was a decade-long gigadollar boondoggle that has basically crippled the enterprise that tried it.

It takes a minimum of 7 years to design from scratch, develop, test then settle in a "simple" customized order and inventory processing system, sans a WMS and picking component.

The "services" people subscribe to when setting up some kind of shopping cart on their Web site are neither complex nor highly customizable. Most importantly, they offer no opportunity whatsoever to purchase a guaranteed-delivery + guaranteed-execution service.

When an order from a dealer (doesn't matter what automotive area) leaves their Business System, it is no longer their responsibility. It is the responsibility of the badge (Freightliner, Harley, Toyota, etc.) That order, which can be hundreds of thousands of dollars each and every day, must completely pass through the badge system, routing out to warehouses, vendors, distributors and manufacturing schedules, without ever being lost no matter what fails.

This is a completely different business model than Criminal-Net sales where people complain about the billions of dollars worth of "abandoned shopping carts" each year. By and large these are not "abandoned shopping carts." These are the symptom of a poorly designed system built with feeble tools. Something FAILED and the consumer went to a different site to place their order.

Blade and rack mount servers are built with inferior hardware. Their entire premise is the swarm principle.

Many will die, but some will survive.

Each and every time one of the many die, the customer leaves. Some sites have tried to "fix" this problem by sending you emails every few hours for days about the things "you forgot in your cart." This isn't a fix. This is an alcoholic refusing to admit they have more than a drinking problem.

To some extent this is true. One, used to be Fortune 50 but now probably not even Fortune 1000 famous corporation has been "going to replace TOLAS on OpenVMS" for the past 20+ years. Every attempt, without exception, has failed spectacularly. Now every attempt will fail because everyone who knew the business rules which went into the code modifications is either retired or dead. If management decided to "just pull the plug" and install some worthless Oracle or SAP or (insert big named software package here) the lawsuits for contract violations would bankrupt the company 5 times over.

Despite _constant_ denial by a good many people who claim to be "in the know" OpenVMS is still heavily used by defense contractors and various Homeland Security/DOD groups. Once you purge _all_ OpenSource from it the OS really is rock solid and secure.

During the time around the Beijing Olympics, the largest and most modern Chinese steel mill was being designed, scheduled to be built after the games. They came to America for a custom control system written in FORTRAN running on OpenVMS. I know because I architected it.

A certain very famous brewer of dark beer in Ireland supposedly uses it to make their tasty beverages.

Most steel and paper mills around the world run OpenVMS for process control. A large number of nuclear power plants as well.

Putting it bluntly. OpenVMS, with _all_ OpenSource removed, is the OS humans bet the species on each and every day.

If you want to read a novella with a big section discussing what happens when OpenVMS suddenly doesn't exist, you should read this:


It's the middle book of the "Earth That Was" trilogy.

Infinite Exposure
Lesedi - The Greatest Lie Ever Told
John Smith - Last Known Survivor of the Microsoft Wars

Could you give an example of such an industry? VMS to me has always been that OS that everyone claimed to be better than UNIX-likes, but none could explain why.

Here's a couple of examples for you:

I Sysadmined a VMS box running on a couple of AlphaServers (2100 and 2100A) if I recall.

At one point a disgruntled employee had taken a .22 rifle into the server room and fired several rounds at the equipment. Only one bullet hit, but it went through a RAM bank; the system disabled the faulty hardware and continued running.

The other cool part was the interchangeable boards - the system had a case that swung open on the side, and you could insert cards much the same way as you slot in PCI cards today, except a card could be a CPU block or it could be RAM. So if you had a CPU-intensive workload you could disable a physical board of RAM, take it out, and replace it with a CPU board - without halting or rebooting the OS.

Ever work with Sun servers? Yeah, you could swap out cards, CPU, RAM, drives, power, without halting or rebooting the OS. But not just Sun servers. This is not a feature unique to AlphaServers/VMS.

It was originally a DEC specific design copied/stolen by others. DEC created the design concept of a common back plane. Other than some logic switches to control power/enabling, there was nothing it provided. Every machine was completely customizable.

I'm not a hardware guy, but I believe this is a carryover from the VAX days, long before there was Sun.

Nope! I didn't know that. Very interesting.

The Alphas had a motherboard and daughterboard - and the daughterboard was a generalized bus, so a single slot could take either a RAM or a CPU board --- were Sun servers capable of this too? Or was a RAM slot a RAM slot and a CPU slot a CPU slot, and they were not interchangeable?

I'm curious because I've never seen any other machines with a generalized CPU/RAM bus. Did Sun hardware support this? If it did, can you remember the model numbers so I can read up on it?

What happened to the processes using the RAM?

Imagine a server that has been up for decades. It can remain live through OS upgrades, hardware upgrades, and I believe even architecture changes.

It's like places that use mainframes: it's probably not a good idea to build a new bank on mainframes, but if you have one, it's far less risk to keep the mainframe than to rewrite everything.

OpenVMS has some very unique features, but truthfully its most valuable feature is that it runs the software they have.

But that's the thing: places which fit those requirements usually run some IBM solution (AS/400 and the likes). I've never encountered VMS in production before, so I'm really curious to some concrete examples of companies using it.

We use it at our company (financial software). One of the reasons being that the code is older than Linux and VMS was a good bet at the time.

We have almost entirely moved away from it and onto Linux but considering the decades of code and systems built on and around it, it is truly a monumental task to move off VMS while keeping a system used by very high paying clients running smoothly.

If your employer is who I think it is - you still have many years to go. :-)

I used to work as a software developer in the UK insurance industry, and I can tell you that we (and hence several of the insurers/brokers we dealt with) used OpenVMS.

In contrast to some of the other comments here, we weren't using it because moving to another OS - or architecture - would have been a pain. In fact, our software originally ran on x86 machines at some point in the distant past, before we migrated it to OpenVMS relatively recently.

I don't know why OpenVMS was given preference over GNU/Linux, *BSD, or Windows; I'm guessing that the technical managers above me had their reasons, but I couldn't say what they were.

I left that job a few years ago, but I think that they did intend to eventually phase out the VMS systems as our new, GUI-based product slowly replaced the older text-based one.

I haven't used other mainframe solutions, but one of my favorite features of VMS is logicals. Logicals are environment variables backed by a clustered key-value store that's safe to use for distributed locks. Logicals appear to your process in a hierarchy of tables that inherit from one another; by default you have: job (shell subprocess) > process (shell session) > user > group > system (node) > cluster. You can also build your own tables stored with whichever locality you choose (system, cluster, etc.) in the hierarchy. Meaning if you followed "12-factor" design - well, it didn't have a fancy name then - you could change configuration cluster-wide in an instant.
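A toy sketch of the lookup order described above - this is an illustration only, not the real VMS logical-name (LNM$) interface; the table names and logical names are made up. The point is that tables are searched innermost-first, so a process-level definition shadows a cluster-wide default:

```python
# Toy model of VMS logical-name translation: walk the scope hierarchy
# from innermost (job) to outermost (cluster) and return the first hit.
SEARCH_ORDER = ["job", "process", "user", "group", "system", "cluster"]

def translate(logical, tables):
    """Return the first translation found, walking inner -> outer scope."""
    for scope in SEARCH_ORDER:
        if logical in tables.get(scope, {}):
            return tables[scope][logical]
    raise KeyError(logical)

tables = {
    "cluster": {"APP_CONFIG": "CLUSTER$DISK:[CFG]APP.CONF"},
    "process": {"APP_CONFIG": "SYS$SCRATCH:TEST.CONF"},
}

# The process-level definition shadows the cluster-wide default:
print(translate("APP_CONFIG", tables))   # SYS$SCRATCH:TEST.CONF

# Remove it and the cluster-wide value becomes visible again:
del tables["process"]["APP_CONFIG"]
print(translate("APP_CONFIG", tables))   # CLUSTER$DISK:[CFG]APP.CONF
```

Changing a value in the cluster table would, in this model, instantly change what every process without a more local override sees - which is the "12-factor before it had a name" effect described above.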

We used to have a CNC machine controlled by a VMS box. I almost burst into tears when it was unceremoniously decommissioned while I was on holiday - that thing had been up for more than six years! (Which is not at all uncommon in VMS circles!)

Non military example: IKEA.

VMS was widely used in healthcare.

Some cable companies run it

In the days when I used VMS it was common for applications to be tied into the hardware. Therefore you only really ran a fairly bespoke set of applications that were specific to your business. Therefore it is never possible to eulogise fully about the cool wonders of VAX VMS as everything was domain specific.

I worked in remote sensing at a time when UNIX workstations were taking over the show. Again applications were tied into the hardware so if you needed an SGI for your code, that was what you had and again it was hard to fully explain the merits of a given platform in a way that makes sense to a Windows user.

One nice feature of VMS that I haven't seen mentioned here yet is the built in versioning of files. https://en.wikipedia.org/wiki/Files-11

This was widely reputed to be a method for DEC to sell more (expensive) disks. The only use I ever found for the built-in versions was purging them.

OpenVMS is more reliable than Unix, and bears a more sophisticated kernel design intended to handle real-world workloads. In particular, unlike every extant Unix except Solaris, async I/O is handled the correct way in OpenVMS (and Windows NT), with real async system calls and completion-based notification.

> OpenVMS is more reliable than Unix, and bears a more sophisticated kernel design intended to handle real-world workloads.

So Linux is not intended to run real-world workloads?

Linux is not Unix (?)

For all the reasons that matter, it is. Same kernel design, same external interfaces, same kind of internal interfaces.

Windows NT was a spiritual successor to VMS, sharing a father in DEC's Dave Cutler. VMS fell out of use and is widely considered history. Still, it had some interesting ideas and I've always preferred my history to be living.

My question is, which of those "interesting ideas" didn't end up in Windows NT? Is there anything other OSes (Linux, ⋆BSD, macOS) could learn from VMS, that they can't better learn from WNT?

Clustering - OpenVMS does clustering amazingly well. That is one thing that NT, Linux, and the BSDs could learn. Although DragonFly BSD is working on file-system clustering with HAMMER2.

Could you elaborate on how it did things better than the common cluster file systems of today? (OCFS2, Hadoop, Ceph, etc.)

Okay so OpenVMS wasn't just FileSystem clustering.

It was physical machine clustering. So a group of networked computers appeared like one physical computer. While Mesos or DC/OS can do this today, they don't offer durability.

On OpenVMS clusters if the machine running a job physically explodes you don't lose data. RAM and process state is network synchronized.

This allows for rolling upgrades. Your cluster can update and reboot one box at a time, but your apps never actually stop, TCP connections are never dropped, etc. For large corporations who've gotten used to these features, they're awesome to hold onto.

"RAM and process state is network synchronized."

Is this as slow as it sounds?

Yes. I saw the last days of a VMS cluster (actual VAX not Alpha) system being ported to windows/vb/com/asp. The speed difference was actually incredible. It ended up on a mediocre single Xeon 500MHz P3.

At the end of the day, the windows based system was just as reliable as far as nines went and tens of times faster.

This was only just recently replaced with asp.net I understand. They got nearly 20 years out of each rewrite.

You're confusing hardware speed with the software/IPC measurement - the fastest VAX machine ever made ran at 100 MHz (or maybe 120), with 10 Mbit Ethernet and SCSI-2 I/O paths (about 1 MB/s on a disk).

If you found a 486/100 and compared raw compute speed, the vax would likely have won for most cases, because it has 4x the registers.

I'm not actually sure about the clustering IPC performance, but you are essentially comparing hardware about 3-4 generations apart and blaming the software for the issues.. if you were comparing with 500MHz Alphas, etc, then you might have a point..

Not quite. We scaled it to 50x the user count at the same time.

> TCP connections are never dropped, etc.

One thing I wonder now: how can this work?

Is there a physical load balancer in front or something?

That claim is simply not true.

I don't have VMS experience, but rather AS/400 and iSeries experience. The difference vs. general-purpose OSes (Linux, Windows, OSX) is that the AS/400 - and I'm going to assume VMS, since they operated in similar markets - had these features baked in by the manufacturer. OS/400 on the AS/400 was a menu-based system. If the menu option existed, the feature was available, and it was as simple as flipping the bit to YES to enable it. Enable clustering? Select Yes, add the information for the cluster, and you were done. It just worked. Did it work well? I guess. There was no way to know if it could be done better, since it was a manufacturer-enabled feature. Of course Linux, Windows, and OSX can do the same things as OS/400 on the AS/400, but with a lot of configuration and a lot of jumping through hoops [it's not called Hadoop for nothing ;-)].

For example: you could make a cluster-wide lock that your code uses to coordinate access to a cluster-wide shared resource.

Locks on most OSes have an "outer scope" of a process or the machine.

Don't know about OCFS2, but Hadoop's file system HDFS and Ceph are hardly stellar examples of rock-solid and performant (I know it's not an English word) distributed file systems. Google's MapReduce was so much more efficient in speed and resources (4x and 4x the last time I played with those tools), primarily because of the bottlenecks imposed by HDFS's design and implementation.

If performant isn't actually a word, then a lot of people (myself included) have been doing it wrong. I can't think of a different variation on the word performance that would work.

Some people are allergic to it; as for myself, I cannot find a better substitute.

Please bear with the lack of details, as my (very limited) VMS experience involved an offline system which never broke, but: it is amazingly tolerant of hardware failures. I recall sitting slack-jawed over the AlphaStation manual after learning you could hot-swap just about anything, including CPU boards!

Granted, that is as much (if not more!) a feature of the hardware architecture as of the OS - but amazing still.

I've never heard of an NT box capable of that - maybe the Alpha port could do it? Anyone know?

Fun fact: Windows 2008 R2 on Itanium 2 had lock-step fault tolerance, which is on par with the other "biggies": VMS, IBM's OS/360/MVS, and HP's NonStop (nee Compaq, nee Tandem).

On a more relevant note, I know x86-64 Xeons can mark bad RAM, disk drives, and network cards, and hot-swap them all out (and have been able to for over 10 years) on most motherboards (it's supported at the OS level for Windows Server); I'm not sure it ever had full CPU-failure support, though.

The "everything is a file" paradigm and a non-physical filesystem come to mind.

I hear that VMS is one of those OSes that got its concurrency API right (BeOS would be another), and Windows NT inherited a lot of that. To give an example, POSIX-compliant asynchronous I/O on Linux is not that great (it uses threads internally). There is also non-POSIX async I/O on Linux, but these are far less mature and time-tested, and overall not a very uniform API.
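For context on the "threads internally" point: glibc's POSIX AIO dispatches each request to a pool of user-space worker threads running plain blocking reads, rather than handing the request to the kernel. A Python sketch of that same strategy (the `aio_read` name here is mine, not a binding to any real AIO library):

```python
import os
from concurrent.futures import ThreadPoolExecutor

# Thread-pool-backed "async" reads, the same strategy glibc uses
# for POSIX AIO: each request is just a blocking pread() running
# on a worker thread, with a future handed back to the caller.
_pool = ThreadPoolExecutor(max_workers=4)

def aio_read(fd, offset, length):
    """Submit a read; returns a Future instead of blocking."""
    # os.pread(fd, n, offset) reads n bytes at the given offset
    # without moving the file position.
    return _pool.submit(os.pread, fd, length, offset)
```

A true kernel-level path (such as Linux's io_uring, or the older io_submit interface) avoids the worker threads entirely, which is why it scales better under heavy I/O.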

That's not really a fair comparison: Windows I/O completion ports are a proprietary interface that is part of the NT kernel, while POSIX AIO on Linux is just a user-space libc extension that was added later on (as far as I know, none of the kernel-space POSIX AIO proposals got merged). It's easier to get things right when you define your own standards, at the cost of portability.

But even then, you might argue that completion ports are less than ideal: you incur a lot of complexity in the file-system drivers, while e.g. Solaris/BSD offer similar functionality without that burden.

I'm not familiar with BeOS at all, but browsing through Haiku's API documentation I can't really find much. What exactly were you referring to?

If you're talking about Linux specifically, that's not quite accurate history. Prior to POSIX threads, Linux had its own threading model, LinuxThreads, which frankly was terrible. They added full POSIX threading later on, in part because the Linux threading was so awful. [1] has more detail.

[1] https://en.wikipedia.org/wiki/LinuxThreads

True that, but I was talking about the POSIX async I/O API: http://pubs.opengroup.org/onlinepubs/009695399/basedefs/aio.... I don't know much about the LinuxThreads era.

Both LinuxThreads and the current NPTL are implementations of POSIX threads. NPTL is closer to the standard, but if your code doesn't need the stricter conformance, it could run on either with no changes.

Proprietary or not is quite orthogonal to my comment. I was commenting on the design of the API, its implementation, the leakiness of the abstractions used, etc. Async was never Unix's strength; Plan 9 might have fixed matters somewhat if it had caught on.

Several companies in petrochemicals, refining, utilities, oil & gas, and pharma still have OpenVMS in operation. The last one I encountered was in 2012; I was told it had been running for 10+ years nonstop.

I mostly worked with OpenVMS on VAX in the mid-90s. It was the workhorse of manufacturing/process-control systems.

See also Tandem, another HP owned system now. https://en.wikipedia.org/wiki/NonStop_(server_computers)

Which makes me wonder - does anyone know if HP has plans to port NonStop to some non-Itanium platform?

I have never worked with this system, but I have heard great things about its scalability and availability.

NonStop was already ported to x86-64 some years ago. It runs fine there - with better performance than it had on Itanium.

Thanks for the info!

I think at this point they are mostly targeting servers and not bothering with things like web browsers.

More like why not. It's still a very modern OS, in many ways unique, and with a large install base that pays well.

A few companies like IKEA and Indian Railways, as well as a bunch of healthcare companies.

As for why, because it works really well. It's not new and shiny but it doesn't break, it's fast, and it has a lot of nice features that still aren't common (clustering and RMS being the two big ones).
