Volume 4 issue 4 introduced the Alpha architecture and includes an article describing the effort to port OpenVMS:
This should be interesting when they get done. OpenVMS had a reputation for not going down and for amazing storage clustering. I found it a bit odd when I used it in college and years later on a job. I wish it had been open sourced, but I guess it makes me feel good to know it's still going regardless.
I browsed through a copy a few jobs ago and it's honestly kind of boring.
Only later did the term get conflated with the "free software" idea from Stallman and the "preferred form for making modifications," as he put it.
The term "open source" was first proposed by a group of people in the free software movement who were critical of the political agenda and moral philosophy implied in the term "free software".
Is one of them for VAX or is binary support for that architecture being dropped?
MACRO-32 (the compiler for VAX assembly), C++, and Ada.
MACRO-32 works at a lower level than the other languages, and needs to generate assembly directly instead of an IR. The C++ compiler is being replaced by Clang, and the previous VMS releases used Adacore's compilers, so Adacore will need to be convinced to port their compiler to x86 VMS.
Moving to a common back end, especially for BASIC, would be a truly wretched decision. It would also be the final nail in the coffin of the interactive BASIC environment which the VAX had and the industry begged for on Alpha. When it came to the Itanium (Itanic) hardware, we were all too busy begging for a processor that wouldn't combust like a Samsung Galaxy or one of those hoverboards.
The interactive BASIC environment wasn't just for students. It provided a great sandbox to test out algorithms quickly, much like what today's script kiddies would call a modern IDE.
Without giving away any specific customers, I have personally dealt with VMS customers in the following markets: nuclear industry (power plants, research reactors, reprocessing plants, research labs), military (weapons systems, logistics), law enforcement, railroads, airports, healthcare, manufacturing (process control, logistics), retail, and I may have overlooked some markets.
Adding another system to the mix, especially one that has been in development for ~40 years, is an interesting proposition in itself. Add to that the reputation OpenVMS has for reliability and security, and I could certainly see a bright future for it on commodity hardware.
Whether that actually comes to pass is another question, but there is at least potential.
Needless to say I was beyond excited to see Digital's booth with actual Vaxen. They were mystical, super powerful systems we could only dream of one day touching at college. One of the floor walkers was demonstrating VMS and let me login and play around.
At school we had learned all sorts of commands through a mix of monkey command-line silliness (we discovered SUBMIT when someone literally typed SUBMIT TO MY WILL at the prompt) and reading the hallowed documentation, which was normally locked away in the IT director's office. You had to be especially good (straight-A student) or a toady to get access to the docs.
Anyway, we learned RSTS/E would let you claim resources with the ALLOCATE command. You could ALLOC a printer or disk, but at NCC, Digital had networked their systems to demonstrate global VAX clusters, and VMS supported the same command across the network! You could ALLOC HOST::DEVICE and it would work.
Out of curiosity I ALLOC'd an entire VMS node (ALLOC RED::). It crashed. Unable to believe it when they got the server up I tried it again and it crashed again. I slunk away nervously sweating the rest of the day, convinced somehow that the Digital police were going to descend on me and kick me out of the conference. I went back the next day and confessed and told them what I had done to crash the VAX, figuring they would no longer be angry and that they should know what happened.
Unrelated to crashing the VAX, I ended up getting a job offer from a Digital client to write Pascal code for MRI machines connected to VAXen, strictly from one of those right-place, right-time conversations. Those were the days. (Sadly, as a 10th grader with no car, I had to turn down the job.)
It was ported to Itanium, while the x86 port and the POSIX compatibility project were never completed.
VMS was the OS of choice when it posolutely absitively had to work. It was even banned from the Black Hat conferences because it never gave up the flag. THEN they started adding OpenSource components and it became just as insecure and unstable as Windows and Linux. It is now, once again, welcomed at Black Hat conferences because every piece of OpenSource on it is the same 8-lane-wide security breach the other platforms have.
Note that some fields that are not marked mandatory should be treated as mandatory, esp. the one about the intended use.
Besides VAX emulators, there are also Alpha emulators you can run OpenVMS on. Of interest to hobbyists are FreeAXP (www.freeaxp.com), which is free as in free beer, not open source, and Windows-only; and, for those who want to be able to change it, the open-source ES40 Emulator (www.es40.org). (Disclaimer: I am the author of both of these emulators.)
Generally, a rewrite places the organization's ongoing processes at huge, disruptive risk, because the existing application and its configuration accumulated all the corner cases earned painfully in the past (often stretching into decades of accreted domain knowledge). Forget the cost of the technical effort to rewrite: even if that were zero, the business impact to take on all the costs of relearning even just a quarter of those lessons in even just a five year project is excessive.
Migrating it to a new architecture would not be trivial. That software is extremely tied into the underlying inter-process messaging protocols and data file formats provided by the operating system. You're not just working with a single monolith and flat files, or any known database system, for example, though I think some database options became available much later.
The company did split and build a new piece of software from the ground up to replace it. That effort took nearly a decade and millions of dollars - way longer than anticipated - and so long that it was sold off to private investors for continued funding long before it ever became marketable. It does well now though and bears little resemblance to the original system except in the UI concepts and screens.
OpenVMS was a pleasure to use though I was more on the support side and didn't have anything to do with the admin level maintenance. I did manage to accidentally shut down a production timeshare instance in the middle of the day once though and impact thousands of users.
Which reminds me, it had its own networking, and networking hardware, so a dealership would have VT-terminal green screens and the DECnet protocol would route it over IP directly into the server. It was pretty awesome.
You don't really hear of systems that "small" these days serving thousands of simultaneous connections and tonnes of back end processing.
It had nightly batching for accounting which was a common source of problems when something went wrong because nobody really understood how it worked anymore and THAT code was mostly the very old shit.
You have a full parts/warehouse inventory system and invoicing internally and to other mechanics. A service system (including rosters and timesheets) and its own invoicing system to customers. You need a vehicle module to manage all the local laws and taxes as well as interfacing with manufacturer systems for models and options (which are God-awful, undocumented, weird binary formats over long-dead networks, sometimes hardwired through parts of the phone companies that even they don't know they have anymore) so you can sell to customers.
So that's 3 different invoicing systems to start. It all needs to be underpinned with an extremely flexible accounting system in the back end because every dealer does stuff differently but all of those things are related. The parts department sells to the service department at a discount; the service department prepares new vehicles for sale (called pre-delivery) and bonuses and rebates occur through each step.
You need to import a lot more data covering log book service information and parts catalogs. And lots of reports for those car companies and online sellers and more.
Some of that isn't exactly a programmer's problem, but given the breadth of the issue, the lack of generalised knowledge sources, and how every car manufacturer does things completely differently, it's a minefield to keep one going, let alone build a new one.
Consequently most of the world runs on only 2 or 3 systems, they're all ancient, and generally the companies that develop them are awful and treat customers like garbage. There's a lot of vendor lock-in and manufacturers fighting each other.
Car dealerships have a tough job. Also their accountants are near God-level, skills-wise.
Anyway I don't think I answered your question. Yes I also don't know exactly what happened but yeah it was a disaster that only happened to work out because of millions of dollars. Once it WAS ready though money started flowing in because dealers are desperate to get away from the existing awful vendors. On the other hand each wanted so much custom functionality that it led to work pipelines planned years in advance by the time I left - and that doesn't make for happy customers either.
I've been the guy that left taking all the knowledge with him, years later learning that they've lost the documentation & the source code but the system is still running.
I've seen this occur in other domains (finance and education sectors) and basically the black box hangs on until either the unit, or the entire business, ceases to exist.
I did see one project that attempted a like-for-like reimplementation via a carve-out process and it was a decade-long gigadollar boondoggle that has basically crippled the enterprise that tried it.
The "services" people subscribe to when setting up some kind of shopping cart on their Web site are neither complex nor highly customizable. Most importantly, they offer no opportunity whatsoever to purchase a guaranteed-delivery plus guaranteed-execution service.
When an order from a dealer (doesn't matter what automotive area) leaves their Business System, it is no longer their responsibility. It is the responsibility of the badge (Freightliner, Harley, Toyota, etc.) That order, which can be hundreds of thousands of dollars each and every day, must completely pass through the badge system, routing out to warehouses, vendors, distributors and manufacturing schedules, without ever being lost no matter what fails.
This is a completely different business model than Criminal-Net sales where people complain about the billions of dollars worth of "abandoned shopping carts" each year. By and large these are not "abandoned shopping carts." These are the symptom of a poorly designed system built with feeble tools. Something FAILED and the consumer went to a different site to place their order.
Blade and rack mount servers are built with inferior hardware. Their entire premise is the swarm principle.
Many will die, but some will survive.
Each and every time one of the many dies, the customer leaves. Some sites have tried to "fix" this problem by sending you emails every few hours for days about the things "you forgot in your cart." This isn't a fix. This is an alcoholic refusing to admit they have more than a drinking problem.
Despite _constant_ denial by a good many people who claim to be "in the know" OpenVMS is still heavily used by defense contractors and various Homeland Security/DOD groups. Once you purge _all_ OpenSource from it the OS really is rock solid and secure.
Around the time of the Beijing Olympics, the largest and most modern Chinese steel mill was being designed, scheduled to be built after the games. They came to America for a custom control system written in FORTRAN running on OpenVMS. I know because I architected it.
A certain very famous brewer of dark beer in Ireland supposedly uses it to make their tasty beverages.
Most steel and paper mills around the world run OpenVMS for process control. A large number of nuclear power plants as well.
Putting it bluntly. OpenVMS, with _all_ OpenSource removed, is the OS humans bet the species on each and every day.
If you want to read a novella with a big section discussing what happens when OpenVMS suddenly doesn't exist, you should read this:
It's the middle book of the "Earth That Was" trilogy.
Lesedi - The Greatest Lie Ever Told
John Smith - Last Known Survivor of the Microsoft Wars
I sysadmined a VMS box running on a couple of AlphaServers (2100 and 2100A), if I recall correctly.
At one point a disgruntled employee had taken a .22 rifle into the server room and fired several rounds at the equipment. Only one bullet hit, but it went through a RAM bank; the system disabled the faulty hardware and continued running.
The other cool part was the interchangeable boards. The system had a case that swung open on the side, and you could insert cards much the same way as you slot in PCI cards today, except one of the cards could be a CPU block, or it could be RAM. So if you had a CPU-intensive workload, you could disable a physical board of RAM, take it out, and replace it with a CPU board, without halting or rebooting the OS.
I'm not a hardware guy, but I believe this is a carry-over from the VAX days, long before there was Sun.
The Alphas had a motherboard and daughterboard - and the daughterboard was a generalized bus, so a single slot could have a RAM or CPU slot in it --- were Sun servers capable of this too? Or was a RAM slot a RAM slot and a CPU slot a CPU slot and they were not interchangeable?
I've never seen any other machines with a generalized CPU/RAM bus, so I'm curious whether Sun hardware supported this. If it did, can you remember the model numbers so I can read up on it?
It's like places that use mainframes: it's probably not a good idea to build a new bank on mainframes, but if you already have one, it's far less risk to keep the mainframe than to rewrite everything.
OpenVMS has some very unique features, but truthfully its most valuable feature is that it runs the software they have.
In contrast to some of the other comments here, we weren't using it because moving to another OS - or architecture - would have been a pain. In fact, our software originally ran on x86 machines at some point in the distant past, before we migrated it to OpenVMS relatively recently.
I don't know why OpenVMS was given preference over GNU/Linux, *BSD, or Windows; I'm guessing that the technical managers above me had their reasons, but I couldn't say what they were.
I left that job a few years ago, but I think that they did intend to eventually phase out the VMS systems as our new, GUI-based product slowly replaced the older text-based one.
We have almost entirely moved away from it and onto Linux but considering the decades of code and systems built on and around it, it is truly a monumental task to move off VMS while keeping a system used by very high paying clients running smoothly.
I worked in remote sensing at a time when UNIX workstations were taking over the show. Again applications were tied into the hardware so if you needed an SGI for your code, that was what you had and again it was hard to fully explain the merits of a given platform in a way that makes sense to a Windows user.
So Linux is not intended to run real workloads?
It was physical PC clustering: a group of networked computers appeared as one physical computer. While Mesos or DC/OS can do this today, they don't offer the same durability.
On OpenVMS clusters if the machine running a job physically explodes you don't lose data. RAM and process state is network synchronized.
This allows for rolling upgrades. Your cluster can update and reboot one box at a time, but your apps never actually stop, TCP connections are never dropped, etc. For large corporations that have gotten used to these features, they're awesome to hold onto.
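The rolling-upgrade behavior described above can be sketched very loosely as a toy Python model. This is an assumption-laden illustration of the idea (state mirrored to every live node, so any single node can reboot without loss), not the real OpenVMS cluster protocol or API:

```python
# Toy model of cluster-replicated state: writes are mirrored to all
# live nodes, so any one node can die or reboot without data loss.

class Node:
    def __init__(self, name):
        self.name = name
        self.state = {}        # stands in for "network-synchronized RAM"
        self.alive = True

class Cluster:
    def __init__(self, names):
        self.nodes = [Node(n) for n in names]

    def write(self, key, value):
        for n in self.nodes:
            if n.alive:
                n.state[key] = value      # synchronous mirror to every node

    def read(self, key):
        for n in self.nodes:
            if n.alive:
                return n.state[key]       # any surviving node can answer
        raise RuntimeError("whole cluster down")

    def rolling_restart(self):
        # Reboot one node at a time; reads keep succeeding throughout.
        for n in self.nodes:
            n.alive = False               # node goes down for its upgrade
            survivor = self.read("job")   # peers still serve the state
            n.state = {}                  # fresh boot loses local RAM...
            n.alive = True
            self.write("job", survivor)   # ...and resyncs from the cluster

cluster = Cluster(["RED", "BLUE", "GREEN"])
cluster.write("job", "payroll-batch-42")
cluster.rolling_restart()
print(cluster.read("job"))  # payroll-batch-42
```

The node names and the "job" key are made up for the example; the point is only that no single reboot in the loop ever makes the state unreadable.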
Is this as slow as it sounds?
At the end of the day, the Windows-based system was just as reliable as far as nines went, and tens of times faster.
This was only just recently replaced with ASP.NET, I understand. They got nearly 20 years out of each rewrite.
If you found a 486/100 and compared raw compute speed, the VAX would likely have won for most cases, because it has twice the registers (16 vs. 8).
I'm not actually sure about the clustering IPC performance, but you are essentially comparing hardware about 3-4 generations apart and blaming the software for the issues. If you were comparing with 500 MHz Alphas, etc., then you might have a point.
One thing I wonder now: how can this work?
Is there a physical load balancer in front or something?
Locks on most OSes have an "outer scope" of a process or the machine.
Granted, that is as much (if not more!) a feature of the hardware architecture than the OS - but amazing still.
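The two conventional lock scopes mentioned above can be shown concretely. Below is a small, Unix-only Python sketch (an illustration, not any VMS API): a threading.Lock is process-scoped, an advisory flock(2) on a file is machine-scoped, and OpenVMS's distributed lock manager adds the third, cluster-wide scope that neither of these reaches:

```python
import fcntl, tempfile, threading

# Process scope: this lock is visible only to threads inside this one
# process; another process on the same machine never sees it.
process_lock = threading.Lock()

# Machine scope: an advisory flock(2) on a file is visible to every
# process on this machine, but not to other machines in a cluster.
path = tempfile.NamedTemporaryFile(delete=False).name
holder = open(path, "w")
fcntl.flock(holder, fcntl.LOCK_EX)              # take the machine-wide lock

# A second open file description on the same file contends for the lock,
# exactly as a second process would.
contender = open(path, "w")
try:
    fcntl.flock(contender, fcntl.LOCK_EX | fcntl.LOCK_NB)
    conflict = False
except OSError:                                 # EWOULDBLOCK: already held
    conflict = True

print(conflict)  # True: the second holder is refused machine-wide
```

A cluster-wide lock manager generalizes this one step further: the contender can be on a different machine entirely and still be refused.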
I've never heard of an NT box capable of that - maybe the Alpha port could do it? Anyone know?
On a more relevant note, I know x86-64 Xeons can mark bad RAM, disk drives, and network cards, and hot-swap them all out (and have been able to do so for over 10 years) on most motherboards (it's supported at the OS level for Windows Server); I'm not sure it ever had full CPU-failure support though.
But even then, you might argue that completion ports are less than ideal: you incur a lot of complexity in the file-system drivers, while e.g. Solaris/BSD offer similar functionality without that burden.
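For contrast, the readiness-based model (the epoll/kqueue style) keeps the read itself in user code: the kernel only reports that a descriptor is ready, so nothing about buffer completion leaks into the drivers. A minimal, generic sketch using Python's selectors module (not tied to any VMS or NT API):

```python
import selectors, socket

# Readiness-based I/O: register interest, wait for the kernel to say a
# descriptor is readable, then perform the read ourselves.
sel = selectors.DefaultSelector()
a, b = socket.socketpair()
sel.register(b, selectors.EVENT_READ)

a.send(b"ping")                         # make b readable
events = sel.select(timeout=1)          # kernel reports readiness only
ready = [key.fileobj for key, _ in events]
data = b.recv(4) if b in ready else None

print(data)  # b'ping'
```

In a completion-based model the kernel would instead hand back an already-filled buffer, which is where the extra driver-level complexity comes from.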
I'm not familiar with BeOS at all, but browsing through Haiku's API documentation I can't really find much. What exactly were you referring to?
I mostly worked with OpenVMS on VAX in mid-90s. They were the workhorse of manufacturing/process control systems.
I have never worked with this system, but I have heard great things about its scalability and availability.
As for why, because it works really well. It's not new and shiny but it doesn't break, it's fast, and it has a lot of nice features that still aren't common (clustering and RMS being the two big ones).