Logical Systems still sells its NuVAX machines, see https://logical-co.com/product/nuvax-4400-system/
They also do PDP-11 systems and hardware.
I think they might be built using an emulator on PC hardware with some interfacing hardware, but I would still call it a "new" VAX for most purposes.
Funnily enough, it appears that real VAXen were produced for a shorter time than the PDP-10, despite efforts going as far as encasing the last high-end PDP-10 prototype in concrete and dumping it in a mill pond (or so the story goes): single-chip PDP-10s are known to have been produced at least as late as 2004, after the last VAX rolled off the production line.
Doug had a KA10 in the living room of his apartment in a high rise tower in Maryland, near where I used to live.
He had to take it up the elevator in two pieces, and the elevator had a hard time aligning with the floor because of the weight, so he had to lift it up a foot to get it out the door.
The apartment didn't have the right two phase 220 volt power outlet, but it was possible for two people to simultaneously plug it into two different 110 volt outlets at opposite ends of the apartment that were on different phases to get it to work.
It didn't have any memory except for the registers, but you could load a loop of code into the registers and run it really fast (since they were mapped to low memory), to keep the apartment nice and cozy warm.
Good thing electricity was included in the rent!
Who was producing a PDP-10 in 2004? I'd love to know more about this.
The hardware is used as a router, but supposedly if you ask it very nicely you can get it to run TOPS-20.
DEC were very pleased with themselves when they got to ~40 VUPs in the later ECL models, but a full modern VLSI - not FPGA - implementation wouldn't break a sweat at 1000 VUPs.
PDP-11s ran at what, 1.25 MHz? I'd think that a modern CPU software-emulating a PDP-11 could get below those cycle times.
It's not uncommon for IO interfaces to be cycle-sensitive. I can only speculate, but perhaps they do not use a timer and it is cycle-exact code for certain timing operations. They could be bit-banging a serial protocol, as that was done then as now. Or they could be controlling very sensitive things where even a few microseconds of jitter is unacceptable. Tying it so closely to the CPU like that was a dirty but common practice in the 70s and 80s. And sometimes simply obligatory! There is no high precision timer on the low-end PDP-11s by default, I believe.
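To put rough numbers on the bit-banging case, here is a back-of-the-envelope cycle-budget calculation in Python. The clock rate and baud rate are assumptions for illustration (1.25 MHz matches the figure quoted elsewhere in the thread; 9600 baud is just a plausible serial rate), not facts about any particular installation.

```python
# Illustrative cycle-budget arithmetic for bit-banged serial I/O
# on a low-end PDP-11-class machine. Both figures are assumptions.

CPU_HZ = 1_250_000   # ~1.25 MHz, the low-end PDP-11 figure from the thread
BAUD = 9600          # an assumed serial rate

cycles_per_bit = CPU_HZ / BAUD   # CPU cycles available to produce one bit
bit_time_us = 1_000_000 / BAUD   # duration of one bit in microseconds

print(f"{cycles_per_bit:.0f} cycles per bit")  # ~130 cycles
print(f"{bit_time_us:.1f} us per bit")         # ~104 us

# A few microseconds of jitter is several cycles out of ~130, i.e. a few
# percent of the bit time -- cycle-exact code keeps this at zero, while a
# non-cycle-accurate emulator may not.
```

With only ~130 cycles per bit, software that counts instructions instead of reading a timer is entirely plausible, which is exactly why changing the effective cycle time can break such peripherals.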
> 512 KiB per core, 8-way set associative
So on the 64-core EPYC 7763 / 7713 you are looking at 32 megabytes of L2.
The 11/70 in 1975 already had a 22-bit address space and disks of around 10 MB. So -- yes, it's not unlikely that with a bit of hand crafting you could keep the whole shebang in L2, but you'd need a top-end CPU.
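The arithmetic behind that claim, sketched in Python (figures taken from the thread itself: 512 KiB L2 per core, 64 cores, and the 11/70's 22-bit physical address space):

```python
# Back-of-the-envelope check of "the whole shebang fits in L2".

per_core_l2 = 512 * 1024   # 512 KiB of L2 per core (from the thread)
cores = 64                 # e.g. the 64-core EPYC 7763 / 7713
total_l2 = per_core_l2 * cores
print(total_l2 // 2**20, "MiB of L2")  # 32 MiB

pdp1170_mem = 2**22        # 22-bit physical address space = 4 MiB
print(pdp1170_mem // 2**20, "MiB max physical memory on an 11/70")  # 4 MiB

# The machine's entire possible physical memory fits in L2 eight times over.
print(total_l2 // pdp1170_mem)  # 8
```

So even a maxed-out 11/70's memory is only an eighth of that L2, which is what makes the hand-crafted cache-resident emulation idea plausible.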
The big issue may not be with the trains themselves but the communications protocols used to talk to signalling equipment and other peripherals. Changing the timing in the communication with them may lead to problems.
And what if the original software has race condition bugs which have never been surfaced, and the occasional inaccuracy in timing starts to surface them? Good luck fixing bugs in some obscure piece of PDP-11 software that was written in the 1970s.
You are correct that 10ms is well within the margin of error for safely stopping a train, but it may be out of the margin for some subsystem in the control.
Are they real VAX servers, or have they been virtualized/emulated somewhere down the line?
 http://www.avanthar.com/healyzh/decemulation/pdp_fpga.html (scroll to bottom)
Not much info -- no more than is here:
Noboyuki Kondoh's PhD thesis on the university effort is here, but only the abstract is in English:
"Using FPGAs to Simulate old Game Consoles": https://jakob.engbloms.se/archives/3026
"AMSTRAD ON AN FPGA": https://hackaday.com/2017/01/06/amstrad-on-an-fpga/
"MISTER FPGA: THE FUTURE OF RETRO GAME EMULATION AND PRESERVATION?": https://www.racketboy.com/retro/mister-fpga-the-future-of-re...
The NextZ80 is a good example. It's designed to run 4x as many instructions at the same clock rate as a real Z80, and you can clock it at 40 MHz, so it's effectively a 160 MHz Z80 -- compare that to a typical 4/8 MHz real Z80.
New Vax Implementation https://news.ycombinator.com/item?id=27742540
You could cluster different versions of VMS, or cluster Alpha VMS with Itanium VMS, or all 3 at once. Alphas and Itaniums can run VAX emulators for legacy compatibility, and the copy of VMS in the emulator can be clustered with the OS on the host machine it's running on.
VMS clusters had uptimes in decades.
Compaq bought DEC. HP bought Compaq. So HP inherited VMS.
It's now been spun off as a new company: https://vmssoftware.com/
When VMS later gained a POSIX mode and TCP/IP it was renamed OpenVMS. VMS Software is now porting VMS to x86-64.
You can port *nix apps to VMS and run them natively. The clustering is still there and still works.
There's a small chance OpenVMS might enjoy a small renaissance, bringing mainframe-like uptime and resilience to commodity x86 hardware, without all that pointless faffing around with VMs and one OS running another different OS inside a VM, with all the duplication and waste that this entails.
Like a lot of people, I really miss filenames with built-in version numbers.
 Create a file called `WUMPUS.FOR`
 Every time you saved it, the OS automatically incremented the version number: `WUMPUS.FOR;2` and `WUMPUS.FOR;3` and `WUMPUS.FOR;42`.
 You could optionally `PURGE WUMPUS.FOR` and all the old versions would be removed, and then the OS would continue making new versions.
With this, who needs a revision control system? For a single user, this is all you need -- you can go back to old versions, copy or rename them to branch, etc.
Unix came along at the same time as this was happening and it never incorporated this functionality, which is why we need clumsy functionality bolted on top, such as Git. The actual versioning info isn't in the filesystem -- it's hidden inside concealed data files.
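The behaviour described above is easy to pin down precisely. Here is a minimal sketch of VMS/RSX-style automatic versioning in Python: the `;n` suffix convention and the `PURGE` semantics follow the description in this thread, but the function names (`save`, `purge`) are invented for the illustration, and this is of course not how ODS-2 actually stores versions on disk.

```python
# Minimal sketch of VMS-style file versioning semantics (illustrative only).
import os
import re

def save(directory, name, data):
    """Write a new version of `name`, auto-incrementing the ;n suffix."""
    latest = max(_versions(directory, name), default=0)
    path = os.path.join(directory, f"{name};{latest + 1}")
    with open(path, "w") as f:
        f.write(data)
    return path

def purge(directory, name):
    """Delete all but the highest-numbered version, like PURGE."""
    versions = sorted(_versions(directory, name))
    for v in versions[:-1]:
        os.remove(os.path.join(directory, f"{name};{v}"))

def _versions(directory, name):
    """Yield the version numbers of `name` present in `directory`."""
    pat = re.compile(re.escape(name) + r";(\d+)$")
    for entry in os.listdir(directory):
        m = pat.match(entry)
        if m:
            yield int(m.group(1))
```

Saving `WUMPUS.FOR` three times produces `WUMPUS.FOR;1`, `;2` and `;3`; purging then leaves only `;3`, after which saving again continues from `;4` -- matching the behaviour described above.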
The author says he re-implemented the CPU (using Verilog) on an FPGA running at 50 MHz, well enough to run a test binary successfully.
OK, so, basically all modern mass-market OSes of any significance derive in some way from 2 historical minicomputer families... from the same company.
Minicomputers are what came after mainframes, before microcomputers. A microcomputer is a computer whose processor is a microchip: a single integrated circuit containing the whole processor. Before the first one was invented in 1971 (Intel's 4004), processors were made from discrete logic: lots of little silicon chips.
The main feature distinguishing minicomputers from micros is that the early micros were single-user: one computer, one terminal, one user. No multitasking or anything.
Minicomputers appeared in the 1960s and peaked in the 1970s, and cost just tens to hundreds of thousands of dollars, while mainframes cost millions and were usually leased. So minicomputers could be afforded by a company department, not an entire corporation... meaning that they were shared, by dozens of people. So, unlike the early micros, minis had multiuser support, multitasking, basic security and so on.
The most significant minicomputer vendor was a company called DEC: Digital Equipment Corporation. DEC made multiple incompatible lines of minis, many called PDP-something -- some with 12-bit logic, some with 18-bit or 36-bit logic.
One of its early big hits was the 12-bit PDP-8. It ran multiple incompatible OSes, but one was called OS/8. This OS is long gone, but it was the origin of a command-line interface with commands such as DIR, TYPE, DEL, REN and so on. It also had a filesystem with 6-letter names (all in caps) with a semi-standardised 3-letter extension, such as README.TXT.
This OS and its shell later inspired Digital Research's CP/M OS, the first industry-standard OS for 8-bit micros. CP/M was going to be the OS for the IBM PC but IBM got a cheaper deal from Microsoft for what was essentially a clean-room re-implementation of CP/M, called MS-DOS.
So DEC's PDP-8 and OS/8 directly inspired the entire PC-compatible industry, the whole x86 computer industry.
Another DEC mini was the 18-bit PDP-7. Like almost all DEC minis, this too ran multiple OSes, both from DEC and others.
A 3rd-party OS hacked together as a skunkworks project on a disused spare PDP-7 at AT&T's research labs was UNIX.
More or less at the same time as the computer industry gradually standardised on the 8-bit byte, DEC also made 16-bit and 32-bit machines.
Among the 16-bit machines, the most commercially successful was the PDP-11. This is the machine that UNIX's creators first ported it to, and in the process, they rewrote it in a new language called C.
The PDP-11 was a huge success so DEC was under commercial pressure to make an improved successor model. It did this by extending the 16-bit PDP-11 instruction set to 32 bits. For this machine, the engineer behind the most successful PDP-11 OS, called RSX-11, led a small team that developed a new, pre-emptive multitasking, multiuser OS with virtual memory, called VMS.
VMS is still around: it was ported to DEC's Alpha, one of the first 64-bit RISC chips, and later to Intel's Itanium. Now it has been spun out from HP and is being ported to x86-64.
But the VMS project leader, Dave Cutler, and his team, were headhunted from DEC by Microsoft.
At this time, IBM and Microsoft had very acrimoniously fallen out over the failed OS/2 project. IBM kept the 32-bit x86 version of OS/2 for the 386, which it completed and sold as OS/2 2.0 (and later 2.1, 3, 4 and 4.5; it is still on sale today under the name Blue Lion from Arca Noae).
At Microsoft, Cutler and his team were given the very incomplete OS/2 version 3, a planned CPU-independent portable version. Cutler _et al_ finished it, porting it to Intel's new RISC chip, the i860, which was codenamed the "N-Ten". The resultant OS was initially called OS/2 NT, and was later renamed -- due to the success of Windows 3 -- Windows NT. Its design owes as much to DEC's VMS as it does to OS/2.
Today, Windows NT is the basis of Windows 10 and 11.
So the PDP-8 and PDP-11 directly influenced the development of CP/M, MS-DOS, OS/2, & Windows 1 through to Windows ME.
A different line of PDPs directly led to UNIX and C.
Meanwhile, the PDP-11's 32-bit successor directly influenced the design of Windows NT.
When micros grew up and got to be 32-bit computers themselves, and vendors needed multitasking OSes with multiuser security, they turned back to 1970s mini OSes.
This project is a FOSS re-implementation of the VAX CPU on an FPGA. It is at least the 3rd such project but the earlier ones were not FOSS and have been lost.
And let's not forget:
* VMS/VAX had virtualization & clustering in the '80s
* systems which ranged from low-cost models - for example CMOS MicroVAXes or the older bit-sliced 11/730 - to high-performance ECL - for example the VAX 9000
* VMS uses ASTs - asynchronous system traps - for asynchronous event delivery ... imho this approach is far superior to any "software-interrupt" concept from UNIX etc. and pays off especially under high load (read: database server)
ps.: VMS and Windows NT were so similar in their (basic) concepts that the joke VMS++ = WNT still holds ;)
pps.: when it came out in 1992 and during the following years, Digital's Alpha processor was far superior to nearly anything on the market back then ... ok, maybe 2nd to MIPS ;)
There was a ton more influence too. I was trying to highlight perhaps the most significant bits people who weren't yet born in the 1970s or 1980s might have seen, used, and know.
As pointed out over on lobste.rs by David Chisnall:
• The VAX memory layout is why modern OSes' memory layouts are the way they are.
• The VAX block size of 512 bytes is why hard disks had 1/2 kB blocks until recently.
• The VMS VM file was called PAGEFILE.SYS, which is why NT's has that name.
• Unix first got virtual memory in the VAX version. The ordinary kernel was called `unix`, and to distinguish it, the VM version was called `vmunix`. That's why the Linux kernel was called `vmlinux` and later the compressed version `vmlinuz`.
• The PDP-11 instruction set influenced the CPU designs of the General Instrument CP1600 (whose CP1610 variant powered the Intellivision), the Hitachi SuperH and the Motorola 68000; the Heathkit H11 was literally a PDP-11 (DEC's LSI-11) in kit form.
• The VAX's protection rings influenced the protection rings of the Intel 80386, which is why x86-32 has 4 rings but almost all OSes use only 2 of them.
There is a huge amount of low-level stuff in the designs of the Mac, Amiga, ST, the PC and x86 in general, in games consoles, even in late-1970s and early-1980s 8-bit home computers, that are that way because DEC were so totally dominant... but a lot of this stuff is only visible if you program in machine code, know the memory layouts and so on.
So I tried to focus on the stuff visible to users, sysadmins, maybe techie-minded gamers. Filenames, commands, and so on.
The 1970s computer industry was facetiously described as "IBM and the Seven Dwarves"; after GE and RCA dropped out, the surviving mainframe makers were called "the BUNCH": Burroughs, UNIVAC, NCR, CDC and Honeywell.
Among the minicomputer makers, DEC was by far the biggest and most influential.
DEC's minicomputer line ranged from de-facto microcomputers -- single-user desktop minis built into terminals -- to massive multi-cabinet minicomputers that competed directly with mainframes. Really, the lines blurred together.
But DEC's fatal flaw was that its senior management thought cheap single-user micros, and UNIX, were passing trends of no real importance. It feared micros cutting into its lucrative minicomputer profit margins, and thought its proprietary OSes were better than UNIX.
Both of these statements were arguably true but even an industry giant can't change the course of a whole industry.
So, ironically, in the end, IBM designed the IBM PC using ideas and concepts from DEC (and Apple), not from IBM's own big hardware. The PC was a huge success and changed the industry... but IBM itself lost control of the PC industry and ended up quitting it.
DEC refused to embrace micros and UNIX until it was too late, so never gained decisive control of those markets... which doomed it.
thank you for this.
• OS/2 version 1.x: 80286, 16-bit, only a single DOS session. Cannot run Windows apps, but can run "family apps" -- binaries compiled to execute natively as DOS apps on DOS and OS/2 apps on OS/2.
• OS/2 version 2.x: 80386, 32-bit, true DOS multitasking alongside OS/2 text-mode and Presentation Manager GUI multitasking. Can boot DOS in a VM, and also run a modified copy of Windows 3 inside a VM allowing 16-bit Windows apps to run on the OS/2 desktop alongside native OS/2 apps.
Marketed as OS/2 v2.0, 2.1, and later as "Warp 3", "Warp 4" and "Warp Server". Licensed to 3rd parties as eComStation (from Serenity Systems, and later Mensys) and later as Blue Lion.
• OS/2 version 3: unfinished, planned to be CPU-independent, able to run on various RISC chips (e.g. IBM POWER, Sun SPARC, MIPS etc.) as well as x86.
OS/2 3 was substantially rewritten and completed by Cutler and team using a lot of VMS ideas, technologies and terminology as OS/2 NT, later Windows NT. NT for "New Technology" was a backronym.
NT 3.1, 3.5 and 3.51 contained an "OS/2 subsystem" and were able to execute text-mode 16-bit OS/2 binaries natively. They could also format and read/write hard disk partitions in OS/2 HPFS format.
NT 4 removed the ability to handle HPFS, but the NT 3.51 SP5 HPFS driver could be installed and used to access existing HPFS partitions, though not to create them.
Windows 2000 removed the OS/2 subsystem completely.
• IBM Workplace OS/2 for POWER: a largely unrelated OS, running on top of a Mach microkernel (the same Mach lineage later used in Mac OS X), with an OS/2-compatible userland on top. Able to run 16-bit and 32-bit OS/2 x86 binaries under emulation as well as native POWER binaries.
(To you: you might want to clarify in the title that this is the VAX architecture they're talking about (since "vax" unfortunately might be confused here with the now-used shorthand for vaccine).)