I think that a mainframe computer like the B5000 (or a more conventional mainframe) was just too much machinery back then to overlap much with the needs of Apollo. Hamilton realized that a really good notion of "module" would help tremendously with integrity, and that the OS should have an active "overlord" to dynamically assess and deal with real-time conditions and needs for resources. I think this was just amazingly brilliant work for that time (and any time). NASA did give her a major award (but where was the ACM?).
I've never had the pleasure of talking with her, but I have a feeling that NASA could also have done something to encourage her ideas to be more communicated and put out into the open.
The one you are referring to was done in the 80s in the UK (I think). I recall that there were many good things about this system, and it did use microcode in a rather similar way to Parc in the 70s. Wasn't one of the machines they used a PERQ (which was an Alto spinoff by some CMU folks)?
I think I recall -- as with the Intel 432 -- they bit off more than they could chew, and tried to optimize too many things.
Microcode (invented long long ago by Maurice Wilkes, who did the EDSAC, arguably the first real programmable computer) used the argument that if you can make a small amount of memory plus CPU machinery be much faster than main memory, then you can successfully program "machine-level" functionality as though it were just hardware. For example, the Alto could execute about 5 microinstructions for every main memory cycle -- this allowed us to make emulators that were "as fast as they could be".
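To make the idea concrete, here is a toy sketch in Python of how a microcoded emulator works. The three-instruction target ISA (LOADI, ADD, HALT) and the table layout are entirely made up for illustration; this is not the Alto's (or any real machine's) microarchitecture, just the shape of the argument: each target instruction fetched from "main memory" expands into several fast micro-operations held in a control store.

```python
# Toy microcoded emulator. The "control store" maps each target opcode to a
# short routine of micro-operations; one main-memory fetch per target
# instruction, several micro-ops per fetch (the Alto could run about 5
# microinstructions per main memory cycle).

# Hypothetical target ISA, invented for this sketch:
#   ("LOADI", r, n)   -- load constant n into register r
#   ("ADD", r, a, b)  -- r = regs[a] + regs[b]
#   ("HALT",)         -- stop

MICROCODE = {
    "LOADI": [lambda st, r, n: st["regs"].__setitem__(r, n)],
    "ADD":   [lambda st, r, a, b: st["regs"].__setitem__(
                  r, st["regs"][a] + st["regs"][b])],
}

def run(program):
    state = {"regs": [0] * 4}
    for op, *args in program:            # one "main memory" fetch
        if op == "HALT":
            break
        for micro_op in MICROCODE[op]:   # micro-ops from the fast control store
            micro_op(state, *args)
    return state["regs"]

print(run([("LOADI", 0, 2), ("LOADI", 1, 3), ("ADD", 2, 0, 1), ("HALT",)]))
# regs: [2, 3, 5, 0]
```

The point of the structure is that the emulated machine's semantics live entirely in the MICROCODE table: swap the table and the same fetch loop becomes a different machine, which is exactly what made the Alto's emulators practical.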
This fit in well with the nature, speed, capacity, etc. of the memory available at the time. But "life is not linear", so we have to look around carefully each time we set out to design something. As Butler Lampson has pointed out, one of the things that makes good systems design very difficult is that the exponentials involved mean that major design rules may no longer obtain just a few years later.
So, I would point you here to FPGAs and their current capacities, especially for commingling processing and memory elements (they are the same) in highly parallel architectures. Chuck Thacker, who was mainly responsible for most of the hardware (and more) at Parc, did the world a service by designing the BEE-3 as "an Alto for today" in the form of a number of large FPGA chips plus other goodies. Very worth looking at!
The basic principle here is that "Hardware is just software crystallized early", so it's always good to start off with what is essentially a pie-in-the-sky software architecture, and then start trying to see the best way to run it in a particular day and time.