"Flex" has been a very popular label for computer systems. I was referring to an early desktop computer done by Ed Cheadle and myself in the late 60s for a company owned by LTV (Ling Tempco Vought -- an aerospace conglomerate).

The one you are referring to was done in the 80s in the UK (I think). I recall that there were many good things about this system, and it did use microcode in a rather similar way to Parc in the 70s. Wasn't one of the machines they used a PERQ (which was an Alto spinoff by some CMU folks)?

I think I recall -- as with the Intel 432 -- they bit off more than they could chew, and tried to optimize too many things.




The final part is about "microcode" etc. in being able to get the most flexibility/performance from hardware. A good heuristic is to look far out (30 years) for "It would be ridiculous if we didn't have ...". See what any of those might mean 10-15 years out. Simulate the promising ones today, using $ to pay for what Moore's Law and other engineering will provide at lower cost in the future. This will give a platform today on which the software of the future can be invented, developed, and tested. (This is what we did to get the Alto at Parc -- and this is just what it was: in the mid 70s an Alto cost about $22K -- about $120K today -- but it allowed the software of the future to be invented.)
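To make that heuristic concrete, here is a rough back-of-the-envelope sketch in Python. The 2-year cost-halving period and the hypothetical $2,000 future machine are assumptions for illustration only; the Alto figures are the ones quoted above.

    # Rough sketch: what does it cost today to buy the capability that will be
    # cheap N years from now, if the cost of a fixed capability roughly halves
    # every `doubling_years` years? (The halving period is an assumption.)

    def cost_today(future_cost_usd: float, years_out: float,
                   doubling_years: float = 2.0) -> float:
        """Project a future price back to the present under exponential cost decline."""
        return future_cost_usd * 2 ** (years_out / doubling_years)

    if __name__ == "__main__":
        # Hypothetical: a $2,000 "personal machine" expected ~12 years out would
        # cost about $2,000 * 2**6 = $128,000 to approximate today -- the same
        # order of magnitude as an Alto (~$22K in mid-70s dollars, ~$120K today).
        print(f"${cost_today(2_000, 12):,.0f}")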

Microcode (invented long long ago by Maurice Wilkes, who did the EDSAC, arguably the first real programmable computer) used the argument that if you can make a small amount of memory plus CPU machinery be much faster than main memory, then you can successfully program "machine-level" functionality as though it were just hardware. For example, the Alto could execute about 5 microinstructions for every main memory cycle -- this allowed us to make emulators that were "as fast as they could be".
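To illustrate the idea (a toy sketch only, not Alto microcode -- the opcodes, micro-operations, and register names are invented for illustration): each "machine-level" instruction of the emulated machine expands into a short sequence of micro-operations that touch only fast local state, visiting main memory as rarely as possible.

    # Toy microcoded emulator: MEM_READ/MEM_WRITE are the only micro-ops that
    # touch (slow) main memory; everything else runs against fast local state.

    memory = [0] * 256                                  # slow main memory
    regs = {"ACC": 0, "MAR": 0, "MDR": 0}               # fast local registers

    # Microcode store: target-machine opcode -> sequence of micro-operations.
    MICROCODE = {
        "LOAD":  ["MAR<-OPERAND", "MEM_READ", "ACC<-MDR"],
        "ADD":   ["MAR<-OPERAND", "MEM_READ", "ACC<-ACC+MDR"],
        "STORE": ["MAR<-OPERAND", "MDR<-ACC", "MEM_WRITE"],
    }

    def run_micro(op: str, operand: int) -> None:
        """Execute one micro-operation against the registers and memory."""
        if op == "MAR<-OPERAND":
            regs["MAR"] = operand
        elif op == "MEM_READ":
            regs["MDR"] = memory[regs["MAR"]]           # the slow step
        elif op == "MEM_WRITE":
            memory[regs["MAR"]] = regs["MDR"]           # the slow step
        elif op == "ACC<-MDR":
            regs["ACC"] = regs["MDR"]
        elif op == "ACC<-ACC+MDR":
            regs["ACC"] = regs["ACC"] + regs["MDR"]
        elif op == "MDR<-ACC":
            regs["MDR"] = regs["ACC"]

    def emulate(program) -> None:
        """Fetch each target instruction and run its microcode sequence."""
        for opcode, operand in program:
            for micro_op in MICROCODE[opcode]:
                run_micro(micro_op, operand)

    memory[10], memory[11] = 3, 4
    emulate([("LOAD", 10), ("ADD", 11), ("STORE", 12)])
    print(memory[12])                                   # -> 7

The trick pays off when several such micro-operations can run in the time of one main-memory cycle: the interpretation overhead then mostly disappears, which is what made the emulators "as fast as they could be".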

This fit in well with the nature, speed, capacity, etc. of the memory available at the time. But "life is not linear", so we have to look around carefully each time we set out to design something. As Butler Lampson has pointed out, one of the things that makes good systems design very difficult is that the exponentials involved mean that major design rules may no longer obtain just a few years later.

So, I would point you here to FPGAs and their current capacities, especially for commingling processing and memory elements (they are the same) in highly parallel architectures. Chuck Thacker, who was mainly responsible for most of the hardware (and more) at Parc, did the world a service by designing the BEE3 as "an Alto for today" in the form of a number of large FPGA chips plus other goodies. Very worth looking at!

The basic principle here is that "hardware is just software crystallized early", so it's always good to start off with what is essentially a pie-in-the-sky software architecture, and then start trying to see the best way to run it at a particular day and time.



