
A Brief History of Microprogramming - quickfox
https://people.cs.clemson.edu/~mark/uprog.html
======
Animats
They missed the Thompson-Ramo-Wooldridge 530, which was the first machine
with microcode in main memory. They called the microprogram words "logands",
for "logic operands", and there was a logand assembler, in case someone
wanted to write new microcode. This was a major milestone in
microprogrammability. Or it would have been, if the machine worked.

The TRW-530 was supposed to compete with the IBM 1401, but the 530 was so
unreliable that only two were made. Case Tech (now CWRU) got stuck with one
years after it was discontinued. It was used only to copy UNIVAC tapes to IBM
tapes as part of a migration to new UNIVAC hardware. The machine at least had
decent tape drives. A custom interface connected it to a UNIVAC 1107, as a
total slave of the reliable 1107. The machine was so cost-reduced that it
didn't even have buttons on the front panel; the operator used a "magic
wand" (a logic probe) to touch contacts and set bits in registers.

Everybody involved in keeping it running was happy when all tapes had been
converted and it could be sent off to the scrap heap.

The microcode-in-main-memory idea came from an era when CPUs were slower
than memory. In the 1960s, memory was waiting for the CPU, not the other way
round, and it wasn't until the 1980s that mainstream CPUs decisively pulled
ahead of main memory on speed. Today we need three levels of cache to cope
with CPUs orders of magnitude faster than a main memory access.

This led to some strange dependencies on memory. The IBM 1620 had a base-10
multiplication table in main memory, loaded at boot time. That was how
multiply worked: the hardware looked up digit products in the table and
added up the shifted partial products.
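
A minimal sketch of the idea (plain Python, assuming a simple 10x10
digit-product table; the 1620's actual decimal addressing differed):

    # Hypothetical illustration: multiply with no multiplier circuit,
    # using only a digit-product table "loaded at boot" plus shift/add.
    MUL_TABLE = [[a * b for b in range(10)] for a in range(10)]

    def table_multiply(x, y):
        result = 0
        for i, xd in enumerate(reversed(str(x))):
            for j, yd in enumerate(reversed(str(y))):
                digit_product = MUL_TABLE[int(xd)][int(yd)]  # the only "multiply"
                result += digit_product * 10 ** (i + j)      # decimal shift, then add
        return result

    assert table_multiply(1234, 56) == 69104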

Many architectural decisions in the history of computing came from the speed
and cost of main memory. Virtual memory was developed as a cost-saving
measure, to allow faking more memory by paging stuff out to disk. Paging out
to disk is obsolete today (unknown on mobile, painful on desktops, dumb on
servers), although most major OSs still support it.

~~~
userbinator
_Many architectural decisions in the history of computing came from the speed
and cost of main memory_

RISC is one of those too, and it's not a coincidence that the idea started in
the early 80s when CPUs were still slower than memory.

~~~
capitalsigma
How so? The major idea behind RISC, I thought, was to implement very few
opcodes and make them very fast, which seems to me to be distinct from the
question of CPU vs. memory speed.

~~~
Animats
The original claim for RISC was one instruction per clock. There are some nice
simple RISC machines, especially from MIPS, which really work that way. But
once CPUs gained lookahead and instruction-level parallelism and went
superscalar, they were doing _more_ than one instruction per clock. This
took far more transistors and complexity, but the Pentium Pro showed it
could be done. It took 3,000
engineers to design that CPU, and the Pentium II and III were essentially the
same architecture.

Now RISC was behind. Superscalar RISC machines were built. But the lower cost
and design simplicity were gone. The motivation for RISC went with them.

On top of that, most RISC machines padded all instructions out to the same
length. Compared to x86, programs took 2x as much memory, which not only
ran up cost but also required more memory bandwidth, one of the scarcest
resources now that CPUs were consuming instructions so quickly.
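
A back-of-the-envelope sketch of the density argument (the byte counts
below are typical illustrative values, not measurements, and the exact
ratio depends heavily on the instruction mix):

    # Hypothetical instruction mix: (operation, x86 bytes, fixed-length RISC bytes).
    # Classic RISC encodes everything in 4 bytes; a 32-bit immediate load often
    # needs two instructions (e.g. lui+ori on MIPS), hence 8 bytes.
    trace = [
        ("inc reg",           1, 4),
        ("add reg, imm8",     3, 4),
        ("load [reg+disp8]",  3, 4),
        ("store [reg+disp8]", 3, 4),
        ("mov reg, imm32",    5, 8),
    ]
    x86_bytes = sum(x for _, x, _ in trace)    # 15 bytes
    risc_bytes = sum(r for _, _, r in trace)   # 24 bytes
    print(risc_bytes / x86_bytes)              # 1.6 for this mix; real code varies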

------
dreamcompiler
Great article. Microcode is indeed a different activity from assembly
programming. It's the place where software and logic design come together. I
built a custom bit-sliced CPU for TI many years ago and wrote all the microcode
myself. It was one of the most valuable experiences of my engineering
education.

------
p1esk
Microprogramming is a level of abstraction between an assembly language and
a hardware description language.

------
minipci1321
I first saw 'millicode' mentioned when HP introduced the then-new PA-RISC
1.0 ISA (second half of the eighties), yet Wikipedia now says IBM invented
the concept and the term in 1997.

