Did someone ask about Intel processor history? :-) The Intel 8080 (1974) didn't use microcode, but there were many later processors that didn't use microcode either. For instance, the 8085 (1976). Intel's microcontrollers, such as the 8051 (1980), didn't use microcode either. The RISC i860 (1989) didn't use microcode (I assume). The completely unrelated i960 (1988) didn't use microcode in the base version, but the floating-point version used microcode for the math, and the bonkers MX version used microcode to implement objects, capabilities, and garbage collection. The RISC StrongARM (1997) presumably didn't use microcode.
As far as x86, the 8086 (1978) through the Pentium (1993) used microcode. The Pentium Pro (1995) introduced an out-of-order, speculative architecture with micro-ops instead of microcode. Micro-ops are kind of like microcode, but different. With microcode, the CPU executes an instruction by sequentially running a microcode routine, made up of strange micro-instructions. With micro-ops, an instruction is broken up into "RISC-like" micro-ops, which are tossed into the out-of-order engine, which runs the micro-ops in whatever order it wants, sorting things out at the end so you get the right answer. Thus, micro-ops provide a whole new layer of abstraction, since you don't know what the processor is doing.
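As a toy illustration of the difference (the decomposition below is invented for clarity; Intel's actual micro-ops are undocumented), a read-modify-write instruction might be split into load, compute, and store micro-ops that the out-of-order engine can schedule independently:

```python
# Toy sketch: splitting an x86-style read-modify-write instruction into
# RISC-like micro-ops. The names and decomposition are illustrative,
# not Intel's actual (undocumented) micro-op encoding.

def decode(instruction):
    """Break a macro-instruction into micro-ops."""
    if instruction == "add [mem], eax":
        return [
            ("load",  "tmp", "[mem]"),   # read the memory operand
            ("add",   "tmp", "eax"),     # do the arithmetic
            ("store", "[mem]", "tmp"),   # write the result back
        ]
    return [(instruction,)]              # simple instructions map 1:1

uops = decode("add [mem], eax")
print(len(uops))  # 3 micro-ops for one instruction
```

The point of the split is that the load, the add, and unrelated neighboring micro-ops can all be in flight at once; the retirement stage reorders the results so the program sees the original instruction order.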
My personal view is that if you're running C code on a non-superscalar processor, the abstractions are fairly transparent; the CPU is doing what you tell it to. But once you get to C++ or a processor with speculative execution, one loses sight of what's really going on under the abstractions.
If you have a child, you should definitely check out FIRST, started by Dean Kamen, who also invented the Segway. For elementary students, FIRST LEGO League uses simple, LEGO-based robotics. In high school, FIRST Robotics has the students building very impressive robots.
Microcode is specific to a given implementation, so if you make your own x86 implementation, it's not going to run AMD's or Intel's microcode unless you go out of your way to make it do so. NEC didn't infringe Intel's copyright, because their processor ran different microcode than Intel's, and NEC won that lawsuit.
Way back (circa 1988) I remember a digital logic professor giving a little aside on the 8087, remarking that it used some three-value (or maybe four-value) logic circuits: instead of being all binary, some parts used base 3 (or 4) to squeeze more onto the chip.
From your microscopic investigations, have you seen any evidence that any part of the chip uses anything other than base 2 logic?
The ROM in the 8087 was very unusual: It used four transistor sizes so it could store two bits per transistor, so the storage was four-level. Analog comparators converted the output from the ROM back to binary. This was necessary to fit the ROM onto the die. The logic gates on the chip were all binary.
Do you know what other prior systems did for co-processor instructions? The 8086 and 8087 must have been designed together for this approach to work, so presumably there is a reason they didn't choose what other systems did.
It is notable that ARM designed explicit co-processor instructions, allowing for 16 co-processors. They must have taken the 8086/8087 approach into account when doing that.
AMD's Am9511 floating-point chip (1977) acted like an I/O device, so you could use it with any processor. You could put it in the address space, write commands to it, and read back results. (Or you could use DMA with it for more performance.) Intel licensed it as the Intel 8231, targeting it at the 8080 and 8085 processors.
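The programming model was simple enough to sketch: push operands into the chip, write a command byte, poll the status register until it's done, then pop the result. The port addresses, busy bit, and command code below are placeholders, not the Am9511's real values:

```python
# Sketch of driving an Am9511-style math chip as a plain I/O device.
# Port addresses, busy-bit position, and command byte are placeholders,
# not the real Am9511 values.

DATA_PORT, CTRL_PORT = 0x80, 0x81   # hypothetical I/O addresses
BUSY = 0x80                          # hypothetical busy bit in status
CMD_FADD = 0x10                      # hypothetical "float add" command

def fadd(io, a_bytes, b_bytes):
    """Push two operands, issue a command, poll, pop the result."""
    for byte in a_bytes + b_bytes:   # operands go onto the chip's stack
        io.write(DATA_PORT, byte)
    io.write(CTRL_PORT, CMD_FADD)    # start the operation
    while io.read(CTRL_PORT) & BUSY: # wait until the chip is done
        pass
    return [io.read(DATA_PORT) for _ in range(len(a_bytes))]

class FakeIO:
    """Minimal stand-in for port I/O so the sketch runs anywhere."""
    def __init__(self):
        self.stack, self.result = [], []
    def write(self, port, value):
        if port == DATA_PORT:
            self.stack.append(value)
        else:                        # command: fake a 4-byte result
            self.result = self.stack[:4]
    def read(self, port):
        if port == CTRL_PORT:
            return 0                 # never busy in the fake
        return self.result.pop(0)
```

Because the whole protocol is just port reads and writes, any CPU with an I/O bus could use the chip, which is exactly why it wasn't tied to one processor family the way the 8087 was.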
I remembered Weitek as making math co-processors, but it turns out they did an 80287 equivalent, and nobody appears to have done an 8087 equivalent. Wikipedia claims the later co-processors used I/O, so this complicated bus-monitoring approach seems to have been used by only one generation of the architecture.
Yes, the 80287 and 387 used some I/O port addresses reserved by Intel to transfer the opcode, and a "DMA controller" like interface on the main processor for reading/writing operands, using the COREQ/COACK pins.
Instead of simply reading the first word of a memory operand and otherwise ignoring ESC opcodes, the CPU had to be aware of several different groups of FPU opcodes to set up the transfer, with a special register inside its BIU to hold the direction (read or write), address, and segment limit for the operand.
It didn't do all protection checks "up front", since that would have required even more microcode, and also they likely wanted to keep the interface flexible enough to support new instructions. At that time I think Intel also had planned other types of coprocessor for things like cryptography or business data processing, those would have used the same interface but with completely different operand lengths.
So the CPU had to check the current address against the segment limit in the background whenever the coprocessor requested to transfer the next word. This is why there was a separate exception for "coprocessor segment overrun". Then of course the 486 integrated the FPU and made it all obsolete again.
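The per-word check described above can be modeled roughly like this (the exception name is the real one from the 80286; the rest is a simplified software model of what the BIU does in hardware):

```python
# Simplified model of checking each operand word against the segment
# limit during a background coprocessor transfer. The real check happens
# in the CPU's bus interface unit; the structure here is illustrative.

class CoprocessorSegmentOverrun(Exception):
    pass

def transfer_operand(memory, base, offset, limit, n_words):
    """Read n_words 16-bit words, checking the limit on each access."""
    words = []
    for i in range(n_words):
        addr = offset + 2 * i
        if addr + 1 > limit:       # word would cross the segment limit
            raise CoprocessorSegmentOverrun(f"offset {addr:#x}")
        words.append(memory[base + addr] | (memory[base + addr + 1] << 8))
    return words
```

Since the operand length came from the coprocessor rather than the instruction, the CPU couldn't know in advance how many words it would transfer, which is why the check had to happen word by word instead of up front.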
"On-Topic: Anything that good hackers would find interesting. That includes more than hacking and startups. If you had to reduce it to a sentence, the answer might be: anything that gratifies one's intellectual curiosity. "
My unpopular opinion is that programming is stuck in the 1970s: a lot of programmers use a 1970s-style terminal window to enter 1970s OS commands, which run on a 1970s processor architecture (which is slowly getting replaced by a 1980s architecture). They use a 1970s editor (which is much superior to the other 1970s editor) to write programs in a 1970s language. ASCII diagrams are just a symptom of this. Hardware is millions of times better than in the 1970s, but programming is stuck in local optima for historical reasons.
(Not to take anything away from Monosketch, which is cool.)
I wish it were stuck in the 1970s! (Although the mouse had been invented by then.) I do not want the mouse and I do not want all these windows. If I am using agents I want the mouse even less.
This is not for historical reasons; it's just that moving my hands from the keyboard to the mouse is inefficient and technically unnecessary. I prefer the mouse only for niche (for me) tasks like screenshot cropping.
I am about to test out Niri on my laptop and I expect to be quite pleased with the change.
I was thinking about this the other day - I watched a video about the acme editor and it was showing off text editing in a shell buffer, much like M-x shell. I realized I haven’t yet found a terminal emulator that will let you select text with a mouse while you’re editing in the shell. It’s such a simple thing that would be so useful, especially on a Mac where CUA bindings don’t conflict with terminal escape codes. iTerm lets you Option+click to position the cursor but you can’t select a word with the mouse and press ‘delete’. Why? It seems like such a simple thing to do.
Windows Terminal allows this, afaik. It might be a feature of clink, but I feel like I've seen it in both PowerShell and cmd. Not sure if it's available in the traditional console window; I rarely use those much these days. (sudo on Windows is nice, so the only reason left to use an elevated window is for multiple commands, like browsing system directories you don't have access to; I just wish it had a different 'official' name.)
Because we have yet to invent a more efficient data-transformation system than the shell, or a more efficient text-editing interface than vi. But it's not like there's no innovation in the space; we have `jq` now.
A historical note: even though the line/box-drawing characters go back to the IBM PC, they aren't ASCII. The PC used Code page 437, which added a bunch of characters to ASCII. To be genuinely ASCII, you need to draw your boxes with pipes and hyphens (| and -).
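For instance, a strictly-ASCII box needs only pipes, hyphens, and plus signs for the corners:

```python
# Drawing a box with only 7-bit ASCII characters (| - +), no code
# page 437 line-drawing characters required.

def ascii_box(text):
    line = "+" + "-" * (len(text) + 2) + "+"
    return "\n".join([line, "| " + text + " |", line])

print(ascii_box("hello"))
# +-------+
# | hello |
# +-------+
```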
The 80×24 display of the IBM 3270 is the reason that terminal windows nowadays are typically 80×24 (or 25 lines because the IBM PC added an extra line). IBM dominated the CRT terminal market in the mid-1970s, so other manufacturers were mostly forced to 80×24 in order to be compatible. (And of course the IBM 3270 had 80 columns for compatibility with punch cards.)
I thought hydragyrum was a made-up word, but it's the Latin word for mercury, which explains the Hg chemical symbol. (Just in case anyone finds this interesting.)
As a historical note, the first President Bush proposed in 1989 to establish a base on the Moon and send astronauts to Mars by 2020. In 2004, the second President Bush set a goal of returning to the Moon by 2020 and going to Mars in the 2030s, starting the Constellation program. In 2017, Trump announced that astronauts would return to the Moon, with the Artemis III project now planning a landing no earlier than 2028.
As a result, I don't have a lot of optimism about a US landing on the Moon. On the other hand, the James Webb Space Telescope did succeed even though the launch date slipped from 2007 to 2021. So I've learned not to be completely pessimistic.
> In 2004, the second President Bush set a goal of returning to the Moon by 2020 and going to Mars in the 2030s, starting the Constellation program. In 2017, Trump announced that astronauts would return to the Moon, with the Artemis III project now planning a landing no earlier than 2028
Between those two the economic effects of invading Iraq came home to roost. We “won” the invasion. But lost the board.