Microcode: the place where hardware and software meet (alanwinfield.blogspot.co.uk)



Reminds me of this question on Super User: How does a CPU 'know' what commands and instructions actually mean? http://superuser.com/questions/307116/how-does-a-cpu-know-wh...

Microcode in its most basic form is just a big look-up table (possibly nested, if the word size is limited) containing the electrical control signals for a particular instruction, plus a new address to "jump" to (either the next micro-operation, or back to the "fetch" micro-op). From the programmer's point of view, however, there's no difference between a microcoded and a hard-wired CPU: both fetch and execute sequences of instructions.
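To make that concrete, here's a minimal C sketch of a micro-sequencer. All the names, widths, and encodings are invented for illustration; real control words are far wider:

    #include <stdint.h>

    /* One micro-instruction: a bundle of control signals plus the
       address of the next micro-op to execute. */
    typedef struct {
        uint32_t control_signals; /* drives ALU op, register enables, bus muxes */
        uint8_t  next_addr;       /* subsequent micro-op, or back to fetch */
    } micro_op;

    enum { FETCH = 0, ADD_1 = 1, ADD_2 = 2 }; /* micro-program addresses */

    /* Writable control store, so entries can be patched after the fact. */
    static micro_op ucode_rom[] = {
        [FETCH] = { 0x0001, FETCH }, /* latch opcode; decode selects next_addr */
        [ADD_1] = { 0x0052, ADD_2 }, /* put operands on the ALU input buses    */
        [ADD_2] = { 0x0124, FETCH }, /* write ALU result back, resume fetch    */
    };

    static void drive_control_lines(uint32_t signals) {
        (void)signals; /* in hardware these bits directly toggle wires */
    }

    static void run_microsequencer(void) {
        uint8_t upc = FETCH;              /* micro-program counter */
        for (;;) {
            micro_op op = ucode_rom[upc]; /* the "big look-up table" */
            drive_control_lines(op.control_signals);
            upc = op.next_addr;           /* the "jump" field */
        }
    }

A hard-wired CPU computes the same control signals with gates instead of a table lookup, which is why the two look identical from the outside.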

Of course, there's an obvious advantage to the microcoded approach: your instruction set gets uploaded to your hardware in a ROM! You can change the operation of instructions after the hardware is made, make custom sequences of instructions, or even fix microcode bugs (while uncommon, there have been cases in the past).
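In the sketch above, fixing a buggy instruction after the fact is just overwriting one table entry (assuming the control store is patchable RAM rather than true ROM, and with a made-up control word):

    /* Hypothetical microcode patch: change what ADD's last micro-op drives. */
    ucode_rom[ADD_2].control_signals = 0x0134; /* corrected control word */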


> You can change the operation of instructions after the hardware is made, make custom sequences of instructions, or even fix microcode bugs (while uncommon, there have been cases in the past).

I am reminded of the famous Intel Pentium FDIV bug (1994). It seems they had implemented a new and improved floating-point division algorithm that needed only half the cycles of the previous one.

The new algorithm used a 1066-cell lookup table to find partial quotients used in the calculation. Unfortunately, due to a screwup, five of the cells were left blank when the ROM was loaded. This led to obscure errors in the results, which went unnoticed until many thousands of chips were in customers' hands. At that point, Intel faced a massively embarrassing recall.

http://engineeringfailures.org/?p=466

http://www.trnicely.net/pentbug/pentbug.html
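This toy C program (nothing like the real radix-4 SRT divider, just the failure mode) shows how a few blank cells in a loaded table can go unnoticed: results are correct for almost every input and silently wrong for the handful that hit the missing entries:

    #include <stdio.h>

    /* Toy model: 256-entry table of scaled reciprocals seeding a division. */
    static unsigned recip_table[256];

    static void load_table(void) {
        for (int i = 1; i < 256; i++)
            recip_table[i] = 65536 / i;
        /* Simulate the loading screwup: a couple of cells stay blank. */
        recip_table[37]  = 0;
        recip_table[101] = 0;
    }

    static unsigned table_divide(unsigned num, unsigned den) {
        return (num * recip_table[den]) >> 16; /* approximates num / den */
    }

    int main(void) {
        load_table();
        printf("%u\n", table_divide(1000, 40)); /* prints 24 (~1000/40), fine */
        printf("%u\n", table_divide(1000, 37)); /* blank cell: 0, not ~27     */
        return 0;
    }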


Uncommon? I'd venture to say that all desktop architectures have known bugs. Some are just documented, some are fixed by microcode, and some require workarounds in software or compilers.

Example (Core i7, May 2011): http://download.intel.com/design/processor/specupdt/320836.p...

BIOS updates often contain bugfixes for your CPU.


I don't like waiting for BIOS vendors to package CPU firmware updates, and I don't like updating the BIOS unless I have to. If you are running Debian, just install (amd64|intel)-microcode[1]. The package provides the most recent microcode updates from the CPU vendor and applies them on boot.

[1] http://lists.debian.org/debian-user/2012/11/msg00193.html


Donn Stewart has an enlightening and relatively simple explanation of the basics of how a CPU works, including a great treatment of microcode and the control cycle.

http://cpuville.com/cpu_design.pdf

And here's his main site: http://cpuville.com


Clicking on http://cpuville.com/cpu_design.pdf gives me:

    Forbidden
    Remote Host: [<redacted>]

    You do not have permission to access this
    page or file

    Data files must be stored on the same site
    they are linked from.
To see the PDF you need to visit the main page and then click through. Fortunately, if you first try the forbidden page, the link you want will already be purple for you.


I can see it by turning off my referer in Firefox.


If you're having trouble viewing, the easiest way is to copy-and-paste the URL into a new window or tab.

(As noted by krapp, it's a referer check)


The original ARM RISC design did away with microcode, with the chip executing instructions directly, although I think it might have crept back in since then.

http://en.wikipedia.org/wiki/ARM_architecture#Instruction_se...


There are many more machines without microcode: the 6502, machineForth chips like the F21 and c18, etc. I'd argue that they are much more appealing to a designer with taste.


I think the 6502 decode ROM, which matches opcode bits and sequence numbers to internal CPU functions, is a very good example. It can be seen either as just a rather complex demultiplexer or, keeping its sequencing nature in mind, as a minimal, hard-coded computer program.

Does microcode have to execute like traditional software to be called microcode?


The 6502 is an interesting case. The decode ROM (PLA) gets instructions about 1/3 of the way decoded, and then there's a whole pile of random control logic [1], [2] that generates the actual control lines [3]. The output from the decode ROM consists of semi-meaningless things such as ADC/SBC in state T0, or JSR in T5 [4], and after a whole bunch of gates these get turned into the actual control functions such as S bus to A register. [3]

[1] image of the 6502 at http://www.visual6502.org/images/6502/index.html

[2] schematic http://www.visual6502.org/wiki/index.php?title=650X_Schemati...

[3] block diagram http://www.weihenstephan.org/~michaste/pagetable/6502/6502.j...

[4] details of ROM http://visual6502.org/wiki/index.php?title=6507_Decode_ROM
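Here's a rough software picture of that split (purely illustrative; the names and masks only approximate the real chip): the PLA does pattern matching from (opcode, timing state) to intermediate terms, and the "random logic" is the ad-hoc gating that turns those terms into actual control lines:

    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {      /* partial-decode terms out of the PLA */
        bool adc_t0;      /* "ADC in state T0" */
        bool jsr_t5;      /* "JSR in state T5" */
    } pla_terms;

    typedef struct {      /* final control lines after the gate soup */
        bool s_bus_to_a;  /* e.g. "S bus to A register" */
        bool alu_carry_in;
    } control_lines;

    /* Decode ROM/PLA: pure pattern matching on opcode bits and cycle. */
    static pla_terms pla_decode(uint8_t opcode, int t_state) {
        pla_terms p = { false, false };
        p.adc_t0 = ((opcode & 0xE3) == 0x61) && (t_state == 0); /* 011xxx01 */
        p.jsr_t5 = (opcode == 0x20) && (t_state == 5);
        return p;
    }

    /* "Random" (ad hoc, irregular) logic: combines PLA terms, flags,
       interrupt state, etc. into the signals that drive the datapath. */
    static control_lines random_logic(pla_terms p) {
        control_lines c = { false, false };
        c.s_bus_to_a   = p.jsr_t5;
        c.alu_carry_in = p.adc_t0;
        return c;
    }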


"Random control logic" Oxymoron?

Is the control logic really random in some way? That sounds impossible, or at least highly inefficient.


No, it's obviously not random. It is carefully designed, but there is no regularity to it. The expression "random logic" describes circuitry that is "ad hoc" rather than a regular matrix, like RAM, registers, or an ALU.

Google the expression to learn more.

* http://www.google.co.uk/search?q=random+logic

* http://en.wikipedia.org/wiki/Random_logic


Ah, taste, one of the most important traits of a good engineer.


"RISC architecture eliminates microcode routines and turns low-level control of the machine over to software. The RISC approach is not new, but its application has become more prevalent in recent years, due to the increasing use of high-level languages, the development of compilers that are able to optimize at the microcode level, and dramatic advances in semiconductor memory and packaging. It is now feasible to replace relatively slow microcode ROM with faster RAM that is organized as an instruction cache. Machine control resides in this instruction cache that is, in effect, customized on-the-fly: the instruction stream generated by system- and compiler-generated code provides a precise fit between the requirements of high-level software and the low-level capabilities of the hardware."[1]

-- MIPS R4000 Microprocessor User’s Manual, Chapter 1, Page 2

[1] http://groups.csail.mit.edu/cag/raw/documents/R4400_Uman_boo...


Having read just to that point in the book, it seems this information is a bit old. Modern x86 processors have microcode ROMs on die, so the performance penalty is near zero.


A bit off topic, but I've always wondered why some refer to the language as 'assembler' whereas others call it 'assembly'. I always thought the assembler is the name of the thing that reads in your assembly code and turns it into a machine binary. Is there some historical significance to calling it 'assembler'?


I'm not old enough to know, but my theory is that things in the programming world historically came from the computer architecture world. Computer architecture uses stacks, interrupts (events), and other such constructs in the physical layout of the chip. When people were done with the hardware, they created software with similarities to the hardware and gave the concepts the same names. When people then built higher-level software on top of lower-level software, they saw similarities there as well and carried the terms forward.


Calling the language "assembler" might be considered potentially confusing and ambiguous, since this is also the name of the utility program that translates assembly language statements into machine code. However, this usage has been common among professionals and in the literature for decades:

http://en.wikipedia.org/wiki/Assembler_(computing)#Assembler


I think it's called an 'assembler language' because it's the language you use to program your assembler (specifying what to emit into the binary).


I am still unclear after reading the article.

What is the exact interface where a software change leads to a change in some physical world state?

For instance, logically storing X in some register is going to lead to some particular configuration of electrical signals in the CPU. I still don't see the exact point where that happens.

How do the two intersect?


I highly recommend the book, and the course, http://www.nand2tetris.org/ (previously known as The Elements of Computing Systems or TECS).

It starts with a discussion of NAND gates and goes on through building CPU components, to a complete computer, to implementing a high-level language for it.

(All the chapters from the book relevant to your question are available as free PDFs.)
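For the "exact point where software meets physics" question above, here is a toy C model in the spirit of that book's hardware simulator (the encoding is invented): the fetched instruction's bits are literally voltage levels on wires, and decoding them routes those levels to a register's load-enable line, so the next clock edge latches a new value into its flip-flops:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint8_t value;       /* the flip-flops' stored state */
        bool    load_enable; /* control line computed from instruction bits */
    } reg8;

    static void clock_edge(reg8 *r, uint8_t data_bus) {
        if (r->load_enable)  /* physically: gates the bus into the latches */
            r->value = data_bus;
    }

    int main(void) {
        /* Made-up encoding: top 2 bits pick a register, low 6 an immediate. */
        uint8_t instruction = 0x4A; /* 01 001010 => "load R1 with 10" */
        reg8 regs[4] = {{0, false}};

        uint8_t dest = instruction >> 6;  /* combinational decode...   */
        regs[dest].load_enable = true;    /* ...drives one enable wire */
        clock_edge(&regs[dest], instruction & 0x3F);

        printf("R%u = %u\n", (unsigned)dest, (unsigned)regs[dest].value);
        return 0;                         /* prints: R1 = 10 */
    }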


... music, movies, and pizza delivery?



