
Reverse-engineering the adder inside the Intel 8086
http://www.righto.com/2020/08/reverse-engineering-adder-inside-intel.html
======
jecel
> The 68000 had address adders, separate from the ALU.

It actually had a 16-bit "complex ALU" and two 16-bit adders (or the other way
around; I can't get to my old documents in my office). The ALU was for the D
registers, one of the adders was for the A registers, and the last one could be
shared by both to add bits 16 to 31.

You could do a one-cycle addition of an address in parallel with a one-cycle
ALU operation on 16 bits of a data register (very common), or take two cycles
for 32 bits. You could also do a 32-bit add on a data register in a single
cycle if you were not also calculating an address. So you could crunch 48 bits
per clock.
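A quick sketch of the split-adder scheme the comment describes: a 32-bit add
performed as two 16-bit adds, with the carry from the low half feeding the
upper-half adder. This is my own illustrative model, not actual 68000
microarchitecture documentation.

```python
MASK16 = 0xFFFF

def add32_via_two_16bit_adders(x: int, y: int) -> int:
    """Model a 32-bit add as two chained 16-bit adds, the way the
    comment describes the 68000 sharing an adder for bits 16-31."""
    lo = (x & MASK16) + (y & MASK16)        # low 16-bit adder
    carry = lo >> 16                        # carry into the upper half
    hi = (x >> 16) + (y >> 16) + carry      # shared upper-half adder
    return ((hi & MASK16) << 16) | (lo & MASK16)

# Carry propagates from the low half into the high half:
assert add32_via_two_16bit_adders(0x0001FFFF, 0x00000001) == 0x00020000
```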

------
egsmi
It’s surprising to me that the address adder and the ALU are so comparable
in area. I’m guessing the adder is roughly 1/3 the size of the ALU even though
the ALU has so much more functionality.

My guess is that the flip-flops needed to register the operands and result
comprise a significant chunk of the area in both, and the extra 2/3 of the
ALU's area is the combinational logic that implements all the extra math
functions.

------
supernova87a
I'm curious to know whether (if you live in the Silicon Valley area) you have
gotten access to people who could share the original intent behind the design
of the circuits you study? I.e., to see how close you got, or whether you
missed any observations? It would be interesting to learn how they actually
intended the design to work and why it looks the way it does, compared to what
you found!

Or is the idea really more to see what can be discerned by someone receiving
the chip and trying to sleuth out what connections and parts do what?

------
jhallenworld
I'm amazed at the large fraction of the die area that the adder takes.

~~~
kens
Yes, it gives you an idea of the simple building blocks that make up the 8086.
In a modern processor, the blocks on the die are a floating point unit or
something. But on the 8086, the blocks are things like an adder or latch.

------
Stierlitz
> multiple offset/segment combinations could address the same physical memory
> address.

Would this have anything to do with memory exploits on the entire x86 range of
processors?
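For reference, the aliasing the quoted article mentions falls out of how the
8086 forms a 20-bit physical address: segment * 16 + offset. A minimal sketch
(the function name is mine) showing two different segment:offset pairs hitting
the same physical byte:

```python
def physical_address(segment: int, offset: int) -> int:
    """8086 real-mode address calculation: 20-bit result,
    wrapping at 1 MB as the 20-bit address bus did."""
    return ((segment << 4) + offset) & 0xFFFFF

# Different segment:offset pairs, same physical address:
a = physical_address(0x1234, 0x0005)
b = physical_address(0x1230, 0x0045)
assert a == b == 0x12345
```

Any given physical address below 1 MB can be reached by thousands of such
pairs, since the segment contributes in overlapping 16-byte steps.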

------
kens
Author here, if anyone has questions.

~~~
ncmncm
Has enough time passed yet to acknowledge that the x86 ISA is a really bad
design?

~~~
kens
The funny thing is that the Intel 8800 (iAPX 432) was supposed to be Intel's
big new thing, a 32-bit processor designed for object-oriented languages.
Unfortunately, that project fell behind schedule and Intel quickly threw
together a chip, the 8086, so they'd have something to sell until the iAPX 432
was ready.

Unexpectedly, the 8086 took over the world and the iAPX 432 was a failure. The
bad design that shipped beat the good design that was late and slow. Intel
repeated this with the Itanium, their super new design that was supposed to
take over in 2001. Again, the superior design was slow, late, and a market
failure.

I'm not sure what the moral of this is. Maybe it illustrates the "worse is
better" principle.

~~~
jhallenworld
Itanium 2 and beyond were fast, but no argument that it failed. Even so, I
read that HP took final orders for Itanium systems this year.

SPEC benchmarks from 2003:

    CINT2000
    Itanium 2 1.5 GHz - 1322
    Xeon 3.06 GHz - 1242
    Opteron 1.8 GHz - 1095

    CFP2000
    Itanium 2 1.5 GHz - 2119
    Xeon 3.06 GHz - 1173
    Opteron 1.8 GHz - 1122

~~~
ncmncm
AMD64 caught up and passed it when?

