Reverse-engineering the 8086 processor's address and data pin circuits (righto.com)
127 points by picture 10 months ago | hide | past | favorite | 20 comments



I’m going to leave smarter inquiries to those more qualified to raise them, but I wanted to take a moment to express my admiration and gratitude for your work in preserving, expanding, and advancing our collective knowledge and understanding of low-level computing and systems architecture. Hats off to you, it’s been incredibly rewarding to watch your work.


Thanks! I'm trying to do what I can to preserve this history and knowledge.


Why is it up to reverse engineering to preserve this history and knowledge? I wonder if the original engineers are still alive to help, and doesn't Intel have this information locked away somewhere?


Great article as always! I especially like the section on "A historical look at pins and timing".

Unless I'm misunderstanding the terminology, there may be an error in the discussion of the shift/crossover circuit. And a minor typo...

> The [buses] can be connected in three ways: direct, crossed-over, or swapped.

> The "direct" mode connects the 16 bits of the C bus to the lower 16 bits of the address/data pins.

> The second mode performs the same connection but swaps the bytes.

> The final mode shifts the 20-bit AD bus value four positions to the right.

It sounds like those two modes got swapped? ;-)

For clarity, I think I would change the order in the introduction and use the mode names instead of "second" and "final":

> The buses can be connected in three ways: direct, swapped, or crossed-over.

> The "direct" mode connects the 16 bits of the C bus to the lower 16 bits of the address/data pins.

> The "swapped" mode performs the same connection but swaps the bytes.

> The "crossed-over" mode shifts the 20-bit AD bus value four positions to the right.
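The three modes in the corrected wording above can be sketched as bit manipulations. This is just an illustrative model, not the actual hardware logic; the function name and interface are invented for the example:

```python
def connect(mode, c_bus, ad_bus):
    """Toy model of the three C-bus/AD-pin connection modes.

    c_bus:  16-bit value on the internal C bus
    ad_bus: 20-bit value on the address/data (AD) bus
    Returns the resulting 20-bit value on the address/data pins.
    """
    if mode == "direct":
        # C bus drives the lower 16 bits of the address/data pins;
        # the upper 4 bits are unchanged.
        return (ad_bus & 0xF0000) | (c_bus & 0xFFFF)
    elif mode == "swapped":
        # Same connection, but with the two bytes of the C bus exchanged.
        swapped = ((c_bus & 0xFF) << 8) | ((c_bus >> 8) & 0xFF)
        return (ad_bus & 0xF0000) | swapped
    elif mode == "crossed-over":
        # Shift the 20-bit AD bus value four positions to the right,
        # as used for the 8086's shifted segment arithmetic.
        return ad_bus >> 4
    raise ValueError(mode)
```

For example, `connect("crossed-over", 0, 0x12345)` yields `0x01234`, the four-bit right shift used when combining segment and offset.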


Thanks, I've fixed that section since I kind of mangled it.


How far are we away from a true transistor or gate-level simulation of 8086/8088?


I have a transistor-level 8086 simulator that mostly works but needs some cleanup and bug fixes. For now, I'm concentrating on analysis of the 8086 rather than finishing the simulator.


About how many cycles per second are you getting in the simulated 8086?


I haven't really timed it let alone optimized it, but about 50 clock cycles per second. If I had a nice graphical display like Visual 6502, it would be way slower.


That's cool! Is it helpful for your analysis?


Yes, the simulator is extremely helpful for analysis. I started doing the analysis on paper, but it's very easy to make a mistake and end up confused. The simulator is also very helpful when trying to understand complicated state machines such as the bus control circuitry. So I plan to put more emphasis on simulation for future projects. The tradeoff is that it takes a lot more time up front to get the simulation working.


Your simulation, if developed to a good working state, would make an excellent tool for building an FPGA implementation.


Author here if anyone has questions.


Thanks for the blog, I read every post.

I'm curious about the circuitry that drives the difference between "minimum" and "maximum" modes from the processor - another Intel pin-saving strategy. Is there basically an 8288 sitting in a corner of the 8086 die, or are things more complicated than that?


There's not really much to the minimum and maximum modes. You can think of it as multiplexers on the relevant pins, selecting the signal for the appropriate mode. (Of course, the logic isn't quite that clean but that's the basic idea.)
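The multiplexer idea described above can be sketched in a few lines. This is a toy model with invented names, not the 8086's actual gate logic (which, as noted, isn't this clean):

```python
def drive_pin(mn_mx, min_mode_signal, max_mode_signal):
    """Toy model: the MN/MX# input selects which internal signal
    a shared pin carries. MN/MX# high selects minimum mode."""
    return min_mode_signal if mn_mx else max_mode_signal

# Example: a pin that carries WR# in minimum mode and LOCK# in
# maximum mode would follow the WR# signal when MN/MX# is tied high.
```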


Thanks. I guess what I'm wondering is whether the maximum-mode signals (S0, S1, etc.) are somehow more fundamental to the 8086's operation, with the minimum-mode signals (WR, INTA, ALE, etc.) being derived from them solely for the benefit of people without an 8288?

Put another way, if you could, recursively, delete all the gates in the 8086 which only exist to drive minimum-mode pins, would any remnant of the minimum-mode signals remain for other internal uses?


I haven't looked at the signals from that perspective, but I'd say that both sets of signals are derived from various internal signals that are more fundamental.


Great article, thank you so much for taking the time to write all your explorations down. With regard to the mentioned patent (US4449184A): it turns out I am terrible at reading patents. Any explanation of what it is for?

My best guess is it's for placing memory or state between the registers and the processing units. Or perhaps some subtlety of this, since registers are already memory or state. I could not figure it out.

Modern patents are especially prone to this. They try to claim as much as possible while avoiding publishing any trade secrets. The result tends toward an unreadable mess.


Thank you so much for your work! Always great fun to read your blog.


Two years ago I read this post: https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-i...

where they mention that the attackers went ahead and built their own simulator of a CPU as part of their attack vector. Do you know what they emulated, and would you have more details on that specific hack?

I think the authors promised to come back with a part 2 eventually, but so far they haven't.



