
Why the Z-80's data pins are scrambled - zdw
http://www.righto.com/2014/09/why-z-80s-data-pins-are-scrambled.html
======
userbinator
_The motivation behind splitting the data bus is to allow the chip to perform
activities in parallel. For instance an instruction can be read from the data
pins into the instruction logic at the same time that data is being copied
between the ALU and registers._

Essentially pipelining, several years before the RISC movement popularised it?
Could the Z80 have been one of the first pipelined single-chip CPUs?

That was a very interesting article. I've tried staring at the Visual6502 chip
images for a long time, and although I understand the principles behind how
diffusion/polysilicon/metal layers are put together to form transistors, for
some reason I feel absolutely lost trying to follow the connections and find
the borders between the regions especially when one layer is hidden beneath
another.

Even looking at the NOR gate with its layout side-by-side I can't see much
beyond the metal layer, despite it being partially transparent. I have no
problems with transistor-level schematics, however. Is there some sort of
trick to being able to easily read and follow the circuitry in die images and
layout-level diagrams? It's like some people can read these and visualise/draw
the schematic immediately.

~~~
vidarh
The 6502 also has a similar 1-stage pipeline, and the concept was already old
at that point, though not used much commercially outside of Cray's designs for
CDC.

------
ohazi
There was a JPL probe years ago (can't remember which, and can't seem to find
a reference) that had a radiation hardened memory IC with error correcting
codes and a system to detect and correct the bit flips that were expected due
to cosmic rays.

After launch, the number of unrecoverable errors (due to multiple bits flipped
within the same codeword) was higher than expected. It turned out that someone
had swapped some combination of address or data lines, which ended up changing
the physical grouping of bits within the codewords. Some of the bits within a
logical codeword were so close together that a single event was able to flip both
of them, causing the error correction to fail.
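That failure mode is easy to reproduce with a toy ECC. A minimal sketch using
an extended Hamming(8,4) SECDED code (not the probe's actual code, which isn't
identified above): one flipped bit is corrected, but two flips inside the same
codeword are only detected, never fixed, which is why physically adjacent bits
in one codeword are so dangerous.

```python
def encode(nibble):
    """Extended Hamming(8,4): 4 data bits -> 8-bit codeword."""
    d = [(nibble >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    word = [p1, p2, d[0], p3, d[1], d[2], d[3]]  # Hamming positions 1..7
    overall = 0
    for b in word:
        overall ^= b
    return word + [overall]  # position 8: overall parity, for double-error detection

def decode(word):
    syndrome = 0
    for pos in range(1, 8):
        if word[pos - 1]:
            syndrome ^= pos  # XOR of the positions holding a 1
    overall = 0
    for b in word:
        overall ^= b
    if syndrome == 0 and overall == 0:
        return "ok"
    if overall == 1:
        return "corrected"       # single flip: the syndrome names the bad bit
    return "uncorrectable"       # two flips in one codeword: detected, not fixable
```

A single cosmic-ray event hitting two neighboring cells of the same codeword
lands you in the "uncorrectable" branch every time.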

------
analog31
Back in the days of hand-made printed circuits, I randomly assigned both data
and address pins on a microprocessor circuit, and got everything onto a single
side with just one or two jumpers.

I felt so clever. Then I remembered that the program in the ROM assumed a
particular bit numbering, literally while my board was bubbling away in the
ferric chloride. Oops.

Rather than re-design the board, I thought about writing a program to
rearrange my binaries, or make a socket adapter for the EEPROM programmer. The
socket adapter won out.
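The "rearrange my binaries" option is just a bit-permutation pass over the ROM
image. A hypothetical sketch (the pin maps below are made up for illustration,
not the actual wiring of that board):

```python
# Made-up wiring: CPU data bit i is connected to ROM data bit DATA_MAP[i],
# and CPU address line i to ROM address line ADDR_MAP[i] (8-bit bus/address
# for brevity). Both must be permutations.
DATA_MAP = [3, 1, 0, 2, 7, 6, 4, 5]
ADDR_MAP = [0, 2, 1, 3, 4, 5, 6, 7]

def permute_bits(value, bit_map):
    """Move bit i of `value` to position bit_map[i]."""
    out = 0
    for i, j in enumerate(bit_map):
        if value & (1 << i):
            out |= 1 << j
    return out

def rearrange(rom):
    """Permute the image so the scrambled wiring reads back correctly:
    the byte meant for address a is stored where the ROM will actually
    look, with its bits placed on the wires the CPU will actually read."""
    fixed = bytearray(len(rom))
    for addr, byte in enumerate(rom):
        fixed[permute_bits(addr, ADDR_MAP)] = permute_bits(byte, DATA_MAP)
    return bytes(fixed)
```

Burn the rearranged image and the scrambled traces deliver exactly the bytes
the original program expects, which is what the socket adapter achieved in
hardware instead.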

~~~
kens
That's kind of how the TMS1000 microcontroller (used in the Speak n Spell)
works. Instead of incrementing the program counter on each instruction (like
every normal processor), they saved a few gates by using a linear feedback
shift register. The result is the program counter goes through a pseudo-random
but predictable sequence. So they just program the code into the ROM in the
same sequence and everything works just fine. (Some day I'll write a blog post
about this, since it's interesting to look at the silicon that does this.)
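The trick is easy to demo with a generic maximal-length LFSR (the actual
TMS1000 feedback polynomial may differ; this one is a standard 6-bit choice,
x^6 + x^5 + 1):

```python
def lfsr_sequence(start=0b000001, taps=(5, 4)):
    """Yield the 63-state cycle of a 6-bit Fibonacci LFSR.
    The sequence is pseudo-random but completely fixed, so it works
    as a program counter as long as the ROM is filled in the same order."""
    state = start
    for _ in range(63):
        yield state
        fb = ((state >> taps[0]) ^ (state >> taps[1])) & 1  # feedback bit
        state = ((state << 1) | fb) & 0x3F                  # shift, keep 6 bits

order = list(lfsr_sequence())
# order[n] is the ROM address holding the n-th instruction executed:
# the assembler scrambles the code to match, and the CPU never notices.
```

An LFSR is just a shift register plus one XOR gate, versus a 6-bit carry chain
for an incrementer, which is the gate saving in question.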

~~~
yzzxy
This is a very weird and specific question I know, but I figured I'd ask
anyway because you seem to have experience with this chip.

In 'Halt and Catch Fire,'[0] one of the characters loads his children's names
onto a Speak n Spell's memory. He's portrayed as a very talented engineer, and
a lot of the show seems to be pretty true to the tech.

My question is, would this be possible for someone with a lot of patience,
experience, and a home workshop at that point, or is it an apocryphal story?

[0] A TV series about an early PC startup in the 80s

~~~
kens
I'd say it's possible but unlikely that someone reprogrammed their Speak n
Spell. The first tricky thing is the TMS5100 voice synthesis chip uses a
complex LPC-10 encoding to encode the sound with a very low bit rate (1100
bits/sec). Basically it's modeling the filter characteristics of the vocal
tract. So the first problem is you have to convert your audio signal into this
representation, which is going to be really, really hard unless you have
access to the TI system that does this conversion.

The second problem is the speech data is stored in a TMS6100 ROM which is kind
of a strange chip: the 14-bit address is loaded 4 bits at a time, and then the
ROM steps sequentially through memory from there. The point is that you can't
reprogram this chip (since it's a ROM), and emulating it with a standard EPROM
would be a big pain.
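To get a feel for why emulating it is a pain, here's a toy model of that
load-then-stream addressing. The shift order and strobe count here are my
guesses for illustration, not taken from the TMS6100 datasheet:

```python
class NibbleLoadedRom:
    """Toy model: a 14-bit address shifted in 4 bits at a time over a
    narrow bus, after which reads step sequentially through memory."""

    def __init__(self, data):
        self.data = data
        self.addr = 0

    def load_nibble(self, nib):
        # each load strobe shifts 4 more address bits in; keep 14 bits
        self.addr = ((self.addr << 4) | (nib & 0xF)) & 0x3FFF

    def read(self, n):
        # once the address is loaded, the chip just streams from there on
        out = self.data[self.addr:self.addr + n]
        self.addr += n
        return out
```

A standard EPROM expects the whole address in parallel on dedicated pins, so
replacing this chip means building glue logic that speaks this load-and-stream
protocol, which is why the hack below needed a CPLD.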

I should point out that I don't have firsthand experience with these chips
(apart from using a Speak n Spell years ago). But I happen to have been
studying them in detail a couple weeks ago for random reasons.

For more information on this chipset, the datasheets are at
[http://www.datasheet-pdf.com/datasheet-
html/T/M/S/TMS5100_Te...](http://www.datasheet-pdf.com/datasheet-
html/T/M/S/TMS5100_TexasInstruments.pdf.html) and
[http://www.ti99.com/exelvision/website/telechargement/tms610...](http://www.ti99.com/exelvision/website/telechargement/tms6100-80-data-
manual.pdf)

I just did a search and found someone who hacked new words into a Speak n
Spell a couple of years ago. But he needed to use a CPLD (like an FPGA) to
simulate the ROM, and a Windows LPC encoding program, so this wouldn't have
been possible in the 80s.
[http://furrtek.free.fr/index.php?a=speakandspell&ss=1&i=2](http://furrtek.free.fr/index.php?a=speakandspell&ss=1&i=2)

~~~
dgalloway
Looked it up, and that show is set in 1983, so it does seem rather implausible
but not impossible, of course. :) Here's a good run-through using the Windows
LPC encoding program.
[http://www.youtube.com/watch?gl=GB&v=wVDE-6TtmFQ](http://www.youtube.com/watch?gl=GB&v=wVDE-6TtmFQ)

------
ableal
> I have been reverse-engineering the Z-80 processor using images and data
> from the Visual 6502 team.

Many moons ago, I heard a seventh-hand rumor that the guy doing the layout of
the Z-80 chip had a nervous breakdown, because of the difficulty of the work.

I have no idea if there was any truth whatsoever to that, but I'm glad to find
it's not a Langford Blit thing which maddens those who see it ;-)

~~~
kens
The Computer History Museum's oral history of the Z-80 has some interesting
stories about the layout (but doesn't mention any nervous breakdown). The Z-80
project hired some layout draftsmen, but they were slow and the chip was
running behind schedule. So CEO Federico Faggin starts helping with the layout
and ends up doing 3/4 of the chip layout himself after 3 1/2 months of 80-hour
weeks. "You know, a CEO doing layout draftsman job was not something that
would be normal, but that’s what I had to do, and I did it." By comparison,
the simpler 8080 took six months of layout.

[http://archive.computerhistory.org/resources/text/Oral_Histo...](http://archive.computerhistory.org/resources/text/Oral_History/Zilog_Z80/102658073.05.01.pdf)

------
raverbashing
So, I'm thinking:

To connect a memory chip, I think you just don't care and can swap the lines
as you want (as long as you connect the CPU's 8 data pins to the 8 data pins
on the memory).

For IO you do care, of course, or you "just" shuffle all the data you want to
write (which is a sure way of making someone go crazy)
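That intuition checks out for plain RAM: a write and a later read traverse the
same wires, so the permutation cancels itself. A quick sketch with a made-up
wiring:

```python
# Arbitrary made-up wiring: CPU data bit i is connected to RAM data bit SWAP[i].
SWAP = [2, 0, 1, 4, 3, 7, 5, 6]

def write_through_wires(byte):
    """CPU drives the bus: CPU bit i lands on RAM bit SWAP[i]."""
    out = 0
    for i in range(8):
        if byte & (1 << i):
            out |= 1 << SWAP[i]
    return out

def read_through_wires(stored):
    """RAM drives the bus: CPU bit i sees RAM bit SWAP[i]."""
    out = 0
    for i in range(8):
        if stored & (1 << SWAP[i]):
            out |= 1 << i
    return out

# Every byte survives the round trip, whatever the permutation:
for b in range(256):
    assert read_through_wires(write_through_wires(b)) == b
```

The stored bits are scrambled inside the RAM, but the CPU can never tell.
Anything with a defined bit meaning on the device side (IO registers, or a ROM
programmed externally) breaks this symmetry.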

~~~
analog31
For old-fashioned RAM and ROM chips you'd be OK, and you're generally OK with
data pins.

Address pins are another matter. The Z80 had a built-in dynamic RAM refresh
circuit, and there is some schtick about which addresses are refreshed as a
group (rows or columns, I forget which). So rearranging the address bus might
result in some nasty surprises. And it might get even more interesting with
more modern memory devices, which are way over my head.

On a whim, I got the Howard Sams book on the Z80 while I was in high school,
around 1981, and I devoured it.

~~~
tryp
Modern DDR2 SDRAM buses are a bit more involved. They use the address lines
as a command word for putting the chips into the correct "link training" mode
at startup, selecting burst access lengths, enabling self-refresh mode,
setting on-die termination values, et cetera, so they may not be swapped. Each
"byte lane" of 8 data lines is allowed to have a different signal path length
difference between clock and data (which is measured during training and
compensated for during operation), and signals may be swapped arbitrarily
within a byte lane.

Furthermore, the high-performance DDR3+ controllers typically hash the data
word with the address so that when a repetitive data stream is transmitted it
doesn't generate more EMI. (Some controllers also hash with a random seed,
gaining resilience against chilling the DIMMs of a running machine and reading
them out on another machine in search of sensitive data.)
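The scrambling idea reduces to XORing each word with a pseudo-random mask
derived from its address (plus an optional per-boot seed). A toy model, with
an arbitrary xorshift mask generator rather than any real controller's
polynomial:

```python
def mask_for(addr, seed=0):
    """Cheap deterministic mask from (address, seed), via xorshift32."""
    x = (addr ^ seed) & 0xFFFFFFFF
    x ^= (x << 13) & 0xFFFFFFFF
    x ^= x >> 17
    x ^= (x << 5) & 0xFFFFFFFF
    return x

def scramble(word, addr, seed=0):
    # XOR is its own inverse: the same call scrambles on write
    # and descrambles on read, as long as the seed matches.
    return word ^ mask_for(addr, seed)
```

A stream of identical words (say, all zeros) becomes address-dependent on the
wire, spreading its spectrum instead of hammering one pattern; and without the
boot-time seed, a chilled DIMM read out elsewhere descrambles to noise.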

I find it really cool that any time you change the DIMMs in your computer, it
essentially has to measure the length of the wires to the ICs on it. (I've
found it less cool to have to manipulate timing values to compensate for
deviation from PCB design rules, but thankful that it's possible. The fun of
board bring-up.) If your BIOS has a "fast boot" option, mostly that means it
remembers the wire lengths from last time so it doesn't have to do the
measurement again every boot.

~~~
PhantomGremlin
In the "good old days" it was certainly possible to do board design using 2x
or 4x sized Bishop Graphics tape for lines, vias, etc. You applied them to
transparent mylar sheets corresponding to board layers.

But now, sheesh! You need to carefully constrain the PCB CAD program so that
all the lines match to within 0.1" or less. And, as you mention, that's just
the tip of the design iceberg.

It's no longer possible to lay out computers at low cost in a garage. Oh, well.
Now hipsters sit around in open offices in SOHO and create silly apps.

~~~
yuhong
[http://electronics.stackexchange.com/questions/13372/how-
to-...](http://electronics.stackexchange.com/questions/13372/how-to-build-a-
custom-laptop-computer-with-original-chassis-keyboard-etc/13374)

Notice the reason why! In fact, I think the idea of a startup producing
laptops targeted at developers has been mentioned before here on HN.

~~~
fnordfnordfnord
[http://www.bunniestudios.com/blog/?tag=novena](http://www.bunniestudios.com/blog/?tag=novena)

~~~
yuhong
Not the kind of laptop the link I posted is talking about, which is x86-based.
And I found the HN comment where the idea was mentioned BTW:
[https://news.ycombinator.com/item?id=7079053](https://news.ycombinator.com/item?id=7079053)

------
joezydeco
Amazing analysis. Another reminder we're all standing on the shoulders of
giants every time we whip out our phones carrying billions upon billions of
gates...

~~~
rbanffy
It's giants all the way down.

------
bluecmd
While laying out a full-custom SPI bus for an IC I had to do the same thing.
In that case it simply made sense to sacrifice the programming model for
layout simplicity.

