
Here's to the systems programmers – Writing Hello World on a home brew CPU - jackqu7
https://medium.com/@jackqu7/heres-to-the-systems-programmers-cba40a41b608
======
nickpsecurity
It sounds cool and interesting but has almost no detail. Come on, Jack! Give
us some details, RTL, code, board schematics, whatever.

@ All

This is how you do homebrew CPUs, and how you present them, best:

[http://www.homebrewcpu.com/](http://www.homebrewcpu.com/)

Links to others for anyone wanting to learn about this stuff:

[http://www.homebrewcpu.com/links.htm](http://www.homebrewcpu.com/links.htm)

~~~
jackqu7
Magic 1 was what inspired me to start the project, and I continue to be in awe
of the ambition of Bill's work. Watch this space for a more technically
focused post.

~~~
nickpsecurity
It awed me as well. Plus, the performance and such gave me hope that homebrew
might be an answer for non-subverted computers post-Snowden. At least for the
bootstrapping phase of other computers, SCM/builds, or key storage.

Only thing was that a HW guy told me the TTL chips he used might be hard to
come by. I looked them up and had trouble finding them. So, rather than
cloned, the next Magic 1 should be ported to use components currently
available... preferably with a 10-year horizon. Also, ideally something fabbed
at 0.35 microns or above for visual inspection. Plus realism, as we're
practically cheating if we're using deep submicron for "hand-built,
old-school" systems.

What kinds of primitive components are you using, or have you found to be
consistently available?

------
blastrat
I really wish he'd document even the barest scraps of the CPU architecture. Is
it microcoded? There's 7400-series like the 7402 or 7473 (plain gates and
flip-flops), and there's 7400-series like the 74154, 74181, etc. (whole
decoders and ALU slices).

"hey kid, I built one of these too, back in the day, and it was damn hard and
time consuming, but oh a labor of love. I had no internet, nor any schooling.
I read articles in Popular Electronics and I read the TI 7400 series databook.
Then I reread them. Then I reread them." Yes, in fact it was snowing, uphill,
and I in fact had no shoes!

~~~
vvanders
That reminds me of my favorite project of all time, the 7400 FPGA:

[http://blog.notdot.net/2012/10/Build-your-own-FPGA](http://blog.notdot.net/2012/10/Build-your-own-FPGA)

~~~
nickpsecurity
Never saw that one. Wild stuff. Here's the latest on open FPGAs:

[http://www.eecs.berkeley.edu/Pubs/TechRpts/2014/EECS-2014-43...](http://www.eecs.berkeley.edu/Pubs/TechRpts/2014/EECS-2014-43.pdf)

------
outworlder
> Built from simple logic chips, it runs at a blistering 4MHz, has 32kb of RAM

I know that the author was being sarcastic, but really, 4MHz with 32kb of RAM
is like state-of-the-art '80s technology. The fact that we are even able to
get to this point with protoboards and discrete off-the-shelf logic chips is
impressive.

I mean, I learned how to program back in 1988 on a computer not much more
powerful than that.

------
jhallenworld
I wrote Hello World for my relay computer - only 27 lines (uses self-modifying
code, same as Jack!):

[http://relaysbc.sourceforge.net/example-code.html](http://relaysbc.sourceforge.net/example-code.html)
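
For anyone curious what the trick looks like, here's a rough sketch in Python
of a hypothetical toy accumulator machine (not the relay computer's real
instruction set): with no indexed addressing, the print loop patches the
address operand of its own LOAD instruction each pass.

    # Toy self-modifying "Hello World", assuming a made-up accumulator machine.
    # INCOP rewrites the LOAD instruction in memory so each pass fetches the
    # next character. Illustration only, not the relay computer's ISA.
    MEM = {}

    for i, ch in enumerate("Hello World"):
        MEM[100 + i] = ord(ch)          # string data at address 100
    MEM[100 + len("Hello World")] = 0   # terminator

    MEM[0] = ("LOAD", 100)    # acc = MEM[operand]  <- operand gets patched
    MEM[1] = ("JZ", 5)        # if acc == 0, jump to HALT
    MEM[2] = ("OUT", None)    # print chr(acc)
    MEM[3] = ("INCOP", 0)     # self-modification: bump LOAD's operand
    MEM[4] = ("JMP", 0)       # loop
    MEM[5] = ("HALT", None)

    pc, acc = 0, 0
    while True:
        op, arg = MEM[pc]
        pc += 1
        if op == "LOAD":
            acc = MEM[arg]
        elif op == "JZ" and acc == 0:
            pc = arg
        elif op == "OUT":
            print(chr(acc), end="")
        elif op == "INCOP":
            o, a = MEM[arg]
            MEM[arg] = (o, a + 1)       # rewrite the instruction in place
        elif op == "JMP":
            pc = arg
        elif op == "HALT":
            print()
            break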

------
ntumlin
Any idea where I could go about learning to do this myself? A few Google
searches mainly led me to things saying it was impossible.

~~~
Luc
I think you'll like this then:
[http://www.nand2tetris.org/](http://www.nand2tetris.org/)

~~~
dang
It's a wonderful, in fact an astonishing course. I hope that when they come
out with part 2, it really does go all the way to Tetris. The "nand" bit is a
slight fudge (they actually start you off with both nand and a flip-flop,
which is still amazing), but we can grant them a bit of poetic license on
that. But not on the Tetris!

~~~
nickpsecurity
I wonder if this had something to do with someone I countered a day or two ago
who thought all you needed was a NAND gate to do anything. I said you need
memory and some other things for most hardware. Maybe that commenter got the
idea from this project's name...

~~~
andars
Note that you can in fact create memory with NAND gates. See
[https://en.wikipedia.org/wiki/Flip-flop_(electronics)#SR_NAND_latch](https://en.wikipedia.org/wiki/Flip-flop_\(electronics\)#SR_NAND_latch)
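
If you want to convince yourself, here's a quick gate-level sketch in Python
(just the two cross-coupled NANDs from that page; set/reset inputs are
active-low):

    # SR latch from two cross-coupled NAND gates. Pull S low to set, R low to
    # reset; with both inputs high the latch holds whatever it was last told.
    def nand(a, b):
        return 0 if (a and b) else 1

    def latch(s_n, r_n, q, q_n):
        for _ in range(4):              # let the cross-coupled pair settle
            q, q_n = nand(s_n, q_n), nand(r_n, q)
        return q, q_n

    q, q_n = 0, 1
    q, q_n = latch(0, 1, q, q_n); print(q, q_n)   # set   -> 1 0
    q, q_n = latch(1, 1, q, q_n); print(q, q_n)   # hold  -> 1 0 (remembered)
    q, q_n = latch(1, 0, q, q_n); print(q, q_n)   # reset -> 0 1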

~~~
nickpsecurity
That's neat. Let's test the rest then. Full claim was a computer needed logic
cells, RAM, ROM/flash, and analog components (e.g. I/O & power-related). Still
only need a NAND gate? ;)

~~~
andars
I'll give it a shot :)

Logic cells: easy, by universality of NAND.

RAM: the distinction between flip-flops and RAM is unnecessary from a strictly
technical standpoint. A massive (or not so much) array of flip-flops with a
decoder and mux (also just made of NANDs) can get you by (see the sketch
below).

ROM/flash: maybe I could argue NAND flash counts as just a NAND gate, but not
quite. I'll concede that nonvolatile memory can't be done with just plain old
NANDs. May I suggest a
[https://en.wikipedia.org/wiki/Diode_matrix](https://en.wikipedia.org/wiki/Diode_matrix)?

"Analog components": having Vcc and ground available is an implicit
requirement of having a functional NAND gate, and I/O could just be some wires
mirroring a memory location.

Overall I'd say it's a largely accurate statement. All you need is something
to set up your initial conditions (e.g. a program in memory).
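
To make the RAM point concrete, here's a toy sketch in Python where everything
bottoms out in NAND (NOT, AND and OR are derived from it, gated D latches hold
the bits, plus a 2-to-4 address decoder and a read mux). Sizes are made up for
illustration:

    # 4-word x 4-bit RAM built, gate-for-gate, out of NANDs.
    def NAND(a, b): return 0 if (a and b) else 1
    def NOT(a):     return NAND(a, a)
    def AND(a, b):  return NOT(NAND(a, b))
    def OR(a, b):   return NAND(NOT(a), NOT(b))

    class DLatch:                        # gated D latch: four NANDs
        def __init__(self):
            self.q, self.q_n = 0, 1
        def tick(self, d, en):
            s_n, r_n = NAND(d, en), NAND(NOT(d), en)
            for _ in range(4):           # settle the cross-coupled pair
                self.q, self.q_n = NAND(s_n, self.q_n), NAND(r_n, self.q)

    def decode2(a1, a0):                 # 2-to-4 one-hot address decoder
        return [AND(NOT(a1), NOT(a0)), AND(NOT(a1), a0),
                AND(a1, NOT(a0)),      AND(a1, a0)]

    class NandRAM:
        def __init__(self, words=4, bits=4):
            self.cells = [[DLatch() for _ in range(bits)] for _ in range(words)]
        def write(self, a1, a0, data, we):
            sel = decode2(a1, a0)
            for w, row in enumerate(self.cells):
                en = AND(sel[w], we)     # only the addressed word latches
                for b, cell in enumerate(row):
                    cell.tick(data[b], en)
        def read(self, a1, a0):
            sel = decode2(a1, a0)
            out = []
            for b in range(len(self.cells[0])):
                bit = 0                  # read mux: OR of (select AND bit)
                for w, row in enumerate(self.cells):
                    bit = OR(bit, AND(sel[w], row[b].q))
                out.append(bit)
            return out

    ram = NandRAM()
    ram.write(1, 0, [1, 0, 1, 1], we=1)  # store 1011 at address 2
    print(ram.read(1, 0))                # [1, 0, 1, 1]
    print(ram.read(0, 0))                # [0, 0, 0, 0]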

~~~
cbd1984
We've forgotten how hard a problem RAM used to be, and how far into the
Rube Goldberg we went to solve it.

Things We Actually Did:

Mercury delay lines were tubes full of mercury we sent acoustic pulses
through. The pulses were picked up at the far end, regenerated, and sent
around again, so the bits could be read out later. This wasn't, technically,
random-access, but it was memory.

Williams-Kilburn tubes were CRTs with _long-persistence_ phosphor, the exact
opposite of what display CRTs used, and metal plates on the front to allow the
contents of memory to be read out. You could have two identical CRTs, one with
the plate to use as RAM and one built into the display panel so the operators
could see the contents of RAM in real time.

Core memory was tiny little ceramic ferrite doughnuts woven into complicated
metal grids, which would change how they were magnetized in response to a
sufficient current; their state could be read back out (destructively, so
every read had to be followed by a rewrite), and core was, in fact,
nonvolatile. The core modules were woven by hand by women working with
microscopes and very steady hands.

Being able to make usable amounts of memory out of components we can just etch
into silicon was an amazing advance. Nothing short of revolutionary, really;
computers as we know them would be flatly impossible without cheap, plentiful
RAM.

~~~
nickpsecurity
There are a few of us here combing through the old stuff to see all that old
wisdom and tricks. So many clever things. I saw the mercury computers while
researching old systems of my arch-nemesis, the NSA. The document joked that
buffer overflows were a serious problem on those. No shit lol!

The rope memory I read about while studying Apollo and Margaret Hamilton's
work. Core I saw a bit of while watching videos of Burroughs computers, my
favorite, being assembled in factories. It was actually kind of mesmerizing
watching that old stuff because you can _see_ all the brilliance and
intricacies of the device. Today, it's all hidden behind plastic for anyone
except labs (e.g. ChipWorks) that can pull it out. People say, "Why would I
pay (obscene amount here) dollars for this little piece of plastic?" One thing
I do is show them pictures of old computers without covers, saying "It's
basically that... with 10x more components... squeezed into that tiny space.
What, did you think squeezing a room full of wiring and components into a tiny
space cost a few bucks?"

------
zero_iq
Fantastic achievement. Would love to know some more details about the project
and hardware design.

~~~
coldpie
Agreed, this is super cool. I've read the book "Code" a few times, so I
understand a lot of this stuff in theory, but I sure don't understand the
practice. Awesome work.

~~~
vishvananda
I read Code a couple of times and did Nand2Tetris. I also found the edX MIT
6.004 course[1] to be extremely useful for filling in some of the gaps. I'm
waiting for part three to start in a couple of weeks.

[1]: [https://www.edx.org/course/computation-structures-part-1-digital-mitx-6-004-1x2](https://www.edx.org/course/computation-structures-part-1-digital-mitx-6-004-1x2)

------
amelius
Does it do pipelining, branch prediction and (hyper)threading? How is its
memory hierarchy (cache levels) organized? Does it support a privileged mode?
Does it support hardware page faults? Did you implement floating-point
operations?

> In order to be able to program it at all, I wrote a very simple compiler
> (strictly speaking it’s actually an assembler)

Does that compiler/assembler run on your hardware?

~~~
cdkersey
In a word: no. Processors this simple don't implement any of the features you
mention, including multi-level memory hierarchies, and I doubt the author had
any inclination to make the assembler self-hosting.

~~~
Someone
I'm not saying one should expect or demand it in a project like this, but
minimalistic pipelining should be doable; both the 6502
([http://www.6502.org/users/andre/65k/arch.html#pipelining](http://www.6502.org/users/andre/65k/arch.html#pipelining))
and (IIRC) the 8080 started fetching the next instruction before finishing
execution of the current one, where possible.
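
Rough back-of-envelope sketch in Python of what that overlap buys (numbers
made up, not real 6502 timings): if the next opcode fetch hides under the last
cycle of the current instruction, you save roughly one cycle per instruction.

    # Cycle counts with and without overlapping the next opcode fetch.
    FETCH = 1                        # cycles to fetch an opcode
    program = [2, 3, 2, 4, 2, 2]     # execute cycles for a made-up mix

    no_overlap = sum(FETCH + ex for ex in program)
    overlap = FETCH + sum(program)   # only the first fetch is exposed

    print(no_overlap, overlap)       # 21 vs 16 cycles for this toy mix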

~~~
to3m
Do you have an example of a case where this provided a performance benefit to
6502 code?

