Here's to the systems programmers – Writing Hello World on a home brew CPU (medium.com)
112 points by jackqu7 on May 13, 2016 | 31 comments

It sounds cool and interesting but has almost no detail. Come on, Jack! Give us some details, RTL, code, board schematics, whatever.

@ All

This is how you do homebrew CPUs and presentation best:


Links to others for anyone wanting to learn about this stuff:


Magic 1 was what inspired me to start the project; I continue to be in awe of the ambition of Bill's project. Watch this space for a more technically focused post.

It awed me as well. Plus, the performance and such gave me inspiration that homebrew might be an answer to non-subverted computers post-Snowden. At least, for the bootstrapping phase of other computers, SCM/builds, or key storage.

Only thing was that a HW guy told me the TTL chips he used might be hard to come by. I looked them up and had trouble finding them. So, rather than cloned, the next Magic 1 should be ported to use components currently available... preferably with a 10-year horizon. Also, ideally something fabbed at 0.35 microns or above for visual inspection. Plus, for realism: it's practically cheating to use deep submicron for "hand-built, old-school" systems.

What kind of primitive components are you using or found consistently available?

I really wish he'd document even the barest scraps of the CPU architecture. Is it microcoded? There's 7400 series like 7402 or 7473, and there's 7400 series like 74154, 74181, etc.

"hey kid, I built one of these too, back in the day, and it was damn hard and time consuming, but oh a labor of love. I had no internet, nor any schooling. I read articles in Popular Electronics and I read the TI 7400 series databook. Then I reread them. Then I reread them." Yes, in fact it was snowing, uphill, and I in fact had no shoes!

That reminds me of my favorite project of all time, the 7400 FPGA:


Never saw that one. Wild stuff. Here's the latest on open FPGA's:


Completely agree, this was just the first foray into documenting this project and I wanted to make it less heavy on the details of the hardware to reach a wider audience. I'm definitely going to put up a more technical post and the sources/schematics as soon as I can.

To answer your questions, it's hard-wired, with the control logic made out of the simpler chips (7400, 7408, 7432, etc.), but the rest of the system does contain more complex chips, the 74181 ALUs being the largest.
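For anyone curious what that hard-wired control path feeds, here is a rough Python sketch of the kind of 4-bit operations a 74181 slice performs, with carry chaining between slices. This is illustrative only: the real 74181 selects among 16 logic and 16 arithmetic functions via S0-S3 plus a mode pin, which this toy does not reproduce.

```python
# Illustrative 4-bit ALU slice: NOT the actual 74181 function table,
# just a few representative operations with a carry chain.
def alu4(a, b, op, carry_in=0):
    if op == "add":
        total = a + b + carry_in
        return total & 0xF, (total >> 4) & 1   # (result, carry out)
    if op == "and":
        return a & b, 0
    if op == "or":
        return a | b, 0
    if op == "xor":
        return a ^ b, 0
    raise ValueError(op)

# Chaining two slices widens the datapath, as two 74181s would:
lo, c = alu4(0b1001, 0b1000, "add")      # 9 + 8
hi, _ = alu4(0b0000, 0b0000, "add", c)   # carry ripples into the next slice
print(hi, lo)  # 1 1  ->  17 across the two slices
```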

Judging from the small number of chips in the photograph, they used the higher integration 74 series logic rather than the lower level flip-flop/NAND type.

> Built from simple logic chips, it runs at a blistering 4MHz, has 32kb of RAM

I know that the author was being sarcastic, but really, 4MHz with 32KB of RAM is like state-of-the-art '80s technology. The fact that we can even get to this point with protoboards and discrete off-the-shelf logic chips is impressive.

I mean, I learned how to program back in 1988 on a computer not much more powerful than that.

I wrote Hello World for my relay computer - only 27 lines (uses self-modifying code, same as Jack!):
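For readers who haven't met the trick: on minimal machines with no indexed addressing, the loop rewrites the address field of its own load instruction to walk through the string. Here is a toy Python interpreter for a hypothetical ISA (not the relay machine's actual instruction set) showing the idea:

```python
# Hypothetical toy ISA; the classic self-modifying-code trick is that
# INCA patches the address field of the LOAD instruction at mem[0].
mem = [
    ["LOAD", 6],    # 0: acc = mem[6]  (this address field gets patched)
    ["JZ", 5],      # 1: halt on the 0 terminator
    ["PRINT", 0],   # 2: emit the loaded character
    ["INCA", 0],    # 3: self-modify: mem[0][1] += 1
    ["JMP", 0],     # 4: loop
    ["HALT", 0],    # 5
] + list("Hello, world!") + [0]   # 6..: string data, 0-terminated

acc, pc, out = 0, 0, []
while True:
    op, arg = mem[pc]
    if op == "LOAD":
        acc = mem[arg]; pc += 1
    elif op == "JZ":
        pc = arg if acc == 0 else pc + 1
    elif op == "PRINT":
        out.append(acc); pc += 1
    elif op == "INCA":
        mem[0][1] += 1; pc += 1   # the self-modifying store
    elif op == "JMP":
        pc = arg
    elif op == "HALT":
        break

print("".join(out))  # Hello, world!
```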


Any idea where I could go about learning to do this myself? A few google searches mainly led me to things saying it was impossible.

I think you'll like this then: http://www.nand2tetris.org/

It's a wonderful, in fact an astonishing course. I hope that when they come out with part 2, it really does go all the way to Tetris. The "nand" bit is a slight fudge (they actually start you off with both nand and a flip-flop, which is still amazing), but we can grant them a bit of poetic license on that. But not on the Tetris!

I wonder if this had something to do with someone I encountered a day or two ago who thought all you needed was a NAND gate to do anything. I said you need memory and some other things for most hardware. Maybe that commenter got the idea from this project's name...

Note that you can in fact create memory with NAND gates. See https://en.wikipedia.org/wiki/Flip-flop_(electronics)#SR_NAN...
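A quick simulation of that SR NAND latch, settling the cross-coupled pair iteratively (a sketch, not a timing-accurate model; inputs are active-low):

```python
def nand(a, b):
    return 0 if (a and b) else 1

def sr_nand_latch(s_bar, r_bar, q, q_bar):
    """Settle a cross-coupled NAND pair given current outputs."""
    for _ in range(4):  # iterate until the feedback loop stabilizes
        q_new = nand(s_bar, q_bar)
        q_bar_new = nand(r_bar, q)
        if (q_new, q_bar_new) == (q, q_bar):
            break
        q, q_bar = q_new, q_bar_new
    return q, q_bar

# Pulse set low, then release: the latch remembers Q = 1.
q, qb = sr_nand_latch(0, 1, 0, 1)   # set
q, qb = sr_nand_latch(1, 1, q, qb)  # hold: state is retained
print(q, qb)  # 1 0
```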

Yes, and in one of their Coursera videos the nand2tetris guys sketch out how this works. But I wish they'd have included it in the course itself. Maybe it would have needed an extra chapter, but for those of us who never studied it before it's really cool stuff.

That's neat. Let's test the rest then. The full claim was that a computer needs logic cells, RAM, ROM/flash, and analog components (e.g. I/O & power-related). Still only need a NAND gate? ;)

I'll give it a shot :)

logic cells: easy, by universality of NAND.
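For concreteness, the standard constructions (NOT is one NAND with tied inputs, AND is NAND then NOT, OR is NAND of inverted inputs, XOR takes four NANDs):

```python
def nand(a, b):
    return int(not (a and b))

def not_(a):    return nand(a, a)              # 1 NAND
def and_(a, b): return not_(nand(a, b))        # 2 NANDs
def or_(a, b):  return nand(not_(a), not_(b))  # 3 NANDs
def xor_(a, b):                                # 4 NANDs
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

pairs = [(0, 0), (0, 1), (1, 0), (1, 1)]
print([and_(a, b) for a, b in pairs])  # [0, 0, 0, 1]
print([or_(a, b) for a, b in pairs])   # [0, 1, 1, 1]
print([xor_(a, b) for a, b in pairs])  # [0, 1, 1, 0]
```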

RAM: distinction between flip flops and RAM is unnecessary from a strictly technical standpoint. a massive (or not so much) array of flip flops with decoder and mux (also just made of NANDs) can get you by.
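A sketch of that decoder-plus-mux arrangement, with a Python list standing in for the flip-flop array (the class and sizes are mine, not from any real part):

```python
class TinyRAM:
    """4-word x 1-bit RAM: flip-flops plus a decoder (write side)
    and a mux (read side). Each self.bits[i] models one D flip-flop."""
    def __init__(self):
        self.bits = [0, 0, 0, 0]

    def write(self, addr, data):
        # Decoder: exactly one word-line select goes high.
        select = [1 if i == addr else 0 for i in range(4)]
        for i in range(4):
            if select[i]:            # the selected flip-flop captures data
                self.bits[i] = data

    def read(self, addr):
        # Mux: route the selected flip-flop's output to the data line.
        select = [1 if i == addr else 0 for i in range(4)]
        return sum(s & b for s, b in zip(select, self.bits))

ram = TinyRAM()
ram.write(2, 1)
print(ram.read(2), ram.read(0))  # 1 0
```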

ROM/flash: maybe I could argue NAND flash counts as just a NAND gate, but not quite. I'll concede nonvolatile memory with just plain old NANDs. May I suggest a https://en.wikipedia.org/wiki/Diode_matrix?
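A diode matrix is just a lookup: energizing an address row pulls high exactly the output columns where a diode is populated. A minimal model (the addresses and bit patterns here are made up):

```python
# Each populated diode is a (row, column) pair on the grid.
ROWS, COLS = 4, 4
diodes = {(0, 0), (0, 2), (1, 1), (1, 2), (1, 3), (3, 0)}

def rom_read(addr):
    # Selecting a row reads 1 on every column with a diode at that row.
    return [1 if (addr, col) in diodes else 0 for col in range(COLS)]

print(rom_read(0))  # [1, 0, 1, 0]
print(rom_read(2))  # [0, 0, 0, 0]  (row 2 has no diodes populated)
```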

"analog components": having Vcc and ground available is an implicit requirement of having a functional NAND gate, and IO could just be some wires mirroring a memory location.

Overall I'd say it's a largely accurate statement. All you need is something to set up your initial conditions (e.g. a program in memory).

We've forgotten how hard of a problem RAM used to be, and how far into the Rube Goldberg we went to solve it.

Things We Actually Did:

Mercury delay lines were tubes full of mercury we sent vibrations through. Those vibrations would reflect around and get detected and read out later. This wasn't, technically, random-access, but it was memory.

Williams-Kilburn tubes were CRTs with long-persistence phosphor, the exact opposite of what display CRTs used, and metal plates on the front to allow the contents of memory to be read out. You could have two identical CRTs, one with the plate to use as RAM and one built into the display panel so the operators could see the contents of RAM in real time.

Core memory was tiny little ceramic ferrite doughnuts woven into complicated metal grids which would change how they were magnetized in response to a sufficient current; their state could be read back out non-destructively, and core was, in fact, nonvolatile. The core modules were woven by hand by women working with microscopes and very steady hands.

Being able to make usable amounts of memory out of components we can just etch into silicon was an amazing advance. Nothing short of revolutionary, really; computers as we know them would be flatly impossible without cheap, plentiful RAM.

There's a few of us here combing through the old stuff to see all that old wisdom and tricks. So many clever things. I saw the mercury computers researching old systems of my arch-nemesis, the NSA. The document joked that buffer overflows were a serious problem on those. No shit lol!

The rope memory I read about studying Apollo and Margaret Hamilton's work. Core I saw a bit of watching videos of Burroughs computers, my favorite, being assembled in factories. It was actually kind of mesmerizing watching that old stuff, because you can see all the brilliance and intricacies of the device. Today, it's all hidden behind plastic for anyone except labs (e.g. ChipWorks) that can pull it out. People say, "Why would I pay (obscene amount here) dollars for this little piece of plastic?" One thing I do is show them pictures of old computers without covers and say, "It's basically that... with 10x more components... squeezed into that tiny space. What, did you think squeezing a room full of wiring and components into a tiny space cost a few bucks?"

"I'll give it a shot :)"

I admire your initiative. :)

"logic cells: easy, by universality of NAND."

Got me dead to rights there. It seems able to do anything Boolean.

"a massive (or not so much) array of flip flops"


Close but still need the resistors.

"May I suggest a"

That's not a NAND gate, but it's pretty neat. Thanks for that link, especially since, when you posted it, I was, in search of esoteric constructions, just discovering and looking at these diode-based works:



Note: D-17B is pretty wild in design, appearance, tech, and reliability all at once. Makes me want to try it on 0.35 micron in voter part of lockstep or TMR circuits. :)

"having Vcc and ground available is an implicit requirement of having a functional NAND gate, and IO could just be some wires mirroring a memory location."

Now you're stretching. A NAND doing signal conditioning, power regulation, or 2-3-component differential equations is hard to believe. I'm thinking it fails here. I admit I cheated by including an analog requirement, but it exists in every real-world computer (esp. SoCs), so it seemed fair-ish.

Overall, a high-scoring counterpoint that taught me some useful stuff about NAND gates and old-school ROM. I'm researching both reductionism and old-school techniques in anti-subversion hardware. So, quite relevant. Thanks!

Those resistors are not part of the flop, they're just there to limit current through the LEDs to make the state visible. See here for a D flip flop: https://en.wikipedia.org/wiki/Flip-flop_(electronics)#/media....

My point about Vcc and ground was just meant as a response to the "power-related analog components". If you don't already have a regulated power rail, even a single NAND gate won't be able to do its thing, so it's definitely a requirement; but if you say you're allowed a functional NAND gate, you must be allowed power rails as part of that.

I'll be darned. So, NAND gets some extra points on the flip flop. As far as analog goes, even if I ignored the power rail, we still have ADCs, DACs, etc. This time I only hesitate slightly when I say I doubt they'll be implemented with just NAND gates.

Fantastic achievement. Would love to know some more details about the project and hardware design.

Agreed, this is super cool. I've read the book "Code" a few times, so I understand a lot of this stuff in theory, but I sure don't understand the practice. Awesome work.

I read Code a couple of times and did Nand2Tetris. I also found MIT's 6.004 course on edX[1] extremely useful for filling in some of the gaps. I'm waiting for part three to start in a couple of weeks.

[1]: https://www.edx.org/course/computation-structures-part-1-dig...

Does it do pipelining, branch prediction and (hyper)threading? How is its memory hierarchy (cache levels) organized? Does it support a privileged mode? Does it support hardware page faults? Did you implement floating-point operations?

> In order to be able to program it at all, I wrote a very simple compiler (strictly speaking it’s actually an assembler)

Does that compiler/assembler run on your hardware?

In a word: no. Processors this simple don't implement any of the features you mention, including multi-level memory hierarchies, and I doubt the author had any inclination to make the assembler self-hosting.

I'm not saying one should expect or demand it in a project like this, but minimalistic pipelining should be doable; both the 6502 (http://www.6502.org/users/andre/65k/arch.html#pipelining) and (IIRC) the 8080 started fetching the next instruction before finishing execution of the current one, where possible.

Do you have an example of a case where this provided a performance benefit to 6502 code?

Why should it?
