
A2Z: A computer designed and built from scratch. Custom CPU on FPGA (2017) - F4HDK
https://hackaday.io/project/18206-a2z-computer
======
QuadrupleA
Cool project! It's interesting, the dichotomy in tech communities between the
'minimalists' on one hand, who love to get under the hood, work their way down
to the bare metal, and understand how everything works, and the opposite
trend: building seemingly simple web apps that sit on top of 1,000 libraries
and frameworks, packing huge dependency chains into Docker containers
distributed onto clusters (probably because the app runs so slowly on a single
VM), and so on. I wonder if it's two fundamentally different kinds of
personalities at work.

~~~
freehunter
The first person you describe has to have almost endless amounts of time
and/or technical interest available to them. The second person you describe
has a job to get done as quickly as possible before moving on to the next job.

The whole reason libraries and frameworks were created is so everyone doesn't
have to dig down to the bare metal to get a task completed. If my manager
asked me to provision a server to run an application on and I sat down and
built my own hardware from scratch and then wrote my own OS rather than
clicking a single button in VMware, I'd be fired pretty quickly.

~~~
greenshackle2
You sound defensive. I don't think the parent meant that one type is better
than the other.

The fundamental difference is not the amount of free time. It's a question of
interest and what you are good at. There are people who spend their endless
free time making shiny web apps, just like there are people who spend it
designing CPUs for fun.

Depending on your line of work, digging to the bare metal _does_ help get your
job done quickly. I know a number of embedded systems programmers; bare metal
_is_ their job. There are people paid to work on the Linux kernel, to program
FPGAs for high-frequency trading, and so on.

And learning the low level can help you do your job quickly even if you are
not a systems programmer. All abstractions are leaky, and inevitably some
low-level problem will bubble up into your high-level application that you
will have to deal with. If you understand the low level, it may take you half,
or a tenth, of the time to figure out and fix the problem.

~~~
Jach
There are tradeoffs everywhere. The parent made it sound like libraries and
frameworks are always going to make you go faster; of course that's not always
the case, and I don't think the parent believes that. It's very often worth it
to descend a level or two to do your job, since some things can be done faster
lower down, though it's rarely worth it to descend all the way to the bare
metal or deeper (but sometimes necessary). Another point echoes your
conclusion: sometimes a bit of knowledge at those layers will indirectly help
at the layers above.

When the subject of many layered JS frameworks and dependency graphs with
hundreds of mini libraries comes up I'm always reminded of Rasmus's 30 second
ajax tutorial:
[https://web.archive.org/web/20060507105529/http://news.php.n...](https://web.archive.org/web/20060507105529/http://news.php.net/php.general/219164)
(And the modern equivalent:
[http://youmightnotneedjquery.com/](http://youmightnotneedjquery.com/)) Very
often libraries and frameworks are brought in because they "help us go faster
[because of x,y,z]" with some x,y,zs like "we don't have to think about that
problem" or "the global architecture/structure is taken care of", but the cost
of dependencies sometimes outweighs the cost of thinking about the problem and
doing it yourself. Libraries and frameworks are tradeoffs, you'll likely use a
lot of them if you look at every layer you can actually influence, but they're
not necessarily net boons.

~~~
greenshackle2
Yeah, I can agree with that. You'll get the most value out of understanding
the 1 or 2 layers below. Digging all the way to bare metal is usually not
necessary or practical, though it may help once in a long while.

For example, ORMs. If you try to use an ORM without knowing SQL you will have
a bad time as soon as you hit performance issues or you have to do something
that doesn't fit quite nicely into the ORM model. I have yet to see a project
that uses an ORM that doesn't use SQL in places.

ORMs are not necessarily net positive. They make _some_ things simpler at the
cost of an extra layer of indirection.

Going a level deeper, knowing how query planners work at a high level will
help you write performant SQL queries.
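
For example, SQLite will report its plan directly, so you can see whether a
query scans the whole table or uses an index. A minimal C sketch of the idea
(the database, table, and column names here are made up, and error handling is
omitted):

    /* Ask SQLite's query planner how it would execute a query.
       Build with: gcc plan.c -lsqlite3
       Assumes an app.db containing a table users(email TEXT). */
    #include <stdio.h>
    #include <sqlite3.h>

    int main(void) {
        sqlite3 *db;
        sqlite3_stmt *stmt;
        sqlite3_open("app.db", &db);

        /* The "detail" column reports SCAN (full table) or SEARCH (index). */
        sqlite3_prepare_v2(db,
            "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?;",
            -1, &stmt, NULL);
        sqlite3_bind_text(stmt, 1, "a@example.com", -1, SQLITE_STATIC);

        while (sqlite3_step(stmt) == SQLITE_ROW)
            printf("%s\n", (const char *)sqlite3_column_text(stmt, 3));

        sqlite3_finalize(stmt);
        sqlite3_close(db);
        return 0;
    }

Without an index on email this prints a SCAN; after a CREATE INDEX on the
column, the same query becomes a SEARCH.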

Going as far as understanding how your RDBMS is implemented should not
typically be necessary. It would help for finding bugs in the RDBMS, but those
should be very rare.

------
shacharz
There's the great nand2tetris course [1], which teaches step by step how to
build a computer from the simplest logic gates: using an HDL to build your own
ALU and computer, and later on an operating system, etc.

[1] [https://www.nand2tetris.org/](https://www.nand2tetris.org/)
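
The progression works because each layer is a few compositions of the one
below. As a toy illustration in C (rather than the course's own HDL), here is
the very first step, deriving the basic gates from NAND alone:

    /* Every gate built from NAND only; 1-bit values carried in ints. */
    static int nand(int a, int b) { return !(a && b); }
    static int not_(int a)        { return nand(a, a); }
    static int and_(int a, int b) { return not_(nand(a, b)); }
    static int or_(int a, int b)  { return nand(not_(a), not_(b)); }
    static int xor_(int a, int b) {
        int n = nand(a, b);                  /* shared intermediate term */
        return nand(nand(a, n), nand(b, n));
    }

The course keeps stacking the same trick: gates into adders, adders into an
ALU, and so on up to a working machine.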

------
whitten
To quote the author (F4HDK), who designed everything from hardware to software
(compiler, loader, assembler, etc.): "It is a design that came from my
imagination, not from existing CPUs. It is a RISC design, without microcode,
and the instructions are very low-level. Therefore, the executable code is
much bigger than for CISC CPUs. But the CPU core itself is very simple:
instruction decoding in particular is very, very simple. It is also slower
than CISC because the CPU takes lots of time just reading instructions from
the bus (and of course there is no execution pipeline)... But it works!"
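
"Very simple instruction decoding" usually means fixed-width instruction
fields that fall out of the word with a few shifts and masks, with no
microcode table in between. A hypothetical C sketch of the idea (the field
widths are invented here, not A2Z's actual encoding):

    #include <stdint.h>

    /* Split a 32-bit instruction word into fixed fields. */
    typedef struct {
        uint8_t  opcode;   /* bits 31..26 */
        uint8_t  reg_dst;  /* bits 25..21 */
        uint8_t  reg_src;  /* bits 20..16 */
        uint16_t imm;      /* bits 15..0  */
    } Insn;

    static Insn decode(uint32_t word) {
        Insn i;
        i.opcode  = (word >> 26) & 0x3F;
        i.reg_dst = (word >> 21) & 0x1F;
        i.reg_src = (word >> 16) & 0x1F;
        i.imm     =  word        & 0xFFFF;
        return i;
    }

The flip side is exactly the code-size cost the author mentions: work that one
CISC instruction does takes several of these fixed-format words.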

------
ChuckMcM
Nice Job! Don't be discouraged when people say "What a waste of time, why
didn't you just use an Arduino?"

When I went to college I had a choice of majoring in Computer Science or
Electrical Engineering (there weren't computer systems degrees at the time).
Since I really really wanted to know how to build a computer from the ground
up (I had assembled one from a kit already in high school and was hungry to
know more!) I chose getting my degree in EE with a minor in CS. I don't know
where you are in your studies but if you have the opportunity you might find,
as I did, that this path scratches that particular itch.

There are a number of books you might track down that you would find
interesting given what you have now learned about computer internals. One is
"Computer Engineering: A DEC View of Hardware Systems Design", which talks
about the minicomputers DEC designed and their architecture. Others are
"Introduction to Computer Systems Using the PDP-11 and PASCAL" (McGraw-Hill
computer science series) and "Digital Computer Design" by Kline. All are out
of print, but a good computer science section in a library should have them.

One of the reasons I enjoy the older books on computer design is that they
assume you don't know why different paths were chosen and so they explain in
more detail why one path is better than another. Modern texts often assume you
have learned these best practices elsewhere and so treat these design
decisions as given knowledge.

If you ever do decide to pick it up again, the two places that you might find
rewarding would be automated co-processing (like DMA controllers) and complex
mathematical instructions (like floating point).

~~~
F4HDK
Thank you very much for this encouraging comment! I ended my studies 15 years
ago; I made this project as an autodidact. I don't know if I will work again
on such projects, because I have tons of other electronics topics I want to
work on (mainly radio). But if I pick it up again, it will be with a brand new
CPU project.

------
LeonM
Needs a (2017) tag.

Cool project nonetheless. I built a custom CPU on an FPGA as a school project
once. It was far less complicated than A2Z; IIRC I copied the instruction set
from a different CPU, so I could use the assembler (and subsequently the C
compiler) from that vendor. I can recommend doing such a project (VHDL is not
that hard to learn); it's an awesome learning experience!

~~~
wilsonnb3
Why do we bother tagging recent articles with the year? Is there a reason that
I need to know this was from 2017 instead of 2016 or 2018?

~~~
LeonM
I think it's just a way to manage expectations, since the project hasn't been
updated for at least a year.

~~~
jeff_carr
Also, RISC-V has matured and has upstream gcc and Linux kernel support. It is
also completely open, and you can run it on an FPGA.

This project is cool and done with the correct intentions by the author, but
there are other projects with the same correct intentions that are already
much farther along.

~~~
F4HDK
Why do you compare my A2Z project with RISC-V? Have you read the pages and the
blog posts on hackaday? Have you understood what A2Z is? A2Z is (only) a
DIDACTIC project. The goal is _learning by doing_, and therefore the goal is
to reinvent the wheel, just for fun. The "learning by doing" method is the
best method I know. The principle is absolutely NOT to take an existing CPU or
OS and assemble existing parts. And of course, I have learned a lot of things
with this method. That's why I'm sharing this project. I hope some people will
begin such projects on their own and learn as much as I did.

------
jwineinger
Cool project. I did something similar for an FPGA class in college. The prof
gave us 3 C programs and we had to implement everything needed to make them
work. A difficult project, but one of the most rewarding.

------
bogomipz
Under the hardware section the author states:

>"I have built this development board by myself, using wrapping technique,
because I couldn’t find any board with 2MB of SRAM arranged in 16bits. I
wanted SRAM, instead of DRAM, for the simplicity."

I have heard the term "wrapping" or "board wrapping" in historical references
by Steve Wozniak and the original Homebrew Computer Club as well. Could
someone describe what this "wrapping" process entails? Is this essentially
translating the Verilog into physical wires and pins?

~~~
fasquoika
It's a way of cold-welding wires to the pins of electronic components [1].
It's pretty much fallen out of use as computers have gotten too small for the
technique, but it's nice for prototyping because it's easier to undo than
solder while being more permanent than a breadboard.

[1]:
[https://en.wikipedia.org/wiki/Wire_wrap](https://en.wikipedia.org/wiki/Wire_wrap)

~~~
blacksmythe

    >> easier to undo than solder

Bonus fun - there would almost always be multiple wiring errors to find after
wirewrapping.

You start with empty wirewrap sockets, wirewrap the board, then 'buzz' the
board with an audio buzzer to check every connection, then add the electronic
chips, then start debugging the circuit.

------
srcmap
What's the software required to compile, debug, and test the Verilog code for
this project or other similar projects?

I used Xilinx VTPro 20 10+ years ago; I'd like to know the state of FPGA
software tools today.

~~~
F4HDK
The source code is compatible with Altera Quartus II. You can also execute the
A2Z emulator on your PC, without the Altera suite.

~~~
tinktank
How long did it take you to do this? Did you have previous experience with all
the various aspects (compiler, FPGA, instruction set, etc.) before, or did you
build it up as you went along?

~~~
F4HDK
I am an autodidact for all these things (FPGA, compilers). This is my first
FPGA/Verilog project and my first compiler project. I learned these things
specifically for this project. It took me 2 years to complete, during evenings
and weekends. I have not counted exactly, but probably between 200 and 400
hours of work.

------
cushychicken
The best part of the FPGA class I took in college was writing a processor from
scratch: ALU, program counter, control logic, all in VHDL.

Wish I'd taken the follow-on course about writing peripherals.

~~~
jwineinger
Same. We also got to implement a keyboard controller and VGA output, and the
grading was based on our system running the prof's C programs, taking input,
and producing correct output. Lots of late nights, but great fun when it
worked.

------
bradhoffman
Any recommended resources for getting into FPGA development? I've always been
interested, but don't know where to start.

~~~
F4HDK
It really depends on your current skills. If you already know about
electronics, and roughly what an FPGA is, and if you know C programming, then
you can move rapidly to FPGAs and Verilog. One good and very condensed
training course is below:
[http://www.ee.ic.ac.uk/pcheung/teaching/ee2_digital/Altera%2...](http://www.ee.ic.ac.uk/pcheung/teaching/ee2_digital/Altera%20Tutorial%20-%20Verilog%20HDL%20Basic.pdf)

------
megous
I was reading through the gcc source code yesterday and found the moxie
architecture, which seems like a quite similar, very small project. It is from
the author of libffi and includes gcc, binutils, and qemu ports.

It's probably a nice example of how to take this further and implement GNU
toolchain support for something like this.

~~~
teabee89
From
[https://hackaday.io/project/18206-a2z-computer/log/71637-5-a...](https://hackaday.io/project/18206-a2z-computer/log/71637-5-and-after-whats-going-next)

 _I’m afraid Linux and a C compiler are totally not feasible.

A2Z lacks many things needed to achieve the goal of C retargeting and Linux
porting.

- A2Z only manages direct addressing in hardware. No complex address
computation. If you want to implement a data stack and recursive functions,
then direct addressing is not enough. You cannot retarget a C compiler with
only direct addressing (or if you emulate complex addressing modes in
software, it would be very, very slow).

- A2Z has no interrupt management. You cannot implement/port a pre-emptive
multitasking OS (Linux) without interrupt management.

- A2Z has no memory management unit.

- A2Z’s ALU is compatible with nothing._

~~~
userbinator
Linux is probably out of the question, but lack of addressing modes won't stop
a C(-ish) compiler implementation; one only needs to look at microcontrollers
like the PIC family, the 8051, and the 6502 to see how far they've been pushed
--- C compilers are available for all of them.

 _or if you emulate complex addressing modes in software, it would be very,
very slow_

This is exactly what 8051 compilers do, and it's actually acceptably fast in
practice.
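
The classic move is to drop the stack entirely and give every function's
locals fixed addresses. Roughly, as a hypothetical C-level sketch of the
transformation (not any particular compiler's actual output):

    /* What you write: */
    int square_plus_one(int x) {
        int tmp = x * x;
        return tmp + 1;
    }

    /* What a direct-addressing-only target conceptually gets: the
       parameter and the local live at fixed static addresses. Every
       access is a cheap direct load/store, but the function is no
       longer reentrant and recursion is impossible. */
    static int sq1_x, sq1_tmp;

    int square_plus_one_lowered(void) {
        sq1_tmp = sq1_x * sq1_x;
        return sq1_tmp + 1;
    }

This is why 8051 toolchains typically make functions non-reentrant by default
and overlay the static frames of functions that are never live at the same
time.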

...and Linux is only "out of the question" if you rule out any sort of
emulation. Otherwise... well, just take a look:

[http://dmitry.gr/?r=05.Projects&proj=07.%20Linux%20on%208bit](http://dmitry.gr/?r=05.Projects&proj=07.%20Linux%20on%208bit)

~~~
F4HDK
Maybe I have not understood in detail what you mean... but my A2Z CPU has no
internal (hardware) stack pointer, unlike the 8051. The only available
addressing mode is direct addressing. Of course, you can emulate indirect
addressing modes, and you can emulate 32-bit manipulations... But it would not
be optimal at all (i.e. very slow compared to programs using only direct
addressing). Here, on A2Z, the compiler matches the simplicity of the CPU
architecture; it is the same philosophy: only static allocation for variables.
If I had wanted a custom CPU compatible with a C compiler and with Linux, the
CPU would have been totally different.

------
Zardoz84
This makes me think about resurrecting my little toy 32-bit RISC CPU...

------
peter_d_sherman
Absolutely Brilliant!!!

------
godelmachine
Wonderful!

