A2Z: A computer designed and built from scratch. Custom CPU on FPGA (2017) (hackaday.io)
262 points by F4HDK 7 months ago | 50 comments



Cool project! It's interesting, the dichotomy in tech communities between the 'minimalists' on one hand, who love to get under the hood, work their way down to the bare metal and understand how everything works, and the opposite trend: building seemingly simple web apps that sit on top of 1,000 libraries and frameworks and packing their huge dependency chains into Docker containers distributed onto clusters (probably because the app runs so slowly on a single VM), etc. I wonder if it's two fundamentally different kinds of personalities at work.


The first person you describe has to have almost endless amounts of time and/or technical interest available to them. The second person you describe has a job to get done as quickly as possible so they can move on to the next job.

The whole reason libraries and frameworks were created is so everyone doesn't have to dig down to the bare metal to get a task completed. If my manager asked me to provision a server to run an application on and I sat down and built my own hardware from scratch and then wrote my own OS rather than clicking a single button in VMware, I'd be fired pretty quickly.


You sound defensive. I don't think the parent meant that one type is better than the other.

The fundamental difference is not the amount of free time. It's just a question of interest and what you are good at. There are people who spend their endless free time making shiny web apps just as there are people who spend it designing CPUs for fun.

Depending on your line of work, digging to the bare metal does help get your job done quickly. I know a number of embedded systems programmers; bare metal is their job. There are people paid to work on the Linux kernel, to program FPGAs for high-frequency trading, and so on.

And learning the low level can help you do your job quickly even if you are not a systems programmer. All abstractions are leaky, and inevitably some low-level problem will bubble up into your high-level application and you will have to deal with it. If you understand the low level, it may take half or one tenth of the time to figure out and fix the problem.


There are tradeoffs everywhere. The parent made it sound like libraries and frameworks are always going to make you go faster; of course that's not always the case, and I don't think the parent believes that. It's very often worth it to descend a level or two to do your job, since some things can be done faster lower down, though it's rarely worth it to descend all the way to the bare metal or deeper (sometimes it is necessary). One more point, which echoes your conclusion: sometimes a bit of knowledge of those layers will indirectly help at the layers above.

When the subject of many-layered JS frameworks and dependency graphs with hundreds of mini libraries comes up, I'm always reminded of Rasmus's 30 second ajax tutorial: https://web.archive.org/web/20060507105529/http://news.php.n... (and the modern equivalent: http://youmightnotneedjquery.com/). Very often libraries and frameworks are brought in because they "help us go faster [because of x, y, z]", with x, y, z being things like "we don't have to think about that problem" or "the global architecture/structure is taken care of", but the cost of the dependencies sometimes outweighs the cost of thinking about the problem and doing it yourself. Libraries and frameworks are tradeoffs; you'll likely use a lot of them if you look at every layer you can actually influence, but they're not necessarily net boons.


Yeah, I can agree with that. You'll get the most value out of understanding the 1 or 2 layers below. Digging all the way to bare metal is usually not necessary or practical, though it may help once in a long while.

For example, ORMs. If you try to use an ORM without knowing SQL, you will have a bad time as soon as you hit performance issues or have to do something that doesn't fit neatly into the ORM model. I have yet to see a project that uses an ORM without also using raw SQL in places.

ORMs are not necessarily net positive. They make some things simpler at the cost of an extra layer of indirection.

Going a level deeper, knowing how query planners work at a high level will help you write performant SQL queries.

Going as far as understanding how your RDBMS is implemented should not typically be necessary. It would help for finding bugs in the RDBMS itself, but that should be very rare.
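
To make the ORM/SQL point concrete, here's a minimal sketch using Python's built-in sqlite3 module and a made-up users/orders schema (no particular ORM is assumed): the aggregate query is the kind of thing you often end up writing by hand once the ORM stops fitting, and EXPLAIN QUERY PLAN is the "one level deeper" look at the query planner.

    import sqlite3

    # Toy schema, purely for illustration.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders (id INTEGER PRIMARY KEY,
                             user_id INTEGER REFERENCES users(id),
                             total REAL);
        CREATE INDEX idx_orders_user ON orders(user_id);
    """)

    # Total spend per user in one query, instead of the N+1 lookups
    # a naively used ORM tends to generate.
    rows = conn.execute("""
        SELECT u.name, COALESCE(SUM(o.total), 0) AS spend
        FROM users u
        LEFT JOIN orders o ON o.user_id = u.id
        GROUP BY u.id
        ORDER BY spend DESC
    """).fetchall()

    # One level deeper: ask the query planner how it will run a lookup.
    plan = conn.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE user_id = ?", (1,)
    ).fetchall()
    print(plan)  # should show the idx_orders_user index being used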


You are 100% right. This project fits 100% with my personality. I like to understand in detail what I'm working on, even if I need to "lose" time exploring things not directly linked to what I'm supposed to do. And I also like systems that are designed at the necessary level of complexity.


Hi, this is fantastic! Thanks for sharing.

I would be curious to know what your level of proficiency was in some of these areas such as C, Verilog, writing assemblers, programming FPGAs, board wrapping, etc.

How much learning did you do as you went, versus how much prior exposure did you have to some of these things?

Are there any resources you found helpful or that you would recommend to others who want to undertake a similar project?

Cheers.


I started this project with only standard electronics knowledge. I had no experience with FPGAs or with writing compilers and assemblers; I learned these topics as an autodidact. It is a "learning by doing" project. And of course, I have learned tons of new things; it has been a very rewarding project.


> I wonder if it's two fundamentally different kinds of personalities at work.

Not two different kinds of personalities, just two different kinds of work. Many people find ways to do both.

The minimalist role is better for learning and building for the art of it — when the purpose of the making is the making itself. The article is a great example of this.

The other role is when that learning needs to be applied to a further end. E.g. when shipping product, code is a "means," and pre-built libraries and layers of abstraction are leverage.


Give it a few years, and we will see the first ‘compile npm to FPGA’ project that still uses a thousand libraries, but does away with the CPU, the OS, the docker container, the web server, etc.

(if it doesn’t exist already. The best I can google is https://github.com/hamsternz/FPGA_Webserver, which is incomplete, misses the ‘npm to’ part, and seems abandoned)


I don't think it's necessarily two fundamentally different personalities (unless I have multiple...which I might...)

I grew up taking things apart, and I loved the courses where we built logic gates or modified compiler or interpreter code.

I now build things on the shoulders of giants.

But when I need to, I know I can dive to the deepest levels to debug something, or I can write or customise any part of the stack.

It sure is a mindset shift and a context switch. I consider choosing the right moment to switch approach to be one of the most important and hardest tasks of developing software systems.


I'm a bit of an odd duck because I do ASIC design for work (custom Digital Audio Codecs) and do web development for fun/side work. So I do the bare metal and the high level.


I have an unrelated question for you: what's the state of the art in audio codecs?

Back in college ten years ago I was told that while technically we have 24-bit ADCs, it's more of a marketing stunt, because the last few bits are essentially garbage since the voltage levels corresponding to some of the more significant bits are somewhat off.


There are some DACs that claim crazy high SNRs of 130dB. But my experience is that anything over 100dB is really good. Anything over 110dB is nutty. That puts it at 21-22 bits.


You might be referring to "Effective Number Of Bits". Physically realizable ADC and DAC devices are not perfect so you can use metrics like ENOB to estimate how your system might actually perform.

https://en.m.wikipedia.org/wiki/Effective_number_of_bits
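
For a rough sense of the numbers above, the standard ENOB formula for a full-scale sine input is ENOB = (SINAD - 1.76 dB) / 6.02 dB. A quick Python sketch, treating the quoted SNR figures as SINAD:

    # ENOB = (SINAD - 1.76) / 6.02 for a full-scale sine wave
    def enob(sinad_db: float) -> float:
        return (sinad_db - 1.76) / 6.02

    for db in (100, 110, 130):
        print(f"{db} dB -> {enob(db):.1f} effective bits")
    # 100 dB -> 16.3, 110 dB -> 18.0, 130 dB -> 21.3,
    # which matches the "21-22 bits" figure mentioned above for 130 dB.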


It reminds me a little of people who restore old cars: yes, there's something great about it, but no, I'm not interested in driving a Model T around myself...


You are right, but my A2Z is not like an old car or a vintage computer. It's more like designing and building a go-kart or a small buggy from only a few existing parts, without referring to existing drawings. A very simple machine, not very advanced, but one that I understand in all its details.


It's like a form of art, and this is great artistry. Thanks/merci for sharing, it's motivational to see these things reach a "finished" state.


Yes, the "finished" status of A2Z is what I am the most proud of. Lots of "CPU on FPGA" projects only deal with the CPU itself, and will never become a fully "usable" computer. I managed to work on the 3 main topics : hardware, software dev toolchain, and software itself.


There's the great nand2tetris course [1], which teaches step by step how to build a computer from the simplest logic gates, using an HDL, to building your own ALU and computer, and later on an operating system, etc.

[1] https://www.nand2tetris.org/


To quote the author (F4HDK), who designed everything from hardware to software (compiler, loader, assembler, etc.): It is a design that came from my imagination, not from existing CPUs. It is a RISC design, without microcode, and the instructions are very low level. Therefore, the executable code is much bigger than for CISC CPUs. But the CPU core itself is very simple: in particular, instruction decoding is very, very simple. It is also slower than CISC because the CPU takes a lot of time just reading instructions from the bus (and of course there is no execution pipeline)... But it works!
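
As a rough illustration of why such a design spends most of its time on instruction fetch, here is a generic, non-pipelined fetch-decode-execute loop in Python (a made-up mini-ISA for illustration only, not A2Z's actual instruction set):

    # Made-up 3-field encoding: (opcode, destination address, source/immediate).
    ADD, LOAD_IMM, HALT = 0, 1, 2

    def run(program, mem):
        pc = 0
        while True:
            op, dst, src = program[pc]   # fetch: on real hardware, a full bus read
            pc += 1
            if op == LOAD_IMM:           # decode is a trivial switch on the opcode
                mem[dst] = src
            elif op == ADD:              # direct addressing: dst and src are fixed addresses
                mem[dst] += mem[src]
            elif op == HALT:
                return mem               # no pipeline: nothing overlaps the fetch above

    mem = run([(LOAD_IMM, 0, 5), (LOAD_IMM, 1, 7), (ADD, 0, 1), (HALT, 0, 0)], [0] * 16)
    print(mem[0])  # 12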


Nice Job! Don't be discouraged when people say "What a waste of time, why didn't you just use an Arduino?"

When I went to college I had a choice of majoring in Computer Science or Electrical Engineering (there weren't computer systems degrees at the time). Since I really, really wanted to know how to build a computer from the ground up (I had already assembled one from a kit in high school and was hungry to know more!), I chose to get my degree in EE with a minor in CS. I don't know where you are in your studies, but if you have the opportunity you might find, as I did, that this path scratches that particular itch.

There are a number of books you might track down which you would find interesting given what you have now learned about computer internals. One is "A DEC view on hardware design", which talks about the minicomputers DEC designed and their architecture. Another is "Introduction to Computer Systems using the PDP-11 and PASCAL" (McGraw-Hill computer science series). And there's "Digital Computer Design" by Kline. All are out of print, but a good computer science section in a library should have them.

One of the reasons I enjoy the older books on computer design is that they assume you don't know why different paths were chosen and so they explain in more detail why one path is better than another. Modern texts often assume you have learned these best practices elsewhere and so treat these design decisions as given knowledge.

If you ever do decide to pick it up again, the two places that you might find rewarding would be automated co-processing (like DMA controllers) and complex mathematical instructions (like floating point).


Thank you very much for this encouraging comment! I finished my studies 15 years ago; I made this project as an autodidact. I don't know if I will work on such projects again, because I have tons of other electronics topics I want to work on (mainly radio). But if I pick it up again, it will be with a brand new CPU project.


Needs a (2017) tag.

Cool project nonetheless. I built a custom CPU on an FPGA as a school project once. It was far less complicated than A2Z; IIRC I copied an instruction set from a different CPU so I could use the assembler (and subsequently the C compiler) from that vendor. I can recommend doing such a project (VHDL is not that hard to learn); it's an awesome learning experience!


Same here: we implemented the architecture of a CPU on an FPGA, I think 8-bit or 16-bit, not sure. It was great for understanding the ALU, opcodes, etc. Probably the most satisfying lab in my computer engineering degree.


Why do we bother tagging recent articles with the year? Is there a reason that I need to know this was from 2017 instead of 2016 or 2018?


If it's CompSci, it lets me know if it's an older version of a current work or an updated version of older work. I like to skip the former in many cases to go straight to the final, polished paper, whereas the latter keeps me from skipping a paper whose preprint or initial results I already read. Without a date, I might think it's the same paper.

I'd say it's even more important as a search aid. I run through hundreds of papers looking for the next dozen or so worth submitting to a wide audience. Search will bring me results that span decades. Dates in prominent places help me quickly filter or discover stuff depending on what I'm looking for. For instance, I was looking for CompCert-like projects and getting results going back to the 1980s. Verification of realistic, low-level programs didn't become feasible until the 2000s, so I immediately filtered anything before that time. Where the date shows up also varies depending on what kind of site has the article. And you can bet people might not have thought about this use case when publishing their papers in the 80s. Yet the standard practice of dating the stuff still helped me.


I think it's just a way to manage expectations, since the project hasn't been updated for at least a year.


Also, RISC-V has matured and has upstream gcc and Linux kernel support. It is also completely open, and you can run it on an FPGA.

This project is cool and done with the correct intentions by the author, but there are other projects with the same correct intentions that are already much farther along.


Why do you compare my A2Z project with RISC-V? Have you read the pages and the blog posts on Hackaday? Have you understood what A2Z is? A2Z is (only) a DIDACTIC project. The goal is learning by doing, and therefore the goal is to reinvent the wheel, just for fun. The "learning by doing" method is the best method I know. The principle is absolutely NOT to take an existing CPU or OS and assemble existing parts. And of course, I have learned a lot of things with this method. That's why I'm sharing this project. I hope some people will begin such a project on their own and learn as much as I did.


Cool project. I did something similar for an FPGA class in college. The prof gave us 3 C programs and we had to implement everything to make them work. A difficult project, but one of the most rewarding.


Under the hardware section the author states:

>"I have built this development board by myself, using wrapping technique, because I couldn’t find any board with 2MB of SRAM arranged in 16bits. I wanted SRAM, instead of DRAM, for the simplicity."

I have heard the term "wrapping" or "board wrapping" in historical references by Steve Wozniak and the original Homebrew Computer Club as well. Could someone describe what this "wrapping" process entails? Is this essentially translating the Verilog into physical wires and pins?


It's a way of cold-welding wires to the pins of electronics [1]. It's pretty much fallen out of use as computers have gotten too small for the technique, but it's nice for prototyping because it's easier to undo than solder and more permanent than a breadboard.

[1]: https://en.wikipedia.org/wiki/Wire_wrap


> easier to undo than solder

Bonus fun: there would almost always be multiple wiring errors to find after wire-wrapping.

You start with empty wirewrap sockets, wirewrap the board, then 'buzz' the board with an audio buzzer to check every connection, then add the electronic chips, then start debugging the circuit.


I still use wire-wrapping with veroboards quite often for my home hobby. I like this method because, above a certain frequency, it is fairly difficult (nearly impossible) to use breadboards due to electromagnetic issues. The goal is to test different configurations, rapidly try a new component, and explore new ideas quickly. On veroboard, the wire-wrapping method enables very dense connections.


What software is required to compile, debug, and test the Verilog code for this project or other similar projects?

I used Xilinx VTPro 20 10+ years ago; I'd like to know the state of FPGA software tools today.


The source code is compatible with Altera Quartus II. You can also execute the A2Z emulator on your PC, without the Altera suite.


How long did it take you to do this? Did you have previous experience with all the various aspects (compiler, FPGA, instruction set etc) before or did you build it up as you went along?


I am an autodidact in all these things (FPGAs, compilers). This is my first FPGA/Verilog project and my first compiler project; I learned these things specifically for this project. It took me 2 years to complete, during evenings and weekends. I haven't counted exactly, but probably between 200 and 400 hours of work.


Best part of the FPGA class I took in college was writing a processor from scratch. ALU, program counter, control logic, all in VHDL.

Wish I'd taken the follow-on course about writing peripherals.


Same. We had/got to implement a keyboard controller and VGA output as well, and the grading was based on our system running the prof's C programs, taking input, and producing correct output. Lots of late nights, but great fun when it worked.


Any recommended resources for getting into FPGA development? I've always been interested, but don't know where to start.


It really depends on your current skills. If you already know about electronics and roughly what an FPGA is, and if you know C programming, then you can jump rapidly to FPGAs and Verilog. One good and very condensed training course is below: http://www.ee.ic.ac.uk/pcheung/teaching/ee2_digital/Altera%2...


I was reading through the gcc source code yesterday and found the moxie architecture, which seems like quite a similar, very small project. It is from the author of libffi and includes gcc, binutils, and qemu ports.

It's probably a nice example of how to take this further and implement GNU toolchain support for something like this.


From https://hackaday.io/project/18206-a2z-computer/log/71637-5-a...

I'm afraid Linux and a C compiler are totally not feasible.

A2Z lacks many things to achieve the goal of C retargeting and Linux porting.

- A2Z only manages direct addressing in hardware, with no complex address computation. If you want to implement a data stack or recursive functions, direct addressing is not enough. You cannot retarget a C compiler with only direct addressing (or if you emulate complex addressing modes by software, it would be very very slow).

- A2Z has no interrupt management. You cannot implement or port a pre-emptive multitasking OS (Linux) without interrupts.

- A2Z has no memory management unit.

- A2Z’s ALU is compatible with nothing.


Linux is probably out of the question, but lack of addressing modes won't stop a C(-ish) compiler implementation; one only needs to look at microcontrollers like the PIC family, the 8051, and the 6502 to see how far they've been pushed: C compilers are available for all of them.

> or if you emulate complex addressing modes by software, it would be very very slow

This is exactly what 8051 compilers do, and it's actually acceptably fast in practice.

...and Linux is only "out of the question" if you rule out any sort of emulation. Otherwise... well, just take a look:

http://dmitry.gr/?r=05.Projects&proj=07.%20Linux%20on%208bit


Maybe I have not understood in detail what you mean... but my A2Z CPU has no internal (hardware) stack pointer, unlike the 8051. The only available addressing mode is direct addressing. Of course, you can emulate indirect addressing modes, and you can emulate 32-bit manipulations... but it will not be optimal at all (very slow compared to programs using only direct addressing). Here, on A2Z, the compiler matches the simplicity of the CPU architecture; it is the same philosophy: only static allocation for variables. If I had wanted a custom CPU compatible with a C compiler and with Linux, the CPU would have been totally different.
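
To illustrate the addressing-mode point, here is a toy sketch in Python (not A2Z's real instruction format): with direct addressing the address is a constant baked into the instruction, while a C-style stack needs an address computed at run time, which on a CPU like this must be emulated in software.

    mem = [0] * 256
    SP = 255                      # a memory cell used as a software "stack pointer"

    # Direct addressing: the address (42) is fixed inside the instruction itself.
    mem[42] = mem[42] + 1

    # Indirect addressing (what a C stack needs): the address is computed at
    # run time, e.g. a local variable at offset 3 from the stack pointer.
    addr = mem[SP] + 3            # extra instructions the hardware would otherwise do
    mem[addr] = mem[addr] + 1

    # On a direct-addressing-only CPU, every such access expands into a small
    # software routine like the two lines above, hence the slowdown.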


This makes me think about resurrecting my little toy 32-bit RISC CPU...


Absolutely Brilliant!!!


Wonderful!



