The whole reason libraries and frameworks were created is so everyone doesn't have to dig down to the bare metal to get a task completed. If my manager asked me to provision a server to run an application on and I sat down and built my own hardware from scratch and then wrote my own OS rather than clicking a single button in VMware, I'd be fired pretty quickly.
The fundamental difference is not the amount of free time. It's just a question of interest and what you are good at. There are people who spend their endless free time making shiny web apps, just like there are people who spend it designing CPUs for fun.
Depending on your line of work, digging down to the bare metal does help get your job done quickly. I know a number of embedded systems programmers; bare metal is their job. There are people paid to work on the Linux kernel, to program FPGAs for high-frequency trading, and so on.
And learning the low level can help you do your job quickly even if you are not a systems programmer. All abstractions are leaky, and inevitably some low-level problem will bubble up into your high-level application and you will have to deal with it. If you understand the low level, it may take half, or a tenth, of the time to figure out and fix the problem.
When the subject of many-layered JS frameworks and dependency graphs with hundreds of mini libraries comes up, I'm always reminded of Rasmus's 30 second ajax tutorial: https://web.archive.org/web/20060507105529/http://news.php.n... (And the modern equivalent: http://youmightnotneedjquery.com/) Very often libraries and frameworks are brought in because they "help us go faster [because of x, y, z]", with x, y, zs like "we don't have to think about that problem" or "the global architecture/structure is taken care of", but the cost of the dependencies sometimes outweighs the cost of thinking about the problem and doing it yourself. Libraries and frameworks are tradeoffs; you'll likely use a lot of them if you look at every layer you can actually influence, but they're not necessarily net boons.
For example, ORMs. If you try to use an ORM without knowing SQL you will have a bad time as soon as you hit performance issues or you have to do something that doesn't fit quite nicely into the ORM model. I have yet to see a project that uses an ORM that doesn't use SQL in places.
ORMs are not necessarily net positive. They make some things simpler at the cost of an extra layer of indirection.
Going a level deeper, knowing how query planners work at a high level will help you write performant SQL queries.
Going as far as understanding how your RDBMS is implemented should not typically be necessary. It would help for finding bugs in the RDBMS, but those should be very rare.
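As a concrete illustration of the query-planner point: most databases will show you their plan if you ask. Here is a minimal sketch using SQLite's C API and EXPLAIN QUERY PLAN (the table, column, and index names are made up for illustration; link with -lsqlite3):

    #include <stdio.h>
    #include <sqlite3.h>

    int main(void) {
        sqlite3 *db;
        sqlite3_stmt *stmt;

        if (sqlite3_open(":memory:", &db) != SQLITE_OK) return 1;

        /* Hypothetical schema: without the index the planner reports a full
           table scan, with it an index search. */
        sqlite3_exec(db,
                     "CREATE TABLE users(id INTEGER PRIMARY KEY, email TEXT);"
                     "CREATE INDEX idx_users_email ON users(email);",
                     NULL, NULL, NULL);

        const char *sql =
            "EXPLAIN QUERY PLAN SELECT id FROM users WHERE email = 'a@b.c';";
        if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) != SQLITE_OK) return 1;

        /* Each result row is one step of the plan, e.g. something like
           "SEARCH users USING COVERING INDEX idx_users_email (email=?)"
           rather than a full table scan. */
        while (sqlite3_step(stmt) == SQLITE_ROW)
            printf("%s\n", (const char *)sqlite3_column_text(stmt, 3));

        sqlite3_finalize(stmt);
        sqlite3_close(db);
        return 0;
    }

Reading that output when a query turns out to be slow is most of what "knowing how the planner works at a high level" buys you day to day.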
I would be curious to know what your level of proficiency was in some of these areas such as C, Verilog, writing assemblers, programming FPGAs, board wrapping, etc.
How much learning did you do as you went, versus how much prior exposure did you have to some of these things?
Are there any resources you found helpful or that you would recommend to others who want to undertake a similar project?
Not two different kinds of personalities, just two different kinds of work. Many people find ways to do both.
The minimalist role is better for learning and building for the art of it — when the purpose of the making is the making itself. The article is a great example of this.
The other role is when that learning needs to be applied to a further end. E.g. when shipping product, code is a "means," and pre-built libraries and layers of abstraction are leverage.
(if it doesn’t exist already. The best I can google is https://github.com/hamsternz/FPGA_Webserver, which is incomplete, misses the ‘npm to’ part, and seems abandoned)
I grew up taking things apart, and I loved the courses where we built logic gates or modified compiler or interpreter code.
I now build things on the shoulders of giants.
But when I need to, I know I can dive to the deepest levels to debug something, or I can write or customise any part of the stack.
It sure is a mindset shift and a context switch. I consider choosing the right moment to switch approach to be one of the most important and hardest tasks of developing software systems.
Back in college ten years ago I was told that while we technically have 24-bit ADCs, it's more of a marketing stunt, because the last few bits are essentially garbage: the voltage levels corresponding to some of the more significant bits are somewhat off.
When I went to college I had a choice of majoring in Computer Science or Electrical Engineering (there weren't computer systems degrees at the time). Since I really really wanted to know how to build a computer from the ground up (I had assembled one from a kit already in high school and was hungry to know more!) I chose getting my degree in EE with a minor in CS. I don't know where you are in your studies but if you have the opportunity you might find, as I did, that this path scratches that particular itch.
There are a number of books you might track down which you would find interesting given what you now have learned about computer internals. One is "A DEC view on hardware design" which talks about the minicomputers and their architecture that DEC designed. "Introduction to Computer Systems using the PDP-11 and PASCAL" (McGraw-Hill computer science series). And "Digital Computer Design" by Kline. All are out of print but a good computer science section in a library should have them.
One of the reasons I enjoy the older books on computer design is that they assume you don't know why different paths were chosen and so they explain in more detail why one path is better than another. Modern texts often assume you have learned these best practices elsewhere and so treat these design decisions as given knowledge.
If you ever do decide to pick it up again, the two places that you might find rewarding would be automated co-processing (like DMA controllers) and complex mathematical instructions (like floating point).
Cool project nonetheless. I built a custom CPU on an FPGA as a school project once. Far less complicated than A2Z; IIRC I copied the instruction set from a different CPU so I could use the assembler (and subsequently the C compiler) from that vendor. I can recommend doing such a project (VHDL is not that hard to learn); it's an awesome learning experience!
I'd say it's even more important as a search aid. I run through hundreds of papers looking for the next dozen or so worth submitting to a wide audience. Search will bring me results that span decades, and dates in prominent places help me quickly filter or discover stuff depending on what I'm looking for. For instance, I was looking for CompCert-like projects and getting results going back to the 1980s. Verification of realistic, low-level programs didn't become feasible until the 2000s, so I immediately filtered out anything before that time. Where the date shows up also varies depending on what kind of site hosts the article. And you bet people might not have thought about this use case when publishing their papers in the '80s. Yet the standard practice of dating the stuff still helped me.
This project is cool and done with the correct intentions by the author, but there are other projects with the same correct intentions that are already much farther along.
>"I have built this development board by myself, using wrapping technique, because I couldn’t find any board with 2MB of SRAM arranged in 16bits. I wanted SRAM, instead of DRAM, for the simplicity."
I have heard the term "wrapping" or "board wrapping" in historical references by Steve Wozniak and the original Homebrew Computer Club as well. Could someone describe what this "wrapping" process entails? Is it essentially translating the Verilog into physical wires and pins?
>> easier to undo than solder
You start with empty wirewrap sockets, wirewrap the board, then 'buzz' the board with an audio buzzer to check every connection, then add the electronic chips, then start debugging the circuit.
I used Xilinx VTPro 20 10+ years ago; I'd like to know the state of FPGA software tools today.
Wish I'd taken the follow-on course about writing peripherals.
It's probably a nice example of how to take this further and implement GNU toolchain support for something like this.
I'm afraid Linux and a C compiler are totally not feasible.
A2Z lacks many of the things needed to retarget a C compiler and port Linux:
- A2Z only implements direct addressing in hardware; there is no complex address computation. If you want to implement a data stack or recursive functions, direct addressing is not enough, and you cannot retarget a C compiler with only direct addressing (or if you emulate complex addressing modes in software, it would be very, very slow; see the sketch after this list).
- A2Z has no interrupt management. You cannot implement or port a pre-emptive multitasking OS (Linux) without interrupt management.
- A2Z has no memory management unit.
- A2Z’s ALU is compatible with nothing.
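To make the first point concrete, here is a minimal sketch in plain C (nothing A2Z-specific) of why a data stack and recursion need more than direct addressing: each active call has its own copy of the locals, so the compiled code has to address them as "stack pointer + offset", an indexed/indirect access that a direct-addressing-only ISA cannot express:

    #include <stdio.h>

    /* Every recursive activation gets its own copy of n and partial on the
       data stack. The generated code must load and store them at "stack
       pointer + offset"; with direct addressing only, each operand address
       would have to be fixed at assembly time, which breaks as soon as two
       activations of the same function are live at once. */
    static unsigned factorial(unsigned n) {
        unsigned partial;              /* lives in the current stack frame */
        if (n <= 1)
            return 1;
        partial = factorial(n - 1);    /* a second frame now exists below ours */
        return n * partial;            /* must still find *our* n and partial */
    }

    int main(void) {
        printf("%u\n", factorial(5));  /* prints 120 */
        return 0;
    }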
> or if you emulate complex addressing modes in software, it would be very, very slow
This is exactly what 8051 compilers do, and it's actually acceptably fast in practice.
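For a rough idea of what "emulating an addressing mode in software" looks like, here is a hedged, host-runnable sketch (hypothetical names, not lifted from any particular 8051 compiler): instead of emitting one indexed-load instruction, the compiler emits a call to a small runtime helper that computes the effective address and then uses the simple load the ISA does have:

    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in for the target's memory so the sketch runs on a PC. */
    static uint8_t memory[65536];

    /* The only load the imaginary ISA offers: one byte at an absolute address. */
    static uint8_t load_direct(uint16_t addr) {
        return memory[addr];
    }

    /* Runtime helper the compiler calls whenever the source needs
       "base + index" addressing the ISA cannot encode: compute the effective
       address with ordinary arithmetic, then do a plain direct load. A handful
       of instructions instead of one; slower, but workable. */
    static uint8_t load_indexed(uint16_t base, uint16_t index) {
        return load_direct((uint16_t)(base + index));
    }

    int main(void) {
        memory[0x1005] = 42;
        printf("%u\n", (unsigned)load_indexed(0x1000, 5));  /* prints 42 */
        return 0;
    }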
...and Linux is only "out of the question" if you rule out any sort of emulation. Otherwise... well, just take a look: