
I think there's a lot of truth to this, in that it's good to understand all the layers below you. But there's also some truth to the opposite: Did anyone ever really understand all the layers, or even most of them? Even if you knew all the details of how your microchip worked (weren't those details usually proprietary?), you might have wanted your code to work on other chips, or even on future versions of your current chip. At some point, we always have to close our eyes and abstract away parts of the problem behind the contract we believe those parts will follow.



You could deliberately limit yourself to a microprocessor that doesn't have advanced features and is implemented on an FPGA. You'd own all the RTL and could make a modern ASIC if you really wanted.

You can understand all the layers if the design is deliberately kept simple: something like Project Oberon, but based on a system that runs C code (and the UI would need to be a little more modern).

You can understand all the layers, but the system has to be designed to be understood.
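
As a concrete (and deliberately toy) illustration of what "designed to be understood" can mean: the entire execution model of a small machine can fit on a page of C. The instruction set and names below are invented for the example - this isn't Oberon's RISC, just a sketch of the idea.

    #include <stdint.h>
    #include <stdio.h>

    /* Toy machine: 4 registers, 256 bytes of RAM, 4 opcodes. */
    /* Unused memory is zero, and opcode 0 is HALT, so a      */
    /* runaway program simply stops.                          */
    enum { OP_HALT, OP_LOADI, OP_ADD, OP_PRINT };

    typedef struct {
        uint8_t  ram[256];
        uint32_t reg[4];
        uint8_t  pc;
    } Machine;

    static void run(Machine *m) {
        for (;;) {
            uint8_t op = m->ram[m->pc++];
            switch (op) {
            case OP_HALT:
                return;
            case OP_LOADI: {                 /* LOADI r, imm     */
                uint8_t r = m->ram[m->pc++];
                m->reg[r] = m->ram[m->pc++];
                break;
            }
            case OP_ADD: {                   /* ADD r1, r2 -> r1 */
                uint8_t r1 = m->ram[m->pc++];
                uint8_t r2 = m->ram[m->pc++];
                m->reg[r1] += m->reg[r2];
                break;
            }
            case OP_PRINT:                   /* PRINT r          */
                printf("%u\n", (unsigned)m->reg[m->ram[m->pc++]]);
                break;
            }
        }
    }

    int main(void) {
        /* Program: r0 = 2; r1 = 3; r0 += r1; print r0; halt */
        Machine m = { .ram = { OP_LOADI, 0, 2, OP_LOADI, 1, 3,
                               OP_ADD, 0, 1, OP_PRINT, 0, OP_HALT } };
        run(&m);
        return 0;
    }

When the whole execution model fits on a page, there's nothing left at that layer you have to take on faith.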


You can have a system that is understandable (to someone not at that layer), or you can have a system that is robust (not crash all the time) and functional (more than just a bare-bones feature set). It's the things you have to do to make it robust and functional that make it hard to understand.

As time goes by, the understandable version and the useful version diverge further and further.


I don't think that's necessarily the case, but it's a damn good point. I'd posit that a good number of modern systems' advancements could be retroactively applied to a much less sophisticated "base" system while still maintaining reliability, because they're born out of increasingly sophisticated programming practices that are widely established at this point (unit testing, fuzzing, even formal methods).
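
A sketch of what I mean, with everything below invented for the example: take a component small enough to live in that kind of base system (a fixed-size ring buffer here) and bolt one of those modern practices onto it - a crude randomized test against a trivial reference model, which is most of the way to fuzzing.

    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* A deliberately simple component of the kind a small system would use. */
    #define CAP 8
    typedef struct { unsigned char buf[CAP]; int head, tail, len; } Ring;

    static int ring_push(Ring *r, unsigned char v) {
        if (r->len == CAP) return 0;              /* full */
        r->buf[r->tail] = v;
        r->tail = (r->tail + 1) % CAP;
        r->len++;
        return 1;
    }

    static int ring_pop(Ring *r, unsigned char *v) {
        if (r->len == 0) return 0;                /* empty */
        *v = r->buf[r->head];
        r->head = (r->head + 1) % CAP;
        r->len--;
        return 1;
    }

    /* Randomized check against a naive reference model: modern test */
    /* habits applied to 1980s-sized code.                           */
    int main(void) {
        unsigned char model[CAP];
        int mlen = 0;
        Ring r = {0};
        srand(12345);                             /* reproducible run */
        for (long i = 0; i < 1000000; i++) {
            if (rand() % 2) {
                unsigned char v = (unsigned char)(rand() % 256);
                int ok = ring_push(&r, v);
                if (mlen < CAP) { assert(ok); model[mlen++] = v; }
                else            { assert(!ok); }
            } else {
                unsigned char got;
                int ok = ring_pop(&r, &got);
                if (mlen > 0) {
                    assert(ok && got == model[0]);
                    for (int j = 1; j < mlen; j++) model[j - 1] = model[j];
                    mlen--;
                } else {
                    assert(!ok);
                }
            }
            assert(r.len == mlen);
        }
        puts("ring buffer survived 1,000,000 random ops");
        return 0;
    }

None of that depends on the underlying system being complicated - the reliability comes from the practice, not from the sophistication of the thing being tested.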

I'm no longer sold on "added functionality" though. I want my computers to be dumber, do less, and be fully owned; I want a 32-bit Z80 (or something like it) that runs at 400MHz on an FPGA, and a graphics subsystem that's primitive and easy to understand and work on. It won't play Crysis - fine. But DOOM, sure! (There's a killer project implementing a multi-core Z80 CPU on an FPGA; I'll find a link to it later.)

It seems like we made a deal: we get more and more functionality, but it's set us up for computing to be taken away from us (walled gardens, the modern internet, telemetry, software that requires subscriptions, updates you don't control, etc.).

But you've brought up an interesting point about reliability and its effect on system complexity that I'll need to think about more deeply.




