Hacker News

C is tightly coupled and co-evolved with the Von Neumann architecture. If you understand C you can better understand that architecture, but it's far from the only one. Beyond the world of single-core CPUs, systems rarely hyper-specialize the way the C/Von Neumann pairing has (focusing all its energy on ALU throughput). And the larger (and more distributed) the systems we build, the less they resemble Von Neumann machines.

So while it's realistic to embrace C for many tasks, it's wrong to convince yourself that "the rest follows" from C.




Can you recommend some non-Von Neumann architectures worth learning about? Maybe you agree with these suggestions I found by Googling, or maybe you have other ideas: https://stackoverflow.com/questions/1806490/what-are-some-ex...


The most concrete examples of Harvard architecture devices I know of are some older small microcontrollers (e.g. the PIC10 and PIC12 families) and purpose-built DSPs (e.g. the Analog Devices SHARC and TI C6x families). I'm pretty sure that some GPU shader cores are also Harvard architecture, but I've heard that mainstream vendors have moved to a von Neumann model.


I second this; I'd like some recommendations too. Are there university courses out there that teach non-Von Neumann architectures, and what are the application areas? I could Google it myself, but my experience with HN is that if someone here recommends a course, it's a good course. With Google, not so much.


The Mill architecture:

> the mill uses a novel temporal register addressing scheme, the belt, which has been proposed by Ivan Godard to greatly reduce the complexity of processor hardware, specifically the number of internal registers. It aids understanding to view the belt as a moving conveyor belt where the oldest values drop off the belt and vanish.

https://en.wikipedia.org/wiki/Mill_CPU_Architecture


What architectures are (1) not incredibly niche, (2) have practical hardware in the wild, and (3) are so divergent from Von Neumann as to dramatically change the principles applicable to C?


I was thinking bigger than a single CPU. For example: 2 computers (even if they're both Von Neumann machines) together comprise a non-Von Neumann system. Some single-device examples are pretty common, like GPUs and NICs.

But what I really had in mind are systems that communicate by passing messages. Distributed systems certainly have an "architecture", but it spans many machines; and communication occurs via messages (RDMA being an exception).

Even modern CPUs contain non-Von Neumann features like multiple cores, pipelines, and out-of-order execution, so the line gets blurry. To a large extent modern CPUs enable C-style programming with a lot of contrivances to hide the fact that they're not quite Von Neumann anymore. Dealing with the different architecture becomes the compiler's job.

"Thinking in C" hinges on the idea that the amount of addressable memory is several orders of magnitude larger than the number of cores, and that you can only modify those memory words one at a time.


FPGAs & CPLDs.

Of course those aren't really CPUs to be programmed with software; they're another level of abstraction down. But they're pretty common, and hardware description languages are vastly different from C. The inherent massive parallelism of FPGAs and the resulting combinatorial-by-default languages (sequential only when explicitly declared) require a very different way of thinking.



