As I've heard it explained, RISC in practice is less about "an absolutely minimalist instruction set" and more about "don't add any assembly programmer conveniences or other such cleverness, rely on compilers instead of frontend silicon when possible".

Although as I recall from reading the RISC-V spec, RISC-V was rather particular about not adding "combo" instructions when common instruction sequences can be fused by the frontend.
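
To make the fusion idea concrete, here's a rough sketch (plain C, illustrative only; the function name is mine, and the exact instructions a compiler emits depend on the target and flags):

  /* A plain indexed load. RISC-V has no scaled/indexed load instruction,
     so the compiler emits a short sequence; the argument is that a
     frontend can fuse that sequence into one internal op rather than
     the ISA defining a "combo" instruction for it. */
  int load_indexed(const int *base, long i) {
      /* typical RV64 output, roughly:
           slli t0, a1, 2    # scale the index by sizeof(int)
           add  t0, a0, t0   # form the address
           lw   a0, 0(t0)    # do the load */
      return base[i];
  }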

My (far from expert) impression of RISC-V's shortcomings versus x86/ARM is more that the specs were written starting with the very basic embedded-chip stuff, and then over time more application-CPU extensions were added. (The base RV32I spec doesn't even include integer multiplication; that's the separate M extension.) Unfortunately they took a long time to get around to finishing the bikeshedding on the bit-twiddling and SIMD/vector extensions, which resulted in the current functionality gaps we're talking about.
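
On the RV32I point, a small hedged example (illustrative C again; my recollection is that without the M extension GCC falls back to a libgcc helper such as __mulsi3, but the exact helper and output depend on the toolchain):

  /* Built for plain rv32i this can't be a single instruction, so the
     compiler typically emits a call to a library multiply routine;
     built with the M extension (rv32im) it's just "mul a0, a0, a1". */
  int multiply(int a, int b) {
      return a * b;
  }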

So I don't think those gaps are due to RISC fundamentalism; there's no such thing.




Put another way, "try to avoid instructions that can't be executed in a single clock cycle, as those introduce silicon complexity".


But that's not even close to true either, e.g. any division or memory operation takes multiple cycles.

In practice there's no such thing as "RISC" or "CISC" anymore, really; they've all pretty much converged. At best you can say "RISC" now just means there aren't any mixed load + ALU instructions, but those aren't really used much in x86 either.
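
For example (a sketch; the function name is mine, and the exact codegen varies by compiler and flags):

  /* One source line, two styles of codegen:
     x86-64 can fold the memory operand into the ALU op,
       e.g.  add eax, DWORD PTR [rsi]
     while a load/store ISA like RISC-V splits it into two instructions,
       e.g.  lw  t0, 0(a1)
             add a0, a0, t0 */
  int accumulate(int acc, const int *p) {
      return acc + *p;
  }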


You've hit the nail on the head. Really, when people complain about CISC vs RISC, they are mostly complaining about two particular things. The first is that x86 processors carry legacy baggage (aka they have had a long history of success that continues to this day) and the second is that x86 has a lot of variable length instructions. After that, most of the complaints are very nit-picky, such as the number of general purpose registers and how they are named.


>and more about "don't add any assembly programmer conveniences or other such cleverness, rely on compilers instead of frontend silicon when possible"

What are the advantages of that?


It shifts implementation complexity from hardware onto software. It's not an inherent advantage, but an extra compiler pass is generally cheaper than increased silicon die area, for example.

On a slight tangent, from a security perspective, if your silicon is "too clever" in a way that introduces security bugs, you're screwed. On the other hand, software can be patched.


I honestly find the lack of compiler/interpreter complexity disheartening.

It often feels like as a community we don't have an interest in making better tools than those we started with.

Communicating with the compiler, generating code with code, and getting information back from the compiler should all be standard things. Most code shouldn't need them, but if we also had better general access to profiling across our services, we could then have specialists within our teams break out the special tools and improve critical sections.

I understand that many of us work on projects with already absurd build times, but I feel that is a side effect of a refusal to improve CI/CD/build tools in a similar way.

If you have ever worked on a modern TypeScript framework app, you'll understand what I mean. You can create decorators and macros talking to the TypeScript compiler and asking it to generate some extra JS or modify what it generates. And the whole framework sits there running partial re-builds and refreshing your browser for you.

It makes things like golang feel like they were made in the 80s.

Freaking golang... I get it, macros and decorators and generics are over-used. But I am making a library to standardize something across all 2,100 developers within my company... I need some meta-programming tools please.


I usually talk a lot about Oberon or Limbo; however, their designs were constrained by the hardware costs of the 1990s and by how much more the alternatives asked for in resources.

We are three decades removed from those days, with more than enough hardware everywhere to run those systems; back then it was only available in universities or companies with very deep pockets.

Yet the Go culture hasn't updated itself, or only very reluctantly, and it repeats the usual mistakes that were already visible when 1.0 came out.

And since they hit gold with the CNCF projects, it's pretty much unavoidable for some work.


Complexity that the compiler removes doesn't have to be handled by the CPU at runtime.


Sure, but that's not necessarily at odds with "programmer conveniences or other such cleverness", is it?


It is, in the sense that those are conveniences only for assembly programmers, and RISC-V's view is that, to the extent possible, the assembly-programmer interface should be handled by pseudo-instructions that disappear when you go to machine code, rather than by making the chip deal with them (for example, "mv rd, rs" is just "addi rd, rs, 0", and "ret" is just "jalr x0, 0(ra)").


Instructions can be completed in one clock cycle, which removes a lot of complexity compared to instructions that require multiple clock cycles.

Removed complexity means you can fit more stuff into the same amount of silicon, and have it be quicker with less power.


That's not exactly it; quite a few RISC-style instructions require multiple (sometimes many) clock cycles to complete, such as mul/div and floating-point math, and branch instructions can often take more than one clock cycle as well. Once you throw in pipelining, caches, MMUs, atomics... "one clock cycle" doesn't really mean a lot, especially since more advanced CPUs will ideally retire multiple instructions per clock.

Sure, addition and moving bits between registers takes one clock cycle, but those kinds of instructions take one clock cycle on CISC as well. And very tiny RISC microcontrollers can take more than one cycle for adds and shifts if you're really stingy with the silicon.

(Memory operations will of course take multiple cycles too, but that's not the CPU's fault.)


>quite a few RISC-style instructions require multiple (sometimes many) clock cycles to complete, such as mul/div, floating point math

Which seems like stuff you'd want support for, but which this is seemingly arguing against?


It seems contradictory because the "one clock per instruction" is mostly a misconception, at least with respect to anything even remotely modern.

https://retrocomputing.stackexchange.com/a/14509


Got it, so it's more about removing microcode.


The biggest divide is that no more than a single exception can occur in a RISC instruction, but you can have an indefinite number of page faults in something like an x86 rep movs.


That's not even true, as you can get lots of exceptions for the same instruction. For example a load can raise all of these and more (but only one will be reported at a time): instruction fetch page fault, load misaligned, and load page fault.

More characteristic are the assumptions about side effects (none for integer ops, cumulative exception flags for FP) and about the number of register-file ports needed.



