Hacker News
ARM Thumb-2 has a higher code density than RISC-V (RV32IMAC req. ~18% more text) (segger.com)
3 points by Sweepi on Oct 7, 2022 | 10 comments



Note that the article is specifically about the SEGGER toolchain, not to be confused with GNU's or LLVM's.

Personally, I won't use a toolchain that isn't open source, so I couldn't care less about how bad SEGGER is at producing dense RISC-V code.

As for the actual facts, 32-bit RISC-V is very dense, but Thumb-2 was better at some point. It is indeed still better than the ratified spec, but watch out for the Zc and B extensions.

Last time I checked, 32-bit RISC-V was already denser than Thumb-2 with these extensions.

In 64-bit, RISC-V has held the code density crown for a long time now. It wins by a wide margin; there's no contest. And it is indeed also getting even better with these extensions.
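
To make the extension point a bit more concrete, here is the kind of operation the bit-manipulation extension (Zbb, part of B) shrinks. This is only a sketch; exact instruction counts depend on the compiler and the -march string:

    #include <stdint.h>

    /* Count trailing zeros, a common bit-twiddling primitive.
       - Thumb-2 (e.g. Cortex-M3/M4): RBIT + CLZ, two instructions.
       - RV32IMAC without B/Zbb: typically a longer instruction
         sequence or a call into a runtime helper.
       - RV32 with Zbb: a single ctz instruction. */
    static inline uint32_t count_trailing_zeros(uint32_t x)
    {
        /* __builtin_ctz is undefined for x == 0, so guard it. */
        return x ? (uint32_t)__builtin_ctz(x) : 32u;
    }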


> As for the actual facts, 32-bit RISC-V is very dense, but Thumb-2 was better at some point

Seeing a graphics library as part of the benchmark reminded me of the Acorn RISC Machine's origins: a graphical workstation in which the processor does the bitmapping. Naturally Thumb-2, as a descendant, would be strong here, and those instructions also work out well for crypto.

My second point would be, as you imply, that RISC-V has mostly unused encoding space, which could be filled in to make it more performant. To make it comparable to Thumb-2, that would mean adding bit-manipulation instructions, which the base RISC-V ISA notably lacks and ARM has. Your comment is spot on.
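
For instance (a sketch, not a claim about any particular compiler's exact output), a simple bit-field extract:

    #include <stdint.h>

    /* Extract an 8-bit field starting at bit 4.
       - Thumb-2: a single UBFX (unsigned bit-field extract).
       - Base RV32I: typically srli + andi, two instructions,
         since the base ISA has no bit-field extract. */
    static inline uint32_t extract_field(uint32_t x)
    {
        return (x >> 4) & 0xFFu;
    }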

Third, at best this is a benchmark for high-end embedded systems, not what I would call embedded: a market that would be better served by a genuine operating system, were it not for the power requirements that would entail.

Lastly, there is competition for their benchmark in the form of a RISC-V-backed initiative of the same name, Embench, targeting embedded. Quoting:

"Dhrystone and Coremark have been the defacto standard microcontroller benchmark suites for the last thirty years, but these benchmarks no longer reflect the needs of modern embedded systems. Embench™ was explicitly designed to meet the requirements of modern connected embedded systems. The benchmarks are free, relevant, portable, and well implemented"

That benchmark puts Thumb-2 at a 7% advantage (https://youtu.be/xX0krFFvlUM?t=1200), but it is comparing against a slightly extended RV32IMC.


HN's obsession with RISC-V code density reminds me of the 1980s and the first heyday of RISC architectures (SPARC, MIPS, HP-PA, POWER, and later Alpha).

End users don't care at all about code density. If their application runs faster, that chip is what they'll want. (Compare a DECstation with a MIPS CPU to a VAXstation of the same era to get an idea.)

We don't have high-performance implementations of RISC-V yet, though, so these debates can go on and on. I'm looking forward to seeing actual high performance implementations from Tenstorrent and others, so there's something concrete to discuss.


All of the architectures you mention were targeted at high-performance systems, not embedded systems. RISC-V is the opposite: it is simple enough to implement yourself. Despite this, its code density is comparable to or better than existing options. That is a major design feat, and not something to be dismissed by calling it an obsession.

Especially because code density ends up limiting the speed one can get out of a design when memory bandwidth is the bottleneck. So one might expect RISC-V to outpace all other architectures once a neat set of extensions has been worked out. Admittedly, that is taking a frustratingly long time.
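
A rough way to see the bandwidth argument (illustrative numbers, not measurements):

    fetch bandwidth of 16 bytes/cycle
    at 4 bytes/instruction  -> at most 4 instructions fetched per cycle
    at ~3 bytes/instruction -> roughly 5 instructions fetched per cycle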

However, you do sound like a self-righteous open-source "customer" who can only spitefully complain about a product that he didn't lift a finger to help deliver. That is of course wrongdoing, only somewhat understandable because we similarly inherited a world to live in, comparable to that open-source product: it isn't up to standard, and perhaps never will be, and democracy doesn't allow us to make the changes we would like to make it less of a bitch.


I was not being critical of RISC-V, or dismissing it. I was saying that performance is the metric end users care about, and that HN commenters spend too much time arguing endlessly about the code density metric.

I don't understand your final paragraph even a little bit. It's a bit odd.


Code density and performance go hand in hand.

Higher code density means a higher likelihood that performance-critical code will fit in L1$.
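
Back-of-envelope, taking the headline ~18% figure at face value and picking an arbitrary hot-path size:

    L1 I-cache:                     32 KiB
    hot path at Thumb-2 density:    28 KiB             -> fits
    same code at RV32IMAC density:  28 * 1.18 ≈ 33 KiB -> spills out of L1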


Code density is one factor in a tremendously complex system. The entire design philosophy of RISC architectures for the last 35+ years is predicated on the notion that it's not the most important one, and shouldn't be focused on in isolation.


The RISC paper was all about the need to strongly justify any complexity. I don't quite remember it even touching on code density.


From the abstract of the RISC-1 paper: "Although instructions are simpler, the average length of programs was found not to exceed programs for DEC VAX 11 by more than a factor of 2."


Chip and ISA are almost unrelated things. No matter how much you emphasize end-product performance, you cannot deny that there SHOULD be a metric for comparing ISA designs.



