The risk of RISC-V: What's going on at SiFive? (morethanmoore.substack.com)



I'll start by saying that I've been a chip designer for the last 25 years.

A former company I worked at about 6 years ago talked to SiFive about a partnership. I'm not sure why the deal fell apart but I was never sure what SiFive's business model was. It seemed to be a lot of different things.

They had the cores that you could license, but then they also made chips, like an SoC similar to the Broadcom chips on Raspberry Pi systems.

In August 2020, SiFive bought Open Silicon.

Open Silicon was a design services company started by former Intel people. We hear a lot about Intel wanting to be a foundry like TSMC for external customers. A lot of design services work is physical design (synthesis, place and route, integrating third-party IP like PCIe, DDR, fabrics, and controller IP). Intel tries to be a foundry about every 10 years and then stops: either their internal volume gets too high to handle external customers, or they don't get enough external customers.

SiFive then rebranded the Open Silicon team as OpenFive. I thought they were going to do design services for external customers integrating together their internal RISC-V cores.

In September 2022, SiFive sold the OpenFive group to Alphawave. I assume that the design services stuff didn't go well. Most of the companies I know that are using RISC-V are big companies that don't really go to a third-party design services company. I'm going to guess that the VCs didn't like the cash burn and lack of revenue and sold OpenFive, but now who knows what will happen.

https://en.wikipedia.org/wiki/Open-Silicon

https://www.datacenterdynamics.com/en/news/alphawave-acquire...


The CEO of SiFive at the time of the Open Silicon purchase used to be the CEO of Open Silicon.

Big doubt that deal was in the best interest of SiFive; maybe that's why he isn't CEO anymore.


Thanks. I totally missed that. I knew people at Open Silicon around 2010-2014 and interviewed there but didn't join. This industry is so small you run into the same people again and again.


SiFive bought OpenSilicon (and Naveed moved over to SiFive) long before 2020. I'm not sure why Wikipedia says that. The SiFive Wikipedia page says they were acquired in 2018, but I thought it was before that. I thought they had purchased OpenSilicon around the time Naveed was appointed SiFive's CEO (just before I started in 2017).

My take on the SiFive / OpenSilicon venture was that they (SiFive) initially thought they were going to make their money with the Core Designer, and they wanted customers to be able to push a button and have designs move from the Core Designer (Rocket++) to Verilog to OpenSilicon, who would do all the bits you have to do to turn a Verilog design into actual chips. But when the Core Designer strategy was de-emphasized, OpenSilicon was an expensive investment that didn't make a lot of revenue, so they needed to get rid of it if there was any hope of being acquired.

I was just a peon and had no contact with the management team, so this is all just a guess on my part.


I interviewed at Open Silicon in 2010 and had a few friends that were there from 2010-2013. I'm in physical design and they had a big PD team but it sounded like a sweatshop where they just bend over and do whatever ridiculous crap the customer asks for. I actually had lunch with one of the co founders during my interview process. I got a bad vibe from the whole thing and said no thanks.


This may add a bit of colour to a technical debate going on right now in the RISC-V Profiles working group. Qualcomm have proposed dropping the C (16-bit compressed instructions) extension from the RVA23 profile (effectively the set of things to support if you want a 'standard' high-performance RISC-V core). They have two main reasons:

1. The variable length instructions (currently 16 bit or 32 bit but 48 bit on the horizon) complicate instruction fetch and decode and in particular this is a problem for high performance RISC-V implementations.

2. The C extension uses 75% of the 32-bit opcode space, which could be put to better use.

They're saying the benefits from the C extension don't outweigh the costs. They're also saying that if you move forward with the C extension in RVA23 now, there's no real backing out of it. As the software ecosystem develops, removing it once it's baked in just won't be possible; however, adding it back in later is more feasible.

SiFive strongly disagree. They believe the C extension is worth the cost and that it doesn't prevent you from building high-performance cores. They also say that there's lots of implementations with C in already, so backing out of it now disadvantages those implementations.

It could end up being the first major fragmentation in the ecosystem: Qualcomm go one way and SiFive the other (other companies also sit on one side or the other of this debate, but Qualcomm and SiFive are driving it). Indeed, the latest proposal from Krste is to do just that with a new 'RVH23' profile.

I wonder how much this development has been driving SiFive's thinking here? Clearly they are under pressure to deliver for their investors, so you can see why they want to keep things as they are rather than consider a big change. Good for SiFive, but is it good for the long-term RISC-V ecosystem?

Edit: If you want the details check out the publicly readable tech-profiles list: https://lists.riscv.org/g/tech-profiles/messages they've got recordings of the last two meetings that discussed the issue and presentations, all available via that message archive.


There's a bit more context rumbling under the surface.

Not too long ago, Qualcomm bought NUVIA, a designer of high-performance arm64 cores that can theoretically compete with Apple cores on perf. Arm pretty much immediately sued, saying that the specifics of the licenses that Qualcomm and NUVIA have mean that cores developed under NUVIA's license can't be transferred to Qualcomm's license.[0] Qualcomm obviously disagrees. Whatever happens, those cores as they exist today are going to be stuck in litigation for longer than they're relevant.

Qualcomm's proposal smells strongly like they're doing the minimum to strap a RISC-V decoder to the front of these cores. For whatever reason they seem hell-bent on only changing the part of the front end that's the 'pure function that converts bit patterns of ops to bit patterns of micro-ops'. AArch64 has only 32-bit aligned instructions, so they don't want to support anything else.

At the end of the day, the C extension really isn't that bad to support in a high-perf core if you go in wanting to support it. The canonical design (not just for RISC-V but for high-end designs like Intel and AMD too) is to have I$ lines fill into a shift register, have some hardware at whatever period your alignment boundary is that reports 'if an instruction started here, how long is it', and a second stage (logically; it doesn't have to be an actual clock stage) that looks at all of those reports, generates the instruction boundaries, and feeds them into the decoders. At this point everything is also marked for validity (i.e. did an I$ line not come in because of a TLB permissions failure or something).
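
As a rough sketch of that two-stage scheme (a plain C model I'm adding for illustration, not any particular core's logic; it ignores >32-bit encodings and the validity marking):

    #include <stdint.h>
    #include <stddef.h>

    /* RISC-V length rule: a 16-bit parcel whose low two bits are 11
       starts a 32-bit instruction; anything else is a 16-bit one. */
    static int parcel_len_bytes(uint16_t parcel) {
        return (parcel & 0x3) == 0x3 ? 4 : 2;
    }

    /* Stage one: for every 16-bit offset in the fetch window, record
       "if an instruction started here, how long would it be". This is
       the cheap, fully parallel part. */
    static void mark_lengths(const uint16_t *win, size_t n, int *len) {
        for (size_t i = 0; i < n; i++)
            len[i] = parcel_len_bytes(win[i]);
    }

    /* Stage two: starting from the known entry offset, walk the marks
       to pick out the actual instruction start points (in hardware this
       is a parallel prefix/select network, not a sequential loop). */
    static size_t pick_starts(const int *len, size_t n, size_t entry,
                              size_t *starts) {
        size_t count = 0;
        for (size_t i = entry; i < n; i += len[i] / 2)
            starts[count++] = i;   /* offsets in 16-bit parcels */
        return count;
    }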

[0] - https://www.reuters.com/legal/chips-tech-firm-arm-sues-qualc...


> Qualcomm's proposal smells strongly like they're doing the minimum to strap a RISC-V decoder to the front of these cores.

Hmm.. At about the same time as the proposal to drop C from RVA, Qualcomm also proposed an instruction-set extension [1] that smells very much of ARM's ISA (at least to my nose). It also has several issues to criticise, IMHO.

[1] https://lists.riscv.org/g/tech-profiles/attachment/332/0/cod...


The 32-bit aligned instruction assumption is probably baked into their low-level caches, branch predictors etc. That might mean much more significant work for switching to 16-bit instructions than they are willing to do.


I don't think anyone bakes instruction alignment into their caches since the early 2000s, and adding an extra bit to the branch predictors isn't that big of a deal. It's got to be the first or second stage of their front end right before the decoders.


Why not bake instruction alignment into the cache? When you can assume instructions will always be 32bit aligned, then you can simplify the icache read port and simplify the data path from the read port to the instruction decoder. Seems like it would be an oversight to not optimise for that.

Though I suspect that's an easy problem to fix. The more pressing issue is what happens after the decoders. I understand this is a very wide design, decoding say 10 instructions per cycle.

There might be a single 16-bit instruction in the middle of that 40-byte block, changing the alignment halfway through. To keep the same throughput, Qualcomm now need 20 decoders, one attempting to decode on every 16-bit boundary. The extra decoders waste power and die space.

Even worse, they somehow need to collect the first 10 valid instructions from those 20 decoders. I really doubt they have enough slack to do that inside the decode stage, or the next stage, so Qualcomm might find themselves adding an entire extra pipeline stage (probably before decode, so they can have 20 simpler length decoders feeding into 10 full decoders on the next stage) just to deal with possibly misaligned instructions.

I don't know how flexible their design is, it's quite possible adding an entire extra pipeline stage is a big deal. Much bigger than just rewriting the instruction decoders to 32bit RISC-V.


Because RISC-V was designed to be trivial to length-decode, you simply need to look at the bottom two bits of each 16-bit word to tell if it's a 32-bit or 16-bit instruction. At that point, spending the extra I$ budget isn't worth it. Those 20 'simple decoders' are literally just a single two-input NAND gate each. Adding complexity to the I$ hasn't even made sense for x86 in two decades, because of the extra area needed for the I$ versus the extra decode logic. And x86 is a place where this extra decode legitimately is an extra pipeline stage.

> I don't know how flexible their design is, it's quite possible adding an entire extra pipeline stage is a big deal. Much bigger than just rewriting the instruction decoders to 32bit RISC-V.

I'm sure it is legitimately simpler for them. I'm not sure we should bend over backwards and bring down the rest of the industry because they don't want to do it. Veyron and Tenstorrent were showing off high-perf designs with RV-C.


It doesn't matter how optimised the length decoding is. Not doing it is still faster.

For an 8-wide or 10-wide design, the propagation delays are getting too long to do it all in a single cycle. So you need the extra pipeline stage. The longer pipeline translates to more cycles wasted on branch mispredicts.

RISC-V code is only about 6-14% denser than Aarch64 [1], I'm really not sure the extra complexity is worth it. Especially since Aarch64 still ends up with a lower instruction count, so it will be faster whenever you are decode limited instead of icache limited.

> Adding complexity to the I$ hasn't even made sense for x86 in two decades

Hang on. Limiting the Icache to only 32bit aligned access actually simplifies it.

And since the NUVIA core was originally an aarch64 core, why wouldn't they optimise for hardcoded 32bit alignment and get a slightly smaller Icache?

[1] https://www.bitsnbites.eu/cisc-vs-risc-code-density/


> Hang on. Limiting the Icache to only 32bit aligned access actually simplifies it.

Even x86 only reads 16- or 32-byte aligned fields out of the I$, then shifts them. There's no extra I$ complexity. You still have to do that shift at some point, in case you jump to an address that isn't 32-byte aligned. You also ideally don't want to hit peak decode bandwidth only when starting on aligned 32-byte program counters, so that whole shift-register thing is pretty much a requirement. And that's where most of the propagation delays are.

> RISC-V code is only about 6-14% denser than Aarch64 [1], I'm really not sure the extra complexity is worth it. Especially since Aarch64 still ends up with a lower instruction count, so it will be faster whenever you are decode limited instead of icache limited.

There's heavy use of fusion, and fwiw the M1 also heavily fuses into micro-ops (and I'm sure the AArch64 morph of NUVIA's cores does too).


Under classic RISC architectures you can't jump to non-aligned addresses. That lets you specify jumps that are four times longer for the same number of bits in your jump instruction. Here's MIPS as an example:

https://en.wikibooks.org/wiki/MIPS_Assembly/Instruction_Form...


Classic RISC was targeting about 20k gates and isn't really applicable here.


AArch64 does the same thing.

https://valsamaras.medium.com/arm-64-assembly-series-branch-...

And it's not only a way of decreasing code size. It helps with security too. If you can have an innocuous-looking bit of binary starting at address X that turns into a piece of malware if you jump to instruction X+1, that's a serious problem.

https://mainisusuallyafunction.blogspot.com/2012/11/attackin...

RISC-V, I'm pretty sure, enforces 16 bit alignment and is self synchronizing so it doesn't suffer from this despite being variable length. But if it allowed the PC to be pointed at an instruction with a 1 byte offset then it might be.

As far as I'm aware every RISC ISA that's had any commercial success does this: HP PA-RISC, SPARC, POWER, MIPS, Arm, RISC-V, etc.


> And it's not only a way of decreasing code size.

And RISC-V has better code density than AArch64.

> It helps with security too. If you can have an innocuous-looking bit of binary starting at address X that turns into a piece of malware if you jump to instruction X+1, that's a serious problem.

JIT spraying attacks work just fine on aligned architectures too, hence why Linux hardened the AArch64 BPF JIT as well: https://linux-kernel.vger.kernel.narkive.com/M0Qk08uz/patch-...

Additionally, MIPS these days has a compressed extension to their ISA too, heavily inspired by RV-C. https://mips.com/products/architectures/nanomips/


Not all JIT spraying relies on byte offsets to get past JIT filters, the attack I gave is just an example.

And NanoMips requires instructions to be word aligned just like everybody else, it's just that it requires 16 bit alignment rather than 32. Attempting to access an odd PC address will result in an access error according to this:

https://s3-eu-west-1.amazonaws.com/downloads-mips/I7200/I720...


> And NanoMips requires instructions to be word aligned just like everybody else, it's just that it requires 16 bit alignment rather than 32. Attempting to access an odd PC address will result in an access error according to this:

That's the same as RV-C.


Right, and I mentioned RISC-V as yet another sane RISC architecture that requires word alignment in instruction access. But the fact that it requires alignment means that the word size has implications for the instruction cache design and the complexity of the piping there.

I don't have a strong opinion on whether the C extension is a net good or bad for high performance designs, but I do strongly believe that it comes with costs as well as benefits.


Back in 2019, RISC-V was 15-20% smaller than x86 (up to 85% smaller in some cases) and was 20-30% smaller than ARM64 (up to 50% smaller in some cases).

https://project-archive.inf.ed.ac.uk/ug4/20191424/ug4_proj.p...

Since then, RISC-V has added a bunch more instructions that ARM/x86 already had which has made RISC-V even smaller relative to them.


No idea if this is true for Qualcomm, but people from Rivos have also been in that meeting arguing against the C extension and as far as I know Rivos have no in-house Arm cores they are trying to reuse.


Rivos was formed from a bunch of ex-Apple CPU engineers. I'm sure they would feel more comfortable with a closer to AArch64 derived design as well.


They might also know a bunch of techniques to give high performance that only work if you've got nice 32-bit only aligned instructions!


Haha that'd be a little counterintuitive given all 32-bit aligned is the trivial case for decoding variable length instructions, unless you're thinking about prefetching/branch prediction etc


> all 32-bit aligned is the trivial case for decoding variable length instructions

That's the point? You can go faster if everything is 32-bit aligned, i.e. you don't have variable length instructions.


The shift register design sounds quite expensive. You're essentially constructing <issue width> crossbars, each 32 times <comparator width>, connected to a bunch of comparators to determine instruction boundaries. In a wide design you also need to do this across multiple 32-bit lines.


Well, half that because the instructions are 16 bit aligned. And approaching half of even that because not every decoder needs access to every offset. Decoder zero doesn't need any. Decoder one only needs two, etc.

But you need most of that anyway because you need to handle program counters that aren't 32 byte aligned, so you need to either do it before hitting the decoders, or afterwards when you're throwing the micro-ops into the issue queues (which are probably much wider and therefore more expensive).


I think an example is something like opcodes crossing I-cache lines, re: fetch and decode complication; instructions are 16-bit aligned when C is present, so you can have a 32-bit instruction cross cache lines easily. At minimum it will definitely require a bunch of extra verification to handle those cases, and that's often the longest part of the whole development process anyway, so I see the reasoning for not wanting it. It doesn't matter how high-performance something is or can be if you can't prove it works to some tolerance level.

I know there's the big discussion about macro-op fusion. But in hindsight, I think a big motivator for C -- implicit or not -- was the fact that on the very low-end microcontroller or in the softcore (FPGA) world, you typically have disproportionately low amounts of SRAM available versus compute fabric. Those were the initial deployment targets (and initial successful deployments!) for RISC-V, since you need tons of extra features for "Application Class" designs. These cores often have a short pipeline and are completely in-order, so their cost and verification effort are much lower. These are (very likely) not going to implement macro fusion, at least on the medium-low end. So, increasing the effective size of the I-cache through smaller opcodes is often a straight win to increase IPC. On the other hand, Application Class designs today are typically OoO, so they achieve high IPC while still hiding miss latencies pretty effectively; smaller instructions are still good but the benefits they provide aren't as prominent. And it does use a ridiculous amount of opcode space, yes.

I wonder if they would have just been better off copying one of ARM's design principles from the very start: actual design families akin to the -M, -R, and -A series of ARM processors, created for different design spaces. These could actually be allowed to have (potentially large!) incompatibilities between them while still sharing a lot of the base instruction set and privileged architecture; e.g. PMP extensions could probably exist among all of them. I'd be happy to have an "Application Class" "-A series" RISC-V processor that could run Linux but didn't have compressed instructions or whatever; likewise I would probably not miss e.g. Hypervisor extensions on a microcontroller.

EDIT: Clipped an incorrect bit about ABI compatibility with the C extension. I was misremembering some details about a specific implementation!


> I think an example is something like opcodes crossing I-cache lines

Consider also a 16-bit aligned 32-bit instruction crossing a page boundary, potentially with different access permissions. This type of bug allowed userspace applications to hang early Cortex-A8 based phones (ARM erratum 657417).


See also: Intel SKX102, whose documentation has nice diagrams.


> Another big issue for Application Class systems IIRC -- unrelated to all this -- is that I don't think hardware implementing C can actually run binaries compiled without it.

I believe this is incorrect? I believe RV{32,64}-with-C is simply a superset of RV{32,64}-without-C. Now I have only implemented RV32I, so I'm not that familiar with the C extension or other extensions for that matter, but in my digging through the various RV specs, I haven't found anything which suggests that implementing C requires breaking code compiled without the use of C.

Do you have any details?


Nope, I was wrong! I was curious about a reference and went digging, and I was misremembering the details of a particular Linux-class system that didn't implement C (Shakti), so they needed an entirely new set of binary packages from distros to support them.

Wish I could use strike-outs here, but oh well.


F̶Y̶I̶ ̶y̶o̶u̶ ̶c̶a̶n̶ ̶s̶t̶r̶i̶k̶e̶-̶o̶u̶t̶ ̶w̶i̶t̶h̶ ̶U̶n̶i̶c̶o̶d̶e̶ (e.g. via https://yaytext.com/strike )


It looks atrocious though.


which is strange tbh. it should not

i̵t̵ ̵t̵u̵r̵n̵s̵ ̵o̵u̵t̵ ̶t̶h̶e̶r̶e̶ ̶a̶r̶e̶ ̶d̶i̶f̶f̶e̶r̶e̶n̶t̶ ̴s̴t̴r̴i̴c̴t̴ ̴t̴h̴r̴o̴u̴g̴h̴ ̴s̴t̴y̴l̴e̴s̴

and the font HN uses doesn't handle any of them well


No harm in emailing dang asking for a feature request. hn@ycombinator.com

Perhaps if enough people asked for it then the <s> markdown equivalent (~~ in some apps) could be added.


The FPGA world rarely cares about RAM limits on MCU cores unless they are trying to make something that benchmarks really well. When they have to be tiny, most of those MCUs run fairly short programs out of a single block RAM (which is a few kB), and actually care more about the LUT count than the BRAM count, making the C instructions actively bad.

(This mostly applies to the high end, but designers on low-end devices may be more SRAM-constrained)


Good points; though I admit, I was thinking more Linux-capable/"Linux-class" cores in the mid-range FPGAs, which is where I spend a bunch of my time. For that, the paltry cache sizes due to limited amounts of BRAM are more noticeable (especially when other features may compete for them), so minimizing icache footprint as much as possible is a win. Admittedly that's probably a marginal case versus the tiny softcores you mention; workloads that need high performance cores will practically be better off running on attached ARM processors or whatever.

Anyway, all that aside, I personally wouldn't be sad to see the C extensions go away. I'm actively designing a RISC-V core for a small game console, and probably won't implement them, and will spend the time on more DV instead. I don't expect them to have any meaningful benefits for my case and mostly increase frontend complexity.


I have made a few RISC-V cores for FPGAs personally, and I would say that the C extensions are 100% not worth your time to implement unless you need absolutely minimum RAM size.


https://riscv.org/wp-content/uploads/2019/06/riscv-spec.pdf

> The C extension is compatible with all other standard instruction extensions. The C extension allows 16-bit instructions to be freely intermixed with 32-bit instructions, with the latter now able to start on any 16-bit boundary [...]


As a chip designer who has made a few RISC-V cores (including one open-source one that nobody uses), I personally hate the C instructions, and I am on Qualcomm's side here. There are just too many of them, and they really muck up instruction decoding without providing large benefits for anything but the smallest MCUs.

Maybe I should weigh in on this issue in the official channels.


Consider the Linux kernel code getting 50% larger when you move from compressed to uncompressed instructions[0]. That puts RISC-V as among the least efficient ISAs out there and would make it unsuitable for most applications.

[0] https://people.eecs.berkeley.edu/~krste/papers/EECS-2016-1.p...


At the scale of Linux, you don't care about code size very much, but you care a lot about working set size. 2-20% seems to be the range of working set size reductions you see in the literature, and if you compensate with other instructions, you can get back a lot of that code size.

The analysis from the SiFive folks generally doesn't include that compensation factor: it just involves a straight find-and-replace in the binary.


Your assertion has two major issues.

First, you context switch a lot to and from the Linux kernel, so decreased cache pressure does matter.

Second, if you have proof that loops predominantly consist of 32-bit instructions, show it. To my mind, a loop is likely to use fewer registers, and likely to have shorter branches and smaller immediate values; all of these suggest that compressed instructions actually favor working-set code even MORE than general code.


I think we agree about working set size - that's what actually matters for performance rather than overall code size. Krste from SiFive was relatively insistent on their recent call - without any proof (citing mysterious customer calls) - that people care about code size of the Linux kernel, not working set size. The performance gain he suggested that came from C instructions due to working set size in the Linux kernel is 3%. This is the performance argument coming from the biggest proponent of the C instructions.

As to what you suggested, I have actually started putting something together to possibly send to the RISC-V foundation from my own experience implementing RISC-V designs, but pretty much nobody is asserting that loops are predominantly 32-bit instructions. Tight loops are often already sitting in a uop cache once you get to a core of reasonable size, so compressed vs uncompressed is completely irrelevant. Contrary to what you seem to be hoping for, correct arguments about working set size and performance are very subtle.

The C instructions aren't free in frequency terms, either. You have significant complexity increases in decoders and cache hierarchies to support them. Making that cost add up to 3% is not that hard.


I keep coming back to this table and wondering why the conclusion was that 16-bit instructions were necessary and not more complex 32-bit instructions were necessary. For example, ARMv8's LDR and LDP instructions are amazing and turn what is often 3-4+ 32-bit RISC-V instructions into just one 32-bit instruction. Making C optional and building a new "code size reduction" extension that is more suitable for large application processors (which can reasonably be assumed to use uops) would help so much more.
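
A toy C example of the kind of thing being described (the instruction sequences in the comments are the lowerings compilers typically emit, mentioned for illustration rather than measured here):

    #include <stdint.h>

    /* Scaled indexed load: AArch64 folds the shift and add into the
       load itself (roughly: ldr w0, [x0, x1, lsl #2]); base RV64
       needs something like slli + add + lw, i.e. three instructions. */
    int32_t index_load(const int32_t *a, int64_t i) {
        return a[i];
    }

    /* Adjacent loads: AArch64 can fetch p[0] and p[1] with a single
       ldp; base RV64 issues two separate ld instructions. */
    int64_t sum_pair(const int64_t *p) {
        return p[0] + p[1];
    }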


> Making C optional and building a new "code size reduction" extension that is more suitable for large application processors (which can reasonably be assumed to use uops) would help so much more.

Andrew at SiFive disagrees vehemently[0].

0. https://lists.riscv.org/g/tech-profiles/message/391


Andrew seems to mostly disagree with Qualcomm in general and is rejecting the idea not so much out of technical merit but because he doesn't believe them. Qualcomm is definitely not alone with its dislike of the C extension and trying to wholesale dismiss criticisms of it because they're coming from Qualcomm is not appropriate.


SiFive made a technical case[0] on keeping C, too.

>Qualcomm is definitely not alone

It's also worth noting that they tried to appropriate Rivos's opinion, only to be called out[1].

0. https://lists.riscv.org/g/tech-profiles/topic/slides_on_reta...

1. https://lists.riscv.org/g/tech-profiles/message/396


Please do. And SOON -- if the current task group doesn't resolve this issue in the next couple of weeks, it will likely be passed out of that group for decision at a higher level, with less opportunity for public input.


You can join the Profiles meeting. It is every Thursday. Technically you must be an RVI member, but there is free membership for individuals.


> The variable length instructions (currently 16 bit or 32 bit but 48 bit on the horizon) complicate instruction fetch and decode and in particular this is a problem for high performance RISC-V implementations.

I want to see variable length instructions, but a requirement for instruction alignment.

I.e. every aligned 64-bit word of RAM contains one of these:

[64 bit instruction]

[32 bit instruction][32 bit instruction]

[16 bit instruction][16 bit instruction][32 bit instruction]

[32 bit instruction][16 bit instruction][16 bit instruction]

[16 bit instruction][16 bit instruction][16 bit instruction][16 bit instruction]

That should make decode far simpler, but put a little more pressure on compilers (instructions will frequently need to be reordered to align, but a review of compiler-generated code suggests that frequently isn't an issue).


As the other reply states, that is effectively the Qualcomm proposal, though note the 16-bit instructions likely gobble up a large amount of your 32-bit instruction space. You have to have something to identify an instruction as 16-bit, which takes up 32-bit encoding space. The larger you make that identification (in terms of bits), the less encoding space it takes up, but then the fewer spare bits you have to actually encode your 16-bit instruction. RISC-V uses the bottom two bits for this purpose: one value (11) indicates a 32-bit instruction, the others are used for 16-bit instructions. So you're dedicating 75% of your 32-bit encoding space to 16-bit instructions.


By requiring alignment, you can halve or more the size of the identifier.

Since if you have a 16-bit instruction, you know that it must be followed by another 16-bit instruction, that 2nd instruction doesn't need the identifying bits. Or, more precisely, within a 32-bit slot the 2^32 possible encodings need to be divided, and one way to do that is 2^31 + 2^30 possible 32-bit instructions and 2^15 * 2^15 pairs of 16-bit instructions. Now the 16-bit instructions are only taking 25%, not 75%, of the instruction space.


But now you have two kinds of 16-bit instructions, the ones for the leading position and the ones for the trailing position, and the latter ones have slightly more available functionality, right? Personally, at this point I'd think the decoder must already be complicated enough (it has either to maintain "leading/trailing/full" state between the cycles, or to decode 8/16-byte long batches at once) that you could simply give up and go for an encoding with completely irregular lengths à la x86 without much additional cost.


Not necessarily.

    000x -- 64-bit instruction that uses 60 bits
    001x -- reserved
    010x -- reserved
    011x -- reserved
    100x -- two 32-bit instructions (each 30-bits)
    101x -- two 16-bit instructions then one 32-bit instruction
    110x -- one 32-bit instruction then two 16-bit instructions
    111x -- four 16-bit instructions (each 15 bits)
    xxx1 -- explicitly parallel
    xxx0 -- not explicitly parallel
Alternatively, you view them as VLIW instruction sets. This has the additional potential advantage of some explicitly parallel instructions when convenient.
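
A hypothetical C sketch of decoding that 4-bit tag (assuming the tag sits in the low nibble of each 64-bit packet, which the table above doesn't actually pin down; the names are made up):

    #include <stdint.h>

    typedef enum { FMT_RESERVED, FMT_64, FMT_32_32,
                   FMT_16_16_32, FMT_32_16_16, FMT_16x4 } pkt_fmt;

    /* Returns the packet layout and whether the xxx1 "explicitly
       parallel" bit is set. */
    static pkt_fmt decode_packet(uint64_t pkt, int *parallel) {
        unsigned tag = (unsigned)(pkt & 0xF);
        *parallel = tag & 1;            /* xxx1 / xxx0            */
        switch (tag >> 1) {             /* top three bits of tag  */
        case 0: return FMT_64;          /* 000x: one 64-bit op    */
        case 4: return FMT_32_32;       /* 100x: 32 + 32          */
        case 5: return FMT_16_16_32;    /* 101x: 16 + 16 + 32     */
        case 6: return FMT_32_16_16;    /* 110x: 32 + 16 + 16     */
        case 7: return FMT_16x4;        /* 111x: four 16-bit ops  */
        default: return FMT_RESERVED;   /* 001x / 010x / 011x     */
        }
    }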


One advantage of just sticking with only 32bit instructions is that nobody needs to write packet-aware instruction scheduling.

Even with decent instruction scheduling, you are still going to end up with a bunch of instruction slots filled with nops.

And it will be even worse if you take the next step to make it VLIW and require static scheduling within a packet.


In this case, it's probably not that bad as with actual VLIW: if you see that e.g. your second 16-bit instruction has to be a NOP, you just use a single 32-bit instruction instead; similarly for 32- and 64-bit mixes.


The packet would be external and always fit in a cache line. You'd specify the exact instruction using 16-bit positioning. The fetcher would fetch the enclosing 64-bit group, decode, then jump to the proper location in that group.

In the absolute worst-case scenario where you are blindly jumping to the 16-bit instruction in the 4th position, you only fetch 2-3 unnecessary instructions. Decoders do get a lot more interesting on the performance end as each one will decode between 1 and 4 instructions, but this gets offset by the realization that 64-bit instructions will be used by things like SIMD/vector where you already execute fewer instructions overall.

The move to 64-bit groups also means you can increase cache size without blowing out your latency.

VLIW doesn't mean strictly static scheduling. Even Itanic was just decoding into a traditional backend by the time it retired. You would view it more as optional parallelism hints when marked.

I'd also note that it matches up with VLIW rather well. 64-bit instructions will tend to be SIMD instructions or very long jumps. Both of these are fine without VLIW.

Two 32-bit instructions make it a lot easier to find parallelism and they have lots of room to mark exactly when they are VLIW and when they are not. One 32-bit with two 16-bit still gives the 32-bit room to mark if it's VLIW, so you can turn it off on the worst cases.

The only point where it potentially becomes hard is four 16-bit instructions, but you can either lose a bit of density switching to the 32+16+16 format to not be parallel or you can use all 4 together and make sure they're parallel (or add another marker bit, but that seems like its own problem).


I think if you have 64bit packets, you might as well align jump targets to the 64bit boundary.

I'd rather have an extra nop or two before jump targets than blindly throw away 1-3 instructions' worth of decoding bandwidth on jumps (which are often hot).


If you're fetching 128-bit cache lines, you're already "wasting" cache. Further, decoding 1-3 NOP instructions isn't much different from decoding 1-3 extra instructions except that it adversely affects total code density.

If you don't want to decode the extra instructions, you don't have to. If the last 2 bits of the jump are zero, you need the whole instruction block. If the last bit is zero, jump to the 35th bit and begin decoding while looking at the first nibble to see if it's a single 32-bit instruction or two 16-bit instructions. And finally, if it ends with a 1, it's the last instruction and must be the last 15 bits.

All that said, if you're using a uop cache and aligning it with I-cache, you're already going to just decode all the things and move on knowing that there's a decent chance you jump back to them later anyway.


But if you don't have a uop cache (which is quite feasible with a RISC-V or AArch64 style ISA), then decode bandwidth is much more important than a few NOPs in icache.

Presumably your high performance core has at least three of these 64bit wide decoders, for a frontend that takes a 64bit aligned 192bit block every cycle and decodes three 64bit instructions, six 32bit instructions, twelve 16bit instructions, or some combination of all sizes every cycle.

If you implement unaligned jump targets, then the decoders still need to fetch 64-bit aligned blocks to get the length bits. For every unaligned jump, that's up to a third of your instruction decode slots sitting idle for the first cycle. This might mean the difference between executing a tight loop in one cycle or two.

A similar thing applies to a low gate count version of the core, a design where your instruction decoder targets one 32bit or 16bit instruction per cycle (and a 64bit instruction every second cycle). On unaligned jumps, such a decoder still needs to load the first 32bits of the instruction first to check the length decoding, and waste an entire cycle on every single branch.

Allowing unaligned jump targets might keep a few NOPs out of icache (depending on how good the instruction scheduler is), but it costs you cycles in tight branchy code.

Knowing compiler authors, if you have this style of ISA, even if it does support unaligned jump targets, they are still going to default to inserting NOPs to align every single jump target, just because the performance is notably better on aligned jump targets and they have no idea if this branch target is hot or cold.

So my argument is that you might as well enforce jump target alignment of 64 bits anyway. Let all implementations gain the small wins from assuming that all targets are 64-bit aligned, and use the two extra bits to give your relative jump instructions four times as much range.


Which is easier to decode?

[jmp nop nop], [addi xxx]

OR

[xxx jmp], [nop nop addi]

OR

[xxx jmp], [unused, addi]

All of these tie up your entire decoder, but some tie it up with potentially useful information. That seems superior to me.


It's only unconditional jumps that might have NOPs following them.

For conditional jumps (which are pretty common), the extra instructions in the packet will be executed whenever the branch isn't taken.

And instruction scheduling can actually do some optimisation here. If you have a loop with an unconditional jump at the end and an unaligned target, you can do partial loop unrolling, for example:

With [xxx, inst_1, inst_2], [inst3]...(loop body) ...[jmp to inst_1, nop, nop], you can repack the final jump packet as [inst_1, inst_2, jump to inst_3]

This partial loop unrolling actually is much better for performance than not wasting I-cache, as it reduces the number of instruction decoder packets per iteration by one. Compilers will implement this anyway, even if you do support mid-packet jump targets.

Finally, compilers already tend to put nops after jumps and returns on current ISAs, because they want certain jump targets (function entry points, jump table entries) to be aligned to cache lines.


Don't forget the possibility of 5x 12-bit instructions. In particular, if you have only one or two possibilities for destination registers for each of the 5 positions (so an accumulator-like model), you could still have a quite useful set of 12-bit instructions.


No, the idea was to say "prefix 0"=>31bit, "prefix 10"=>30bit, "prefix 11"=>2*15bit. If you need you can split the two bits to have the two 15 bit chunks aligned identically.


Once you're moving to alignment within a larger fixed width block you don't even need to stick to byte boundaries. I've got a toy ISA I've played around with that breaks 64 bit chunks into a 62 bit instruction, a 40 and a 21 bit instruction, or 3 21 bit instructions.


I think Itanium did something like that


Yup, 128 bit bundles with 3 instructions each, and ways to indicate that different bundles could execute in parallel.


The CDC 6600 (1963) had 60-bit words and both 15-bit and 30-bit instructions, with the restriction that 30-bit instructions couldn't straddle a word boundary. The COMPASS assembler would sometimes have to insert a 15-bit no-op instruction to "force upper". Careful optimizing programmers and compilers would try to minimize "forcing upper" to avoid wasting precious space in the 7-word instruction "stack".

So it's been done, and is not a big deal.


Was it called assembler though? I was looking and https://web.archive.org/web/20120910064824/http://www.bitsav... does not say so but http://www.bitsavers.org/pdf/cdc/cyber/cyber_70/60225100_Ext... does refer to "COMPASS assembly language". Interesting.


This is basically what Qualcomm proposes: 32-bit instructions and 64-bit aligned 64-bit instructions.

I don't think we have real data on it, but I suspect that the negative impact of this would affect 16/32/48/64 way more than just 32/64.


I would like to also have aligned 16 bit instructions. And maybe even aligned 8 bit instructions for very common things like "decrement register 0" or "test if register 0 is greater than or equal to zero". "Jump back by 10 instructions", etc. Those instructions get widely used in tight loops, so might as well be smaller.


8-bit instructions are a really bad idea. You only get a tiny number of them, and they significantly increase decode complexity (and massively reduce the number of larger instructions available).


Others have pointed out adding bits to identify instruction types eats into your instruction length, so let's go stupid big time: what if you had the instructions as described here, without any instruction length being part of the instruction, but have that stored separately? (3 bits would be plenty per word) You might put it 1. as a contiguous bit string somewhere, or you might 2. put it at the bottom of the cache line that holds the instructions (the cache line being 512 bits I assume).

Okay, for 1. you'd have to do two fetches to get stuff into the I-cache (but not if it's part of the same cache line, option 2.), and of course you're going to reduce instruction density because you're using up cache, but there's nothing you can do about that; at least it would allow n-bit instructions to be genuinely n bits long, which is a big advantage.

That this hasn't been done before to my knowledge is proof that it's a rotten idea, but can the experts here please explain why – thanks
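
For what it's worth, a minimal C sketch of what option 2 might look like (all sizes here are my own illustrative assumptions, not a real design): a 512-bit line carries seven 64-bit instruction packets plus a 3-bit layout tag for each at the bottom of the line, so fetch never has to recover lengths from the instruction bits themselves, at the cost of roughly an eighth of the line's capacity.

    #include <stdint.h>

    struct icache_line {
        uint64_t packet[7];   /* 448 bits of instruction packets        */
        uint32_t tags;        /* 7 x 3-bit layout tags, low bits first; */
                              /* remaining bits of the line are unused  */
    };

    /* Fetch reads the tag instead of length-decoding the packet. */
    static unsigned packet_fmt(const struct icache_line *l, unsigned i) {
        return (l->tags >> (3 * i)) & 0x7;
    }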


> you'd have to do two fetches

I think this is the big downside. You're effectively taking information which will always be needed at the same time, and storing it in two different places.

There is never a need for one piece of information without the other, so why not store it together.


> There is never a need for one piece of information without the other, so why not store it together.

Why not? as I said, so you can have full-length instructions!

And you can store it together in the same fetchable unit - the cache line (my option 2)


Interesting idea. Effectively moving the extra decode stage in front of the Icache, making the Icache a bit like a CISC trace/microOp cache. On a 512b line you would add 32 bits to mark the instruction boundaries. At which point you start to wonder if there is anything else worth adding that simplifies the later decode chain. And if the roughly 5% adder to Icache size (figuring less than 1/16th since a lot of shared overhead) is worth it.


Why not treat it like Unicode does and just have two marker instructions before and after the compressed ones?

Start compressed instructions <size>

Compressed instructions

End compressed instructions <size>


Which Unicode encoding are you talking about? It sounds a bit like you're talking about UTF-16 surrogate pairs, but that's not how those work. It's not how UTF-8 or UTF-32 work. So, which encoding is this?


If I understand you correctly, the guy I'm responding to is proposing allowing the mixing of different sized instructions. Your suggestion effectively says "I'm starting a run of compressed instructions/I'm finishing a run of compressed instructions" which is a different proposition. Just my take though.


> let's go stupid big time

Wouldn't that be to Huffman encode the instructions? Fixed table, but still, would save a lot of bits on the common instructions surely...


> instructions will frequently need to be reordered to align

Can't you pad it with nops up to the alignment boundary?

Even if there's not an explicit nop instruction in 16-bit and 32-bit variants (I don't know) there's surely something you can find that will have no side effects.


Yes, of course - but in general you want to avoid padding with nops because it makes the code larger (which, as well as costing more RAM, also uses more power and time to read the code from RAM, and fits less code into the instruction cache, which makes the power and time cost of reading code from RAM even bigger).

If you can make a compiler fill those NOP slots with useful instructions, then all the better.

It adds complexity for humans writing assembly code by hand, but that is a tiny minority of code now.


For one, I don't see why you would ever pad a 16-bit instruction with a 16-bit noop instead of just using the 32-bit equivalent instruction. That way, you can skip decoding the no-op.


Because you have a long odd-numbered chain of 16-bit ops. E.g. 15 16-bit ops leaves you with a 16-bit NOP in order to realign for the following 32-bit op.


To be more clear: if a 16-bit instruction is 32-bit aligned, then you might want a 16-bit noop so that the instruction following the noop is also 32-bit aligned. But, in that case, you could just use a 32-bit aligned 32-bit instruction at the same location. No padding, one instruction saved, and the following instruction is still 32-bit aligned.

If the 16-bit instruction isn't 32-bit aligned, then the following instruction will be 32-bit aligned with no padding.

So, equivalently: "I don't know why you'd ever want to add padding after a 16-bit instruction in order to force the next instruction to not be 32-bit aligned." Is there such a use case (other than the obvious use case of checking behavior/performance of the sub-optimal case, or writing a noop-slide for an exploit payload)?


Use 14 16-bit ops instead, and use a regular 32-bit op as the 15th instruction (which is already correctly aligned since 14 is an even number).


Motorola 68k has this kind of aligned variable-length instructions (in its case, always aligned on 16-bit boundaries). It's not super difficult to support this in compilers, though.


> They're also saying that if you move forward with the C extension in RVA23 now there's no real backing out of it.

That doesn't seem correct. I think adding and dropping C for desktop/server workloads would be relatively easy. Most of what will be run on it is either open source (Linux, Apache et al) or Java/Python/Go/.Net. Either way, I'd expect Oracle or somebody to support both with a single installer. This isn't x86, where there are a lot of binaries with no source, or lots of janky code that assumes x86, where we need backward compatibility. (Note: IIRC the RVA profiles are the "application" profiles, not for embedded, where hand-tuned assembly is a real thing and things are much more fragile.)

That said, just like Linux supported multiple x86 based platforms (PC-98), I'd imagine Debian and others would support non-C processors with distros, so I don't think it would really hurt Qualcomm if it's kept and they don't include it.

> They also say that there's lots of implementations with C in already, so backing out of it now disadvantages those implementations.

Ugh. So we should hold onto something, even if it is a bad idea, just because other people wasted time on it? That seems like a very crab-bucket mentality. Not saying it should be removed, but the decision should be technical, not based on favoring certain players.


> I think adding and dropping C for desktop/server workloads would be relatively easy.

It's not super easy, because standard RISC-V without the C extension balloons code size by about 30%. So Qualcomm is proposing a set of custom extensions on top of that to get the code size back down. It's not clear what the patent situation is on those extensions since they're so obviously AArch64-inspired.


More than 30%. For example, in this 2016 paper, the Linux kernel using compression was 67% the size of the non-compressed instructions.

Stated in reverse, removing compressed instructions would increase the kernel size by roughly 50% (1/0.67 ≈ 1.5).

https://people.eecs.berkeley.edu/~krste/papers/EECS-2016-1.p...


From that document what is interesting is that they explicitly avoided the approach that Qualcomm are suggesting:

p. 51

> each RVC instruction must expand into a single RISC-V instruction. The reasons for this constraint are twofold. Most importantly, it simplifies the implementation and verification of RVC processors: RVC instructions can simply be expanded into their base ISA counterparts during instruction decode, and so the backend of the processor can be largely agnostic to their existence. ... This constraint does, however, preclude some important code size optimizations: notably, load-multiple and store-multiple instructions, a common feature of other compressed RISC ISAs, do not fit this template. ... Given these constraints, the ISA design problem reduces to a simple tradeoff between compression ratio and ease of instruction decode cost. ... The dictionary lookup is costly, offsetting the instruction fetch energy savings. It also adds significant latency to instruction decode, likely reducing performance and further offsetting the energy savings. Finally, the dictionary adds to the architectural state, increasing context switch time and memory usage.


Debian maybe, but then again maybe not. The distros don't like supporting a lot of separate arch revisions because they tend to behave like different arches. That is why most of them have dropped 32-bit arm support: from a distro perspective it was a completely separate arch despite being able to run on much the same HW as the 64-bit arm distro. Given most arm devices made in the past ~decade have been 64-bit, it was an obvious choice. People with 32-bit binary apps can run them on the 64-bit distro, and the maintainers don't have to keep building/testing/fixing an entirely separate set of machine images.

So, if someone forks the arch such that two different distros are required based on HW, it's just going to fragment the distros too, because some of them will just pick one or the other profile.


I doubt many people would consider it easy. How many people running their stuff on EC2 would like to hear at some point that to upgrade the newest instance type you need to remake your VMs/containers?


I mean it's irritating, but we all have CI/CD pipelines now don't we? I'd see it being a project for a single team for a month to get it changed, tuned, and verified. We regularly build both x86 images and ARM images for devs on Macs. It's really not that difficult when you aren't redoing manual steps.


Just to add that Qualcomm have also proposed a new extension that will help keep code size down without using the C extension. It includes new load/store addressing modes, pre/post-increment load/stores, and load/store pairs, amongst others.

It would seem to take RISC-V closer to AArch64 in approach?


Yes exactly, the 'existence proof of a competitive architecture using exclusively 32-bit instructions' has often been referenced.

Qualcomm's proposal is that all instructions are aligned to their size. Initially that means everything is a 32-bit instruction, now with a lot more green-field encoding space to play with (so less need to have larger instructions). 64-bit instructions would be introduced (aligned on a 64-bit boundary) when needed, with the expectation they'd be used for rare operations, and 48-bit instructions wouldn't happen.

The SiFive (and original RISC-V architects') view is that RISC-V is meant to be a variable-length instruction set, and that a mix of 16/32/48 provides better static code size along with better dynamic code size, meaning smaller icaches and smaller buffers in fetch units, etc.

Interesting that the architecture that was meant to be a 'purer' RISC implementation than ARM is pushing towards the more CISC style variable length instructions. In a sense Qualcomm are trying to keep it closer to the RISC ideal!


variable length instructions == CISC is not quite a correct equivalence.

The compressed 'C' extension is designed to re-use a lot of the existing decode infrastructure. On RV32 the C instructions are a strict subset of the full-length instructions, so at least on RV32 it is very lightweight, and it adds barely any logic to a core. It's almost always worth it to turn on the C extension versus making the cache bigger or trying to speed up main memory.

In my experience I-cache pressure is real, especially on lightweight implementations that don't have multiple levels of cache hierarchy and huge amounts of associativity to reduce the impact of an instruction cache miss.

I have played with both C and non-C variants, and also played with compiler tuning that saves code size versus 'performance' (which includes loop unrolling and thus more I cache misses). Generally smaller code size is better for power and system complexity, while keeping performance at par. Of course if you aren't as restricted on power or complexity (as is the case on a high end CPU), the calculus is different.

This kind of simplicity to me embodies the heart of RISC. If your CPU is hitting cache lines more often, you don't have to speculate as deep, don't have to re-order as much, thus less logic, less complexity, less power, higher clock rates and fewer side channels.

On the other hand, I suppose if you are already committed to deep speculation and out of order, compressed instructions might extract a disproportionate cost, maybe less so in decode and more so in precise exception handling and in tricks like register renaming.


> variable length instructions == CISC is not quite a correct equivalence.

Yeah I'd tend to agree, in particular x86 variable length encoding is a lot more complex than the RISC-V encoding!

What I'm really getting at is CISC and RISC aren't well-defined things and it's interesting seeing how the design of RISC-V is getting pulled in different directions.

> so at least on RV32 it is very light weight, and it adds barely any logic to a core. It's almost always worth it to turn on C extensions versus making the cache bigger or trying to speed up main memory.

Definitely, and the Qualcomm proposals are that things should stay that way for RV32/low end in general. It's high-end RV64 they care about.

> On the other hand, I suppose if you are already committed to deep speculation and out of order, compressed instructions might extract a disproportionate cost, maybe less so in decode and more so in precise exception handling and in tricks like register renaming.

This is the root of it. It's easy enough to do a study demonstrating changes in static code size, also easy enough to build a low-end RISC-V processor and examine the trade-offs. It's all a lot more complex at the higher-end especially as high-end RISC-V cores are far from mature.


Really, by a reasonable definition CISC was dead long before x86 became a thing. We don't even conceive of architectures where every instruction has 6 operands, including double indirection and implicit increments.


> We don't even conceive of architectures where every instruction has 6 operands, including double indirection and implicit increments.

Is this an exaggeration, or were there ever such ISAs?


It's a half-remembered description of VAX, but it's not much of an exaggeration if at all.


For those curious, I found this VAX manual, but haven't found anything truly egregious yet:

https://www.ece.lsu.edu/ee4720/doc/vax.pdf


MOVC6


> Interesting that the architecture that was meant to be a 'purer' RISC implementation than ARM is pushing towards the more CISC style variable length instructions. In a sense Qualcomm are trying to keep it closer to the RISC ideal!

The initial idea of RISC-V was pretty much a variable-length RISC ISA, but sane and easy to decode; not the x86 "we need to add yet another prefix" approach.


Why would you need a 64 bit instruction; what kinds of things are going to be used for it?

What does 'rare' mean here, does it mean rare in execution, or rarely appears in code? (The difference being that something might only appear once in your code but be part of your hot loop so be executed any number of times)

If they are rare in execution, what is their value over composing them of 32-bit instructions, where the (rare) overhead of doing so would typically be amortised away?

(The only thing I can think of that 64 bit instruction seem suited to is some kind of internal CPU management instructions, but context switches etc. are relatively rare & very expensive anyway so... I don't know)


From the RVI thread on 48 bit instructions, 64 bit ones would probably look similar:

> There are several 48-bit instruction possibilities.

> 1. PC-relative long jump

> 2. GP-relative addressing to support large small data area, effectively giving GP-relative access to entire data address space of most programs

> 3. Load upper 32-bits of 64-bit constants or addresses

> 4. Or lower 32-bits of 64-bit constants or addresses

> 5. And with 32-bit mask

> 6. More effective ins/ext of 64-bit bit fields

Another thing that's often discussed is moving the vtype and setvl into each vector instruction; I'm not sure if that requires 48- or 64-bit instructions.


I was really asking about 64-bit instructions specifically, but going with what you've put, if you don't mind...

> 1. PC-relative long jump

My understanding is that these are rare

> 2. GP-relative addressing to support large small data area, effectively giving GP-relative access to entire data address space of most programs

What is 'GP' here? But as for "...access to entire data address space of most programs": in this case you are just going to be bouncing all over the address space, substantially missing every level of cache much of the time, surely? Maybe you get a little extra code density, but you aren't going to get any extra speed to speak of.

> 3. Load upper 32-bits of 64-bit constants or addresses

> 4. Or lower 32-bits of 64-bit constants or addresses

> 5. And with 32-bit mask

Well yeah, but how common is this? I understand the Alpha architecture team looked at this and found it uncommon, which is why they were okay with less-than-32-bit constants. If it really sped things up you might build a specific cache to store constants (a kind of larger, stupider register set). It would seem a simpler solution.

I'm not sure what you mean with 6, and I'm not familiar with vtype/setvl


On vtype/setvl: in the RISC-V V extension (aka RVV / Vector (≈SIMD)), due to the 32-bit instruction length, there's a separate instruction that does some configuration (operated-on element size, register group size, masked-off element behavior, target element count), which the arithmetic/etc. operations that follow then obey. So e.g. if you wanted to add vectors of int32_t-s, you'd need something like "vsetvli x0,x0,e32,m1,ta,ma; vadd.vv dst,src1,src2"

Often one vsetvl stays valid for multiple/most/all instructions, but sometimes there's a need to toggle it for a single instruction and then toggle it back. With 48-bit or 64-bit instructions, such temporary changes could be encoded in the operation instruction itself.
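
To make that concrete, here's a minimal sketch of that toggle-and-restore pattern (RVV 1.0 assembler syntax; the particular registers and operations are just illustrative, not taken from any proposal):

    vsetvli t0, a0, e32, m1, ta, ma   # configure once: 32-bit elements, LMUL=1, tail-agnostic
    vle32.v v1, (a1)
    vadd.vv v1, v1, v2
    vsetvli x0, x0, e32, m1, tu, ma   # temporarily switch to tail-undisturbed for one op
    vmacc.vv v5, v1, v2               # (same SEW/LMUL ratio, so vl is preserved)
    vsetvli x0, x0, e32, m1, ta, ma   # ...and toggle straight back
    vse32.v v1, (a1)

A wider instruction could fold that temporary vtype change into the one operation itself and drop the two extra vsetvlis.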

Additionally, masked instructions always mask by v0, which could be expanded to allow any register (and perhaps built-in negation) by more instruction bits too.


> My understanding is that these are rare

Depends on how many bits you had to start with. On Power ISA they aren't common either, but when they happen you need up to seven instructions (lis, ori, rldicl, oris, ori, then for branches mtctr/b(c)ctr) to specify the new address or larger value. Most other RISCs are similar when full 64-bit values must be specified. This is a significant savings.
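
For comparison, a sketch of the same thing on RV64 today: the assembler's li pseudo-instruction expands an arbitrary 64-bit constant into a lui/addi/slli chain, something like this (the exact chunking depends on the constant and the assembler):

    lui  a0, 0x11223      # a0 = 0x11223000
    addi a0, a0, 0x344    # a0 = 0x11223344 (upper 32 bits done)
    slli a0, a0, 12
    addi a0, a0, 0x556
    slli a0, a0, 12
    addi a0, a0, 0x677
    slli a0, a0, 8
    addi a0, a0, 0x88     # a0 = 0x1122334455667788

A pair of wider instructions ("load upper 32 bits" plus "OR in lower 32 bits", as in the RVI list above) would cover the same case in two.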


Well you can embed longer immediates directly in the opcode.

You could have a lot more registers.

For the first example, I'm not sure you'd want a full 64-bit encoding space. You still aren't going to be able to load a 64-bit immediate directly, so I'd rather see an instruction that uses the next instruction as the immediate. But then 50% of the time you're still going to be padding this to 64-bit alignment, so it's unclear to me that this is a benefit over two lots of the same but with 32-bit immediates.

The second option is interesting. But if you've got, say, 256 addressable registers, what use are the 32- and 16-bit instructions that can only address a tiny proportion of those registers?


How do you even use all those registers? Serious question. I've toyed with a couple of 256-register ISAs, and the moment you hit function calls/parameter passing you realize that to utilize those efficiently, you really need some way to indirectly refer to registers, be it register windows, or MMIX's register slide, or Am29k's IPA/IPB/IPC registers; the only other option seems to be to perform global register allocation but that hardly works in scenarios with separate compilation/dynamic code loading.


Off the top of my head I don't really know. But then if you had asked me 20 years ago if we'd need multi core multi GHz multi GB computers to display a web page I'd probably have said no.

I suppose the os could reserve registers for itself to save swapping in and out quite so often.

Register windows for applications/functions/threads.

Or maybe something radically different, like get rid of the stack, and treat them conceptually like a list?


> I've toyed with a couple of 256-register ISAs, and the moment you hit function calls/parameter passing you realize that to utilize those efficiently

Very revealing, thanks, this had never occurred to me


The sweet spot for scalar code is about 24 registers, but that leads to weird offset-bits (there's an ISA that does this, but I forget what it's called), so 32 registers is easier to implement and provides a mild improvement in the long tail of atypical functions.

On the flip side, the ability to have more registers is very good for SIMD/GPU applications.


Absolutely, I'm not saying a 64bit instruction length with 5/6/7/8 bits of registers would be bad per se. In fact I'd be interested to see where it leads.

But if you have a processor that also uses 16 bit instructions those extra registers become unusable. Thumb can't encode all registers in all instructions so you have the high registers that are significantly less useful than the low registers.

x86 is the same; I've never really done 64-bit ASM so I don't know if they improved that.

So then you may as well just divide up the registers so you've got 16 general purpose registers and 16 registers for SIMD or whatever.


Power10 added "prefixed" instructions, which are effectively 64-bit instructions in two 32-bit halves (the nominal instruction size). They are primarily used for larger immediates and branch displacements.

https://www.talospace.com/2021/04/prefixed-instructions-and-...


MIPS had load const to high or low half. More than 40 years ago the Transputer had shift-and-load 8-bit constants. Lots of ancient precedents for rare big constants.


So does classic PowerPC, SPARC, and many other ISAs. It's the most common way to handle it on RISC. The Power10 prefixed instruction idea just expands on it.


Personally, I like the idea of doubling the instruction length every time -- 16, 32, 64, 128, etc. There's a big use case on the longer instruction end for VLIW/DSP/GPU applications.


AFAIK you want short instructions for VLIW because you want to pack multiple of them into a single word.


If this is such a big problem, why have the other RISC-V high performance people never made this into a big issue?

This really just seems to be Qualcomm wanting it to be more like ARM so they can use their existing cores. That seem pretty clear from what they are proposing.


They want to have their cake and eat it too, with the same instruction set fitting both "small" (a few hundred kB of flash/RAM at most) and "big" (Linux-kernel-running devices and up) ones.

IMO it's a futile effort that unnecessarily taxes the big codes.


It would decrease code size, but not to the degree the C extension can.


I think James's comment summarises the problem quite well, as both sides have significant self-interest/sunk cost in their preferred approach: https://lists.riscv.org/g/tech-profiles/topic/rva23_versus_r...


I'm not sure either side has all that much sunk cost. So far, there isn't much RISC-V code that's distributed in binary form and expected to run on future processors.

So far, nearly all RISC-V is in the embedded space where everything is compiled from scratch, and a change to the ISA wouldn't have a huge impact.

Far more important to get it right for RISC-V phones/laptops/servers, where code will be distributed in binary form and expected to maintain forward and back CPU compatibility for 10+years.


The sunk cost here I think refers to the existing CPU designs of the respective camps. Qualcomm's ARM-based cores don't support an equivalent of the C extension and adding it would presumably require major and expensive rework.


>> This is basically what qualcomm proposes, 32 bit instructions and 64 bit aligned 64 bit instructions.

Well that's only sunk cost if they assumed from the start that they were going to change the design to RISC-V AND drop the C extension. In that case, it was a rather risky plan from the start - assuming they can shift the industry like that. I'm guessing RISC-V was a change of direction for them and this would make things easier short term.


Debian has already started compiling it, and even more will be by the time this new, incompatible ISA hits the shelves.

The time to 'get it right' has already passed IMO. If a hard ISA compatibility break happens at this stage, who is going to trust that it won't happen again?


Can’t Debian just recompile it? I think that was OP's point. We’re not at a place yet where _only_ binaries are floating around; we have all the source code for these applications.


Reading through this thread, I think that if the conclusion is to disallow misaligned 16-bit instructions, then not much has to happen. You technically only need to relink the programs to fix that issue.

The question is really about how far do they want to go beyond just disallowing page-crossing / cacheline-crossing instructions.

Personally, I always thought the C extension would have been much easier to implement if it had certain rules about it. Imagine looking at a random location in memory: how can you tell where instructions begin and end? You can't.
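
To be fair, once you do know where an instruction starts, its length is cheap to read off the first 16-bit parcel; the problem is only that an arbitrary halfword doesn't tell you whether it is a start. A sketch (register choices arbitrary):

    lhu  t1, 0(t0)     # t0 points at a known instruction start
    andi t2, t1, 3     # low two bits of the first parcel
    # t2 != 3 -> 16-bit (compressed) instruction
    # t2 == 3 -> 32-bit instruction (the longer reserved formats use more low bits)
    # But from a random halfword, those same bits can't tell you whether you're at
    # an instruction start or inside the second half of a 32-bit instruction.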


So basically RVA is like UTF-8 and the proposed RVH will be like UTF-32?


Hmm. The C extension has played a very important PR role for RISC-V, with compressed instructions + macro-op fusion being the main argument for why the lack of addressing modes is no big deal. It would be interesting to see how big of a difference it actually makes to binary sizes in practice though.


We know, it's around 30% larger binaries. That's why qualcomm also added a bunch of custom extensions.


This is an important point which I didn't realize after reading only gchadwick's comment. It's a discussion of how best to design a compressed instructions extension, not whether to have a compressed instructions extension.

Does Qualcomm have a concrete proposal for how their version of compressed instructions would work, or is the idea more or less just "the C extensions but 32-bit instructions must be 32-bit aligned"? Have they published details somewhere?


AFAIU Qualcomm's proposal for extra 32-bit instructions is https://lists.riscv.org/g/tech-profiles/attachment/332/0/cod...

It adds new addressing modes, and things like load/store-pair instructions.
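
For a flavour of what those add (an illustrative sketch, not the proposal's actual mnemonics): indexing into an int array on base RV64 today takes three instructions, because base + 12-bit immediate is the only load/store addressing mode:

    slli t0, a1, 2     # scale the index for 4-byte elements
    add  t0, a0, t0    # form the address a + i*4
    lw   a2, 0(t0)     # base + immediate offset is all the base ISA offers
    # A scaled register-indexed load (AArch64: ldr w2, [x0, x1, lsl #2]) would do
    # this in one 32-bit instruction, and load/store-pair similarly folds two
    # adjacent accesses into one.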


Ah I had assumed they proposed their own alternative compressed instructions without the alignment issues, but they're actually proposing more addressing modes and adding instructions which operate directly on memory. That makes sense I guess.


Specifically they're proposing adding the kinds of instructions AArch64 uses to get halfway decent code size.


Wouldn't adding these instructions cause potential patent issues? IIRC, the original designers of RISC-V were very careful to only add instructions which were old enough that they can be assumed to not have any trouble with patents.


Yeah, I'm absolutely concerned about that too. Particularly considering that Apple apparently owns a bunch of the patents around AArch64 and cross-licenses them with ARM, so there's some ownership in there somewhere that lawyers have looked at and think are valid.


Oh, that would be great for Qualcomm. I imagine they wouldn't mind a future where, while anyone can implement the base RISC-V, only the big dogs like Qualcomm can implement the extensions everyone actually targets due to patent issues.

(I would mind that future though.)


>Oh, that would be great for Qualcomm. I imagine they wouldn't mind a future where, while anyone can implement the base RISC-V, only the big dogs like Qualcomm can implement the extensions everyone actually targets due to patent issues.

Not an issue; Qualcomm is a member of RISC-V, thus it has signed the agreement. It has legalese designed to prevent this and further entire categories of legal issues.


The RISC-V community seeing an influx of large interests used to ARM-style architectures might create conflicts, but it is really a sign of RISC-V winning.


The compressed instruction extension was described somewhere as overfit to a naive gcc implementation, which seems plausible. It does have a significant cost in 32-bit opcode space. Getting rid of that looks right to me: have some totally different 16-bit ISA if you must, but don't compromise the 32-bit one for it.


RISC-V architectural purism was never going to survive any major effort to deploy it. Either you make changes like what Qualcomm suggest here or you aren't competitive.

The major question is how well RISC-V will manage disputes over this sort of thing without some group such as Qualcomm deciding to just release their version anyway.


I'm wondering what part you call "architectural purism" here. Spending a whole lot of opcode space on a set of compressed instructions doesn't strike me as an especially purist solution to the code size problem, and if what camel-cdr suggests in https://news.ycombinator.com/item?id=37997077 is correct, then Qualcomm's solution is also pretty much a set of 16-bit compressed instructions, but where the 32-bit instructions must be 32-bit-aligned, which strikes me as neither significantly more nor significantly less pure than the current C extension.

To me, this looks like a reasonable argument over design decisions, where there are clear advantages and disadvantages to either side. It's basically a trade-off between code size and front-end complexity. Can you detail where exactly you see the purism thing being an issue?


I believe Qualcomm are proposing dropping 16 bit instruction support, exactly like Aarch64.


You seem to be right. I had interpreted some other responses in this thread to mean that Qualcomm has their own alternative 16-bit encoding that doesn't have the 32-bit instruction alignment issue, but it seems like they instead have a whole bunch of new 32-bit instructions which have memory operands and a bunch of addressing modes.

I see now what you mean by posing this as a conflict between ISA purists (only provide load/store all other instructions have register or immediate operands, only provide one store and one load instruction, add compressed instructions to combat binary bloat) and ISA pragmatists (add new special-case instructions with memory operands and useful addressing modes).


> such as Qualcomm deciding to just release their version anyway.

Qualcomm must resolve this within RISC-V International somehow. Going its own way would designate those products as non-conformant to RV spec, not passing test suites, not allowed to carry RISC-V logo or claim "RISC-V compatible", etc. With all the software headaches that would result in. Or 3rd party vendors avoiding such Qualcomm products.

So this comes down to "convince majority of RISC-V members Qualcomm's proposal is better". Or failing that, just deal with it.

Whatever happens, chances are slim that backward compatibility with existing implementations & software would be broken @ this point. So creating some kind of alternative profile seems like the most sane option?


Not having the compressed instruction set extension wouldn't make Qualcomm's CPUs not RISC-V. They just wouldn't be compliant with the RVA23 profile.


>wouldn't make Qualcomm's CPUs not RISC-V.

Only if they use the op space reserved for custom extensions.

If they don't and instead step all over space that belongs to C, then they would indeed not be RISC-V.


Yeah, when I wrote that, I didn't realize they were also arguing for an alternate extension that would add address modes and the like. If they fail to add this to the standard, but go ahead with implementing their extension anyway, then their CPUs would be RISC-V but with a custom instruction set extension. RV64g + their custom stuff.


Only if they use custom extension encoding space exclusively, and do not step over standard encoding space, such as what's reserved for the C extension.

Which is what I understand they're doing. Thus could not be called RISC-V.


> not allowed to carry RISC-V logo or claim "RISC-V compatible", etc

If Qualcomm’s offering is performant (in its dollars, power and speed mix) and Qualcomm keeps it open enough (I think this is at the moment, as they are using an instruction set that anybody can copy), would Qualcomm’s customers care about that? If so, would Qualcomm?

The largest possible concern I see is that customers would have to be convinced that Qualcomm can deliver good compilers that don’t inadvertently spit out instructions not supported by their somewhat off-beat hardware.


I think customers have a few issues. First of all, one of the biggest reasons consumer companies want RISC-V is that they can select between multiple vendors. Qualcomm would very likely be the only high performance implementation of their standard. So you are binding yourself to Qualcomm.

Second, the software ecosystem is huge, far more than compilers. And given how everybody today uses open source, making all that available for Qualcomm seems like a losing effort.

Is Qualcomm going to pay to make Android Qualcomm-RISC-V ready? Are they going to provide advanced verification suites, formal analysis, and all that stuff?


To be fair here you have historically been able to bork certain Qualcomm devices running legitimate ARM code due to them not supporting all the instructions they claimed. (And the BSP would do sneaky things to attempt to obfuscate this).

Qualcomm absolutely have the market power in the Android space to redefine a new open ISA if they want to though.


>Qualcomm absolutely have the market power in the Android space to redefine a new open ISA if they want to though.

Only if Google allows them. Which is unlikely.


Seems to me this statement goes way, way too far.

RISC-V has already been deployed widely, including 64-bit. Various medium performance cores are out there being used or are being introduced soon.

There are also various companies making high performance RISC-V designs, and not a single one of them has suggested that the RISC-V design isn't going to work well for their effort. In fact quite the opposite.

And then Qualcomm shows up making claims like:

> group such as Qualcomm deciding to just release their version anyway.

They are free to do so. They can even call it 'RISC-V' as long as the base ISA is OK. But it's unlikely to be a standard.

It would be very, very, very hard for them to make all the distros, compilers, and other tools available for their distribution. And Google isn't going to make Android available for Qualcomm specifically unless they get paid a lot.

There is a reason Qualcomm wants to be the new standard: they know they can't finance all the software work themselves.

The reality here is not that RISC-V can't be competitive, but rather that Qualcomm doesn't want to invest lots of money in changing their designs to be 'RISC-V native', so they simply propose making RISC-V almost exactly like AArch64. This seems to me to be a pretty transparent short-term money-saving effort by Qualcomm that nobody else asked for.


I am not quite convinced about companies making high-performance RISC-V designs. The fastest SiFive cores are at best comparable with mid-performance cores from ARM (or E-cores from Apple). With SiFive likely stopping development of these cores altogether, the only high-performance RISC-V design I am aware of is the upcoming Ascalon from Tenstorrent, which (at least on paper) looks to be comparable to Apple's A12. Not quite cutting-edge performance, at any rate.

Qualcomm's proposal to add complex addressing modes to RISC-V is a design that has been tested by time and is known to work. Apple (and now ARM, with X4) are using this ISA design to deliver enthusiast-level performance in a thermal envelope of a compact handheld device. It is not at all obvious to me that RISC-V, which requires the CPU to perform additional work to bundle operations for efficient execution, is capable of the same feat.


x86 is known to work and it has 15 possible lengths instead of just 2 (moreover, those lengths aren't known until decode is already started). Despite this, x86 still holds the crown for fastest overall CPU design.

RISC-V compressed instructions are provably WAY less complex to decode than x86 and only a bit more complex than ARM64. Once you get past slicing apart the instructions, RISC-V decoders are much more simple than ARM64 because the stuff they are decoding is way less complex.


Yes, x86 works, but appears to pay a huge cost in power consumption to squeeze out those last few % of performance. Whether it’s just the property of how the mainstream x86 implementations have grown historically or the ISA itself remains to be seen.

I also agree that RISC-V decoders are simpler but only because the base ISA itself is very limited. Once you add functionality like FP, atomics, Zb extension, vectors etc… there is not that much difference. And the need to do fusion for address computation adds another layer of complexity on top.


RISC-V already has those things and the answer seems pretty clear. P670 gets around the same performance as A78 while being 50% smaller (according to SiFive) while having vectors, atomics, floats, etc.


>Yes, x86 works, but appears to pay a huge cost in power consumption to squeeze out those last few % of performance. Whether it’s just the property of how the mainstream x86 implementations have grown historically or the ISA itself remains to be seen.

https://chipsandcheese.com/2021/07/13/arm-or-x86-isa-doesnt-...


The chief architects of the two companies who are making RISC-V cores seem to have a lot of experience if you go look up their resumes. Arguably more than the people at Qualcomm. Qualcomm bought Nuvia and is now trying to make money from that design; they never actually designed super-high-performance cores themselves.

Addressing modes have never been mentioned as a limiting factor. It's not clear at all that addressing modes are a game changer for performance.

You can also argue that any other unique feature of ARM or x86 is the 'magical pile' that allows for much higher performance. The more reasonable assumption to me is that it's simply about how much is invested to make it happen. I think that, given its design, RISC-V will get to a higher-performance core with less investment than ARM needs, because of that complexity.

Qualcomm's motivation here seems pretty clear, and I don't believe it's actually because they have pure technical merit at the heart of their desires.

So should I really believe the company that has clear financial motivation to push their line?


> Either you make changes like what Qualcomm suggest here or you aren't competitive.

What's the evidence that the existing RISC-V approach is not competitive, and thus that Qualcomm's changes are necessary?


I've always disliked the RISC-V instruction encoding. The C extension was an afterthought and IMHO could have been done better if it were designed in from the start. I'm also a fan of immediate data (after the opcode), which for RISC-V I would have made come in 16-, 32-, and 64-bit sizes. The encoding of constants into the 32-bit instruction word is really ugly and also wastes opcode space.

After all the Vector, AI, and graphics stuff is hashed out I'd like to see a RISC-VI with all the same specs but totally redone instruction encoding. But maybe that's just me.


> The C extension was an afterthought and IMHO could have been done better if it were designed in from the start.

Not sure what you mean here: opcode space has to have been reserved for the C extension from the start, that part can’t have been an afterthought. It may have been badly designed still, but if so that must be for other reasons (working from a bad code sample is often cited).

> The encoding of constants into the 32-bit instruction word is really ugly and also wastes opcode space.

It kinda has to be to minimise fanout, and with it propagation delays and energy consumption. As a software guy I recoil in horror, but I can’t argue against faster and more efficient decoders. https://www.youtube.com/watch?v=a7EPIelcckk

> I'm also a fan of immediate data (after the opcode), which for RISC-V I would have made come in 16,32,64 bit sizes.

So was I, before I read the RISC-V specs. One possible disadvantage of separate immediate data is wasting instruction space (many constants are so much closer to zero than 127), making alignment issues even worse, and it could increase decoding latency. I would definitely do this for bytecode for a stack machine meant to be decoded by software, but for a register machine I want to instantiate in an FPGA or ASIC, I would think long and hard before making a different choice than RISC-V.


> The C extension was an afterthought

I understand that "afterthought" can be more of a subjective comment on the design than a concrete claim about the order of events, but still I'll quote directly from the RISC-V Instruction Set Manual:

> Given the code size and energy savings of a compressed format, we wanted to build in support for a compressed format to the ISA encoding scheme rather than adding this as an afterthought


From what I remember, Krste has advocated for compressed instruction sets with macro-op fusion in the uarch front-end for a while, and the design of the RV ISA is heavily inspired by this, so it’s not particularly surprising that SiFive (i.e. Krste’s (and others’) company) is opposing Qualcomm’s proposals. It will be very interesting to see what happens.


What I find particularly interesting is that SiFive never actually built a high-performance CPU core. Their highest performing IP offers IPC closer to ARM's current efficiency cores. And SiFive always kept very vague about other metrics about their CPU cores (like power consumption).


SiFive announces a new core with significantly increased performance mostly every October(ish), and has been doing so every year(ish) since U74 (the core in the fastest currently shipping SoCs) succeeded U54 in October 2018.

They have never built a core comparable to the fastest current Arm or x86 cores because they haven't been moving up the performance curve for very long. Just five generations at this point.

October 2017: U54, almost A53 competitive despite being single-issue

October 2018: U74 (dual issue), A55 class

October 2019: U84 (OoO), A72 class

June 2021: P550, A76 class

December 2021: P650, A78 class

October 2023: P870, Cortex-X3 class

SiFive can't talk a lot about power consumption because that depends not only on the core design but the entire SoC, the process node, the corner of the process node, the physical design and many other things that are under the control of SiFive's customers, not SiFive.


Not going to happen. All the tech working groups in the foundation report up through Yunsup, who's the benevolent tech dictator for life. Changing the spec wouldn't go through without his approval. And compressed instructions check a box for the low-end cores SiFive is selling. Code density is pretty crappy otherwise. Or at least it's crappy compared to the 8051s they're competing against in some corners. The low-end RV16 or RV32 are a little more competitive against Cortex M0/3/4s, but even then only when you have the compressed instruction extension.


Those low end cores do not implement the RVA or RVB profiles.


Not an industry expert.. just a curious onlooker, but this is not surprising in the slightest - though I am surprised the VC firehose has been turned off this soon. I thought they'd float along on stupid money for longer.

From the start they seemed overfunded and with no real good long term business model. They have no lock-in because of the whole open nature of RISC-V.

- They're not designing for any specific in-house purpose like Alibaba/Western-Digital

- The low end will always be dominated by Allwinner and company and they can't win

- The high end is competing with x64 and huge companies like Intel.. also can't win unless you're Apple

They make non-price-competitive, middle-of-the-road chips that nobody really needs other than RISC-V enthusiasts. And at the end of the day, effectively anyone can come in and do what they're doing at any point.

The latest announcement seems to show they're finally shifting to the strategy their Chinese competitors have been doing for years. Open RISC-V cores and then get lock-in with custom NPUs. I doubt they'll be able to compete with Chinese firms there though

Cool idealistic company, but at the end of the day making cool stuff isn't in itself a business model - and you gotta make money to pay the bills

RIP


Sounds like they were basically a victim of venture capital. First the pressure to grow (too) rapidly, then the gutting for failure to deliver. Growing a company with wild ambitions sustainably is hard. I feel sorry for the employees that tried.

And of course 9 women can't deliver a baby in one month.


I think it's an example of a company suited for venture capital. As far as I can see, there was no option other than to grow rapidly. They'd need to have matched Intel/Qualcomm/etc. to have any chance of success.

VC gave them money on the initial RISC-V hype but then the hype died down. There was like a good year or two with no real affordable products on the market and the software/toolchains were all half-baked. It also became clear that an open instruction set doesn't really bring you any huge "win". As far as I can see, the necessary subsequent waves of VC with deeper pockets never materialized (like MagicLeap had managed)

The Chinese competitors were also very quick and managed to catch up and in essence beat them to market - so they've lost any first-mover advantage


> I think it's an example of a company suited for venture capital.

I guess it makes sense from the investment angle. Ie. need to grow, and need to grow fast, therefore need for lots of cash, therefore need for risk takers.

But at the same time not so much for the angle of: is growth realistically possible for this type of company? It took ARM over 30 years to organically grow, with ups and downs, to what it is today, in an age where the foundry landscape was much less complex and wild chip design innovations were much more common. (We had SPARC, PowerPC, MIPS, PA-RISC, Alpha AXP, i960, 68k, Itanium, Transmeta in the nineties.)

From what I read, SiFive had broad ambitions and wasn't very focused. Not sure if it is viable, but expecting it to turn $$$ into competitive chip designs in a couple of years was a bit of a stretch.


68k is 70s

sparc, mips, pa-risc, i960 are 80s

arm is 80s too

transmeta didn't ship shit until 02000

i'll give you alpha, powerpc, and itanic tho


You're absolutely right of course and I stand corrected.

I was thinking of the cool companies (Silicon Graphics, Apple, NeXT, BeOS) and high end stuff (Sun, Hewlett Packard, DEC) of the pre-WinTel era, when CPUs had one core, integrated memory and peripheral controllers and even floating point co-processors weren't ubiquitous, and "graphics cards" did not include a "GPU".

Acorn worked with a single company (VLSI) to produce the first ARM chips. When ARM came of age, there wasn't a complex "foundry ecosystem" where dozens of companies specialize in different stages of what results in a SoC.


i feel like the complex foundry ecosystem makes wild stuff a lot more doable now

from my perspective i see a lot of totally wild stuff, not all of it successful: yosys, esp32, fram, reram, optane, tensilica, cortex-m0 socs, padauk's fppas, gigadevice, apple's m1 and m2, graviton, tpus, ambiq's subthreshold utter insanity, stm32g, the unbelievable explosion of photovoltaic, wch's ch32v003, gallium nitride, mram, wlcsps, jlcpcb's smd assembly service, mass-market lidar chips with picosecond timing, chalcogenide pram, greenarrays, silicon carbide, modern silicon mosfets (not to mention igbts), led streetlights, petabit interconnects in data centers, nvme ssds, the zillion variations of risc-v, bitcoin mining asics, oled displays (though those aren't chips), gpgpu, submillimeter phased-array mass-market products like starlink, indium phosphide amplifiers in oscilloscopes, amorphous-silicon-on-glass products from sharp hitting the mass market (look, ma, no cog!), and skywater's open-source pdk and the associated shuttle program

so i have a hard time agreeing that "wild chip design innovations were much more common" in the 01990s than now; from my perspective that could hardly be farther from the truth


yup, this is one of those cases where VC money makes sense, or at least the tactics would have to be completely different without VC money.


"victim of venture capital", who do you expect to fund such a risky business?


RISC-V has a lot of potential to eat ARM's lunch, and adoption by companies like Qualcomm signals a future industry shift to it. The problem for SiFive is none of that requires them to be around to see it happen. The success of the startup and that of the ISA are not inextricably linked. They did a lot of important work early on to cheerlead for RISC-V and to build much needed support and infrastructure for it. But this feels to me like the end of the beginning for RISC-V and a transition to a stage where the industry will drive it from here.

The other day Qualcomm announced a RISC-V based Android wearable SoC. Expect to see full smartphone SoCs in shipping devices in a few years. ARM's attempts to extract more value after going public, and its lawsuit with Qualcomm, are souring its relationships and driving customers to RISC-V.


But this doesn't answer the question as to where firms go if they want to license a core rather than design in-house. Arm grew in large part because firms decided that designing CPU cores wasn't a central competency and Arm would do a better job. And for a long time even a firm with the resources of Qualcomm couldn't outperform Arm.

Maybe (say) Tenstorrent will take up the mantle but just saying that 'the industry will drive it' doesn't really describe how Arm gets replaced.


> And for a long time even a firm with the resources of Qualcomm couldn't outperform Arm.

For a long time (around 2010 to 2016) the internally developed Qualcomm ARM core did outperform the cores you could license from ARM. Qualcomm's internal team fell behind and then Qualcomm started licensing the standard ARM designed cores. Now Qualcomm acquired Nuvia and wants to develop their own custom cores again. We will see how it goes.

Most of the old Qualcomm ARM CPU team went to Microsoft for a few years, got laid off, and now actually work for ARM.


Thanks for the correction / clarification. I agree 2016-2021 isn't really a long time!

Of course Qualcomm's use of Nuvia cores is subject to the ongoing legal case, so it's difficult to see how that will work out.


Qualcomm was still developing a 32-bit ARM core and then Apple came out with their internally developed 64-bit ARM CPU and surprised everyone.

This Qualcomm exec got shoved aside after making this statement.

https://www.pcworld.com/article/447916/apples-64bit-a7-chip-...

Qualcomm quickly switched to licensing a 64-bit core from ARM which was the Snapdragon 810 that had overheating problems. After that the next Snapdragon 820 with Qualcomm's internally developed 64-bit CPU was fine.

That same team was also developing their ARM server CPU but that whole project got cancelled and the team got let go around 2018.


But then every generation after the 820 used a Cortex core so their in-house core only lasted one year.


Yeah, that team switched exclusively to the server chip which was cancelled a few years later and they laid off 500 engineers in that team.


Qualcomm's Znew proposal is basically an attempt to skirt ARM royalties while keeping a design that is very close to ARM64 (basically, throw out 16-bit instructions then add some new modes and complex 32-bit instructions ARM uses to make up for it).

This smells of them trying to convert Nuvia from ARM to RISC-V so the entire Nuvia case basically goes away before they get forced to pay out a lot of money.


Arm's case isn't about the ISA, it's about use of IP developed by Nuvia with help from Arm under Nuvia's Arm license being used in a way that isn't compatible with the terms of Nuvia's license.

IANAL but I'd be astonished if they get away with that IP making its way into RISC-V designs, if Arm win.

Plus, they won't have anything RISC-V based available in anywhere near the required timescales. They're still arguing about the ISA after all!


ARM's case is that a uarch is necessarily tied to its ISA and that their license isn't transferrable because the contract plainly says so.

If Qualcomm can show up and say "here's our uarch running a different ISA", it disproves that point and at most leaves dispute about some patents where Qualcomm can probably get a quick settlement for far less than the royalties would cost them.

The whole point of Znew is to transform RISC-V into something so similar to ARM64 that they can swap out the decoder and be good to go.


Decoupling the ownership of the ISA from ownership of the silicon means chip makers can compete more openly. Anyone with chip design capabilities can design and market their own premade RISC-V cores or offer custom silicon design services.

With ARM they acted as a gatekeeper through their licenses. Anyone who wanted to design ARM based chips needed a license and their licensing structure has gotten more complicated and expensive over the years.


Sorry, this completely misses the point that many firms don't want to design their own CPUs, which is why Arm became so successful in the first place.


No, I'm not missing that point. My point is Qualcomm, Samsung, Intel, whoever, doesn't have to pay a gatekeeper a license fee to design RISC-V CPUs. They are also free to design and sell premade cores similar to how ARM does it today, sans the ISA license structure.


> Free to sell pre-made cores

You do know that Arm makes a tiny amount on each core licensed and that none of the firms you mention will have any interest in that business model. If they’ve spent a lot of money on a competitive design they will not be handing it to competitors for peanuts.


Are you saying Qualcomm doesn't have interest in selling cpus to customers without paying a third party a licensing fee? That doesn't make any sense. Companies are always interested in lowering their overheads.


> But this doesn't answer the question as to where firms go if they want to license a core rather than design in-house

My original comment. Qualcomm won't be doing this. Or at least not for a fee that is in any way comparable to what Arm charges.


I mentioned it elsewhere but a big draw is FPGA soft cores or other forms of embedded CPU stuff. ARM licensing terms are bonkers so riscv has a major advantage if you need something better than a NIOS II or Microblaze (soft cores that Intel and Xilinx provide respectively).


I worked at a company that designed whole systems from chips on up. I thought it was odd that they chose MIPS as a base architecture, until someone explained that the alternatives all had far higher licensing costs. IP licensing for other parts of the chip was already their second biggest cost after payroll, so I guess that made sense.


with so much of xilinx' lineup having hard arm cores built in, what is the driver for soft cores?


Ease of use. The area of a small soft core is trivial. And having a small CPU core all for yourself and dedicated to manage one piece of hardware without having to deal with an operating system and other processes interfering is incredibly convenient.

See my other comment.


Flying radiation hardened FPGAs is a big one. Just because a chip exists doesn't mean it's usable for any given application.


The latest microblaze is already RISC-V under the hood, but of course other vendors can be used too.


I feel like it will converge to just having RV + FPGA chips rather than people wasting precious FPGA space on softcores. Zynq's are plenty popular already


It's common to have high performance CPU cores for the main tasks yet still have small microcontroller cores sprinkled all over the place for specific sub-block management, even in FPGAs.

For example, a small 5-stage pipelined VexRiscv CPU takes around 2000 logic elements. That's nothing in today's large FPGAs. Add, say, 4KB of RAM and put it right next to the complex HW core that you want to manage and you have the equivalent of a complex FSM that's programmable, without any worries about having to share cycles with other processes and missing out on events with hard real-time requirements.

It's done all the time, and it's great.


Easier said than done. Lots of Zynqs on orbit and elsewhere right now but if you want to do anything at all custom you're stuck with soft cores because regardless of who you are, I know for sure you don't have the money to do a custom chip.


I worked at SiFive for about a year in 2018. It was a very poorly organized company with management that was either absent (Shinoy, Sherwani) or unfamiliar with how teams were managed outside academia (Lee). It seemed like Samsung and Intel invested in SiFive mostly to make them (SiFive) look like a going concern so they (Intel) could negotiate better deals with Arm.

There were PLENTY of very smart people there, but after buying OpenSilicon it was sort of hard to figure out what they were trying to do. Are they an IP licensing company? Do they design custom cores (or SoCs) for people? Do they make catalog parts? Are they a software company?

And then they kept asking too much money from Intel (which had some problems of their own) and they (Intel) walked away from the deal. I don't think SiFive could recover from that.


Worked on a project using them a couple years ago and it was a complete shit show. Stuff like atrocious documentation, spaghetti code demo projects and very difficult to reach support. We surmised that their auto core builder thing was actually probably a few people manually configuring and firing off build jobs rather than any truly automated/turnkey process. They also lacked a lot of peripherals/interfaces that you'd expect or, more accurately, need for any product leveraging the performance the cores could theoretically provide.

Long story short, this isn't super surprising given that it basically killed our product.


Can you say more why you chose them for your product? I can't think of any conceivable reason you'd choose them over other options.. unless being open-everything is a core part of your brand/mission (which is completely fair, but pretty niche)


FPGA soft cores for space applications. You can't (or at least couldn't at the time) license ARM cores for FPGA deployments outside of R&D because ARM is psychotic.


We've had people consider Ibex for space applications, well verified and has a dual-core lockstep option: https://github.com/lowRISC/ibex.

An ETH Zurich team have done a triple core lockstep version for cubesats: https://www.theregister.com/2023/10/05/riscv_microcontroller...


https://www.gaisler.com/index.php/information/servicessuppor...

Gaisler is also currently one of the biggest players there IIRC. I've used them for stuff in the past.


>can't license ARM cores for FPGA deployments

I believe this is because ARM wasn't confident they'd be able to enforce licensing effectively.

For non-FPGA designs, they only need to have insider knowledge from a handful of fab companies to detect those using cores without valid licenses.


ARM has a team involved in accounting and finding "catch-up revenue." These are royalty payments that were supposed to be paid but that the customer "accidentally forgot" to pay; thanks to ARM's "helpful accountants", the money gets sent over.

How exactly ARM does this isn't always clear. For hard IP there is a GDS layer called IP tags and the foundry is supposed to scan this layer and can report numbers back to IP providers like ARM, Cadence, Synopsys, etc. If the customer removed this layer "accidentally" then the foundry can still scan for certain patterns and structures within the GDS mask data. Like a unique hidden watermark and report back to the IP vendor.

For soft IP that is synthesized there are probably other ways to do it but I'm not that up to date.


Isn't LEON3 the go to core for these sort of applications?



AFAIK you can get cortex M1 and M3 softcores in the last few years at least. I don't know about anything bigger though.


Which are very old, completely obsolete cores.


Space applications tend to use older cores and processes anyway because they're more resistant to radiation.


Less inherent robustness, more just general tendency towards "solving tomorrow's problems with yesterday's technology"

There are companies that take commercial parts and nuke them under a beam at places like TAMU to profile their performance then resell to aerospace. The issue is by the time this flight qualification happens the parts are old as hell.


For small microcontrollers you can just buy modern radiation-tolerant chips from e.g. Microchip for about $2000 or so per unit, but you need a certain expertise to select the other components you need and to design/qualify. Or you can use any one of dozens of different cubesat CPU boards, which typically have COTS components and have gone through radiation screening (which is way more work than "nuking under a beam"). These are typically sold for cubesats, but don't meet the qualification and documentation requirements of an ESA or NASA-related project beyond a cubesat.

There are the famous "old as hell" rad-hard powerPCs which cost $200k, used for historical reasons or extreme environments like a fly-by of Venus. It's true that older chips may be inherently more radiation tolerant due to larger features on the silicon but it's a misconception that CPUs used in space are therefore always old as hell. These are usually selected because they are good enough and to reuse design and qualification from a previous project instead of spending millions and ten person-years on a new design.

But commercial spacecraft these days usually use something like this[1] triple-redundant 50MHz softcpu implemented on an FPGA or this[2] Ultrascale ~1Ghz SOC protected by a secret sauce of separate FPGA for watchdog and overcurrent detection. Both are modern, run linux and use modern development environments/tools. The former is slower but can recover upsets in realtime, while the latter is much faster but needs to reset itself to recover at least some types of upsets.

[1] https://www.aac-clyde.space/wp-content/uploads/2021/10/AAC_D...

[2] https://xiphos.com/product-details/q8


But there's still a good reason older hardware is preferred. The wider traces on older hardware means less susceptibility to random bit flips due to the higher background radiation. The extremely narrow traces of modern processes are already vulnerable to quantum effects on Earth where radiation is significantly attenuated.


We're talking about softcores here, not fabrication processes, flashing an FPGA with an older softcore design doesn't somehow make it more resistant to interstellar radiation


Very old maybe, obsolete not so much. The M3 is still being designed into many, many applications today (though of course so is the 8051).


[flagged]


> nothing else left to say but wishing you to have a psychotic episode yourself.

That remark sounds like the pot calling the kettle black. Two wrongs don't make a right you know.


Awesome news. The world is healing.

Having a big player as the "ARM of Risc-V" funded by VC was so toxic. It takes the oxygen out of the ecosystem.

The next step in open hardware is not having more proprietary silicon shops, it's streamlining the manufacturing process to make it look more like pooled PCB manufacturing, so that open collaborative groups can cheaply iterate their designs.


The problem is that the interesting bits in the VLSI space right now aren't in digital design.

Nobody needs another processor. Even an old-ass MIPS core is good enough.

The interesting bits are RF, ADC, DAC, SerDes, high efficiency DC-DC, low leakage designs, etc. RISC-V does not one iota of good for any of these things. Nor do these things need 5nm technologies--180nm or 250nm would be just fine though you'd probably have to use 120nm just because everything else probably has too little fab capacity left.

Which is a shame because that is precisely the path RISC-V needed to take to unseat ARM. It needed to be really good in the under 10 cents category with some decent analog peripherals such that it could expand upward and eventually eat ARM.

The under 10 cents category of microcontrollers is an absolute shitshow and has been for 10+ years. RISC-V could have brought unified tooling and architecture to that space. However, that isn't sexy. It would only let you ship a zillion chips and make reasonable profits. And that's just not VC compatible.


What's interesting to one person can be boring to another. I'm assuming you are on the analog side and find that stuff interesting. I'm on the digital side and was in a serdes team for 10 years and I'm glad to be gone.

You're absolutely right though that companies pick the process node based on the application. Everything I'm working on now is 5nm or smaller. Our customers DO want faster processors but we also have lots of serdes for 400+ gigabit networking and other stuff. I have friends doing DC power converters in 180nm. That's over 20 years old now but still useful for a lot of applications.

I worked at another startup and you're right about some things not being VC compatible. They want big investments and big payoffs. They aren't interested in investing in companies with lower risk but steady smaller profits.


It takes $30 million for masks in 5nm. EDA tools for physical design are over $1 million per instance and you may need 20 to 200 licenses for a big chip.

Total compensation for engineers is in the $150K to $400K range in the US.

This isn't getting cheaper.

PCB manufacturing is many orders of magnitude simpler and cheaper and easy to do at the hobbyist level.


I disagree.

Manufacturing processors is expensive. You need a big successful company to show that it's possible and profitable to invest.

Long term, yes I kind of agree, you run the risk of extend and extinguish if you get one dominant player, but you aren't going to get to the point of RISC V being successful by shunning big companies willing to invest. You just need to make sure they're more red hat or sun (pre buyout).


Manufacturing processors is expensive.

Not really. I'd say designing processors is expensive in engineering costs, and open designs are already pretty impressive and can be used as starting points. It's also expensive to optimize a design for a given process, which is necessary to get the best performance (probably by a factor of two?) from the design/process. So I'd expect the foundries to start offering optimized RISC-V cores to SoC designers as part of their offerings - especially as we reach the end of scaling.


Ok, the designing, and tooling, testing etc.

But that just makes it worse.

If you have to spend £5million over 5 years to design something, you need to get funding until the point where you're actually selling products. If all the cost is in the actual manufacturing, you don't need the capital outlay early on.


If “healing” is the ultra wealthy holding on to their money or investing it in financial instruments instead of tech startups then god save us all. I don’t understand the animosity towards VCs that take risks on entrepreneurs like us.

Open source collaborative groups are not going to do shit in this space piddling away in their garages or community workspaces. It takes hundreds of millions of dollars to build a fab that can make competitive chips and beer money donations are not going to get us there.


You would need several billion to build a fab, a few hundred million might get you a 5nm chip or two


SiFive is a fabless proprietary chip designer; they build no fabs. Yes, building fabs is a heavy industrial investment, and thus less amenable to open models. Drawing the masks is not.

Lot of VC work is regulatory arbitrage: how to steal the flowers from the public park without going to prison. That's why they are so proud of all these local sectoral monopolies they established while the lawmakers were asleep at the wheel, or bought, so the normal limits on profit in a market economy, through competition, are suspended.

Bulk of VC compensation is management fees, wisely based on the "head I win tail you lose" model and losses are often outsourced to ordinary folks via institutions like the Ontario Teachers' Pension Fund.

Money is just the scoring system of the economic game. If the rich play zero sum games with the points with each other it's harmless, and much better than malinvestment in Yachts or Web3 platforms where actual steel and engineering capacity is taken away from better use cases.


No one seems to be talking about the elephant in the tent, i.e. that SiFive's IP/implementations aren't very good in terms of performance or even price/performance. There's a small audience (mostly in this thread :) who'll buy any RISC-V for the sake of novelty, but the vast majority requires at least parity with commonly available ARM (the primary competition), and SiFive isn't it.


I think another factor is that at the high end of RISC-V, Jim Keller (https://en.wikipedia.org/wiki/Jim_Keller_(engineer) ), has his own company that is competing with SiFive: https://tenstorrent.com/risc-v/


Have they commercialized any of their designs? I don't think they are competing with SiFive yet!


Reminds me of a story from one of the startups I worked at. Our CEO found himself in the elevator with the CEO of another company.

Other CEO: Ah yes, <company>. I guess that makes us competitors.

Our CEO: We'll be competitors when you ship a product.


LG bought a license for Ascalon, their large (8-wide) RISC-V implementation supposedly competitive with Zen5.


Woah! Is it mostly for internal use for lg electronics? I'm excited to see the performance in real life.


IIRC the announcements talked about smart TVs.


I love the new CPU wars.


A shame, but also unsurprising. I never understood how they could have such a high burn rate for a technology that hadn't really taken off in the markets that they were targeting.

RISC-V is/was vitally important in putting downward pressure on ARM licensing. But large companies were never going to move off ARM for their workhorse CPUs. They'd use the threat of moving to RISC-V as a way of getting sweetheart IP deals from ARM, but no one actually wanted to start over on a new software transition while the move to ARM was still unfinished.


Sounds like the story I have heard told a few times about how companies in the 60s, and 70s would make sure they had the business card of an Amdahl sales rep on their desk when the IBM sales rep came to visit (Amdahl apparently made their business selling mostly/somewhat IBM-compatible but much cheaper mainframes.)

All I can say is a general comment as a consumer / "prosumer": I recently went looking to build out a high core count homelab server built on either RISC-V or ARM, because I wanted lower power consumption, a higher core count, and, well, was interested in something non-x86 anyways. When I went looking, I found there is still nothing out there in the ARM world other than Ampere servers, and those are unavailable to end consumers. We're decades into this ARM thing, but there are entire market segments where ARM looks like it'd be applicable but there are no products (ok, other than Apple). And yet I also found that RISC-V seems to be making early moves into this kind of space - e.g. I can preorder and actually buy the Milk-V SOPHON-based "Pioneer" board, which is a 64-core RISC-V board in mATX form factor I can put in my own case, etc., but nothing like this seems reasonable on the horizon for ARM. Still.

My question is: is this a product of the viability (or lack of it) of these segments/ businesses, or does it have something to do with ARM licensing?

Anyways, I'm bullish on RISC-V. I played with PicoRV32 a few years ago on hobby FPGA projects; built out my own little primitive SoC on an Artix A7 board, wrote my own little operating system, it was great fun. The openness of all this stuff is fantastic and necessary for producing innovation. I feel like the nature of RISC-V will naturally tend towards producing more diversity of products than ARM has, even if the investment environment goes through a bit of a contraction for a bit.


> RISC-V is/was vitally important in putting downward pressure on ARM licensing.

Given current laws. I wonder if the world wouldn’t be better off with strong interoperability laws, where ISAs are simply neither copyrightable nor patentable. This would instantly kill ARM of course, but their ISA (well, ISAs) would live on. I love the design of RISC-V, but I find it kind of sick that being forbidden to implement one’s own ARM or x86 core is such a big reason for its success.


It wouldn’t kill Arm. They would still license their own cores.


Oops, my mistake, sorry. I still think it would seriously cut into their profits, but you’re right.


Badly wounded, maybe?!


You're not wrong. I think the move to riscv will happen but not in any time span we can make predictions inside of (~10 years+). ARM has been awful to license stuff from and riscv has been great for keeping them from going absolutely mad with power.


it's already happening. Nvidia and Western digital are already shipping riscv in production. high performance riscv is likely closer to a decade away, but it's already doing a pretty good job of replacing the embedded zoo


But why would NVIDIA ever ship a RISC-V vector unit? They have no incentive to.

