Google wants RISC-V to be a “tier-1” Android architecture (arstechnica.com)
393 points by soulbadguy on Jan 3, 2023 | 212 comments



Google "wants" that, but doesn't seem to push forward on that matter. AOSP gerrit received risc-v jvm CLs 2 years ago from external contributors, and didn't merge any. (sorry, I have no source from Google's gerrit since they regularly clean it up, but you can find https://www.theregister.com/2021/01/21/android_riscv_port/ for related info )

The current state of Android RISC-V support, two years after those ignored contributions, is literally "we have a working libc", which isn't great. My understanding of Google's presentation is that Android 14 (Q3 2023) will have interpreter-only JVM support for RISC-V, no JIT/AOT. (I'm surprised there is no pure C interpreter that would work anywhere, but well.) So decent RISC-V support in Android may happen in Android 15, so Q3 2024.

Will there be weird ABI/ISA issues that will need a few versions to iron out? Hard to say; armv5 didn't go smoothly, nor did armv7 (hi, NEON-less Nvidia), and RISC-V's modularity might prove challenging in that regard.

Overall, even though Google/Android doesn't seem as pushy on risc-v as they pretend to be, I'd say Google/Android will indeed be ready when the risc-v industry is ready for prime time.


Nobody really wanted to integrate RISC-V support anywhere while the ISA and ABI were unstable. Those only really got finalized last year, so I am not surprised that it took this long.


This actually seems like the most reasonable explanation to me. It wasn't stable, nobody was making (or planning) RISC-V devices on which you'd want to run Android, and compiler support was far behind. We're in a very different place now; I would have made that same call (though perhaps would have given more reasoning to reduce speculation).


Is this more about the ABI? I couldn't find incompatible userspace ISA changes in many years (things seemed to have settled down even back in 2016: https://riscv.org/wp-content/uploads/2016/06/riscv-spec-v2.1...). The RISC-V Linux port was also merged to mainline in 2017 and has presumably been living under the stern binary & ABI compatibility rules there.


Since the chosen set of ISA extensions affects the ABI in RISC-V, getting agreement on a baseline RISC-V ISA for Linux-based systems was extremely difficult. On top of that, the ABI stabilization work couldn't begin until the required ISA extensions for Linux were locked in. So it was all a mess.

(It still is a mess, to be honest...)


My guess is that they've only recently been asked by handset manufacturers to consider non-ARM chips; for example, this would be a great way for Qualcomm to gain leverage over ARM given the recent lawsuit.


Android has official x86 & x86_64 support and used to have MIPS support back when MIPS still existed, so this isn't the first "not-ARM" support to happen.

But the change from ignoring risc-v contributions to saying it's important probably did come from a significant SoC manufacturer pushing for it, yeah.


Another possible guess is that Google wants to start making its own CPU cores to beat Apple. (I lean more toward your guess, but I've seen so much Pixel-first AOSP development that I have to wonder.)


I'm aware of at least one Chinese manufacturer building a RISC-V based smartphone


On what timeline? Are there any suitable SoCs yet?


This one is what I'm aware of: https://sipeed.com/licheepi4a/

Alt: https://linuxgizmos.com/lichee-pi-4a-risc-v-platform-availab...

At the bottom they say they have a tablet and phone planned, though no info on when.


The timeline looks similar to when I expect the first RISC-V consumer devices to be ready, so they'll be ready in time for that.

The downside is that developers won't be able to test their stuff on the dev boards that are already available, which may harm the ecosystem on launch.


What would be the correct way to handle this in your opinion then? Invest in migrating a massive OS onto the new arch before it's stabilized and has a working SoC with 3D acceleration (which Android needs)?


It's exciting that Google specifically requires memory tagging support and the J extension (Standard Extension for Dynamically Translated Languages) from RISC-V processors that are going to run Android: https://youtu.be/70O_RmTWP58?t=651


Not a hardware/chip person here, so I found this text helpful regarding memory tagging:

"As one might expect, the Arm64 architecture uses 64-bit pointers to address memory. There is no need (yet!) for an address space that large, though, so normally only 48 of those bits are actually used by the hardware."

...

"MTE allows the storage of a four-bit "key" in bits 59-56 of a virtual address — the lower "nibble" of the top byte. It is also possible to associate a specific key value with one or more 16-byte ranges of memory. When a pointer is dereferenced, the key stored in the pointer itself is compared to that associated with the memory the pointer references; if the two do not match, a trap may be raised."

From this article at the ever-wonderful lwn.net: https://lwn.net/Articles/834289/
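
To make the bit layout concrete, here's a minimal C sketch of where that 4-bit key lives in a 64-bit pointer. (Illustrative only: real MTE tagging goes through dedicated instructions and hardware checks on every access; these helper names are made up.)

  #include <stdint.h>

  /* Hypothetical helpers; assumes 64-bit pointers. */
  static void *set_key(void *p, uint8_t key) {
      uintptr_t v = (uintptr_t)p;
      v &= ~((uintptr_t)0xF << 56);        /* clear bits 59-56 */
      v |= (uintptr_t)(key & 0xF) << 56;   /* store the 4-bit key */
      return (void *)v;
  }

  static uint8_t get_key(const void *p) {
      return ((uintptr_t)p >> 56) & 0xF;   /* read bits 59-56 back */
  }

With ARM's top-byte-ignore, the hardware masks those bits off on dereference; with MTE enabled, it instead compares them against the key associated with the 16-byte granule being accessed.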


Defining the requirements is one point, one task in that list; developing J is another.

Note that J is not a ratified extension, or even close to being so. It has been moving slowly for years (it was a low priority), and now it is finally getting some attention, as higher-priority items (such as H, B, V and Zc) are either done or in their final stages.


Could Google pull another QUIC here? "Here's a thing we implemented and deployed, wanna polish it up and call it http2?" (Yes, I'm ignoring nuance here)


SPDY became http2, QUIC became http3.


You're right, but I can't edit anymore unfortunately.


It could. Major vendors have contributed extensions before. Some of them evolved to standards, whereas some were only used as extra input.


There is also "Member Day Session: J Extension Group Meeting"

https://www.youtube.com/watch?v=0EEzXsA--9Y


Someone fill me in - are all my understandings here correct?

1. ARM is a RISC architecture, but it is proprietary/licensed, subject to export law, etc.

2. ARM is having a heyday, powering just about every mobile device, plus Apple's recent M1/M2 chips, and is also becoming more common on supercomputers and on servers

3. RISC-V is also a RISC, as the name implies, but is owned by a non-profit and intends to be open and easy to license

So next I would ask, do recent developments that make ARM so promising also make RISC-V promising? Do the decades of work that it took to make ARM the rising star also apply to RISC-V? How far away is RISC-V from being competitive with ARM at a technical level? Or is it already there?

Bonus question... given that both ARM and RISC-V have similar goals, how feasible is an abstraction layer on top of them? Meaning, five years from now, how possible is it that I have a RISC-V phone that can run ARM binaries, and an ARM phone that could run RISC-V binaries?


>How far away is RISC-V from being competitive with ARM at a technical level? Or is it already there?

RISC-V caught up, key functionality wise, with the set of extensions ratified in December 2021.

There are key advantages to RISC-V, licensing aside. It leverages industry experience and avoids many pitfalls, thanks to not dragging along any baggage.

Field experts such as Jim Keller sing its praises.


The ISA standard only specifies the instruction set. The actual technology in pipeline designs and gate layouts is still proprietary. Are there significant advantages to having an open ISA? Better software support?


Even ignoring RISC-V's already excellent toolchain and software ecosystem support, and the license everybody gets for free (which is a huge deal), there are technical reasons.

RISC-V is designed with great care taken to weigh all decisions so as not to hamper any scope of implementation, from the lowest-power microcontrollers to the fastest supercomputers, and everything in between.

This is unlike e.g. ARMv8/aarch64, which imposes too much complexity (>700 instructions) on small implementations, and hampers all implementations due to low code density.

In small implementations where performance isn't paramount but 64-bit is required (e.g. for addressing, as is the case in many specialized tiny cores embedded in larger SoCs), bad code density increases ROM and/or RAM requirements as code takes more space, thus area and power. If 64-bit is not needed, aarch32 can be used, which has much higher density; yet as of the recent B and Zc extensions, RISC-V is denser still, and can be tailored to the requirements, down to as simple as ~42 instructions and 16 registers (rv32e).

In large implementations, such as Apple's M1/M2, aarch64's awful code density imposes a large L1 instruction cache in order to keep the pipelines fed. Large L1 caches are very costly, as latency, area and power go up quickly with cache size, and maximum clock drops as size increases. Keep in mind that, in current fab nodes, SRAM is far more costly than logic.

In contrast, RISC-V offers industry-leading code density through a form of variable instruction size that takes great care to not complicate decoder parallelism. There are already multiple large scale RISC-V cores announced with 8-wide decode (like Apple M1/M2).


> RISC-V is designed with great care taken to weigh all decisions so as not to hamper any scope of implementation, from the lowest-power microcontrollers to the fastest supercomputers, and everything in between.

RISC-V was consistently designed so that the simplest implementations remained as simple as possible. When design tradeoffs meant more complexity for more performant designs, any possible detriment was determined to be unimportant.

Notoriously, this is the whole "big cores can fuse instructions; that's got to be free for them, right?" Though the V extension also has a couple of fun ones with re-using v0 for masking (oh a big core can just keep track of whether it's a mask or not and switch which set of registers it's renaming from) and vsetvl (yeah let's make a big core speculate what effect it'll have on subsequent instructions)

> aarch64's awful code density

All the code density comparisons I've seen have been static. Do you know of any dynamic comparisons? I suspect 10% less dense code than RISC-V C (but still denser than x86-64) isn't the primary reason for having 3x more L1I than I think any RISC-V or x86 design currently has...

And even then, ARM for instance decided it was worth spending 50% of L1I cache area on a MOP cache for various reasons, a key one being that their implementation shaves off a cycle in branch mispredicts. If there's no predecode info stored in cache, I can easily see a predecode stage adding a cycle to the pipeline depth over assuming fixed-length. And if it is stored in SRAM you lose some of the codesize benefits...


Great point. At the low end, there are the 8-bit microcontrollers that cost a few cents. Far larger than that, a fast 64-bit multicore chip, costs at least a few dollars. There's a large space in-between. A current laptop or desktop machine will have many, perhaps literally dozens, of 32-bit microcontrollers on the motherboard and in the peripherals, often integrated into a larger IC. There are multiple processors, besides the main processor, in most mobile devices. In recent years, this area has been largely controlled by ARM.

For something very simple, an in-order 32-bit processor with nothing but simple integer features, RISC-V does appear to be a bit simpler to implement than the current embedded ARM instruction sets. Less state, fewer and more regular instructions. In a large machine throwing away some tens of thousands of transistors on that is an irrelevance. But in a 10 cent microcontroller it is not. As 32-bit and 64-bit computing displaces the 8-bit embedded world, I think RISC-V has a small technical advantage in that area at least.


> […] aarch64's awful code density […]

With all due respect, this is complete nonsense. You keep making this claim, yet have never produced any credible evidence other than a single bogus «ls -l /bin/bash» ages ago for three – x86, aarch64 and RISC-V (was it RISC-V 64 or 32?) – Linux distributions where the code was most likely compiled with «-O2 -g». If you want to substantiate your claim, you had better provide an objdump output for the _text/.text section that will reveal the actual (i.e. pure) code size footprint.

Let's pick this apart. The same random C++ code compiled with the same GCC v12.2.0 for 1) aarch64 (a generic 64-bit aarch64), 2) RISC-V (a generic 64-bit target), and let's also throw 3) POWER64 and 4) MIPS64 in for giggles:

– aarch64: https://godbolt.org/z/r4WPPj8nG

– RISC-V: https://godbolt.org/z/sr6nb8sTc

– ppc64: https://godbolt.org/z/PMrhe5W8z

– mips64: https://godbolt.org/z/sdKjncEfq

If we filter out empty lines, labels, assembly pseudo-instructions and directives, we get the following results (in ascending order) as the number of actual instructions for each ISA:

  aarch64: 
  $ rg -cv '(^ *\.)|(^$)|(^[ A-Za-z0-9\(\)_]*:)' /tmp/meta.arm64-v8a.S
  814

  RISC-V 64-bit:
  $ rg -cv '(^ *\.)|(^[ A-Za-z0-9\(\)_]*:)' /tmp/meta.risc-v.S
  860

  ppc64:
  $ rg -cv '(^ *\.)|(^$)|(^[ A-Za-z0-9\(\)_]*:)' /tmp/meta.ppc64.S
  945

  mips64:
  $ rg -cv '(^ *\.)|(^$)|(^[ A-Za-z0-9\(\)_]*:)' /tmp/meta.mips64.S
  1030

aarch64 has, in fact, the lowest instruction footprint and came out on top, with RISC-V trailing behind (but comparable nevertheless). Surprisingly, ppc64 did more than 10% worse than aarch64, and mips64 came in last (unsurprisingly).

Conclusion: aarch64 does not possess the «awful code density» and is comparable to or better than that of the RISC-V 64-bit architecture.


Number of assembly instructions is not strongly related to code density. In any case you need to look at dynamic code density to draw any conclusions about I$ pressure.

Even if your comparison were valid, it is affected by the quality of the compiler: current compilers are not very good at exploiting RISC-V compressed instructions, and generally have not had as much attention as those for older architectures.


And that is why I used the same compiler for all ISAs: gcc 12.2.0. The GIMPLE IR intermediate product is the same for all architectures; however, the backend is specific to each ISA, so code generation peculiarities might or might not be an issue. GCC code output quality is above average or high for all active mainstream architectures.

Also, RISC-V compressed instructions are RISC-V 32-bit, therefore a comparison with 64-bit architectures is not applicable.


> Also, RISC-V compressed instructions are RISC-V 32-bit, therefore a comparison with 64-bit architectures is not applicable.

No, both 32-bit and 64-bit RISC-V have compressed instructions, and they're nearly identical (IIRC, there's one compressed instruction which is different).


>yet have never produced any credible evidence other than a single bogus «ls -l /bin/bash» ages ago

Wait WHAT? I strongly suspect you're confusing me with somebody else.

Please use `size` from binutils, if you're trying to compare code size of binaries.
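
For example (GNU binutils output; the numbers here are placeholders, not real measurements):

  $ size ./a.out
     text    data     bss     dec     hex filename
    12345     678      90   13113    3339 ./a.out

The text column is the machine code (plus read-only data on some targets), which is much closer to what you want to compare than ls -l, since the file size also counts symbols, debug info, and padding.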


Does this account for the ubiquitous riscv compression extension?


Yes and no. The GP is counting instructions, which is approximately the same between ARM64 and RV64. Others are talking about code density where RISC-V currently has a significant advantage due to compressed instructions.

-----------------

For the above code segment, though, I selected the compile-to-binary option and the program failed to compile. So I don't know what's up with that.

On a relatively short chunk of C++ code, you can see the compressed (16-bit) instructions in the binary output:

https://godbolt.org/z/jzjzrrfMP

Some instructions, such as the ones with a larger immediate load, are 32-bit.

Same version of GCC / G++ as the above.

The actual code size of the ARM64 build is bigger:

https://godbolt.org/z/Wq7jKz5rY

I'm not sure why all the NOP instructions are in there, but even discounting that, all the generated machine instructions are 32-bit. Is there another option for GCC that will help? Or am I mistaken in thinking there is a Thumb2 equivalent for this target?

Edit: Answer: No. A64 is fixed-length:

https://developer.arm.com/documentation/den0024/a/An-Introdu...


For the sake of savouring a civilised nerdy discourse, may we stick to the same code and set a rule that only compiler switches can be fiddled with? Comparing 3 lines (after discarding empty lines, curly braces, function and variable declarations) of unrelated code (which is not C++ code anyway) yielding 5 or so CPU instructions is boring and is not enlightening in our quest to reveal the light and unveil the path to truth.

The piece of C++ code I have uploaded to Godbolt was floated on HN a few years ago as an instance of template metaprogramming that crashed one C++ compiler and resulted in an OOM process kill for another (for the record, it was clang++ and g++, although I can't remember which one did what). Both compilers have been fixed since then, but the code is still interesting due to it being: a) terse, and b) resulting in an unusually large number of [mostly useless] numeric computations that a C++ compiler yields by virtue of the template expansion. So I have retained a copy of the mischievous code. Template expansion is common in C++ template metaprogramming, and is extensively used in C++ scientific libraries (e.g. BLAS, as well as others).

When it comes to the generated code footprint comparison, the numeric computations are a good metric to assess.


I compiled that code with slight modifications (needed an actual func1 function) with a cross-compiler I had laying around into actual executables:

cross g++ version 8.4.0 for aarch64: 13712 bytes

native g++ version 11.3.0-3 for RV64GC: 13968 bytes

... but that includes a bunch of startup stuff, so that's not a good measure.

Modifying the code to generate binary on godbolt:

ARM64, 3816 bytes: https://godbolt.org/z/vn3de5nEc

RV64GC, 3226 bytes: https://godbolt.org/z/4ovMTaxo9

So an 18% increase in code size for ARM 64-bit vs RISC-V 64-bit. As you can see in the generated code, there are 16-bit compressed instructions by default with the RV64GC binary.


This is excellent, thank you. I was under the impression that the compressed instruction set required a dedicated compiler switch, and counted the number of instructions instead. Now that «X is 18% smaller than Y» is quantified, there is circumstantial evidence, and it gives a clearer idea of the differences between the two ISAs. Yet it does not validate the absolutist and unquantifiable «awful code density» statement.

I have also noted that compressed and uncompressed instructions are unevenly interleaved in the generated code, and occur in random sequences, i.e. addresses 0x10a60 ÷ 0x10a72 follow the 16-32-16-16-16-16-32-16 bit sequence whereas most other instructions follow 16-32 bit or 16-32-32 / 16-16-32 bit sequences. I wonder what the performance penalty for the instruction decode unit in high-perf RISC-V cores is going to be for mixed compressed/uncompressed code, especially if the instruction being decoded spills over into the next L1 i-cache line (or, worse, into another memory page). RISC architectures are generally averse to misaligned memory access.


The compression extension is RISC-V 32-bit. All code has been compiled for 64-bit architectures for a comparison.


That is not correct.

Compressed instructions are available for 32-bit (RV32) and 64-bit (RV64). The compressed instructions differ slightly between the two though. See my other comment in this thread.


I am more than happy to stand corrected, if a correction is warranted. What is the gcc 12.2.0 -march= value to make it emit the compressed 64-bit code? gcc 12.2.0 rejects all the -march RISC-V values the gcc documentation cites; e.g. rv64i is not accepted, even though it is documented. I had presumed it was -march=rv64Zc[…], with […] being a combination of a/b/c/d/f/cmp/cmt, and none of those was accepted.

Also, according to https://wiki.riscv.org/display/HOME/Specification+Status, code size extensions are in a review state slated for a ratification in Q1 2023, with «ABI issues being worked» mentioned in the notes. My interpretation is that the compressed instructions extension is still work in progress, and is a moot point to discuss at this very moment.


> My interpretation is that the compressed instructions extension is still work in progress, and is a moot point to discuss at this very moment.

The compressed instructions extension (the C extension) has been finished and standardized for a long time. In fact, all mainstream desktop Linux distributions which have ported to RISC-V have standardized on RV64GC (aka RV64IMAFDC, with the extensions I, M, A, F, D, C), that is, they require compressed instructions.


So what is the gcc (or gas via gcc) compiler switch to make the RISC-V compiler backend yield compressed instructions? I have been unable to find a working one in the documentation, and the hive mind has been of no help either.


Compressed instructions are generated by default with recent versions of GCC for RV64GC and RV32GC.
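
E.g., with the Debian cross toolchain (package names and flags as I understand them; worth double-checking):

  $ riscv64-linux-gnu-gcc -O2 -march=rv64gc -mabi=lp64d -c foo.c
  $ riscv64-linux-gnu-objdump -d -M no-aliases foo.o

In the disassembly, compressed instructions show up as 16-bit (4-hex-digit) encodings, with c.* mnemonics when -M no-aliases is passed. Compiling with -march=rv64g instead (no "c") makes them disappear, which is a quick way to see the size difference.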


>My interpretation is that the compressed instructions extension is still work in progress

C was ratified years ago, but I suspect you might have heard about Zc (about to be ratified, specific to 32-bit, particularly helpful to microcontrollers) or B (ratified recently, bit manipulation, happens to help code density), and their effect on code density.


I thought the sizes of data caches were far more important than instruction caches? I mean even if the ISA is not as dense as it could be, we're talking about a few kilobytes of wasted i-cache space at most.


Well, that's like 10% of your I$.


The greatest advantage is that you can make your own processor implementing the ISA without asking for permission and/or paying fees. Also you can make your processor open source if you wish and many have already done so.

As an example, Apple designed their processors from scratch; they do not use any core designs provided by ARM Holdings. Yet they still have to pay ARM a lot of money because they use their instruction set.

The instruction set is a very sticky thing. All the software gets written for a particular instruction set, and people like to keep using previously written software. Thus, software stickiness causes CPU ISA stickiness which gives companies that control that particular ISA a lot of power. And they of course monetize that power by charging extra for their CPUs.

With an open ISA, anybody can make processors that use the same instructions, which means that your software should work on any CPU designed for that particular ISA. Thus, all the talented hardware designers in the entire world can work to make RISC-V processors without asking for permission. They are free to charge for their designs or make them open. And if you write RISC-V code, it can execute on any of those processors or any future ones anyone decides to develop. (There is a bit of complication here in that the RISC-V ISA includes several different combinations for instructions for different use cases, but they are all open).


> Yet they still have to pay ARM a lot of money because they use their instruction set.

Is this true of Apple? They co-founded ARM, I thought it was believed they had a perpetual license.


That doesn’t exempt them from royalty payments.


Ah, I see.


> The greatest advantage is that you can make your own processor implementing the ISA without asking for permission and/or paying fees.

This is great but if one or all the parties in a commercial war decide that your product cannot cross a border, an open source design won't save your business.


Lots of techniques are covered by patents.


And you're right, a vendor could incorporate novel patent-encumbered instructions into their otherwise RISC-V (based) CPU designs. In which case perfect compatibility wouldn't be possible.

Patterson, I believe, has said that folks like Intel intentionally do this, add rather pointless new patented instructions so that they are always "embracing and extending" (my words) their own designs to prevent true compatibility; and thus cloning by others. And that that's one big reason why it was worth creating RISC-V.


What are the expiration dates for today's most important techniques?


arm64 was essentially designed from scratch. What makes it have any more baggage than RISC-V?


> RISC-V caught up, key functionality wise, with the set of extensions ratified in December 2021.

Having an ISA with functionality parity is table stakes. But having designs actually capable of outperforming the current generation of ARM cores will be a real challenge.

Can SoC vendors design their next generation with RISC-V application cores? Sure, but no one will want it if it's slower or results in lower battery life. So far they've mostly only dipped their toes in the water with RISC-V microcontrollers. Google's explicit desire for a tier-1 RISC-V Android probably will help break a stalemate and get SoC vendors to ship something. It will very likely not debut as a flagship. But releasing a RISC-V phone that performs "adequately" would be an accomplishment that will flush out a lot of issues.


RISC-V chips so far have been significantly lower-area and higher-efficiency than their closest ARM counterparts when on the same fabrication process. This has held true for embedded microcontrollers and lower-end application processors. And RISC-V with the newly standardized extensions and RVA22 profile seems to have even more potential at the higher end. It obviously won't make a huge difference to battery life, but it shouldn't hurt either.


So I don't understand any of this stuff. But does this mean applications built for ARM64 (like a .NET 7 app which targets linux-arm64 and runs on, say, Ubuntu ARM) will run on a RISC-V CPU?


RISC is just a broad categorization of the instruction set, based on some of its defining properties.

The actual instruction sets are not compatible: they both have different instructions, different ways of encoding instructions and so on.

It's a bit like an AK-74 and an AR-15. Both are classified as assault rifles based on some defining properties, but they're designed and built completely differently, so you can't take, say, a full magazine from an AK-74 and make it work in an AR-15.

So no, you won't be able to take your ARM binary and run it unaltered on a RISC-V core.


>So no, you won't be able to take your ARM binary and run it unaltered on a RISC-V core.

There's still emulation, which will run your unaltered ARM binary... with the corresponding performance penalty, which could either be acceptable or not for the intended purpose.


I group emulation under "altered", as the instructions are not executed by the hardware directly but rather translated into other instructions be it directly or indirectly.

Of course, it's hardly a black and white thing.


Would ARM on RISC-V emulation have lower overhead than, say, x86-64 on ARMv8? I would imagine the common RISC stuff would be beneficial there, but I know next to nothing about CPU architecture.


No. Both x86 and ARM are very expensive to emulate on another ISA, primarily because of the intricate details of updating their respective status registers / condition codes after each instruction. In an emulator, getting that right takes a lot more work than doing the actual add or xor or whatever the instruction appears to do.

RISC-V on the other hand is very easy to emulate at high performance because it doesn't have condition codes at all. Rather than doing something like "cmp A,B;blt foo" as x86 and ARM do (with the result of the cmp stored in the condition codes), the equivalent RISC-V code is "blt a,b,foo".
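
A rough sketch in C of what an interpreter-style emulator has to do per guest instruction (simplified; x86 also has AF/PF, and real emulators usually defer this work with "lazy flags"):

  #include <stdint.h>
  #include <stdbool.h>

  struct flags { bool zf, sf, cf, of; };

  /* x86-style 32-bit ADD: the sum is the cheap part, the
     condition codes are the bookkeeping. */
  static uint32_t emu_x86_add32(uint32_t a, uint32_t b, struct flags *f) {
      uint32_t r = a + b;
      f->zf = (r == 0);                    /* zero */
      f->sf = (int32_t)r < 0;              /* sign */
      f->cf = r < a;                       /* unsigned carry out */
      f->of = ((a ^ r) & (b ^ r)) >> 31;   /* signed overflow */
      return r;
  }

  /* RISC-V ADD: just the sum. A later "blt a,b,foo" redoes the
     comparison itself, so nothing carries over between instructions. */
  static uint32_t emu_rv_add32(uint32_t a, uint32_t b) {
      return a + b;
  }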


AArch64 doesn't rely on NZCV that much. Unlike AArch32, most instructions can't run conditionally anymore; it's mostly just the usual conditional branches. Like on RISC-V, cb(n)z and tb(n)z perform a comparison/bit test together with a branch in the same instruction.


Yes, but it's not specific to "common RISC stuff", but instead it's because of a peculiarity of the x86 family, which has a strong memory ordering, while both ARM and RISC-V have a weak memory ordering. Apple avoided that overhead by adding a special strong memory ordering mode to its processors, which is enabled while running their x86 emulator; but emulators running on other processors (unless they have the RISC-V TSO extension or similar) either have more overhead, or have to use only a single core to run the emulation.
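
A sketch with C11 atomics of what this costs (one common approach, not how any particular emulator does it):

  #include <stdatomic.h>

  /* Guest x86 code assumes all plain loads/stores are strongly ordered.
     On a weakly ordered host (ARM, or RISC-V without Ztso), a translator
     has to emit ordered accesses for every one of them. */
  static void guest_store(_Atomic int *p, int v) {
      atomic_store_explicit(p, v, memory_order_release);
  }

  static int guest_load(_Atomic int *p) {
      return atomic_load_explicit(p, memory_order_acquire);
  }

With a hardware TSO mode (Apple's switch, or the Ztso extension), plain loads and stores already carry those guarantees, so the translated code can use ordinary cheap accesses instead.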


There is a penalty for translating ARM instructions into RISC-V on the fly. That can be compensated for by faster RISC-V cores.


>dotnet

Is the single remaining major runtime that's still not ported to RISC-V.

Therefore, the answer is: Not yet.


Technically Mono supports RISC-V, so you can get .NET code running on it. Also it looks like if you're willing to go through messy and poorly documented build processes you can build CoreCLR on RISC-V: https://github.com/dotnet/runtime/issues/36748

So proper support is likely coming soonish, I'd guess by the time .NET 8 or 9 comes out.


Hahaha ok. But I could use rust or go without (or without much) issue?

I was worried this might be something we'd have to wait another 5 years for decent support on, but it sounds like it's quite widely supported now, which is awesome!


Yes, Rust has been ported for a while. It's fine.

Keep in mind, over 95% of Debian's package library already builds for RISC-V, and that's the Linux distribution with the largest package library.

On the topic of Rust, oreboot runs on multiple RISC-V boards. That's Rust running right from power-on/reset.


RISC-V is a completely different instruction set from ARM.

RISC-V is no more compatible with ARM than ARM is with Intel.


Apple's ARM is more compatible with Intel than you'd expect. https://news.ycombinator.com/item?id=33635720


Do not worry too much about running ARM or x86-64 or MIPS binaries on RISC-V. You will soon start cross-compiling the source code for the target architecture. Running a binary from one arch on another is a temporary solution. Rather, we need to think about what RISC-V enables us to do compared to ARM.


Applications built for one don't automatically run on the other the way Intel/AMD have parity on x86. It is still a different architecture, so you still need to compile for RISC-V.


Yeah so for .NET I target win-x64 or linux-x64 or linux-arm64, but I guess .NET needs some linux-riscv or equiv.


Yup. In Go the target is called `riscv64`.

But someone could come up with a translator which compiles arm64 or amd64 binaries into riscv64, similar to how Apple's Rosetta2 and box64 does it for arm.
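
E.g., cross-compiling and smoke-testing under qemu-user (assuming qemu is installed; the binary name is just an example):

  $ GOOS=linux GOARCH=riscv64 go build -o hello .
  $ qemu-riscv64 ./hello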


Or one can come up with a Rosetta 2-like solution until all binaries are ready for RISC-V.


RISC vs. CISC is irrelevant.

ARM vs. RISC-V is irrelevant

The only thing that really ever matters is the quality of any given specific CPU implementation, which has almost nothing to do with CISC vs. RISC or the ISA[1]. x86 has been dominant for so incredibly long because Intel just consistently made the best processors around, and #2 in the space was also usually AMD. This is why Apple switched to Intel in the first place, after all. Don't forget that Apple switched off of RISC to CISC and got huge increases in performance & efficiency as a result.

Apple's M1/M2 are good because they invested an absolute shit-ton of money building up a seriously good in-house CPU team over the past 10 years, not because it uses ARM's ISA. In fact, M1/M2 support x86's memory model - it's a key reason that Rosetta 2 runs so fast.

So for a RISC-V phone to show up in let's say the mid-range or higher market and be competitive would take someone actually building a good RISC-V CPU implementation. Which maybe sifive will pull off - their new performance lineup looks decent enough on paper anyway. But that's also compared to the 2-year old Cortex A78 which is itself quite a ways behind Apple's M2 / Bionic A14. So that's what you really need to find for a RISC-V phone/laptop/whatever to be interesting - someone who can, ideally consistently, deliver a CPU core design that's competitive with {Apple, ARM, Intel, AMD}. And that list only even barely includes ARM - their CPU cores are by far the weakest of that set. Which is itself a significant asterisk on the whole "ARM servers!" thing.

1: the small but significant asterisk on that is that it's plausible, if not likely, that Apple was able to build an 8-wide CPU frontend as a direct result of armv8 not being a variable-length-encoded ISA whereas x86_64 is. However, that's also arguably more a function of the ISA's encoding than of the ISA or the CISC vs. RISC debate. In theory you could have an x86_64 instruction set that's not variable-length, just like ARM used to have thumb/thumb2 and non-thumb modes back in the day.


> And, and I can tell you that we got pretty far along with an internal Arm design, and it was very, very clear that you if you’re delivering a certain level of performance, the delta in power driven by the ISA is like 5 percent. [1]

Power matters a lot and 5% isn't almost nothing. And that's an estimate from an x86 vendor.

Agreed that it's almost certainly third though behind process and design. Intel's CPU leadership for a long time of course is not unconnected with its process leadership.

[1] https://www.nextplatform.com/2022/10/03/the-steady-hand-guid...


Idk for phones and larger I'd say 5% power reduction is still pretty close to nothing, especially if that's the difference between x86 and ARM. A 5w SoC becomes a 4.75w SoC - nobody is going to bat an eye at that. Similarly a 50w laptop CPU dropping to 46w is hardly transformative. It's not nothing, but it's still significantly less than a node shrink even with Moore's law being dead. And the difference between ARM and RISC-V would likely be even smaller.

You could also save way more than 5% power by just optimizing user space a tiny amount


It's not huge but it's not irrelevant either. Plus I think its probably an understatement (being from an x86 vendor).

I agree that the focus on ISA is often a bit overdone just not that it's something we can ignore.


Fair, and there are also of course ISAs that could be drastically impactful like the ill-fated Itanium or the doesn't-seem-to-be-real Mill.


Agreed. I think it's hard so see a new ISA getting much traction now unless it clearly offers a very significant power / performance benefit and that seems unlikely at the moment. I blogged that RISC-V might be the last mainstream ISA a while ago - I think it will crowd out other options.


Does this mean that vendors with already good ARM CPUs can fairly easily convert them into good RISC-V CPUs? If so, is anyone doing it?


I don't know about "easily" but let's assume sure, who would you even consider doing it? The biggest non-"reference" CPU core design is Apple's, and Apple is an ARM ISA founder so why would they care about switching to RISC-V?

For others, like Qualcomm, Samsung, or Amazon, they are licensing the ARM-designed CPU cores not designing their own in-house. So they wouldn't be able to switch those to RISC-V, it's not their IP.


> Bonus question... given that both ARM and RISC-V have similar goals, how feasible is an abstraction layer on top of them? Meaning, five years from now, how possible is it that I have a RISC-V phone that can run ARM binaries, and an ARM phone that could run RISC-V binaries?

That’s called Java and Android already has it


I'd say it's more along the lines of Qemu user emulation / Rosetta 2 / Microsoft's x64 on ARM64 JIT.


I'm not sure the fact that RISC-V is RISC means much. Seemingly every new CPU ISA designed since 1990 has been RISC. Only Arm's has taken off in a big way outside specialist applications. CISC CPUs (x64) are still around and still competitive too.


With a permissively licensed chip running a permissively licensed Fuchsia kernel and the permissively licensed Android runtime atop that, Google can finally explicitly slam the door on people who think they get to contribute to Google projects, which should save a ton of money over the bureaucratic-slow-march approach they take today.

Because vendors won't have to release device kernels any more, and will be able to use arbitrary ISA extensions they're under no obligation to disclose, the industry will achieve the ultimate nirvana last enjoyed by AT&T before the breakup -- this hardware is not yours, we will not be releasing technical information about it, and God help you if we catch you opening it up!

It's been a long road but I really think we'll see the return of the "no user serviceable parts inside" sticker, only for phone software this time around.


Hopefully by that time right-to-repair laws will already be in place.

Also, this should push anyone over the edge toward using copyleft licenses; if you are complaining about Android being unusable without Google services now, having no ROMs at all is even worse.


Yeah well, they wanted Intel to be one too. I don't think you can buy a phone running x86 or x86_64 new anywhere, and that push was back in the Honeycomb timeframe.


I had a Zenfone 2, which ran Marshmallow on the Intel chipset. Decent phone and I don't remember having many complaints other than it didn't support my carrier's wifi calling which ended up being a dealbreaker when I got moved to an office that doubled as a faraday cage.


I had a Zenfone 2 as well and my main complaint was battery life.

It was definitely more power hungry than comparable competitors and not noticeably faster.


You can run android apps on x64 chromebooks though.


I think that has more to do with Intel failing to offer any competitive chips in this space. The entire company seemingly assumed phones and tablets were just a fad and spent the better part of a decade treating AMD as their only competitor.


I was a little surprised the Edison/Galileo platform didn't take off; their main problem seemed to be zero documentation.


It was also much more expensive than comparable ARM chips with no benefits - in fact it came with the huge drawback that it was a completely new platform in that space.


That'd depend on who you ask. Many will tell you any RISC choice is better, and there's no reason to use x86 when not tied by having to be compatible with legacy applications.


You can buy a Librem new (or rather, any struggle to get one is because they're sold out, not because they're not being made) and it runs x86, though I don't think that undermines your point by much.


Which Librem device are you talking about? Their smartphone (Librem 5) has 4 ARM Cortex A53 cores.


> The world's biggest companies are building trillion-dollar businesses on top of the Arm architecture, and the realities of product design mean all these plans are two to five years out. So for Arm, giving off a vibe of "instability" is probably the single biggest thing it can do to drive away customers.


Of course they do. No one wants to pay ARM's licensing fees, but there isn't really any other marketable option.


It's not just the fees. ARM's licensing terms are fucking annoying. They're militant about their shitty IP, so they refuse to license their cores for FPGA-based deployments, for instance.


I hear this all the time, but are there any public numbers on how much money is spent on licensing? Are we talking pennies per chip (which adds up) or tens of dollars per shipped device? ARM was going to sell for "only" $40 billion, which makes me think that per device fees are low (there is an ARM chip in everything).


It's not really about the level of the fees.

RISC-V is "permissionless". Grab a core from github, grab the freely available specs, and start working. No up front agreement, no fees down the road. For Arm you have to enter a legal agreement before you can start, and then negotiate licensing for your product.

Also Arm have been acting aggressively recently which doesn't make other customers feel happy: https://www.theregister.com/2022/11/01/qualcomm_arm_cpu/


It's such a single point of failure that it kind of boggles my mind. Seems like the world would become chaotic fast if they just stopped giving out licenses (though I know that some companies like Apple have perpetual licenses)


"2021 licensing (non-royalty) revenues were up 61% to $1.13Bn as our expanded product portfolio and new business models such as Arm Flexible Access gave more customers more reasons and more ways to license Arm technology.

2021 royalty revenues were up 20% to a record $1.54Bn, helped by continuing strong growth of 5G smartphones, more ADAS and IVI chips going into cars, and price increases in 32-bit microcontrollers."

Source: https://www.arm.com/company/news/2022/05/arm-delivers-record...

The royalties are higher on the newer more complex designs, so the 29.2 billion chips number can't really be used to derive a proper price per chip.

Don't even get people started on the perpetual and architectural licensing too.


That doesn't really seem like an emergency.

What's the rough size of the end markets for the devices the chips are in? Like hundreds of billions of dollars of revenue?


46.23 billion for Apple. And that's a single manufacturer, albeit potentially the highest-revenue one.

I imagine Microsoft, Google, Facebook, Amazon, etc. all are also massive core producers despite not making an external product for sale.

So I'd guess trillion(s?) in production of product or internal use, or near enough.


ARM reported 2.7 billion in revenues in fiscal 2021 (I cannot seem to find 2022 results easily).


That's smaller than I expected for such a prominent company.


No surprises there.

RISC-V is the future, and Google itself already has some public RISC-V designs.

The expectation is that some Pixel in not so distant future will debut with a highly competitive RISC-V SoC.


That's going to be interesting to see. Apple could pull off the change in infrastructure and tell devs to deal with it. But could Google do the same with lots of apps in the store which contain native code?


Not an Android developer, but it seems all I hear about is the upgrade treadmill for Android APIs, and that's without changing the underlying architecture.


Not sure what you're talking about here, Android supports 4 architectures right now (supported 6 before) and adding one more isn't such a big deal.


I know Android itself can support multiple archs, but does a random Play Store app with a native library support them, or even test them all?


Most Play apps have no NDK code so they work fine.


Not claiming it's a representative sample, but my phone has 160 subfolders underneath /data/app (i.e. user-installed/-updated apps), of which after a quick count 52 include native libraries.

I don't even have that many games on my phone, so those only account for six of those apps, with the largest fraction being various multimedia apps (camera, video player, image editor, music player, even my e-book reader…)


The top Play apps and games use native libraries. Here's a CppCon talk that mentions that 75% of the top 100 Play apps and games use C++.

https://youtube.com/watch?v=2Y47g8xNE1o&feature=shares


Yup. They can, and they can also tell the devs to deal with it. Android devs, like me, aren't into Android development for the love of the dev tools and the environment (which are pathetic at best). They are there because they have to be, one way or another.


Well the benefit of having Java/Kotlin being the default dev language for Android is there's no need to recompile to run those apps on another arch. Only apps with native code will be an issue.


Where "only" means majority of games, especially anything created with common game frameworks, and non-trivial apps with native dependencies they pulled in.


"Multimedia" apps (anything image, video or audio-related, and even my e-book reader) are probably another large fraction (and since I don't have all that many separate game apps installed – mostly a few emulators only – they're actually the largest fraction of apps with native libraries on my phone).


When Intel was trying to make x86 phones a thing they supplied Google with an ahead-of-time binary translation system. The same could be done with RISC-V.


They are designing a new chip for their cloud...


Didn’t they just release their ARM virtual servers?


RISC-V is mentioned (once) in RFC-0111 (Initial Fuchsia hardware platform specifications)

https://fuchsia.dev/fuchsia-src/contribute/governance/rfcs/0...


Seemingly only yesterday, there was a lively, rather abstract debate here on HN over the reasons why open hardware wasn't a thing. Many said it wouldn't ever exist and had good arguments for that. (Which arguments irritated me.) Irritation gone.


Fantastic… but on what? RISC-V is still not quite Raspberry Pi 4-level on the fastest chips, but is advancing quickly.

Is this going to primarily be the newest chipset for cheap Kindles?


>RISC-V is still not quite Raspberry Pi 4-level on the fastest chips, but is advancing quickly.

RISC-V is not quite Pi 4 level on the fastest SBC you can buy today, the VisionFive 2, where some kickstarter backers have already received their boards. It's around 80% as fast as Pi 4.

Sipeed have announced their LM4A with TH1520 SoC will be on sale late Q1. They've published benchmarks running at 1.85 GHz showing it is faster than Pi 4 at that speed, but it is supposed to run at 2.5 GHz with a heatsink.

SiFive have announced that their HiFive Pro board will be shipping in the summer, using Intel's "Horse Creek" SoC (which in turn uses SiFive P550 cores). The SoC was demonstrated running by Intel back in September. SiFive says the board will run at 2.2 GHz. It should offer similar performance to an RK3588 at a similar clock speed, i.e. quite a lot faster than a Pi 4.

Multiple other companies are working on RISC-V SoCs with performance in Apple M1 class, which is much faster than anything ARM themselves has. They're further away -- 2024 or 2025 -- but there is basically no doubt that they will come.

Android software probably isn't going to be ready for prime time until 2024 or 2025 anyway.


> Multiple other companies are working on RISC-V SoCs with performance in Apple M1 class, which is much faster than anything ARM themselves has.

Which companies?


VisionFive 2 is a decent amount faster than a RPi4.

And there was a floodgate opened for high-perf core announcements at the recent RISC-V summit. https://www.semianalysis.com/p/ventana-risc-v-cpus-beating-n... Most of those are probably bullshitting some major aspect of their cores' perf, but there are designs like the one from Jim Keller's Tenstorrent too. Jim Keller is sort of known for not bullshitting.


There's a big difference here. A lot of very good designers saw a way to cash in on the emerging RISC-V market. Chip designers saw the potential and joined stealth startups where they could go from $400k per year to many millions. This potential payout poached a LOT of the best designers from big-name design companies.

I think there's a lot of potential here and probably a bit less over-promising than you'd get from companies without such experienced design talent.


It's not, it's a bit slower. But anyway it's close enough that unless you put two side by side or use a stopwatch you probably wouldn't notice the difference.


But it's a qualified not.

Specifically, I know two ways:

0. VisionFive 2's SoC has a drastically faster GPU and overall better peripherals.

1. Raspberry Pi 4 will overheat and heavily throttle at the slightest hint of load. With this in mind, VisionFive 2's CPU will in most situations be faster, by virtue of not throttling.

*. Most heatsinks only mitigate this situation partially. You'd need either active cooling, or to get a Pi 400 instead, whose whole keyboard very effectively acts as a heatsink.


Is there a bootstrapping issue here that Google can solve? It seems to me that the market for fast RISC-V processors is pretty small until there’s a consumer-friendly OS to run on the things, and of course only hobbyists will port their OS to a slow chip.

The ecosystem could slowly grow through hobbyists, niche devices, special-purpose compute appliances, blah blah… or some entity with unlimited money could just do the short-term irrational thing and jumpstart one side of the equation.


Linux support is quite solid already, and Debian (largest Linux distribution) builds over 95% of packages for this architecture. The only major runtime missing is dotnet.

This is about Android, which is very different from your average Linux system.


Sure. But if Google gets the Linux kernel inside Android to run well on RISC-V, that's going to be a large fraction of the work required to get Linux running. There's not a ton of architecture-specific code in user space, especially since things like gcc, glibc, and friends are already ported.


I quite like Linux, but it doesn’t appear to be super popular among consumers. (This is not intended as a slight — IMO open source works best as a community project by developers, trying to spread it to people who aren’t able to contribute doesn’t really help either party that much).


If that's your concern, Android (the story's topic) is very much popular among consumers.

Windows, MacOS and IOS also are, but the community cannot port these.

There are less popular / niche, community-run OS projects like Haiku which already run on RISC-V.


Yep! This is why, despite not being an Android user, I’m (speculatively) happy to see Google kind of kick-starting the software ecosystem. Hopefully that’ll lead to higher performance hardware, and it’ll eventually hit the point where we want to run proper desktop Linux on it.

Still some potential pitfalls, hopefully we won’t get super locked down hardware (I’m under the impression that that’s a bit of a problem in the phone universe).


How do you define Linux? The Linux kernel runs on billions of Android phones. It also runs on Chromebooks, which sell in high volumes to kids in K-12 schools.


I was responding to someone else who’d talked about Linux, so I think the question is better directed their way.

However, they talked about Debian and put it in the category of Linux, and then specifically contrasted against Android as a separate thing, so I think we’re informally talking about the conventional desktop Gnu+SystemD+Too many other groups to list Linux ecosystem.


Also depends on how stringent the EU is going to be about enforcing their laws. If iOS/Android/Windows/Intel/(Ryzen?)/Huawei... do get banned in practice rather than just in theory, this opens up quite the field for Linux/RISC-V !

https://en.m.wikipedia.org/wiki/Max_Schrems#Schrems_I


>RISC-V is still not quite Raspberry Pi 4-level on the fastest chips

Current chips in cheap SBCs are already at or above Raspberry Pi 4 level[0]. Particularly so when most Raspberry Pi 4s do not have a sufficient cooling solution and will quickly throttle under load.

The fastest announced chips are competitive with the top x86-64 chips[1].

0. https://nitter.net/pic/orig/enc/bWVkaWEvRmozSEZOR1VvQUE4WWlP...

1. https://www.hpcwire.com/2022/12/13/ventana-plans-to-bring-ri...


> The fastest announced chips are competitive with the top x86-64 chips[1].

I'm gonna need to see benchmarks before I believe that.


RaspberryPi designs are not completely open and RISC-V is, right?


RISC-V is an open ISA. It doesn't mean you can't build a closed system on RISC-V, nor does it guarantee anything about the firmwares and auxiliary chips (wifi etc).


RISC-V is an ISA or Instruction Set Architecture.

This defines the interface between hardware and software. If software and hardware both follow the specification, then the software will run on the hardware, and the hardware will run the software.

Both the hardware and the software could be entirely proprietary and extremely locked up, while compliant, as that'd fall well outside the ISA's scope.


It's neither.

The Broadcom SoC is closed source, and it doesn't have public documentation. It also has a VideoCore core running a proprietary RTOS called Microsoft ThreadX that does bootstrapping and some low-level hardware stuff.


Holy moly.

Never knew that Microsoft developed and owned ThreadX:

https://en.wikipedia.org/wiki/ThreadX#Products_using_it


Read the article. They're preparing for RISC-V chips to take over the crown from ARM as ARM is not a stable company.


Meanwhile, the RISC-V "ecosystem" is already a full shambles, with companies shipping chips implementing preliminary (not 1.0/released) extensions and adding their own proprietary extension instructions.

It's like ARM before v7 or v8, only dumber, because at least ARM back then had an excuse for why they were fracturing it all so badly.

Also, of course, ARM has lots of other IP you need to build a full-blown smartphone SoC; think GPU, display processors and pipelines, video accelerators, ... - to make a RISC-V smartphone you will have to buy all of those in because there is no "RISC-V ecosystem" replacement (and it's rather unlikely there ever will be) and at that point... what's the point anymore?


The chips using preliminary/draft specs are annoying. However, RISC-V has encoding space dedicated in the ISA for vendor-specific instructions, which is fine. Code that does not use such extensions runs just fine.


>The chips using preliminary/draft specs are annoying.

They actually aren't annoying at all. Real world testing with actual chips is very helpful to the RISC-V Foundation and its extension development and ratification process.


Would moving to RISC-V have any effect on Android's fragmentation and update problem (due to Qualcomm's stranglehold on the android ARM market)?


I don't think so. Moving to Fuchsia and producing SoCs with some closed parts will allow any vendor to obsolete any part without any problems. Ceasing to provide drivers will make any IC as dead as a well-cooked brick.

I think Fuchsia's main aim is to provide Google more control on the Android stack. "We will be able to do things more securely than Linux" reads like "We'll be able to close down devices even more while giving you a sense of openness" to me.

So, Google might be using RISC-V as another way to distance the platform from contemporary hardware/software ecosystem and make it practically harder to tinker/root/exploit beyond their (financial) comfort zone.


>Ceasing to provide drivers will make any IC as dead as a well-cooked brick.

AIUI Fuchsia has stable driver APIs/ABIs, so the Linux rules do not apply.


You can always forget a version check in your production code which fails to load your driver if the kernel version is beyond a certain point.

A stable API/ABI cannot stop a determined company from deprecating your hardware.


The Android software update problem is a tragedy that will not end with the introduction of RISC-V. Hell, it could get worse.


The top Android OEMs now offer 3-4 OS updates and 5 years of security updates. It's time to let go of the "Android software update problem is a tragedy" criticism. And if you still want to use your Android phone after 5 years there's a good chance that there will be a LineageOS build for it.


>And if you still want to use your Android phone after 5 years there's a good chance that there will be a LineageOS build for it.

There's a chance, but unless you checked for support in advance, a "good chance" seems like a bit of a stretch. Unless you mean unofficial builds, which are either too much work for most users, or not especially trustworthy.


I have a Nexus 5, released in 2013, running Android 13 surprisingly well.

>Unless you mean unofficial builds, which are either too much work for most users, or not especially trustworthy.

The process to install an unofficial build is the exact same as an official build. New updates can even be downloaded and installed in settings with a tap of a button.


Nexus and Pixel devices are top of the line phones that are famously well supported by ROMs, the sort of "check for support in advance" I mentioned.

Unofficial builds are not the "exact same", else they would be official builds. Unofficial builds are usually distributed without source (let alone working build instructions) as opaque blobs that you have to trust some random XDA user, with no verifiable background, didn't do anything too horrible to. At best, you're pretty much guaranteed to be getting a build without selinux enabled; at worst, you're getting straight up malware.

Quick edit to clarify: the "too much work" I mentioned before was the rare instance where there is source and you have to build it yourself to get anything trustworthy.


All LineageOS builds, regardless of whether they are official or unofficial, are insecure by default, as the vendor binary blobs are outdated and will never be updated. LineageOS is a last resort if you want to update your EOL device to a newer OS build while sacrificing security.

>Nexus and Pixel devices are top of the line phones that are famously well supported by ROMs, the sort of "check for support in advance" I mentioned.

If the bootloader can be unlocked, then there is going to be a LineageOS build for it.

>Unofficial builds are usually distributed without source (let alone working build instructions) as opaque blobs that you have to trust some random XDA user, with no verifiable background, didn't do anything too horrible to. At best, you're pretty much guaranteed to be getting a build without selinux enabled; at worst, you're getting straight up malware.

The vast majority of unofficial LineageOS builds are made by XDA Recognized Developers. So, no, we're not trusting some random XDA user. Additionally, SELinux is in enforcing mode on my unofficial Android 13 Nexus 5 build.


Do not forget that every OS update needs to go through your carrier before it reaches you, and they might not care to do the vetting work, barring you from getting the updates.

On the other hand, an iPhone is usable for 9-10 years with updates and EOL support. You'll change your battery once in mid-life if you look after your device well.


My iPhone 6 was released on September 19, 2014 and received its last OS update on September 17, 2018, with iOS 12, so it got only 4 OS updates. It continued to receive minor security updates until June 2021, so about 6 1/2 years. The security updates it received after 5 1/2 years were minor, though, and not comparable to the full updates non-EOL devices received.


> The result is that Chinese tech companies are rallying around RISC-V as the future chip architecture.

That's really the motivation here. I guess Google is trying to stay ahead of being replaced in China?

To me, as a Westerner, I'd rather see ARM succeed than RISC-V. Despite RISC-V ironically being designed in the West, I believe rallying the world around an open instruction set will primarily benefit China, which wants to lead in semiconductors. It gives them the ability to become big players without having to push a new ISA on the world, and if it becomes dominant on mobile, they could see themselves replacing Qualcomm, Intel, etc. It is a zero-sum game all around.


I (also westerner) would still like to see RISC-V succeed, doesn't mean ARM has to fail but open competition is likely to be good in the long term.

An ISA won't let China become a leader in semiconductors unless they implement it better than both the Western implementations of RISC-V and ARM/Intel; the only thing it does is give us an incentive not to rest on our laurels.

I am also uneasy about fundamental technologies depending on just a handful of companies, whether they are Western or not. I know it happens, but the less it happens the better, imo.


> It is a zero-sum game all around.

It is not. We consumers benefit from competition, and lose from higher prices and poorer products produced by monopolies.


> We consumers benefit from competition

Look where the "competition" of the last 30 years brought the Western world. Domestic electronics production is all but dead outside of military work and R&D that depends on quick turnaround, because it all moved to China, Taiwan, Vietnam, Japan and South Korea. Domestic pharmaceutical and chemical production is all but dead because India and China are absurdly cheaper, and domestic steel production has taken serious hits for the same reason. Animal welfare in farming is all but gone, with farmers preventively feeding antibiotics to avoid immediate population collapse, because the large meatpackers and chain stores act absolutely ruthlessly to make more profit. Wages have gone down across the board relative to inflation, and large parts of our societies have no emergency funds...

> and lose from higher prices and poorer products produced by monopolies.

Competition itself is not bad; it keeps societies from stagnation. But past a certain point the side effects become worse than the starting point. After decades of "competition", the Western world now depends on genocidal dictatorships (China), ordinary dictatorships (oil sheiks) and declining democracies (India) for its basic survival, and inside our economies we have monopolies and oligopolies at a scale that was unbelievable even in the 90s.


If the West does not adopt RISC-V, then it will simply fall behind China.

I don't think this is something we want.


China has spent ages trying to get its local MIPS derivatives to a place where they're competitive, but hasn't succeeded. Adopting or not adopting RISC-V (which is only an ISA, not an actual platform) won't make a lick of difference.


That assumes that RISC-V is a competitive advantage over ARM and x86. While I have no particular grounds for pessimism about RISC-V, it seems a bit premature to assume that it's better in practice than those others.


RISC-V does have something that ARM/x86 don't: intense implementation competition at every perf/price point.

Lots of companies are throwing their hats in the ring, trying to design a RISC-V core.

You can do a RISC-V startup; it's much harder to do an ARM startup and near impossible to do an x86 one.


I've been hearing about how RISC-V is going to crush the market for over a decade, and the only fully-functional RISC-V-based product on the market is a soldering iron. Everything else is some kind of DIY proof-of-concept kit of no use to any actual consumer or business. The architecture has about fifteen or twenty years to go before it is any kind of meaningful threat to x86_64, assuming it lasts that long.


There is a chance you're using RISC-V CPUs right now. Nvidia GPUs use them for their management logic (GSP), and Seagate/Western Digital are using them for HDD controllers. ARM used to be found only in these kinds of workloads too; you have to work your way up the stack.


Since RISC-V is barely ten years old, that's quite a statement.

Introducing a new instruction set is an enormous undertaking, typically taking multiple decades. If anything, RISC-V's adoption and momentum are astonishingly fast.


They had hardware in 2011. Development preceded that milestone by some years at Berkeley, which is where I was first told about its impending market domination.


This is just lopsided thinking. As someone not from the West, I have the exact opposite opinion: we need open instruction sets to be free from the clutches of everything controlled by Western countries, and we need competition.


The flip side is that China is pushing RISC-V because they can't develop their own homegrown microarchitecture.

This is China we're talking about here; they aren't rallying behind RISC-V because they like it. It's because they can't develop anything better than RISC-V, can't license x86 or ARM (which are clearly better than RISC-V) for obvious reasons, and so they're stuck with what they can get their grubby hands on.

If the West widely supports and adopts RISC-V in a way that doesn't exclude China, we are quite literally just playing into Chinese hands, like the West has done time and time again.

Nurturing RISC-V in itself is a noble endeavour, but we must do so in a manner that doesn't hoist us by our own goddamn petard.


This is just incorrect. Chinese companies have plenty of design chops.


And what percentage of that involves stealing and reverse-engineering IP from Western countries?


Try and see if you can find any stolen code here[0] or here[1].

Cheers.

0. https://github.com/T-head-Semi/openc906

1. https://github.com/T-head-Semi/openc910


No idea what this is or why it's relevant. Do your two repositories represent the entirety of industry in China?

https://www.cbsnews.com/news/chinese-hackers-took-trillions-...

"Cheers."


If they can make these two (which are quite competitive, mind you), that's proof enough that they don't need to "steal" any designs from the West.

Cheers.


Then why don't the Chinese (if they can be treated as a whole) just rally around LoongArch?


True patriots use Windows.

Linux? What are you, a Chicom? Did you know they literally use that in North Korea?[0]

0: https://en.wikipedia.org/wiki/Red_Star_OS


I envy that you get to count yourself among the Westerners.


Commoditize your complement [0], right?

[0] https://www.gwern.net/Complement


This is very consumer-oriented thinking.

Industry wants RISC-V too.


How does this help consumers?

The main difference to consumers is that fewer apps will be available, since they haven't been ported to RISC-V.


Lower license fees mean lower costs. But, more importantly, an open ISA means everyone can design for it, which means more competition, which means much lower costs.


ARM license fees are a relatively small proportion of a smartphone's cost. It might have some impact at the low end, but licensing costs matter more for all the small "invisible" chips that are in everything these days.


Well, it also means that there will be many RISC-V flavors, since nobody like ARM controls it. So every chip can add a few instructions to differentiate itself.

So if you spend a few $100M to a few $billion making a hot new RISC-V core, you are likely to add a few instructions for your intended use case/customer. The result will be significant fragmentation (which is already happening), which will make a mess of any OS and compiler support.
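
In practice, that fragmentation shows up as forks like the sketch below: every optional or vendor extension becomes another compile-time (or runtime-dispatch) branch to write, test and ship. Here __riscv_v is the standard RISC-V C-API macro for the vector extension; the vendor macro is made up for illustration.

    #include <stddef.h>

    void add_arrays(long *dst, const long *a, const long *b, size_t n)
    {
    #if defined(__riscv_v)
        /* a real build would use vector intrinsics here; this sketch just
           falls through to the scalar loop to stay self-contained */
    #elif defined(__riscv_xvendor_ext)  /* hypothetical vendor extension */
        /* ...and yet another variant would live here, for one vendor's chips */
    #endif
        /* the portable baseline every OS and compiler must be able to target */
        for (size_t i = 0; i < n; i++)
            dst[i] = a[i] + b[i];
    }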


Maybe this is naive, but the way I see it is: more architectures = more room for innovation. Ultimately RISC-V will allow for more competition in the CPU space, driving down prices for consumers.


At a certain point, end users get no further benefit from variety.

Think about it this way: what benefit would you get if each house had a different standard of electricity delivery? Let's say you have 300 V DC, your neighbor 220 V 50 Hz AC... meanwhile, the building across the street is all three-phase 150 V 100 Hz AC. You wouldn't be able to buy a random TV and be sure that it works.

In the end, either ARM or RISC-V will displace Intel x86. And you'll get much more progress, just as progress in the ARM space was much more rapid than in the x86 space.


Releasing some new competitive CPU is a billion+ dollar hurdle. No one is going to release a RISC-V competitor to any current top-of-the-line CPUs without that minimum investment. Who is going to make that level of investment when buying licensed ARM cores is orders of magnitude cheaper?


>Releasing some new competitive CPU is a billion+ dollar hurdle.

We are well past that hurdle.

Competitive CPU designs have already been announced, and known funding for RISC-V is already in excess of $10 billion.


I should note here that Apple designed and released their CPUs for mobile phones, tablets and PCs without buying ARM cores. They still use and pay for the ARM ISA, but they designed their own cores.

And their CPUs are widely held to be better than the ones that use stock ARM cores (e.g. the Samsung ones).

So there is potential for investment.


Apple easily spent a billion dollars developing their A-series chips from the purchase of P.A. Semi in 2008 through the release of the iPhone 4 with the A4 chip in 2010. A lot more money went into subsequent designs and now the M-series.

There's potential for investment but that investment is a lot of money.


OEMs may not want to spend that money, but companies like Amazon, Apple, Qualcomm, Google and Fujitsu already spend it to get custom ARM cores. Doing a RISC-V core instead means you potentially save on licensing, although that might not offset the value of being able to take an ARM design and modify it. It probably comes down to licensing terms and pricing that we're not privy to.


> Who is going to do that level in investment when buying licensed ARM cores is orders of magnitude cheaper?

Companies like Huawei, who might not be able to license new ARM designs.


> How does this help consumers?

The same was asked of ARM; just look at the critiques of the ARM-based Surface.


Cheaper CPUs that can be specialized for more domains, since there isn't the same cost/innovation bottleneck as with ARM/x86.

And Android runs a VM, last time I checked...


> And Android runs a VM, last time I checked...

But a significant fraction of apps still includes native libraries, too.
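
For example, any app with JNI code ships a separate copy of that code per ABI. A hedged sketch (function and package names invented) of the sort of native library that ties an app to the ISAs its developer actually built for:

    #include <jni.h>

    /* Compiled once per target ABI (arm64-v8a, x86_64, ... and eventually a
       riscv64 ABI). On an ABI the developer never built for, the app has no
       matching .so to load, VM or not. */
    JNIEXPORT jint JNICALL
    Java_com_example_app_NativeLib_addNumbers(JNIEnv *env, jobject thiz,
                                              jint a, jint b)
    {
        (void)env;   /* unused in this trivial example */
        (void)thiz;
        return a + b;
    }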


How much are chips in mobile devices currently? How much of a saving are we talking?


https://www.thejakartapost.com/life/2020/06/08/what-makes-up... says $54/unit for what you find in a Galaxy S20.


> Industry wants RISC-V too.

Only in China. And only at the bottom.

The ARM "premium" is a rounding error in anything that isn't a straight race to the garbage dump.

RISC-V, however, is a highly welcome development for all the <10-cent microcontrollers, which have no standards, no tooling, no common ISA, etc.


ARM doesn't cover every single point on the power/perf/area/price graphs, so RISC-V will at least give companies the opportunity to fill those gaps.


ARM is on the way out...

Also, watch for silicon-implanted backdoors. Guaranteed to happen...

One big impediment to RISC-V right now is the GPU, but that too is coming along fast: https://www.allaboutcircuits.com/news/risc-v-universe-grows-...


God I love Google.



