RISC-V support in Android just got a big setback (androidauthority.com)
104 points by ammo1662 21 days ago | 83 comments



Reads like a typical "will be released when it's ready, but it's not ready".

Or providing a (single) generic kernel for RISC-V is impractical, so support for that is removed until another approach is figured out? Just speculating here.

But Android-on-RISC-V is pretty much unavoidable at this point. Google-blessed or otherwise. If Google won't do it (article does not suggest that?), the open source community will. Or the Chinese. Or some other deep-pocketed corp interested in pushing this ahead.


The question is, which RISC-V? You're providing a single kernel for phone-like things, so probably RV64I as a base. MAFD is pretty much required for running a Java VM. Q, no. C, are you shipping for SiFive or Qualcomm? J/T/P/V would be useful, but does the kernel need them? H is needed for Android phones these days but not watches. S, no idea.

The explanation makes sense to me; let the dust settle and have real hardware shipping before defining the base feature set in a generic kernel image.


The sheer number of RISC-V extensions and the resulting fragmentation is blowing my mind. No wonder they can't provide a single generic kernel.


There are baseline profiles, like RVA23, that standardize a set of extensions for a class of hardware. The problem right now is that Qualcomm and SiFive disagree on which extensions should be supported, and there is hardware shipping or in development with two different definitions.


Literally everybody agrees except Qualcomm, who basically want to not participate, come in from the side, and make it quasi-proprietary.


Couldn't Google (who, after all, make Android) simply declare that the 'official' RV64 is IMAFD (aka 'G') plus CJQTVPBHLNSUX? (or whatever makes sense, given the Android roadmap). https://en.wikichip.org/wiki/risc-v/standard_extensions

And if they did this, this would prove why Android's parent company is called Alphabet!


There is a standard profile called 'RVA22', and 'RVA23' after that. If you watch the presentations from Google on this, they will almost certainly target one of those two standards, very likely RVA23, especially if it's now delayed.

Android could also define its own profile, but there probably won't be a reason why they would need to do this.


It's really not that bad when you consider that RISC-V is built to be used in anything from a tiny MCU to a large server CPU. Compare with the set of extensions to x64, too.


It hasn't actually resulted in that much fragmentation. Android will target the standard profile, they have already announced that. There are standard profiles that pretty much get used by everybody that is trying to make something that runs generic software.

If you have some one-off consumer electronics device that runs one specific piece of software, you might not follow a profile, but even then most will follow a specific profile.


What's MAFD?

(I don't know the instruction-set well and Google didn't turn up anything informative.)



That doesn't directly answer the question.

The answer seems to be that MAFD refers to the union of the M, A, F, and D extensions to the baseline ISA, adding integer multiplication and division instructions (M), atomic instructions (A), single-precision floating point instructions (F), and double-precision floating point instructions (D).
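
As a rough compile-time illustration (the macro names below are the legacy GCC/Clang ones as far as I know; treat them as an assumption), building with something like -march=rv64imafd makes each of those extensions visible to the preprocessor:

  /* Sketch: compile-time check for the M/A/F/D extensions on a RISC-V target.
     Assumes the legacy GCC/Clang macro names (__riscv_mul, __riscv_atomic,
     __riscv_flen); adjust if your toolchain differs. */
  #include <stdio.h>

  int main(void) {
  #if defined(__riscv)
  #  if defined(__riscv_mul)
      puts("M: integer multiply/divide");
  #  endif
  #  if defined(__riscv_atomic)
      puts("A: atomic instructions");
  #  endif
  #  if defined(__riscv_flen) && __riscv_flen >= 32
      puts("F: single-precision floating point");
  #  endif
  #  if defined(__riscv_flen) && __riscv_flen >= 64
      puts("D: double-precision floating point");
  #  endif
  #else
      puts("not a RISC-V target");
  #endif
      return 0;
  }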


Does it? They say the RISC-V support “is discontinued”, which seems to imply something much stronger than what you say.


The article says:

> Android will continue to support RISC-V. Due to the rapid rate of iteration, we are not ready to provide a single supported image for all vendors. This particular series of patches removes RISC-V support from the Android Generic Kernel Image (GKI).

There is some more context in the android SIG: https://lists.riscv.org/g/sig-android/topic/105816077#msg389


That link makes it much clearer:

  > they are removing older code for Qualcomm specific use case that is no longer supported (Wear OS IIRC was discontinued). That pre-dates all the current work, so I expect this was removed because the platform was deprecated
and

  > since maintenance of an officially _labelled_ GKI kernel is more expensive, we're removing the sticker for now.
so... basically a nothingburger?


> But Android-on-RISC-V is pretty much unavoidable at this point.

I’m curious to know what happens to the significant portion of apps that rely on the NDK. Might there be some emulation layer, like Rosetta 2 with Apple Silicon?


If Google cares enough, then yeah probably. They have an emulator (libhoudini) to run ARM apps on x86 (Chrome OS being the main use case)


The quote in there mentions “rate of iteration”, which, given the noise Qualcomm were making, makes me speculate that Qualcomm is planning on forking RISC-V to be more arm64ish, and that Google would not want to be maintaining both that and vanilla RISC-V until the dust settles. It's either something like that, or they really are removing it.


I suspected it was not good news for RISC-V when Qualcomm got involved. They are going to split the RISC-V ecosystem.


> They are going to split the RISC-V ecosystem.

Part of the DNA of RISC-V is to provide a basis from which a million flowers can bloom. Instead of homebrewing, you can reuse RISC-V and make tweaks as you need... if you really do need a custom variant. Think of it as English, a common substratum from which there are lots of mostly interoperable dialects. This is what happened with Unix.

Now if we want a mainstream consumer platform then we will need a dominant strain or two. The RISC-V foundation is doing that with the RVA23 profiles etc., but if there are a few major ones, that should be navigable. Linux had support for PC-98[1], which was a Japanese alternate x86 platform.

The changes I've seen proposed by Qualcomm don't seem drastically different [2], and could be incorporated in the same binaries by sniffing the supported features. The semantics are what matter, and those aren't different at all. It could be supported with trap and emulate.

[1] https://en.wikipedia.org/wiki/PC-98 [2] https://lists.riscv.org/g/tech-profiles/attachment/332/0/cod...


The code size reduction instructions are an extension that will go through and will eventually be supported by everyone; they are not the bone of contention here. They are designed to be "brown-field" instructions, that is, they fit into the unimplemented holes in the current spec.

The reason the spec is going to split is not them, but the fact that Qualcomm also wants to remove the C extension, and put something else in the encoding space it frees.


Hmm, that seems like a mistake, because C allows for instruction compression with low decode cost, which is perfect for embedded use, a big part of RISC-V usage now.

That said, if they implemented C and then made their replacement toggleable with a CSR, that would still be backwards (albeit not forwards) compatible, so it'd only be an issue if Qualcomm RISC-V binaries became dominant. But I don't think binaries are going to be that dominant outside of firmware going forward, and any that are will be from vendors that will multi-target.


>> Hmm, that seems like a mistake, because C allows for instruction compression with low decode cost, which is perfect for embedded use, a big part of RISC-V usage now.

It may be low cost to decode a compressed instruction, but having them means regular 32-bit instructions can cross cache lines and page boundaries.

My own thought is that there should be a "next" version or RISC-VI that is mostly assembler-level compatible but changes all the instruction encodings to be more sane. What that means exactly is still a bit fuzzy, but I am a fan of immediate data being stored after the opcode.


> My own thought is that there should be a "next" version or RISC-VI that is mostly assembler-level compatible but changes all the instruction encodings to be more sane.

I feel like that is really a case of Chesterton's fence. It was done by people who literally wrote the book on processor design (David Patterson, author of "Computer Architecture: A Quantitative Approach", "The Case for RISC", "A Case for RAID"). I have heard a talk giving the rationale behind where the bits are placed, to simplify low-end implementations.

> What that means exactly is still a bit fuzzy, but I am a fan of immediate data being stored after the opcode.

As a hobbyist, I get it... but except for when you are reading binary dumps directly, which happens so rarely these days, when is that ever relevant? That is just OCD. I think of this video when I get the same itch and temptation. https://www.youtube.com/watch?v=GPcIrNnsruc

Also, let's not forget that RISC-V is already a thing with millions of embedded units already shipped.


>> I feel like that is really a case of Chesterton's fence. It was done by people who literally wrote the book on processor design

It was originally intended for use in education where students could design their own processors and fixed instruction sizes made that easier. I'm not saying "therefore it's suboptimal", just that there were objectives that might conflict with an optimal design.

>> > What that means exactly is still a bit fuzzy, but I am a fan of immediate data being stored after the opcode.

>> As a hobbyist, I get it... but except for when you are reading binary dumps directly, which happens so rarely these days, when is that ever relevant?

How about in a linker, where addresses have to be filled in by automated tools? Sure, once the weirdness is dealt with in code it's "done", but it's still an unnecessarily complex operation. Also IIRC there is no way to encode a 64bit constant, it has to be read from memory.

Maybe I'm wrong, maybe it's a near optimal instruction encoding. I'd like to see some people try. Oh, and Qualcomm seems to disagree with it but for reasons that may not be as important as they think (I'm not qualified to say).


> Also IIRC there is no way to encode a 64bit constant, it has to be read from memory.

There never is; you can never set a constant as wide as the word length in one instruction. Instead you must "build" it. You can either load the high bits as low bits, shift the value, and then add the low bits, or sometimes, as SPARC has it ('sethi'), there is an instruction that combines the two for you.

https://en.wikibooks.org/wiki/SPARC_Assembly/Control_Flow#Ju...

https://en.wikipedia.org/wiki/SPARC#Large_constants
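
On RISC-V the rough equivalent of sethi is lui. A minimal sketch of what "building" a constant looks like, written out in C; the mnemonics in the comments are what I'd expect, not verified compiler output (assuming RV64):

  /* Sketch: building 0x12345678 the way a RISC-V compiler typically would. */
  #include <stdint.h>

  uint64_t make_constant(void) {
      uint64_t x = 0x12345UL << 12;  /* lui  a0, 0x12345   -> 0x12345000 */
      x += 0x678;                    /* addi a0, a0, 0x678 -> 0x12345678 */
      return x;                      /* an arbitrary 64-bit value needs more
                                        slli/addi steps or a load from memory */
  }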


> millions of embedded units already shipped

10+ billion. With billions added every year.

T-Head says they've shipped several billion C906 and C910 cores, and those are 64-bit Linux application cores, almost all of them with draft 0.7.1 of the Vector extension. The number of 32-bit microcontroller cores will be far higher (as it is with Arm).


Yes, I was curious why the compression format didn't require:

1. non-compressed instructions are always 4-byte aligned (pad with a 2-byte NOP if necessary, or use an uncompressed 4-byte instruction to fix sizing)

2. jump targets are always 4-byte aligned (which exists without C, but C relaxes)

This avoids cache line issues and avoids jumps landing inside an instruction. You can then consider every 2 compressed instructions as a single 4-byte instruction.

It's a bit redundant to encode the C prefix twice, so there's room to make use of that (take up less encoding space, at least, by making the prefix 2x as long), but that's not important.


I completely agree. Not that everything has to be relaxed, but at least the things that made it impossible to decode RISC-V when C is enabled. The amount of code needed to detect when and how instructions are laid out is much larger than it should be.


"impossible"?

It's a little easier than ARMv7, and orders of magnitude easier than x86, which doesn't seem to be preventing high-performance x86 CPUs (at an energy-use and surely silicon-size penalty, admittedly).

Everyone else in the RISC-V community except Qualcomm thinks the "C" extension is a good trade-off, and the reason Qualcomm don't is very likely because they're adapting Nuvia's Arm core to run RISC-V code instead, and of course that was designed for 4-byte instructions only.


That is a trade-off towards density that seems not worth it, when all it would take is a 16-bit NOP to pad, and a few more bytes of memory, to save on the transistors of the implementation.

Maybe they did the actual math and figured it's still cheaper? Might be worth it.


SiFive slides: https://s3-us-west-1.amazonaws.com/groupsioattachments/10389...

Their argument is that since eventually there'll be 8-byte instructions, those will have the same cache line issues (though that could be addressed by requiring 8-byte instructions to be 8-byte aligned).


Check your link? It isn't working for me.



C is good for high-performance instruction sets too. Funny how every company that starts with greenfield RISC-V doesn't ever mention it as a problem. And yet the one company that wants to leverage their ARM investment thinks it's a huge problem that will literally break the currently established standard.


The solution to this is not to split, but just to follow Qualcomm. Their vision for the future of the ISA is simply much better than SiFive's.

Right now, most devices on the market do not support the C extension, and any code that tries to be compatible does not use it. Qualcomm wants to remove it because it is actively harmful for fast implementations, and burns 75% of the entire encoding space, which is already extremely tight. SiFive really wants to keep it. The solution to fragmentation is to just disable the C extension everywhere, but SiFive doesn't want to hear that.


> Right now, most devices on the market do not support the C extension

This is not true and easily verifiable.

The C extension is de facto required; the only cores that don't support it are special-purpose soft cores.

C extension in the smallest IP available core https://github.com/olofk/serv?tab=readme-ov-file

Supports M and C extensions https://github.com/YosysHQ/picorv32

Another sized optimized core with C extension support https://github.com/lowrisc/ibex

C extension in the 10 cent microcontroller https://www.wch-ic.com/products/CH32V003.html

This one should get your goat, it implements as much as it can using only compressed instructions https://github.com/gsmecher/minimax


The expansion of a 16-bit C insn to 32-bit isn't the problem. That part is trivial. The problem (and it is significant) is for a highly speculative superscalar machine that fetches 16+ instructions at a time but cannot tell the boundary of instructions until they are all decoded. Sure, it can be done, but that doesn't mean that it doesn't cost you in mispredict penalties (AKA IPC) and design/verification complexities that could have gone to performance.

It is also true that burning up the encoding space for C means pain elsewhere. Example: branch and jump offsets are painfully small. So small that all non-toy code needs to use a two-instruction sequence for all calls (and sometimes more).

These problems don't show up on embedded processors and workloads. They matter for high performance.


> ...but cannot tell the boundary of instructions until they are all decoded

Not fully decoded though, since it's enough to look at the lower bits to determine instruction size.

> Sure, it can be done, but that doesn't mean that it doesn't cost you in mispredict penalties

What does decoding have to do with mispredict penalties?

> Example: branch and jump offsets are painfully small

Yes, that's what the 48-bit instruction encoding is for. See e.g. what the scalar efficiency SIG is currently working on: https://docs.google.com/spreadsheets/u/0/d/1dQYU7QQ-SnIoXp9v...
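
For reference, here is the whole length rule the "lower bits" remark refers to, as a sketch (standard RISC-V length encoding; the 48/64-bit formats are still reserved and unused in practice):

  /* Sketch: only the low bits of the first 16-bit parcel are needed to find
     the instruction boundary. */
  #include <stdint.h>

  static int insn_length(uint16_t parcel) {
      if ((parcel & 0x3) != 0x3)
          return 2;   /* compressed (C) instruction */
      if ((parcel & 0x1c) != 0x1c)
          return 4;   /* normal 32-bit instruction */
      if ((parcel & 0x3f) == 0x1f)
          return 6;   /* reserved 48-bit format */
      if ((parcel & 0x7f) == 0x3f)
          return 8;   /* reserved 64-bit format */
      return -1;      /* longer/reserved encodings */
  }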


> Not fully decoded though, since it's enough to look at the lower bits to determine instruction size.

It is not about decoding, which happens later; it is about 32-bit instructions crossing a cache line boundary in the L1-i cache, which happens first.

Instructions are fetched from the L1-i cache in bundles (i.e. cache lines), and the size of the bundle is fixed for a specific CPU model. In all RISC CPUs, the size of a cache line is a multiple of the instruction size (mostly 32 bits). The RISC-V C extension breaks the alignment, which incurs a performance penalty for high-performance CPU implementations, but is less significant for smaller, low-power implementations where performance is not a concern.

If a 32-bit instruction crosses the cache line boundary, another cache line must be fetched from the L1-i cache before the instruction can be decoded. The performance penalty in such a scenario is prohibitive for a very fast CPU core.

P.S. Even worse if the instruction crosses a page boundary, and the page is not resident in memory.
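
A tiny sketch of the straddling condition (64-byte lines assumed purely for illustration):

  /* With C in the stream, a 4-byte instruction may start 2 bytes before the
     end of a cache line and straddle two lines; without C every instruction
     starts at a multiple of 4, so with 64-byte lines this cannot happen. */
  #include <stdbool.h>
  #include <stdint.h>

  static bool crosses_cache_line(uint64_t pc, unsigned len) {
      const uint64_t LINE = 64;  /* assumed line size */
      return (pc / LINE) != ((pc + len - 1) / LINE);
  }

  /* crosses_cache_line(0x103e, 4) -> true  (starts 2 bytes before line end)
     crosses_cache_line(0x1040, 4) -> false (4-byte aligned) */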


I don't think crossing cache lines is particularly much of a concern? You'll necessarily be fetching the next cache line in the next cycle anyway to decode further instructions (not even an unconditional branch could stop this I'd think), at which point you can just "prepend" the chopped tail of the preceding bundle (and you'd want some inter-bundle communication for fusion regardless).

This does of course delay decoding this one instruction by a cycle, but you already have that for instructions which are fully in the next line anyways (and aligning branch targets at compile time improves both, even if just to a fixed 4 or 8 bytes).


> I don't think crossing cache lines is particularly much of a concern?

It is a concern if a branch prediction has failed and the current cache line has to be discarded or has been invalidated. If the instruction crosses the cache line boundary, both lines have to be discarded. For a high-performance CPU core, it is a significant and, most importantly, unnecessary performance penalty. It is not a concern for a microcontroller or a low-power design, though.


Why does an instruction crossing cache lines have anything to do with invalidation/discarding? RISC-V doesn't require instruction cache coherency so the core doesn't have much restriction on behavior if the line was modified, so all restrictions go to just explicit synchronization instructions. And if you have multiple instructions in the pipeline, you'll likely already have instructions from multiple cache lines anyways. I don't understand what "current cache line" even entails in the context of a misprediction, where the entire nature of the problem is that you did not have any idea where to run code from, and thus shouldn't know of any related cache lines.


Mispredict penalties == latency of the pipeline. Needing to delay decoding/expansion until after figuring out where instructions actually start will necessarily add a delay of some number of gates (whether or not this ends up increasing the mispredict penalty by any cycles of course depends on many things).

That said, the alternative of instruction fission (i.e. what RISC-V avoids requiring) would add some delay too. I have no clue how these compare though, I'm not a hardware engineer; and RISC-V does benefit from instruction fusion, which can similarly add latency, and whose requirement other architectures could decide to try to avoid (though it'd be harder to keep avoiding it as hardware potential improves while old compiled binary blobs stay unchanged). So it's complicated.


Ah, that makes sense, thanks. I think in the end it all boils down to both the ARM and the RISC-V approaches being fine, with slightly different tradeoffs.


> Qualcomm wants to remove it because it is actively harmful for fast implementations

Qualcomm's "fast implementation" reportedly started out life as an ARM core and has had its decoders replaced. That explanation, more than any other, makes their very different use of the instruction space make sense to me. They did the minimum to adapt an existing design. Not the stuff of lasting engineering.


> Right now, most devices on the market do not support the C extension

That's outright false.

And outside of the actual devices, the whole software ecosystem very much uses the C extension.

Qualcomm simply wants to break the standard to make money; that's literally all it is.

> Qualcomm wants to remove it because it is actively harmful for fast implementations

Funny how not a single company other than Qualcomm argues this. Not Ventana, not SiFive, not Esperanto, not Tenstorrent, none of the companies from China.

It's almost, almost as if it's not that big of a deal and Qualcomm simply wants to save money and reuse ARM IP.

> The solution to fragmentation is to just disable the C extension everywhere, but SiFive doesn't want to hear that.

Literally nobody except Qualcomm wants to hear it. It wasn't even a discussion before Qualcomm. All the other companies had plenty of opportunity to bring up issues in all the working groups, and nobody did. Literally not a single company gave a talk about how the C extension was holding them back. In fact most of them were saying the opposite.


There is a lot of stuff behind the scenes you don't know. Your statement about "other companies" is completely wrong.


These things are supposed to be discussed in the open; it's an open standards process. So please link me to the official statements by these companies that they are unhappy.

I have watched many discussions and updates by the work-groups, and nobody came forward.

And why are they afraid to come forward? If this is so important then shouldn't there be an effort to convince the community?

So sorry, until I see something other than claims like 'there is a shadow cabal of unhappy companies planning a takeover', I'm not gonna buy that this is a widespread movement.


>I have watched many discussions and updates by the work-groups, and nobody came forward.

Rivos came forward. They (kindly) told Qualcomm not to put words in their mouth.

Rivos is, of course, totally fine with C.


Yup. Rivos said basically "Please don't interpret our willingness to look at your data, once you provide it, as supporting your claims"


pray tell


> Right now, most devices on the market do not support the C extension, and any code that tries to be compatible does not use it.

I don't know of ANY commercially-sold RISC-V chips that don't implement the C extension. Even the 10 cent CH32V003 implements C (RV32EC).

> burns 75% of the entire encoding space

In ARMv7 the 2-byte original Thumb instructions burn 87.5% of the 4-byte encoding space (28 out of 32 combinations of the 5 MSBs).


> most devices on the market do not support the C extension

Name one that doesn't, it's exactly the opposite (for 64-bit).


Maybe you've found the solution: RV32 must have the C extension.

RV64 and RV128 must NOT have the C extension.

Problem solved?


No, I meant that for 64-bit CPUs, virtually every available one supports the C extension.


> to be more arm64ish

I have a hard time picturing what this means. There's so much flexibility in implementing a core; what would they want to change in the ISA to make them like it better in a non-negligible way?



Mainly, richer addressing modes.

SiFive designed RISC-V to have braindead-level simple addressing modes, with the idea that you use 2-4 normal ALU ops to do addressing instead of a single op with a more complicated addressing mode. Then, to reduce the horrible impact this has on code size, they introduced the C extension, which burns 75% of the encoding space of 32-bit instructions on 16-bit instructions, but this is still only a band-aid and a much weaker solution than having better addressing modes in the first place.


RISC-V already has an extension for simplifying address calculations, Zba, required for RVA23, for doing x*2+y, x*4+y, and x*8+y in a single instruction (sh1add/sh2add/sh3add; these don't have compressed variants, so always 4 bytes). Combined with the immediate offset in load/store instructions, that's two instructions (6 or 8 bytes depending on whether the load/store can be compressed) for any x86 mov (when the immediate offset fits in 12 bits, at least; compressed load/store has a 5-bit unsigned immediate, multiplied by width).
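
Rough illustration; the instruction sequences in the comments are what I'd expect a Zba-aware compiler to emit, not verified output:

  /* An indexed 64-bit load, and the kind of code it lowers to. */
  #include <stdint.h>

  int64_t load_elem(const int64_t *p, uint64_t i) {
      /* without Zba: slli t0, a1, 3 ; add t0, t0, a0 ; ld a0, 0(t0)
         with Zba:    sh3add t0, a1, a0 ; ld a0, 0(t0)
         x86 does the same in one mov with a scaled-index address */
      return p[i];
  }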

Also, SiFive didn't design this - in 2011 there's already "Given the code size and energy savings of a compressed format, we wanted to build in support for a compressed format to the base ISA rather than adding this as an afterthought" in the manual[0], while SiFive was founded in 2015.

[0]: https://www2.eecs.berkeley.edu/Pubs/TechRpts/2011/EECS-2011-...


It's not as black and white as you portray it.

Both sides of the argument agree that high-performance implementations are doable and not hindered much by either; you can watch the debate in the profiles SIG Zoom meetings.

I think the real dispute was how the opcode space and code density will evolve as more and more extensions need to be added. Will more 32-bit opcode space and aligned 64-bit instructions, but no 16-bit and no 48-bit instructions, be the better long-term choice than fewer 32-bit instructions, but with 16/48/64-bit instructions?

Qualcomm is also currently involved in the Combined Instructions SIG, and proposed a few instructions in that vein: https://docs.google.com/spreadsheets/d/1dQYU7QQ-SnIoXp9vVvVj....

Notice that these are currently very mild combined instructions, like BEQI or CLO, which are unlikely to be cracked into uops, compared to more complex addressing modes (e.g. Apple Silicon needs to crack pre/post-increment loads/stores).

BTW, this article is actually about removing Qualcomm-specific stuff from Android, see: https://lists.riscv.org/g/sig-android/topic/105816077#msg389


It's the other way around. Support for Qualcomm's incompatible platform is being removed[0].

0. https://news.ycombinator.com/item?id=40212173


Supporting extra architectures isn't free, and cost matters in an era when devs are getting laid off in numbers.


I see a lot of comments on whether this is motivated by geopolitics -- an attempt to deal a setback to China.

I have no way of knowing whether this is the case, but in general, I would suggest that this is at least plausible, and, if I am right in this, we need to have an existential conversation about what happens to Open Source in a global warfare scenario. (cf https://www.noahpinion.blog/p/americans-are-still-not-worrie...)

High-risk-low-likelihood scenarios are a fact of life, and we get used to dismissing them as the cost of doing business. But when those same habitual reflexes get used to dismiss scenarios that are now of moderate likelihood, we collectively make mistakes that are hard to recover from. Such mistakes are especially likely when (a) contextualizing an additional piece of information, such as a seemingly-unrelated product cancelation, or (b) making plans for the future (either personally, or for a project, company, or industry).

It's time to at least _have a conversation_ about what open source looks like under present global conditions, and that conversation needs to foreground the _exceptionality_ of the past thirty years of relatively peaceful relations between great powers.

Nobody wants to have this conversation, not because it's scary, but because of the significant social pressure we all experience to discount the likelihood of unpleasant futures. But here we are.

What does trust look like in Open Source in 2030? How should we model threats? How do we ensure that as many people as possible can contribute to projects, while at the same time minimizing the risk of another xz?

A spookier question: _can_ we?


Tangentially related to RISC-V current events: just a few weeks ago came some news[0] that the RISC-V community doesn't consider support for the Yocto Project too important, and they don't want to support it anymore... (which was quite interesting, considering that RISC-V is supposed to be a big wannabe embedded competitor).

[0]: https://lists.yoctoproject.org/g/yocto/message/62906


> We received some disappointing feedback from a key organization in the RISC-V ecosystem that the Yocto Project was not important and not worth funding. We will take this feedback into account.

That's a bummer. But the core RISC-V support should be contributed straight to the Linux kernel/u-boot/gcc/glibc anyway, I guess. Then board makers, not chip makers, should contribute/provide Yocto support, based on that foundational work.

Not sure which key organization they are referring to, though.


Strong agree.

It would be bad for RISC-V to favor a Linux distribution of all things. The favoritism would look bad.

Distributions will properly support RISC-V even without RISC-V funding them directly, if they wish to remain relevant.


Wonder if this is due to US government pressure, with the aim of slowing the growth of chip making in China?


I don't understand why anyone would see dropping RISC-V support as a move against China. Chinese companies have their fingers in a lot of pies.

They're mainly invested in the ARM ecosystem, companies such as Allwinner, HiSilicon, Rockchip and UNISOC are very successful in this market. They also make x86 CPUs (Zhaoxin) and have developed their own domestic MIPS-derived architecture (LoongArch).


> I don't understand why anyone would see dropping RISC-V support as a move against China. Chinese companies have their fingers in a lot of pies.

China is doing the shotgun approach of shooting wide and hoping something hits the target, aka putting money into everything in the hope that something will work to make them independent of Western (chip-making-related) sanctions.

So, while China is doing a lot with ARM... they're also doing a lot with RISC-V. And RISC-V (at least until recent US government concerns) has had a very high uptake internationally + corresponding growth trajectory.

ARM is still "western" tech, whereas RISC-V is more international now.

For reference:

https://www.reuters.com/technology/us-china-tech-war-risc-v-...

Note the specific mention of Google, Android and RISC-V in that?

So there is clearly a change of "something" internally at Google to be backtracking just a few months later, though it could indeed be just a coincidence.

It'll be interesting if we see other (western) companies change their RISC-V plans too. ;)


Being quick to jump to conspiracy theories means you’ll have trouble being taken seriously when something real happens. I’d go with the most parsimonious explanation: Google doesn’t have plans to ship anything soon and isn’t going to commit to supporting an entire platform they don’t need.

Especially given their slash-and-burn management culture, I wouldn’t read anything more into it than nobody at Google thinking that they’re going to be promoted for working on that. I am curious whether we’ll see Qualcomm or someone else step up to do the work.


OP did not openly endorse flat earth or suggest lizard people run the world. Speculation about tech that is at the center of geo-political tensions is bound to happen and it's only natural to question if government pressure is at play.

The US has already enacted GPU bans and investigated and killed technology transfer deals. The US Commerce Department just this month is "working to review potential risks and assess whether there are appropriate actions under Commerce authorities that could effectively address any potential concerns" with regards to RISC-V. [1] That's the same department that pressured AMD to lower performance on China products before refusing to grant them an export license.

[1] https://www.theregister.com/2024/04/24/us_commerce_china_ris...


What makes something a conspiracy theory is that it ignores simple explanations in favor of something untestable and perfectly concealed. In this case we have a very simple explanation which is highly parsimonious - Google doesn’t see a business benefit yet from committing support – but we’re being asked to believe that instead the men in black visited Google to sabotage China by … not supporting an architecture which China doesn’t depend on and could easily support on their own if they did?

Your reference to other actions undercuts your argument: when the US government has acted in the past, it has generally done so officially – not the CIA subverting Crypto-AG, but Commerce using market pressure openly. I wouldn’t say it’s impossible that they could act behind the scenes but if they were competent enough to do that so quietly I’d expect them to pick a higher impact target.


Is the idea of government pressure automatically a conspiracy theory?


In the absence of any evidence or even a sensible motive, what is it? There’s no reason to think Google’s stated reason isn’t true, and as an attempt to obstruct China it would be rather pointless to go after something they have almost no deployed usage of, and which is starting behind things they do use.


It is their guess at what is happening without proof. That is the definition of a theory. It is about two entities secretly working together to do something harmful. That's a conspiracy.

So... I mean... yeah. The comment is a perfect example of a conspiracy theory. Both in the literal textbook definition and the broader cultural understanding of the term.


Two entities, not publicly, very minor harm from reduced competition; you could say the same thing about a huge fraction of discussions between two companies.

Government pressure isn't some big deal, it happens all the time.

For the broader cultural understanding, it needs to be something where making it public wouldn't be extremely boring.


GP did not "jump to conspiracy theories." They asked a reasonable question. I would agree that it doesn't pass Occam's Razor, but there has been plenty of talk from US gov about concerns over China passing the US, and with ARM specifically. There was even an article about it on HN not too long ago. Furthermore the US gov has been putting a ton of pressure on big tech companies to get them to do things without formal laws or regulation.

I hate when people jump to conspiracy theories with a passion, but I'm also beginning to hate when people overly dismiss reasonable theories as "conspiracy theories." It's important that we don't expand the definition of conspiracy theory to a meaningless height. This is what happened with the "Lab Leak" theory of Covid, which only served to "prove" the conspiracy theorists right and was and has been a major setback for people who want to call out conspiracy theories for what they are. If we expand the definition to include theories that are reasonable, then "conspiracy theory" ceases to be meaningful and we now lack a word that we really need, considering humans are prone to conspiracy thinking and it needs to be called out when it happens.


They did jump to a conspiracy theory: there’s no evidence that Google’s stated reason is hiding an ulterior motive, and as a way to hinder China it’d have very little impact since Chinese companies aren’t dependent on it.

This is not the place to relitigate the lab leak mess but I will note that the conspiracy theorists were continually in a state of wrongness. They contributed nothing but noise because they were starting with the conclusion and confabulating as needed to support it. You can and should expect people to back claims up with logic and evidence.


> as a way to hinder China it’d have very little impact since Chinese companies aren’t dependent on it

Have you missed the whole "China developing RISC-V as an alternative path forward" thing then?


Not at all, it just doesn’t work as a conspiracy theory.

China is currently using x86, ARM, MIPS (LoongArch) at scale. If RISC-V makes sense for them, they can use it with Linux now. Now, think about what happens if the hypothetical men in black manage to keep Google from supporting it. China has an enormous tech sector, so if it’s economically viable to use RISC-V for phones or tablets they can pay a couple of developers to support RISC-V in an Android fork, which is going to be popular if RISC-V is working for anyone else in the world, and their domestic market alone is large enough to support it. If that doesn’t make sense, they’ll continue using the same ARM devices they’re currently using because the men in black can’t tell anyone to stop using that platform.

Given how many US companies are looking into RISC-V, it’s also hard to see a ban being sustainable. If RISC-V starts to become competitive, American companies are not going to tolerate being shut out. If it doesn’t, being shut out won’t harm China.


You calling them "men in black" doesn't make this more of a conspiracy, it just makes you look silly. Call them "US Senators"[1] or "US lawmakers"[2]. It's a lot more descriptive, though admittedly it does make your assertion that this is a "conspiracy theory" sound pretty weak...

Also, you saying that China could reimplement RISC-V support in Android is a completely different discussion. That's a discussion about how effective the US government actions would be, not whether they were doing anything (which you want to call a conspiracy theory despite there being plenty of evidence that this is happening). You might find this article[3] and this discussion of it [4] interesting, or you might just think the entire thread is full of conspiracy theorists.

At a certain point though, the person calling people "conspiracy theorists" actually becomes the conspiracy theorist. It's not about rhetorical tricks like saying "men in black", it's about evidence.

[1] https://www.reuters.com/technology/us-china-tech-war-risc-v-...

[2] https://www.reuters.com/technology/us-lawmakers-press-biden-...

[3] https://www.bunniestudios.com/blog/2023/regarding-proposed-u...

[4] https://news.ycombinator.com/item?id=38163412


> You calling them "men in black" doesn't make this more of a conspiracy, it just makes you look silly. Call them "US Senators"[1] or "US lawmakers"[2].

If you read those links, notice how this is all happening officially in public? That’s why I referred to the conspiracy theory as such because we are left to believe that there’s some well-concealed shadow operation duplicating those efforts despite it being a poor return on their investment.

Similarly, reading your latter two links would help you understand why it wouldn’t be effective. Andrew makes the case well that even the official measures being discussed would not impede China, and the conspiracy theory would be even weaker.



