Huawei Seeks Independence from the US with RISC-V and Ascend Chips (tomshardware.com)
237 points by headalgorithm 56 days ago | 157 comments

What do they expect? The US tariff wall and embargo are essentially a subsidy that encourages local development. Without them it's cheaper to keep depending on US-controlled designs. Essentially low tariffs make it hard for local industries to get themselves above the price floor.

These parts aren’t much to get excited about today, but they’ve finally given China a reason to subsidize local efforts, resulting in the long term in an ecosystem that can take on the US design giants.

A few days ago someone posted a paper from 2002 discussing this, with a quote from a 19th-century US pol bitching about the British lowering tariffs in order to kick the ladder away.



"Essentially low tariffs make it hard for local industries to get themselves above the price floor."

This applies to both countries.

The point is that it's asymmetrical. The US produces high value products (high tech bearings, specialized software), stuff whose value is lower than the cost of transport (paper) and extractive goods (mining and agriculture). Other countries are more competitive on the low end stuff.

Look at electronics: yes, China has tons of people building phones, but Chinese companies get about $20 on each iPhone; other parts of the supply chain get dribs and drabs, and Apple pockets a couple of hundred bucks, almost all of the profit. Trying to do all that in the US wouldn't add much to the GDP and would take people away from working on higher-value goods. So for a wealthy country the tariffs not only don't help, they hurt.

Conversely, US restrictions raise the value of Chinese companies working their own way up the value chain. Before those restrictions were in place it simply wasn't worth designing, say, a new chip; that takes a long time, and while you're doing so other companies are smoking you by selling low-cost chips designed elsewhere. Essentially the US was sucking the oxygen out of their competitors' market...but these tariffs stop that.

It's the same principle of why the US wouldn't want its NATO partners spending too much on defense: essentially bribing them to not have much military is cheaper than getting entangled in a future war on the continent. Again, trying to pressure them to spend more is a very expensive move in the medium and long term.

I would not describe any situation that results in perpetual bribery to stave off war as ideal. That's extremely unfair to the American taxpayer. I would much rather spend that money on our own military so that we can have peace through strength. What you propose makes the American worker a slave to the rest of the world in an attempt to appease them into not competing. That is ridiculous in my opinion, as companies like Huawei prove that the competition is already here. Second, how are we ever going to compete when we keep sending them the designs for our latest gadgets? We're giving away the IP in exchange for cheap labor, again at the expense of the American worker. China is going to be a major player in the next century and we had best prepare.

Why would that be unfair to the American taxpayer? Making sure a war you don't want doesn't happen saves those taxpayers' lives (and money).

And in the Nato and Japanese case the "bribe" is paid in kind: you get what you are asking for (the US builds up its military and provides a security guarantee; those countries don't have to waste money on military expenditures and in exchange get a higher standard of living than the US). I suppose that's peace through strength.

> Second, how are we ever going to compete when we keep sending them the designs for our latest gadgets?

Sounds like the arguments against free software (AKA Open Source) I heard in the 80s and 90s.

The American worker isn't going to build these low margin products anyway; the design is where all the margin is.

The problem is that while RISC-V may deliver an excellent open ISA, there is no equivalent open solution for the GPU/display controller side. As I understand it, the GPU world is full of patents and dominated by just a few parties. I don't think there's something that Huawei or other Chinese manufacturers could easily start with and extend.

So I can see RISC-V being ideal for applications that are not user facing or don't require fancy graphics -- servers, network hardware, microcontrollers, etc. But it would be a tough call to put one in a phone, tablet, or laptop.

> As I understand it, the GPU world is full of patents and dominated by just a few parties.

This would typically be an impediment, but considering they're blackballed already, how beholden are they to patents? Block imports? I think that's being done already. Requiring US companies to not do business with them? That's being done already.

At this point, they just have to ride out the US/China trade war and then when things are being negotiated, I imagine a "turn a blind eye" to Huawei is going to be given. They're gimped in some ways, but in others they've probably never been more free.

Actually, what's to stop Huawei from just copying and incorporating whatever they need into their IC designs regardless of patents? I can imagine foreign foundries refusing orders from Huawei to avoid manufacturing IP-infringing designs, but are there foundries in China that can produce the chips necessary while ignoring foreign patents?

As an aside to that, how do foundries know whether their client designs infringe on patents? Is there some sort of automated system that goes through the design files they are sent looking for areas of silicon that may contain infringing IP blocks?

>are there foundries in China that can produce the chips necessary while ignoring foreign patents?

Realistically, no. The only two competitive, bleeding edge foundries are TSMC and Samsung.

SMIC is now rolling out 14nm. Apparently that's competitive enough, given that people still buy Intel's 14nm CPUs.

There's a very, very big difference between "rolling out 14nm" with unknown yields, performance, etc., and Intel's venerable 14nm process.

Intel is Intel, as unhelpful as that statement is. So these guys are starting on 14nm, while others are pushing ahead to 5 and below. Sure it'll work, but I don't find that competitive in an already crowded smartphone market in the next 2 years.

They've been rolling it out since 2016; better to wait until their product actually appears.

> Is there some sort of automated system that goes through the design files they are sent looking for areas of silicon that may contain infringing IP blocks?

I'm going to venture a solid guess that no such system exists in America simply because possessing the IP to match potentially infringing IP against would be problematic for the companies it's designed to protect.

Intel: "Hey GlobalFoundries, you're not one of our suppliers but here's the schematic for our latest chip. Please make sure nobody takes it!"

And I can all but GUARANTEE that doesn't exist in China. Unless of course it's to pick up any IP they might have missed when they stole it the first time.

Infringement is detected by display of patented features or potentially reverse engineering relevant portions of an IC. There is no practical way to police infringement using the disparate array of design inputs mapped onto proprietary gate libraries.

Infringement would most likely be detected by reading data sheets and looking for similarities. Armed with that, an IP holder would go through the very expensive reverse engineering process. They're unlikely to do that unless a win in court is likely.

There is no easy way to check for IP infringement here. When the key idea of a design may be 100% lifted illicitly, the schematics may be somehow different, and the mask layout, incomprehensibly different.

You imply Huawei was playing by the "rules" and not already stealing. [1] [2]

[1] https://arstechnica.com/tech-policy/2019/01/us-indicts-huawe...

[2] https://www.bloomberg.com/news/features/2019-02-04/huawei-st...

This seems disingenuous and arbitrary considering the decision to put Iran on a ban list was basically taken unilaterally by the US, all the while Cisco and Bluecoat were selling tech to build the great firewall of China, and a long list of other (US, Israeli, etc.) companies are selling tech to basically every authoritarian, oppressive regime out there without any legal or moral consequences.

As for Huawei, the fact that the bans and sanctions tend to be fluid and negotiable despite being called a national security issue kind of puts the "national" part into question. Someone certainly benefits.

And consider that the media does a great job of painting half of the picture. The other half is filled with the complete absence of articles about US companies trying to steal secrets. The plausible explanation is that the media is heavily biased (in all honesty, it's not like they could publish articles on active operations). In this context, does it seem fair to judge having only half the information?

What's arbitrary about pointing out a string of high-profile reports regarding Huawei attempting to steal trade secrets? I don't see the same reports coming out of US companies in regards to Chinese companies.

> What's arbitrary

The decision to put Iran on a ban list, as I wrote very clearly.

> reports regarding Huawei attempting to steal trade secrets

This is called a "national security" issue but doesn't appear to be treated as such. Which means the threat is not taken seriously or there is no threat.

> I don't see the same reports coming out of US companies in regards to Chinese companies

I'm not sure I understand. The reports about Huawei came from US companies. But in case you were wondering why there are no reports about US companies attempting to steal trade secrets there are 2 possible answers: 1) US companies never spy or 2) US media never talks about it. Which sounds more plausible to you? And if it's 2) I'll ask again: is it fair to form an opinion having only half of the picture?

"Tappy" again, plus a story from Bloomberg, which has shattered its reputation when it comes to technology/security reporting (SuperMicro, among other false articles whipping up anti-Chinese hysteria).

The NPR segment is just repeating what Bloomberg said.

The ZDNET story is about a Huawei engineer ... wait for it ... asking someone at Foxconn about what components Apple uses in its smart watches.

The CNBC story is about Huawei and a tech company founded by one of their former employees suing one another, each alleging the other stole trade secrets. These sorts of disputes happen all the time.

You'll notice that nowadays, there's tons of reporting on every minor IP dispute or claim involving Huawei. At the same time, and I'm sure this is completely coincidental, the United States government is trying to paint Huawei as a "bad actor," so that it can be used as a hostage in the trade war.

Curious where I implied Huawei didn't before. I merely stated that enforcement of patents, or "rules" as you put it, no longer applies, because the negatives are already in place against them.


If we're all just gonna take our marbles and go home, I guess there's nothing really stopping any of us from just making our own marbles.

Still pretty dumb to make your own marbles though. But I get your point there.

GPUs have sort of plateaued except for the raytracing component. There are quite a few vendors: Mali from ARM, Adreno from Qualcomm, PowerVR from Imagination, NVIDIA's GeForce, AMD's Radeon, Intel's integrated stuff, and then a few smaller players (Matrox, Broadcom, etc.).

I think there is room for an open source straightforward GPU based on the standard principles. Although maybe there is patent licensing between all the parties that I am not aware of.

> GPUs have sort of plateaued except for the raytracing component

What? As far as I understood it, CPUs have plateaued outside of efficiency gains; GPU power still follows Moore's law. Generational benchmarks show big leaps in every new arch on userbenchmark, whereas CPUs from 7 or 8 years ago are only marginally slower (single-core) than modern ones.

>Generational benchmarks show big leaps in every new arch on userbenchmark,

That is because GPUs scale very well with transistor count: throw in more transistors and memory bandwidth and you are good. On the CPU you could double the transistor budget and get only a marginal improvement in IPC.

To expand on that, this is my understanding: process shrinks used to lead to faster clocks AND more transistors, but a wall was hit, and since then they only lead to more transistors. GPUs can use those extra transistors because their workload is embarrassingly parallel (thousands of parallel things), but CPUs can't because their workload is mostly sequential (developers have to explicitly write code for every thread to do something useful, and find this hard to do). You can see the clock rates peaking with the Pentium 4, crashing when that design proved a dead end, and then plateauing: https://i.stack.imgur.com/z94Of.png

IPC improvements are still happening, which is why Intel has been on 14nm for years and still makes CPUs every year that are a little bit faster, but they are hard to achieve and you only get incremental improvements.

If someone could figure out a way to write regular software that easily parallelizes to hundreds of cores, CPUs could also get the same leaps in performance that GPUs have been enjoying. It's a software problem more than it is a hardware problem.

>If someone could figure out a way to write regular software that easily parallelizes to hundreds of cores, CPUs could also get the same leaps in performance that GPUs have been enjoying. It's a software problem more than it is a hardware problem.

Amdahl's law [1]. And while we have demonstrated that parallelised code can speed things up, generally speaking software performance still has lots of low-hanging fruit without even going deep into the parallelism and concurrency world; it is just a matter of cost whether it is worthwhile to improve or fix it.

[1] https://en.wikipedia.org/wiki/Amdahl%27s_law
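To make the limit concrete, here's a minimal sketch of Amdahl's law in Python (the 95% parallel fraction is just an assumed example, not a measured figure):

```python
# Amdahl's law: if a fraction p of a program parallelizes perfectly
# across n cores, the serial remainder (1 - p) caps the total speedup.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelized, 1000 cores yield under 20x,
# and no core count can ever beat 1 / (1 - 0.95) = 20x.
print(round(amdahl_speedup(0.95, 1000), 1))  # 19.6
```

This is why throwing GPU-style core counts at mostly sequential code buys so little.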

Can you explain why CPUs do not scale so well?

Because GPUs always run trivially parallelizable workloads and CPUs usually don't.

I am not an expert but I will take a shot. CPU speed is partly determined by the clock: faster clock, more data processed per second. That's the simplest understanding. Modern CPUs use a pipelined architecture, so just as a compiler like LLVM has tons of optimizations, the same can be said of CPUs. Your CPU doesn't execute instructions one by one like a hardware Turing machine; it consumes a large chunk of instructions, figures out what's needed and what's not, and then processes it, all within the few picoseconds of each cycle. It's less like a car assembly line and more like a fruit-sorting machine: different operations get executed simultaneously if the processor can prove that they are not, e.g., required to be in order. A CPU with more transistors can apply more such optimizations, i.e., a bigger pipeline. You can also throw in other stuff like caches to squeeze out even more performance. The other side of Moore's law is economics: essentially the cost of computing gets halved, more processing power for half the cost. Same chip, more power.

Now, for CPUs, clock speed has pretty much plateaued in the last decade. It is still increasing, but slowly. At the same time there are limits to pipelining, and if you get too eager with optimizations you end up introducing vulnerabilities (since this is HN, I don't need to cite examples; there have been plenty). We are approaching the limit of transistor scaling, and the clock isn't magically oscillating faster, so there is now greater urgency on the architecture side. Current software is unable to take full advantage of multi-core processors or parallelism to the extent of making Moore's law great again (the last time Intel launched a CPU with an emphasis on parallelism it didn't really work out; look up Itanium). It shifts a lot of work to the compiler end and makes life unnecessarily difficult for compiler authors.

Now, GPUs are designed for running multiple computations simultaneously, but the fact remains that most operations cannot be trivially parallelized. Matrices and linear algebra can, for obvious reasons: a large grid of numbers is easier to process at once than with a for-loop. Most problems are not like this. The next step for computing may be FPGAs: digital circuits that can be reconfigured on the fly. Certain new Intel CPUs ship with one, and most processors have a basic version in the form of microcode. Whether this will work out, I do not know. FPGAs, too, are better suited to trivially parallelizable tasks than to your usual CPU-based workload. Theoretically, in the future you could perhaps spend 20 minutes compiling Minecraft or Factorio to an FPGA bitstream during the installation process, though more likely only the processor-intensive bits would get optimized. When you load the game, part of the chip gets reconfigured for it. I believe this will benefit long-running processes more than short ones.

Now I am no subject domain expert so somebody with more insider experience should elaborate on this.

Why can't a GPU power a computer?

I'm not an expert and might be horribly wrong and welcome edification, but I believe you don't want your GPU powering your computer because

- GPUs don't handle I/O other than writing to video RAM.

- GPUs don't handle interrupts.

- GPUs don't handle branching well.

For the first two, a lot of infrastructure that is part of the chipset and platform would have to be extended to each compute unit of the GPU.

Imagine a database on a GPU. It's not like 1.5K-2k+ cores that are super good at math can read and write your disk or disk array at once.

Same reason we have OLAP and OLTP systems. Same reason it makes sense for some use cases to set up a Hadoop cluster, and for others to have a single but fast CPU: some tasks can be run in parallel, others can't.

See Amdahl's law : https://en.wikipedia.org/wiki/Amdahl%27s_law

You can calculate the value of each screen pixel without knowing the value of the others. To calculate the nth value of a Fibonacci sequence you have to calculate the n-1 and n-2 values first, so it doesn't matter how many cores you have. GPUs are for the former and CPUs for the latter.

Nitpick: I understand your point in general, but it's not actually true that you need to calculate the n-1 and n-2 values first for Fibonacci.

You can use Binet's formula or matrix exponentiation to calculate it without such a dependency.
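For the curious, here's a minimal sketch of the matrix-exponentiation route: repeated squaring of [[1,1],[1,0]] yields F(n) in O(log n) multiplications, with no direct dependence on F(n-1) and F(n-2).

```python
# F(n) via matrix exponentiation: [[1,1],[1,0]]^n = [[F(n+1),F(n)],[F(n),F(n-1)]].
def fib(n: int) -> int:
    def mat_mul(a, b):
        # 2x2 integer matrix product.
        return ((a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]),
                (a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]))
    result = ((1, 0), (0, 1))  # identity matrix
    base = ((1, 1), (1, 0))
    while n:                   # binary exponentiation: O(log n) squarings
        if n & 1:
            result = mat_mul(result, base)
        base = mat_mul(base, base)
        n >>= 1
    return result[0][1]

print([fib(i) for i in range(8)])  # [0, 1, 1, 2, 3, 5, 8, 13]
```

(Binet's closed formula works too, but floats lose precision for large n; the matrix form stays exact.)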

GPUs are made to execute a limited set of the same operations on a huge amount of data in parallel. This is a totally different workload from your usual computer programs, which commonly have a long and complicated series of commands. Just translating your program 1:1 to your GPU computer would thus make it way slower. Your GPU probably runs at around ~1.5GHz, a third of your CPU's clock. The GPU does have a few thousand cores that run in parallel, while our consumer CPUs have at best a dozen (each more capable than any single GPU core), but most of those are often idle, since it's a lot of work to make your software take advantage of them. For some tasks it's worth it, and those take advantage of, e.g., your GPU.

The biggest issue is lack of huge caches, and the extreme latency to main memory due to the speed of light.

GPUs are fine, or even better, for a wide range of compute tasks (see CUDA etc.); they just end up being very slow at a subset of common tasks.

Circuits don't operate at speed of light, though. People like to say this over and over again, but electron mobility through a circuit is only around 2/3s the speed of light, IIRC. They may have very tiny mass, but they still have mass.

No, it's the electromagnetic field which propagates at around 2/3c in the conductor. The electron drift velocity is on the order of a fraction of a mm/s.

It is the mass in the electrons that creates the heat.

Number of transistors/$ is about to reach a plateau soon (if it hasn’t already.)

More powerful GPUs will happen as Moore’s Law continues for a few more years, but the free for all will soon be over.

This is true for the whole semiconductor industry, and the consequences will be very interesting, not necessarily in a good way.

There is plenty of work being done on RISC-V vector computing and GPU equivalents. E.g.:


As someone who looked hard into it, RISC-V doesn't really make sense for GPU shaders, IMO. You'd have to change so many things about how the ISA works that you wouldn't get a benefit from using a common ISA. Stuff like how shaders are batched and dispatched; texture accesses make sense as their own instructions; you want a larger register file on the scalar side; the memory barrier semantics are much more complex; etc.

RISC-V might make sense inside of the GPU accelerators though. GFX command list processing, DMA engines, and video codecs all make sense to be something like RISC-V.

Most of the GPU workloads these days are not graphics related. They are general compute vector machines. The RISC-V vector extension is based on Cray's ISA and meant to compete in this space.

Nvidia are supposed to be using RISC-V inside future GPUs.

Yes, as a replacement for their internal proprietary logic controllers which operate in the periphery.

That has nothing to do with the RISC-V ISA being the right choice for core functionality like shader cores.

AFAIU their RISC-V variant can also be proprietary, akin to FreeBSD-based products such as Juniper's Junos or macOS.

To replace their falcon cores in the places I talked about in my second paragraph.

I was just trying to point out that Nvidia seems to be a bit more positive about the idea than "it might make sense".

There is an okish Chinese GPU maker. We'll have to see if they actually deliver though. But with more funding they may be competitive.


>Military ties, Chinese listed company

That's how you know it's nothing but a marketing campaign to boost their stock price.

Nothing about RISC-V would cause any more difficulty interfacing with GPUs. Any GPU you can license piecemeal right now, can also be connected to a RISC-V complex.

Presumably if Huawei can't license an ARM core because of trade restrictions, they're not going to be able to license a GPU because of similar restrictions.

Maybe Jingjia. Most GPU IP vendors are U.S., Korean, or broadly Western, so I can imagine it'd be difficult. It's not like they're having a more difficult time as a result of the host ISA. That's my main point.

ARM isn't going to license their GPUs to a competitor.

AMD just announced a major license of their GPU tech to Samsung. Nvidia, Qualcomm, and other vendors have complete designs with their own GPUs. Vivante and Imagination would surely be glad to license their tech to smaller players.

From a hardware POV, GPUs these days are, to a close approximation, just multicore CPUs with more focus on parallel throughput instead of single-core performance, plus a few specialized graphics instructions (like texture lookup) that are unlikely to be badly patent-encumbered due to their age.

If they can only sell their phones in China, they will go ahead and make their own GPUs since they can violate US patents all they want in their own market. That will just force them to develop good GPUs.

Huawei is still largely a networking equipment company; the consumer branch is growing very fast, but not essential, as per their CEO.

I was in Beijing at their press conf last January when Richard Yu proudly announced that in 2018 the Consumer Business Group had a record revenue of fifty billion US$, making as much as the infrastructure division for the first time ever. Sure Ren Zhengfei is ready to throw the CBG under the bus if needed (but it won’t happen, as it is sustainable even as a China-only business). Still the consumer division is way past being just a small part of the company.

It's not clear to me that you can't make an open source CPU with a chip like RISC-V.

Let's be real, the Chinese have been stealing technology right and left for years. Losses are estimated at between $225 billion and $600 billion per year [0]. One in five companies has had its technology stolen [1]. They will just steal the tech. Or maybe they'll decide it's easier to work with AMD (they're already doing it on the CPU side [2]).

This is not popular to say, but it is accurate and my statements are sourced. Feel free to provide rebutting evidence if you disagree.

[0]: https://money.cnn.com/2018/03/23/technology/china-us-trump-t...

[1]: https://www.cnbc.com/2019/02/28/1-in-5-companies-say-china-s...

[2]: https://en.wikipedia.org/wiki/AMD%E2%80%93Chinese_joint_vent...

The methodology in the US study seems to suggest that the Chinese are unable to do anything on their own, and any technology that US companies have that somehow China also got is ALL counted as stolen.

That's a very fun verdict

Especially since China is the number one scientific country in terms of number of published papers and is, e.g., the leader in nuclear fission and AI.

The losses numbers are from a US government investigation. So they might have some bias ... and therefore not really mean anything.

I'm surprised that it's only 1 in 5 companies. Based on prior client experiences with Chinese contractors, I would have expected it to be 5 of 5 foreign companies doing business in China.

I would assume that it's due to a limited number of companies having useful IP to steal. I have had similar experience with several companies, and hear similar stats from those who work with other companies, but that is anecdotal. I would also hazard a guess that many companies don't want to admit it, even if it does happen.

What you say may be accurate, but it's also extremely biased by a paradigm of strict imaginary property... which isn't necessarily useful for understanding how things are going to play out.

You can't steal an open source ISA .... unless you don't follow the license (which in this case is trivial to do)

More like "Huawei seeks to survive in the future in spite of the US squeezing them. They want to believe these other chips will let them do that."

I don't understand the tech well enough to know how much of a fairy tale or hope and a prayer strategy this is. But calling it "seeks independence" is some incredibly positive spin which makes me suspicious of the claims.

If these other chips are so good, wouldn't you go ahead and migrate to them for the actual benefits rather than researching them as a backup plan?


There's tremendous inertia in apps, which is what gave x86 an effective monopoly in the 90s, even when superior options were available (hello Alpha). Today the situation is actually better, and with RISC-V having the same memory model as Arm, porting from an Arm version is way easier (Neon code notwithstanding), but it's obviously way less attractive than running existing code.

> Today the situation is actually better, and with RISC-V having the same memory model as Arm, porting from an Arm version is way easier (Neon code notwithstanding)

It's really not any different today than it was. The memory model being similar is kinda nice, but you still need to convince the world to compile for another arch. And you need to make a CPU good enough to justify the recompile and ongoing multi-arch maintenance.

IMO it's hugely different, at least in the server space. In the 90s _all_ servers ran on proprietary platforms (OSes and micro-architectures). To succeed with an alternative, you had to convince your OS vendor (Microsoft, Sun, ...) to port. The most famous attempt was Windows NT (which ran on x86, PowerPC, Alpha, and possibly more), but the effort fizzled.

Today, the MAJORITY of servers run Linux, even on Microsoft's cloud platform, and a significant part of that uses primarily open source (e.g. "LAMP"). If, say, RHEL supports your platform, it's fairly easy for most to move.

Yes, it's not easy to migrate, but it is _way_ easier than it used to be. Now, non-server applications are a different story.

So QWERTY keyboards all over again, basically.

"Literally every software and hardware market since computers became a commodity" all over again, you mean. They've always tended towards monopolies.

IP law is harmful to society.

I didn't interpret "seeks independence" as positive spin at all. It's clear and obvious that they are trying to become more independent of US technologies now that they are suffering at the whims of a mercurial personality. Indeed, the article goes into detail with quotes about how their desire for RISC-V is driven by the embargo.

I think you may be reading intent into what "seeking independence" means that is not there.

A one dimensional comparison like good or bad is not how these decisions get made. Current ARM based chips are good enough for current applications. They come with massive app support and a mature ecosystem.

Given the risk of being kicked out of that ecosystem, the score tips in favor of this new technology.

So there are basically 2 possibilities.

It is possible for Huawei, and Chinese companies in general, to build a successful alternative without using American technology (for the purposes of this post, I'm defining successful as equivalent to their American tech using competitors).

It's not possible.

What the actions on Huawei have done, however, is ensure that if that possibility exists, very significant chunks of Chinese resources will be devoted to finding it.

It appears China had decided this wasn't completely necessary, and was happy to rely on Americans for software and core hardware. But that is obviously not true anymore, and the biggest long-term effect of these actions is Chinese state resources being poured into directly combating and threatening American tech's competitive position.

And seeing that the tariffs and capricious actions have not been limited to China, it won't even be surprising if some other nations join in to help, especially from the EU and/or India and Korea.

I think this is actually a good outcome, at least from a standpoint of avoiding a technical monoculture (especially in hardware). It's probably also good for general innovation as a whole.

It also shows how short-sighted the USA is with this. They've given China an incentive to reduce its reliance on the US, leaving the US with less leverage overall. China may set a damaging precedent here with regard to US power

> China may set a damaging precedent here with regard to US power

I'd say precedent was set when congress blocked China from joining the ISS, leading to China developing its own space station program.

+1 Unfortunately, I totally agree with you. I think that we are making a long-term strategic blunder.

> without using American technology

Eh, RISC-V was developed at UC Berkeley. In the United States.

The implication was patented technology. Linux, for example, has been developed all over the world, yet I am sure the administration would happily claim it's American as well, just because of where the Linux Foundation is located; but that doesn't matter as long as it's open.

I think that RISC-V has a long way to go to be actually competitive with ARM on an IPC basis. Although it is great to have more competition in the market, I believe this is more of a 5-10 year goal than a distinctly near-term one.

RISC-V is an instruction set. I think you're confusing that with particular implementations of RISC-V (of which there are few yet).

But none of them are competitive at all on an IPC basis right?

Don't get me wrong, I would love a RISC-V chip that is competitive with the ARM chips in smartphones, but I think we are quite a ways away from that for the time being.

RISC-V is most successful in microcontrollers, which do not have to be state-of-the-art fast; they just have to work reliably.

BOOM is close to Ivy Bridge IPC in one benchmark. https://people.eecs.berkeley.edu/~celio/

Using MHz as a divisor really benefits BOOM, and extrapolating from IPC makes some assumptions about the chip's future ability to scale to frequencies similar to Intel's product line.

Assuming the best performing BOOM chip refers to the 2018 HotChips tapeout, that chip operated at 1.0 GHz. The lowest speed Ivy Bridge Core/Pentium (a design getting close to a decade old, FWIW) was 2.5 GHz. This means on a core-to-core basis Ivy Bridge is 2.7x more performant. This ignores the benefit of the additional core count of the Core series of Ivy Bridge procs.

Today's 9900K is also around 40% more performant than the comparable Ivy Bridge (4960X). That would suggest the performance gains modern Intel design holds over BOOM is closer to 3.8x on a core-to-core basis.
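Taking the thread's own figures at face value, the per-core arithmetic can be sketched like this (all inputs are numbers quoted in these comments, not measured benchmark data):

```python
# Rough per-core performance comparison using the figures cited above.
# All inputs are assumptions taken from this thread, not benchmark results.

boom_clock_ghz = 1.0    # 2018 HotChips BOOM tapeout
ivy_clock_ghz = 2.5     # lowest-clocked Ivy Bridge part cited

# With roughly equal IPC, per-core performance scales with frequency;
# the 2.7x figure quoted above implies a small extra IPC edge for Ivy Bridge.
clock_ratio = ivy_clock_ghz / boom_clock_ghz   # 2.5x from clocks alone
ivy_vs_boom = 2.7                              # core-to-core figure quoted above

# The 9900K is cited as ~40% faster per core than Ivy Bridge (4960X),
# so the modern-Intel-vs-BOOM gap compounds the two ratios:
skylake_uplift = 1.4
modern_vs_boom = ivy_vs_boom * skylake_uplift  # = 3.78, i.e. "about 3.8x"

print(clock_ratio, round(modern_vs_boom, 2))
```

So the "3.8x" above is just the product of the two quoted ratios, which is why the conclusion is only as good as the 2.7x and 40% assumptions feeding it.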

Being within an order of magnitude performance-wise is impressive given that BOOM is mainly a one-man show. That said, having about a quarter of Intel's present chips' performance doesn't strike me as particularly "close".

IPC means "instructions per cycle," so it's correct to divide the scores by frequency. Totally agree that in total performance, no realized BOOM chip comes close to mainstream Intel chips. I think Chris is working at Esperanto Technologies (https://www.esperanto.ai/technology/); hopefully, they upstream performance improvements to the BOOM open-source repo.

It's a bit more nuanced than that. It's a lot easier design-wise to hit a given IPC at 1.0GHz than it is to hit the same IPC at 4.0GHz.

I understand and I appreciate the test from an intellectual standpoint. But the IPC definition is orthogonal to meaningful performance benchmarking when I can't get real world equivalent clock frequencies of each processor type.

@Taniwha - I can't reply to you directly because we're so deep, thread-wise.

I appreciate and understand the discussion around IPC and how BOOM is a significant improvement on the state of the craft. I cut my early programming teeth by developing PPC and SPARC versions of my business' embedded software products. I'm a longtime fan of alternative architectures and keep up with the industry through attendance at events such as Hot Chips. I am familiar with the terminology here and am not disputing it or the findings of the benchmark.

My point was that using IPC in this way obfuscates how far these architectures have yet to go to be competitive in certain markets. Said market competitiveness was the original discussion topic up-thread.

You can always click on the timestamp of a comment to get the comment permalink and reply from there.

That didn't seem to work at the time but both options are now available. Too late to delete and re-post, regardless.

IPC "Instructions per cycle" is also "instructions per clock" - it's right there in the name "per clock" - it's a measure of architectural efficiency independent of clock speed.

I agree that being able to build something that runs at speed is equally important, but IPC does have an exact meaning.
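Concretely, IPC falls out of instructions retired divided by cycles executed, where cycles are just runtime times clock. A minimal sketch (the counts below are made up for illustration, not taken from any real benchmark):

```python
# IPC = instructions retired / cycles executed, with cycles = runtime * clock.
# All three inputs below are illustrative placeholders.

instructions = 8.0e9   # instructions retired during the run (assumed)
runtime_s = 2.0        # wall-clock runtime of the benchmark (assumed)
freq_hz = 2.0e9        # core clock of 2.0 GHz (assumed)

cycles = runtime_s * freq_hz   # 4.0e9 cycles
ipc = instructions / cycles
print(ipc)                     # 2.0
```

This is why dividing a benchmark score by frequency yields a clock-independent efficiency figure, and also why it says nothing about whether the design can reach a competitive clock in the first place.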

>Today's 9900K is also around 40% more performant than the comparable Ivy Bridge (4960X).

Most of that relative performance gain is not from increased speed, but from the decreased speed of older processors due to Meltdown (and Spectre) mitigations. In the past, the 4960X and its contemporaries were much faster.

If RISC-V takes off, I think we'll first see it come from small shops that put it on the side of a chip with other functionality they're selling. Then it may bubble up into more advanced implementations as demand for the instruction set builds up.

Process tech is slowing, so I think the costs of the best processes will go down and we'll see more competition among CPU designs.

Look into the Esperanto chips. They will actually offer quite high-performance soft cores, competing with high-end ARM.

No, GP is saying that there is a long way to go before RISC-V implementations can match ARM implementations in competitiveness on an IPC basis.

If you properly examine the context, then you can do like most and interpret "ARM" as "current mainstream ARM implementations".

I can recommend the book The RISC-V Reader: An Open Architecture Atlas by David Patterson and Andrew Waterman.



In the book the authors compare the expressiveness (and thus code density) of code implemented with the RISC-V ISA against x86, ARM and MIPS instructions. They show that the RISC-V ISA is able to give ARM a good fight in terms of IPC, and that it beats MIPS (mainly due to MIPS's delay slots, which can't be filled in many real-world cases).

They also show why RISC-V should let implementations scale better in clock speed compared to ARM. This is the kind of thing Patterson has been researching for the last 3-4 decades, and it is quite interesting to see how that experience has guided the design of the RISC-V ISA.

It's hard to speculate what's going to happen in the end with the tech industry when there is turmoil - there are too many factors and actors at play.

But, I, for one, enjoy watching it tremendously (it's like "soaps" for us tech folk).

Imagine the irony if Huawei turns into an open source company that is the best option for user privacy.

Just by going with RISC-V, they eliminate the issues with the Intel Management Engine.

I'm curious what Huawei will do next.

That seems... unlikely.

Does it? Their first ARM doc was extremely open.

It is often used by academics for researching really low level stuff.

Their history with Android is very poor. Their phones are next to impossible to physically open, and they offer no way to unlock the bootloader; there are, however, individuals who will charge you a fee to unlock the bootloader using their exploits.

I wonder if this will contribute to the world having RISC-V laptops sooner rather than later.

Why would we want RISC-V on laptops?

Maybe it won't contain a vendor "management engine".


Maybe, but I doubt it. It's there to support a bunch of features that companies want (e.g. remote management, trusted computing, etc.). The reason the consumer version has it is that most consumers don't mind, and having a separate SKU with it disabled/not present would cost extra. This won't change with RISC-V. You might be able to get a low-performance part without it, but all the mainstream (i.e. high-performance) parts will have it.

Intel has a huge variety of different models; they are experts in market segmentation. They could do it, but they won't, and there are few others who could.

With RISC-V, it is reasonable to expect multiple vendors to provide CPUs. There is a very good case to expect non-backdoored chips.

>Intel has a huge variety of different models. They are experts in market segmentation. They could do that. But they won't, and there are few who could do that.

That's true, but the way they do it requires minimal changes. In terms of the management engine, that means not including the AMT module (software) but keeping the rest of the ME (hardware). After all, it also does some essential things like power management and booting. Removing it completely would also mean re-implementing that essential functionality.

The best you can hope for, if you want to use mainstream parts, is something like me_cleaner, where the management engine is only active during boot and is disabled afterwards. Something without a management engine would certainly be possible, but would be limited to niche products like the Librem.

> Something without a management engine would certainly be possible, but would be limited to niche products like the Librem.

I don't share this forecast. I think many private customers would view a machine without the ME and its capabilities as a plus when the marketing is done right, especially since most don't care for those capabilities.

>I don't share this forecast. I think many private customers would view a machine without the ME and its capabilities as a plus when the marketing is done right

A lot of people say they value privacy, but when push comes to shove, they'll willingly trade privacy for convenience and cost. See: any Facebook/Instagram user, or iOS users who use Google Maps. iOS/Mac is marketed as the private phone/computer, but their market share shows how little consumers are willing to pay a premium for privacy (except in the US, where iOS share is surprisingly high). You can try to convince consumers otherwise, but in truth, a management engine with backdoors/0-days isn't really applicable to most consumers' threat model. The risks from social engineering attacks or unpatched software are a much greater threat.

If the RISC-V chips are manufactured in China, then they'll probably contain something worse than Intel ME.

The available Linux-capable RISC-V SoCs have an equivalent to the management engine, though the firmware is open, in the case of SiFive's offering.

Why not? A diverse ecosystem of silicon with some actual competition is going to help get us out of this US-centric Intel/AMD world where all laptops are essentially the same.

It appears likely that Apple will be transitioning to ARM for its laptops in the future so regardless we should start seeing architecture diversity in laptops.

Microsoft and Google have already mass-marketed multiple iterations of ARM laptops (the latter to great success in certain spaces), but "2" is not much diversity. Power has remained an active player, but not in the laptop/phone/mobile space (and it's still very niche even in workstations).

RISC-V will bring much needed diversity to spaces currently dominated by two titans.

For even the top models? How can that be performant?

Just reading up on it, the big thing pushing this is that the instruction set is completely open source. That's why someone might want it.


The cost of licensing the instruction set is a very small part of the overall licensing cost.

High-performance microarchitectures are never going to be open source. They cost hundreds of millions to design, and a design stays relevant for only 4-5 years.

If nothing else, why not simply for competition, more choice and lower prices?

What OS would they run?

Chromebooks, Linux... and if Microsoft sees it's losing a market, Windows will be there.

A Linux distro probably, at least initially.

Does Huawei sell Linux laptops today? Why not? Why would that change? If they don't think x86/Linux or ARM/Linux laptops are a good idea, why would RISC-V/Linux laptops be a good idea?

People love to come up with fantasies and try to work backwards from them, but sometimes you really can't get there from here.

They sell Windows Intel laptops, pretty good ones, but now they've been shut out of that market, so they may very well make RISC-V laptops with Linux or some proprietary OS.

If Huawei adopts RISC-V then they drive RISC-V adoption to some indeterminate extent. When the market is sufficiently large, someone might consider RISC-V laptops to be a good idea. Even if it's "only" a company like Pine64.

You have a surplus of condescension and a lack of imagination.

> If they don't think x86/Linux or ARM/Linux laptops are a good idea

They didn't until now, but the US government just told them that commercial OSes with a connection to the US cannot be trusted, so they might very well be rethinking that.

Is there a Windows version for RISC-V? Can they still legally use it? Do they want to be dependent on Microsoft? These are questions that need answers, and I doubt they are all answered affirmatively.

Most likely, but Android is but a recompile away (sure, not so for 3rd party apps).

Even today, with hardly any hardware, there's (to various degree of completion): Debian, Fedora, Slackware, FreeBSD, and seL4.

> sure, not so for 3rd party apps.

Ideally, once there is a JVM that runs on RISC-V, those apps are ported for free. An OpenJDK port is underway.

I have no idea how much work it would be for Android to port its APIs, but Google certainly has the manpower if this is something they want.

I don't know why there would be _any_ porting involved. Arm and RISC-V are so similar that the compiler should take care of it.

The major pain points today are the JITs, which apply to Java and JavaScript in particular. There has been precious little (public) progress on these.

It's a lot of work, and I don't know that there's any support for the effort to port V8 and ART to RISC-V. When I was starting on a V8 port, I didn't see much interest in providing material support for it.

I got one interview out of it, but I feel like maybe the Foundation is the right place for that. They'd have to delegate the hiring and management to a member, but I think it could work.

Huawei has their own VM for Android, so they would likely port that for their effort.

Absolutely, giants like Google or Huawei could pull this off; I really don't know if we'll see Java without someone like that. Personally I'm keen to see Mozilla's JavaScript VM (is it still called SpiderMonkey?) ported, but I doubt it will happen until we see more widely available/affordable RISC-V hardware.

I just don't have any interest in running any software that uses mozilla's JavaScript runtime. It has a lot going for it but I just find Mozilla based browsers misbehave and perform poorly on all my hardware, in ways that upset and frustrate me.

> Arm and RISC-V are so similar that the compiler should take care of it.

But the compiler (openjdk in this case) still needs to be ported, no?

EDIT: I think we are talking about two different things. The Java compiler (which emits Java bytecode) should be ported for convenience, but that is not necessary. The compiler that builds the actual JVM into an executable doesn't need to be ported; rather, the JVM itself needs support added for emitting RISC-V machine code.
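The same split exists in other bytecode VMs. As an illustrative analogy (using CPython here, since the principle is identical), the bytecode a front-end compiler emits is architecture-neutral; only the VM binary that executes it must be built for the target ISA:

```python
# Illustration of the bytecode/VM split discussed above, using CPython
# as an analogy for the JVM. The bytecode is architecture-neutral; only
# the interpreter binary must be compiled for the target ISA (e.g. RISC-V).
import dis

def add(a, b):
    return a + b

# dis shows portable stack-machine opcodes; no ARM/x86/RISC-V machine
# instructions appear here - those live inside the interpreter itself.
ops = [ins.opname for ins in dis.get_instructions(add)]
print("BINARY_ADD" in ops or "BINARY_OP" in ops)  # True on CPython 3.x
```

(The opcode is named BINARY_ADD on older CPython releases and BINARY_OP on 3.11+, which is itself a reminder that bytecode is a VM-internal detail, not a hardware one.)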

Huawei wrote their own AOT compiler[1] for Android (on ARM) so they seem to have the necessary expertise in this area.

[1] https://www.scmp.com/tech/innovation/article/3022747/heres-w...

There's already good support in the Linux world. Hardware support is something Linux does very well, IMO.

FreeBSD seems quite proactive with the support.

Is there a comparison between RISC-V and Power? IBM recently published the POWER instruction set.

I looked for MindSpore and only found references to MindSpore Studio. Anyone have a link to a git repo and docs? I saw that it also supports CPUs in addition to their custom hardware.

TensorFlow and Keras have such a large first mover advantage, the custom chips and MindSpore would have to be very efficient and inexpensive to make real inroads.

Is it just me or does this read like copypasta from a Chinese -> English translated press release?

From TFA:

In a similar vein, Huawei has launched MindSpore, a development framework for AI applications in all scenarios. The framework aims to help with three goals: easy development, efficient execution and adaptable to all scenarios. In other words, the framework should aid to train models at the lowest cost and time, with the highest performance per watt, for all use cases.

Another key design point of MindSpore is privacy. The company says that MindSpore doesn’t process the data itself, but instead “deals with gradient and model information” that has already been processed.

I am glad to have lower prices and more healthy competition

By moving away from ARM, mostly designed in the UK and with Japanese owners? Sounds like they're really just seeking general independence.

The ARM that is following US sanctions against Huawei due to its large presence in the US and reliance on what it says is "U.S.-origin technology"?

Whenever I hear the horrors of the trade war cast in a positive light, I wonder if this is Russian propagandists in action.

How can war be good? Delighting in the suffering of everyone?

Every ‘war’ that doesn't actually involve bloodshed can be good, because competition has been the spur of amazing human endeavors throughout history. Sometimes people need motivation to try something hard...

There is a legal/political artifact in this case: Intellectual property.

If IP didn't exist, then this wouldn't be good news, because we would always have had a competitive market in the first place; we wouldn't have proprietary ISAs.

This trade war just happens to interact with that artifact in a positive way: fostering competition in spite of IP.

Trade war is not war
