These parts aren't much to get excited about today, but they've finally given China a reason to subsidize local efforts, which in the long term could result in an ecosystem that can take on the US design giants.
This applies to both countries.
Look at electronics: yes, China has tons of people building phones, but Chinese companies get about $20 on each iPhone; other parts of the supply chain get dribs and drabs, and Apple pockets a couple of hundred bucks, almost all of the profit. Trying to do all that in the US wouldn't add much to GDP and would instead take people away from working on higher-value goods. So for a wealthy country the tariffs not only don't help, they hurt.
Conversely, US restrictions raise the value of Chinese companies working their way up the value chain. Before those restrictions were in place it simply wasn't worth designing, say, a new chip; that takes a long time, and while you're doing so other companies are smoking you by selling low-cost chips designed elsewhere. Essentially the US was sucking the oxygen out of its competitors' market... but these tariffs stop that.
It's the same principle as why the US wouldn't want its NATO partners spending too much on defense: essentially, bribing them not to have much of a military is cheaper than getting entangled in a future war on the continent. Again, trying to pressure them to spend more is a very expensive move in the medium and long term.
And in the NATO and Japanese cases the "bribe" is paid in kind: you get what you are asking for (the US builds up its military and provides a security guarantee; those countries don't have to waste money on military expenditures and in exchange get a higher standard of living than the US). I suppose that's peace through strength.
> Second, how are we ever going to compete when we keep sending them the designs for our latest gadgets?
Sounds like the arguments against free software (AKA Open Source) I heard in the 80s and 90s.
The American worker isn't going to build these low margin products anyway; the design is where all the margin is.
So I can see RISC-V being ideal for applications that are not user facing or don't require fancy graphics -- servers, network hardware, microcontrollers, etc. But it would be a tough call to put one in a phone, tablet, or laptop.
This would typically be an impediment, but considering they're blackballed already, how beholden are they to patents?
Block imports? Think that's being done already.
Requiring US companies to not do business with them? That's being done already.
At this point, they just have to ride out the US/China trade war, and then when things are being negotiated, I imagine Huawei will be given a "turn a blind eye" pass. They're gimped in some ways, but in others they've probably never been more free.
As an aside to that, how do foundries know whether their clients' designs infringe on patents? Is there some sort of automated system that goes through the design files they are sent looking for areas of silicon that may contain infringing IP blocks?
Realistically, no. The only two competitive, bleeding edge foundries are TSMC and Samsung.
I'm going to venture a solid guess that no such system exists in America simply because possessing the IP to match potentially infringing IP against would be problematic for the companies it's designed to protect.
Intel: "Hey GlobalFoundries, you're not one of our suppliers but here's the schematic for our latest chip. Please make sure nobody takes it!"
And I can all but GUARANTEE that doesn't exist in China. Unless of course it's to pick up any IP they might have missed when they stole it the first time.
As for Huawei, the fact that the bans and sanctions tend to be fluid and negotiable despite being called a national security issue kind of puts the "national" part into question. Someone certainly benefits.
And consider that the media does a great job of painting half of the picture. The other half is filled with the complete absence of articles related to US companies trying to steal secrets. The plausible explanation is that the media is heavily biased (in all honesty, it's not like they could publish articles on active operations). In this context, does it seem fair to judge having only half the information?
The decision to put Iran on a ban list, as I wrote very clearly.
> reports regarding Huawei attempting to steal trade secrets
This is called a "national security" issue but doesn't appear to be treated as such. Which means the threat is not taken seriously or there is no threat.
> I don't see the same reports coming out of US companies in regards to Chinese companies
I'm not sure I understand. The reports about Huawei came from US companies. But in case you were wondering why there are no reports about US companies attempting to steal trade secrets there are 2 possible answers: 1) US companies never spy or 2) US media never talks about it. Which sounds more plausible to you? And if it's 2) I'll ask again: is it fair to form an opinion having only half of the picture?
The ZDNET story is about a Huawei engineer ... wait for it ... asking someone at Foxconn about what components Apple uses in its smart watches.
The CNBC story is about Huawei and a tech company founded by one of their former employees suing one another, each alleging the other stole trade secrets. These sorts of disputes happen all the time.
You'll notice that nowadays, there's tons of reporting on every minor IP dispute or claim involving Huawei. At the same time, and I'm sure this is completely coincidental, the United States government is trying to paint Huawei as a "bad actor," so that it can be used as a hostage in the trade war.
If we're all just gonna take our marbles and go home, I guess there's nothing really stopping any of us from just making our own marbles.
Still pretty dumb to make your own marbles though. But I get your point there.
I think there is room for an open source straightforward GPU based on the standard principles. Although maybe there is patent licensing between all the parties that I am not aware of.
What? As far as I understood it, CPUs had plateaued outside of efficiency gains, while GPU performance still follows Moore's law. Generational benchmarks on UserBenchmark show big leaps with every new architecture, whereas CPUs from 7 or 8 years ago are only marginally slower (single-core) than modern ones.
That is because GPUs scale very well with transistor count: throw in more transistors and memory bandwidth and you are good. On the CPU side you could double the transistor budget and get only a marginal improvement in IPC.
IPC improvements are still happening, which is why Intel has been on 14nm for years and still makes CPUs every year that are a little bit faster, but they are hard to do and you only get incremental improvements.
If someone could figure out a way to write regular software that easily parallelizes to hundreds of cores, CPUs could also get the same leaps in performance that GPUs have been enjoying. It's a software problem more than it is a hardware problem.
Amdahl's law. And while we have demonstrated that parallelised code can speed things up, generally speaking software performance still has lots of low-hanging fruit without even going deep into the parallel and concurrency world; it is just a matter of cost whether it is worthwhile to improve or fix it.
Now, GPUs, unlike CPUs, are designed for running many computations simultaneously; the catch is that most operations cannot be trivially parallelized. Matrices and linear algebra can, for obvious reasons: a large grid of numbers is easier to process all at once than with a for-loop. Most problems are not like this. The next step for computing may be FPGAs, digital circuits that can be reconfigured on the fly. Certain new Intel CPUs have this, and most processors have a basic version in the form of microcode. Whether this will work out, I do not know. FPGAs are also better suited to trivially parallelized tasks than to your usual CPU-based workload. Theoretically, in the future you could perhaps spend 20 minutes compiling Minecraft or Factorio to an FPGA bitstream during the installation process, though more likely only the processor-intensive bits would get optimized. When you load the game, part of the chip gets reconfigured for it. I believe this will benefit long-running processes more than short ones.
Now I am no subject domain expert so somebody with more insider experience should elaborate on this.
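To illustrate the "trivially parallelized" distinction, here's a minimal Python/NumPy sketch (purely illustrative, not a GPU benchmark): the first operation is element-wise, so it maps cleanly onto thousands of GPU lanes, while the second has a loop-carried dependency, so each step has to wait for the previous one.

    import numpy as np

    a = np.random.rand(1_000_000)
    b = np.random.rand(1_000_000)

    # Trivially parallel: each output element depends only on the matching
    # input elements, so the work can be spread over thousands of lanes.
    c = a * b + 1.0

    # Loop-carried dependency: every step needs the previous result,
    # so the iterations can't simply be handed to separate cores.
    x = np.empty_like(a)
    x[0] = a[0]
    for i in range(1, len(a)):
        x[i] = 0.5 * x[i - 1] + a[i]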
- GPUs don't handle I/O other than writing to video RAM.
- GPUs don't handle interrupts.
- GPUs don't handle branching well.
For the first two, a lot of infrastructure that is part of the chipset and platform would have to be extended to each compute unit of the GPU.
Imagine a database on a GPU. It's not like 1.5K-2k+ cores that are super good at math can read and write your disk or disk array at once.
See Amdahl's law : https://en.wikipedia.org/wiki/Amdahl%27s_law
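For a rough feel of what the law implies, here's a tiny Python sketch (the 95% / 1000-core figures are made up purely for illustration):

    def amdahl_speedup(parallel_fraction, n_cores):
        # Upper bound on speedup when only part of the work parallelizes.
        serial = 1.0 - parallel_fraction
        return 1.0 / (serial + parallel_fraction / n_cores)

    # Even with 1000 cores, a program that is 95% parallelizable
    # tops out at roughly 20x.
    print(amdahl_speedup(0.95, 1000))  # ~19.6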
You can use Binet's formula or matrix exponentiation to calculate it without such a dependency.
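As a rough Python sketch of the matrix-exponentiation route (O(log n) matrix multiplies, no step-by-step dependency chain); Binet's closed form F(n) = (phi^n - psi^n)/sqrt(5) also avoids the dependency but loses floating-point precision for large n:

    def fib(n):
        # nth Fibonacci number via fast exponentiation of the Q-matrix
        # [[1, 1], [1, 0]]; Q**n = [[F(n+1), F(n)], [F(n), F(n-1)]].
        def mul(a, b):
            return [[a[0][0] * b[0][0] + a[0][1] * b[1][0],
                     a[0][0] * b[0][1] + a[0][1] * b[1][1]],
                    [a[1][0] * b[0][0] + a[1][1] * b[1][0],
                     a[1][0] * b[0][1] + a[1][1] * b[1][1]]]
        result = [[1, 0], [0, 1]]  # identity
        base = [[1, 1], [1, 0]]    # Fibonacci Q-matrix
        while n > 0:
            if n & 1:
                result = mul(result, base)
            base = mul(base, base)
            n >>= 1
        return result[0][1]

    print(fib(10))  # 55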
GPUs are fine, or even better, for a wide range of compute tasks (see CUDA etc.); they just end up being very slow at a subset of common tasks.
More powerful GPUs will happen as Moore’s Law continues for a few more years, but the free for all will soon be over.
This is true for the whole semiconductor industry, and the consequences will be very interesting, not necessarily in a good way.
RISC-V might make sense inside of the GPU accelerators though. GFX command list processing, DMA engines, and video codecs all make sense to be something like RISC-V.
That has nothing to do with the RISC-V ISA being the right choice for core functionality like shader cores.
That's how you know it's nothing but a marketing campaign to boost their stock price.
This is not popular to say, but it is accurate and my statements are sourced. Feel free to provide rebutting evidence if you disagree.
That's a very fun verdict
I don't understand the tech well enough to know how much of a fairy tale or hope and a prayer strategy this is. But calling it "seeks independence" is some incredibly positive spin which makes me suspicious of the claims.
If these other chips are so good, wouldn't you go ahead and migrate to them for the actual benefits rather than researching them as a backup plan?
There's tremendous inertia in apps, which is what gave x86 an effective monopoly in the '90s, even when superior options were available (hello, Alpha). Today the situation is actually better, and with RISC-V having the same memory model as Arm, porting from an Arm version is way easier (Neon code notwithstanding), but it's obviously way less attractive than running existing code.
It's really not any different today than it was. The memory model being similar is kinda nice, but you still need to convince the world to compile for another arch. And you need to make a CPU good enough to justify the recompile and ongoing multi-arch maintenance.
Today, the MAJORITY of servers run Linux, even on Microsoft's cloud platform, and a significant part of that is running primarily open source software (e.g. "LAMP"). If, say, RHEL supports your platform, it's fairly easy for most to move.
Yes, it's not easy to migrate, but it is _way_ easier than it used to be. Now, non-server applications are a different story.
IP law is harmful to society.
I think you may be reading intent into what "seeking independence" means that is not there.
Given the risk of being kicked out of that ecosystem, the scales tip in favor of this new technology.
Maybe it is possible for Huawei, and Chinese companies in general, to build a successful alternative without using American technology (for the purposes of this post, I'm defining successful as equivalent to their American-tech-using competitors).
Maybe it's not possible.
What the actions on Huawei have done, however, is ensure that if that possibility exists, very significant chunks of Chinese resources will be devoted to finding it.
It appears China had decided this wasn't completely necessary, and was happy to rely on American companies for software and core hardware. But that is obviously no longer true, and the biggest long-term effect of these actions is Chinese state resources being poured into directly combating and threatening American tech's competitive position.
And seeing that the tariffs and capricious actions have not been limited to China, it wouldn't even be surprising if other nations join in to help, especially the EU and/or India and Korea.
I'd say the precedent was set when Congress blocked China from joining the ISS, leading to China developing its own space station program.
Eh, RISC-V was developed at UC Berkeley. In the United States.
Don't get me wrong, I would love a RISC-V chip that is competitive with the ARM chips in smartphones, but I think we are quite a ways away from that for the time being.
RISC-V is most successful in microcontrollers, which do not have to be state-of-the-art fast; they just have to work reliably.
Assuming the best performing BOOM chip refers to the 2018 HotChips tapeout, that chip operated at 1.0 GHz. The lowest speed Ivy Bridge Core/Pentium (a design getting close to a decade old, FWIW) was 2.5 GHz. This means on a core-to-core basis Ivy Bridge is 2.7x more performant. This ignores the benefit of the additional core count of the Core series of Ivy Bridge procs.
Today's 9900K is also around 40% more performant than the comparable Ivy Bridge (4960X). That would suggest the performance gains modern Intel design holds over BOOM is closer to 3.8x on a core-to-core basis.
Being within an order of magnitude performance-wise is impressive given that BOOM is mainly a one-man show. That said, having about a quarter of Intel's present chips' performance doesn't strike me as particularly "close".
I appreciate and understand the discussion around IPC and how BOOM is a significant improvement on the state of the craft. I cut my early programming teeth by developing PPC and SPARC versions of my business' embedded software products. I'm a longtime fan of alternative architectures and keep up with the industry through attendance at events such as Hot Chips. I am familiar with the terminology here and am not disputing it or the findings of the benchmark.
My point was that using IPC in this way obfuscates how far these architectures have yet to go to be competitive in certain markets. Said market competitiveness was the original discussion topic up-thread.
I agree that being able to build something that runs at speed is equally important, but IPC does have an exact meaning.
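To spell out the relationship: delivered per-core performance is roughly IPC × clock frequency, so (using made-up numbers) a core with 1.3x the IPC of a rival but a third of its clock still ends up at well under half the throughput.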
Most of that relative performance gain is not from increased speed but from the decreased speed of older processors due to Meltdown (and Spectre) mitigations. In the past, the 4960X and its contemporaries were much faster.
Process tech is slowing, so I think that costs of the best processes will go down and we'll see more competition for CPU designs.
In the book the authors compare the expressiveness (and thus code density) of code written for the RISC-V ISA with the instructions for x86, ARM and MIPS. They show that the RISC-V ISA is able to give ARM a good fight in terms of IPC, and that it beats MIPS (mainly due to MIPS's delay slots, which can't be filled in many real-world cases).
They also show why the RISC-V ISA should let implementations scale better in clock speed compared to ARM. This is the kind of stuff Patterson has been doing research on for the last 3-4 decades, and it is quite interesting to see how that experience has guided the design of the RISC-V ISA.
But, I, for one, enjoy watching it tremendously (it's like "soaps" for us tech folk).
Just by going with RISC-V they eliminate the issues with the Intel management engine.
I'm curious what Huawei will do next.
It is often used by academics for researching really low level stuff.
With RISC-V, it is reasonable to expect multiple vendors to provide CPUs. There is a very good case to expect non-backdoored chips.
That's true, but they do it in a way that requires minimal changes. In terms of the Management Engine, that means not including the AMT module (software) but keeping the rest of the ME (hardware). After all, it also does some essential things like power management and the bootloader. Removing it completely would also mean re-implementing that essential functionality.
The best you can hope for if you want to use mainstream parts is something like me_cleaner, where the Management Engine is only there for boot but is disabled afterwards. Something without a Management Engine would certainly be possible, but it would be limited to niche products like the Librem.
I don't share this forecast. I think many private customers would view a machine without the ME and its capabilities as a plus for a product when the marketing is done right, especially since most don't care about those capabilities anyway.
A lot of people say they value privacy, but when push comes to shove, they'll willingly trade privacy for convenience and cost. See: any facebook/instagram user, or iOS users who use google maps. iOS/Mac is marketed as the private phone/computer, but their market share shows how little consumers are willing to pay a premium for privacy (except in the US where iOS share is surprisingly high). You can try to convince consumers otherwise, but in truth, a management engine with backdoors/0days isn't really applicable to most consumers' threat model. The risks from social engineering attacks or unpatched software is a much greater threat.
RISC-V will bring much needed diversity to spaces currently dominated by two titans.
High-performance microarchitectures are never going to be open source. They cost hundreds of millions to design, and a design stays relevant for only 4-5 years.
People love to come up with fantasies and try to work backwards from them, but sometimes you really can't get there from here.
You have a surplus of condescension and a lack of imagination.
They didn't until now, but the US government just told them that commercial OSes with a connection to the US cannot be trusted, so they might very well be rethinking that.
Even today, with hardly any hardware available, there are (to various degrees of completion): Debian, Fedora, Slackware, FreeBSD, and seL4.
Ideally, once there is a JVM that runs on RISC-V, those apps are ported for free. An OpenJDK port is underway.
I have no idea how much work it would be for Android to port its APIs, but Google certainly has the manpower if this is something they want.
I got one interview out of it, but I feel like maybe the Foundation is the right place for that. They'd have to delegate the hiring and management to a member, but I think it could work.
Huawei has their own VM for Android, so they would likely port that for their effort.
But the compiler (OpenJDK in this case) still needs to be ported, no?
EDIT: I think we are talking about two different things. The Java compiler (which emits Java bytecode) could be ported for convenience, but that is not necessary. The compiler which compiles the actual JVM into an executable doesn't need to be ported either, but it does need to have support added for emitting RISC-V machine code.
TensorFlow and Keras have such a large first mover advantage, the custom chips and MindSpore would have to be very efficient and inexpensive to make real inroads.
In a similar vein, Huawei has launched MindSpore, a development framework for AI applications in all scenarios. The framework aims at three goals: easy development, efficient execution, and adaptability to all scenarios. In other words, the framework should help train models at the lowest cost and in the least time, with the highest performance per watt, across all use cases.
Another key design point of MindSpore is privacy. The company says that MindSpore doesn’t process the data itself, but instead “deals with gradient and model information” that has already been processed.
How can war be good? Delighting in the suffering of everyone?
If IP didn't exist, then this wouldn't be good news, because we would have had a competitive market in the first place; we wouldn't have proprietary ISAs.
This trade war just happens to interact with that artifact in a positive way: fostering competition in spite of IP.