EEtimes 2002 "Infiniband on the Verge of Broad Adoption" whoops :-)
But more seriously, I think having a license-free CPU core with software support is a huge win: it will enable people like TSMC and GF to make 'jelly bean' CPUs that can be low cost and high volume. But they won't be that much different from the low-end ARM CPUs, which are basically the cost of packaging these days anyway.
The market forces in a 14nm (last year's process node) world are pretty interesting. 90% of the cost is in the dicing, testing, and packaging of the parts.
So the other market force is the Western Digitals of the world who will make RISC-V embedded SOCs with exactly the right set of peripherals they need to make a billion disk drive motor controllers. But most people won't see those chips in the marketplace. And there probably won't be a 'design to order' chip house that will make small volumes of these things.
Imagine you are a Microchip or a ST Micro, what does RISC-V do for you? It lets you avoid the few cents you pay for the ARM license per chip, does it let you differentiate any more? Look how well that worked out for the AtMega16 for Atmel.
So that is the embedded market, kind of a wash.
But what about "bigger" systems? Laptops or desktops or servers? Do the pareto analysis on those systems, compare an ARM SoC to a RISC-V one. The costs are in memory, support chips, PCBs, tooling etc. Not the CPU license.
Will it let people build chips for phones like Apple does with their own GPUs and microarchitecture? Sure, and at less cost, but what then does that give the average consumer? A cheaper phone with market features? Ok that's a win but they won't care it is RISC-V versus ARM v8 or what not.
Bottom line is that I think it is great to have the ability to create computers that are not beholden to a certain vendor but I don't see the vector for getting them into general distribution where consumers actually benefit from the change.
When somebody decides to invest into large volume laptop/desktop/server microarchitecture to compete with Intel and AMD, nothing prevents them from adding their own proprietary ISA extensions. RISC-V allows classic embrace and extend tactics. RISC-V for Microsoft surface and RISC-V for Google Chrome may not be fully compatible with each other and RISC-V AWS cloud processors.
It is up to software distributors to decide whether it's worthwhile to use those extensions; provided they are even allowed to run their own software on the thing to begin with (which was not the case with Surface RT anyway, extended ISA or not).
> When somebody decides to invest into large volume laptop/desktop/server microarchitecture to compete with Intel and AMD, nothing prevents them from adding their own proprietary ISA extensions.
Nothing prevents AMD and Intel from doing this either, and they do it all the time! You just don't notice because they agreed privately to cross-license the extensions.
And at the end of the day, the parts of the platform that you'd use for a standard operating system (notwithstanding new tagged memory or other exotic architectures, which I'd argue are a good, innovative form of incompatibility) are fully standardized already on RISC-V. If Microsoft wants to make their next version of NT on RISC-V rely on Qualcomm-proprietary instructions, that's their prerogative.
Surface RT allowed Windows Store apps, including native ones.
In this hypothetical proprietary RISC-V scenario, you might not even be able to run unsigned code on the device, and so nobody is going to bother supporting its weird instruction set.
The zoo of endianness and extension support did nothing to slow ARM's rise.
And the extensions could be tested for based on ISA version. Most of ARM's history is non-divergent.
They want to use their proprietary extensions, but often I don't really care about that, and if they pay for developing or updating all the open source software so it runs with their extensions, then that's fine also.
But it could be a step towards a "GPL for hardware" type of shift. Someone puts in the work or the money to do a core which is competitive on some important metric (power, cost, etc.), and licenses it at no cost but under the terms that if you make changes you have to publish them and under the same license.
Then some people use it because it's good enough and free-as-in-beer, but if any of them improve it then it gets better. So then more people use it, until it's the Linux of hardware.
Moreover, if you have to publish the changes then you have to document the changes (equivalent of releasing source code), which means hardware that isn't a black box will gain a competitive advantage over hardware that is.
It's not a hacking-VHDL-in-your-basement type of thing where everyone can add new stuff and it just works together through a commonly agreed API. All changes must run through a long chain of verification and testing, from functional models to physical placement, testing, and verification.
But more to the point, most of Linux isn't developed in basements either. Someone like Google/Facebook/Amazon/Microsoft decides it's worth their resources to make a more power efficient chip for their datacenters and more valuable to get further improvements from third parties than to try to sell it externally at commodity margins, or that "commoditize your complement" would be a good thing to do for cell phone chips, and now you've got a billion dollars in funding.
IMHO a syndicate with clever licensing around RISC-V infrastructure would be the best idea.
The clever licensing part:
1) Mandatory licensing: licensees must license to everyone under the same conditions. Licensees can't refuse to license.
2) Pricing model: licensees periodically announce valuations at which they commit to sell their IP, and must sell their intellectual property for that value to anyone who is willing to commit to smaller license fees. There can be a periodic auction each year.
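The pricing model in (2) is essentially a self-assessed-valuation ("Harberger tax") scheme: declaring a high value protects you from a forced sale but raises your fees. A toy sketch of the mechanism, where the 5% fee rate and the vendor names are purely illustrative assumptions:

```python
FEE_RATE = 0.05  # illustrative annual license fee, as a fraction of declared value

class CoreIP:
    """Toy model of one piece of syndicate-licensed RISC-V IP."""

    def __init__(self, owner, declared_value):
        self.owner = owner
        self.declared_value = declared_value  # self-assessed valuation

    def annual_fee(self):
        # A high declared value protects against forced sale,
        # but raises the fee owed to the syndicate each year.
        return FEE_RATE * self.declared_value

    def buy(self, buyer):
        # Anyone willing to pay the declared value can take the IP;
        # the owner cannot refuse (mandatory licensing, point 1).
        price = self.declared_value
        self.owner = buyer
        return price

ip = CoreIP("VendorA", declared_value=1_000_000)
assert ip.annual_fee() == 50_000
assert ip.buy("VendorB") == 1_000_000
assert ip.owner == "VendorB"
```

The point of the mechanism is that honest valuations become the equilibrium: overvaluing costs you fees, undervaluing invites a buyout.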
Those companies are the ones with physical hardware.
But it's also not true that they'd be the only ones to make money. If you lower your supplier's margins on hardware and then pass half the savings on to your customers, you get both higher margins and higher volumes. But your customers still get lower prices, which means they make more money too, on top of the transactions that it made feasible that weren't previously.
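The arithmetic behind that claim is worth making concrete; all the dollar figures below are made up for illustration:

```python
# Before: the supplier charges $40 for the chip platform, you sell at $100.
supplier_cost_before = 40.0
price_before = 100.0
margin_before = price_before - supplier_cost_before   # $60 per unit

# After: open hardware cuts the supplier's take to $20, and you pass
# half of the $20 savings on to customers as a lower price.
supplier_cost_after = 20.0
price_after = price_before - 10.0                     # $90 per unit
margin_after = price_after - supplier_cost_after      # $70 per unit

assert margin_after > margin_before   # your margin rises ($70 > $60)
assert price_after < price_before     # customers also pay less
```

And a lower price typically means higher volume on top of the higher per-unit margin, which is the "everybody wins except the old supplier" outcome described above.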
What makes it possible to design small volumes of custom ASICs for a reasonable price is an old and well-known process, increasing the error margins (and sacrificing performance), or using existing modules. Simulation mostly works and can be trusted, yield is predictable, etc.
Android already removed their dependency on GCC, following Apple's footsteps, and depending on how Fuchsia turns out, eventually the Linux kernel as well.
I know about the Fuchsia project, but so far it can't compete with Linux or other established OSes.
"Android 8.0 and higher support only Clang/LLVM for building the Android platform. Join the android-llvm group to pose questions and get help. Report NDK/compiler issues at the NDK GitHub.
For the Native Development Kit (NDK) and legacy kernels, GCC 4.9 included in the AOSP master branch (under prebuilts/) may also be used."
"Removed GCC and gnustl/stlport. Added lld."
There are also a couple of Linux/Clang Conferences where Google goes through the kernel and clang changes they have done to accomplish it.
Android might have the Linux kernel under the hood, but it isn't Linux as many know it.
Not like it matters. They're replacing even that. See Fuchsia.
And that's a good thing: Fuchsia seems to have a better design (microkernel, multiserver).
Probably some leftover thinking from proprietary products that tried to apply license restrictions on the compiled output.
"GCC Runtime Library Exception"
This document was also the inspiration for the Classpath exception in OpenJDK.
> "In the FOSS world the license of the compiler has nothing to do with the license of the compiled software".
Only in the most extreme cases.
1) Battery life isn't dominated by run current for the vast majority of embedded devices. Sleep current dominates (most cases) or peripheral current dominates (RF transmit/receive, for example). You try to dial down the number of times you turn on until it's below the amount of energy you burn while off.
2) RAM is expensive; flash not so much. Code space isn't the issue--10% almost certainly not. Correlated: this is why I expect you really won't see 64 bits making a lot of inroads into embedded--doubling RAM consumption is expensive on embedded.
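The trade-off in point 1 is easy to see with a back-of-the-envelope average-current model for a duty-cycled device; all the numbers below are illustrative, not from any datasheet:

```python
SLEEP_CURRENT_UA = 2.0   # deep-sleep current, microamps
RUN_CURRENT_MA = 8.0     # active CPU + radio current, milliamps
WAKE_MS = 50             # time awake per event, milliseconds
WAKES_PER_DAY = 24       # one wake per hour

def avg_current_ua(sleep_ua, run_ma, wake_ms, wakes_per_day):
    """Average current over a day, in microamps."""
    awake_s = wakes_per_day * wake_ms / 1000.0
    day_s = 86400.0
    # charge spent awake vs. asleep, in microamp-seconds (microcoulombs)
    awake_uc = run_ma * 1000.0 * awake_s
    asleep_uc = sleep_ua * (day_s - awake_s)
    return (awake_uc + asleep_uc) / day_s

print(f"{avg_current_ua(SLEEP_CURRENT_UA, RUN_CURRENT_MA, WAKE_MS, WAKES_PER_DAY):.2f} uA average")
# prints: 2.11 uA average
```

With these numbers the sleep floor dominates the budget: halving the run current barely moves battery life, while halving the sleep current nearly doubles it, which is exactly why run-current efficiency is rarely the deciding factor.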
In other words: not only is it better in terms of royalties and ecosystem, it is also better at everything else too. Isn't that terrific?
And here it is compared against a classical CISC platform and a hybrid one highly optimized for code size, and winning. Which makes RISC-V even more impressive than just any non-optimized design beating the incumbents.
Core advantage? I will let others debate that. Significant: surely.
REX prefixes really killed the space efficiency of the x86 architecture.
There's some indication that density should have increased somewhat since then, but I haven't looked at it myself.
It's not the kind of compression you might be thinking of. It's just 16-bit "shortcuts" for some of the common 32-bit instructions. The impact in gate count should be minimal. In a lot of these applications you'll have the code in on-chip non-volatile memory which means reducing code size may also reduce chip area.
I think with relatively little increase in gate count you could also make some sequences of two 16-bit instructions execute simultaneously, which could yield nice performance improvements for micro-controller cores.
Also, you might be surprised at how "big" many micro-controllers are becoming these days.
Decoding the "compressed" instructions is actually pretty straightforward, it doesn't add much complexity to a design. ARM Cortex M0+/M3/M4 implements a similar (but more complex) "compressed" instruction set called Thumb, and comparable RISC-V cores available from SiFive are smaller, faster, and more efficient.
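To give a feel for how simple that expansion is, here is a sketch (mine, not from any of these cores) of decoding one RVC instruction, C.ADDI, into its 32-bit ADDI equivalent, the way a decoder front-end would; field positions follow the RISC-V compressed-extension encoding:

```python
def expand_c_addi(half):
    """Expand a 16-bit C.ADDI (quadrant 1, funct3=000) into the
    equivalent 32-bit ADDI instruction word."""
    assert half & 0b11 == 0b01 and (half >> 13) == 0b000, "not a C.ADDI"
    rd = (half >> 7) & 0x1F                                   # rd/rs1, bits 11:7
    imm = ((half >> 2) & 0x1F) | (((half >> 12) & 1) << 5)    # imm[4:0] | imm[5]
    if imm & 0x20:                                            # sign-extend 6-bit imm
        imm -= 64
    # 32-bit ADDI layout: imm[11:0] | rs1 | funct3=000 | rd | opcode=0010011
    return ((imm & 0xFFF) << 20) | (rd << 15) | (rd << 7) | 0b0010011

# c.addi a0, 1 (0x0505) expands to addi a0, a0, 1 (0x00150513)
assert expand_c_addi(0x0505) == 0x00150513
```

It's mostly wire re-routing and a sign extension, which is why the gate-count cost of supporting the compressed set is so small.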
In a very small RISC-V core by the venerable Clifford Wolf called PicoRV32, you can look at the complexity introduced by configuring it with the COMPRESSED_ISA option.
> ...and program memory is at a premium in those places.
Program memory is one thing, but on processors of all sizes, code size has a big impact on performance in common types of program.
It gets harder for more complex designs though.
But for cases where you want to replace a Cortex M2, the area increase will be trivial.
If you compare GCC/ARM with GCC/RISCV the difference isn't too great, but even the IAR ARM compiler gives you noticeable improvements over GCC/RISCV. And ARM's compilers are actually quite good with respect to code size; MUCH better than GCC/RISCV (or even GCC/ARM).
That being said, were I to add some custom instructions, I would COMPLETELY prefer to do it with RISC-V than with ARM.
 Though the gcc/riscv toolchain is getting better pretty quickly.
RV32C and ThumbV2 have equal code sizes.
I'm not sure what happened with the ATMega. As far as I can tell, Atmel basically stopped developing AVR chips almost entirely around 2006, just selling the old ones; presumably they were having a hard time competing. With Cortex-M0s like the LPC2100? With PICs? With bargain-basement 10¢ Chinese microcontrollers made with obsolete process nodes? I'm not sure. The fact that they eventually sold the AVR line to Microchip makes me suspect it was PIC, but nowadays the chips that look like good AVR alternatives to me are almost entirely 32-bit ARMs.
The reasons they look like good alternatives, though, don't have a lot to do with "differentiation". The AVR was attractive because, as an 8-bit chip, it could be used in places where you couldn't afford a 32-bit or even a 16-bit chip, and it used less power. But then fabrication processes improved to the point where a 32-bit chip costs the same as an AVR, uses less power, runs far faster, and has more memory. Maybe if they'd kept developing the AVR that wouldn't be true — or maybe at that price point almost all the cost goes to dicing, testing, and packaging, which is what you seem to be saying.
> But what about "bigger" systems?… Not the CPU license.
I feel like the major cost of the CPU license is not the money you pay Intel but the built-in IME backdoor it ships with.
I would add, though, that there are a couple of places in the low end where one or two cents seem to matter. Or at least product managers think that it matters.
In the mid-range, RISC-V providers (certainly SiFive) are pushing the idea that it's easy/easier to add custom logic to your RISC-V based die. I'm not able to judge whether that's really true, but since they're starting with a clean slate in terms of interface logic then maybe. I can't imagine it being harder to add your own tile to a RISC-V design than an ARM design.
The tools seem to me to be pretty primitive, though. IAR says they'll have a compiler soonish. GCC still emits some "not completely great" code. (I mean, don't get me wrong, it's not horrible, but compared to the mature toolchains like ARM, Intel & MIPS, it's kind of bad.) Though it's certainly getting better over time.
If you look at the architecture though, it does seem a bit easier to implement than ARM and there's more to open designs than just the "free as in beer" argument.
RISC-V seems to be appealing to people who have a strong desire to "play around with" an architecture or a solution or are financially motivated to add logic that's somehow hard to add to ARM.
Isn't the biggest benefit of Apple's hardware the user experience enabled by top to bottom ownership of the software and hardware stack? I would imagine being able to replicate that without the enormous upfront design costs would be game changing, but I am not familiar with the unit economics of CPUs.
Personally I think having a license free cpu will be an integral part in continuing to make FPGAs more and more viable, and I think we're going to start seeing them in places that no one ever really imagined 15 years ago.
There once was a time when Linux was this teeny little player. Many people thought it would never amount to anything. After all Microsoft was the big player, support and tooling for it was everywhere. It was possible to license it for embedded use. The only benefit of Linux was freedom, and that can't possibly be enough of a benefit.
So the answer is: it really depends.
I think RISC-V will quickly infiltrate the invisible on-chip microcontrollers. The ones that manage power regulation, SDRAM calibration training, etc. There is very little friction there.
Then it will slowly enter low cost microcontrollers where cost is absolutely essential.
The high-end will be IMO negligible for years to come.
For full disclosure, I work for Red Hat and am keeping an eye on RISC-V for servers, and I hope it does succeed but there's a mountain to climb and lots of ways to screw up.
For example, gcc was pretty much ignored until Sun started the trend of selling UNIX SDK tooling instead of bundling it with the OS.
It started off as a Unix-like toy/learning-platform that people could run on devices too cheap to run Unix.
Then it evolved into a Unix that people could use on devices too cheap to run real Unixes.
It took the best part of a decade to make it competitive with the other Unixes. Which just reinforces your point, I guess.
However, its development wasn't done in the open and it was not "free" (in the FOSS sense), so its usage in a specific setting (e.g. commercial) was not possible. I guess Linux's open license played a huge role in its adoption, and not just the fact that it was free as in "beer" (compared to expensive traditional Unix systems). There might be disagreements about the technical choices made (see the legendary conversation about kernel architecture), but in terms of features it surpassed Minix within a short time (and today it sets the state of the art for commercial Unices).
It only took a few hundred dollars worth of hardware to use Linux back in the day, and a windows license was a significant percentage of that.
To use a chip architecture? It takes a design team and booking of fab time. I.e. 10s of millions of dollars.
Non-copyleft UNIX clones weren't so lucky.
Even if RISC-V succeeds in the market, there isn't any guarantee that we won't have a plethora of incompatible extensions.
Dealing with ARM is not just a percentage cost, but lawyers and time. If you are a small company, that is very valuable.
The choice WD made is not about cost but about an architecture that is adaptable. The whole argument is that you DON'T have to be Apple to make it worth it to get a custom chip.
Also, you will have more vendors to choose from when doing a product.
> Bottom line is that I think it is great to have the ability to create computers that are not beholden to a certain vendor but I don't see the vector for getting them into general distribution where consumers actually benefit from the change.
End consumers never benefit from any individual technology. They buy products that work. RISC-V is more important for the overall industry, and especially for those interested in open source.
As long as we base everything on proprietary ISAs, we cannot have open chip projects that run lots of common software, and that stops open silicon in its tracks, and with it open hardware as a whole.
RISC-V makes an open source hardware culture legally possible and practically viable.
This stuff is still in flux, but go into the working groups for the privileged architecture and security and you will see all these discussions.
RISC-V was never and will never be designed with the goal of eliminating royalties.
Edit: Intel could use it for their management processor just for the irony :)
Still, I believe that bodes well for more general adoption: once it starts replacing ARM on the OEM side of things, it will have a positive effect on hardware prices.
The open source firmware people, however, are doing a lot: coreboot, u-boot, linuxboot and so on. There are discussions ongoing about how to design the low level interfaces.
Finally, UEFI as the OS interface is actually being embraced by u-boot and coreboot as the standard OS loader. That's because they have realized that the services provided by UEFI actually do solve many of the problems users of u-boot/etc systems experience. For one, it standardizes the update process for the actual firmware, as well as providing services/controls for managing the OS boot process following updates/etc. It also has interfaces for plug-in cards (PCIe option ROMs) and many other features that turn out to be critical to building a generic computing device.
A specification can absolutely be a mess. Over-complicated for what is needed 90% of the time, and in the other cases it's also not optimal.
> Further, there is an open source mostly complete UEFI (tianocore) implementation.
Tianocore is not really complete for what you actually need, and most vendors are so far downstream that the advantages of real open source don't apply.
> That's because they have realized that the services provided by UEFI actually do solve many of the problems users of u-boot/etc systems experience.
I understand that. But there is a reason why Facebook, Google and other providers move away from UEFI.
> For one, it standardizes the update process for the actual firmware, as well as providing services/controls for managing the OS boot process following updates/etc. It also has interfaces for plug-in cards (PCIe option ROMs) and many other features that turn out to be critical to building a generic computing device.
The way the update process is implemented is incredibly sub-optimal and I have heard people from Intel agree that it is so.
The problem is that it creates an unnecessary parallel universe that is far more insecure, far harder to understand, and with far worse tooling.
Check out this talk by one of the people who wrote UEFI and he admits many of the issues: https://www.youtube.com/watch?v=1XDYORK2z_M
Then check out this by one of the Linuxboot people going into many of the existing problems from his perspective:
UEFI is a specification designed to allow a machine to boot and be managed by a multitude of OSes. That means, yes, it may be a bit over-complicated in places, but those over-complications tend to serve a purpose (or did). I don't think anyone imagines that UEFI is perfect, it's not, but tossing out u-boot or whatever as an alternative is extremely myopic, as u-boot doesn't really even provide enough firmware services for Linux in its current state, much less Windows, or some future OS not yet thought of. That is the point of UEFI: to attempt to fill the gaps in what is possible with a given platform without creating a wild west of incompatible formats and hacky solutions for every platform (which is the current state of u-boot/DT despite nearly a decade of work).
They have so many servers and different configuration needs that they need to boot their servers reliably, securely, and with integrity, and they need to boot a wide variety of different systems.
> UEFI, is
You didn't seem to watch any of the sources that I provided. I'm not making an argument for u-boot. The systems that I recommend as better than UEFI can do everything UEFI can do, and actually do much more and are much more flexible, not to mention far more secure.
> That is the point with UEFI, to attempt to fill the gaps in what is possible with a given platform without creating a wild west of incompatible formats and hacky solutions for every platform (which is the current state of uboot/DT despite nearly a decade of work).
You don't seem to know how UEFI actually works on servers. Each vendor uses tons of old, insecure, bloated firmware full of different drivers that are very badly maintained and don't get security updates. UEFI is at the point where, for a commercial server, there are more lines of code than in the Linux OS you are booting into. It's a total security nightmare and a horrible situation in terms of open source, as most of these things are closed source. The UEFI core might be open source, but even that is not actually used and tracked upstream; vendors all use their own forks.
UEFI is a completely separate layer that is its own OS, reinventing the wheel and putting a totally insecure ring under your OS.
Now, that said, you have various ME/BMC processors scattered about, and those are the ones that have frequently been exploited to great advantage. The real chuckle here is that most of the BMCs are running u-boot (or similar) firmware/OS stacks, which don't tend to be upgraded for the very reasons I pointed out earlier. So yes, your BMC gets owned over the network, and it manages to own the OS running on the main processors because it can inject things into the address space during any part of boot/runtime. But that isn't a UEFI failing, it's a failing of the BMC vendors, who don't have a clean way to audit/control the code being built into the images. If you look at OpenBMC, it's a yocto-based system. Which means, like android, the vendors are on the hook for assuring their system works and having ongoing development control of the upstream trees. That all works about as well for BMCs as it does for android.
These are separate sub-CPUs required by both platforms to boot and run. Both of these CPUs run proprietary, encrypted, otherwise-inaccessible and inauditable code that is always running and has full access to everything the CPU has access to.
The Intel ME in particular is part of Intel's offerings that allow remote access to a system even if the operating system fails--it's accessible remotely by design, and flaws have already been found in it. Only Intel can update the ME. Only Intel knows what's in the ME and what it's doing, same for AMD and the PSP.
I think people want RISC-V to succeed because it gives privacy-conscious individuals a chance to have a truly secure platform not 100% under control of a single company with 100% auditable code from each stage of the boot process.
The fact that we're stuck with this blob for a processor architecture just sounds so archaic when you learn about RISC-V.
Everyone should be excited for this!
I guess you could argue that you can build your more efficient hardware with an older, cheaper process and maybe still end up with something competitive.
I don't see the tablet market being served by RISC-V, what benefits does it bring?
Someone like WD getting onboard makes some sense but that's not exciting, they just want to save a bit of cash.
Again, being open means anyone can design one, and several free (as in beer) designs have popped up already. Anyone who needs a CPU in their system can pick one of these designs and use it "for free". Moreover, a company can design one such chip, and then outsource support for it to another company (or the other way around). Or even swap support companies if needed. The fact that RISC-V is open (and claimed to be patent free) means there is very little barrier to entry. Which means the market offers space for more companies working together and competing at the same time.
For lots of designs, the CPU is a commodity. You need it, but the specs are not that important, in the sense that you do not need the performance equivalent of the latest Intel chip. For those markets, having a proven design with lots of software support that is gratis is way more appealing. Another commenter rightly pointed to the "hidden" CPUs that are everywhere inside devices and gadgets. That's the market that I believe RISC-V is going to conquer quickly, because the "cost" of moving to another architecture is low compared to other segments where it is very expensive. Think of a PC or smartphone, which carry years and years of software developed all over the world and built against a specific ISA. Those markets are unlikely to move soon, if ever. However, the "hidden" CPU ones are easy: the washing machine software is built inside the washing machine company, which can probably rebuild their software against a different ISA in a couple of days, if needed.
Another important aspect is the software tools. It is costly both in time and in money to develop a high quality set of tools to support a specific ISA, i.e. compilers, support libraries, optimized libraries for your ISA, etc. Therefore, not all architectures receive the same "love". Some "niche" markets do, with lots of effort in the closed source space, which again means vendor lock-in. RISC-V, being open, is getting lots of attention from all the open source tools, and it's expected its support will be on par with other mainstream architectures like x86_64 or ARM.
One final note is that of developers and knowledge. The fact that it is free and anyone can experiment with it means lots of universities are turning their focus to it for research and education. As the new wave of engineers comes out of university with expertise in RISC-V, it's going to be easier to hire them, and their friction-less path is going to be towards RISC-V.
Fpga implementations start at $5.
Individuals “waste” both of those amounts of money all the time.
Similarly, MS offers free VMs to developers for Windows so that you can build and test against Windows without cost.
Intel has publicly asserted that their patents apply to emulation, but it hasn't hit the courts AFAIK.
On x86-64 there's a reasonable minimum supported set of instructions, and you only really need runtime CPU detection for the newer vector extensions.
There could be further, more advanced extensions that may not be in every processor, but that's just the same as x86.
The modularity in the ISA as it is now, is something that's more geared towards embedded devices or FPGAs, where you'll be compiling code specifically for that target anyway.
but... your point was probably more along the lines of "getting custom silicon is still going to need a market of greater than several thousand," and that's probably always going to be true.
You have to get the extension board if you want PCI-E, SATA, and M.2 connections, but it doesn't look like there are any available right now.
I'll check again in six months...
where can i buy one :O
If you want something cheaper, for around $20 there are a bunch of mini FPGA-based boards available on AliExpress, purpose-made to load a soft RISC-V core into them.