Also, is it possible to use only a handful of RVC-enabled binaries on a system that otherwise doesn't use RVC? If so, it might not be so bad.
Some people building Linux-capable processors decided to include RVC.
Some people with Linux distributions (Fedora, Debian), seeing that all known current efforts to build Linux-capable processors were supporting RVC, and seeing the large benefits it provides, decided to build their distros using RVC.
If someone wants a Linux distro that doesn't use RVC ... it's not a problem. The instructions supported are exactly the same. There are no portability problems. If a package builds successfully with RVC then it will compile without it, no problem, with no additional human work. Just change one configure flag when you build the compiler (to default to rv64g instead of rv64gc). The only difference is the binaries will all be 30% to 50% bigger.
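To make that concrete: the source never changes, only the compiler's default arch string does. A minimal sketch, assuming a riscv64 gcc or clang (which predefine __riscv_compressed when the C extension is part of the target arch); the same file builds unmodified for either target:

    #include <stdio.h>

    int main(void)
    {
        /* riscv gcc/clang predefine __riscv_compressed when the target
           arch string includes the C extension (e.g. rv64gc). */
    #ifdef __riscv_compressed
        puts("built for rv64gc: same instruction set, denser encoding");
    #else
        puts("built for rv64g: same program, just a bigger binary");
    #endif
        return 0;
    }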
Some more build machines will be required, and a few GB of disk space. These are trivial things.
It's absolutely nothing like the difference between, say, armeabi and armhf or the differences between armv4, armv4t, armv5, armv6, armv7.
You might find my 15 minute talk at the RISC-V workshop interesting on this subject: https://rwmj.wordpress.com/2018/05/21/my-talk-from-the-risc-...
The bit about RISC-V on servers starts around 5 mins in.
RISC-V has an extensible ISA so it's likely to be a mess by design. I'm a huge fan of RISC-V so I really feel bad saying this. It's telling that RV-IMAFDC is the base for Unix systems. It was originally IMAFD = G, but now GC is the standard. Then there's the V extension, which I think will be very important, and it already sounds like it may be fragmented (graphics and AI may be extra beyond vectors). But then there is space in the instruction encodings explicitly for custom extensions.
These variations are important, and the ability to customize in a standard way is too. I think it's going to be an amazing decade or two in the CPU space because of RISC-V, but there's also going to be a lot of churn IMHO.
Once open designs catch up to the state of the art and a real base ISA solidifies I hope a RISC-6 comes along with a more optimal encoding and things become really stable for a long time. But that's just my optimism getting out of hand.
The problem is that there is NEVER one optimal ISA for all applications. It makes absolutely no sense to have the V, T, J extensions in the new 'real base' because there are tons of systems that don't ever need them, and it would be a waste.
That's why the RISC-V foundation defines profiles for certain application domains. RV64GC is the logical one for Linux for the moment.
Also, you ignore that very often an ISA is not used in a mass-market product, but rather in a smaller, more specialized space. The custom extensions are very important for people like that. In other cases, the potential AI extensions for vectors, for example, will probably never show up in a standard profile for Linux.
I think what we are likely going to see is that RV64GC will remain the base level for the Linux profile, and eventually more advanced profiles with V and potentially others will emerge.
The whole point is that you no longer build everything around one ISA that is never optimal for everybody, but rather around profiles.
The "real base ISA" will be whatever set of options becomes widespread 10-15 years from now. In my mind it's entirely feasible (though not as likely) that the approach of using hundreds of minion cores (see Esperanto) to do graphics will become a common thing (since there is no free GPU and none around the corner). If that is the case, the V extension will become part of the "real base ISA" because everyone will want it. So far the A and C extension went from optional to necessary - it would be naive to think it will stop there. Also, as implementations improve someone may come up with a low cost (low area) way to implement DSP instructions. Maybe that becomes common because the low additional cost is a "why not?". That's part of my point too, the flexibility will bring innovation and that may lead to certain things naturally becoming part of most peoples expectations.
It looks like OpenGL has done something similar. They had a few specifications (ES in particular) for different classes of hardware, but as it became clear that certain features could be handled by the lesser hardware variants those became part of the standard.
You seem to again ignore that there are many, many applications that don't need or want a graphics card or SIMD of any kind.
What you are talking about is not the 'real base ISA' but the 'real os application profile'. By not hard coding these choices into the ISA you have much more freedom to create new profiles for future needs without changing anything about the ISA.
> So far the A and C extensions went from optional to necessary
That is not the case. It's simply false. There are tons of RISC-V chips that don't have either; some of them are even in production.
Before the Linux RV64GC profile was released, there simply was no defined official standard for OS applications like Linux.
> Also, as implementations improve someone may come up with a low cost (low area) way to implement DSP instructions
That already exists; it's the P working group.
> That's part of my point too: the flexibility will bring innovation and that may lead to certain things naturally becoming part of most people's expectations.
What you are missing is that 'most people's expectations' are limited to a specific application domain. Nobody will ever expect graphics on an IoT edge processor, so the default profile for IoT will not include that.
I'm not saying this exact scenario or the V one will come to pass, just that I expect the standard set of options widely in use will continue to change like it already has.
I would assume that the V, B, J and possibly P extensions will all end up in the most-used Linux profile.
However, I think RV64GC will still be the relevant fallback that most distros will offer to cover a very large range of chips.
Let's remember that there are BILLIONS of chips internal to companies building other hardware, and all of those should be RISC-V as well.
So if you have that ambition, you need a way to balance standardization and fragmentation. Profiles are a way to define standards for specific use types, primarily in order to do software standardization on top of those profiles.
I think your points in the video are important - or rather, should be important - but from the fragmentation of ARM it seems they are not as important to chip makers as they should be. However, ARM's fragmentation seems not to have been a serious impediment to its adoption.
I think, though, that going forward the solution may be to target higher-level machines (e.g. the JVM). This obviously is not realistic for the kernel, but for software that most people want to run on RHEL it would most likely be fine.
ARM has distinctly not been popular in the data center (despite years of work). That's for many reasons but one is surely the fragmentation of the platform.
At least for the server realm, ARM defined the "Server Base System Architecture" standard exactly to avoid fragmentation in an area where customers desire standardization.
Can I ask off-topic about the build farm you mention (approx 12:00)? Is build performance an issue? How long do typical builds take? Would a 5x performance improvement become a must-have feature?
Perhaps ARM actually did have a point with their "FUD" campaign against RISC-V:
(EDIT: in particular, consider point 3).
RISC-V does allow you to define other extensions in a methodical and detectable way, and we intend to detect those at runtime (as you would, for example, with something like AES-NI or AVX on Intel). However RVC impacts every bit of compiled code so there's no way to deal with it at runtime (something like that would make all binaries much bigger and would greatly complicate the toolchain).
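For the extensions that can be detected, the mechanism on Linux is the hwcap bits in the auxiliary vector. A minimal sketch, assuming the RISC-V Linux port's convention of reporting each supported single-letter extension as bit (letter - 'a') of AT_HWCAP:

    #include <stdio.h>
    #include <sys/auxv.h>

    int main(void)
    {
        unsigned long hwcap = getauxval(AT_HWCAP);

        /* The RISC-V Linux port sets bit (x - 'a') for each supported
           single-letter extension x: 'a' is bit 0, 'c' is bit 2, etc. */
        for (char ext = 'a'; ext <= 'z'; ext++)
            if (hwcap & (1UL << (ext - 'a')))
                printf("extension '%c' is available\n", ext);

        return 0;
    }

That is fine for choosing between, say, a scalar and a vector code path at runtime, but as noted above it can't help with RVC: whether compressed encodings are used is baked into every function of every binary at compile time.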
> The standard compressed ISA extension described in Chapter 14 reduces code size by providing compressed 16-bit instructions and relaxes the alignment constraints to allow all instructions (16 bit and 32 bit) to be aligned on any 16-bit boundary to improve code density.
Project lead looks to be running a semiconductor company, and they say they have an NDA with the fab company.
> Shakti is the concept or personification of divine feminine creative power, sometimes referred to as “The Great Divine Mother” in Hinduism. As a mother, she is known as “Adi Shakti” or “Adi Parashakti”. On the earthly plane, Shakti most actively manifests through female embodiment and creativity/fertility, though it is also present in males in its potential, unmanifest form. Hindus believe that Shakti is both responsible for creation and the agent of all change. Shakti is cosmic existence as well as liberation, its most significant form being the Kundalini Shakti, a mysterious psychospiritual force.
> In Shaktism, Shakti is worshipped as the Supreme Being. Shakti embodies the active feminine energy of Shiva and is synonymously identified with Tripura Sundari or Parvati.
RISC-V also is becoming the standard for research and the open-hardware community.
Now the Indians can share and collaborate with the open-hardware community.
There might be specific implementation that could be licensed as you describe but I am not aware of any at the moment.
There was a workshop at Chennai:
Also, this is an old video but gives the basic information for the project: https://www.youtube.com/watch?v=OoxOzvf78uQ
This is one of the leads: https://news.ycombinator.com/user?id=gsmadhusudan
I think I will repost his comment on Shakti; more people should see it. It was an answer in a thread about Shakti being an 'ARM killer':
As the lead architect of Shakti and the guy who helped kick-start the project, I figure I am owed my 2 cents!
1. We never positioned it as an ARM killer! That was the imagination of the reporter who wrote the article.
2. Shakti is not a state-only project. Parts of Shakti are funded by the govt; these relate to cores and SoCs needed by the govt. The defense and strategic sector procurement is huge, running in the 10s of billions of USD. There is significant funding in terms of manpower, tools and free foundry shuttles provided by the private sector. In fact Shakti has more traction with the private sector than the govt sector in terms of immediate deployments.
3. The CPU eco-system, including ARM's, is a bit sclerotic. It is not the license cost that is the problem, it is the inherent lack of flexibility in the model.
4. Shakti is not only a CPU. Other components include a new interconnect based on SRIO and GenZ with our extensions, accompanied by open source silicon; a new NVMe+ based storage standard, again based on open source SSD controller silicon (using Shakti cores of course); an open source Rust-based MK OS for supporting tagged ISAs for secure Shakti variants; fault-tolerant variants for aerospace and ADAS applications; and ML/AI accelerators based on our AI research (we are one of the top RL ML labs around).
5. The Shakti program will also deliver a whole host of IPs, including the smaller trivial ones and, as needed, bigger blocks like SRIO, PCIe and DDR4. All open source of course.
6. We are also doing our own 10G and 25G PHYs.
7. A few startups will come out of this, but that can wait till we have a good open source base.
8. The standard cores coming out of IIT will be production grade and not research chips.
And building a processor is still tough these days. Try building a 16-core, quad-wide server monster with 4 DDR4 channels, 4x25G I/O ports, and 2 ports for multi-socket support, all connected via a power-optimized mesh fabric. Of course you have to develop the on-chip and off-chip cache coherency stuff too!
9. And yes, we are in talks with AMD about using the EPYC socket. But don't think they will bite.
Just ignore the India bit and look at what Shakti aims to achieve; then you will get a better picture. I have no idea how successful we will be and I frankly do not care. What we will achieve (and have to some extent already) is:
- create a critical mass of CPU architects in India
- create a concept-to-fab eco-system in India for designing any class of CPUs
- add a good dose of practical CPU design knowhow into the engineering curriculum
- become one of the top 5 CPU arch labs around
Shakti is already going into production. The first design is actually in the control system of an experimental civilian nuclear reactor. IIT is within the fallout zone so you can be sure we will get the design right. If you want any further info, mail me. My email is on the Shakti site. G S Madhusudan
1. Availability of professionals
India: produces tons of electronics engineers and semiconductor specialists, but very, very few of them find employment in the country.
China: there is a somewhat OK supply of undergraduate-level engineers, but for anything above that you have to attract people from abroad. And yes, Chinese fabless companies have been hiring from abroad since the very beginning. In fact, the people who make SoCs at Allwinner, Rockchip, etc. are around 50% undergrad and 50% masters-level. In their early days they were eager to hire random college grads and teach them Verilog on site.
2. Motivation
India: a research program; all work in the past few decades was about delivering some kind of proof-of-concept-level "national chip".
China: make money quick - 9 out of 10 Chinese fabless companies start with bog-standard, off-the-shelf "solutions" from ARM and add some flavour: here you have a 4-channel camera controller, here eDP on chip, and here 10G Ethernet for pennies.
3. Markets
India: with all respect, the truth is there are none. And from many people I hear the same criticism - even if the 10th state-backed effort in a row to make the "national chip" succeeds, there will be no chance of ever sustaining it with the microscopic domestic market demanded by political mandate.
China: foreign markets - even 15 years ago, Chinese fabless companies well understood that their value proposition was actually smaller in the domestic market than in export manufacturing. Most Chinese buying a PC 20 years ago were not deliberating whether their PC had a Sigmatel audio codec or some cheaper domestic analogue, but for somebody making stuff for export, every penny saved on an expensive imported chip mattered a lot. Even today the pattern holds: Chinese domestic-market smartphone models mostly have high-end Qualcomm or Samsung flagship-class chips, while for export they use Mediatek, Allwinner, and Spreadtrum.
Now the Indian national program is developing and collaborating on the same stuff that many Silicon Valley startups and Western universities do.
We are seeing something really exciting in the works, and many companies in China also see the potential.
Oy! Got a link?
However, the Rust-based OS is not there yet as far as I know.
Unfortunately, I'm not the kind of engineer to jump in and help build CPUs.
Makes me feel proud of the work at least partly from the city I grew up and studied in - Chennai, India.
Also very proud of the humble explanation of progress, and the sensible goals. One could easily imagine a project like this getting distracted by PR and chasing headlines instead of technical progress (remember the cheap laptop competitor to OLPC?).
I cheer for this project and the people involved in it with all my heart.
I wonder if part of the govt's interest comes after discoveries of backdoors in US- and China-based processors: a national security motivation to develop indigenous manufacturing.
Congrats to this team. Great project!
There is a movie on Netflix called "Parmanu" (https://en.wikipedia.org/wiki/Parmanu:_The_Story_of_Pokhran) about how India was pressured by the US back when it wanted to test nuclear bombs. There are also countless other stories, including how the US tried to stop India from buying cryogenic engines (https://timesofindia.indiatimes.com/india/India-overcame-US-...) and a GPS-related incident in the Kargil war with Pakistan (https://timesofindia.indiatimes.com/home/science/How-Kargil-...).
This post on Reddit also has some good info: https://www.reddit.com/r/india/comments/27l015/what_fuels_in...
I wonder how long it will be until we can buy a full raspi-like platform that has no proprietary chips or firmware.
As for firmware, that's already pretty open with ARM.
You can boot a Tegra X1 (Google Pixel C, Nintendo Switch) without any blobs with coreboot. (You still want one tiny blob — DDR4 training stuff — if you want your RAM not to be super slow.)
Allwinner and Rockchip platforms also tend to not have blobs, at least not user visible/modifiable ones.
You can even boot a Raspberry Pi without blobs https://github.com/christinaa/rpi-open-firmware (but without usb/ethernet/etc.)
Whenever people make statements like 'RISC-V chips' you know it's wrong, because there is a broad range of chips from many different vendors.
Anyway, depending on your level of trust, at the moment you could:
* Download rocketchip (BSD licensed) and run it on an FPGA. You'd have to trust the FPGA vendor and their proprietary tools like Vivado. This lets you run Linux, at a speed of about 50 MHz.
* Run PicoRV32 on the reverse-engineered Lattice 8K FPGA using the open source toolchain. You cannot run Linux on this, but you can run short C programs without libc, and given that Clifford has done a very good job fully understanding the FPGA we can be reasonably sure there are no backdoors. My experiments with this are here: https://rwmj.wordpress.com/tag/icestorm/
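To give a feel for what "short C programs without libc" means there, here is a bare-metal sketch. The UART address is hypothetical (it depends entirely on how the memory map of your PicoRV32 SoC is wired up on the FPGA), and you would build it with a riscv32 cross-compiler, -nostdlib, and a matching linker script:

    /* Hypothetical memory-mapped transmit register; substitute whatever
       address your FPGA SoC wiring assigns to the UART. */
    #define UART_TX (*(volatile unsigned int *)0x02000000)

    /* No libc and no OS: the linker script points the reset vector at
       _start, and there is nothing to return to. */
    void _start(void)
    {
        const char *msg = "hello from a PicoRV32 soft core\n";
        for (const char *p = msg; *p; p++)
            UART_TX = (unsigned int)*p;
        for (;;)
            ;
    }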
Still available from many places. Here is one.
I can't see where you can actually buy one, or how much it costs, but looks like a project to keep an eye on.
The expansion board still lists a price:
From memory, the HiFive Unleashed was on the order of $1000. The FAQ discusses why the price is high, they're planning for it to come down as demand ramps up.
There's also an Arduino-like embedded board:
This is a huge boundary to cross, and I'm not convinced that it's actually worth doing.
Two very different things.
Such crosslinking is a true service.