Looking forward to a future where CPUs that are open hardware, from the transistor up, are generally available.
Who is calling this a PC? OpenV seems to be marketing itself as a microcontroller, it keeps comparing itself to Arduino.
I should also add that the "boring" MIPS is right now significantly more open than RISC-V, more mature, and easier to get hold of.
I am still searching for an active RISC-V verilog implementation project for me to contribute through design and verification. For those who are looking for contributors you may want to drop me an email: my username at gmail dot com.
 - https://github.com/onchipuis/mriscvcore/issues/3
Clifford has also been doing a lot of cool work adding SMT-based verification tech for synthesis via Yosys, and even started working on a verification bench for RISC-V implementations: https://github.com/cliffordwolf/riscv-formal - he's also very nice and approachable IMO. I'm sure he'd appreciate some extra help.
Sounds like exactly what you want, perhaps.
The "transistor" level would require open source silicon layout software, and probably some level of NDA relaxation on the part of the fabs who surely won't allow a production mask set on their modern processes to be released to the public. I don't see that happening any time soon.
 Mature/legacy processes actually do have public design rules available. The MOSIS "scalable CMOS" rules apparently work for real devices at the 180nm node or thereabouts.
Basically: the "shape of a transistor" (and highly tuned structures like SRAM & flash cells, etc...) in modern logic processes is itself a tightly guarded secret. Outside some crappy electron microscopy done by people like Chipworks and the much-prettier-but-really-spun pictures released by the fabs' marketing departments, no one involved in the process releases anything.
Now, maybe that's because the masks are ultimately copyrighted by the fab and protected. Or maybe if they leaked they'd be perfectly fine. But what really happens in practice is that before the fab will consent to make your chips for you, they force you to sign an elaborate NDA with, no doubt, huge contractual penalties.
Again, I don't expect this to change. Silicon fabrication at the mask level will not be open source any time soon, though maybe there's hope that open source tooling might arrive. We just won't be able to see its output.
There are patents, but they don't protect designs or individual physical items, just processes.
There is IC mask protection, but that doesn't protect modified versions of the IC masks (if I understand it correctly):
Am I getting something wrong?
These RISC-V silicon implementations on CrowdSupply are on the low end, RV32I (32-bits, integer-only). The SiFive one supports M (multiply) A (atomic) C (compressed, like Thumb for ARM) extensions. No, you cannot build a smartphone with these.
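As an aside, those extension letters compose into an ISA string like "rv32imac". A minimal sketch of parsing one in Python (my own illustration, not any official tool; it handles only the classic single-letter extensions and the "G = IMAFD" shorthand, not the newer Z*/S* names):

```python
def parse_isa(isa: str):
    """Parse a base RISC-V ISA string like 'rv32imac' into (xlen, extensions)."""
    isa = isa.lower()
    if not isa.startswith("rv"):
        raise ValueError("ISA string must start with 'rv'")
    xlen = int(isa[2:4])          # 32 or 64
    exts = set(isa[4:])
    if "g" in exts:               # G = "general" = I + M + A + F + D
        exts = (exts - {"g"}) | set("imafd")
    return xlen, exts

print(parse_isa("rv32imac"))   # HiFive1-class part: 32-bit, I+M+A+C
print(parse_isa("rv64gc"))     # a Linux-capable profile: 64-bit, IMAFD+C
```

So the SiFive chip above is "RV32I plus M, A, C", and a Linux-class core would be something like "rv64gc".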
But there are other implementations. An example high-end implementation, 64-bits (you cannot buy silicon yet, but maybe you can burn your own FPGA) is BOOM:
Did you mean 16NM or are you referring to something else other than process size?
The second chip in those links is an Arduino-compatible MCU that already runs at 320MHz+, so it's got 2x as many cycles as you wanted!
> not to mention PCs.
There have also been demo runs of some 64-bit RISC-V boards, using the same FPGA core that 320MHz chip uses ("Rocket"), that have already hit 1GHz+ on old (think 180nm?) fab processes. It can certainly do GHz, it seems.
This means that anyone can then go away and build their own processor implementation to that standard. Intel could create their own chip for that instruction set and get it running at 4GHz with their latest fab facility. At the other extreme, I could sit at home and use Verilog to create a soft core that is downloaded to an FPGA and be up and running with a much slower implementation. Neither of us has to pay anyone for the right to do so, and the code we write should work on both because it adheres to the same standard.
In contrast, if you want to create an ARM or x86 processor then you have to pay a license fee. Even if you implement the entire processor from scratch and use nothing from ARM or Intel except the instruction set specification, you still need to pay, and it is not cheap. There are many situations where you want to avoid that cost: a researcher wants to experiment with a new idea; you want an Internet of Things processor but cannot afford the license cost of an existing processor; or you're Samsung, producing zillions of phones a year, and saving that dollar per phone is worth having.
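To put rough numbers on that last point (both figures are made up for illustration; actual ARM royalty rates and volumes are confidential and vary by deal):

```python
# Hypothetical per-chip ISA royalty and annual shipment volume.
royalty_per_unit = 1.00        # "that dollar per phone"
units_per_year = 300_000_000   # a big phone maker's rough annual volume

annual_savings = royalty_per_unit * units_per_year
print(f"annual savings: ${annual_savings:,.0f}")  # annual savings: $300,000,000
```

At that scale, even a small per-unit fee is real money, which is why the license-free ISA matters commercially and not just ideologically.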
As the RISC-V standard is relatively new it means that the first designs to market are the targets that are easiest to create, so microcontrollers. SiFive are now moving up the scale and are working on a full processor with MMU that could be used to run something like Linux. I would expect to see something like that appear within a year or so.
RISC-V is an ISA that can be used for all kinds of chips eventually, just like there are ARM offerings from <100 MHz microcontrollers to >1 GHz multi-core phone SoCs.
There's a range of implementations - from micro up to superscalar out-of-order.
AFAIK, the two implementations here are both at the micro end of the scale.
* With some private subsidy
Anybody have any experience with it?
It has a lot of Flash for your images or whatever but very little SRAM, even compared to e.g. the Arduino Zero. Only 16KB. So that's quite unfortunate. I only had a Uno R1 though, so it's an upgrade for me in every dimension.
The clock speed is comparatively very high, so it could drive some things that may've been out of reach, but they aren't entirely clear on the power usage. The Dhrystone/mW metric is obviously very good, but if you don't need the raw speed and can live with little SRAM, it's unclear how well it fares power-wise. You could probably get away with less.
The tooling seems to work fine, including the OpenOCD-based debugger (though it has some natural limitations), and compiler toolchain. It'll be nice when the binutils port is upstream so I can get rid of a custom build.
I don't know about the Arduino IDE support. I don't use it. While it's pin-compatible, most Arduino libraries probably need some light modifications to work with the HiFive1 before you can reliably use stock shields/extensions. They have some examples (e.g. the Adafruit capacitive display library) on GitHub of doing this. So you can use it with existing stuff if you get your hands a little dirty.
There are no gerber files or masks or whatever I guess, though the RTL they use is available, and they do have directions on how to synthesize the RTL for a small Xilinx Artix FPGA to develop the chip. The FPGA is about $70 I think so you could also go that route and use the soft-core version, though only if you care about HDL.
It's all very new, e.g. the original Dhrystone port on GitHub was a bit busted, but they fixed it very fast. Expect some roughness, but it's mostly been smooth for me since I just use GDB/GCC, OpenOCD and the simulator.
I'd suggest buying it if you want to experiment with RISC-V, support the project, and play with the new tooling -- but probably not as an Arduino replacement, if you have something like a Zero already. Unless you're willing to get your hands dirty, which maybe you are!
The ISA is the least of your problems. In fact, the CPU itself is not really an issue at this point. If you look up what Linus and others have been complaining about, it has almost never been about the CPU itself (those are all pretty open designs these days).
What we need right now is open boot, open firmware and most of all, an open gpu that can compete with the state of the art.
It's more like "open the ISA and -- oh! they're here!". The RISC-V Foundation is a who's who of computing.
> the cpu itself is not really an issue at this point
It sounds to me like you're taking the point of view of someone who values openness for openness' sake. For libre, because it's right. But the Foundation members like openness because it's good business.
A lot of the members of the RISC-V Foundation play in the newly-emerged-but-dominating mobile computing space. For tablets and smartphones most of the existing vendors have to pay royalties to ARM. The low end makes up a lot of volume and those royalties start to look more significant over time.
The server marketplace is an interesting one too. HPC and cloud computing vendors probably like the idea of being able to have a general purpose processor that is "good enough" and special instruction sets that are useful in their domain. Amazon started a cloud FPGA project recently. Clearly this indicates a likely market for specialized computing jobs.
> What we need right now is open boot, open firmware and most of all, an open gpu that can compete with the state of the art.
For my part, I think I agree: those would be really great to have. But there aren't many organizations out there to fund that vision.
Not at all, but a lot of people seem to think RISC-V = fully FOSS future, just because the ISA is in the datasheet.
To address your last statement from another angle: this is not the first FOSS CPU; it only happens to be hip right now. Significant work has gone into the other alternatives, which now risk being forgotten just because they are not SV darlings.
RISC-V has a solid basis in academic and industry experience, and implementations are moving quickly.
Wouldn't it make open source much less democratic since the guy with friends in media will always win instead of the guy with the best ideas?
> guy with friends in media will always win instead of the guy with the best ideas
While media exposure and momentum are not perfectly synonymous, it's rare to find one without the other.
If your chip is complex enough to need programmability (perhaps even only internally, i.e. a network switch or something), perhaps using an open core makes your final product cost $10, whereas tacking on licensing fees to third parties would make it $11. More margin, more profit, etc.
At least the x86, x86-64 and ARM ISAs are covered by patents, so you'd need licensing even for an open independent implementation. I imagine this applies to most other commercial ISAs.
In particular, whenever SPARC is brought up, LEON is just about the only implementation I'm aware of, while RISC-V already has at least 3-or-4 implementations that have gone to fabrication in the past few years. It at least has toolchain support and such. Oracle being the steward of the last 10 years of SPARC design probably hasn't helped either, especially wrt IP concerns. And LEON is still 32-bit only, apparently, from a quick search?
OR1k has realistically never had support in almost anything, it seems, other than some binutils/Linux forks and an implementation or two. But it also differs architecturally: delay slots, no IEEE 754-2008 support, and no 64-bit OR1k variant was available at the time RISC-V got started. (Does modern SPARC still have delay slots? That's also a major core change, still.) So if you're going to fix those things, you might as well just start from scratch, really.
I've literally never even heard of LM32, despite owning a Lattice FPGA that sits on my desk, and it apparently having GCC support. It's a Harvard design though, and there is no 64-bit support, so those are two major things that would have to be fixed, and you're still on the hook for e.g. fixing the toolchain. So it's still not free. Also, while the core is free, does it really have no IP claims by Lattice?
Ultimately RISC-V just had the right support at the right time (the last ~5 years), and has the right set of features that make it a lot more appealing architecturally. It has an open toolchain that will be (finally!) going upstream in major open source compilers (e.g. there is no official OR1k/LM32 support in LLVM, though LM32 is in GCC), multiple implementations that are size-optimized (picorv32) and speed-optimized (Rocket) and can be openly synthesized/developed, 32/64-bit support, needed features like a modern upcoming SIMD/vector ISA design, multiple large and small supporters, compatible chips beginning to roll out, etc. The ISA design IMO (the length encoding in the low bits) makes the instruction set fairly simple and straightforward to write tools for. If you look at all of these efforts, RISC-V is simply a juggernaut in comparison to any other.
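On the "simple to write tools for" point: the instruction length is determined entirely by the low bits of the first 16-bit parcel, so a disassembler's length decoder is a few lines. A rough sketch in Python, covering only the 16-bit and 32-bit cases (the spec also reserves longer encodings, omitted here):

```python
def insn_length(parcel: int) -> int:
    """Return instruction length in bytes from the first 16-bit parcel.

    Per the RISC-V spec: low two bits != 0b11 means a 16-bit (compressed)
    instruction; 0b11 with bits [4:2] != 0b111 means a 32-bit instruction.
    48/64-bit encodings exist but are not handled in this sketch.
    """
    if parcel & 0b11 != 0b11:
        return 2                      # C-extension compressed instruction
    if (parcel >> 2) & 0b111 != 0b111:
        return 4                      # standard 32-bit instruction
    raise NotImplementedError(">32-bit encodings not handled here")

print(insn_length(0x4501))  # c.li a0, 0              -> 2
print(insn_length(0x0513))  # low half of addi a0,... -> 4
```

This is part of why writing assemblers, disassemblers, and emulators for RISC-V is so approachable compared to x86's prefix-laden variable-length encoding.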
I do think there is hope out there for more open cores to thrive e.g. due to licensing concerns and cost margins, esp in the embedded space. RISC-V does not have to be the "only" alternative or winner. It's just a very good alternative that has a very "wide reaching" design, with modern niceties (64-bit, SIMD, etc).
If I had to pick anything else, I'd probably pick the J-core revival of the SuperH CPU. It's somewhat dated, but the patents/IP have all expired, it's a very dense encoding, a proven architecture (20+ years!), has existing toolchain support (upstream Linux/GCC ports), and in theory can be taped out etc. I've been writing some RISC-V code lately and I'd be interested to see the density vs SuperH -- that matters for stuff like writing a Forth. :)
The goal is to have a secure and open hard and software stack, all the way from the CPU to the top.
There is clearly interest in a more standardized, fully open ISA, and that's why it exists. There are now other interest groups who want to build hardware for it, and others who want to build stuff on top of it.
The people who work on open-source firmware all suffer from the closed-source parts and all the unknowns of the software, and even worse, they suffer from the multitude of hardware. Getting some consistency into this space with bottom-up open standards can only be good.
> + Copyright (C) 2011-2017 Free Software Foundation, Inc.
> + Contributed by Andrew Waterman (firstname.lastname@example.org).
Of course, RISC-V is an architecture, so not just applicable to smartphones, but the point remains. RISC-V solves half the problem, but what about the other half, i.e. DSPs?
One thing at a time. I'd argue that graphics is still a huge hole in the open hardware world. There are lots of things to be done.
There is the MIAOW open source GPU but it is based on an AMD instruction set, so it's probably not viable for general use.
I believe in the short term the solution will simply be running LLVMpipe on a bunch of RISC-V cores for software rendering. The SoCs will just have framebuffer graphics. Another option is to have PCIe support (which HiFive is going to), so you could at least recompile an open source driver for a proprietary video card. I suspect that once RISC-V SoCs are running a full Linux distribution with one of these inferior graphics solutions, people will jump in to make better open graphics hardware.
But with this open hardware, they can sell without any worries, anywhere (apart from the problem of following standards). I actually think this might bring more investment and interest from them. Of course, the high-end CPUs won't have so much diversification due to the complexity, and it's impossible for normal people to keep pace with the complex manufacturing processes below 22nm, but baby steps, I guess.
Although the 'know your hardware' tag is always advertised with RISC-V, that's not going to change until we have complete control (or trust) of the fabs.
It's a bit like people being worried about malware/adware/spyware on their phones/personal computers. Are you sure that, right now, you have no spyware? How can you be sure if you haven't compiled the code yourself? If you did compile the code and read all the lines of the code you compiled -- great achievement!
So now, if your phone has been connected to the internet, how do you know there wasn't a vuln that was exploited, and a rootkit installed with the last app you thought was great and had to have? Or that even the last JS block wasn't malicious?
Sorry for the rant. I just wanted to convey that complete security is an illusion we know and make ourselves believe is the real thing. I'm a lot more interested in the cost economics of RISC-V, and I don't really have to worry about security right now in the MCU/embedded space, where it is manageable (at least right now).