* 64-bit hardware will be available from SiFive. It'll be low-ish end (4 application cores), but it'll run real Linux distros. SiFive are already shipping early hardware to various partners.
* Linux 4.15 will ship with RISC-V support. It's available in RC releases now. (-rc1 was 3 days ago I think)
* glibc will ship RISC-V support. That'll happen in February 2018. I think it's not appreciated how important this is. It means there will be a stable ABI to develop against, and we won't need to re-bootstrap Linux distros again.
* GCC and binutils have been upstream for a while.
* A lot of other projects have been holding off on integrating RISC-V support and patches until RISC-V "gets real", i.e. support is actually in the kernel and there's real hardware. Those projects are now unblocked.
* Fedora and Debian bootstrapping will kick off (again). [Disclaimer: I'm the Fedora/RISC-V maintainer, but I'm also getting everything upstream and coordinating with Debian]
* There'll be finalized specs for virtualization, though no hardware implementing them yet.
* There should be at least a solid draft of a spec for industrial/server hardware. Of course, no server-class hardware will be available for a while.
It just sounds insane to me that the tools can't handle synthesis of straightforward stuff like this. My personal fantasy startup is a next-generation, mostly open-source EDA toolchain covering simulation, synthesis/optimization, and place & route. You could expose the analog cell design to the world, but generate revenue from the fab- and process-specific partnering required to walk customers through making real hardware. Who's got funding for me?
This has probably been thought of and dismissed a million times over by actual experts. Say, it loses whatever advantage it has to the extra micro-ops needed to move values back and forth to the correct fraction of the register file?
Have I got the paper for you! From my adviser's previous student: https://dspace.mit.edu/handle/1721.1/34012
But you're right -- the win isn't obvious. There's pain and advantages in both directions. I've always wanted to build it, but I only have so much time in the day. :(
Whereas what I want to see (or do, at least in my dreams) is closer to the analog end: design of parametrized cells; GPU-parallel analog simulation of large circuits; circuit-dependent optimization (i.e. choose "speed" if the cell is on a critical path and "area" or "power" if not, optimally choose drive strengths and buffers based on the latency needs of the path, swap flip-flop implementations likewise, etc.); place and route; and eventually mask file generation, reverse synthesis, and design rule checking (though obviously that part becomes fab-specific).
None of this stuff is "hard" in a fundamental way, but it's been hidden behind bad tooling for so long that no one seems to have tried to innovate much in the past few decades. Anyway, I've got ideas aplenty.
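To make the "circuit-dependent optimization" bit concrete, here's a toy sketch in Python (cell names, slack numbers, and thresholds are all invented; a real flow would pull slack from static timing analysis and pick variants from a fab's standard-cell library):

    # Toy per-cell optimization-goal selection based on timing slack.
    # All names and numbers are made up for illustration.

    # Hypothetical worst-case slack per cell, in picoseconds, from a timing run.
    slack_ps = {
        "u_alu_add": -35.0,   # negative slack: on a failing/critical path
        "u_dec_mux": 12.0,    # barely positive: near-critical
        "u_cfg_reg": 800.0,   # lots of margin
    }

    NEAR_CRITICAL_PS = 25.0

    def pick_goal(slack):
        """Choose what to optimize a cell for, given its worst path slack."""
        if slack < 0:
            return "speed"       # upsize drive strength, use a faster flop variant
        if slack < NEAR_CRITICAL_PS:
            return "balanced"    # near-critical: don't trade speed away
        return "area/power"      # plenty of margin: downsize, cut leakage

    for cell, slack in slack_ps.items():
        print(f"{cell}: slack={slack:+.0f}ps -> optimize for {pick_goal(slack)}")

The interesting part in a real tool is iterating this, since every swap changes the slacks.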
Now, Cavium's Vulcan-based ThunderX2 seems to be beating Intel's Skylake server chips.
Just saying that, in terms of per-core performance per cycle, they're closer to a bunch of Atoms than to a Xeon (which, again, is totally fine for some use cases).
Maybe these should be expressed as Instructions Per Second (at peak and at the point of diminishing returns?) or something like that, rather than two independent numbers. Higher clock frequency actually seems like a bad thing, all else being equal. It seems to me that throughput ought to trend toward infinity, clock frequency toward zero. ;- )
This is particularly useful when talking about a processor design rather than one specific processor. As you said, there's a lot of good about slower clock frequencies, so you'll see the same ARM design deployed at a variety of frequencies. It's far easier to talk about a design's IPC separately from its achievable clock frequency (although both are important!).
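For a concrete (entirely made-up) illustration of why the two numbers are worth keeping separate: instructions per second is just IPC times clock, so a wide-but-slow design and a narrow-but-fast one can land in the same place:

    # Throughput = IPC * clock frequency. Figures are invented for
    # illustration, not measurements of any real chip.
    designs = {
        "wide, slow design":   {"ipc": 4.0, "freq_ghz": 1.5},
        "narrow, fast design": {"ipc": 1.5, "freq_ghz": 4.0},
    }
    for name, d in designs.items():
        gips = d["ipc"] * d["freq_ghz"]  # billions of instructions/second
        print(f"{name}: {d['ipc']} IPC x {d['freq_ghz']} GHz = {gips:.1f} GIPS")

Same headline throughput, very different power and area stories.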
Really hoping to get my hands on RISC-V-based SBCs next year.
Can somebody associated with the project reach out to me?
Maybe listen to this:
Is there something in the ARM license?
The trouble is they all suck in some slightly different way. Maybe only 1GB of RAM, maybe a bottleneck bus in front of your network or storage I/O. Graphics are the big pain point right now. Etnaviv/Vivante is your only choice for free accelerated graphics, and you'll only find it in a few chips. Mali and PowerVR are all around, but their closed-source drivers are absurdly difficult to work with.
The nicest OSS-all-the-way-down ARM SoC is the i.MX6, which is expensive and frankly old/slow.
To be clear: RISC-V doesn't solve any of these concerns. That doesn't mean it isn't an amazing project.
But it is an alternative to ARM, and has a governance model that parties interested in open standards might be more willing to join.
Right now, ARM CPUs on the low end have little standardization at the SoC level. They have a few cores from ARM (what you think of as the main CPU, maybe a GPU, maybe a low-end micro for power management, and a couple more for realtime tasks). Then you have the non-ARM IP in the form of image processors for cameras, video decoding, etc. The SoC mfg is responsible for gluing together all these parts, and every mfg has their own proprietary take on the process, meaning a different initialization sequence, different firmware layouts, and so on.
The SoC mfg's goal is to ship a product. It isn't to define a standard, and since there is no standard to follow, 'anything' goes.
Because ARM costs enough that an SoC layout isn't viable unless you're going to ship millions of chips, academic research (where the seeds for standards are often planted) just doesn't happen.
Is it license or fab issues? I thought the license is $0.X per chip, so it shouldn't make a difference (license-wise) if you make Y chips or 100,000 * Y chips. Fab costs change with overhead, but how would RISC-V help there?
Here's a somewhat accurate article on the topic: https://www.anandtech.com/show/7112/the-arm-diaries-part-1-h...
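The gist, if I remember it right (all numbers below are made up, not actual ARM pricing): it isn't just a per-chip royalty, there's also a hefty upfront license fee, so the effective per-chip cost depends heavily on volume:

    # Toy cost model: upfront license fee amortized over shipped volume,
    # plus a per-chip royalty. Placeholder figures, not real ARM terms.
    UPFRONT_LICENSE = 5_000_000   # hypothetical one-time license fee, $
    ROYALTY_PER_CHIP = 0.50       # hypothetical per-chip royalty, $

    for volume in (10_000, 1_000_000, 100_000_000):
        per_chip = ROYALTY_PER_CHIP + UPFRONT_LICENSE / volume
        print(f"{volume:>11,} chips -> ${per_chip:,.2f} effective cost per chip")

At academic volumes the upfront fee dominates, which is the problem the parent is describing.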