
RISC creator is pushing open source chips - Tsiolkovsky
http://gigaom.com/2014/08/19/risc-creator-is-pushing-open-source-chips-for-cloud-computing-and-the-internet-of-things/
======
reitzensteinm
Assuming Moore's Law is slowing down, and there's no rabbit about to be pulled
out of a hat by Intel or others, the slowdown is going to significantly close
the gap between what is possible on an $x billion budget and an $x million
one. This makes it a great time for initiatives such as this to take place.

In 2001, an unexpected delay of 18 months could mean that you launched against
competition running at 2GHz instead of 1GHz. These days, Intel's 32nm and 22nm
chips (and the 14nm ones, from what is known about Broadwell) are more or less
identical from a consumer's perspective.

As an aside, Patterson's book (coauthored with Hennessy), Computer
Architecture: A Quantitative Approach, is one of the best I've ever read.
Right up there with SICP.

~~~
DCKing
Well, there are two predictions you can think about that could come true
sometime in the future:

1. Nobody can put more than XXX transistors on a square millimeter (where XXX
is some physical limit of chip materials)

2. "Everybody" can put YYY transistors on a square millimeter (where YYY is a
number sufficiently close to XXX that the difference does not matter much)

The first means stagnation in the chip market: not only does Moore's Law no
longer apply, but neither does even a weak version of it ("chips tend to get
faster/cheaper over time"). But if that scenario comes true, it doesn't
necessarily mean that the democratization of the second scenario comes true as
well. The ability to put YYY transistors on a square millimeter could still be
limited to a select few companies investing billions in the production of
these chips. After all, how many fabs worldwide are currently capable of
producing 45nm chips, which could be considered "commodity" [1]?

Is it really true that it becomes _much_ cheaper to produce chips on smaller
process nodes over time? Sure, it becomes cheaper since less original R&D is
necessary, but is it reasonable to think that producing 20nm chips ever
becomes a question of investing anything less than, say, hundreds of millions?
Does the second scenario ever come true, at least in the foreseeable future?

I don't know much about chip fabs so I wonder if someone more knowledgeable
could share some insight on this.

[1]: I genuinely don't know the answer to this, but I don't suppose it's very
widespread?

~~~
cben
If 2 keeps not happening — if producing custom ASICs remains expensive — it
becomes increasingly attractive to produce cheap as-good-as-they-get FPGAs in
massive quantities.

Ironically, FPGAs as endgame would be way more disruptive than an "ASICs to
the people" endgame.

FPGA density, price per gate, and power efficiency trail "native" hardware by
constant factors of overhead, but it should be possible to reduce the gap
compared to where it is now. More FPGA volume means better economies of
scale; the hardware architecture could be optimized further; and most
importantly, IMHO, the current design tools are heinously unfriendly and could
benefit a lot from programmer attention (once programmers realize that a
more-or-less-finalized FPGA structure is the "new assembly" to optimize for).

I'm not sure if a constant factor of disadvantage would become very acceptable
(because we'll drop the throw-faster-hardware-at-it mentality) or very
unacceptable (because robots with FPGA brains always lose at high-frequency
chess wrestling to robots with native brains).

~~~
tachyonbeam
You'd ideally need some kind of open source FPGA to catch on (and be
manufactured with competitive performance). Companies like Xilinx and Altera
are not at all keen on open sourcing their technology and tools. IMO, through
their actions, they're shooting themselves in the foot, keeping FPGAs from
becoming what GPGPU is right now (or something even better, in fact).

------
alricb
Chicken, meet egg:

 _We did not include special instruction set support for overflow checks on
integer arithmetic operations. Most popular programming languages do not
support checks for integer overflow, partly because most architectures impose
a significant runtime penalty to check for overflow on integer arithmetic and
partly because modulo arithmetic is sometimes the desired behavior. [1]_

Please Regehr, don't hurt them. [2]

[1] Spec, section 2.4 [https://s3-us-west-1.amazonaws.com/riscv.org/riscv-
spec-v2.0...](https://s3-us-west-1.amazonaws.com/riscv.org/riscv-
spec-v2.0.pdf)

[2]
[http://blog.regehr.org/archives/1154](http://blog.regehr.org/archives/1154)

~~~
tachyonbeam
I happen to be implementing my own JavaScript JIT compiler and I can tell you
that SpiderMonkey, V8 and my compiler all perform a lot of overflow checks.
This is because in JavaScript, all numbers are doubles, but JITs typically try
to represent them as integers where possible, because integer operations have
lower latency.

Scheme and Common Lisp also rely on overflow checks to optimize small integers
as "fixnums" instead of always using arbitrary precision arithmetic. Not
having hardware support for overflow checks would complicate the
implementation of many dynamic languages, and reduce their performance
significantly.

Not sure what this guy was thinking. It can't be that hard to implement some
overflow flag you can branch on; I mean, adders basically produce that
information for free, don't they? Seems like a poor design choice.
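
To make that concrete, here's a minimal sketch (mine, not actual SpiderMonkey
or V8 code) of the overflow-checked small-integer fast path a JIT or a
Lisp/Scheme runtime wants to emit; add_slow_path is a hypothetical fallback
that stands in for boxing the result as a double or bignum:

    #include <stdint.h>

    /* Hypothetical slow path: stands in for boxing the result as a
     * double or a bignum. */
    int64_t add_slow_path(int32_t a, int32_t b);

    /* Small-integer ("fixnum") addition: stay in 32-bit integers when
     * the result fits, otherwise fall back. With a hardware overflow
     * flag this is one add plus one conditional branch; without one,
     * the compiler has to synthesize the check from extra instructions. */
    int64_t add_number(int32_t a, int32_t b)
    {
        int32_t sum;
        if (__builtin_add_overflow(a, b, &sum))  /* GCC/Clang builtin */
            return add_slow_path(a, b);          /* rare, slow case   */
        return sum;                              /* common, fast case */
    }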

~~~
ajb
Hmm; carry is free, not sure about overflow. But more generally, it's an extra
output for what would otherwise be one-output operations. Put it this way: if
a lot of JavaScript operations could modify a piece of global state, that
would be trivial in a trivial interpreter, but I bet it would add a lot of
complexity to your JIT. Same goes for CPUs: no problem for simple
implementations, but it would add complexity to an out-of-order or other
clever microarchitecture.

~~~
kps

> Hmm; carry is free, not sure about overflow.

Overflow is just the final carry out XOR the second-last carry bit, so it's
practically free.

Of course RISC-V doesn't have a carry bit either!
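
For the curious, here's a small sketch in plain C (not anything from the
RISC-V spec) showing that rule: the signed-overflow bit of a 32-bit add is the
carry out of bit 31 XORed with the carry into bit 31.

    #include <stdint.h>

    /* Signed overflow of a + b, recovered from the adder's carries:
     * overflow = (carry out of the MSB) XOR (carry into the MSB). */
    int add_overflows(int32_t a, int32_t b)
    {
        uint32_t ua = (uint32_t)a, ub = (uint32_t)b;
        uint64_t wide = (uint64_t)ua + (uint64_t)ub;
        uint32_t sum  = (uint32_t)wide;

        int carry_out = (int)(wide >> 32) & 1;            /* carry out of bit 31 */
        int carry_in  = (int)((ua ^ ub ^ sum) >> 31) & 1; /* carry into bit 31   */
        return carry_out ^ carry_in;                      /* 1 = signed overflow */
    }

For example, INT32_MAX + 1 gives carry_out = 0 and carry_in = 1, so it reports
overflow, while -1 + 1 gives carry_out = carry_in = 1, so it doesn't.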

------
asb
If you'd like to be paid to work on a fully open-source RISC-V SoC, we're
hiring. See [http://lowrisc.org](http://lowrisc.org)

------
PinguTS
Please explain: What would be the benefit of an open source chip?

At the end of the day, a chip is a physical thing that has to be produced by
someone. As long as there is no 3D printer that can build nanostructures to
make chips, I can only implement it in an FPGA. The main FPGA providers are
Altera and Xilinx, and both are very expensive, so they are good for
prototyping or very low volume. For anything beyond that, I need a specific
implementation. Okay, I could go to a foundry like Samsung (or others) and
order according to my design. But even that requires a very high volume, in
the millions, to make it affordable and viable from a business perspective,
especially if it is intended for the IoT market. On the other hand, I can buy
ARM Cortex-M0 and Cortex-M1 chips from NXP and others for less than $1. They
are powerful and their power consumption is very low.

Just to mention, the other day the WRTnode[1] was released for less than $25,
or look at the Raspberry Pi Compute Module. Don't get me wrong: yes, I would
like to see those with an open source chip. But would that really be a viable
business case, outside of a university context?

[1] [http://wrtnode.com/](http://wrtnode.com/)

~~~
worklogin
What's the point of an open source operating system? Most people buy their
computing device with the OS installed, so the cost is marginal.

~~~
PinguTS
You are not from the hardware business?

In hardware design you cannot build a "minimum viable product" and then do
"continuous integration" from there on. A hardware product either works within
its defined limits or it doesn't. Then, depending on what you're building, you
need additional approval from the FCC, FDA, and similar organizations within
the US alone, and the same again from their counterparts in every other
country. Chip design is the next level up: changing a mask later on because
you found a bug means you produced a million chips just for the dust bin.

That means the entry barrier into hardware, especially chips, is very high.

~~~
worklogin
Reading your comment fully (something I should have done), I see you address
the differences in the analogy. Sorry.

Perhaps the cost situation will change over time. Perhaps, in coming decades,
the ability to fab microchips will itself become something achievable for
thousands instead of millions of dollars. And as for regulation: That only
matters if you're selling. If you're researching or making it for personal
use, the FCC doesn't (or shouldn't) matter.

------
igravious
Talking about the RISC-V Instruction Set Architecture

[http://riscv.org/](http://riscv.org/)

Why would you need 128-bit addressing? Isn't a 64-bit address space plenty
big? This isn't a "nobody'll need more than 640K" scenario, right?

[https://gigaom2.files.wordpress.com/2014/08/requirements-
tab...](https://gigaom2.files.wordpress.com/2014/08/requirements-table.gif)

~~~
unwind
There's motivation in the relevant part (RV128I Base Integer Instruction Set),
starting on page 81 of the 2.0 ISA specification ([https://s3-us-
west-1.amazonaws.com/riscv.org/riscv-spec-v2.0...](https://s3-us-
west-1.amazonaws.com/riscv.org/riscv-spec-v2.0.pdf)).

A choice quote:

 _It is not clear when a flat address space larger than 64 bits will be
required. At the time of writing, the fastest supercomputer in the world as
measured by the Top500 benchmark had over 1 PB of DRAM, and would require over
50 bits of address space if all the DRAM resided in a single address space.
Some warehouse-scale computers already contain even larger quantities of DRAM,
and new dense solid-state non-volatile memories and fast interconnect
technologies might drive a demand for even larger memory spaces. Exascale
systems research is targeting 100 PB memory systems, which occupy 57 bits of
address space. At historic rates of growth, it is possible that greater than
64 bits of address space might be required before 2030._
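
A quick sanity check on those figures (just the arithmetic, not from the
spec):

    1 PB   = 10^15 bytes,  log2(10^15) ≈ 49.8  ->  just under 50 address bits
    100 PB = 10^17 bytes,  log2(10^17) ≈ 56.5  ->  57 address bits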

~~~
userbinator
_if all the DRAM resided in a single address space_

Key point, and that's a _huge_ "if". All large systems are NUMA, and trying to
treat that like a uniform address space will be absolutely horrible because of
the extreme latencies that arise.

~~~
rch
You're correct of course, but I wanted to point out that people shouldn't
dismiss everything >64 bit out of hand based on this reasoning. There are
architectures that can make use of those extra bits, and not just for
addressing a universe of RAM. The Transmeta processors weren't perfect or
anything, but there's merit in the VLIW approach. The first-generation Crusoe
chip was a 128-bit part, and the second generation was 256-bit; this was a
decade ago, in
chips designed for ultra-light consumer laptops. As I understand it, some key
people from Transmeta ended up at P.A. Semi, which of course was acquired by
Apple in 2008. I wouldn't be at all surprised if we're talking about VLIW
architectures again by 2016 or at least 2020.

- [https://en.wikipedia.org/wiki/Transmeta_Efficeon](https://en.wikipedia.org/wiki/Transmeta_Efficeon)

- [https://en.wikipedia.org/wiki/Very_long_instruction_word](https://en.wikipedia.org/wiki/Very_long_instruction_word)

~~~
aidenn0
1) Instruction size is irrelevant to address size

2) Yes, Transmeta was VLIW internally, but I see that as an implementation
detail relative to other forms of superscalar execution; either way you have a
linear stream of instructions generated by the compiler, with the hardware
turning that into parallel execution at runtime. Calling that "VLIW" is about
as interesting as calling a modern x86 "RISC."

------
Narishma
From the article: "Popular chip architectures historically have been locked
down behind strict licensing rules by companies such as Intel, ARM and IBM
(although IBM has opened this up a bit for industry partners with its
OpenPower foundation)."

IBM's new openness isn't really open at all. It's just what ARM has always
been doing: they allow you to pay them a lot of money so you can use their ISA
in your CPU.

------
krmboya
Maybe a bit OT, but I've just started learning programming for MMIX, Donald
Knuth's RISC computer. I've been wondering when, or if, it could one day be
implemented in hardware.

------
Zigurd
This paper
[http://www.eecs.berkeley.edu/Pubs/TechRpts/2014/EECS-2014-14...](http://www.eecs.berkeley.edu/Pubs/TechRpts/2014/EECS-2014-146.pdf)
pointed to in the article posted here, claims higher performance per MHz and
lower power consumption than ARM in Table 2.

Still, that requires some chip maker to build a SoC around a RISC-V CPU that
attains these efficiencies in the real world.

The paper makes these arguments for RISC-V:

• Greater innovation via free-market competition from many more designers,
including open vs. proprietary implementations of the ISA.

• Shared open core designs, which would mean shorter time to market, lower
cost from reuse, fewer errors given many more eyeballs, and transparency that
would make it hard, for example, for government agencies to add secret trap
doors.

• Processors becoming affordable for more devices, which helps expand the
Internet of Things (IoT), where devices could cost as little as $1.

The first point is not very concrete. China has long had some of their own
MIPS-based RISC CPU designs, and they are most likely to act on the
transparency issue. That leaves super-cheap processors for IoT. ARM may be
able to deliver pricing and value that's better than free.

And all this assumes very low friction in the form of, say, Android adding
this ISA to the standard set of compilation targets for native code and for
ART pre-compilation.

------
ChuckMcM
Perhaps the most important contribution Linux has made to the world is that
because of its impact, you can run an OS on nearly any instruction set
architecture (ISA). That drops a pretty huge barrier in terms of getting to
something from nothing. For RISC-V though, unless it ends up in some sort of
SoC (which most people won't build), I don't see the impact. But carrying that line
a bit further ...

One of the challenges Microchip has faced is that the C affinity of rival
Atmel's ATmega architecture cost it a few significant design wins (Arduino
perhaps the most serious). They could use RISC-V to try to offset the Atmel
SAM series. But other than that, I don't see the motivation for folks not to
use ARM; granted, a full processor license would be expensive, but if you are
looking at volumes where that would be an advantage, it isn't _that_
expensive.

------
TheMagicHorsey
Is licensing an ARM core really a barrier in a business/community venture,
once you take into account the enormous costs of just getting a fab to make
your chip?

I feel like the fabbing cost is so high, that at that scale, the CPU license
fee is really nothing.

Correct me if I am wrong. This is coming from the mind of someone who knows
next to nothing about how hardware is really made.

~~~
aidenn0
I think the bigger consideration is that the ARM IP is encumbered. You can't
tweak it and publish the changes you made.

------
comatose_kid
David gave a nice lecture on RAID over a decade ago when I was taking a
computer architecture class at Stanford.

Perhaps the next edition of his textbook (the bible of computer architecture)
should use RISC-V; it would probably help as a learning aid and spread the
gospel about RISC-V.

Of course, it's possible that the current edition of the text already does
this.

------
niix
Every time I hear "RISC Architecture" I think of the movie Hackers.

