
5th RISC-V Workshop Proceedings - razer6
https://riscv.org/2016/12/5th-risc-v-workshop-proceedings/
======
asb
You might also be interested in my notes from the event:

[http://www.lowrisc.org/blog/2016/11/fifth-risc-v-workshop-da...](http://www.lowrisc.org/blog/2016/11/fifth-risc-v-workshop-day-one/)

[http://www.lowrisc.org/blog/2016/11/fifth-risc-v-workshop-da...](http://www.lowrisc.org/blog/2016/11/fifth-risc-v-workshop-day-two/)

------
Cyph0n
I'm particularly interested in Derek Atkins' talk on low-power crypto
algorithms. He claims to have developed a DSA algorithm and a key exchange
protocol that are both quantum-resistant and operate on very small numbers.
Unfortunately, I couldn't find any published work on these algorithms outside
of marketing material from his company's website (SecureRF).

I was, however, able to find some slides on Walnut DSA [1], but I haven't
worked through the math yet, as I'm lacking some required background.

[1]: [https://www.nist.gov/sites/default/files/documents/2016/10/1...](https://www.nist.gov/sites/default/files/documents/2016/10/19/atkins-presentation-lwc2016.pdf)

------
rwmj
Anyone know if / when videos will be available?

------
Symmetry
I thought the talk on "Enabling hardware/software co-design with RISC-V and
LLVM" was particularly interesting. Designing a chip around your specific
code with that sort of fast feedback loop looks really compelling if your
volume is big enough to make it worthwhile.

------
BenoitP
Trying to make sense of what the future will be made of, and how to be an
early user of the value it unearths.

So basically silicon scaling is hitting some walls, and the only way to provide
more performance is through hardware specialization, which we are already
seeing with Google's TPU [1] and AWS's new FPGA instance type [2].

On top of this, hardware design is getting cheaper and faster to iterate on
with RISC-V and its associated ecosystem and philosophy.

----

From OP's link, in "OpenSoC System Architect: An Open Toolkit for Building
High Performance SoCs" [3], slide 3:

> Continued increases in parallelism and heterogeneity will require advanced
> runtimes, programming environments and compiler optimizations in order to
> take full advantage of these new architectures

----

Questions abound:

* Runtimes are supposed to provide reduced complexity, regardless of hardware; but the hardware is changing under their feet. Taking the topology of a manycore processor into account is paramount to performance. Can it really be abstracted away in the current runtime models we have? How do you influence programmers to be aware of it? Same thing for memory models. There will be vast amounts of diversity here. Will we see runtime support reasonably early?

* Speaking of abstractions, will there be a clear winner on this new hardware? Threading? Message-passing? A runtime accommodating lots of different ones?

* Will we see large amounts of library code go into hardware (these come to mind: http header parsing, JSON serialization, matrix multiplication, GC[4])?

* Could JITs offload things to specialized hardware, or is it a pipe dream? Intel is supposedly doing stuff for letting a JVM talk to an FPGA at runtime (probably following their Altera acquisition). No further news about this has transpired since the announcement at JavaOne.

* Will these SoCs each necessitate specialized abstractions/languages/DSLs? With one runtime to bring them all and in the silicon bind them?

* The Graal/Truffle compiler [5] structure is doing well at optimizing across languages. Would it make sense for optimizing the way SoCs talk? Would it make sense for providing abstractions over vastly different hardware? Will there be one VM/compiler to bind them all, x2 (languages + hardware)? I'd love chrisseaton's opinion on this.

* What will be the threshold for cloud providers to provide silicon as a service? E.g. you have FPGA code for several customers; what would be the level of income at which getting it into silicon automagically in the datacenter makes sense?

* More generally, will there be something new under the sun? Will there be entire new industries, application categories made reachable by hardware getting specialized? Or is it just some improved speed here and there?

* How can a user position himself so as to make a profit from it? List current workload patterns, and offer ports to cheaper/faster hardware? (Waiting to do it through a runtime?) Or listen to newly enabled applications, and try to be part of the tide?

* How can a chip designer position himself so as to make a profit from it? Make a Node.js_v7.2.1™ chip? Double down on a specific metric (be it latency, throughput, floating point operations, reconfigurability)?
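The "library code into hardware" and JIT-offload questions above boil down to a dispatch problem: the runtime probes for an accelerator and routes calls to it, falling back to software otherwise. A minimal sketch, where the accelerator probe and its `parse` API are entirely hypothetical (real runtimes do the analogous thing today with CPU feature detection before picking a SIMD code path):

```python
# Hypothetical sketch of software/hardware dispatch for a library
# routine (here: JSON parsing). The accelerator and its API are
# invented for illustration; the software fallback is real.
import json

def _find_json_accelerator():
    # A real runtime might probe a device file, a PCIe ID, or an
    # FPGA partial-reconfiguration slot here. We have no such
    # hardware, so the probe always reports "not found".
    return None

_ACCEL = _find_json_accelerator()

def parse_json(buf: str):
    """Parse JSON on an accelerator if one exists, else in software."""
    if _ACCEL is not None:
        return _ACCEL.parse(buf)  # hypothetical hardware call
    return json.loads(buf)        # portable software fallback

print(parse_json('{"riscv": true, "cores": 4}'))
```

A JIT doing this at runtime faces the extra wrinkle that the probe result can change (reconfigurable fabric), so the dispatch point has to stay patchable rather than being burned in at compile time.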

[1] [https://cloudplatform.googleblog.com/2016/05/Google-supercha...](https://cloudplatform.googleblog.com/2016/05/Google-supercharges-machine-learning-tasks-with-custom-chip.html)

[2] [https://aws.amazon.com/fr/ec2/instance-types/f1/](https://aws.amazon.com/fr/ec2/instance-types/f1/)

[3] [https://riscv.org/wp-content/uploads/2016/12/Wed0900-OpenSoC...](https://riscv.org/wp-content/uploads/2016/12/Wed0900-OpenSoCSA-Fatollahi-Fard-LBNL.pdf)

[4] some elements in slide 25 of: [https://riscv.org/wp-content/uploads/2016/12/Wed1415-web-RIS...](https://riscv.org/wp-content/uploads/2016/12/Wed1415-web-RISC-V-jikesrvm-Maas-UC-Berkeley.pdf)

[5]
[https://wiki.openjdk.java.net/display/Graal/Publications+and...](https://wiki.openjdk.java.net/display/Graal/Publications+and+Presentations)

