1. It doesn't have any onboard NVRAM (same limitation as the open-v). However, it does have a directly memory-mapped quad-SPI peripheral and an icache, which is a great alternative and might be better for applications that require a large amount of data. Note that an icache would still be required even with onboard NVRAM, because flash is slow relative to the core. You could also make swappable game cartridges, for example.
2. It has enough RAM and speed to run Opus.
3. The rest of the peripheral set is pretty barebones, no analog like the open-v has. No I2C or I2S without bitbanging, either.
4. The boot ROM has peripheral information stored in it. You might be able to have one binary that boots on different types of cores using this.
For low bandwidth there is RS232/UART/JTAG. Then there is I²C/SPI for networked devices. I²C ranges from 100kbit/s to 2.3Mbit/s, and SPI goes to 10Mbit/s or 30Mbit/s.
For higher bandwidth stuff there is whatever you want over USB, Bluetooth, 802.11BLAH Wifi, etc. Afaik USB can be considered 'open': you might need to 'register' for a certification/vendor ID/logo, but from what I understand, if you don't want those you don't need to bother. There are also http://pid.codes/ and other organisations that are giving away free PIDs.
There is the Wishbone bus. https://en.wikipedia.org/wiki/Wishbone_(computer_bus)
But that's got quite a few pins. It allows for on chip networking as well as external stuff. Also RapidIO (which has heaps of pins).
There is a RISC-V debugging standard, but I think that's protocol agnostic.
It would be better if the basic hardware — serial ports and such — was in a standard location for all RISC-V machines, and all the rest of the hardware was discoverable at runtime (like PCs, mostly).
On a PC, there is a serial port at a standard location. It's not "discoverable", but every PC you care about has one at the same address, and it's incredibly easy to use with a tiny bit of assembler. You can always print messages, even in the earliest parts of early boot.
On non-PC platforms (I've used ARMv7, ARMv8, POWER) there's a zoo of serial ports. There are at least a half dozen different bits of hardware, on different ports: self-describing, not discoverable at all, or mentioned in the device tree in multiple different ways. Some even sit behind PCI, so you need ugly hacks to output early boot messages.
Critically, if you get the wrong serial port, you cannot see any boot messages at all. There's no way to tell what's going wrong.
So I think, for serial ports, the PC approach is definitely, clearly better.
For other hardware, it should just be self-describing. PCI and USB are the model examples here. And in fact, why not just use those?
If RISC-V does this well out of the gate (like with IBM's old school Open Firmware) and has great shiny device tree support, I think we might end up like x86 -- where a single thumbdrive can boot almost any x86 machine under the sun.
Device trees are needed because manufacturers did not create self describing hardware.
This is impossible to do beyond trivial components. Most devices are complex systems of interacting components.
Also, do you want the same lazy manufacturer that couldn't bother to create a device tree create a complete hardware description ROM and get it right in the first attempt?
Configuration pages are a thing of the past; nowadays they should at best be used for confirmation and sanity checks. Device trees are the future and have already improved and simplified hardware management a lot (especially on ARM).
It would probably force the hardware manufacturer to think through its design a little bit more, which would be a good thing. I've seen enough of these SoCs with "complex systems of interacting components" to feel that a well-thought-out design that needs less static description of SoC/board/CPU-level details would be beneficial.
I really hope they just forgot to mention 100kB+ of RAM on that landing page, and the 16kB data is just a DCACHE.
Also note that Quad SPI flash is 4 bits wide, and is NOR flash.
At 320MHz, you should have an icache and paged code. A process that can clock this high should have no problem offering RAM densities well beyond 16kB. Better to mirror what the entire industry does: low clock speed and embedded flash, or a better process with faster clocks, more RAM, and external flash.
Quad SPI NOR is still SPI, which requires serial timings. You need to serialize address and data with each transaction. Parallel NOR has parallel address and data, and an order of magnitude improved throughput and latency.
There is really no point to 320MHz with such slow code memory.
What's the throughput like? Can this be used for data as well (which will need caching too)?
Because this suddenly makes the device much more interesting; for everything I've done with microcontrollers (which, I'll admit, tends to be abusive), I would happily trade performance for some more RAM.
The external QSPI FLASH just appears in the normal memory map, no demand-paging.
Normally used for eXecute-In-Place code (XIP) when the code is too large for the internal FLASH.
There is a performance trade-off: external QSPI FLASH might run its bus at (for example) 50MHz, but that's still much slower than internal FLASH directly attached to the bus.
I am now really confused.
It has a number of implementations, both fully open source ones (BSD licensed), and proprietary. This one is based on the open source Rocket Chip implementation (https://github.com/ucb-bar/rocket), which is a simple in-order design, something like ARM Cortex-M or Intel Atom. (There is also one open source out-of-order design called BOOM - https://github.com/ucb-bar/riscv-boom)
The performance of MIPS, which it is closest to, has never really been considered anything more than "acceptable". It loses to ARM (which isn't so RISC-y anyway) and x86, so I expect RISC-V to be about the same:
Compared to an 8-bit AVR in an Arduino it's definitely much faster, but compared to other 32-bit architectures, it is not.
It's not in my hands personally, but they seem to be over the vapor hump.
I found it an exciting enough product to drop $60 on, anyway.
Hardware has, in addition to the GPIO, UART and ADC/DAC that you're used to from Atmel, a USB 2.0 device controller (it says "OTG", which implies a host controller too, but sometimes people get this mixed up), a SD/MMC/eMMC controller for storage (no idea about SDIO) and gigabit ethernet.
I'm a little disappointed in the lack of wireless connectivity on the SoC, though I suppose you can make up for that with off the shelf USB devices (or a UART bluetooth radio).
You can probably get it to work with a lot of effort and optimisation, but there'll be very little room to do anything with it afterwards.
Afaik AC97 chips run at around 24MHz, are dedicated to the job, and even they stop at 20-bit resolution and just stick the data onto a data bus. Getting better than that is "hard", which is why all the manufacturers pretty much standardised around such a low standard.
However, while most Cortex-M chips have an I2S peripheral to integrate with an external DAC, the HiFive1 doesn't, which might cause some difficulties. Audio out can be implemented with the PWM peripheral, though.
24MHz is "enough" for 20 bits@96kHz ADC and some post processing.
But 20 bits@96kHz is not decent.
For reasonable SNR, you need at least 24 bits, and even then "the experts" offload to an external CPU.
For signal processing (less audio, more "controller") with high precision you need microcontrollers with the power of at least an early-2000s PC (several hundred MHz and single-cycle mul/div).
Raspberry Pi 3 is close, but it needs an external ADC/DAC.
Secondly, even if you did want to process at 96kHz, you'd have plenty of CPU left to do so. It's only 2x as intensive as 48kHz (this is a 32 bit CPU so using 16 bit vs 32 bit math is mostly the same, sans DSP instructions) and that amount of headroom is likely available, for example: https://www.rockbox.org/wiki/CodecPerformanceComparison
Thirdly, the article you linked talks about high end DACs but says nothing about the DSP on the card, other than that it has one, for doing... something (?)
Firstly, getting from uncompressed at the input to compressed is basic processing.
Seriously, you don't. Non-RISC instructions often take multiple cycles, so you can only do a tiny number of them between samples, usually just enough to compress the data to fit the bus speed without loss.
Thirdly, clearly you didn't rtfa.
Fourthly, go disagree on the teensy forum.
Radio signals below 50 kHz are capable of penetrating ocean depths to approximately 200 metres; the longer the wavelength, the deeper. The British, German, Indian, Russian, Swedish, United States and possibly other navies communicate with submarines on these frequencies.
-> That requires a minimum 100kHz sample rate (Nyquist: twice the highest signal frequency).
RISC-V is appealing, but if I'm stuck at 32KB or less of RAM I'd stick with the Parallax Propeller, which has 8 parallel 100MHz cores.
Memory: 16 KB Instruction Cache, 16 KB Data Scratchpad
I'm wondering if it's possible to combine that or dice it in some way? On top of that, the program resides in SPI flash (128 Mbit -> 16 meg).
EDIT: ok - the above makes no sense, so yes, on 16K on-board RAM (and the other is for cache).
Short of more info, I'd be willing to bet that some of that flash can be set aside (or used like) variable space (albeit at a slower speed), and the on-board memory is more for high-speed stuff (and you'd have to swap things in/out - though likely they'll have a library for all of that - maybe).
If all of that is true (or close to the truth) - well, I don't know if it would be better than the propeller or whatnot, but it certainly looks interesting...
Reading the infosheet on the processor:
It does seem like the flash can be used for data and program space, and it appears it can be read/written from the CPU, so it's kinda like the flash storage on the Arduino. I would imagine it can be used similarly, although slower, as variable memory (given a proper lib), and "paged" into the faster on-board 16k RAM.
You're also correct on the sizes of the ICache and Scratchpad. You can execute code which resides in the scratchpad, but can't store data in the I-Cache.
This is their embedded / tinker / maker single-board computer based around their E300 platform with on-board SRAM. It's more like an Arduino or a Pi Zero.
According to https://en.wikipedia.org/w/index.php?title=Parallax_Propelle... it is only up to 80 MHz.
The question: how viable is an open-source, publicly-auditable secure boot implementation? Not TPM theater or whatever, but a hardened hardware configuration that could be used to implement truly verifiable boot.
(If anyone from RISC-V is reading this, I think there is a very noteworthy amount of money in building a truly securely bootable reference design. Hopefully lots of people have already told you that.)
Next, regarding why only 16KB RAM...
I'm very interested in the potential of open source ISA, but admittedly completely ignorant about chip design.
With this in mind, I have a theory that the manufacturers deliberately provided a ridiculously low amount of RAM in order to make it impossible to run Linux on it and so keep it out of the mass market.
Considering the CPU is a full 320MHz, I don't think this is because they have some other product up their sleeve. Rather, looking at the fact that this is the first marketed product they have available that's an actual real chip (!!), I would expect that the chip itself and/or the chipset likely has bugs in it. They've gone through many internal revisions, and now they feel comfortable putting the chipset out there for public testing to get bug reports from the field.
My thinking is that applications that will play nice with 16KB of RAM will stress the CPU out significantly less than full Linux would, similarly to how doing basic tasks on a faulty x86 PC may work for years but compiling GCC will find broken bits in RAM sooner or later.
There's also the fact that such projects are also generally quieter as a whole than the thundering herd of people wanting to run Linux on things.
Don't forget, RISC-V has been nothing more than a bunch of VHDL for years, running on "perfect" FPGAs that don't require you to think about low-level electrical niggles and whatnot. If I'm understanding correctly, this is the first time a real RISC-V chip run has been done (?) and made available to the market.
Considering the success of ARM (or, more generically, the market sector of "little PCBs that run Linux"), RISC-V needs to stay competitive and attractive - and full Linux that constantly oopses in mm.c will make RISC-V look real bad real fast. I definitely want the ISA to thrive, and if my assumptions are correct capping the RAM makes an inelegant-yet-elegant sort of sense.
I expect RISC-V will be running Linux on real fabbed chips within the next two years.
Well, practically impossible. If you don't mind 300KB/s RAM (yes, KB/s, not MB/s) there's always http://dmitry.gr/index.php?r=05.Projects&proj=07.%20Linux%20... :D
Depends if you trust your fab. I suppose you can take random samples and decap them at considerable expense, but then the public has to trust the auditor.
> I think there is a very noteworthy amount of money in building a truly securely bootable reference design.
I disagree. I think there are two critical problems with this: firstly, getting the community to agree on what they consider "truly secure", and secondly getting enough people to buy a system that is (necessarily due to small production runs) slower and more expensive than comparable Intel or even ARM.
If you're willing to trust a manufacturer you can buy secure-bootable ARM devices today with OTP key regions that boot Linux (e.g. iMX). So the market for the proposed "open" system is only people who are willing to trust you but are paranoid enough to not trust one of the existing manufacturers.
I make some counter-arguments:
Secure boot forms the trust basis that the device is definitively running the code you put on it without modification, so it is arguably the most security-sensitive aspect of the system, at least as important as the kernel or network-facing daemons.
I'm confident a publicly-auditable open-source secure boot implementation would attract fairly reasonable academic interest from the security field and be hacked on (from theoretical design down to implementational edge cases) by the community until it was very very good.
That would help avoid this sort of thing - http://www.cnx-software.com/2016/10/06/hacking-arm-trustzone... - which is currently only an issue because vendor engineering teams are not perfect and there's no widespread collaboration. (There is, of course, also the likely truth that there are similar "vulnerabilities" in all commercial secure boot implementations. Think TSA007.)
The one issue I will acknowledge is that if such a reference design existed and was widely implemented, it would be a very good question as to which manufacturers had "accidents" in the manufacturing process near the secure-boot areas of the chips.
If I understand correctly, the other very major issue (which is kind of really ironic considering what I've just said) is that you have to sign NDAs to understand how the implementation works, and AFAIK even to just use it. So I (a random tinkerer) can't configure a "really trustworthily secure" Linux system, only a major manufacturer/system integrator/etc can. I understand this situation but from the standpoint of paranoid individual security it's crazy - security by obscurity, anyone?
If it is possible for me to play with OTP on iMX from a hobbyist perspective without shelling out for some NDA'd SDK, I'm tentatively interested for what it's worth.
Boot documentation is chapter 7. References "high assurance boot" functionality in the onboard boot ROM, but doesn't give out docs for the ROM. On the other hand, it's not a large ROM so you could just dump and reverse-engineer it.
On-chip SRAM is just expensive in terms of area. It's comparable to what you get on Cortex-M devices. There are no chips with enough onboard SRAM to run Linux. You can't really run Linux without an external DRAM interface, and then you have to find somewhere to put the DRAM. People keep forgetting about this on the Pi because the DRAM is stuck on top of the SoC package.
Sure, maybe if this is a success they'll pay the licensing fees for a DDR3 interface + MMU, or write one themselves. I think you'd also want at least 600MHz for Linux, 320 is kind of slow these days.
Theory, yes. Conspiracy theory, emphatically not, sorry for the misunderstanding. I have no ill will towards the RISC-V ISA and no disagreement with SiFive's operations. Like others here I was just trying to figure out the huge disparity between the CPU clock speed and the onboard memory, in my case with sorely insufficient understanding of the field.
> It is perfectly reasonable to do enough formal verification to ensure that your chip works first time.
Oh, okay then. That's really amazing, I didn't know that :)
> Linux is not going to "stress" your chip more.
Like I said, I'm a bit ignorant here. I was just thinking along the lines of how eg an i7 with faulty L2 cache could be re-designated as an i5, but in this case they're not sure what's faulty, etc.
There's also the fact that CPUs do have errata... (?)
> And nobody deliberately cuts out a viable market sector unless they've got another product to put in it.
Absolutely! What I was saying was that I theorize that this is a first-run design and that another chip was going to follow up. I'm increasingly confident I'm wrong about exactly why (eg, now I understand about the RAM problem)
> On-chip SRAM is just expensive in terms of area. It's comparable to what you get on Cortex-M devices. There are no chips with enough onboard SRAM to run Linux. You can't really run Linux without an external DRAM interface, and then you have to find somewhere to put the DRAM.
I see. Mmm :/
> People keep forgetting about this on the Pi because the DRAM is stuck on top of the SoC package.
(Has a look at a picture.) Oh wow, so it is. That's amazing.
> Sure, maybe if this is a success they'll pay the licensing fees for a DDR3 interface + MMU, or write one themselves.
Hopefully this run is an incentive to get some support for that!
> I think you'd also want at least 600MHz for Linux, 320 is kind of slow these days.
Hmm, that's quite true, yeah. (It's kind of sad how speed-hungry the kernel is nowadays - I have a 32MHz PDA with an OS (EPOC, precursor of Symbian) that draws draggable windows on a little monochrome touchscreen LCD, I can fling the windows around faster than the crystals can update :D)
I would absolutely buy a 320MHz Linux device with an open-source secure boot story though. Text communication doesn't need a snappy CPU.
I think that's RISC "working as intended"; the tradeoff was always supposed to be that you got to issue lots of simple instructions at high speed. I can't find what manufacturing process they're using (?nanometers) but it sounds like it's simply a "why not?" outcome of the design process that the chip is very fast. It doesn't have all that many peripherals either, by modern standards.
Edit: the answer is here - https://news.ycombinator.com/item?id=13067833 - other chips that have onboard Flash are necessarily slower. This doesn't, so it can be faster.
SRAM just takes up a lot of space. Hard to tell without gate counts or die shots but the 16k+16k could easily be over half the die.
EPOC is one of those extraordinary things that can be called a great technological achievement with a tiny dedicated fandom that nonetheless became a dead-end. Like Amiga, Concorde, BBC Domesday Project, etc. I do wish we could have snappier GUIs on our ten-times-faster systems.
> I think that's RISC "working as intended"; the tradeoff was always supposed to be that you got to issue lots of simple instructions at high speed.
> I can't find what manufacturing process they're using (?nanometers) but it sounds like it's simply a "why not?" outcome of the design process that the chip is very fast.
> It doesn't have all that many peripherals either, by modern standards.
This looks to me to have all the hallmarks of a first-gen MVP. A very decent offering likely with some long-term support, but an MVP nonetheless.
> Edit: the answer is here - https://news.ycombinator.com/item?id=13067833 - other chips that have onboard Flash are necessarily slower. This doesn't, so it can be faster.
I noticed that, it's a fascinating design tradeoff they picked.
> SRAM just takes up a lot of space. Hard to tell without gate counts or die shots but the 16k+16k could easily be over half the die.
> EPOC is one of those extraordinary things that can be called a great technological achievement with a tiny dedicated fandom that nonetheless became a dead-end. Like Amiga, Concorde, BBC Domesday Project, etc. I do wish we could have snappier GUIs on our ten-times-faster systems.
Mmm. I consider it insane that the Web is as slow as it is, but it makes a sad sort of sense. I've been wondering about making a cut-down general-purpose information rendering engine with a carefully-designed graphical feature set that's really easy to optimize. Would be really cool.
EPOC was awesome: some hand-wavy testing showed me that the OPL environment was fast enough to support full-screen haptic scrolling of information. It would have totally worked in just a few simple lines of code. If only full-panel capacitive touch were viable in '98 ;)
I'm definitely looking forward to seeing the many-core design!
For a realtime operating system, at least a Memory Protection Unit (to use the term ARM introduced for this limited form of an MMU) is very useful, since it can easily make the OS much more reliable if a process cannot write to memory addresses of the kernel or other processes.
EDIT: Or is such an MPU implied by the support for "Privileged ISA Specification v1.9.1"?
If you make use of memory-safe systems programming languages on bare metal, like Ada, SPARK, Rust, or Oberon-07, then it isn't usually an issue, since the unsafe code will be quite constrained.
For example, http://www.astrobe.com/boards.htm