Yes, this seems to be the case. The layout is only very slightly different from the HiFive1; most of the major components are in the same locations.
I was thinking about buying one of these and porting Tock OS (written in Rust) onto it for a fun personal project. There's been a surprising amount of work already in the embedded Rust space, and I believe the compiler support for RISC-V is already sufficient.
I was also thinking of porting Tock OS to the board and would love to collaborate.
I happen to have a HiFive1 board lying around. If you get to the point where you share your results (whether you're looking for collaborators or not), could you follow up with a link here? I'd love to try it out (alpha/beta versions are welcome).
I'm also working again on a space naval battle RTS game, which is entirely command-line driven. Written in Rust, using an unfamiliar game engine, because I can't make things easy for myself. :-)
- What does a real time command line driven game look like?
- What game engine?
The game is inspired by the Honor Harrington series of novels by David Weber. The ships in that universe can accelerate at 500g or more, so they can scoot across a solar system in a reasonable amount of time (hours), but they need to accelerate/decelerate the entire time to do that. Consequently, course selection and maneuvering are significant decisions that can decide the outcome before the fleets even engage in combat.
The individual units are intended to be as smart as possible, keeping proper spacing, following the rules of engagement, and such. You, the fleet commander, will be issuing orders much like a modern naval commander would. And I will be appropriating as much of the lingo as makes sense. "Set course 126 mark 31", "engage at maximum range", "target group baker-3".
The game itself won't try to accurately represent the actual size of a typical solar system, but will instead have everything scaled so that scenarios can play out in half an hour or so. I may also scale up the size of planets considerably, to provide obstacles.
Initial plan is for a 2-D game with fairly simple tactical and strategic views. But 3-D with a Homeworld style interface would be cool.
For example, here is how to get Ubuntu Linux to boot on an 8-bit AVR ATmega1284P by emulating ARMv5:
And as most people know (but in case they didn't), here is Linux on a hard drive microcontroller.
And probably the best option for performance, if it can be ported over: uClinux doesn't require an MMU.
That's awesome, even if it takes 3 hours to boot. :-P Almost everything can be considered a Turing machine if you're patient enough.
Notable omissions include: no DMA, no I2C, no ADC/DAC, and no good documentation. (The documentation that is available is inconsistent and poorly written. Some of the documentation references a nonexistent third SPI peripheral, for instance.)
The Arduino has a DAC?
> no I2C
Well, I'm out then. That's a big one.
And it's also not supported by any Arduino libraries to my knowledge, even with additional board support packages.
Granted, a very highly clocked Cortex-M0+. The clocks are about two and a half times the highest-clocked M0+s on the market, AFAICT. What it lacks is peripherals (especially an ADC).
With 16K of cache. I'm not sure how you ensure consistent performance, though - make sure all your code fits in 16K, but is that enough? And what about the first time through?
16K of cache is likely enough to ensure stable performance of any given function and any tight loops you're using, but it probably won't be enough for the entire program, so you'll still have misses that cause slowdowns. That probably won't be terribly noticeable unless you're trying to ensure timing over large functions.
Also looking at the specs it seems that the SoC has PWM, UART and QSPI but no I2C or DAC/ADC. That might make it a bit annoying to port some Arduino designs to it (although you can always bitbang the I2C). There's also no USB or MAC unless I missed something. So basically if you want to get data in and out fast QSPI is your best bet and even that doesn't seem to have a DMA so you'll have to use CPU cycles to copy everything in and out.
So in summary, it's definitely a cool board if you're interested in hacking on RISC-V specifically, but if you're just looking for a controller for your next DIY project, there are more fully-featured options for cheaper.
Well, for some reason the entire "maker" community thinks you need to shove the Arduino stack onto something before you're allowed to write software for it. Apparently that's a requirement for street cred.
In addition to the standard toolchain, there is a full Arduino SDK.
Precise motion control often uses control loops (the loop of code that checks the sensor and adjusts the output) that run at 20 kHz, 40 kHz, or higher. Since the control loop is a bunch of lines of code, this means you want to execute a bunch of lines of code repeatedly at this high rate of speed. For example, on a microcontroller with an 8 MHz clock running a 40 kHz loop, there's only enough time to run 200 instructions per loop! (8 MHz / 40 kHz = 200.) 200 instructions isn't a lot, so it would be tough to run a complex motion control loop in that space. And if you want to run multiple control loops to control multiple motors - forget it.
The Teensy 32-bit microcontrollers are a lot faster - I see a 72 MHz and a 180 MHz option. Either of those would be solid for motion control. However, if you want to control multiple motors and possibly perform some other functions, such as computing motion commands from G-code in the case of a 3D printer control board, you need all the speed you can get. In that case, these new boards offer plenty of clock speed to play with!
But to be honest you never want to do that in software (except on weird chips like XMOS ones). You'll almost always want to do things like that using hardware timers and counters.
The maths will be done in software but a Teensy would be fine for it.
Tempted to buy one just for making some toy cryptographic applications, as the open architecture is an appealing feature for those applications.
Ultimately, though, I think it'd be best if they can get China sold on this architecture. $2 Arduino-compatible boards or $7 Espressif boards with WiFi are hard to argue with for most people messing around in the microcontroller space.
This is happening. There are a few embedded RISC-V cores coming out of China now. It seems if you have the choice of paying to license an ARM Cortex-M core or downloading Rocket Chip off GitHub, then people are going the download route.
Edit: This is not to say that RISC-V is more popular than ARM in the embedded space. ARM obviously has huge momentum and a vast installed base (billions of units).
Makes sense though. First mover is probably a good PR move for SiFive.
If it has to be RISC and open, why not use something that already has an existing infrastructure and is well established? E.g., fully open source implementations of the SPARC architecture have existed for a long time already.
More information on the original design (not quite the final design) of the Compressed extension can be found in the original research paper: https://people.eecs.berkeley.edu/~krste/papers/waterman-ms.p...
And ... SPARC. Seriously? Register windows were a failed experiment.
Also, how are register windows a failure?
People also like to point out that the original NIOS processor from Altera had register windows but they were eliminated from the NIOS II. What they forget to mention is that Altera claimed this allowed them to make a smaller core "without hurting performance too much". Which means that the register-windows version was faster, not slower as they want to imply.
Special bonus/origin story: completely open for academic research and tinkering.
It is false that the density is 'underwhelming'. Thumb, while old, is extremely good and specifically designed for density.
With Thumb on 32-bit they are about on par, but on 64-bit RISC-V wins against ARM. 32-bit ARM is really the only thing that can compete.
They had good reason for not using something that exists. SPARC has one open specification, but the version after that is not open anymore. It has a number of technical problems for the modern world. There is not enough open software and hardware to make the argument for adopting it worthwhile.
Furthermore, it's monolithic, while RISC-V was designed from the ground up as a modular ISA where the same basic software stack can run on deeply embedded systems and HPC.
And this is why they will fail. RISC-V suffers hugely from NIH syndrome and would have no chance against any competition in a real market without that artificial hype. They need to get better.
> They had good reason for not using something that exists. SPARC has one open specification but the version after that is not open anymore.
The fully GPL-licensed OpenSPARC T2 is more advanced than anything RISC-V has to offer, even though it is ten years old. Why reinvent the wheel when you can build on top of existing solutions that have proven themselves on millions of machines, including two top-100 computer clusters?
> RISC-V was designed from the ground up as a modular ISA where the same basic software stack can run on deeply embedded systems and HPC.
This is also true for SPARC.
I'm sorry, but that is nonsense. An ISA (unless it's an utterly terrible one) simply is not what determines performance.
RISC-V will be useful for performance because you can make good cores far more easily than if you used any other ISA. RISC-V is well optimized for performance, and future standard extensions will give the micro-architect a lot of options to make performant chips.
Furthermore, why do they suffer from NIH syndrome? The whole ISA is quite literally a relatively conservative design that specifically builds on the knowledge gained by others over the last 30 years. It's the exact opposite of NIH. The only NIH you are complaining about is that they did something new at all.
> The fully GPL-licensed OpenSPARC T2 is more advanced than anything RISC-V has to offer, even though it is ten years old. Why reinvent the wheel when you can build on top of existing solutions that have proven themselves on millions of machines, including two top-100 computer clusters?
SPARC is now owned by Oracle, and only SPARC V8 is an open standard. The OpenSPARC T2 is V9. Do you really think it's good to start a revolutionary new compute project on something so strongly tied to Oracle?
You are aware that some of the same people who helped design SPARC also designed RISC-V? You can listen to their explanations of why they didn't want SPARC, especially not for what is designed to be a universal ISA.
> This is also true for SPARC.
No, it's not. SPARC is not a modular ISA in the same way RISC-V is, and the RISC-V designers believe that a modular ISA will be needed.
> only SPARC V8 is an open standard. The OpenSPARC T2 is v9.
"Source code is written in Verilog, and licensed under many licenses. Most OpenSPARC T2 source code is licensed under the GPL." - Wikipedia
> No, it's not. SPARC is not a modular ISA in the same way RISC-V is, and the RISC-V designers believe that a modular ISA will be needed.
"The "Scalable" in SPARC comes from the fact that the SPARC specification allows implementations to scale from embedded processors up through large server processors, all sharing the same core (non-privileged) instruction set" - Wikipedia
> "The "Scalable" in SPARC comes from the fact that the SPARC specification allows implementations to scale from embedded processors up through large server processors, all sharing the same core (non-privileged) instruction set" - Wikipedia
Yes. SPARC is a RISC, and therefore it can scale well in implementation. RISC-V, however, has taken the modular approach to ISA design far further than anything else has so far.
Again, maybe you should actually read about the design of RISC-V and why they didn't want to adopt SPARC.
You accuse me of spreading misinformation, but you don't seem to know what the differences between SPARC and RISC-V are.