From my experience with GD's STM32 clones, they did not have real flash, but copied all your code from SPI-like internal flash to main SRAM on startup. I wonder if they pulled the same stunt with the GD32Vs
What is inside an STM32 clone: https://zeptobars.com/en/read/GD32F103CBT6-mcm-serial-flash-...
The original page now returns 404; it was this board: http://dl.sipeed.com/LONGAN/Nano/Spec/Sipeed%20longan%20nano...
for $4.90 (https://www.seeedstudio.com/catalogsearch/result/?cat=&q=GD3...)
I still don't see anything in there that's not the usual way flash is accessed when it's mapped into the address space for XIP applications. Hell, a lot of the time vendors will stick in something nicer than a dead-simple direct-mapped cache like ST's implementation.
Also popular here because some folks imagine a new ecosystem of cheap RPi-type boards with no magical binary blobs. See, for example, lowrisc.org (though they seem far from shipping)
I thought the biggest blob in most Linux systems is the GPU driver, and RISC-V doesn't solve that.
At any rate, I hadn't heard about anyone trying to make a completely open GPU. That would be cool though.
On-GPU firmware is not that interesting or concerning, you can consider it part of the hardware.
It depends on how it can communicate with the external world. If it only eats numbers and draws pixels, then no problem; but if its driver nature (i.e., running at higher privileges than anything, including root/administrator) allows it to create a covert channel with other hardware (say, a network chip) and send vital information to the outside world, then it becomes a huge potential vulnerability.
In open source systems, CPU/GPU/system chipset firmware and closed device drivers are the places where malware can hide unnoticed for ages, and incidentally the ones where it can do the most damage due to the aforementioned privileges. Those should therefore be the first parts of a system for which we demand total openness.
A simple program with no access to files (therefore considered safe) that reads the CPU load and populates a remote graph with numbers can become an effective spying tool if paired with another seemingly innocuous program that has no network access (considered safe as well) but reads files and busy-loads one CPU core with values resulting from some encoding of the data it reads. They're 100% safe by themselves, but once paired (say because they're written by the same entity, or by different entities obeying the same government/s) they can exfiltrate information pretty easily. Back on the firmware topic: we can't know if there are any seemingly unrelated device drivers talking to each other unbeknownst to the system administrator, but should they do so, that would be the most dangerous backdooring toolkit ever conceived. As long as they stay closed, there's no way to know whether they do other things.
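The pairing described above is just a covert timing channel. A toy simulation of the idea (all names and the threshold are made up for illustration; real CPU-load measurements are far noisier and need error correction):

```python
# Toy model of a CPU-load covert channel: one "safe" process encodes
# bits as high/low CPU load, another "safe" process recovers them by
# sampling the load graph. This is an idealized, noise-free sketch.

def encode(bits):
    """Sender: busy-load the core (~0.9 load) for a 1, idle (~0.1) for a 0."""
    return [0.9 if b else 0.1 for b in bits]

def decode(load_samples, threshold=0.5):
    """Receiver: threshold the observed load samples back into bits."""
    return [1 if s > threshold else 0 for s in load_samples]

secret = [1, 0, 1, 1, 0, 0, 1]
channel = encode(secret)           # what the remote load graph would show
recovered = decode(channel)        # the receiver's view
assert recovered == secret
```

In reality the sender would burn CPU for fixed time slices and the receiver would poll something like /proc/loadavg, but the principle is the same: neither side touches both files and the network, yet together they leak data.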
So people don't like it if kernel drivers are closed source, but if all the functionality is moved to firmware and the open kernel driver just pokes the closed firmware that does all the work it's fine? What? How does that logic work?
It makes no sense to counter a hypetrain with intelligent arguments. :-)
Of course, you are right.
If you are jumping on RISC-V to save money, you have your priorities very wrong. On the other hand, if you are Huawei and can't do business with ARM...
Some of the engineers thought making our own CPU designs was a stupid idea and some thought that switching to ARM designs was a stupid idea. The only fact I have is that our CPU designs were not the main cause of bugs in our chips. If you asked the digital designers, they'd say things like, "Why are software engineers so interested in CPU designs? The kinds of embedded CPUs we need are easy to design. Implementing a correct power management state machine is hard. Implementing an efficient WiFi PHY is hard. Get off my lawn..."
Some problems with putting ARM CPUs in our chips were: a) integration engineering costs (i.e., finding a way to multiplex some existing pins on the package with the JTAG debug port), b) having to do price and contract negotiation with ARM (that part of the company was less efficient than the digital engineering team), and c) finding that the change you wanted to make to the chip was incompatible with the ARM contract you'd just spent 6 months negotiating.
That said, the chips certainly did their job at a good price point once you got everything stable and got fixes made to the binary blobs that you had no way to debug.
Yep. Sorry about that. Not that it was my fault. Anyhow, I guess my belief is that a company similar to CSR could put RISC-V cores in their chips now, which would avoid the problem you describe and some of the problems I described with using ARM.
Not trying to be hostile, just pointing out how costly "let's design our own soc, how hard can it be?" is.
I know a few people who joined ARM after CSR closed down but that could be just some offices shutting down.
(not an arm employee, btw)
Furthermore, that 1.5% covered more than just the CPU (peripherals, bus technologies, patents, manufacturing know-how).
I'm not saying WD is wrong, but I think they are overestimating how much they can save. This is probably just to get a better deal from ARM.
That would be enough people for a very advanced chip, but WD doesn't need to build the next supercomputer. Instead, they want a chip customized to handle their specific workload with high throughput, and probably some specialized instructions to put some algorithms in hardware. Moreover, their RISC-V chip design expense should shrink over time, both because they don't need to change chip micro-architectures very often and because as more companies switch to shared, open hardware, the costs are reduced.
Another example is Nvidia. They continue to hold an ARM license for several ARM designs and another license for their own custom Denver/Carmel designs. Despite this, they still opted to use RISC-V in their GPUs.
These companies manufacture chip designs for a living. They certainly understand the costs involved and have determined the immediate and long-term costs are lower vs the provided benefits.
Does that change with ARM? If you're licensing the ISA, or even bits of HDL, you're still fabbing your own stuff, right?
If it was a choice between pre-made RISC-V and pre-made ARM, then the licensing could play into a cost difference between models, but otherwise they'd both be pre-fabbed, ready-to-use products?
I think there is some merit to risc-v, but people pushing hard for its use RIGHT NOW don't seem to understand the challenges of SoC design and manufacturing.
An instruction set is an API. It's the architecture behind it that matters.
Yes, the internal design is the important part, but having a properly designed ISA makes that a lot easier, and we have seen both high- and low-performance chips that can compete with ARM speeds while having taken far less development time.
RISC-V is designed to allow individual companies to add their own secret sauce and to have a tool-chain that makes that easy.
We should remember that RISC-V is still incredibly young; it only officially escaped from the university 3 years ago.
And with the ecosystem available, there's less of a need to port to ARM.
- as a developer my RISC-V (assembly and system) skills will remain relevant and valuable
- by using RISC-V MCUs now I will avoid future family migrations
- RISC-V cores are already the MCUs of choice embedded in FPGAs and AI chips, and I won't need to learn another tool before being able to utilize them
- RISC-V enables collective community-enabled innovation (which was the prime driver for creating it)
- Rust runs on RISC-V MCUs (besides ARM)
- RISC-V's simplicity simply means fewer bugs
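On the Rust point: getting a bare-metal RISC-V build going is already a one-liner away with stock tooling. A minimal sketch, assuming rustup is installed and using one of the riscv32 targets Rust ships (your crate must be no_std for a bare-metal target like this):

```shell
# Install a bare-metal RISC-V target (rv32imac, no OS).
# Rust ships several riscv32*/riscv64* targets; this is just one.
rustup target add riscv32imac-unknown-none-elf

# Cross-compile the current (no_std) crate for that target.
cargo build --target riscv32imac-unknown-none-elf --release
```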
Fewer bugs in your code, or fewer bugs to encounter in the platform?
I've found that my bugs are usually processor-independent, and that most of the bugs in SOCs are not with the CPU cores, but the support components around them (e.g., DMA engines and timer registers with bizarre behavior).
Well . . . and cache coherency issues. But everyone has cache coherency bugs. :-)
Developers of compilers, debuggers and such are way more likely to introduce bugs with CISC architectures. Complexity doesn't end at the edge of the chip. It goes down the chain.
Debugging RISC object code is also easier, for the same reason.
RISC is a meaningless term today to describe design complexity. People see 'cmov' in x86 or the boot process or variable-length instructions or whatever, and think, "wow, how horribly complex this all is, it must be a huge design problem, and it's probably because of the instruction" -- but if they see some modern ARM machine do superscalar OoO execution in order to execute an instruction over multiple cycles and the compiler scheduled instructions taking that particular latency/throughput into account so it could generate optimal code for that uarch -- it's like, wow, this is all so simple, it must make everything so much easier. It doesn't, but it's worth asking what does the debate even mean at that point? All the actual complexity is elsewhere, and is (mostly) independent of the ISA.
Side note, but I have RISC-V hardware on my desk and one of the errata for this particular silicon is "the MMU may not catch and deliver all memory access violations correctly" -- meaning you just don't have memory protection sometimes, in some cases, on some days with some shirt colors! I'm pretty sure the ISA doesn't mandate that, and I'm also pretty sure it didn't come from the instruction decoder. Luckily, in this case, programmers have repeatedly proven they are extremely reliable at handling and estimating issues of memory safety (reliably bad at it).
Itanium is a counterexample to your argument.
>All the actual complexity is elsewhere, and is (mostly) independent of the ISA.
Which renders ISA complexity difficult to defend, particularly as RISC-V has demonstrated code density competitive with amd64.
Complexity is inherently bad, thus the use of complexity needs strong justification.
RISC-V is trying to make that a thing of the past.
People on this website just hype it to outrageous levels because the ISA is all 95% of them interact with at any level, and sometimes not even then, so all other factors/considerations are completely ignored in favor of just believing it's really The Best (and I say that as someone who owns real RISC-V hardware and am writing my own emulator, and want it to succeed.)
But you don't have to just make stuff up to sell it. You can just say "It's a pretty good ISA, it's freely available/modifiable, and there's a ton of good tooling already available for it". That's a pretty good sell on its own, to be perfectly honest.
x86 has never been a thing in MCUs, so ARM has more to lose.
> Almost all of those points are true of any mainstream architecture
No, that isn't true at all, not when you consider the end-game: they are not open source, they are not simple enough for that to make a difference, and they'll therefore never attract the collective creativity. Maybe some other new architecture will, though. Everybody should place their own bets.
> People on this website just hype it to outrageous levels
Though RISC-V certainly is in the hype phase, the hype is also entirely justified. From that point of view it seems you have a vested interest in some other architecture, and see that investment threatened.
ARM and MIPS require licensing fees, and spinning your own new architecture will make your chips irrelevant. RISC-V doesn't charge you to use their instruction set, and it's already supported by developer tooling.
It is unfortunate that they froze the base instruction set before people who understand performance optimization got a good look at it. But, ultimately, if the instruction density is no worse than Intel's, it won't be handicapped by that until something else comes along. When that happens, it will make the transition to the next easier.
I don't think it is significantly better than existing ones (including the now fully open source mips). And it is already significantly fragmented.
But it's the new thing and where the mindshare currently is. Many new ideas are tested on risc-v, and there exist multiple open source implementations.
Oh that's kinda cool. Playing with 8-bit processors is what first got me interested in low level computing and helped a lot of things "click" for me
From a commercial standpoint, the WROOM-02 is FCC pre-certified. An example is this Sipeed nano dev board with WiFi on board, which uses the esp8265.
If someone were to use an STM32 for business logic, why wouldn't they just use an ESP8266, since it's a microcontroller with WiFi, and the WROOM-02 is pre-certified?
Also, for clarification: the STM32 is based on the ARM architecture, whereas the Sipeed Longan Nano competes against the STM32 but is RISC-V based?
I would have thought that these would be pieces of hardware that I can buy. Instead, they seem to be designs for hardware which conform to the RISC-V ISA, which could then be fabbed by a semiconductor manufacturer or flashed onto an FPGA.
What marketing/technical terms distinguish the designs of these systems from their implementations? "Core", "processor", "MCU"/"microcontroller", for example? Are any/all of these uniquely constrained to describing either a design (on paper) or that design's physical implementation in hardware?
1. I can read the official specs without agreeing to an onerous license.
2. There are several open source implementations I can choose to run on an FPGA.
3. I can choose the simplest combination of features. Want a multiply instruction, but no branch prediction? Want caching but no MMU requiring multi-level address virtualization?
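Point 3 shows up directly in the toolchain: the ISA string names exactly the extensions you chose. A sketch, assuming a riscv64-unknown-elf GCC cross-compiler built with multilib support and a hypothetical source file mul.c:

```shell
# Base integer ISA only (rv32i): no hardware multiplier assumed,
# so the compiler emits a libgcc software-multiply routine instead.
riscv64-unknown-elf-gcc -march=rv32i -mabi=ilp32 -O2 -c mul.c -o mul_soft.o

# Same code with the M extension (rv32im): the compiler may now
# emit a single `mul` instruction.
riscv64-unknown-elf-gcc -march=rv32im -mabi=ilp32 -O2 -c mul.c -o mul_hard.o
```

Disassembling the two objects with riscv64-unknown-elf-objdump -d makes the difference visible: one calls a helper, the other uses `mul` directly.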
A large processor vendor has to employ a small army of processor designers. The trend will always be in the direction of adding more features and increasing complexity. Customers will write code to require those features, and the cycle continues.
Only very rarely will a fresh design appear having the right combination of attributes to give it a chance of being a viable platform for a long time. This is what makes RISC-V attractive to me.
FYI, in case you were wondering (not that it matters):
Seeedstudio is really ignorant toward its customers. Delivery of my order failed due to the local mail service, and the person on the other side of the customer "support" email refused even to file a request with the mail service, never mind compensation and/or a resend.
I'm getting a 404 too so I am guessing that the page was removed.