[Copying from the other item on this where I was answering a comment about why this is more expensive than a Raspberry Pi]
You have to understand why the Raspberry Pi is priced the way it is: it's because it is a side effect of the massive production of Broadcom chips for tablets, phones, embedded, etc. Millions and millions are made. A relatively tiny number of these find their way into developer boards. The economies of scale mean they can be very cheap.
RISC-V doesn't have this ... yet ... but there are several Chinese manufacturers currently making millions of chips for consumer devices with those going through foundries right now. So sooner or later the economics will work out for a "RISC-V Pi". I'd be surprised if it doesn't exist by 2022.
For now this SiFive board is IMHO the best PC-like developer experience for RISC-V. [Disclaimer: Red Hat works with SiFive]
I know this a losing battle, but I can't help but pipe up from time to time: that's a disclosure, not a disclaimer.
I'm grateful to everyone putting in the work to make RISC-V happen, open hardware is the only way out of the increasingly grim world of 'secure boots' and the like, which seems hell-bent on making general purpose computing on a trusted platform a thing of the past.
"Secure boot" is a perfectly fine and desirable thing. It makes evil maid attacks more difficult and has many other benefits.
The problem is that the current crop of "secure boot" implementations focuses squarely on the average consumer that wants to delegate trust to the manufacturer, and, at best, only pays lip service to users who want to control their own hardware (see: Pixel devices and the like with closed bootloaders/TrustZone but an escape hatch for the OS) or nothing at worst.
Almost every "secure boot" CPU in modern smartphones and such is already user-control-friendly, it's just that they aren't being shipped that way. They get locked down at the factory. The CPU architecture has little to do with any of this.
If you want an example of what an unlocked secure boot device is, look at the Nvidia Tegra devkits. Those come without the public key fuses blown. You can burn in your own public key and then they will only ever run firmware you signed, forever.
ARM devices without secure boot, or with user-control-assertable secure boot exist. And RISC-V devices locked down at the factory will exist.
Isn't RISC-V just an open standard ISA? The hardware implementation might be closed source. It's still possible to put secure boot features on an implementation.
Sure. It makes the world I'd like to live in possible, it doesn't guarantee we'll get it.
But foundries will burn whatever chip you have the money to pay for. Having a robust open standard ISA lowers the barriers enough that I'm confident we'll see free-as-in-freedom CPUs and GPUs come out of the project.
>So sooner or later the economics will work out for a "RISC-V Pi".
There is already pretty cool stuff. Look at this developer board: you get a dual-core 64-bit RISC-V chip + an ESP32 (a full MCU for WiFi + Bluetooth) on the same board, and a camera, for $24. That RISC-V chip is optimized for processing neural networks.
I have one of these and can confirm it's pretty good.
Don't expect a smooth out-of-the-box experience (things like the pin numbers don't match in different versions) but the hardware works and the software works enough to be able to use it.
The SiFive HiFive Unmatched comes in the mini-ITX standard form factor to make it easy to build a RISC-V PC. For the first time, standard industry connectors such as ATX power supplies, PCI-Express® expansion, Gigabit Ethernet, and USB ports are present on a single-board RISC-V development system.
Cost aside, I think using the PC ecosystem has a lot of advantages over the pi ecosystem.
I was just wondering how you define PC. Neither form factor nor "standard connectors" (whatever that means - which RPi connector is not "standard"?) is among my criteria.
I'm getting frustrated here. Could you guys stop for a second pretending that your "opponent" is dumb and try to think outside the box?
Again, to me these are entirely arbitrary criteria. They are not "PC standards", they are some of many standardized connectors, form factors, etc., and in fact a matter of habit, not defining criteria.
Let me put it differently:
- So is a DTX-Board with an ARM-SoC a PC?
- Is an ATX-Board with a PowerPC but without PCIe a PC? What if we add PCIe?
- Is an SBC with an AMD APU powered by a USB power supply a PC? What if we add an M.2 port?
See where I am going? These are terrible criteria.
As someone who is relatively a layperson (as in: I know computers, but I'm not a hardware person), I understand the difference as "can I buy some standard components off of Amazon/eBay and lego-build a computer with fairly powerful components like GeForce/Radeon-type GPUs and M.2 storage", versus something that's basically a sealed box/board (aside from more advanced stuff I won't do, like soldering chips to a PCB) and/or doesn't have access to desktop-class components (as far as I'm aware, you can't reasonably stick a desktop GPU on an RPi).
The entire ecosystem surrounding modern descendents of the IBM PC compatible.
Normally this would also imply an Intel ISA, which dictated a multi-component chipset (north bridge/south bridge). This in turn implies a multi-voltage power supply with a standard plug or plugs derived from the one in the PC AT.
The PC ecosystem also features a set of peripherals, such as floppy drives, hard drives, and CD-ROM drives. These peripherals have their own buses, starting with MFM, and now IDE and SATA. In some cases SCSI entered into the mix, but most consumer-oriented PCs did not use it.
These drives were also powered by the same power supply as the rest of the PC, with an evolving set of connectors and electrical standards.
The Mac took a parallel course, but featured its own set of proprietary connectors and buses, as well as SCSI, for years before converging with the PC ecosystem in part in later years.
Many of these de-facto and cloned standards were adopted by standards organizations, such as EIA/TIA and IEEE. They were joined by PCI buses, which were used in a diverse set of architectures, including Sun. Later serial variants were developed for the PC and Mac ecosystems, driven by demand for faster graphics accelerators, with a side jaunt to AGP.
At the same time, embedded devices evolved around a simpler, usually single-chip chipset, and often single-voltage board designs, with the exception of the CPU core voltage in some cases. These largely used memory-mapped IO with peripheral chips coupled to the memory bus, as well as slower buses such as I2C and SPI. These devices were usually power constrained, having quickly moved into the mobile realm with early PDAs such as the Newton. They mostly developed around the ARM chip, which was so low power in its early versions that it could run without voltage on its primary bus.
As the PC ecosystem has evolved, it is now largely built around the PCI bus. PCI is CPU-agnostic, so the CPU can be replaced with a different ISA while keeping the rest of the system intact.
The PC is also historically modular, so substitutions such as those you suggested can generally be made while keeping the PC nature of the system. M.2 is a modern bus that has been adopted into the ecosystem primarily for laptops, though also featured in some desktops.
Modern embedded devices are beginning to feature PCI interfaces, though these are only recently appearing in consumer electronics. They have been used in storage-centric chipsets for much longer.
The PC also implies a modular memory bus, with chips or modules complying with a logical and electrical standard, and identifying chip that can be probed by the chipset. In some cases the modularity is reduced and the chips are soldered to the mainboard.
PCs have also begun to adopt single-chip chipsets and System-on-Chip designs, starting with some embedded Intel designs used in mobile tablets and the UMPC. These brought together the classic north bridge and south bridge as well as graphics (in this case PowerVR, coming from the embedded world). AMD64 chips also began to include the memory controller on die, leading up to the mentioned APUs that can be powered, with the help of a standard embedded power IC, from a single low-voltage source.
Which also implies that the mentioned "criteria" are not good, since it is not a clear-cut case. It is not binary; I cannot definitively say when something "starts or stops being a PC" or where "non-PC" begins. There is a huge grey area nowadays.
Device tree doesn't really have an embedded focus per se. It comes from Sparc and PowerPC workstations.
And ACPI doesn't really have a standard root bus, it's a soup of descriptor tables and virtual machine blobs you have to run and trust, along with veritable mountains of patches for those vm bytecodes to fix firmware issues.
IMO Device Tree is the right way to go for nearly all platforms.
I'm going to try to describe something I only half understand in the hopes that someone who does fully understand it will come by and provide a better description.
For normal PC hardware, you have a SATA controller, the device company commits a driver for it to the Linux kernel tree, and thereafter every subsequent version of Linux can use the SATA controller in any device with that kind of SATA controller. If the device company doesn't provide a driver but the hardware is popular, somebody else reverse engineers the chip and eventually the same result obtains. Likewise for the network controller, the GPU and so on.
Typical ARM devices like cellphones are beleaguered by some kind of shameful omnishambles whereby that doesn't work. The device maker provides a hideous binary blob with the device, it only works with that specific kernel version and everything is terrible. The exact source of the tragedy is the part I'm not quite clear on.
But making whatever that is not apply to RISC-V boards would be highly satisfactory.
My take is that this is the effect of the requirements of the companies producing hardware and is entirely unrelated to the CPU architecture. On desktop and server, consumers have a very strong demand that off-the-shelf OSes Just Work (so they can install stock RedHat or Windows), and so there is pressure on hw manufacturers to create and follow standards. Breaking away from the consensus standards is penalized, not rewarded. You can see this happening as Arm moved into the server space, too -- server Arm is much less heterogeneous than embedded Arm.

Conversely, in embedded devices there is little or no pressure to standardize, because often the system software will be a custom build for that device anyway, and end-users aren't expected to try to install a new OS on it. In this world, differences in hardware tend to be rewarded rather than penalized -- the funky nonstandard feature in your SoC may be what persuades the h/w designer to pick you rather than a competitor, implementing something in a weird way can shave a bit off power consumption or just save time in the design process, and so on. And because there's no requirement to have the end user install a new OS, the temptation to save time in development by not trying to upstream kernel changes is often overwhelming.
The difficulty with 'dev boards' like the RPi is that for economic reasons they're made with SoCs from the high-volume markets of the embedded world but sold to people who want to use them like the standardized hw of the server world. This mismatch tends to be annoying and inconvenient.
Anyway, my take is that for RISC-V the dynamics will be exactly the same -- in the embedded world there will be a profusion of different hardware and a lot of binary blobs and non-upstreamed drivers; in the server world things will be nicer; and dev boards will be more like the embedded world than you would like.
(Also, x86 is really unusual in having such a uniform every-machine-looks-the-same ecosystem; this has happened by historical accident as much as anything else. Of course most people only have experience with x86, but it might help to try to not think of absolute-x86-monoculture as 'normal' and wider-variety as 'weird', when it's the other way around :-))
It "doesn't work" on ARM simply do to the extreme heterogenity. To ask for otherwise would be asking for AMD drivers to work for an Nvidia graphics card.
Additionally, PCs do in fact have tons of glue and support systems that match how ARM-style SoCs work. They paper over it with ACPI, which is based around a VM that has to be run in the kernel and is basically a giant binary blob, in contrast to device tree's description of the components and how they're connected.
> It "doesn't work" on ARM simply do to the extreme heterogenity. To ask for otherwise would be asking for AMD drivers to work for an Nvidia graphics card.
If what you're saying is that on ARM there are 15,000 different kinds of USB controller that all need their own driver, that's one thing. But if all you're saying is that there are 25 different USB controllers and 25 different drive controllers and 25 different network controllers and that means you get 15,000 different possible combinations, the fact that every combination needs its own separate drivers is the bug. Even if various controllers are combined into one SoC.
> They paper over it with ACPI, which is based around a VM that has to be run in the kernel and is basically a giant binary blob, in contrast to device tree's description of the components and how they're connected.
It seems to be doing a useful thing in abstracting away the various binary blobs into a stable interface that allows them to continue operating with newer kernel versions.
Getting rid of the binary blobs entirely is a separate fight, presumably related to getting open source firmware (rather than drivers).
It’s not a matter of combinatorial explosion or binary blobs...
On many non-PC platforms, you’re given a thousand “pins” to which you’re free to connect whatever circuit that you conceived for your product, be it green and red buttons, or backlight control circuits, or temperature sensing inputs, or SATA controllers, or 4 bit 74 series counter repurposed to switch pin 567 between SATA controller configuration interface output and battery temperature gauge input and laser ranging sensor power modulation output and self destruction device detonation cord output.
There’s no way, or even motivation, to describe those board-specific miscellanies in such a way that Linux kernel drivers can dynamically consume and adapt to them, or decide how to behave.
This is not an issue for the x86 PC platform, because every x86/x64 PC is either a 100% clone of the IBM 5170 “PC/AT” to run IBM DOS 5.0 or Microsoft MS-DOS 5.0 binaries without ever catching fire, or a Microsoft Windows Logo Program certified product that boots into Windows of the same era from a binary installer disc/USB key.
> There’s no way, or even motivation, to describe those board-specific miscellanies in such a way that Linux kernel drivers can dynamically consume and adapt to them, or decide how to behave.
It doesn't seem like an intractable problem to standardize and document this sort of thing. Once you know that pin 45 is connected to a model ABC123 backlight controller, you know to communicate with it on pin 45 using the ABC123 backlight controller driver. If the same pin is used for multiple outputs, no problem, standardize the method for switching between outputs and describe which pins are switched to which devices using which control pins in a machine-readable table.
Apparently they haven't done this, so how do we get them to start?
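To make that concrete, here's a minimal sketch (in C, with entirely made-up names -- this is not any real kernel API) of the kind of machine-readable pin table described above. It's essentially what devicetree already provides in a standardized, parseable form:

    /* Hypothetical machine-readable board description: which device sits
       on which pin, and which mux/control pin selects it. Illustrative
       only; every identifier here is invented. */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    enum device_kind { DEV_BACKLIGHT, DEV_TEMP_SENSOR, DEV_SATA_CFG };

    struct pin_binding {
        uint16_t pin;           /* SoC pin number */
        enum device_kind kind;  /* what is wired to it */
        const char *driver;     /* driver to bind, e.g. "abc123-backlight" */
        int16_t mux_pin;        /* control pin that switches the mux, -1 = none */
        uint8_t mux_state;      /* mux setting that selects this device */
    };

    static const struct pin_binding board_pins[] = {
        {  45, DEV_BACKLIGHT,   "abc123-backlight", -1, 0 },
        { 567, DEV_SATA_CFG,    "xyz-sata-cfg",     12, 0 },
        { 567, DEV_TEMP_SENSOR, "battery-temp",     12, 1 },
    };

    int main(void)
    {
        /* A kernel would walk this table and bind drivers; here we just
           print it to show the data is trivially machine-consumable. */
        for (size_t i = 0; i < sizeof board_pins / sizeof *board_pins; i++)
            printf("pin %u -> %s (mux pin %d, state %u)\n",
                   (unsigned)board_pins[i].pin, board_pins[i].driver,
                   (int)board_pins[i].mux_pin, (unsigned)board_pins[i].mux_state);
        return 0;
    }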
Use a standard power supply, put it in a case, and maybe develop on it. And PCI-Express opens up possibilities the Pi (not Compute) doesn't really have access to.
I have many Pis that I love, but you have to admit, there are a few compromises made for a lower BOM cost. That is not a requirement for a $600+ board.
I struggled with a power supply for the pi all the way up to the pi 4 - always getting the low power indicator.
I struggled with the pi SD card - extremely slow and always the possibility of corruption.
I struggled with pi graphics - none of my projects ever had accelerated graphics, not even blits.
I struggled with pi USB - at first it was power and speed, now it might be in better shape but not all the way there. The pi is still limited to ~ 15w.
When you get to the pc ecosystem, and it's really just a bit more expensive, you get a step up in capabilities. You get all the voltages. You can get a 300w or a 1500w power supply.
You can add a graphics card.
You can put it in a case that has room for more than just the board. Say, I/O?
I wish the pi had a nice metal case that had a 2x the width allowing for an enclosed breadboard or the like. Or a case with all the connectors on one side.
> Pi 4 has pci-express, no?
really? where is the slot? (I said not compute) ;)
btw, I'm not anti-pi, the opposite in fact.
Maybe I could say it in another way -- what if you could have an itx form-factor pi? I think that would be very exciting.
Not the pi, but the vaguely-similar (better, IMHO) https://www.pine64.org/rockpro64/ has an x4 PCIe slot. Works great for a SATA/RAID controller, but still has plenty of "embedded" limitations - you can't just plug a graphics card into it.
Hmm, I always assumed that there were missing/unsolved pieces of the driver that would need to be fixed first, based on the way that support for these devices seems to need to be solved one by one [0], but it looks like at least someone has made progress on a different board [1].
> You have to understand why the Raspberry Pi is priced the way it is: it's because it is a side effect of the massive production of Broadcom chips for tablets, phones, embedded, etc. Millions and millions are made. A relatively tiny number of these find their way into developer boards. The economies of scale mean they can be very cheap.
That was the case for the very first version, but didn't the subsequent models use SoCs made specifically for them?
Development boards are typically more expensive, since they are produced in lower numbers, and have features a consumer is not interested in (debugging, etc.).
Unless the manufacturer pushes development boards hard to e.g. pull developers into the new ecosystem quickly, then they might get sold at a bargain. But those are then usually not usable in place of the "real thing" (that would cut into sales).
Oh, well I guess you can use it as a development board - granted. Never occurred to me.
Although I would expect better debugging interfaces, less hacky hardware setup, a reasonable boot-loader, etc. Some things improved over the generations, true.
That's not what I've experienced: Arduinos are cheaper, Teensys are cheaper, Adafruit boards are cheaper, many others too.
And that's just regular retail prices at e.g. Mouser, or even the official websites — without even getting into the super-cheap Chinese stuff on AliExpress/etc.
— Why the downvotes? IDK, maybe you meant more powerful boards than those?
Your examples are all for simpler chips - more microcontrollers rather than application processors (with the chip on the Teensy 4.x being the closest to the latter category; it's a bit of a hybrid). Those kinds of boards also typically do not expose much that's specific to the capabilities of the chip, but rather mostly just raw access to the IO pins + some helper circuitry. That's not to say you can't use those to develop things with the chips, but it's a somewhat different market.
Whereas from a classic development board it's typically expected that it provides access to as many capabilities of the chip as possible, and more complex chips have a staggering amount of those. E.g. taking the chip on the Teensy, since it's the most high-end one from your examples: it has Ethernet support - add an Ethernet port. It has CAN support - add external CAN circuitry. It has USB - add USB ports. It supports low-level debugging through JTAG and SWD - add that, either a basic debugger or at least a space to connect an external one. It has a display interface - add a dedicated header for that, or maybe even just add a display. I think it supports eMMC flash - add space for that. That makes these boards a lot bigger and more expensive, but it means a developer can start with whatever combination of things they want to try, and if the alternative is spending a day figuring out and hooking up all the external bits they need on a simpler board, the higher price doesn't matter anymore. (Especially coming from the days when there wasn't as big an ecosystem of breakout boards for lots of things available.)
The Raspberry Pi is not pervasive and popular because of the architecture. There are millions of single board computer knock off versions of the Pi, new ones coming out every day with different features. None of them catch on. Why would a random new SBC, even at the exact same price or lower, have any chance of replacing the Pi with no identifiably better user facing features other than “it isn’t Arm and it does the exact same thing or less things!!! And it is vastly incompatible with all the software ecosystem and uses you actually care about!”
This is like me handing you a RISC-V iPhone (same price) and 30% of the apps work. Would you ever, ever, ever switch? No.
Sorry, the Arm architecture skyrocketed to popularity because of an ultra-long-term strategy and a few rocket-ship design wins like Android and the iPhone. I am afraid the door is shut.
Tizen might have been a good idea, but without a rocket ship to subsidize the improvements of the OS, it died. RISC-V boards are not different.
I am looking for a rocket ship, which is a new high-volume use case which is not properly covered by the current set of solutions. Self-driving cars? AI cameras? Robots? Nothing looks like it needs RISC-V in particular; none of it seems likely to produce iPhone-scale novel designs that will eat the rest of the market out from under Arm.
China adopting RISC-V to achieve IP independence will likely be that killer app.
ARM's popularity is going to significantly reduce the porting challenge. The gap between supporting one platform and two is an order of magnitude harder to cross than the gap between two and three.
SBCs are not the next big thing. But still, there are other SBC platforms besides the Pi. Do you think that all these IoT devices with Linux in them are developed from scratch every time? Also, regarding the last part: AI cameras in IoT devices will need RISC-V. Look at this: https://www.sipeed.com/solution.html#
No. You can run Fedora out of the box on QEMU[1]. We have Fedora running on the Unleashed, but you can't buy those boards any more. SiFive have Fedora running on the Unmatched, which you can preorder now and will ship this quarter.
It comes out of the box with Linux in the eMMC and FPGA programming that connects the CPU cores to the peripherals, so if all you want to do is run RISC-V Linux then you don't need to muck about with FPGA programming. But you can, if you wish, rebuild the default FPGA programming from source code, enhance it, or replace it completely.
It's a good FPGA development environment (I have one too), but it's not a good RISC-V environment because of the slow clock speed - 650 MHz, which is about half the speed achievable on the HiFive Unleashed, itself not exactly a fast platform. If you want to play with FPGAs with lots of IO, or even experiment with writing your own peripherals for RISC-V then it's fun.
AllWinner are one company that has publicly disclosed doing a large multi-million run of chips: http://www.semimedia.cc/?p=7803 It's not exactly clear from that announcement what they are going to use them for though. "industrial control, smart home, and consumer electronics" - that's everything :-)
This is great news. If peripherals are the same or similar to those in their existing ARM SoC chips, we'd be close to full support on release thanks to the linux-sunxi project efforts[0] and similar efforts in netbsd.
They are in a lot of current Western Digital SSDs. Nvidia is known to have them in their GPUs in some form. I know of other consumer products where they are similarly used as embedded CPU, which is where RISC-V will be most prevalent in the coming years.
Western Digital is heavily involved in RISC-V, but I wasn't aware that they had confirmed any of their products are using RISC-V yet. Last I heard from them, their first generation of in-house NVMe SSD controllers were definitively not using RISC-V. And their second generation of NVMe SSD controllers is only just starting to ship.
> > Hi all – I'm curious about the origin of the chip used in the Pi 4, the Broadcom BCM2711. Was it designed specifically for the Pi? I can't find any reference to it on the Broadcom website, which is strange, nor any record of its use in any other device, e.g. a smartphone.
> Sort of. The BCM2835 (Pi 1) was originally designed as a set top box SoC and was used in devices like the 1st gen Roku boxes. The BCM2836 (Pi 2), BCM2837 (Pi 3) and BCM2837B0 (Pi 3+) were designed for and only used by Pi boards. The BCM2711 was designed for and only used by the Pi 4 but it is closely related to a new BCM7211 which is for set top box usage.
I agree. I think the original assumption about "leftover volume/stock" only really applies to the RPi 1. However, if you talk more in terms of the ARM ISA, then it certainly applies to every generation of the RPi, in that the chosen ISAs are "leftover/last-gen licenses" and are presumably less expensive to procure and produce chips from on older design processes. The ARM11/ARMv6 was new in 2001, while the ARMv6Z core in the RPi 1 was new in 2003, with the RPi 1 coming out in 2012. The ARMv8 ISA came out in 2011, with the A72 coming out in 2015, while the RPi 4 came out in 2019. [0][1][2][3]
There aren't any. OP is wrong in his assumption of piggybacking on mass-volume manufacturing. VideoCore was already widely regarded as a failed product at the time of the first RPi release.
As an embedded developer and system designer with many years of experience with Broadcom SOCs, I largely share @centimeter's sentiment, and would likely use similar wording in a private conversation.
Well, feel free to look down your nose at things and act immaturely.
I've built things around all kinds of ARM SoCs big and small... And AVR... and 8051 and 6502... I find things about Pi and Raspbian annoying, and sure wish there was more I/O and better graphics support...
But, how awesome is it that there's a ~$50 lego brick out there, with a ton of computation and long term support that a huge ecosystem of projects have sprouted from? And even better, we have all the boards around with less traction (Beagle*, oDroid, etc) to pick from. I don't see the draw in being elitist.
There's all kinds of projects I'd have had to spin a challenging-to-lay-out and expensive-to-manufacture board for before.. and now I can make a little silly Pi hat and get prototypes done in hours. This is exciting!
A colleague of mine has been testing out all of the RISC-V chips he can get his hands on, including a few SiFive boards. He really likes them.
Check out his free book if you are interested in understanding SiFive's boards or RISC-V in general: Making a RISC-V Operating System using Rust (https://osblog.stephenmarz.com/)
The RISC-V spec is clear and simple enough to work with even for a dolt like me. I have started a project to implement the instruction set in Smalltalk [1] (shameless plug), along with basic CPU simulation. Hopefully I'll be able to develop low level risc systems in this environment with decent debugging.
Forewarning: comparing across utterly different segments of devices is bad & dangerous for the health of industries. And yet, let's dive into just that: there's a lot more I/O than an RPi4 here! 8+4+1 PCIe 3.0 slots. Wow. If it can really use the bandwidth these slots expose, that's very impressive. 4 USB ports, but so few folks clarify whether that's real or shared bandwidth under a single USB host, or whether they're independent. So either an effective +1.25 or +5 extra PCIe3 slots of bandwidth there, again, if this device can saturate those links.
I'm wondering very much how the CPU fares. This is progress either way, but will it match an RPi4? The RPi4 has tiny I/O, but I feel like it's probable the CPU performance is not radically different.
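For scale, some back-of-the-envelope numbers (nominal figures, assuming those are lane counts): PCIe 3.0 runs at 8 GT/s with 128b/130b encoding, roughly 985 MB/s per lane each way, so:

    x8 ≈ 7.9 GB/s
    x4 ≈ 3.9 GB/s
    x1 ≈ 1.0 GB/s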
Again, RPi comparison isn't really what you'd do here. You have the freedom to put whatever GPU you want in there, it has NVMe storage, etc.
It would be interesting to benchmark this against a lower clocked x86 system. But it's kind of an odd comparison as this has 5 cores instead of the 2 cores typical in a low-end PC.
It might become a comparison with the Pi 4 Compute Module + the current breakout board they are offering, which has a PCIe x1 slot. So far some NVMe drives are confirmed to work. Keep an eye on Jeff's channel[0] as he's even poked around at getting a GPU working as well. But he's having some issues atm (personally I believe he needs a breakout board with more power output).
Here's a fresh submission to HN on hooking up a 4-port gigabit ethernet card[1] to the Pi Compute Module 4.
Throughput is coming. I look forward to 2026 or so, when I expect things like PCIe over Thunderbolt to start becoming more semi-standard, integrated onto more and more SoCs. Already, a TB4 port can do 80Gbps of DisplayPort connectivity. With that kind of oomph, it makes sense to also start considering how we might use those same transceivers & cables to do more data-oriented tasks.
The CPU only has 8 lanes of PCIe. It's using a PCIe switch to get the rest of the lanes. USB is also connected via the PCIe switch. The ethernet adapter is connected directly to the CPU though.
I think Yunsup mentioned a small % performance improvement over the SiFive Unleashed. [EDIT I wrongly said the memory had doubled here - see correction in follow up comment] On the other hand the SiFive Unleashed was perfectly usable for general development use, even for use as a desktop.
It's not an improvement over $999, it's an improvement over ($999 + $1999) = $2998 because it has PCIe and USB and so forth all on one SoC and board.
That's a 4.5x price reduction (or 77.8% off if you prefer) in three years.
The price/performance increase should be bigger than that because of the ~50% increase in IPC and also hopefully a ~33% increase in clock speed (they don't seem to have announced that, but it should be around 2 GHz).
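Rough arithmetic, taking those numbers at face value:

    1.50 (IPC) x 1.33 (clock) ≈ 2x single-thread performance
    4.5x (price drop) x 2x (performance) ≈ 9x price/performance in three years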
Also, it's just a little unrealistic to expect something that taped out 6 to 9 months ago to have extensions that aren't even at draft 1.0 status yet, let alone ratified. Any chip with those is going to be 18 to 24 months away.
>Also, it's just a little unrealistic to expect something that taped out 6 to 9 months ago to have extensions that aren't even at draft 1.0 status yet, let alone ratified. Any chip with those is going to be 18 to 24 months away.
Together with claims earlier this year that there'd be some ratified V (a 1.0 draft?) by September and they'd have chips ready pretty much immediately.
What I understand at this point is that this simply hasn't happened, but I do not know the specifics.
I don't believe they've ever said they'd have chips available immediately. That would be some kind of time travel.
What I believe they've said in the past is they'll have cores available for licencing immediately, and indeed the "VIU75" with "Vector Intelligence" was announced last week. Customers will be able to sign contracts now, and get preliminary near-1.0 RTL to use in FPGAs for testing and software development. By the time the customers are ready to tape-out in six or twelve months the final spec-compliant RTL would be available.
Neither V nor B are ratified, nor are they part of the Unix platform standard (and hopefully never will be).
V is close, B is not. There are exactly 0 chips or even IP on the market with standard B (which is really a family of sub-standards), but there are chips with pre-standard V (alas, a few of them aren't compatible with the current draft - the perils of running ahead).
Overall, yes, you are right: this is most useful for people interested in seriously evaluating RV64GC and/or porting code. It's one of the necessary steps on the path to more adoption, but not the last one.
Do you have a rationale for this? I understand that:
* The unix platform standard is a relatively fat selection of extensions already.
* The nature of V's flexible width allows it to be implemented in very small scale to very large scale, so it would have little impact on the complexity of the minimum platform, while allowing great benefits as it scales up to larger platforms.
1. Cores already exist or are coming that satisfy the existing platform. They would be obsoleted by a change in the requirements.
2. RV64GC is already quite big. Piling more requirements onto it will make it harder to introduce small cores.
3. V is pretty complex and I don't agree with your assertion. In contrast I'm aware of cores with vector where the vector part takes up 50% of the die and does exactly nothing for the scalar apps that don't use it. I'd much rather spend that silicon on more IPC or a 2nd core.
The price really isn't that outrageous. Consider the use-case: this is a mini-ITX form-factor with a standard power supply, M.2 slots, a PCIe adapter, etc. This is designed for desktop workstations.
It comes with the CPU, motherboard, and RAM, all in one package.
When you sum up the costs of those in your typical desktop PC workstation build, it's not really that far off.
>It comes with the CPU, motherboard, and RAM, all in one package.
Unfortunately, the RAM not being socketed means you're limited to 8GB, which disqualifies this machine as a workstation.
Low CPU performance is tolerable; the system choking on low RAM isn't. I'm suffering on a laptop that "only" has 16GB RAM, and this is despite minimizing the load by alternating between Sway and i3 depending on mood at boot. Editor, browser (unavoidably heavy, with a ton of tabs open as part of the workflow), PDF viewer, music player and little else. Everything is bloated these days.
I don't see how 8GB can be tolerable as a workstation. It really is a deal breaker.
Hah! 8G of RAM is plenty for a workstation. My main laptop away from home is only on 4G and I seem to manage just fine. My home workstation is 32G but the only time I get even close to that is when I use /tmp to compile a large project.
It's very far off since this is slower and has fewer features than the worst motherboard ($60) and the worst CPU ($50) on the market. Add $30 worth of RAM and you're up to $140.
It's also the first usable workstation for a novel open-source architecture. It's not as fast but that's a feature which, in my opinion, makes the price parity reasonable. Progress is gradual.
A number of companies have asked for popcount to NOT be included in the base spec for B (certainly Zbb for embedded), because of the expense of implementing it vs the relatively small speedup it gives for quite specialized use-cases.
Of course popcount will have an opcode allocated and binutils and gcc will (already do) know how to use it if it is present, so anyone who wants/needs it can implement it.
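For anyone wondering what that looks like in practice, here's a hedged sketch in C: with GCC/Clang you write against the builtin and the compiler emits the native instruction when the target has one (e.g. building with a Zbb-enabled -march string on a recent RISC-V toolchain), otherwise you fall back to something like the classic SWAR sequence.

    #include <stdint.h>
    #include <stdio.h>

    /* Portable 64-bit popcount. With GCC/Clang, __builtin_popcountll
       lowers to the native instruction (e.g. RISC-V Zbb's cpop) when
       the target supports it, else to library/loop code. */
    static uint64_t popcount64(uint64_t x)
    {
    #if defined(__GNUC__)
        return (uint64_t)__builtin_popcountll(x);
    #else
        /* SWAR fallback (Hacker's Delight). */
        x = x - ((x >> 1) & 0x5555555555555555ULL);
        x = (x & 0x3333333333333333ULL) + ((x >> 2) & 0x3333333333333333ULL);
        x = (x + (x >> 4)) & 0x0F0F0F0F0F0F0F0FULL;
        return (x * 0x0101010101010101ULL) >> 56;
    #endif
    }

    int main(void)
    {
        printf("%llu\n", (unsigned long long)popcount64(0xF0F0ULL)); /* prints 8 */
        return 0;
    }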
In the Alpha case it was a very late addition, in the last widely shipping version of the chip and IIRC was speculated to be part of some supercomputer/classified use case. ARM has a history of having quirky un-RISCy instructions.
(edit: also it seems that ARM has just cnt.v8 for counting 8-bit lanes in NEON and no 64-bit scalar instruction version, interesting. Being part of NEON also means it's an optional part on ARM)
Late addition is more indicative of value than appearing in first releases. People guess about the base instruction set, but additions happen only in response to high demand.
I've a PolarFire SoC eval board on my desk at the moment. 4x U54 cores. Only started to play with it, but lots of kernel panics so far. Not sure where the issue is just yet, but hopefully it'll get more stable over the next year.
The PolarFire SoC is a very neat, powerful series of chips — I also have an Icicle board and have some plans for it — but the Unleashed is definitely a better purchase unless you have plans on utilizing the FPGA, just from the spec sheets. The PolarFire chip is pretty darn cool on its own though...
It's 1 GHz [WRONG - see below], it was mentioned in the presentation. The Unleashed can be overclocked just by catting an entry in sysfs, I'm not sure if the same will apply here.
EDIT: The clock speed I quoted above is WRONG. I'm on the breakout call now and they are NOT announcing clock speed at this time.
No kidding. If I weren't too burnt out at the end of the day, I'd really like to explore the true limits of modern computers. Especially since I stopped playing video games, and I no longer do any real number crunching, I feel like every computer I use (for work or home) is greatly underperforming relative to its potential.
Displays had fewer pixels in that era. Files were much smaller (the video and mp3 files we have now would have been considered insanely large back then).
To illustrate: a 3.5" high-density floppy disk holds 1 second of DVD-quality video.
(yes, I know, it depends on the rate of change in the image since modern video codecs are incremental -- I'm going by data rate rather than effective compression ratio)
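Back-of-the-envelope, using nominal figures (a "1.44 MB" floppy holds 1,474,560 bytes; DVD video peaks at 9.8 Mbit/s):

    1,474,560 bytes × 8 ≈ 11.8 Mbit
    11.8 Mbit ÷ 9.8 Mbit/s ≈ 1.2 s at peak rate (≈ 2.4 s at a typical ~5 Mbit/s)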
> 4x 1GHz... low end NAS
Low clock + lack of B extension means limited hashing and encryption rates.
Even a simple ZFS setup with encryption and mirroring will be slow.
That's good to know. At least that does ensure it's an ASIC, not an FPGA.
Besides lack of V extension, I'm sad about the seemingly artificial limits (8GB RAM onboard instead of slots, so you can't easily make a workstation out of this) and the still outrageous price.
I think that it has its market niche (developers working on risc-v ports), but most of us are better off trying RISC-V cores on an FPGA or emulator.
I'm hoping China will solve the cheap RISC-V SoC situation by releasing some cheap chips that SBCs can be built on, at some point soon.
lowRISC used to be all about doing that in an open-hardware manner, but it seems the moment they got some funding, they got distracted into experiments (e.g. pointer validation stuff) that have little to do with achieving the original goal.
QEMU is shockingly excellent. Do not extrapolate, but one piece of code I was working on ran at 1/4 of the speed in RISC-V under QEMU compared to the host. That's a really amazing result.
Much of that is due to RISC-V being quite emulation-friendly as far as ISAs go. You wouldn't guess that given e.g. the weird encoding of insn operands, but that turns out to be a minor factor in practice.
Absolutely. QEMU is JITting so decode is only done once. What helps here is the absence of crazy semantics, like flag updates. Some things are still expensive, such as indirect branches (jalr), virtual memory translation (load/store), and handling RISC-V's 31 registers without hitting memory for each of them (the host ISA, x64, has only 16 architectural registers).
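If you want to try that kind of comparison yourself, a trivial user-mode test is enough. The loop below is just a stand-in workload; compile natively with gcc -O2, cross-compile with riscv64-linux-gnu-gcc -O2 -static, run the latter under qemu-riscv64, and time both. Don't read too much into it -- it exercises the JIT's integer path, nothing else.

    /* bench.c -- a trivial compute-bound loop for comparing native
       execution against qemu-user emulation. Not a rigorous benchmark. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t acc = 1;
        for (uint64_t i = 0; i < 500000000ULL; i++)
            acc = acc * 6364136223846793005ULL + i;  /* LCG-style mixing */
        printf("%llu\n", (unsigned long long)acc);   /* defeat dead-code elim */
        return 0;
    }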
Note that RISC-V instruction set extensions can always be emulated in machine mode, so the lack of V or B extensions will only ever be an issue wrt. performance, not compatibility.
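For the curious, the mechanism looks roughly like this. A heavily simplified machine-mode sketch: real firmware (e.g. OpenSBI) is far more involved, decode_and_emulate() is hypothetical, and this assumes an implementation where mtval reports the faulting instruction bits. It compiles only for a RISC-V target.

    /* Sketch of M-mode trap-and-emulate for an unimplemented instruction.
       Freestanding and illustrative, not production firmware. */
    #include <stdint.h>

    #define CAUSE_ILLEGAL_INSN 2

    /* Hypothetical: decode insn, update the saved register file regs[],
       return 0 on success. */
    int decode_and_emulate(uint32_t insn, uintptr_t *regs);

    void machine_trap_handler(uintptr_t *regs)
    {
        uintptr_t mcause, mepc, mtval;
        __asm__ volatile("csrr %0, mcause" : "=r"(mcause));
        __asm__ volatile("csrr %0, mepc"   : "=r"(mepc));
        __asm__ volatile("csrr %0, mtval"  : "=r"(mtval));

        if (mcause == CAUSE_ILLEGAL_INSN &&
            decode_and_emulate((uint32_t)mtval, regs) == 0) {
            /* Skip the emulated instruction: 4 bytes, or 2 if it was a
               compressed insn (low two bits != 0b11). */
            mepc += ((mtval & 3) == 3) ? 4 : 2;
            __asm__ volatile("csrw mepc, %0" : : "r"(mepc));
            return; /* mret resumes after the emulated instruction */
        }
        for (;;) ; /* otherwise: delegate or panic (elided) */
    }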
What would be a decent graphics card for this? I'm thinking something half height with no fan. Put that in a small enclosure. Could be a great dev/demo box.
> For debugging and monitoring, developers can access the console output of the board through the built-in microUSB type-B connector.
As a software developer who's interested in this but has no experience with low-level hardware interface, how does one debug with the microUSB connector? What displays the console output?
As a software developer that, at one point in a job, was forced to confront hardware head-on because the code I was writing was firmware, I'm guessing they mean something like Kermit: a simple tool to get the output from an embedded device over a serial line.
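Concretely: the microUSB port is almost certainly a USB-to-serial UART, so any terminal program works -- Kermit, minicom, screen. Under the hood they all do roughly this (the device path /dev/ttyUSB0 and 115200 baud are guesses; check dmesg and the board docs for the real values):

    /* Minimal POSIX serial-console reader -- roughly what minicom/Kermit
       do under the hood. Device path and baud rate are assumptions. */
    #define _DEFAULT_SOURCE  /* for cfmakeraw() on glibc */
    #include <fcntl.h>
    #include <stdio.h>
    #include <termios.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/ttyUSB0", O_RDONLY | O_NOCTTY);
        if (fd < 0) { perror("open"); return 1; }

        struct termios tio;
        tcgetattr(fd, &tio);
        cfmakeraw(&tio);              /* raw bytes, no line discipline */
        cfsetispeed(&tio, B115200);
        cfsetospeed(&tio, B115200);
        tcsetattr(fd, TCSANOW, &tio);

        char buf[256];
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0)
            write(STDOUT_FILENO, buf, (size_t)n);  /* dump console output */
        return 0;
    }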
Will this lower the cost of manufacturing custom chips? A fully manufactured PCB with components from JLCPCB is just a few bucks. Will the ASIC manufacturing process ever fall to that level?
Unlikely, manufacturing an ASIC is orders of magnitude more complex than manufacturing and assembling PCBs. At most this will reduce the price of whatever RISC-V chip is used on the board, and maybe increase the demand for more RISC-V chips in the future.
Google recently opened up a full 130nm ASIC design platform, including core cells, tools, etc... Google it, can't remember the name right now. They've even committed to manufacturing silicon for free for a few select open source / open hardware projects. I guess that's going to turn into some kind of low-cost shuttle in the near future. That's probably the biggest movement towards democratizing ASIC design the way that JLCPCB and others have democratized PCB manufacturing (and assembly, as you mention, although with a quite restricted selection of available parts). Kicad democratized PCB design and it's being successfully used for many projects these days.
Cool, thank you. You can probably see what I mean: if everything from this list works fine it makes a desktop computer sufficient for me and many other people. I imply most of the classic text-mode tools already work or can be ported with little difficulty. I've forgotten to mention Visual Studio Code but I understand it's going to be slightly harder, it hasn't been long since we've gotten an ARM port.
It has PCIe slots for graphics, so of course no HDMI. This isn't a SoC-on-SBC type product like the RPi. The competitor here is a PC motherboard + CPU combination, not an RPi.
Yes, it will be far slower than a new PC, but this is still pretty damn cool.
To price compare you need to look at a 4+ core processor + modern motherboard + 8GB RAM + 32GB flash (EDIT: 32MB, so meh). I'm sure once you did that you'd find something half the price, but when you consider the volumes involved...
Minor correction: the linked website says 32MB flash, not 32GB. Unless you meant 32GB flash for the micro-SD slot which is presumably sold separately for the SiFive board as it is with the RPi.
I dunno... I mean it's actually smart to not make this a single board machine with built-in GPU, since there isn't really a workable open GPU / display controller solution. So making it a PCIe host that can take any off the shelf GPU neatly solves this problem.
$665 is a bit too much for the specs they’ve offered. They should try to be price competitive instead of charging a premium. You can likely get a more powerful x86 system for less.
I would hope for these to be sold at cost, and doubt they cost that much. Profits shouldn't come from scalping risc-v developers.
I am hopeful some Chinese company will release something that will force some humility into the risc-v market. At least one candidate has been seen in the thread[0].
Not having reached enough developers and leading adopters means no market available except developers doing fundamental porting and experimentation.
Targeted at developers doing fundamental porting means no volume.
No volume means high prices. Low volume ASIC runs aren't cheap. Amortizing NRE costs of a complicated motherboard over dozens or hundreds of units ain't cheap either.
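To put entirely made-up but plausibly-shaped numbers on the amortization point:

    $2M NRE ÷ 1,000,000 units ≈ $2 added per unit
    $2M NRE ÷ 2,000 units     = $1,000 added per unit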
It will sort itself out eventually, but the whole ecosystem has a big mountain to climb to get to where ARM is.