> A Broadcom engineer named Josh watched my earlier videos and realized the ancient LSI card I was testing would not likely work with the ARM processor in the Pi, so he was able to send two pieces of kit my way
I'm curious about this; naively, I expect hardware/drivers to be basically independent of architecture/platform. If I've got the connectors/adaptors, why can't I plug in any arbitrary card and have it work?
One problem I've found specific to the Pi (and likely most of the other lower-powered ARM chips out there, from what I've seen from Rock Pi, Pine boards, etc.) is that the PCIe layer is not 100% supported at the hardware level the way it needs to be if you want features like I/O BAR space (which is nonexistent on ARM).
On top of that, until the past couple years, there were only a few exotic ARM builds (probably mostly in custom or enterprise land) that even had standard PCIe slots, so many vendors didn't (and still don't) test their drivers—even if compiled for ARM—on any real hardware.
I hope this changes, especially since Macs are now ARM64, the Pi CM4 exposes PCIe, and some other inexpensive SBCs have 4x lanes, or even 8x on the more expensive ones (is there an ARM motherboard out there with a 16x-lane slot?).
For the Pi in particular, until the CM4 came out, very few people could get access to the PCIe bus on the regular Pi 4 model B, so only enough work was done on the kernel side to ensure the VL805 USB controller worked, and a few bits were just never fully implemented or tested (e.g. BAR allocations, MSI-X support, some memory access functions).
Have you talked to Broadcom about the PCIe I/O space problem? Most other RISC platforms handle I/O space by just mapping it into an MMIO window instead. PowerPC, MIPS, and other ARM SoCs have been handling it that way for decades at this point. It feels weird that their root complex would choke on that; more likely there's some undocumented config knob you have to set up. I feel like you might have enough of a publicity reach to get an actual answer on this one.
SolidRun's HoneyComb has about the same slots and price, but instead of weird Nvidia closedness you get first-class upstream EDK2 support; everything is pretty much plug & play with ACPI.
It's a mix, and from my work and discussions on getting a GPU running (or not...), it seems to make debugging even harder when I can't figure out whether I'm hitting a bad memory access or a hardware feature that's just not there.
You can, with properly written drivers and an open OS.
GNU/Linux is obviously amd64-centric. There's still a tremendous amount of dismissiveness toward anything that's not amd64 (we're even seeing it toward 32-bit x86). So if something doesn't work properly on a non-amd64 system, it's usually not considered a high priority.
The BSDs are much better about this. If something doesn't work, that means it's broken, even if it works where 98% of people use it, so it gets fixed. I can, and have, plugged LSI and other RAID cards into several different non-x86, non-amd64 systems and had them work exactly as expected.
Yeah, I partially wondered because NetBSD specifically has docs talking about e.g. plugging "PC" cards into a "Mac" motherboard, and being quite emphatic that it should always work because drivers are portable and orthogonal. Sad if Linux isn't doing as well.
I would guess the peripherals of the processor, bugs in the hardware and firmware, etc.
That, and the drivers are still murky blobs that, I'd guess, on average don't have much thought put into them from a pure software engineering perspective.
Another option besides the mentioned CM4 module might be an RK3399-based SBC, such as the RockPro64, which comes with a PCIe x4 interface (the RockPro64 has an open-ended PCIe slot, so you could put in a x16 card too).
If you just want to hook up an M.2 NVMe SSD, there are also other SBCs with the same chip, such as the NanoPC-T4, that come with an M.2 slot.
That storage pod looks really interesting: a standalone disk enclosure connected to a RAID card. Are there any like these broadly available (in that form factor)?