Mini-ITX Seaberry Adds 11 PCIe Slots to a Raspberry Pi (jeffgeerling.com)
111 points by geerlingguy 12 days ago | 71 comments





Alas, PCIe switches were starting to get affordable before the great chipmaker consolidation of 2015-2019 ("and seemingly overnight the cost of these switches increased three-fold"[1]). Now nearly every PCIe switch is >$6/lane. Even just the 12 lanes here, on an old PCIe generation, works out to a $72 chip, perhaps more (the article's PEX8619 Broadcom PCIe switch: $129, 16 lanes, $8/lane, no stock. Compare with a Microchip, née PMC-Sierra, Switchtec part such as the PM40028B1 at $6/lane[2]).
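Back-of-the-envelope, using only the numbers above (list prices from the examples, not quotes):

    # rough per-lane cost math, using the prices quoted above
    pex8619_price, pex8619_lanes = 129.0, 16   # Broadcom PEX8619 (PCIe Gen2)
    switchtec_per_lane = 6.0                   # Microchip Switchtec ballpark
    lanes_needed = 12                          # roughly what this board fans out

    print(f"PEX8619: ${pex8619_price / pex8619_lanes:.2f}/lane")
    print(f"{lanes_needed} lanes at ${switchtec_per_lane:.0f}/lane: ${lanes_needed * switchtec_per_lane:.0f}")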

It's a cool idea. But wow, for $435, it sure feels like one ought to consider going for an x86 PC instead. The form factor is cool though.

It feels insane to hope for but I'm really hoping that by the time 2025 rolls around we see some modest affordable arm chips that have Thunderbolt built in. Perhaps even multiple Thunderbolt. If nothing else, there better be some decent PCIe available by then- 8x PCIe 5.0 or better. ARM needs to start doing bandwidth, some day (in places other than server class chips); it's been on USB3.0 for a long long time. Thunderbolt + PCIe would be a great boon for general usability & connectivity. Now that Intel has made it royalty free, now that USB4 already incorporates most of the hard part (packet based switching), it might finally be the time for some real PCIe on chip.

[1] https://www.anandtech.com/show/15821/microchips-new-pcie-40-...

[2] https://www.mouser.com/c/?marcom=100722352


> It feels insane to hope for but I'm really hoping that by the time 2025 rolls around we see some modest affordable arm chips that have Thunderbolt built in. Perhaps even multiple Thunderbolt. If nothing else, there better be some decent PCIe available by then- 8x PCIe 5.0 or better.

In the shorter term, I'm hoping the RK3588 actually comes out, is as good as promised, and has a reasonable price. [1] 4 lanes of PCIe 3.0 + 3 lanes of PCIe 2.0 + 2x USB 3.1 + dual gigabit Ethernet add up to well over 10x the bandwidth of the Pi's single PCIe 2.0 lane. Also things I care about for my NVR project: faster video decoding, a built-in TPU, and SATA (which I didn't count above because it's multiplexed with the PCIe 2.0).

I don't think the 4x Cortex-A76 + 4x Cortex-A55 is in the same league as the Apple M1, but still quite a bit faster than the Pi4's 4x Cortex-A72.

[1] https://www.cnx-software.com/2020/11/26/rockchip-rk3588-spec...


> I don't think the 4x Cortex-A76 + 4x Cortex-A55 is in the same league as the Apple M1, but still quite a bit faster than the Pi4's 4x Cortex-A72.

IMO the node used is more important than the core design. Rockchip claims they will be using an 8nm node for the rk3588. And I think the rk3399 and Pi4 are using 28nm nodes right now. So if this chip materializes it could be much faster (compared to Linux oriented ARM boards of today).


> If nothing else, there better be some decent PCIe available by then- 8x PCIe 5.0 or better. ARM needs to start doing bandwidth, some day (in places other than server class chips); it's been on USB3.0 for a long long time.

It's not up to Arm to attach PCIe lanes, but up to the chip makers, who have been competing with x86 parts that already offer plenty of PCIe lanes. So far, only Apple has bothered to make an Arm CPU which can compete with "PC" hardware. Hopefully AMD will offer something similar.

> Thunderbolt + PCIe would be a great boon for general usability & connectivity. Now that Intel has made it royalty free, now that USB4 already incorporates most of the hard part (packet based switching), it might finally be the time for some real PCIe on chip.

USB4 is based on Thunderbolt 3. Given that Thunderbolt 3 is a mess, I don't have high hopes.

Look at RapidIO or 1394 for examples of better-thought-out interconnects and protocols. PCIe can't connect CPUs together in a NUMA system, whereas RapidIO can. RapidIO also uses Ethernet physical layers, so there's parallel development with Ethernet. I honestly don't like USB or other contraptions invented by Intel that are invariably consumer grade, with various extensions to satisfy other industries bolted on.


i thought it was pretty clear i meant the consumer arm ecosystem in general but thanks for chiming in & clarifying. yup.

usb4's ability to transport a variety of protocols across the switched network seems perfect. reviews like the Plugable tb4 hub show video and data intermixing nicely[1]. the complexity seems scary but in my mind it feels utterly & completely justified, and it builds atop the world we have & makes it better, quietly fixing the underpinnings rather than insisting on re-invention. thunderbolt hasn't been flawless, but it's been a niche, luxury product, and we'll quickly gain the experience to make it as pleasant, bulletproof & flawless as one might hope for. already, reviews like the one i cited don't come with caveats or speed bumps: they show a working, satisfying consumer experience. there's a lot of wariness of thunderbolt & usb4, but i for one think we ought to dial down our resistance.

that said, i also think the door is totally open to new interconnects emerging. i'd love to see cxl or gen-z (recently folded into cxl) or opencapi or rapidio or whatever become consumerized.

[1] https://www.anandtech.com/show/16964/plugable-tbt4hub3c-thun...


> So far, only Apple has bothered to make an Arm CPU which can compete with "PC" hardware.

Also Amazon: https://aws.amazon.com/ec2/graviton/


Those are server processors; I was thinking more along the lines of laptop/desktop performance. Other examples of Arm server CPUs are Ampere's, and AMD's discontinued Opteron A series.

I have no experience building PCs, but during the last decade or so I've been running RPis with 6-8 disks attached. It's great software- and noise-wise, but the cases were made by me from MDF (and I have no idea what I'm doing), so they're not as solid as I'd like them to be.

Every time I search for rackmount or 3D printed cases I'm floored by the prices, which seem completely out of proportion.

Any tips on cheap/custom computer cases are highly appreciated. Thanks.


I agree with the advice that you should just use an ATX case. They are a commodity and there are a billion great ones. You'll have to remove the motherboard plate and drill 4 holes for M2.5 standoffs; the full Raspberry Pi dimensions are on the product page, so you can easily print out a 1:1 template, get the right drill bit, and go. No fancy equipment is necessary to drill a small hole in sheet metal, and the result will be very satisfying.

I tend to 3D print my own stuff, and have found it to be the be all and end all of electronics projects enclosures. I never had a good project enclosure until I got a 3D printer, and now I always have them. It's amazing! I have at least 3 different Pi projects, and I designed and printed cases for each of them. It's very straightforward and not a bad project for learning CAD and 3D printing.

8 hard drives are going to be heavy and take up a bunch of space, though, so that's why I'd recommend an off-the-shelf case that already has support for that. (Sheet metal bending is the right fabrication technique for this application, I think, and that's how the cases are made.)


That's what I was afraid of; you're convincing me to buy a 3D printer :)

They are still a bit expensive, but they're so much fun that they may well be worth it.


You'll like it. Don't cheap out too much. I think the polish that careful manufacturers like Prusa add is worth paying for. I started with a Tiertime printer and it made me so unhappy that I bought a Prusa i3 a month later. Skip that step and go right to the good stuff. The best step for me was that I got the kit, so I put every part together myself. When something breaks, I know exactly how it's put together. (I have broken something by being an idiot and ... printed a replacement part and was back up and running the same day.)

The Prusa Mini is very reasonably priced, but I haven't used it and I'm not sure the build area is enough for what you have in mind.


I absolutely love laser-cut acrylic with tapped holes accepting M3 screws or standoffs.

For the MDF, it's mostly practice and time/care. It took me quite some time to master making square wooden boxes that stand up to abuse and nowadays I could design in Fusion and rip out a case on my CNC machine pretty fast (<hours).

Next project is to encase an entire RPi in clear epoxy: https://www.raspberrypi.com/news/epoxy-pi-resin-io/


It’s probably easier to buy a regular case and then fit the raspberry pi into it.

We should be meeting in the middle and creating SBCs whose bolt-hole spacing is identical to some other device standard, for instance 2.5” HDDs or 3.5-to-2.5 adapters. Then you can find trays and enclosures and caddies that can be modified slightly to purpose.

Expanding holes in sheet metal is easy. Tapping holes, welding captive nuts, and bending are a lot trickier.


For something as lightly loaded as a home computer, rivnuts are a pretty darn easy and cheap way to add threads to sheet metal.

Brass standoffs with a nut on the back are also readily available and cheap.


Brass standoffs are great but they still require a tapped hole.

I was thinking more of moderate scale. I’d totally rivnut one SBC if I had a friend who could show me how. But I’m not doing twenty or small batches for resale. I think there’s a fraction of people in that demographic for whom a rivnut is still fancy tech. I’ve only heard about them from watching Bad Obsession Motorsports (with Rex Hamilton as Abraham Lincoln) and rivnuts are like a running gag argument between the two hosts. Welded captive nuts always win. Every time.

I just looked at a rivnut catalog, and couldn’t figure out how to find a nut that would work with standard computer case screws, or in fact what tool and nut combinations worked together. I’m not gonna sort that shit out on my own.


This would be excellent. That's a great idea.

It's mostly a case (sound of drums) of economies of scale: the only computer cases that are mass-produced are the ultra-basic ones; the others are designed for enthusiast hobbyists who can pay a premium for their hobby.

We're like one or two hardware generations away from having a VME-backplane-like environment where we'll be able to do some truly nifty and nasty things, at home and hobby levels.

The politics of getting a standard out that isn't owned by (and therefore twisted to favor) a couple of big companies is challenging, but it's happened before, right? (crickets)


I'm pretty sure PCIe is that today.

PCIe CPUs may have failed commercially, but they prove that it's possible (Xeon Phi).

PCIe GPUs are common. 4x or 8x GPUs are your typical mining rigs.

Perhaps most common is PCIe SSDs (aka: NVMe or M.2 slots).

------

So our modern PCIe "backplane" gives us storage, extra compute (GPUs most commonly), longer-distance network access (fiber optics, ethernet, etc. etc.). What else do you want?

Sound cards (especially MIDI instrument support) maybe? Dedicated video processors? A big problem with dedicated devices is that modern CPUs are so powerful... there's not much point in buying an Elgato H.264 encoder anymore when you can just buy a 16-core CPU instead and do it all in software.

----

Your standard Threadripper motherboard has 4x16 PCIe slots, and I'm pretty sure 8x PCIe to each of the slots, with full support of bifurcation (splitting into smaller PCIe slots). Surely that's enough for any of your hobby projects?


What put me off about VME when I looked at it was the immense amount of small-market, single-purpose hardware that had been thrown on it and was "technically" compatible. It enabled some stuff like timed traffic lights, in a way similar to how Arduinos have opened up hobby-level "printer-oid" mechanisms.

For reasons I can't articulate well, I'm itching for a bus where CPU and GPU and RAM and I/O all live together in hippie harmony. The hardware's all there, dancing to some little basic machine-controller CPU anyway, so let's open all that up and make everything a first-class citizen, hopefully on some interconnect that will last a few years and gather some interesting surplus hardware to filter out to hackers.

As to modern hardware, if I had all my dreams coming true, I'd be grabbing one of the "mining" motherboards with all that I/O broken out into channels and putting SDRs on them to make pretty waterfalls of the biggest bites of the radio spectrum I could visualize.


For the hobby Arduino level, the bus of note is I2C, with all sorts of devices available: temperature sensors, accelerometers, cameras, monitor control, flash, RAM and of course... other Arduinos and Raspberry Pis.

Of course, 1 Mbit/s is slow, but the huge amount of compatibility is pretty outstanding.
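As a sketch of how low the barrier is (assuming a Pi with the smbus2 Python package; note that some I2C devices don't like being probed with a read):

    # Scan the Pi's GPIO-header I2C bus (bus 1) for devices that ACK their address.
    from smbus2 import SMBus

    with SMBus(1) as bus:
        for addr in range(0x03, 0x78):      # 7-bit address range, reserved ends skipped
            try:
                bus.read_byte(addr)         # a present device ACKs the probe
                print(f"device at 0x{addr:02x}")
            except OSError:
                pass                        # nothing at this address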

----------

Today's standard I/O might be Ethernet, as far as just randomly hooking up random compute devices together.

Faster than I2C, but not quite PCIe speeds. You can't deny the huge compatibility available though.

------

PCIe is for serious business high end. A few commodity parts are available but some stuff is just really expensive. It's a professional tool for high end fabrics.

The best of the best remains proprietary: Infinity Fabric, UPI. I guess OpenCAPI isn't technically proprietary but few things use that...

-----

I know there are a bunch of cuda-SDR projects btw. I don't know how well they work but GPUs probably are great at Fourier Transforms and other filtering tasks.


I2C is present in most modern computers. Look up SMBus.

SPI is as well.


VME?

https://en.wikipedia.org/wiki/VMEbus

Imagine something like ISA but 32 bits wide, with a GPIB-like arbitration scheme. On a backplane that you plug CPU cards and peripheral cards into.



Please forgive my lack of familiarity with PCI express hardware, but I have a question:

Are all those PCI Express devices sharing the single v2 lane? Is the PCI Express bus a network?


All I/O is a network these days.

* USB is a network.

* PCIe is a network.

* SATA is a network.

* Infinity Fabric / Hypertransport (AMD socket-to-socket communications) is a network.

* UltraPath Interconnect (Intel socket-to-socket communications) is a network.

-----------

People think their internet connection is Layer4 (TCP) -> Layer3 (IP) -> Layer2 (Ethernet) -> Layer1 (Wires), but that's wrong.

It's really Layer4 (TCP) -> Layer3 (IP) -> Layer2 (Ethernet) -> Layer3 (PCIe) -> Layer2 (PCIe frames) -> Layer1 (PCIe wires) -> reinterpret PCIe on the Ethernet controller -> Layer1 (Ethernet wires).

There's a bunch of "networks" in between the networks we use. That's why the OSI model exists: to understand each different network and each of the different rules behind them all.

When a good network is abstracted away, you can ignore all the details.

EDIT: It might not even be Layer2(Ethernet) -> Layer3 (PCIe)... on a NUMA-system it could be Layer2(Ethernet) -> Layer2 (NUMA / MESI messages to CPU#2) -> Layer2 (Southbridge communications) -> Layer3(PCIe)...


network all the way down

-- Alan Kay maybe

btw, slight sidestep: HDMI CEC is also a bus/network, and even SCART was too.


One of the goals of PCIe when they were designing it was to avoid the "bus" architecture classic PCI had, as that forced every device on the bus to run at the speed of the slowest device. PCIe, OTOH, takes a "star" network topology (more a "tree", really) with point-to-point connections. Thanks to the use of switches, each point can run at the maximum speed it supports (limited by the switch's speed).

Basically, it’s like USB (as @dragontamer said). Just as you can turn a single USB port into 7 more downstream ports through a USB switch, you can do the same with a PCIe switch.


Indeed, PCI Express can be thought of in many ways like Ethernet; you can have switches just like Ethernet, though in almost all cases, PCIe is a local area bus (local to one server/machine), whereas Ethernet is still the king of networks.

That's probably the way to think about it. PCIe devices communicate point-to-point over dual-simplex serial connections. If you have something like a Raspberry Pi CM4, which only has a single PCIe lane, then there's going to be a PCIe switch.
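If you're curious what that tree looks like on a Linux machine, lspci -t draws it; below is a rough Python sketch that just reads the standard sysfs attributes, nothing Pi-specific (switch ports show up as PCI bridges, class 0x0604):

    # Enumerate PCIe devices via sysfs; bridges/switch ports have class 0x0604xx.
    import os

    PCI = "/sys/bus/pci/devices"

    def attr(dev, name):
        with open(os.path.join(PCI, dev, name)) as f:
            return f.read().strip()

    for dev in sorted(os.listdir(PCI)):
        vendor, device, cls = attr(dev, "vendor"), attr(dev, "device"), attr(dev, "class")
        kind = "bridge/switch port" if cls.startswith("0x0604") else "endpoint"
        print(f"{dev}  {vendor}:{device}  {kind}")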

Interesting, thank you!

The other neat thing is that peer-to-peer PCIe transactions do not have to go through the root complex (the Pi in this case).

Say you have a GPU and an NVMe drive or video capture card plugged into a PCIe switch, which is then plugged into the Pi. If your software supports having these devices directly communicate, they can do so as fast as their PCIe links support doing, even though there’s only the tiny link to the Pi.

Peer-to-peer transfers are only limited by the smallest link on the path between two devices on the tree, rather than the smallest link in the tree.

A real world example is some of the big GPU compute servers. Some of the 4U chassis can house 20 or more cards. Many of them have all of the GPUs (320 or more PCIe lanes) plugged into an array of PCIe switches which may constrain them to 16 or 32 lanes from the CPU. The GPUs can all talk to each other at full speed, but only a few can go full speed to the CPU at a time.


Don't you mean the CPU can only full-duplex with a few GPUs at a time?

And are you sure that GPUs are actually talking with each other?

Forgive me if I'm woefully out of date / having a massive brainfart, but last I heard, GPUs got CPU-less DMA a few years ago, allowing access to system memory, but programming, setting up the pipelines, cleanup, repriming, and output consolidation were still a solely CPU-centric work stream.

So like, yeah, your GPU does its thing and dumps the result in a buffer in its memory space, and might even be smart enough to shlup a buffer back to system RAM to get the result forwarded somewhere else when the CPU gets around to telling the storage controller to do it...

But did I completely sleep through something absolutely amazing? Are GPUs passing messages / shared-memory-space aware now? Like,

"Oh, my jobs done, shlup this buffer off to pcie device->compute_card_2 for the next stage of processing."

Because if they're that smart now, holy crap, I have reading to do.


Yes, GPUs are able to directly pass messages and bulk data to each other these days. It's completely supported in CUDA, OpenCL, Vulkan, DirectX 12, etc. DirectX calls it "Explicit Multi GPU".

It's been around since GPUs transitioned to PCIe; it's just that both AMD and NVIDIA locked it to the "professional" GPUs, or only used it under the hood of some other feature. AMD GPUs have used PCIe peer-to-peer copies for CrossFireX since the GCN Radeon GPUs (R9 290X and friends); that's why they don't need an external bridge.

There's DirectGMA (AMD) and GPUDirect (NVIDIA), which allow PCIe drivers for other devices to copy directly to and from GPU VRAM.

Other than driver support, nothing prevents PCs from having GPUs communicate directly with storage devices. The GPU would need to expose some of its VRAM over PCIe (using a BAR window). The OS and the GPU driver would need to create some NVMe submission and completion queues, either in system RAM or VRAM, that the GPU can be considered to own. You'd then issue NVMe operations whose backing pages point into the VRAM. The resizable BAR support of the current Radeon and GeForce GPUs is a major step in this direction, although again, resizable BARs were part of the original PCIe specification and have been available on server hardware since ~2008.
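A quick way to see whether a given pair of GPUs can do peer-to-peer, assuming CuPy is installed (the names mirror the CUDA runtime API; whether P2P is actually usable still depends on driver, GPUs and topology):

    # Check peer-to-peer capability between GPU pairs via CuPy's CUDA runtime bindings.
    import cupy as cp

    n = cp.cuda.runtime.getDeviceCount()
    for a in range(n):
        for b in range(n):
            if a != b:
                ok = cp.cuda.runtime.deviceCanAccessPeer(a, b)
                print(f"GPU {a} -> GPU {b}: {'P2P possible' if ok else 'no P2P'}")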


Well... Shit.

There goes my free time for the next 6 months. Need to brush up on intra-system component networking and figuring out how to make graphics cards dance.

Must... Become... Massively... Parallel...


The tl;dr is it's a standard Mini-ITX-sized motherboard for the Raspberry Pi Compute Module 4. It has a Broadcom PCIe switch that it uses to drive 4 mini PCIe slots, 4 M.2 E-key slots, 1 M.2 M-key slot, an x16 slot in the standard ITX location, and an edge x1 slot. It also has dual/triple-redundant 12V power inputs (barrel, aux +/- ATX, and PoE++ header), though it doesn't use a 24-pin ATX connector, so you need to power it with an adapter or via a wall wart (I used a 96W barrel plug adapter).

Through the course of testing this thing, I also identified a bug in the PWM fan controller circuit—and apparently it's also a small bug present on the CM4 IO Board's implementation, but typically not a fatal one.


Always enjoy the blog/video posts you do, Jeff. Thanks for creating this awesome, in-depth content!

Seconded, I'm envious of your new NAS though :)

Do you have any more info on the PWM bug? We’re having some issues with PWM on the RPi4 and I’m wondering if it’s related…

Recent 4-core, 2.0+ GHz, 4GB RAM ARM SBCs are plenty powerful for most tasks a home user may need: media consumption, browsing, light text editing, editing the weekend video, and even compiling small code bases. People have been doing that on Chromebooks for years.

Unless you need to compile large code bases, edit serious videos, process huge amounts of data, or do CAD/games, a simple, cheap ARM computer should be enough.

Too bad the desktop was taken over by a monopoly and monoculture of x86-64 with Windows, where you are locked in if you need compatibility with some proprietary formats or tools/software.


I wonder if there is a blog post in this somewhere, that I could write.

I have a bunch of Pi 4s (4GB). I just put one in an Argon M.2 case and was wondering what to do with it.

I have a "gut feel" for how my workstation should behave, I am a (backend) software engineer by profession, tend to work my day job on an i7 laptop (XPS 15, Ubuntu) but at home I'm on Ryzen 7 1800X (Manjaro) and Ryzen 9 3900X (Debian).

Are there some useful non-benchmark-type "measurements" I could take by using a Pi as a workstation for a few days that people would be interested to read about?


One significant bottleneck in a common desktop RPi setup is the SD card. You can improve things a lot by putting the root filesystem on SATA-over-USB, or by using the CM with an appropriate board.

If you'd been in the market for a new board, I would have recommended something that exposes PCIe, or at least SATA.

The best non-SoM option under USD 100 for that is probably the RockPi 4 (but please fill in). You can fit it with an M.2 NVMe drive (with limited lanes) or multiple SATA drives. The RK3399 and RK3399Pro have been holding that segment for some time now. Sadly the RK3399 maxes out at 4GB RAM. Armbian works great.

There are also the Quartz64 and Rock 3A (max 8GB RAM, PCIe 3.0). But it's going to take a while until community and mainline kernel support make the RK3566/RK3568 ready for a casual user.

You're going to have to be conscious about multitasking. Having 100 browser tabs open is not going to fly. And while 4k may be theoretically possible, you don't want to do that. Other than that I think it's all right.

-----

I have a similar profile to yourself. Currently drafting a small cyberdeck - maybe more as a replacement for smartphone than PC use.


> But it's going to take a while until community and mainline kernel support make the RK3566/RK3568 ready for a casual user.

https://wiki.pine64.org/wiki/Quartz64_Development#Upstreamin...

TL;DR

Video output just recently (this week) got ported to mainline Linux, but it is still a bit rough; it only works with 1080p60 monitors.

Everything else major for the kernel has been mainlined or is in review.

U-Boot is still downstream Rockchip only, with closed-source memory initialization (boot blobs).


If you want to experience something really radically different from the rest of 21st century computing, try RISC OS on one of them.

https://www.riscosdev.com/projects/

RISC OS is the original native OS for the ARM chip. It's still alive, actively maintained and developed, and it's FOSS now. It's a 1980s OS that predates NeXTstep and Windows 3, let alone Linux, which means a unique (but very effective and elegant) GUI and a shell that are totally unlike anything else you've ever seen.

It's a 35-year-old OS. Don't expect miracles. It doesn't support WiFi or multiple cores, but it's very, very fast and responsive on a single core of a RasPi, and there is a big software library with hundreds of freeware titles and some commercial stuff too.


You can try subjective benchmarks. These ARM SBCs are as powerful as workstations from the '90s, so try using them with software as demanding as what was run on a '90s workstation.

Trying to replicate this would certainly do a great blog post: https://news.ycombinator.com/item?id=27796821


It's comparable to a brand new Intel workstation in 2010.

https://www.cpubenchmark.net/compare/Rockchip-RK3399-vs-Inte...

For most people trying this, the SD card is the bottleneck, and there are options. The mentioned RPi 2 had USB 2.0 shared between storage and network. The 4B has USB 3.0 and dedicated lanes for GbE and storage.

Ryzen 3700 <> RPi 4B <> RPi 2


Once you get to this size and price, there’s no point in using a Pi…there are much more powerful options available.

I guess it makes sense as a development board: if you are building a product around a Pi Compute Module you can use this board for quick prototyping before building your own product-specific board. But any hobbyist looking at this should seriously consider the various Pi alternatives around the $200 mark first. Or just buy a used PC.

For the sub-$100 price point, you often can't beat discarded thin clients. There's a steady stream of units always hitting eBay, including ARM ones like the HP T-series and x86 ones like Dell Wyse units.

If you really just need something that comes with a case and power supply, might already have wifi, and has a few USB2 ports to plug things into, these are great options.


This is spot on. I mention in the post and the video it's an expensive board, and very specialist.

The target audience is basically 'person who wants ARM64 PCI Express development board', and even there, some of the PCI Express implementation on the Pi's BCM2711 is... weird. Therefore some devices have had a hard time getting drivers going on the Pi (case in point: graphics cards and Coral TPUs I've tested). I plan on testing Radxa's CM3 and Pine64's SOQuartz too.

I think some people still want a cheap(ish) CM4 motherboard that just has one x16 slot, front panel headers, and maybe a 24-pin ATX power supply connector. I think that could be made for under $100 and would also solve a lot of needs for debugging purposes.


I can see someone making a very powerful open source WiFi router out of this. The Pi can run passively cooled unlike most Intel or AMD CPUs. You wouldn't be doing any DPI on the network traffic without slowing things down, but this thing with some networking hardware cards could become very interesting to many people.

The cost is too high because it's a niche product, of course, but such a board would be able to replace my home router/server quite capably with a good WiFi antenna and some ethernet ports.


How many networking cards? I know we tend to think of internal buses as infinitely fast, but a single PCIe 2.x lane saturates at 500 MB/s, which begins to look a little skinny once we're considering 10GbE or even WiFi 6.

A single NVMe drive will saturate that.

Indeed, the maximum bandwidth I've gotten out of anything is around 420 MB/sec from a single NVMe drive. Add in more devices and you basically divide the bandwidth proportionally. Round trips can even slow things down a bit more.
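A toy budget, just to illustrate the proportional split (the 420 MB/s figure is from above; the device mix and per-device demands are made up):

    # Hypothetical devices sharing one PCIe 2.0 x1 uplink through a switch.
    uplink = 420                                          # MB/s observed through the Pi's single lane
    demands = {"NVMe": 400, "2.5GbE": 300, "WiFi": 150}   # MB/s each device could use (made up)

    total = sum(demands.values())
    for name, want in demands.items():
        share = uplink * want / total
        print(f"{name}: wants {want} MB/s, gets ~{share:.0f} MB/s when everything is busy")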

A 4-lane PCIe switch would allow for a much cheaper board and still be plenty to saturate the available bandwidth. Maybe 2x 2.5GbE, with one hooked to a switch, plus WiFi.

Around this price point there are already ARM boards (and passively cooled x86 machines) available much better suited to this kind of use case, like the Honeycomb LX2K.


There are other platforms that are maybe better suited to that role though. I am not very up to date with the hardware but I remember PCengines being one.

Things that are on my wishlist:

- Compute-Module should have a unified interface, that is not about to change for the next 3 generations of compute modules

- Someone create a "notebook" case having a board with mount for exactly this interface

- Case should have a decent keyboard and display without being too big

- How about having GPIO pins hidden

I would love to have an "upgradable" Notebook with arm architecture...


If you want a fast laptop, your two priorities are, IMHO, the thermal design and making as much room as possible for the battery.

I'm afraid that RPi level of performance in a laptop is going to be underwhelming, though still interesting for many cases.

If there was a widely accepted standard for laptop main boards, like there is for desktop and server boards, that would be wonderful. I don't expect it to ever happen though.


Have you seen the MNT Reform? There's a pi compute module adapter being built for it.

Yeah, quite bulky :-) But interesting approach.

I am curious whether, by the time one finishes assembling all this stuff for a Pi-based IoT appliance, the total cost (never mind the time) will exceed that of some used laptop / small-form-factor PC which can do the same thing, but faster.

Is there a storage controller like a SAN that can do provisioned NVMe?

Something like you insert a RPI into a backplane, and it auto-provisions some storage and makes it available to the RPI, in addition to providing power/ethernet.


Some friends and co-workers of mine have an ongoing bet/joke... Who can find something that the Raspberry Pi can NOT do.

One more thing scratched off the list!!


Boot without a GPU - Pi can’t do that :)

Good one. Putting it on the list.

It's an amazing little board, list is pretty short :)


Curious how the author has a decent site, but on Youtube he automatically switches to the trashy clickbait tricks.

Does this make building a large NAS with an RPi doable?

sorry if this is obvious, but how many CM4s can you plug into this board?

One. Don't worry though, next week I'll be covering the Turing Pi 2...


