This reminds me of my Amiga 2000, the first machine that got me fascinated by the number of extra CPUs, since my A2000 setup was a bit weird.
It had its main CPU, the 68000, of course, and its CPU-like Copper. But it also had a 6502-compatible CPU controlling the keyboard. Then I put a SCSI controller in it that had a Z80 on it, and added a 68020 accelerator board. Then I added a Janus PC card (the A2000 had an ISA bus in addition to the native bus, and a Janus card let you run x86 PC software in a window on your Amiga desktop) with an 8086. Lastly I added a 286 accelerator to the PC side.
Of course, the original 68000 and 8086 were not active, but having 4 processor families in a single box was fun.
Of course, this whole practice of using Turing-complete chips or whole CPUs in places other than the main CPU is much older. E.g. Commodore designs "borrowed" this from bigger computer systems when they put fully programmable 6502-based computers in the external floppy drives for their 8-bit line (you can load code into the memory of the 1541 floppy drive and execute it).
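For anyone curious how the 1541 trick works mechanically: Commodore DOS exposes "M-W" (memory-write) and "M-E" (memory-execute) commands on the drive's command channel (secondary address 15). A minimal sketch of the byte layout, in Python purely for illustration (the per-command payload limit is from memory, so treat it as an assumption):

```python
# Build the Commodore DOS command strings used to upload code into
# 1541 drive RAM and make the drive's 6502 execute it. These bytes
# would be sent over the command channel (secondary address 15).

def m_w(addr: int, data: bytes) -> bytes:
    """Memory-Write: 'M-W' + addr lo/hi + byte count + payload."""
    assert 0 <= addr <= 0xFFFF
    assert len(data) <= 35  # each M-W carries a small payload (~35 bytes, IIRC)
    return b"M-W" + bytes([addr & 0xFF, addr >> 8, len(data)]) + data

def m_e(addr: int) -> bytes:
    """Memory-Execute: 'M-E' + addr lo/hi; the drive CPU jumps there."""
    return b"M-E" + bytes([addr & 0xFF, addr >> 8])

# Upload a trivial 6502 stub (NOP; RTS) to drive RAM at $0500, then run it:
upload = m_w(0x0500, b"\xea\x60")
run = m_e(0x0500)
```

Larger programs are just a series of M-W commands followed by a final M-E.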
The number of CPUs took a temporary dip in the PC world for a while with e.g. increasing use of dumb IDE controllers instead of SCSI controllers which at the time often had CPUs/microcontrollers on them. But it didn't take long before the upward trend in number of CPUs resumed.
Using a foreign CPU architecture for a co-processor was a very common practice in the '90s and even the early 2000s; e.g. the Game Boy Advance has a regular ARM CPU as its brain, but it also hosts an entire GBC on board (including a custom Z80/8080 clone/hybrid), mostly for backward compatibility, though it performs other regular duties (such as sound) even in "GBA mode".
The GBA is a fascinating system: the main CPU is powerful enough to do some serious work - some games even had pretty decent 3D (check out Colin McRae Rally 2.0) - all while remaining architecturally simple enough that you could mostly keep the whole thing mapped out in your head. You get to learn about many modern concepts such as instruction pipelining, caches, etc., but it's not so complex as to overwhelm you, and you still get to play with funky, quirky stuff like the sound chip, which throws you back into the 1980s. You can also still get them pretty cheap: I paid ~€120 for a GBA SP (AGS-101) with case, dongles & a flash cart, so I can trivially run homebrew stuff on real hardware.
Nintendo DS did something similar, with the GBA’s ARM7 running in conjunction with the DS’s ARM9 and acting largely as an I/O coprocessor (IIRC) and handling backwards compatibility.
The Wii and Wii U go a little further; in addition to the main PPC CPU, there’s an ARM core handling auxiliary tasks (mostly security and I/O related, but it’s also involved in early boot IIRC). That one’s not there for backwards compatibility; it’s purely there as a coprocessor.
> E.g. Commodore designs "borrowed" this from bigger computer systems when they put fully programmable 6502-based computers in the external floppy drives for their 8-bit line (you can load code into the memory of the 1541 floppy drive and execute it)
Yup, Gwern mentions it in TFA. I had a Commodore 128 (well, I still have it and it's still working!): a Z80 chip in addition to the MOS ones (not just the CPU but the sound chip too).
I'd say though that back then none of these chips were added due to committees being infiltrated by three-letter agencies or their Chinese equivalents with the goal of introducing backdoors.
Bil Herd mentions in a few re-tellings that the C128 had power problems when they tried to use the CP/M cartridge originally intended for the C64, so it got "absorbed" into the design instead.
You should not forget Agnus, Gary, Paula, and Denise [1]; one of the main innovations of the Commodore Amiga was the inclusion of dedicated processors for video and audio.
I knew someone who even in the early 90s was supporting a stock control database he'd written on the Commodore PET that constructed queries in 6502 assembler from user input, and then loaded them onto the CPU in the drive which could then chunter away assembling output while the user-facing app continued to respond normally.
Don't fret about the 6510 - chip die shots show that the internal structure is essentially unchanged from the 6502 - they just added 6 GPIO pins, tristate address bus and a halt that works better.
My favorite multi-CPU system was the Apple ][ card for the Macintosh LC line, which is essentially an entire computer on an expansion card to allow schools to get newer computers and keep running their older educational software. Still looking for one for my LC II, someday.
Honorable mention to the MacCharlie, which was a PC in a box that stuck on the side of an original Macintosh and allowed running PC software over a serial connection. (I've heard there was a similar Amiga sidecar for the Commodore, but that's outside of my scope of experience.)
Some later Macintosh systems also had a PC Card -- an expansion card with an x86 CPU and chipset on it. The x86 system could (somehow!) use the RAM and storage of the host computer, and took over the system's video output with a weird adapter cable.
That's similar to the Bridgeboards for the Amiga 2000[1], with the added bit that the A2000 had two buses so you can see the board has connectors for both the Zorro bus on the Amiga and an ISA bus for the "PC side". For the Amiga 1000 there was also the "Sidecar"[2]
Sidecar was for the Amiga 1000 [1]. For the Amiga 2000 there was the somewhat simpler Bridgeboard / Janus, which plugged into both the Amiga Zorro bus and a separate ISA bus in the A2000, kept the whole thing internal, and let it access hardware on both buses (mostly used for I/O, and to show the video output from the PC side in a window on the Amiga).
The Sidecar plugged into the expansion port, which was basically just the internal bus exposed as-is.
A favorite related talk is Roscoe's keynote titled "It's Time for Operating Systems to Rediscover Hardware" [1], in which he discusses some of the outcomes/consequences of kernels (like Linux) largely sticking their heads in the sand on the issue of these massively heterogeneous SoCs.
Did the talk offer any solutions to this? It's not like any of the secondary processors he could be talking about have any kind of open-source support, or even public specifications of their programmability, ISAs, or similar.
One of the key points he makes is that this is now an entrenched problem. SoC designers looked to Linux and asked "hey, can you drive the power management stuff?" and got no real help and so instead just built it out of the purview of the AP clusters altogether. Hardware is essentially implementing an abstract interface that is convenient for Linux to use and hiding everything else that's too difficult to add.
The solution, and one that he specifically calls out, is that operating systems people need to work with hardware people to build better hardware and software together to build new computers. We got into this situation because everyone was just solving problems with the tools they readily had available to them. This is obviously easier said than done, but it does make now a very interesting time to work at one of those handful of large companies which produces their own custom SoCs and operating systems (I believe it's Apple, Google, and Microsoft) since you really have to take both operating systems and hardware forward simultaneously to solve the underlying issue.
As I understand, this is the promise of Oxide Computer company. While they're not making their own SoCs, they are attempting to design the hardware and software together. Co-founder Bryan Cantrill has gone on rants about the number of extra layers of computers in your servers.
This is a big security issue, because the operating system security model relies on assumptions that can be (and are) broken by these devices in ways that the OS can't control.
The appropriate model would be exactly the one described in the original article above - explicitly treat the computer as a network of devices, and have the OS on the main CPU acknowledge that it is effectively sending a message to e.g. RAM over a shared network that also contains multiple other devices, potentially untrusted.
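A toy sketch of that model (all names here are invented for illustration, not from any real OS): the OS addresses every device, RAM included, as a named peer on a shared fabric, and attaches a trust policy per peer instead of assuming the bus is benign:

```python
# Toy model: every memory/device access is an explicit message to a
# named peer, and the OS enforces per-peer trust policy at the fabric.

class Fabric:
    def __init__(self):
        self.peers = {}  # name -> (handler, trusted)

    def attach(self, name, handler, trusted=False):
        self.peers[name] = (handler, trusted)

    def send(self, name, msg):
        handler, trusted = self.peers[name]
        # Policy hook: untrusted peers only get plain reads/writes,
        # never requests that would let them reach arbitrary memory.
        if not trusted and msg["op"] not in ("read", "write"):
            raise PermissionError(f"op {msg['op']!r} denied for untrusted peer {name}")
        return handler(msg)

ram = {}
def ram_handler(msg):
    if msg["op"] == "write":
        ram[msg["addr"]] = msg["data"]
    return ram.get(msg["addr"])

bus = Fabric()
bus.attach("dram0", ram_handler, trusted=True)
bus.attach("wifi0", lambda msg: None)  # some untrusted peripheral

bus.send("dram0", {"op": "write", "addr": 0x1000, "data": 42})
value = bus.send("dram0", {"op": "read", "addr": 0x1000})  # 42
```

Real hardware approximates this with things like IOMMUs; the point is just that the OS stops pretending it has sole, direct ownership of memory.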
Linux already knows what firmware is and calls interfaces provided by firmware, so this isn't exactly a stretch for an OS model. The major issue is that all that firmware is proprietary and NDAs and patents block any OS or open source code from taking over the role of the proprietary firmware.
This problem isn't new.
I remember Creative Sound cards hogging the PCI bus longer than their allocated time slice, a sure way to destabilise the entire computer.
IMHO there's no solution; the only thing you can do is carefully select the HW providers you use and do lots of tests.
That reminds me of a blog post where the author managed to run Linux on an HDD - that is, not just using the HDD as the storage medium for a Linux install, but actually running the kernel on the HDD's internal I/O controller, the SoC inside the HDD that drives the platters and everything.
Unfortunately I could never find that post again, because all searches for running Linux on an HDD return the obvious results about using an HDD as install media, not as the computer.
The author left off peripherals too. Many mice, keyboards, and monitors contain microcontrollers that can be hacked. Anything that uses USB for more than power has at least a microcontroller, and anything that uses USB 2 or later almost certainly has an ARM processor in it.
The Cypress FX2 USB interface chip (USB 2.0 high speed) actually has an embedded 8051 core in it. They did finally upgrade it (to an ARM9, sigh) for their USB 3 variant of the chip.
Cheap mass-produced USB peripheral chips tend to contain an x51 core for control, plus some kind of high-performance datapath full of DMA engines and FIFOs if the peripheral in question actually needs to move large amounts of data around. Having an ARM core in there is somewhat rare because of the costs (both in terms of the IP license and in terms of silicon footprint).
Counterpoint re security issues: if the functions of these separate computers, like the HDD controller, were running on the main CPU, they would have to be integrated into the same software stack; maybe as a driver in the OS, maybe as CPU-internal firmware, whatever. This doesn't just grow a software stack that is already insanely huge; it also introduces new issues, because the HDD controller has to perform hard real-time tasks, adding even more complexity. It's also competing for time on the same processor, adding timing channels and other side channels.
I can't take this beyond a gut feeling, but that gut feeling tells me that separate "computers" are actually a good thing for security.
I think “can be” rather than necessarily “are”, but yes; delegation like this enables more robust separation of concerns, in addition to explicit time / space separation.
“Can be” because you have to be intentional about it. As the author implies, it’s easy to build multiple insecure computers, and when you call it “firmware” and hire people with “firmware” experience, this tends to negatively correlate with the security mindset…
If you are really worried about security - say you have a private key protecting millions of dollars of bitcoin - you are better off putting it on a separate 'computer' called a hardware wallet that isolates it from the general mess.
* Aggregates disparate devices (e.g. PC, smartphone and watch)
* Mentions functional units which could theoretically be treated as computers (e.g. they're Turing-complete) - but which aren't computers acting independently.
* Counts cores as different computers, but either a core is a computer or the system with multiple cores is a computer - the computation a multi-core system does is simply its cores running.
So basically, you'll be told about the on-Mobo chip for controlling its operations, and the Intel Management Engine and the AMD equivalent. Not worth the read IMHO.
I think the titular question is more rhetorical than an actual attempt to give you a count. Lots of people really do think there is one "computer" in their laptop, when in fact the question of what a computer even is poses a much more difficult question. I loved the note on the MMU because, aside from being a hilarious conference talk, it poses the question of where we can even draw the line: clearly, if the MMU can implement full programs, then surely it's wrong to draw the line at the borders of a core on a SoC. Modern hardware is really funky and cool, and this article provides a good introduction to the chaos.
And the quite capable computers that do things like being controllers for your hard drive, or wifi or other devices on your "computer" ?
A relevant example is the article "Evil never sleeps" https://arxiv.org/abs/2205.06114 about the ability of those "secondary computers" to function without the cooperation of the primary computer - in this case the iPhone, where they even work (sometimes maliciously) while the main device is "turned off".
Oh, yes, these definitely exist. There was even a paper a few years back in VLDB from Samsung about how they put the un-utilized ARM cores in their disk controllers to work on tasks off-loaded from a MySQL database.