OpenPOWER is pretty neat because you can actually see what commercially relevant implementations of these firmwares look like https://github.com/open-power/occ
A lot of NICs, such as those from Broadcom and Mellanox, are also built entirely around multiple general-purpose MIPS cores. The most common SAS HBA from LSI/Broadcom was a PowerPC SoC. And SATA/SAS drives run multiple ARM Cortex-M series MCUs for the bus interface, drive geometry, and, in the case of flash, the FTL.
The BMC (IPMI) is also a self-sufficient ARM computer.
ASPEED BMC SoCs (popular BMC SoCs also used in OpenPOWER systems) have a ColdFire (m68k) co-processor buried inside them. In OpenPOWER OpenBMC firmwares we use the ColdFire to bitbang a JTAG-ish protocol into the POWER chip.
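"Bitbanging" here just means toggling the clock and data lines in software, one edge at a time, instead of using dedicated serial hardware. A toy sketch of the mechanism (the shift-register "target" below is a made-up stand-in, not the real POWER-side scan chain):

```python
class ShiftRegisterTarget:
    """Toy stand-in for a scan chain: on each clock, shift TDI in and
    present the oldest captured bit on TDO. The real POWER-side protocol
    is far more involved; this only illustrates the mechanism."""

    def __init__(self, length: int = 8):
        self.bits = [0] * length

    def clock(self, tdi: int) -> int:
        tdo = self.bits[-1]               # oldest bit appears on TDO
        self.bits = [tdi] + self.bits[:-1]
        return tdo


def bitbang_shift(target: ShiftRegisterTarget, tdi_bits):
    # Bitbanging: the CPU drives one bit per software-generated clock
    # edge, sampling TDO as it goes.
    return [target.clock(b) for b in tdi_bits]


# Shift a pattern through a 4-bit chain; it re-emerges 4 clocks later.
print(bitbang_shift(ShiftRegisterTarget(4), [1, 0, 1, 1, 0, 0, 0, 0]))
```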
Sort of like how HBAs had i960s for an obnoxiously long time. For whatever reason, some Patterson & Hennessy 5-stage RISC gains a foothold in an embedded niche and it's hard to make an argument to switch. RISC-V has the best argument right now; it's hard to beat zero licensing fees.
- Power management chips (of which there are dozens) each containing programmable MCUs
- Batteries, of course, contain one or more MCUs each
- 'Glue' MCUs for things like DMA controllers and USB controllers. (USB controller firmware has been exploited from the port side, and it's not upgradeable)
- RGB LEDs (!)
- SD cards have several (see Bunnie's work)
- Ethernet, WiFi and Bluetooth controllers contain several small and usually a few large computers each
Some of these have fully-baked ROM code, some are programmed and modifiable through flash, and some have their firmware uploaded at driver load time to save money.
At what point do you consider a chip a 'computer'? Would something like a voltage regulator or mux/demux count as an ASIC?
The factory needs to get code on there somehow, and usually the method isn't documented. Oftentimes it's not locked down.
I would count a switch-mode PSU where the main switching FET is driven by a microcontroller running firmware (these can then take an I2C/SPI input to allow programmable voltage control). There are probably dozens of these in any modern computer.
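For the I2C-controlled case, PMBus is the usual standard: the host writes a VOUT_COMMAND word holding the target voltage in LINEAR16 fixed-point. A sketch of just the host-side encoding (the exponent is device-specific, reported via the VOUT_MODE register; -12 below is a common value, not universal):

```python
VOUT_COMMAND = 0x21  # PMBus command code for setting output voltage

def linear16_encode(volts: float, exponent: int = -12) -> int:
    """Encode a voltage as PMBus LINEAR16: raw = volts * 2**(-exponent).
    The exponent must match what the device reports in VOUT_MODE."""
    raw = round(volts * 2 ** (-exponent))
    if not 0 <= raw <= 0xFFFF:
        raise ValueError("voltage out of range for this exponent")
    return raw

# 1.2 V with the common -12 exponent: the word you'd write to VOUT_COMMAND.
print(hex(linear16_encode(1.2)))  # → 0x1333
```

The actual I2C write would go through whatever SMBus interface your platform exposes; the encoding above is the portable part.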
PIC chips are small, they are everywhere, they are very useful for adding complex, software-defined behavior (including DSP) at the circuit component level, and they are full general-purpose CPUs with onboard RAM and programmable ROM.
(Of course, this just moves the problem around; it's not uncommon for a chip revision to break software elsewhere in the system.)
I would say anything that approaches a Turing-complete machine.
IANACS, but I'd say the point at which it becomes capable of running Turing-complete languages. That probably covers anything that has firmware, except perhaps firmware written in limited languages that can't express arbitrary computation.
 https://m.tau.ac.il/~tromer/handsoff/ "Our attacks use novel side channels and are based on the observation that the "ground" electric potential in many computers fluctuates in a computation-dependent way."
We think of ourselves as a single entity, but we are composed of many life forms: Mitochondria, viruses, gut bacteria...
If you've ever wanted someone to accuse the x86 architecture of having too few registers, this is your URL.
Well, that's the most interesting thing I've learned today.
So no, it's really not fair to say that the page table walker is its own computer. It's just a simple state machine. One tuned for high performance etc., but far from a fully-fledged computer.
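To see why it's "just a state machine": a walk is essentially a fixed loop over the paging levels, indexing each level with a slice of the virtual address. A toy x86-64-style 4-level model (dicts stand in for page tables in memory; the numbers match the usual 9-bits-per-level, 4 KiB-page layout):

```python
def walk(root, vaddr, levels=4, bits_per_level=9, page_shift=12):
    """Toy page-table walk: returns the physical address, or None on
    a missing entry (a page fault). No decisions beyond index-and-load,
    which is why dedicated hardware handles it fine."""
    table = root                      # root pointer (CR3, in x86 terms)
    shift = page_shift + bits_per_level * (levels - 1)
    mask = (1 << bits_per_level) - 1
    for _ in range(levels):
        entry = table.get((vaddr >> shift) & mask) if isinstance(table, dict) else None
        if entry is None:
            return None               # page fault: no mapping
        table = entry                 # next-level table, or frame base at the leaf
        shift -= bits_per_level
    return table | (vaddr & ((1 << page_shift) - 1))  # frame base | page offset


# Map virtual page with last-level index 5 to physical frame 0x1234000.
pt = {0: {0: {0: {5: 0x1234000}}}}
print(hex(walk(pt, (5 << 12) | 0x42)))  # → 0x1234042
```

Every step is the same load-and-index operation; there's no fetch/decode/execute cycle anywhere, which is the commenter's point.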
All independently identifiable and enumerable computers in a system should be capable of continuing with their computations if their clock remains running while other computers are clocked down.
They have other ways to halt independently. Obviously any core can carry out a calculation, like computing some arbitrarily chosen primitive recursive function, while the other cores do something completely independent, or execute a halting instruction.
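For instance, any single core can grind through a primitive recursive function like the factorial entirely on its own, with no involvement from its siblings (a trivial sketch of such busy-work):

```python
def fact(n: int) -> int:
    # Factorial by primitive recursion: fact(0) = 1, fact(n) = n * fact(n - 1).
    # Self-contained work one core can do while the others idle or halt.
    return 1 if n == 0 else n * fact(n - 1)

print(fact(10))  # → 3628800
```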
I believe there is only one, but "at most one" is a compromise that works for me.
I suspect yes, if they are disabled after the configuration is settled.
That is to say, the question is whether the GPU requires their continuous presence and support in order to perform a complex calculation. We could set up a GPU to calculate, say, 100 digits of pi, and then clock down the other processors until it's done. Maybe it could signal with an interrupt or something when it finishes, so things can be resumed.
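As a concrete stand-in for that workload (written for a CPU here; a real GPU kernel would be structured very differently), 100 digits of pi can be computed with nothing but integer arithmetic via Machin's formula, pi = 16·arctan(1/5) − 4·arctan(1/239):

```python
def machin_pi(digits: int) -> str:
    """Decimal digits of pi via Machin's formula, integer arithmetic only."""
    guard = 10                          # extra digits to absorb truncation error
    scale = 10 ** (digits + guard)

    def arctan_inv(x: int) -> int:      # arctan(1/x), scaled by `scale`
        x2 = x * x
        total = term = scale // x       # first Taylor term, 1/x
        n, sign = 3, -1
        while term:
            term //= x2                 # next power of 1/x
            total += sign * (term // n)
            n, sign = n + 2, -sign
        return total

    pi = 4 * (4 * arctan_inv(5) - arctan_inv(239))
    s = str(pi // 10 ** guard)          # drop the guard digits
    return s[0] + "." + s[1:]

print(machin_pi(100))
```

Once the loop is running, it needs no input from anything else until it produces the answer, which is exactly the "countable as a separate computer" criterion above.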
It seems fair that anything that can calculate while other parts are suspended is countable as a separate computer, even if some of those other parts are required to prepare its initial configuration and inputs, or to announce the result.
> ...custom on-die silicon (implemented by Intel on Xeon chips for a number of its biggest customers)...
I'm only aware of Amazon having custom CPUs, but I know no details about what's been customized.
Where might I be able to read more about this sort of thing?
Furthermore, the parent was asking about having no microcode running on the CPU, which is what I was speaking to. A CPU designed to use microcode can't run without microcode. But yes, it is of course possible to design a CPU that doesn't use microcode in the first place.
> Microcode free CPUs was mostly a RISC thing, and basic CPUs like Z80 and 6502
Those quotes aren't incompatible - the 6502 and Z80 are core to the early mass-market computer movement.
So whilst microcode-free may be rare now, there was a time when it was the norm.
That sounds odd. I take it the author is not talking about the L1, L2, and L3 caches, since those are typically only a few MB in size? So what is meant by this statement?
# egrep '^(cache |model )' /proc/cpuinfo
model name : Intel(R) Xeon(R) CPU E5-2630L 0 @ 2.00GHz
cache size : 15360 KB
But yes, there are indeed many components that make up the modern computers that are our smartphones, laptops, et cetera.
> Each of the 2-8 main CPU cores can run independently, shutting on or off as necessary, and has its own private cache (bigger than most computers’ RAM up to even recently) ...
Aren't CPU caches necessarily much smaller than RAM?
I'm not getting anything else from Google except https://en.wikipedia.org/wiki/Xzibit
I was aware of the meme, but not its origin. Thanks.
Can someone elaborate on this? Do CPUs really have gigs of on-die RAM for their own private use? What are those used for? Or is this just a roundabout way of referring to the usual cache hierarchy? If it's the latter, AFAIK CPUs have under tens of megabytes of cache, and it's been decades since the average desktop had only 32 MB of RAM, which is hardly recent.