How Many Computers Are in Your Computer? (gwern.net)
335 points by wrinkl3 7 months ago | 87 comments



The CPU also typically contains multiple independent CPUs to control things like power saving and turbo boost. Modern Intel CPUs have multiple 486-class cores: one for this, one for the ME, and one that vendors can use that has SMM access (which is as scary as the ME, or more so).

OpenPOWER is pretty neat because you can actually see what commercially relevant implementations of these firmwares look like: https://github.com/open-power/occ

A lot of NICs, like those from Broadcom and Mellanox, are also built around multiple general-purpose MIPS cores. The most common SAS HBAs from LSI/Broadcom were PowerPC SoCs. And SATA/SAS drives run multiple ARM Cortex-M MCUs for the bus interface, drive geometry, and, in the case of flash, the FTL.

The BMC (IPMI) is also a self-sufficient ARM computer.


> And SATA/SAS drives are running multiple ARM Cortex M series MCUs

http://spritesmods.com/?art=hddhack


To be totally pedantic, WD HDD controllers are Cortex-R cores, now shifting to RISC-V.


Compare it to the straightforward days of yore, when my hard drive controller only had a Z80 and my printer an MC68000.


Or my floppy drive had the same CPU and roughly the same clock as the computer...


C64? Or was there some other system that was true for?


I thought SMM was just a special mode of the main cpu itself?


Yeah, SMM is just Ring -2 on the main cores.


> The BMC (IPMI) is also a self sufficient ARM computer

ASPEED BMC SoCs (popular BMC SoCs also used in OpenPOWER systems) have a ColdFire (m68k) co-processor buried inside them. In OpenPOWER OpenBMC firmwares we use the ColdFire to bitbang a JTAG-ish protocol into the POWER chip.


Writable microcode?


If it's anything like Cell, it's just a bunch of fairly boring configuration information. If you want to dig in more, it's called the "Configuration Ring". It's a roughly 2-kbit SPI ring that holds stuff like memory timings and all the stuff that might get fused off for binning. On the PS3 you could try to turn on the SPE that was disabled by playing with the config ring, and see if you got lucky and had a perfect chip.


Does MIPS have some advantage for applications like NICs, or is it just a matter of using what they’ve been using?


It's just hard to beat MIPS perceptibly at perf/gate at the gate counts they're willing to expend, but that's more or less true of any not-braindead RISC. Beyond that, it's just one of those "it's what we've always done" things.

Sort of like how HBAs had i960s for an obnoxiously long time. For whatever reason, some Patterson & Hennessy 5-stage RISC gains a foothold in an embedded niche and it's hard to make an argument to switch. RISC-V has the best argument right now; it's hard to beat zero licensing fees.


There are probably another order of magnitude more computers in a 'computer' than Gwern estimates.

Consider:

- Power management chips (of which there are dozens) each containing programmable MCUs

- Batteries, of course, contain one or more MCUs each

- Clocks

- 'Glue' MCUs for things like DMA controllers and USB controllers. (USB controller firmware has been exploited from the port side, and it's not upgradeable)

- Supervisors

- RGB LEDs (!)

- SD cards have several (see Bunnie's work)

- Ethernet, WiFi and Bluetooth controllers contain several small and usually a few large computers each

Some of these have fully-baked ROM code, some are programmed and modifiable through flash, and some have firmware written at driver load time to save money.


Many of those chips are not programmable though, or are 'programmed' through things like voltage dividers.

At what point do you consider a chip a 'computer'? Would something like a voltage regulator or mux/demux count as an ASIC?


Many of those /are/ programmable. Here's the code we used to get code execution on an SD card bought at the markets in Shenzhen (the project I did with bunnie, referenced above): https://github.com/xobs/ax2xx-code

The factory needs to get code on there somehow, and usually the method isn't documented. Oftentimes it's not locked down.


The blog entry "On Hacking MicroSD Cards" (2013):

https://www.bunniestudios.com/blog/?p=3554


I'm considering anything that runs firmware. I wouldn't consider e.g. a regulator with configurable voltage via resistor divider to count.

I would consider a switch-mode PSU where the main switching FET is driven by a microcontroller running firmware to count (these can then have an I2C/SPI input to allow programmable voltage control). There are probably dozens of these in any modern computer.
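As a sketch of what "programmable voltage control" means in practice: digitally controlled regulators speaking PMBus encode the output voltage as a LINEAR16 word (a 16-bit mantissa scaled by a power-of-two exponent taken from the VOUT_MODE register). The exponent of -12 below is a common value but an assumption here; real hardware reports its own.

```python
# Convert between volts and the PMBus LINEAR16 encoding used for
# registers like VOUT_COMMAND (0x21). Exponent -12 is assumed; on
# real hardware, read it from the VOUT_MODE register first.
def volts_to_linear16(volts, exponent=-12):
    """Encode a voltage as a LINEAR16 mantissa word."""
    return round(volts / 2**exponent)

def linear16_to_volts(word, exponent=-12):
    """Decode a LINEAR16 mantissa word back into volts."""
    return word * 2**exponent

word = volts_to_linear16(1.2)   # word you'd write to set 1.2 V
print(hex(word))                # 0x1333
print(round(linear16_to_volts(word), 4))  # 1.2
```

The quantization step here (2^-12 V, about 0.24 mV) is why these parts can trim rails far more finely than a resistor divider.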


I once knew a hardware engineer who worked on a team designing cutting-edge equipment. His mantra was “just use a PIC chip”. He had all sorts of war stories about other engineers coming to him with power regulator designs they just couldn’t get right. He’d often add a PIC chip or two and have the firmware dev write a few routines in C. Problem solved.

PIC chips are small, they are everywhere, they are very useful for adding complex, software-defined behavior (including DSP) at the circuit component level, and they are full general-purpose CPUs with onboard RAM and programmable ROM.


Sprinkling a ton of micros around a board can create a small nightmare for the software developers, though: just working out what version of the firmware is running on each chip on the board can become a painful task. (If the software is 'simple enough', this may not be a problem, but it does tend to grow...)


Often you're unaware that these things contain firmware. They're just sold as e.g. 'bluetooth controller', but internally, there's a largish processor, a chunk of RAM and firmware in ROM mask. The firmware versioning is implicit in the chip revision.

(Of course, this just moves the problem around; it's not uncommon for a chip revision to break software elsewhere in the system.)


At what point do you consider a chip a 'computer'?

I would say anything that approaches a Turing-complete machine.


> At what point do you consider a chip a 'computer'?

IANACS, but I'd say the point at which it becomes capable of running Turing-complete languages. This probably covers anything that has a firmware, save maybe if there exist firmwares programmed in limited languages that can't execute arbitrary computation.


To add one more: the Macbook charging bricks have an MSP430 in them [1], "roughly as powerful as the processor inside the original Macintosh". It's definitely Turing complete; you may not get tons of access to the rest of the machine, but it might serve as a vantage point from which to launch power-based side channel attacks [2] (yes, you only get the overall power draw from this, but that's no barrier [3]).

[1] http://www.righto.com/2015/11/macbook-charger-teardown-surpr...

[2] https://en.wikipedia.org/wiki/Power_analysis

[3] https://m.tau.ac.il/~tromer/handsoff/ "Our attacks use novel side channels and are based on the observation that the "ground" electric potential in many computers fluctuates in a computation-dependent way."


Also interesting to consider: how many organisms are in a human being?

We think of ourselves as a single entity, but we are composed of many life forms: Mitochondria, viruses, gut bacteria...


Some human beings even have whole other human beings inside of them, though for the moment computer manufacturing is a bit more sterile than all that.


Modern CPUs have other CPUs inside them, they never give birth though.


The human body contains approximately as many bacterial cells as human cells: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4991899/


or, how many 'minds' are in a human being? based on split brain experiments, https://en.wikipedia.org/wiki/Split-brain I think you could make the argument that it's >1


A paper linked from that page, "MOV is Turing-complete", is as hilarious as it is clever.

https://www.cl.cam.ac.uk/~sd601/papers/mov.pdf

If you've ever wanted someone to accuse the x86 architecture of having too few registers, this is your URL.
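The paper's core trick is that memory itself does the comparing: loads and stores with computed addresses are enough to build conditionals. A minimal sketch of its equality gadget, simulated in Python with a dict standing in for memory:

```python
# Compare two values using only loads and stores (no cmp, no jumps):
# write 0 to address x, then 1 to address y, then read back from x.
# The read returns 1 exactly when x == y, because the second store
# overwrote the first only in that case.
def mov_equal(mem, x, y):
    mem[x] = 0     # mov [x], 0
    mem[y] = 1     # mov [y], 1
    return mem[x]  # mov result, [x]

print(mov_equal({}, 5, 5))  # 1
print(mov_equal({}, 5, 7))  # 0
```

From gadgets like this the paper builds branches (by mov-ing through lookup tables) and ultimately a whole Turing machine.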


It's not just a paper... there's a reference implementation!

https://github.com/xoreaxeaxeax/movfuscator


And an implementation of Doom at merely 7 hours per frame!

https://github.com/xoreaxeaxeax/movfuscator/tree/master/vali...


If you own a modern car, it typically has 50-70 microcontroller units. Some of them have several cores and DSP units. At the lower end you have 8051, Atmel AVR, or 8/16-bit Renesas chips.


Anything with a CAN or LIN bus connection has at least one MCU in it.


This is a very good article about the different computing units inside servers and smartphones. However I found the discussion of AI and neural nets to be confusing and seemingly unrelated.


> any floating point unit may be Turing-complete through encoding into floating-point operations in the spirit of FRACTRAN

Well that's the most interesting thing I learned today
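For reference, FRACTRAN itself is tiny: a program is just a list of fractions, and each step multiplies the current integer by the first fraction that yields an integer, halting when none does. A minimal interpreter (my own sketch, not from the article):

```python
from fractions import Fraction

def fractran(n, program, max_steps=10_000):
    """Run a FRACTRAN program: repeatedly multiply n by the first
    fraction giving an integer result; halt when none applies."""
    for _ in range(max_steps):
        for f in program:
            m = n * f
            if m.denominator == 1:
                n = int(m)
                break
        else:
            return n  # no fraction applies: halt
    return n

# Addition: start with 2^a * 3^b; the single fraction 3/2 converts
# every factor of 2 into a factor of 3, leaving 3^(a+b).
print(fractran(2**3 * 3**4, [Fraction(3, 2)]))  # 2187 == 3**7
```

The Turing-completeness claim for FPUs is that you can encode this multiply-and-test loop into floating-point operations, so any sufficiently capable FPU is, in principle, a computer on its own.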


MMU page faults and floating-point units are not independent programmable machines; their actions are coordinated by the processor into which they are integrated.


The page table walking hardware on x86 is pretty distinct from the main core, and is Turing complete.

https://github.com/jbangert/trapcc


That's a bit of a disingenuous exaggeration. It's not the page table walking hardware itself which is Turing complete -- it's the page fault handling mechanism as a whole, and trapcc involves playing around with trap gates and TSS from what I recall, which almost certainly involves executing microcode that goes through the main core execution units even if not a single x86 instruction is being executed.

So no, it's really not fair to say that the page table walker is its own computer. It's just a simple state machine. One tuned for high performance etc., but far from a fully-fledged computer.


Exactly: can we have the MMU continue to execute the steps of its Turing computation (having been configured via the trapcc program) if we clock the CPU down to zero Hz?

All independently identifiable and enumerable computers in a system should be capable of continuing with their computations if their clock remains running while other computers are clocked down.


The cores in each core complex in my phone can't be clocked separately. Are they not cores?


They can, they just aren't. Big difference.

They have other ways to independently halt. Obviously any core can carry out a calculation, like calculating some arbitrarily chosen recursive primitive function, while the other cores do something completely independent, or execute a halting instruction.


What about a Niagara T1? All eight cores share a single floating point execution unit. They literally can't be clocked separately. And it's not like they're eight "cores" that are really eight SMT threads that are veneers of the same core. They are completely separate cores except for the floating point unit.


Can you run this software if the core is removed or disabled?


Can you start or run your computer if the BIOS is removed or disabled? Or the power cord? No? Then does that mean the BIOS or power cord are the computer and the CPU is not a computer? Being necessary does not establish sufficiency, much less identity...


To remain within the scope, content and intent of my argument, I would say that the pieces identified as "BIOS", "power cord" and "CPU" do not constitute three separate computers.

I believe there is only one, but "at most one" is a compromise that works for me.


Can you run code on a GPU if the cores that can touch its configuration registers are disabled? Or the imaging DSPs in an SoC? Does that make them any less of processors?


> Can you run code on a GPU if the cores that can touch its configuration registers are disabled?

I suspect yes, if they are disabled after the configuration is settled.

That is to say, the GPU doesn't require their continuous presence and support in order to perform a complex calculation. We could set up a GPU to calculate, say, 100 digits of pi, and then clock down the other processors until it's done. Maybe it could indicate by an interrupt or something when that happens, so things can then be resumed.

It seems fair that anything that can calculate while other parts are suspended is countable as a separate computer, even if some of those other parts are required to prepare its initial configuration and inputs, or to announce the result.


The part of this that leapt out at me that I wish there were comments about was:

> ...custom on-die silicon (implemented by Intel on Xeon chips for a number of its biggest customers)...

I'm only aware of Amazon having custom CPUs, but I know no details about what's been customized.

Where might I be able to read more about this sort of thing?


... and how many are controlled by you?


None, probably, if you’re taking the definition of “open hardware” where you have full access to the microcode running on it and can modify it yourself.


What if it has no microcode running?


Microcode is necessary to decode instructions. When you give the CPU instructions, it decodes them into microcode, which is what actually specifies the sequence of electrical operations required to process each instruction.


CPUs without microcode exist. While microcode isn't exactly a new innovation, early mass-market CPUs were not implemented that way.


Right, but it wouldn't be possible for a modern CPU (for example: how would you retain compatibility with older architectures? how would you patch it if you find a critical bug in an instruction?)

Furthermore the parent was asking about having no microcode running on the CPU which is what I was speaking to. A CPU designed to use microcode can't be run without microcode. But yes, it is of course possible to design the CPU to not use microcode in the first place.


Microcode-free CPUs were mostly a RISC thing, plus basic CPUs like the Z80 and 6502; almost everyone else always had some form of microcode in them.


> CPUs without microcode exist. While microcode isn't exactly a new innovation, early mass-market CPUs were not implemented that way.

> Microcode free CPUs was mostly a RISC thing, and basic CPUs like Z80 and 6502

Those quotes aren't incompatible - the 6502 and Z80 are core to the early mass-market computer movement.

So whilst microcode-free may be rare now, there was a time when it was the norm.


The norm in the context of 8 bit home micros.


> Each of the 2-8 main CPU cores (...) has its own private cache (bigger than most computers’ RAM up to even recently)

That sounds odd. I take it the author is not talking about the L1, L2, and L3 caches, as they are typically only a few MB in size? So what is meant by this statement?


I remember when desktop computers had total RAM measured in single megabytes. It wasn't that long ago. (Get off my lawn, you kids!)


Single digit megabytes? Get off my lawn kid. I'm not that old but I remember when computers had total RAM measured in TWO digit KILObytes.


He is talking about L1 and L2 caches. They usually use more die area than the CPU cores themselves, even though they are only a few MB in size.


Oh, so the physical size on the die, not the size in bytes. That makes sense!


Actually, no - he meant the size in bytes. For example, my Digital Ocean droplet reports...

    # egrep '^(cache |model )'  /proc/cpuinfo
    model name      : Intel(R) Xeon(R) CPU E5-2630L 0 @ 2.00GHz
    cache size      : 15360 KB
To understand the context of the article's claim, let me just add that we used to make fun of Emacs people by saying the name stood for "Eight Megabytes And Constantly Swapping". That was a time when a machine with 8MB was a big one. You can see above that machines nowadays have caches larger than that!

And to compensate, we've now invented editors based on Javascript :-)


I think he’s using the broader definition of computer in this case. Think microwaves etc.


Huh. I personally have never treated the word computer as a synonym of CPU, APU, GPU, Turing Machine, etc. To me this article reads more like, "I want to redefine what computer means and then tell you all of the things that fall under my definition."

But yes, there are indeed many components that make up the modern computers that are our smartphones, laptops, et cetera.


No, a computer of course is a person who performs calculations.

https://en.m.wikipedia.org/wiki/Human_computer


Don't mean to nitpick, just wasn't able to parse this:

> Each of the 2-8 main CPU cores can run independently, shutting on or off as necessary, and has its own private cache (bigger than most computers’ RAM up to even recently) ...

Aren't CPU caches necessarily much smaller than RAM?


Yes, but caches now are bigger than RAM was before.


Ahhh, that makes sense, thanks!


Indeed, in the worst case, a PC can hold 20+ pieces of silicon running their own OSes. Even something as primitive as a PMIC, signal switch, or a USB hub can hold a fully featured RTOS.


There can also be analog computers, those might be harder to recognise.


They really dropped the ball not embedding an Xzibit in this paper.


> Alvin Nathaniel Joiner (born September 18, 1974), better known by the stage name Xzibit (pronounced "exzibit"), is an American rapper and broadcaster.

I'm not getting anything else from Google except https://en.wikipedia.org/wiki/Xzibit


"Yo Dawg" is an image macro series based on portrait shots of American hip hop artist Alvin Nathaniel Joiner (better known by his stage name Xzibit) and humorous captions that are composed around the recursive phrasal template "Yo Dawg, I herd you like (noun X), so I put an (noun X) in your (noun Y) so you can (verb Z) while you (verb Z)." Since rising to popularity in early 2007, the series has been considered one of the most well-known and longest lasting examples of recursive humor on the Internet.

https://knowyourmeme.com/memes/xzibit-yo-dawg?full=1


HN is the best place to find quality, (frequently) low-noise information about everything!

I was aware of the meme, but not its origin. Thanks.


it’s computers all the way down.


im a computer computer


"How many companies inside your computer?" is the more concerning problem to me. But that's just my tinfoil hat talking.


"How many binary blobs inside your computer?" is even more pressing. For the paranoid you could always buy a Purism[1] laptop with Trisquel[2] (pure libre Linux) running on it (if that's even possible, I'm not sure)

[1] https://puri.sm/

[2] https://trisquel.info/


Ooh, watch out. If you have a talking tin foil hat, it was likely provided to you by a corporate entity. (This is a poor joke about sourcing tin foil hats)


Is the tin from a conflict-free source? How can you be sure without a blockchain to ensure provenance? (This is an even poorer joke)


(This is the poorest joke)


"640 computers inside a computer ought to be enough for anyone." -Gill Bates


> Each of the 2-8 main CPU cores can run independently, shutting on or off as necessary, and has its own private cache (bigger than most computers’ RAM up to even recently)

Can someone elaborate on this? Do CPUs really have gigs of on-die RAM for their own private use? What are those used for? Or is this just roundaboutly referring to the usual cache hierarchy? If it's the latter, AFAIK CPUs have under tens of megabytes of cache, and it's been decades since the average desktop had 32 MB of RAM--a bit far from recent.


He's talking about "recently" in the age of computing hardware, not the last few years. You need to wait until around 1995[1] to find a mac with more standard RAM than an i9 has in its L2 cache[2].

[1] https://en.wikipedia.org/wiki/Power_Macintosh_5200_LC [2] https://en.wikichip.org/wiki/intel/core_i9/i9-7900x


I see. I was hoping there was something more interesting going on, such as CPU-private bookkeeping... or perhaps supporting that JVM that lives in the management engine. :)



