'Demonically Clever' Backdoor Hides Inside Computer Chip (wired.com)
360 points by Digit-Al 9 months ago | 114 comments



I'm still waiting for someone - say Raytheon, General Dynamics, Northrop Grumman or Boeing - to find that those SMD capacitors or inductors they used for their hardware ended up being more than just simple passive components. The amount of space available in the package is more than enough to hide some circuitry which can be used for other purposes ranging from bridging air gaps to denial of service. These parts are used in positions where ample power is available for such purposes. The device could be triggered by outside signals, by specifically crafted power profiles, by simple timers or other means. They could be designed to detect the location where they're used in the circuitry and act accordingly.


Obviously not at SMD scale, but I have seen fun videos of seemingly simple circuits with LEDs and switches connected in series that behaved completely differently (e.g. each switch in the series chain controlled a different LED). It turned out that the switches and LEDs had been modified with clever frequency generators and filter circuits.

edit: https://www.youtube.com/watch?v=RkTvDjhImwo


He didn't cheat? That's impossible if they're really switches.



SMD capacitors are microphonic. They change capacitance slightly when flexed. They also act as speakers, which could be used to exfiltrate data across a (small) air gap.
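As a rough illustration of that acoustic channel, here's a toy sketch that maps each nibble of a payload to a near-ultrasonic tone (simple multi-frequency keying). The frequencies, sample rate, and symbol length are all invented for illustration; a singing capacitor would be far weaker and noisier than a clean synthesized tone.

```python
import math

# Invented parameters: near-ultrasonic carrier, 100 Hz spacing per symbol.
BASE_HZ, STEP_HZ, RATE = 18_000, 100, 48_000

def nibble_tone(nibble, ms=50):
    """Return samples of the tone encoding one 4-bit symbol."""
    freq = BASE_HZ + nibble * STEP_HZ
    n = RATE * ms // 1000
    return [math.sin(2 * math.pi * freq * t / RATE) for t in range(n)]

def encode(data):
    """Encode bytes as a sample stream, high nibble first."""
    samples = []
    for byte in data:
        samples += nibble_tone(byte >> 4) + nibble_tone(byte & 0xF)
    return samples

sig = encode(b"hi")
print(len(sig))  # 4 nibbles x 2400 samples = 9600
```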


I tried making my own guitar amplifier in the early 90s. It had a really mysterious feedback problem. Trying to find the cause, I accidentally brushed across one of the decoupling caps with my scope probe, and it went "doink." It was a big goober of a ceramic disc cap. Lesson learned.


Not just SMD; pretty much any tuned circuit will ring when tapped (esp. if multi-layer caps are involved, whether SMD or not).


Raytheon has already been through this [1]:

> Many [counterfeits] have been seized, but any that remain in use pose the risk of causing “components to melt, burst, rupture, catch fire or explode, resulting in property damage, personal injury and death,”

[1]: https://www.publicintegrity.org/2011/11/07/7323/counterfeit-...

Edited to add:

You might think this is different, but controlling the supply chain limits these kinds of attacks.

One way to do it is to control the chain of custody so there's a paper trail on who had access to the parts and when. The other way the pentagon is doing it is putting "dielets" into the chips so they can be verified later.

https://www.scientificamerican.com/article/the-pentagon-rsqu...

In the end it all comes down to controlling the supply chain.


> In the end it all comes down to controlling the supply chain.

And yet people got all up in arms about that guy who got convicted for counterfeiting freely downloadable Windows restoration disks by outsourcing the job to some random shop in China and making them look like official disks.

If anything, he got off easy. The world does not need more factory-backdoored OS installations.


It is for this reason that DoD is very careful about the source of all the components in things that it purchases. It ensures that as much as feasibly possible, the individual components are built in the US.


There are a lot of "ideal diode" devices being used now -- they save power by eliminating the voltage drop associated with passive diodes. Would be simple to hide extra circuitry in these already-active devices (they harvest a bit of inline power) -- and they're typically used in high-power control circuits, so the perfect DoS candidate.


Such shenanigans would be noticed as soon as someone did a routine X-ray inspection at the board house.


"Routine"? Is that a common thing?


Things that are not routine in commercial manufacture become more common in Defence manufacture. The volumes are lower and per-unit cost-pressure is less, while reliability requirements are higher.


What you're proposing isn't really possible. Sure, you could hide a microcontroller in a capacitor or diode package, but those components are rudimentary, with very simple functions. A diode is like a check valve in plumbing: it allows electricity to flow in a single direction. Capacitors are slightly more complicated, but still two-pin components.

Imagine you're a pipe fitting installed in something. Based on the water flowing through you, would you be able to distinguish whether you were in a house, a fire engine, an office complex, or a high-rise? Would you be able to ascertain what function you served in the system?


> distinguish if you were in a house, fire engine, office complex

If water is used primarily between 9-5 in large quantities: office building

If water is used during breakfast and dinner: home

Fire engines also have very distinct water usage patterns and pressures.

I bet you could easily get to >90% accuracy quickly.
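A toy sketch of that heuristic, with invented hourly profiles and thresholds, just to show the shape of the classifier:

```python
# Classify a building type from a 24-hour water-usage profile
# (liters per hour). All profiles and thresholds are made up.

def classify(hourly_usage):
    """hourly_usage: list of 24 numbers, liters used in each hour."""
    total = sum(hourly_usage)
    if total == 0:
        return "unknown"
    work = sum(hourly_usage[9:17]) / total       # 9am-5pm share
    meals = (sum(hourly_usage[6:9]) + sum(hourly_usage[17:20])) / total
    peak = max(hourly_usage)
    if peak > 1000:          # sudden enormous draws: fire engine
        return "fire engine"
    if work > 0.7:           # usage concentrated in office hours
        return "office"
    if meals > 0.5:          # usage clustered at breakfast/dinner
        return "home"
    return "unknown"

office = [0] * 9 + [40] * 8 + [0] * 7
home = [0] * 6 + [30] * 3 + [2] * 8 + [30] * 3 + [0] * 4
print(classify(office))  # office
print(classify(home))    # home
```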


This reminds me of only recently learning about the existence of light-level geolocation, which uses nothing but an accurate clock and a record of ambient light level to determine the longitude and latitude of migratory birds - https://en.wikipedia.org/wiki/Light_level_geolocator
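The underlying arithmetic is simple: the Earth rotates 15 degrees per hour, so the UTC time of local solar noon (the midpoint between dawn and dusk in the light record) gives longitude, and day length then constrains latitude. A minimal sketch of the longitude half, ignoring the equation of time and seasonal effects:

```python
# Longitude from the timing of solar noon in a light-level record.
# Dawn/dusk times are in fractional UTC hours; this simplification
# ignores the equation of time and atmospheric effects.

def longitude_from_noon(dawn_utc_h, dusk_utc_h):
    noon_utc = (dawn_utc_h + dusk_utc_h) / 2.0
    # Solar noon at 12:00 UTC means the Greenwich meridian;
    # each hour earlier is 15 degrees further east.
    return (12.0 - noon_utc) * 15.0

# Dawn 04:00 UTC, dusk 16:00 UTC -> solar noon 10:00 UTC -> 30 deg E
print(longitude_from_noon(4.0, 16.0))  # 30.0
```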


The parent comment referenced two use-cases for such nefarious components. Both of which seem quite possible:

1. Bridging an air gap. This would basically be a radio repeater that lets you reach other compromised components. It just needs power, and could certainly fit within one of these component packages.

2. Denial of service. The component may be a simple diode, but if it stops working, you could potentially disable a weapon, or maybe even cause it to self-destruct.

That said, I'm sure that defense contractors are very careful about where they source components. They likely have spies placed within their suppliers, and perform regular audits and teardowns of components.


>That said, I'm sure that defense contractors are very careful about where they source components.

Not as careful as you might expect. "Fake" IC components were found in a military 737 [1]. Trusted ICs are a hot topic and the big players in the defense industry are working towards solutions. It's an interesting topic if you have time to read their academic papers.

[1] https://military.com/defensetech/2011/11/08/counterfeit-part...


> I'm sure that defense contractors are very careful about where they source components.

I thought that's why military hardware is so expensive. You're not just paying for a radio or whatever, you're paying for an entire hardened supply chain with everything sourced from trusted manufacturers. I guess maybe that's not the case any more...



Bridging an air gap could also be possible by simply producing a device that looks to x-ray to be a capacitor but is crafted of materials specifically selected for their ability to physically change in some dimension that the attacker is able to measure remotely.

Change physical size when charged/discharged, measure sound pressure via lasers on windows or microwaves sensing cavities in concrete walls.

The victim doesn’t even need to be specifically targeted if they use commodity components whose design is known. Just arrange for the company producing parts to select a specific recipe.


Decoupling capacitors are in a unique position: practically every bit-flip in a chip turns into a glitch signal on the power lines. Normally the capacitor is there to suppress those signals by shorting them to ground, but it's also ideally positioned to analyse them.


The component itself doesn’t have to distinguish anything. Imagine if the component just covertly transmitted the water flow. Most would be useless, but the one installed in the CEO’s toilet could give you some useful info.


Such a device would be severely restricted in what it could exfiltrate just based on bandwidth limitations. I'm not super knowledgeable on RF, but I believe that something the size of an SMD resistor or capacitor would be too small to conceal any sort of radio with significant bandwidth (not to mention having to evade RF emissions compliance testing, but maybe some kind of exotic backscatter radio could work). Another possibility would be to modulate the power line signal, but again there's not enough bandwidth to transmit much that isn't already being leaked.

Perhaps information on power consumption by the CPU, which has been used to recover encryption keys in some attacks, but that's already being leaked in most cases. The most likely scenario as I see it would be parts that deliberately amplified unintentionally leaked information (like high-resolution power usage information) more than normal, but it seems to me that normal compliance testing would detect a lot of that.
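As a rough illustration of how power-consumption leakage gives up a key, here's a toy correlation power analysis on simulated traces. Each "trace" is just the Hamming weight of (plaintext XOR key) plus Gaussian noise; a real attack would target a nonlinear step such as an S-box output, and every number here is invented.

```python
import random

random.seed(1)
SECRET = 0x5A   # the key byte the "device" leaks
N = 2000        # number of simulated power traces

def hw(x):
    """Hamming weight: number of set bits."""
    return bin(x).count("1")

plaintexts = [random.randrange(256) for _ in range(N)]
# Each trace leaks HW(p ^ key) plus measurement noise.
traces = [hw(p ^ SECRET) + random.gauss(0, 0.5) for p in plaintexts]

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    dx = sum((x - mx) ** 2 for x in xs) ** 0.5
    dy = sum((y - my) ** 2 for y in ys) ** 0.5
    return num / (dx * dy)

# For each key guess, correlate predicted leakage with measured traces;
# the correct guess correlates best.
best = max(range(256),
           key=lambda g: corr([hw(p ^ g) for p in plaintexts], traces))
print(hex(best))  # 0x5a
```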

I can imagine a bug or something being hidden in a large power supply capacitor, which can have volumes of several milliliters. Maybe capacitors with a hidden transmitter mislabeled as higher values to explain the extra size.


Maybe you could combine those two ideas. Have a really tiny bug in the logic electronics somewhere that transmits at extremely low power, then have a bigger thing in a power capacitor that receives those and retransmits the data at higher power. IANAEE, though, so I could be talking nonsense.

As far as the size goes, is there any tradeoff between size and price? If there’s a more expensive design that occupies less volume, an attacker could use that design, use the extra space for the bug, then sell the whole thing as if it were the cheaper version. You probably couldn’t do this for all of your capacitors, since it would cost a lot. Maybe 1% would be enough to have a good chance of getting a bug somewhere interesting.


Capacitor size is pretty much set by the technology, but where possible smaller equivalent parts are usually cheaper. You can't spam this in all of your products anyway because it makes it too likely that you'd be discovered, plus again there's the issue of making it pass spurious emissions testing. This sort of thing would probably only be used in targeted attacks.


I'll bite; the MCU in the cap (let's assume a tricky case, like a decoupling cap) could transmit by briefly shorting the PSU (one bit at a time).
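Sketched as on-off keying: a 1 bit is a brief load pulse, a 0 bit is silence, each in a fixed time slot. The timings are invented (a real implant would be an analog circuit, not software):

```python
BIT_SLOT_MS = 10  # one bit per 10 ms -> 100 bit/s: glacial but covert

def to_pulses(data):
    """Yield (offset_ms, pulse) pairs, MSB first, for each byte."""
    t = 0
    for byte in data:
        for i in range(7, -1, -1):
            yield (t, (byte >> i) & 1)
            t += BIT_SLOT_MS

def from_pulses(pulses):
    """Receiver side: reassemble bytes from the observed pulse train."""
    bits = [b for _, b in pulses]
    return bytes(int("".join(map(str, bits[i:i + 8])), 2)
                 for i in range(0, len(bits), 8))

print(from_pulses(to_pulses(b"k")))  # b'k'
```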


A friend of a friend designed chips in the 80s. One of his chips became a high-end audio component if hooked up in the right way, unbeknownst to his employer. Apparently he had a very good home hifi system.


Chip! Pinout! Deets!


IIRC, lots of things in the old days (1970's-ish? Popular Electronics, for example) talked about using a 4009 or 4049 as an audio amplifier.

You put a feedback resistor in place from input to output to bias it and then capacitively couple the input and output.

Metal-gate CMOS was particularly good for this as it had an operating voltage from <3V to about 18V.


CD4009 is a buffer. If you connect its input to its output, then you get a bit of SRAM, not an amplifier.

CD4049 is an inverter. If you do the trick above then you indeed get an amplifier, nonlinear and with poorly-controlled gain but an amplifier nonetheless. This isn't some kind of Easter egg; an inverter is just a high-gain amplifier that's usually allowed to saturate, so it fundamentally just does that.

Such amplifiers are not very good, but they're fast-ish and cheap. They're often used for crystal oscillators. The preferred logic series these days is 74HCU. That's "unbuffered" logic, where your inverter really is just one CMOS inverter, and not a string of three like usual. That makes the gain more stable, since the three inverters wouldn't match perfectly, and would each end up biased somewhere different.


> CD4049 is an inverter. If you do the trick above then you indeed get an amplifier, nonlinear and with poorly-controlled gain but an amplifier nonetheless.

The CD4049 hex inverter chip is a popular amplifier chip in some guitar distortion pedals due to all the reasons you mentioned (non-linear and poorly controlled gain). There were a few designs based on connecting several of the inverter stages in series.

The wiring of the MOSFETs inside bears a superficial similarity to a class-AB push-pull dual-pentode tube power amplifier, and it has similar qualities in the sound it produces.

Here's one article on such a design: http://pedalprojects.blogspot.com/2013/04/how-does-red-llama...


Huh? Is my memory faulty and I got it wrong?

I thought that the 4009/4049 were the hex inverters and that the 4010/4050 were the hex buffers.


Oops; you're totally right. I read the datasheet title, and didn't read to the subtitle. In any case, the trick is alive and well with 74HCU logic, good in to the tens of MHz whenever exact gain and distortion don't matter.

http://www.ti.com/lit/ds/symlink/cd4009ub-mil.pdf


> In any case, the trick is alive and well with 74HCU logic, good in to the tens of MHz whenever exact gain and distortion don't matter.

But you don't get the voltage tolerance with 74HCU (6V limit).

This was one of the interesting things about the old 4000 series: because they had metal gates and thick oxide, they tended to work from less than 1V (probably not for analog, though ...) the whole way up to 20V (convenient for two 9V batteries).

Old 4000 series were also notoriously vulnerable to static discharge, so I suspect that they didn't have much in the way of ESD protection (if any at all).


This is quite clever, since the required addition to the "mask" (actually multiple mask layers) to implement such a function would be quite simple.

During chip design, there are tools (DRC and LVS) that very carefully verify that the mask has exactly what the designers intend it to have, not a single transistor more or less. This abstract mask is called GDSII[1] (or perhaps a successor such as OASIS, the principle is the same).

Once upon a time the layers of the GDSII could be used directly to build ICs. But now chip design rules are too tricky, so the masks are tweaked post-tapeout, in order to be able to get a decent yield of functioning chips.

Still, it is possible to take actual silicon and extract the circuitry from it. This, while quite difficult to do, is routinely done by "reverse engineering" companies.

If it's your own chip you already know exactly what to expect, you actually specified every transistor there. So it would be much "easier" (ha ha) to reverse engineer to verify that your actual chip has all the circuitry, no more, no less, that you intended it to have. I wrote a little about this in an HN discussion a few years ago.[2]

That's the theory. But in reality, does any company reverse engineer their own chips to check? Highly unlikely. Which means they're implicitly trusting TSMC (or whoever the fab is).

Not only that, but what's to keep some bad actor at TSMC from inserting this circuitry into your chips, perhaps 6 months after initial production? Must you repeatedly keep reverse engineering your own chips to make sure they're still unmodified?

But, as I mentioned in my earlier post, there are many IP blocks in current silicon that come from third-party suppliers. Does anyone fully understand the operation of every transistor in every IP block they bought, or they inherited from an earlier design? If I were to backdoor an IC, I'd use the third-party IP method. It would be much easier to sneak something in that way.

[1] https://en.wikipedia.org/wiki/GDSII [2] https://news.ycombinator.com/item?id=11880935#11891857


A few years ago I attended a talk given by an engineer at one of the largest American semiconductor companies. After the talk, someone asked if they were able to verify that the chips they get back from the fabs are made as specified. The answer was that they couldn't, but that the problem was considered a serious concern and that their company was investing resources into a solution.


And what about all the black-box IP that gets added after you design your functionality? There could be absolutely anything in that test logic added by the vendor, or in that random 'process measurement' cell added by the fab. I don't see how the verification required is remotely possible.


> If I were to backdoor an IC, I'd use the third-party IP method. It would be much easier to sneak something in that way.

I'm not a hardware designer, but I imagine that restricting a backdoor to a specific block might make it much harder to cause the rest of the hardware to behave in a specific way?


I interviewed at a company called Chip Scan which is a startup that aims to detect backdoors in chip designs. I didn't end up accepting, but it did sound like an interesting job.


I've heard rumors of an even more insidious backdoor.

Screwing with the dopants slightly in order to bias the HRNG slightly one way. Wouldn't show up even under a full visual inspection.


Sounds like

Becker, Regazzoni, Paar, Burleson. "Stealthy dopant-level hardware trojans." Proceedings of CHES, August 2013.

https://sharps.org/wp-content/uploads/BECKER-CHES.pdf


Never trust a chip you didn't fab yourself. This is seriously clever work. Bribe the right people at TSMC, and all of Apple's chips have a built-in side channel vector. Or any other fabless organization.


This might be a silly question, but even if you do fab it 'yourself', does that solve the problem? It might make it harder, but people can still be bribed, or have other pressures applied to them.


Would you put a back door in your own chip? I am not sure how a bribe would work if you're the target.


I suspect that you're thinking of bribing an organization. You're correct: it's hard to bribe an organization to act against its own interest. But instead, think about bribing one or two individuals within the organization. That's much more doable.


You can get around this. A biased HRNG is still an RNG, and there are other sources of entropy on a system anyways, so a decent OS can make exploiting a bias in an RNG infeasible.


Do the RNGs for fixed hardware devices like hardware security modules typically mix entropy from several sources?


I assume that HSMs, as discrete hardware devices, certainly use their hardware RNG only as one input to some /dev/random-like CSPRNG. On the other hand, I would also assume that single-chip "HSMs" (smartcards...) do not, although I vaguely remember that for the TPM (which is, hardware-wise, a smartcard with a weird host interface) the RNG output is somehow dependent on the state of the attestation registers.


Sure, many HSMs/smartcards/tokens lack sophisticated RNGs. And NIST's SP800-90 has proven weak on this.


I'm probably biased because I spent a good part of a quarter designing a reasonably secure CSPRNG for a smartcard chip without a hardware RNG (and ended up exploiting essentially every cross-clock-domain communication as an entropy source), and thus I assume that typical smartcard vendors don't care about that (too much work), while HSM vendors simply leverage the infrastructure of whatever (RT)OS they use and probably harden it somewhat.


Do you have a source for this? That sounds incredibly implausible--analog tolerances are basically never tight enough that you'd rely on them for something like the DC value of an RNG's output, over temperature, normal production spread, etc. Any hardware RNG at least goes through digital processing to remove that bias, and usually goes through a cryptographically strong PRNG.

As an aside, hardware RNGs are one of the only places you can put an undetectable backdoor in a design, since they can't be verified (since they're deliberately non-deterministic). If you do hash(stuff) ^ HRNG, then the CPU can make the result whatever it wants. If you do hash(stuff ^ HRNG) then it can't.
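That asymmetry is easy to demonstrate. In the sketch below (hypothetical values throughout), an HRNG that can predict hash(stuff) chooses its output to force the XOR construction to any target value, while the hash-then-mix construction would require inverting SHA-256:

```python
import hashlib

def h(b):
    return hashlib.sha256(b).digest()

stuff = b"boot-time entropy"      # placeholder for deterministic inputs
target = b"\x00" * 32             # the output the attacker wants to force

# Construction 1: hash(stuff) ^ HRNG -- backdoorable. A malicious HRNG
# that knows hash(stuff) emits hash(stuff) ^ target.
evil_hrng = bytes(a ^ b for a, b in zip(h(stuff), target))
out1 = bytes(a ^ b for a, b in zip(h(stuff), evil_hrng))
print(out1 == target)  # True: attacker fully controls the output

# Construction 2: hash(stuff ^ HRNG) -- forcing `target` would require
# a SHA-256 preimage, so the same evil value accomplishes nothing.
mixed = bytes(a ^ b for a, b in zip(stuff.ljust(32, b"\x00"), evil_hrng))
out2 = h(mixed)
print(out2 == target)  # False
```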


Maybe I shouldn't have suggested this on HN in 2016.[1]

[1] https://news.ycombinator.com/item?id=11768980


Believe me, I slogged hard for 3 months to understand the paper in 2016, when it was made public. I was excited to learn more about how controlling the number of electrons can change JavaScript functionality. Had the opportunity to learn everything from dopant level to the browser level.

Gave up when I reached page 10, coz other priorities took over.

Had I known about this HN post, maybe I would have finished the entire paper.


I suggested something similar for transistors in 2014[1], and I'm sure I wasn't the first, so you're probably blameless.

[1] https://news.ycombinator.com/item?id=8759749


I feel the same way about this as I do about, say, NSA hacking. It will never affect most computer users--until the day it does.


I think that this glosses over one quite important detail: while the "RC-integrator out of digital logic" is quite small and inconspicuous, the logic required to activate it would be significantly more complex and almost by definition very suspicious.


This is not a new threat idea. DOD has been worrying about this for years. Perhaps the implementation of the threat is new but not the idea of the threat.


They even implemented the Trusted Foundry program to mitigate some risk.

https://www.dmea.osd.mil/trustedic.html


you may be remembering this very same single transistor back-door from 2 years ago?


Yep, the article linked is from 2016. I'm actually friends with one of the authors, the similarity tipped me off


I can’t find the link, but I seem to remember either an SBIR, STTR, or BAA topic a few years ago relating to detecting such backdoors.


That trigger circuit looks pretty big. It would be very hard to find a spot for it in an existing layout without moving things around. Moving things around would invalidate their timing which would be noticeable to the chip designers.

A fab would most likely not be able to do this unless it was an extremely valuable target. But it would be pretty easy if the design team wanted it in the first place.


I work only in VMs. So I wonder if websites accessed in VMs could charge such capacitors in CPU cores. By default, virtual CPUs aren't mapped to particular cores. But then, I do tend to use hardware virtualization. Maybe it'd be more secure to avoid that?


Everyone here knows about "Trusting Trust", right?

And that nanotechnology will be done with software that is effectively compilers, right?


Also living things. I sometimes wonder just how much of the information that determines an organism is stored not in DNA, but hidden in the "runtime" state of the replication mechanism. After all, when a new cell is made, the parent replication mechanism also builds the child's replication mechanism.

Related - Hofstadter's GEB, where he discusses the observation that information is not stored on a storage medium - it's a function of the medium and the mechanism reading that medium.


I once went to a hypnotherapist who did a germ-line regression (as contrasted with a "past-life" regression) where I was led back in time through my familial lineage to talk to one of my ancestors. YMMV

> Epigenetics is the study of heritable changes in gene function that do not involve changes in the DNA sequence.

https://en.wikipedia.org/wiki/Epigenetics

> After all, when a new cell is made, the parent replication mechanism also builds the child's replication mechanism.

The whole organism splits in two, so each daughter cell's entire mechanism is half of the parent cell's mechanism.

One of the thoughts that trips me out is that each Amoeba (for example) is billions of years old.


What everyone seems to forget is that "trusting trust" has a counter: https://www.dwheeler.com/trusting-trust/


Yes, but...

You have to have at least two independent compiler stack development processes occurring in separate light-cones. If one happens far enough within the cone of the other you cannot trust it.

(Not actually the speed-of-light cone, of course. You have the lead time required to develop general "nanites" and then their travel time to reach the opposite side of the Earth (assuming no one is working on this off-planet). The first thing a paranoid nanotech-haver would do is detect and suborn all other nanotech labs. I call this the "Matter-Lock".)


That goes a bit beyond the "trusting trust" scenario, though, and into subsuming "hardware" rather than "compiler software", which DDC admits it doesn't control for...

I don't have enough backing physics to be confident on the possibility of a "matter lock". How do you detect without being detected a nanomachine that was designed vs one of the nanomachines that already exist in living organisms? Can you also expand on the goal of subsuming all the nanotech of other labs? If things obviously start breaking, that's especially going away from the "trusted trust" scenario which implies things like trojans used to passively sniff secrets and gain advantage through information when an opportunity to export the information arises. I would expect physics might allow some workarounds for that, on top of methods of outright detection like conservation laws, spectrometry...

I've finished the first of three in a sci-fi book series that has introduced the problem of what to do with an adversary that subsumes basic physics research to halt general advances, perhaps there's an entertaining hard sci-fi book you'd recommend for the "matter lock" idea? Or arxiv papers if you have any.


The problem with nanotech is that, unlike software, it can transmit itself. That's what makes the "trusting trust" problem so severe in this context.

On the scale of hypothetical nanites the world is really really huge, so the first hurdle is figuring out how to integrate the incoming information and control the machines.

> How do you detect without being detected a nanomachine that was designed vs one of the nanomachines that already exist in living organisms?

If there are already other machines to detect then you're too late and the scenario is "toner war" as per "Diamond Age" (probably the best nanotech sci-fi novel; or maybe "Blood Music" by Greg Bear.)

If you do get there first (and you've correctly identified this as a huge existential challenge: how can you know that you're not being fed false information by the person who got there before you? You can't. If "matter lock" is possible there's no way to know if you're really first, except to try some shit and see if anyone notices and can stop you) then you have the relatively easy task of locating the other nanotech labs in the world and infecting them with your malware.

> Can you also expand on the goal of subsuming all the nanotech of other labs?

Well, if you're reading "The Three Body Problem" then that's one way. Eventually some people would start to get wise. But nanotech: you detect them and alter their brains to forget. There's always another way to contain the information if you get there early enough.

It would be easy to infect the other labs because you would be infecting every lab everywhere already.

And of course, you can always just declare yourself. Wear a purple silk cape and call yourself the Robot King. Who's going to stop you?

Anyhow, if you wanted to keep your "matter lock" a secret you would have to minimize your interventions, restrict yourself to subtle sabotage, and program every instrument to ignore the fact that every computer and robot in the entire world had a massive Trojan in it. More than that, to actively lie about it and alert you if anyone starts doing weird experiments.

Even then I suspect things would come to a head somehow and... and then I don't know what would happen.

> perhaps there's an entertaining hard sci-fi book you'd recommend for the "matter lock" idea?

Nah. There is one novel about a megalomaniacal mad scientist who achieves "matter lock" and immediately begins editing the world as he pleases. It's grotesque. FWIW it's called "The Goliath Stone" by Matthew Joseph Harrington with some sort of involvement of Larry Niven (who is otherwise one of my favorite authors, but this book is a stinker.) Just one example: the mad scientist is violently opposed to rape (okay) but he makes womens' breasts larger without asking them.

I do recommend these if you haven't read them already:

https://en.wikipedia.org/wiki/The_Diamond_Age

https://en.wikipedia.org/wiki/Blood_Music_(novel)

> Or arxiv papers if you have any.

No. People working on this do not publish. ;-)


I've read the Diamond Age, and the first book in the Three Body Problem series. I don't think the tech will play out like in Diamond Age, and the book I mentioned was indeed 3BP but since I haven't finished the other two books my final thoughts have to wait. (Only thing I didn't like so far was the sudden FTL comms at the end...)

I think your best metaphor is either cracking root access to the Matrix or simply becoming God. Very far removed from the "trusting trust" scenario. But also removed from physical systems. Using that sort of metaphor instead of "matter lock" will insulate any criticisms from hard science. It also reduces the existential concerns to the same level as the question of "what if we're living in a simulation?"

People do publish technical details on both MNT and non-MNT... To use an older reference I would bet that if you ran your idea by someone who has read Drexler's Nanosystems they could point something out at some layer that forbids your idea in principle at least insofar as current understanding of physics, chemistry, and biology go. If we (or some other species) can create machines that can move along a spatial dimension outside our normal 4D space-time but project itself back inside at will, sure, that's one way we're screwed, but that AFAIK has no real basis yet, it's the same concern as if we (or some other species) can root the Matrix...


> I don't think the tech will play out like in Diamond Age

Well (SPOILER ALERT!!!) the whole point of Diamond Age was that nanotech could play out in one of two ways: metered by a central authority to extract rents vs. imitating natural self-replicating systems. The deeper issue being control vs. wilderness.

That's really a psychological issue, and one we are already facing today: witness how the idea of building self-replicating 3D printers ("RepRap") to alter economic conditions became subsumed by companies trying to sell 3D printers to consumers. Most printers cost between $300 to $3000, when I should be able to go down to Noisebridge and print my own for $10.50. People have to make a living; Noisebridge is soliciting donations because their lease is up and they have to move. Can I really fault the folks trying to make a living selling printers?

Bucky Fuller pointed out that we would have the technology to take care of ourselves by sometime in the 1970's, no nanotech required, if we would just apply our resources and existing technology to our problems in an efficient manner.

> the book I mentioned was indeed 3BP but since I haven't finished the other two books my final thoughts have to wait. (Only thing I didn't like so far was the sudden FTL comms at the end...)

I've only read the first two, has the third been released in paperback yet? As for the FTL comms, I think it's really hard to make a hard-sci-fi story that's realistic and emotionally engaging over lots of light-years.

> I think your best metaphor is either cracking root access to the Matrix or simply becoming God. Very far removed from the "trusting trust" scenario. But also removed from physical systems. Using that sort of metaphor instead of "matter lock" will insulate any criticisms from hard science. It also reduces the existential concerns to the same level as the question of "what if we're living in a simulation?"

I don't think there's any hard-science consideration preventing the development of the machinery for "matter lock" (I'm getting tired of my own jargon at this point, lol). At the most general level of analysis you have a decay rate and a regeneration rate, and as long as the latter is sufficiently greater than the former you're golden. Keep in mind, you would control all atomic energy on the planet in this scenario.

I think it's physically, mechanically possible to suffuse the planetary envelope (the bubble-shaped space between the hard vacuum and the magma) with a communicating network of machines that could sense and affect conditions globally. (After all, life did it.)

The problem I foresee is command and control: could you coordinate it? How does one person (or group) receive, process, and transmit information to and from this system? Here we are pressed up against the so-called Hard Problem of Consciousness, which of course is directly related to the existential question you mention! That's the weird thing about self-reflexive consciousness: it's still a problem whether your system is "hard science" or "metaphorical" or "I'm dreaming" or whatever.

> People do publish technical details on both MNT and non-MNT...

I didn't mean that they don't, I meant that the (theoretical) people researching how to use nanotech to become Robot King don't publish.

Attaining the "ML" would be akin to becoming a local god, but how would you have to transform yourself to manage it? I believe that is the barrier, if any.

In any event, after reading "A Planet of Viruses" by Carl Zimmer [1] I'm pretty sure that they already have things locked down. It's a non-fiction pop-sci book covering recent discoveries in the biology of viruses, only 109 pages and nearly every one mind-blowing.

[1] https://books.google.com/books/about/A_Planet_of_Viruses.htm...

Read that, then "Blood Music", then Gregory Bateson's "Mind and Nature: A Necessary Unity (Advances in Systems Theory, Complexity, and the Human Sciences)" I think the Matrix is rooted... ;-)


"...and here's why that's a great thing for your security!"

https://www.wired.com/story/crypto-war-clear-encryption/


Can't the weight be calculated to see if additional components were added between mock-up and production output?

I know weighing is how you double-check other kinds of manufacturing.


There is no way to produce a small quantity of silicon chips more cheaply than with the mass-production method, meaning there are no mock-ups -- there is just a software simulation and then the products made at the foundry.

Also, adding a few additional transistors and paths doesn't really add components to the chip in the way you think. They cause no meaningful difference in weight.
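A back-of-the-envelope check supports this (a sketch in Python; the transistor count and per-cell volume are generous, invented assumptions, not figures from the article):

```python
# Rough estimate of the mass a hardware backdoor's extra transistors add.
# All figures below are invented, order-of-magnitude assumptions:
SILICON_DENSITY = 2330.0   # kg/m^3, density of crystalline silicon
CELL_VOLUME = (1e-6) ** 3  # assume a full cubic micron of silicon per cell
N_EXTRA = 1000             # far more cells than such an attack would need

added_mass_kg = SILICON_DENSITY * N_EXTRA * CELL_VOLUME
print(f"{added_mass_kg * 1e6:.2e} milligrams added")
```

Even with these deliberately generous numbers, the addition weighs a couple of nanograms -- many orders of magnitude below what any production weighing step could resolve.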


I wonder, does open source software help at all here? I mean, if you don't know what instructions will actually be executed, because the user's compiler is deciding how the code will run, can these hardware back doors even work?


Maybe? The reality is that most OSS is run from downloaded binaries, not compiled locally by the user. And even when it is compiled locally, most people are using the exact same compiler.

From the description of the attack though, the function charging the capacitor wouldn't have to be all that obscure.

The attack could cause a privilege escalation but if the running process that accidentally triggered it isn't asking for escalated privileges then having them won't cause harm.

The circuitry could have a discharge resistor across the capacitor causing it to drain quickly. This would require the trigger to be executed and then subsequent attack in a very short window of time.
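As a toy illustration of that shrinking window (Python; the R, C, and threshold values here are invented for the example, not taken from the paper):

```python
import math

# Hypothetical values -- real on-chip parasitics would be smaller and faster.
C = 1e-12     # trigger capacitor: 1 pF
R = 1e6       # bleed (discharge) resistor across it: 1 Mohm
V0 = 1.0      # voltage right after the trigger sequence completes (V)
V_TH = 0.5    # voltage below which the payload can no longer fire (V)

# The capacitor discharges as V(t) = V0 * exp(-t / (R*C)).
# Solving V(t) = V_TH for t gives the attack window:
tau = R * C
window = tau * math.log(V0 / V_TH)
print(f"attack window: {window * 1e6:.2f} microseconds")
```

With these numbers the attacker has well under a microsecond between the trigger and the exploit, which is the point of the bleed resistor.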


You can target the most popular compilers. And there’s always inline asm.


If each modification to the design is approved using a multi-key process (this is practical, I've done this in financial trading environments), I don't see how this would go through.


With a financial trading environment, it should be easy to tell whether the approved plan is what got executed. How would you audit the chip manufacturer to ensure that they're using the design you approved?


> How would you audit the chip manufacturer to ensure that they're using the design you approved?

Would PKI be of some help here? E.g. where the final tapeout is signed with both your keys and theirs.
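A sketch of the first half of that idea (Python stdlib only; the file contents are placeholders, and the actual PKI signatures over the digest are omitted):

```python
import hashlib

def tapeout_digest(gdsii_bytes: bytes) -> str:
    """Digest of the design file; both parties would sign this value."""
    return hashlib.sha256(gdsii_bytes).hexdigest()

approved = tapeout_digest(b"GDSII polygons as approved...")
received = tapeout_digest(b"GDSII polygons as approved...")
tampered = tapeout_digest(b"GDSII polygons plus one extra hole")

print(approved == received)  # True: the file is bit-identical
print(approved == tampered)  # False: any edit changes the digest
```

Note that this only covers the handoff: a matching digest proves the fab received the approved file, but says nothing about what they actually etch.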


How do you know the chip as fabbed conforms to the final tape out?


Umm, I'm unaware how it's possible for manufacturing companies to make changes like these to a design handed to them. Can you please explain how?


The design as presented to the fab is usually "GDSII" format, which is a huge list of polygons on various layers.

Manufacturing companies usually have to run this through preprocessing in order to make the photolithography work properly. In the end, they produce a bunch of IC masks, and it's always possible to "manually" (with expensive tools) cut another hole in a mask.
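To make "a huge list of polygons on various layers" concrete, here's a toy model plus a naive diff that would flag an added shape (Python; the layer names and coordinates are invented, and real GDSII is a binary record format, not a dict):

```python
# Toy model of a design: layer name -> list of polygons, each polygon a
# tuple of (x, y) vertices. Real GDSII is a binary stream of records.
approved = {
    "metal1": [((0, 0), (10, 0), (10, 2), (0, 2))],
    "via1":   [((4, 0), (6, 0), (6, 2), (4, 2))],
}

# Same design after a "manual" mask edit: one extra via has appeared.
fabbed = {
    "metal1": [((0, 0), (10, 0), (10, 2), (0, 2))],
    "via1":   [((4, 0), (6, 0), (6, 2), (4, 2)),
               ((8, 0), (9, 0), (9, 1), (8, 1))],   # the sneaky addition
}

def diff_layers(a: dict, b: dict) -> dict:
    """Polygons present in b but missing from a, per layer."""
    return {layer: [p for p in b.get(layer, []) if p not in a.get(layer, [])]
            for layer in b}

extra = diff_layers(approved, fabbed)
print({k: v for k, v in extra.items() if v})  # flags the added via1 polygon
```

The catch is that you can only diff files you're given; this says nothing about the masks or silicon the fab actually produces.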


How much more expensive would it be to build chips in your own country? I would think that early in the lifetime of a new CPU/GPU, the manufacturing cost is a small portion of the cost.


The cost of a top-of-the line fab has nearly doubled every generation, with TSMC now estimating that a single 3nm fab costs ~$20B to build.

The manufacturing costs of a single CPU are small once you already have a working fab, but the fabs are now the most expensive factories ever built.


Not only that, it costs many years and billions more to build the cadre of expertise it takes to run the fab at more than zero yield, and you also need billions more invested in making huge silicon monocrystals and incredibly pure chemicals. Chips are a strategic technology; though parts of that ecosystem have become more of a commodity, it's still not the sort of thing where Russia, say, could build more than a few trailing-edge chips.


> a single 3nm fab costs ~$20B to build

Then countries -- or groups of countries -- that can't or won't fork out $20bn are going to effectively lose their independence.


It could be argued that true independence on a nation-state level has been impossible for most countries ever since a small group of larger countries started building nuclear weapons and ICBMs.


Nukes have nothing to do with the independence of nuke-free countries. It's not like any of the nuclear-club countries can threaten, say, Chile with nuclear attack if it doesn't do whatever they want. Limited-scale nuclear war would still be a disaster in many ways (political, ecological, economic, cultural, the risk of becoming a wider nuclear war, ...), so it can't happen.

Small countries lose independence mainly by having to participate in the larger trade and global economy: others, especially bigger countries, have enormous leverage.

Pick a small country, any small country outside the nuclear club. It will be a lot easier to force that country to do something it'd rather not using economic threats, or at most the threat of conventional warfare, than threatening nuclear attack.


>Nukes have nothing to do with the independence of nuke-free countries. It's not like either any of the nuclear club countries can threaten, say, Chile, with nuclear attack if they don't do whatever it is they want.

No, but it's the opposite. The countries not having nukes can be easily pushed aside and be invaded (like Iraq, Libya, and so on) in ways countries with nukes cannot.


Maybe. You need a big nuclear arsenal and credible delivery vehicles. A few nukes is not enough, as NK is finding out -- a few nukes just makes you a bigger target. A few ICBMs with a few nukes is not enough because we have missile defense.

For nukes to buy you independence you need lots of them, lots of ICBMs/SLBMs, and if you don't have quite enough then you need some allies who have many more. NK doesn't really have allies. Russia won't be defending them. China likes to use NK as a bargaining chip, but they won't again go to war over it.


> A few nukes is not enough, as NK is finding out

They are enough. With showing to the world "we can blow shit up if we want, especially the very near South Korea", they have the leverage to do whatever the f..k they want. If the US (or other Western countries) attempt to repeat Iraq/Libya, they'll blow up Seoul. Basically, they liberated themselves from any kind of pressure from the USA.

That, in turn, allowed NK to actually think about meaningful peace talks with South Korea. Of course, the US will still participate in the talks, but with a lot less leverage over NK - so NK will not feel coerced by the US. (Of course, SK will feel coerced a bit more, but at least in terms of nuclear weapons they're still on the upper edge given the US-SK alliance)

At least, that's what I hope: that both countries find a way back together (or at the very least, a durable peaceful coexistence), and that the NK civilian population will no longer be suffering for their leadership.


The US has a ton of leverage, mainly over China. NK can destroy Seoul with conventional bombardment (they have something like 7,000 artillery pieces, which would take longer to find and destroy than it would take them to fire off most of their shells). They don't have enough nukes to get past American missile defense. If we don't emplace missile defense around Tokyo, then I suppose they could nuke Tokyo, and that would suck, but then what? Then KJU dies. And the thing KJU most wants: to live and rule, but mostly to live.


>If we don't emplace missile defense around Tokyo, then I suppose they could nuke Tokyo, and that would suck, but then what? then KJU dies. And the thing KJU most wants: to live and rule, but mostly to live.

You'd be surprised what a leader wants or doesn't want, especially in a time of national crisis. To "live" is more of a preoccupation for mere mortals.


> A few nukes is not enough, as NK is finding out

Not if you cave in your main research lab and kill most of your skilled workers, as they reportedly did.

If you don't do anything stupid like that, then even one nuke is enough, assuming your adversary doesn't know where it is.


One nuke is definitely not enough. After you deliver it you're out.


Nukes don't work by being delivered and then the side launching them winning the war.

They work by the possibility of being delivered, whether those that send them will then be toast or not.


If you were right that nuke count doesn't matter then the U.S. and the USSR would never have built thousands of nukes.

QED

But still, you'll persist, so let's think it through.

Let's say that NK has 3 nukes. Let's say the U.S. has 1,000. Let's say all 1,003 nukes have the same yield, let's say 400K tons of TNT. And let's say both countries have ICBMs and can deliver all their nukes anywhere in the world in ~30 minutes.

Now let's say that NK strikes first and its warheads somehow get past U.S. missile defenses (maybe three nukes is what they have left after missile defense). That's about 1% of the U.S. population dead. (Aside: the U.S. thenceforth will never again allow a tinpot dictator to get nukes -- from that point forwards the U.S. will undoubtedly first-strike any country trying it, and Russia and China will just have to deal with it.) Now the U.S. responds and uses only a few nukes to wipe out Pyongyang, Yongbyon, and related sites -- no missile defense there.

You might say this is an ecological disaster, but it's a blip in comparison to all the past atmospheric testing, so we'll survive.

Total tally: similar numbers of dead on both sides, about 1% of Americans, and about 12% of North Koreans.

Also affected: China's trade. You know what happens to that: total blockade by the U.S. Navy, as well as a prohibition on all Allies (big and small) trading with China, as well as canceling all American debt to China. You think a POTUS wouldn't do this if he/she had 3 million dead Americans to think about? No. Any POTUS who didn't do this would get deposed soon and the successor would impose this.

Do NK's nukes work as a deterrent? Maybe, but I think not. The U.S. has a larger nuclear deterrent vs. NK, and larger economic deterrent vs. China. KJU can die and not make that big a dent in the U.S., while the U.S. can wipe out KJU's ruling party and then some, and then too cause the deepest Depression in China, along with all the civil strife you might expect, and probably regime change in time.

It is absolutely in the interests of any POTUS to a) convey all of this to China (though that's not entirely necessary; Xi can count chips too), b) appear mad enough to ignore NK's deterrent. DJT can appear MADder than KJU. You don't have to buy it -- only Xi and KJU do, and I think recent events say they got the message.

In order to have a viable nuclear deterrent NK really needs enough return-strike nukes to get tens of them past U.S. missile defense. That's a lot of nukes, and there's not a lot of room in NK to put them without the U.S. being able to obliterate them in a first strike. So what NK really needs is that many nukes deliverable via SLBMs, and that's decidedly beyond their reach.

Yes, it's entirely possible (likely even) that KJU is aiming to pull a bait-n-switch at the coming summit with DJT. It's even entirely possible (but unlikely) that DJT will take a lesser deal out of desperation to save face. But I don't buy the latter, and I think in the end KJU will cave and give us what we want: unilateral nuclear disarmament.


Still, I'd expect the scales to be vastly different. Computer chips are getting ever more pervasive, while not every household / office / factory has nuclear weapons and ICBMs.


It's only $20B for the next one. It's probably going to be double that for the one after that.

High end chip manufacturing has been consolidating since it started, and this is the reason. There will be a time when only one company in the world can afford to have the very best fab, against which the others can then no longer compete.


A lot of that technology is secret and needs to be developed in house, or with closely partnered companies.

So a big part of that cost is the r&d needed to keep current.

In any given generation, one fab will have the best tech, and another the second best, and all the other fabs have to compete on price at a tiny margin or just sit that generation out and make cheap chips with their existing machines while hoping to catch the next wave.

That means it's an industry that requires deep pockets. You need to be able to take a huge loss and keep investing to stay in the game.

So it's basically Intel, and sovereign wealth funds teamed up with interventionist states like Singapore and Taiwan ...

The US and other western countries had and lost this industry because of free market idealism.


It (apparently) takes one rogue employee to enable this. How's the country going to offer deterrence, against lone bad apples?


This sounds like it would be a very powerful targeted attack, but would it be possible at scale?

Or would such a modification going into all chips coming out of a factory be noticed?


They edit the mask used to produce the chip, not the chip. It would affect all chips produced.


Thanks for the clarification!


Someone who worked at Western Digital said there were hardware backdoors in the HDD controllers.

Such that they didn’t even need access to the OS to read all your data.


With SGX any script kiddie will be able to write undetectable malware once a SPOF is found (I believe some Spectre-variant was in the news recently).


SGX = Software Guard Extensions [0]

SPOF = Single Point of Failure

[0] https://software.intel.com/en-us/sgx


It should be noted this is from 2016.


so like port knocking but at an electrical level, nice.
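The analogy holds: the trigger is essentially a leaky counter that only "unlocks" if the rare event fires often enough, fast enough. A toy model in Python (the threshold, charge, and leak values are invented, not from the paper):

```python
class LeakyTrigger:
    """Toy model of the analog trigger: each 'knock' (the rare trigger
    instruction) adds charge, some charge leaks away every cycle, and the
    payload arms only if the level crosses the threshold before bleeding off."""

    def __init__(self, threshold=5.0, charge_per_knock=1.0, leak=0.2):
        self.level = 0.0
        self.threshold = threshold
        self.charge = charge_per_knock
        self.leak = leak

    def cycle(self, knock: bool) -> bool:
        """Advance one clock cycle; return True if the payload is armed."""
        if knock:
            self.level += self.charge
        self.level = max(0.0, self.level - self.leak)
        return self.level >= self.threshold

# Sporadic knocks (1 in 10 cycles) leak away before they accumulate:
t = LeakyTrigger()
sporadic = any(t.cycle(i % 10 == 0) for i in range(100))

# A rapid burst of knocks crosses the threshold within a few cycles:
t = LeakyTrigger()
burst = any(t.cycle(True) for _ in range(10))

print(sporadic, burst)  # False True
```

Like port knocking, ordinary traffic (normal code) never arms it, so it's invisible to testing; only the deliberate sequence does.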


NB: 2016



