Around 1999, I wrote a paper about information leakage from blinking LEDs on devices (one DES link encryption box, used by banks, for example, was sending out plaintext on the LEDs).
I was working for a defense contractor at the time on a classified project, and so, following procedures, we submitted the paper to NSA for approval to publish.
It took a year and a half to approve. 
Eventually NSA wrote back and said, "No problem! Go ahead and publish." I kind of wonder what they spent all that time doing.
 Actually, it didn't quite happen that way. Our paper was approved for publication very quickly, in only a few weeks; we submitted it to the USENIX Security Symposium, where it was immediately accepted. Later, NSA called us back, and in a panic, demanded we withdraw the paper from the conference. I had to apologize to the conference committee; it was terribly embarrassing, and the delay in publishing was two years.
Putting tape over all their LEDs?
I ended up pivoting into trying to figure out how to secure the hardware. Deep submicron effects, supply chain compromises, and Clive's energy-gapping recommendation led me to devise the concept of putting the TCB or monitors in a 0.35 micron or larger process node that's visually inspectable. Tear down a random sample; use the others if the sample is clean. Trusted foundries for packaging. SOI for EMF/fault reduction. A bunch of little CPUs, like Clive recommended, with CHERI or SAFE extensions, working over optical links with filtered/masked power.
That's about where the past year or two of ideas have led me. Haven't built anything: just forwarding concepts to people making guards and such. Curious what others came up with, since most guards have dropped to using Linux on x86 or whatever. Should've all been decertified for that shit.
Now I'm working to rid the world of CDS boxes built on PCs running Linux; I know exactly what you mean. It doesn't matter that you're running SE Linux; the vulnerabilities are lower down in the hardware—Meltdown and Spectre and what's coming after Spectre will go right through your VMs and your policy enforcement and your hardened OS. What's needed is a completely different way of solving the cross domain problem; that's what I've got in prototype.
I've taught Clive's energy gap notion to my students in university classes. That notion is deeply ingrained in what I'm proposing now. Could I talk with you in more detail about that idea of yours for visually inspectable process node?
Sounds more like engineering or process control than ad hoc. That's a good improvement, especially if someone is assessing time to market or cost-effectiveness.
"that's what I've got in prototype."
Good. Hopefully everyone does it a bit differently, too, for a security-through-diversity benefit. Too much monoculture going on, even in old high-assurance systems that were all built on GEMSOS, SNS, STOP, or LOCK. Three of which were really similar in security enforcement, too.
"I've taught Clive's energy gap notion to my students in university classes. "
He just did another write-up on that with some new channel ideas built-in here:
"Could I talk with you in more detail about that idea of yours for visually inspectable process node?"
Sure. Just send an email to the address in my HN profile. It's still good.
What? Care to elaborate? Did they just set the LED brightness based on the current octet being encoded or something? But that seems like it's too high frequency to work.
You're right, the trick wouldn't work so well with modern LVDS levels and speeds, but we examined (at the time) lots of Ethernet NICs with LEDs on them and found no indication there of useful data in the optical region. (It was a different story on the back panel of enterprise Cisco routers....)
Ethernet PHY chips invariably, so far as I've seen, implement a pulse stretcher on the LED output pins for exactly the reason you noted. We found no intelligible optical emanations from any of the—admittedly low-cost—Ethernet devices we tested. But since then, two things have happened: USB and HFT.
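A toy model of that pulse stretching (the timing parameters here are mine, purely illustrative; real PHYs differ) shows why the bit pattern is unrecoverable from the LED:

```python
# Toy model of an Ethernet activity-LED pulse stretcher. Any traffic
# retriggers a long hold timer, so the LED smears many bits into one
# visible blink and the data itself is destroyed.

def stretch(bits, bit_ns=100, hold_ns=50_000_000):
    """Sample the stretched LED state once per bit time.
    100 ns bits and a 50 ms hold are assumed, illustrative values."""
    led, off_at = [], -1
    for i, b in enumerate(bits):
        t = i * bit_ns
        if b:                      # any '1' retriggers the hold timer
            off_at = t + hold_ns
        led.append(1 if t < off_at else 0)
    return led

raw = [1, 0, 1, 1, 0, 0, 1, 0] * 4
led = stretch(raw)
print(raw[:8], led[:8])  # [1, 0, 1, 1, 0, 0, 1, 0] [1, 1, 1, 1, 1, 1, 1, 1]
```

With a direct drive (a hold time comparable to the bit time) the LED would track the data and a fast photodetector could recover it; the stretcher is what closes that channel.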
There are definite opportunities for the same thing to happen again. I've seen indications of it in a few USB devices.
High Frequency Trading, though, is a whole other can of worms. There, you have FPGAs and ASICs being used to shave nanoseconds off latency, and I wouldn't be a bit surprised to see off-the-shelf gigabit Ethernet PHY chips spurned by fintech specialists designing an extremely specialised piece of hardware to do one job and do it as fast as physics allows.
There's a window of opportunity there (sorry!) for rival HFT firms with a telescope and very high speed photodetector to exploit any incautiously wired-up LEDs connected directly to ASIC signals....
Hint: use a photomultiplier; PIN photodiodes are fast but the transimpedance amplifiers they need are s-l-o-w.
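To put rough numbers on why the amplifier is the bottleneck (the component values below are my own illustrative choices, not from the comment): a simple transimpedance stage has a first-order bandwidth set by the pole of its feedback resistor against the capacitance at its input, f ≈ 1/(2πRC), so the high gain you need for a faint, distant LED costs you speed directly:

```python
from math import pi

def tia_bandwidth_hz(r_feedback_ohms, c_input_farads):
    """First-order -3 dB bandwidth of a basic transimpedance amp,
    set by the feedback-R / input-C pole: f = 1/(2*pi*R*C)."""
    return 1.0 / (2 * pi * r_feedback_ohms * c_input_farads)

# Plausible but assumed values: 1 Mohm feedback for high gain against
# ~10 pF of photodiode junction plus stray capacitance.
f = tia_bandwidth_hz(1e6, 10e-12)
print(f"{f / 1e3:.0f} kHz")  # ~16 kHz: hopeless for Mbps-class signals
```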
J. Loughry and D.A. Umphress. 'Information Leakage from Optical Emanations'. ACM Trans. Info. Sys. Sec. 5(3), pp. 262–289, 2002.
Available for free from http://applied-math.org/acm_optical_tempest.pdf
When the Imperfect Forward Secrecy paper about weak Diffie-Hellman exchanges came out in 2015, not all systems using the vulnerable groups upgraded immediately, and some systems couldn't be easily upgraded either because they were hard-coded or because they had other software limitations.
The IT security situation in many developing countries is also pretty alarming, for example if you look at the prevalence of old unsupported operating systems and browsers in many places.
Even thinking about traditional emanations issues, the "van Eck phreaking" attacks were publicly demonstrated years ago
but have still not been widely mitigated in practice. Neither have the emanations problems documented by Tromer.
And we know that the vendors were notified about Spectre in June of last year and the public was told over a month ago, but pretty much all of us continue to be potentially vulnerable to it in some settings, even more than seven months after vendor notifications!
It can be nice for infosec researchers to think that attackers only gain power and capability by knowing secrets, and that if those secrets are exposed the attackers will lose their capabilities, but a number of people have been pointing out that it doesn't always work that way -- or at least there can be quite a long window of vulnerability.
(However, exposing secrets is definitely an important prerequisite for denying capabilities to attackers.)
It's pretty alarming in most of the developed countries as well.
I was mostly thinking about the narrow problem of widespread use of very old software, and for example Microsoft's visualization of IE6 market share by country after the company got involved in trying to get people to stop using it.
Edit: also, for example, on the Let's Encrypt forums it's somewhat more common for people from developing countries to insist that they have to keep using server software in production that officially went EOL 2 years ago or something. But developing countries absolutely don't have a monopoly on this issue, and I've certainly seen it regularly from users in the United States.
Maybe the opposite: have others spend time on something that has little practical use relative to its actual utility, instead of spending budget dollars on something else.
1. A buddy was experimenting with 3d printers, and was measuring the spin on a magnet superglued on the back of a NEMA 17 shaft. He had weird interference going on. We eventually figured out the sensor was sensitive enough to measure the magnetic field in the stepper itself. Our solution was to take a spinning rust HD case, and cut/wrap it around the stepper.
Spinning rust cases are made out of MU-Metal. Think of them as magnetic shielding. And in our case, it worked with nothing more than scrap we had around.
2. There was a company I ran across a year ago that was selling chips that could communicate over a personal magnetic area network. They didn't give hard numbers, but claimed they could transmit audio and video within a 2m bubble. They also worked through stuff like water (which kills most radio). I asked for some samples, but the request fell on deaf ears. I'd still love to have/purchase a few chips to experiment with.
Magnetic field intensity drops off with an inverse cube law, though (they mention that in the paper), which restricts the distance. It seems they got up to 100cm, but it's 1bps at that point. For 10bps you need to be only about 10cm away. Seems hard to exploit, but imagine a server next to a wall/floor and someone placing a bug right against the wall on the other side.
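A back-of-the-envelope check of that scaling (my arithmetic, not from the paper): with pure 1/d^3 near-field falloff, backing off from 10 cm to 100 cm costs a factor of 1000 (60 dB) in field strength, which fits the reported collapse from ~10 bps down to ~1 bps:

```python
def field_ratio(d_near_cm, d_far_cm):
    """How much weaker the near-field magnetic signal is at d_far than
    at d_near, assuming pure inverse-cube (dipole) falloff and
    ignoring orientation, shielding, and the receiver noise floor."""
    return (d_far_cm / d_near_cm) ** 3

print(field_ratio(10, 100))  # 1000.0, i.e. 60 dB down at 1 m vs 10 cm
```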
Seems just leaving enough space around the server would be enough. Put it 1m away from floor and walls and it should mitigate it.
I knew about the light and ultrasound before, but also learned from the paper there is apparently a thermal TEMPEST attack as well.
"Inverse cute law" works as you said as well, it's when you're trying to exfiltrate data but physics conspires against you.
If you want true isolation, place it inside the event horizon of a black hole. Or spin up a child universe then cut off the connection. Even then some clever cookie might find ways for information to percolate from one continuum to another.
There's a hypothesis supported by some physicists that black holes in some way export the information content of the things they previously swallowed via Hawking radiation. Hawking himself apparently has not endorsed this interpretation.
Well, at least you might kick the can down the road for a couple trillion years before the stuff leaks out. Ought to be good enough in most cases.
Unfortunately, even that doesn't always work;
My takeaway from this is that if the information is valuable enough and someone wants the data enough, there is no way to truly make anything 100% secure. There will always be something to exploit. Is this accurate? Can we ever make something that is "invincible"?
How do you get malware onto one of these things? I guess social-hacking could be used but if you have that then no technical solution is needed.
Mu-metal screen is a thing, so maybe copper plating over the mu-metal to combine both shielding mechanisms?
Also, if you place an electric charge inside an ungrounded Faraday cage, it will cause an electric field outside the cage, even though the cage still shields its interior from external fields.
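That follows from Gauss's law: a closed surface drawn outside the cage still encloses the charge, so the exterior field can't vanish no matter how the conductor's induced charges arrange themselves. A toy calculation for a charge centred in a spherical shell (the numbers are illustrative):

```python
from math import pi

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def external_field(q_coulombs, r_m):
    """Field magnitude at radius r outside an ungrounded spherical
    shell with charge q at its centre; by Gauss's law the shell does
    not alter the exterior field: E = q / (4*pi*eps0*r^2)."""
    return q_coulombs / (4 * pi * EPS0 * r_m ** 2)

print(f"{external_field(1e-9, 1.0):.2f} V/m")  # ~8.99 V/m for 1 nC at 1 m
```

Grounding the cage changes this: the induced exterior charge drains away and the outside field drops to zero, which is one reason shielded rooms are earthed.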
In plasma deposition chambers I've built, getting continuous electrical contact all the way around mechanical access ports was essential to keep microwave power from getting out of the box.
You and I wouldn't spend 20 man-years and $50 million to do that, but our government definitely would if there were a chance to, say, disrupt Iran's nuclear ambitions for a few months.
It is yet another clever way to extract information, and I expect to see CPU coolers with ferrite flux redirectors appear at some point to counter it.
I've never seen a SCIF where going in after hours is the "easy route". They're full of motion detectors and the entrance is typically a heavy metal door with a combination lock. The air ducts are too small to crawl through. :)
Well, the whole point is that it's supposedly a "secure" facility, so you'd only rarely get the chance to enter it again. You'd have to have been exceptionally able to access it once to install the software (you can't install it remotely, given the air gap and Faraday cage, even if you could get access and bypass the firewalls and/or data diodes) and to deploy the receiver with flash storage under the floor (no pictures are allowed on the walls of a "secure" facility, and someone might notice a fresh patch of plaster on the wall). Then you'd gather the data next year, on the 4th of July: not exactly "in a timely fashion".
And no, it's not as if a "secure" facility borders on an easily accessible area; the walls are normally reinforced concrete, no less than 30-40 cm thick.
Because "bugs" that transmit are susceptible to being sniffed by RF signal detectors, opponents would disguise devices which could record their environment, and figure out ways to 'leave' them in places of interest and recover them later.
Because I've not been in a SCIF, I don't know if it gets janitorial service or any regular maintenance at all, but if it did, that might be how this would work.
Mostly I was responding to the "it's only 100cm and you can't transmit anyway" assertion that the vulnerability was not exploitable. I expect someone could figure out how to exploit it.
http://www.primalunleashed.com/intel/SCIF.htm (seems to be from a videogame)
https://scifglobal.com/ (looks like a contractor that builds the things)
Well, several years ago I actually built a couple of them (or rather something very similar to them; the term SCIF is a US one) for the military here in Italy.
Though admittedly not (at least from the construction specs) to be used for this kind of air-gapped computing; they were either a "safe conference room" or a (very large) "storage safe" (to store paper documents and/or weapons).
I have no idea which security measures/protocols were later employed to limit access to them, of course, but there were a number of safety features that would make physical access by any means seemingly impossible.
Of course, if you could impersonate someone authorized, or if you could bribe someone with legitimate access, then any construction/safety measure would be moot. But if you can obtain that kind of access, it would be easier to simply get the data from the computer.
Of course, people still had to be repeatedly counseled for plugging the same USB flash drives from personal movies/internet/etc systems into red (S) and orange (TS) systems. It was sad.
Realistically security countermeasures adapt to the likely threats. In Iraq and Afghanistan the technical sophistication of the adversary in terms of SIGINT/ELINT was poor, so they rightly decided giving up security for wider distribution of information with friendly forces was the right call.
Janitorial service, etc are usually limited to a sign telling whoever works there to take out their own trash and clean up their own spilled coffee. Everything else (like changing a light-bulb) is done in accordance with procedures specifically designed to make it damn near impossible for a single person to do anything they shouldn't.
It's not the first time things like this have happened http://www.cryptomuseum.com/covert/bugs/selectric/index.htm