Escaping Data from Faraday-Caged, Air-Gapped Computers via Magnetic Fields (arxiv.org)
161 points by air7 11 months ago | 72 comments

Did you ever think that when stuff like this comes out there is some guy at the NSA going "Damn it, there goes another one", as they have known about this and been using it for years :)

I have first-hand experience:

Around 1999, I wrote a paper about information leakage from blinking LEDs on devices (one DES link encryption box, used by banks, for example, was sending out plaintext on the LEDs).

I was working for a defense contractor at the time on a classified project, and so, following procedures, we submitted the paper to NSA for approval to publish.

It took a year and a half to approve. [1]

Eventually NSA wrote back and said, "No problem! Go ahead and publish." I kind of wonder what they spent all that time doing.

[1] Actually, it didn't quite happen that way. Our paper was approved for publication very quickly, in only a few weeks; we submitted it to the USENIX Security Symposium, where it was immediately accepted. Later, NSA called us back, and in a panic, demanded we withdraw the paper from the conference. I had to apologize to the conference committee; it was terribly embarrassing, and the delay in publishing was two years.

> I kind of wonder what they spent all that time doing.

Putting tape over all their LEDs?

Have you been alright, Joe? Did you ever get those new ideas on certification or guards published, the ones you were working on when we talked on Schneier's blog?

I ended up pivoting into trying to figure out how to secure the hardware. Deep submicron effects, supply chain compromises, and Clive's energy gapping recommendation led me to devise the concept of putting the TCB or monitors in a 0.35 micron or larger process node that's visually inspectable. Tear down a random sample, using other parties to check if they're clean. Trusted foundries for packaging. SOI for EMF/fault reductions. A bunch of little CPUs, like Clive recommended, with CHERI or SAFE extensions, working over optical links with filtered/masked power.

That's about where the past year or two of ideas have led me. Haven't built anything: just forwarding concepts to people making guards and such. Curious what others came up with, since most guards have dropped to using Linux on x86 or whatever. Should've all been decertified for that shit.

Hi Nick! I'm still working on certification and guards, but in a very early stage company now. Do you know I can predict and (sometimes) control the progress of certification testing of cross domain solutions? That was my PhD.

Now I'm working to rid the world of CDS boxes built on PCs running Linux; I know exactly what you mean. It doesn't matter that you're running SE Linux; the vulnerabilities are lower down in the hardware—Meltdown and Spectre and what's coming after Spectre will go right through your VMs and your policy enforcement and your hardened OS. What's needed is a completely different way of solving the cross domain problem; that's what I've got in prototype.

I've taught Clive's energy gap notion to my students in university classes. That notion is deeply ingrained in what I'm proposing now. Could I talk with you in more detail about that idea of yours for visually inspectable process node?

"Do you know I can predict and (sometimes) control the progress of certification testing of cross domain solutions? That was my PhD."

Sounds more like engineering or process control than ad hoc. That's a good improvement. Esp if someone is assessing time to market or cost-effectiveness.

"that's what I've got in prototype."

Good. Hopefully everyone does it a bit differently, too, for a security-through-diversity benefit. Too much monoculture going on, even in old high-assurance systems that were all built on GEMSOS, SNS, STOP, or LOCK. Three of which were really similar in security enforcement, too.

"I've taught Clive's energy gap notion to my students in university classes. "

He just did another write-up on that with some new channel ideas built-in here:


"Could I talk with you in more detail about that idea of yours for visually inspectable process node?"

Sure. Just send an email to the address in my HN profile. It's still good.

Thanks for the Clive reference. I hadn't seen it.

> one DES link encryption box [...] was sending out plaintext on the LEDs

What? Care to elaborate? Did they just set the LED brightness based on the current octet being encoded or something? But that seems like it's too high frequency to work.

In the early 1990s when we did this research, it was mostly RS-232 devices; the designers did in fact often connect LEDs directly to the serial TXD and RXD lines, which are relatively high voltage and have plenty of drive capability. Garden variety LEDs are plenty fast enough to reproduce bit transitions in the nanosecond range; we measured intelligible signals in the lab well into megabits per second; on real devices in the field, hundreds of kilobits per second.
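For the curious, the failure mode is easy to model: if the LED is wired straight to TXD, the optical intensity trace *is* the serial waveform, and recovering a byte is just UART-style sampling. A toy sketch of the idea (illustrative only; the function names and sample rates are mine, not from the paper):

```python
# Toy model of why a TXD-driven LED leaks plaintext: recover one
# 8N1 serial byte from an idealized photodetector trace.

def serial_frame(byte):
    """8N1 frame: start bit (0), 8 data bits LSB-first, stop bit (1)."""
    return [0] + [(byte >> i) & 1 for i in range(8)] + [1]

def led_trace(bits, samples_per_bit=10):
    """Idealized photodetector samples: LED brightness tracks the line."""
    return [b for b in bits for _ in range(samples_per_bit)]

def decode(trace, samples_per_bit=10):
    """Sample each bit cell at its centre, the way a UART would."""
    centre = samples_per_bit // 2
    bits = [trace[i * samples_per_bit + centre] for i in range(10)]
    assert bits[0] == 0 and bits[9] == 1, "bad framing"
    return sum(bit << i for i, bit in enumerate(bits[1:9]))

# Round-trip: what goes out the serial line comes back off the LED.
assert decode(led_trace(serial_frame(ord('A')))) == ord('A')
```

With real hardware the trace is noisy and you need clock recovery, but the principle is the same: the LED never stops being the data line.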

You're right, the trick wouldn't work so well with modern LVDS levels and speeds, but we examined (at the time) lots of Ethernet NICs with LEDs on them and found no indication there of useful data in the optical region. (It was a different story on the back panel of enterprise Cisco routers....)

Ethernet PHY chips invariably, so far as I've seen, implement a pulse stretcher on the LED output pins for exactly the reason you noted. We found no intelligible optical emanations from any of the—admittedly low-cost—Ethernet devices we tested. But since then, two things have happened: USB and HFT.
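The pulse stretcher is exactly why the Ethernet LEDs were quiet: it imposes a minimum on-time of tens of milliseconds, so thousands of bit transitions collapse into a single blink and the bit-level information is gone before it ever reaches the LED. A toy model of the effect (the 50 ms minimum on-time is a hypothetical figure, not from any datasheet):

```python
# Toy pulse stretcher: merge activity events into LED on-intervals
# with a minimum on-time, as Ethernet PHYs do on their LED pins.

def stretch(events_us, min_on_us=50_000):
    """Collapse time-ordered activity events (microseconds) into
    LED on-intervals of at least min_on_us each."""
    intervals = []
    for t in events_us:
        if intervals and t < intervals[-1][1]:
            # Event lands while the LED is already on: extend the blink.
            intervals[-1] = (intervals[-1][0],
                             max(intervals[-1][1], t + min_on_us))
        else:
            intervals.append((t, t + min_on_us))
    return intervals

# 1000 transitions spaced 1 us apart become one indistinguishable blink:
assert len(stretch(list(range(1000)))) == 1
```

An attacker watching the LED sees only "traffic happened", never which bits flew by.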

There are definite opportunities for the same thing to happen again. I've seen indications of it in a few USB devices.

High Frequency Trading, though, is a whole other can of worms. There, you have FPGAs and ASICs being used to shave nanoseconds off latency, and I wouldn't be a bit surprised to see off-the-shelf gigabit Ethernet PHY chips spurned by fintech specialists designing an extremely specialised piece of hardware to do one job and do it as fast as physics allows.

There's a window of opportunity there (sorry!) for rival HFT firms with a telescope and very high speed photodetector to exploit any incautiously wired-up LEDs connected directly to ASIC signals....

Hint: use a photomultiplier; PIN photodiodes are fast but the transimpedance amplifiers they need are s-l-o-w.

The details of the DES encryption vulnerability are all in our paper; it's behind a paywall but you can find it on my web site:

J. Loughry and D.A. Umphress. 'Information Leakage from Optical Emanations'. ACM Trans. Info. Sys. Sec. 5(3), pp. 262–289, 2002.

Available for free from http://applied-math.org/acm_optical_tempest.pdf

Unfortunately, mitigating a vulnerability can take a long time and might never happen.

When the Imperfect Forward Secrecy paper about weak Diffie-Hellman exchanges came out in 2015, not all systems using the vulnerable groups upgraded immediately, and some systems couldn't be easily upgraded either because they were hard-coded or because they had other software limitations.

The IT security situation in many developing countries is also pretty alarming, for example if you look at the prevalence of old unsupported operating systems and browsers in many places.

Even thinking about traditional emanations issues, the "van Eck phreaking" attacks were publicly demonstrated years ago


but have still not been widely mitigated in practice. Neither have the emanations problems documented by Tromer.


And we know that the vendors were notified about Spectre in June of last year and the public was told over a month ago, but pretty much all of us continue to be potentially vulnerable to it in some settings, even more than seven months after vendor notifications!

It can be nice for infosec researchers to think that attackers only gain power and capability by knowing secrets, and that if those secrets are exposed the attackers will lose their capabilities, but a number of people have been pointing out that it doesn't always work that way -- or at least there can be quite a long window of vulnerability.

(However, exposing secrets is definitely an important prerequisite for denying capabilities to attackers.)

The thing I love about HN is that you say something to be amusing and the engineering types (I am one, so I am making fun of myself) always make some logical argument in response. The best part is 90% of the time you learn something!

> The IT security situation in many developing countries is also pretty alarming

It's pretty alarming in most of the developed countries as well.

Yes, thanks for the correction.

I was mostly thinking about the narrow problem of widespread use of very old software, and for example Microsoft's visualization of IE6 market share by country after the company got involved in trying to get people to stop using it.

Edit: also, for example, on the Let's Encrypt forums it's somewhat more common for people from developing countries to insist that they have to keep using server software in production that officially went EOL 2 years ago or something. But developing countries absolutely don't have a monopoly on this issue, and I've certainly seen it regularly from users in the United States.

> Did you ever think when stuff like this come out there is some guy at the NSA going "Damn it, there goes another one"

Maybe the opposite: have others spend time on something that has little practical use, given the actual utility, instead of spending their budget dollars on something else.

I kind of think it's so impractically hard that there are easier exploits they could find.

If you think that, you might want to read the TEMPEST security standard, think about how much extra building compliant facilities costs, and realize that someone who already understood the cost approved the requirement because of a credible threat, or a capability they had and weren't sure the other team would have.

Right. Seems more practical to just use social engineering or an inside man.

That's neat. It does remind me of a few things. They're only related by this topic.

1. A buddy was experimenting with 3d printers, and was measuring the spin on a magnet superglued on the back of a NEMA 17 shaft. He had weird interference going on. We eventually figured out the sensor was sensitive enough to measure the magnetic field in the stepper itself. Our solution was to take a spinning rust HD case, and cut/wrap it around the stepper.

Spinning rust cases are made out of mu-metal. Think of them as magnetic shielding. And in our case, it worked with nothing more than scrap we had around.

2. There was a company I ran across a year ago that was selling chips that could communicate over a personal magnetic area network. They didn't give hard numbers, but claimed they could transmit audio and video within a 2m bubble. They also worked through stuff like water (which kills most radio). I asked for some samples, but it fell on deaf ears. I'd still love to have/purchase a few chips to experiment with.

Huh. I didn't know mu-metal was a real thing; I always assumed it was something Guardians of Ga'Hoole made up.

Yep, sure is legit stuff. There's quite a few things made out of mu-metal, but spinning rust hard drive cases are the most readily accessible.


I wonder what ever happened to that idea of a PAN (personal area network) that was touted a few years back.

Does your phone talking with your Fitbit and wireless earbuds, then unlocking and starting your car when you get close, count?

This is brilliant - magnetic TEMPEST.

Though magnetic field intensity drops off with an inverse cute law (they mention that in the paper), which restricts the distance. It seems they got up to 100cm, but it's 1bps at that point. For 10bps you need to be only about 10cm away. Seems hard to exploit, but imagine a server sitting next to a wall/floor and someone placing a bug right next to the wall on the other side.
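For anyone wanting to sanity-check those distances: in the near field a magnetic dipole's strength falls as 1/r³, so going from 10cm to 100cm costs a factor of 1000 in field strength, which lines up with the bit-rate collapse quoted above. A quick back-of-the-envelope sketch (standard dipole approximation; the bps figures are the ones from the paper as quoted, not computed here):

```python
# Near-field magnetic dipole falloff: |B| scales as 1/r^3.

def relative_field(r_cm, r_ref_cm=10.0):
    """Field strength at r_cm relative to the strength at r_ref_cm."""
    return (r_ref_cm / r_cm) ** 3

assert relative_field(10) == 1.0
# Moving the receiver from 10 cm to 1 m costs a factor of 1000:
assert abs(relative_field(100) - 1e-3) < 1e-12
```

Since achievable bit rate only grows roughly with the log of the signal-to-noise ratio, a 1000x drop in field strength is brutal, hence ~10bps at 10cm versus ~1bps at 1m.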

Seems like just leaving enough space around the server would be enough: put it 1m away from the floor and walls and that should mitigate it.

I knew about the light and ultrasound before, but also learned from the paper there is apparently a thermal TEMPEST attack as well.

I realize it was a typo, but I love the idea of an "inverse cute law".

Same here, but I expect it's the wrong law for magnetism. People's magnetism is roughly proportional to the logarithm of their cuteness, isn't it?

Ha! Silly typo. Thanks for pointing it out.

"Inverse cute law" works as you said as well, it's when you're trying to exfiltrate data but physics conspires against you.

I thought it had something to do with Schrodinger's Cat.

Completely isolated systems seem more and more like a theoretical abstraction, not a reality.

If you want true isolation, place it inside the event horizon of a black hole. Or spin up a child universe then cut off the connection. Even then some clever cookie might find ways for information to percolate from one continuum to another.

> If you want true isolation, place it inside the event horizon of a black hole.

There's a hypothesis supported by some physicists that black holes in some way export the information content of the things they previously swallowed via Hawking radiation. Hawking himself apparently has not endorsed this interpretation.



Well, at least you might kick the can down the road for a couple trillion years before the stuff leaks out. Ought to be good enough in most cases.

> Or spin up a child universe then cut off the connection.

Unfortunately, even that doesn't always work;


Kragen Sitaker's post that was featured on HN recently mentioned the idea of simply slowing down communications to increase distance without increasing power. Can that be demonstrated for air gap penetrating malware (with or without electrical shielding)? Can I run software on my computer that successfully sends a message wirelessly to Kragen over on another continent (albeit at some absurdly low data rate)?


As obnoxious as randomly pulsating RGB LEDs are, they seem like a great way to defeat data exfiltration via status LEDs of any kind. I'm also curious why air-gapped data centers don't blast the airwaves both inside and outside of the Faraday cages with randomized signal noise.

I can only speak for the DoD/IC, but we do blast the airwaves with randomized noise in the form of white/pink noise or radios in and around our SCIFs. It's not really cost-effective or critical to add further EM radiative sources to that, as the intention is primarily to reduce the risk of voice bugs.

Cool! I can't wait to see data centers that are pulsating with RGB light in a seizure inducing "Linus Tech Tips - THE MOST INSANE RGB DATA CENTER EVER" kind of way. With all the lights there for "security reasons"

The real benefit from this work is not exfiltrating from airgapped computers, but from code running inside Intel SGX or similar isolated computation modes.

If we extrapolate this into the future, we'll eventually have, "Escaping data from Computers through an Event Horizon."

Wow. This is pretty mind blowing to someone who is not an engineer.

My takeaway from this is that if the information is valuable enough and someone wants the data badly enough, there is no way to truly make anything 100% secure. There will always be something to exploit. Is this accurate? Can we ever make something that is “invincible”?

> We introduce a malware

How do you get malware onto one of these things? I guess social engineering could be used, but if you have that then no technical solution is needed.

It seems this data leakage mode could be dealt with by combining the Faraday cage material (which blocks EM radiation) with a high magnetic permeability material (which blocks B-fields, as discussed in the paper's ref [50] (paywalled)).

Mu-metal screen is a thing, so maybe copper plating over the mu-metal to combine both shielding mechanisms?

I did a project once where I needed to block RF interference from a battery-powered Raspberry Pi board. I thought back to physics classes and Faraday cages, where we learned that no electric fields can come in or out, so I went and put the board into a metal box thinking it would completely block all RF. Boy was I wrong. The box only attenuated the WiFi signal by about 10 or 15dB! I figured the box had seams, so it was behaving as a repeater from outside to inside. I wonder if a perfect seamless box would have done better.

You forgot an important detail from your physics classes: the electric field inside a Faraday cage is only zero when it's exposed to static electric fields. Changing electric fields, like radio waves, can pass through, although they'll be attenuated.

Also, if you place an electric charge inside a Faraday cage, it will cause an electric field outside the cage; the shielding only works from the outside in, not from the inside out.

Seamless might have helped. Seams can act as slot radiators.

In plasma deposition chambers I've built, getting continuous electrical contact all the way around mechanical access ports was essential to keep microwave power from getting out of the box.
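You can put rough numbers on why the metal itself is fine but the seams aren't. The skin depth of copper at Wi-Fi frequencies is around a micron, so any solid sheet is effectively opaque; a seam, on the other hand, starts radiating like a slot antenna as its length approaches half a wavelength, which is only a few centimetres at 2.4GHz. A quick check (textbook formulas with illustrative values):

```python
import math

# Skin depth: delta = sqrt(2*rho / (omega*mu)), how far an EM wave
# penetrates a conductor before being attenuated by 1/e.
rho_cu = 1.68e-8           # resistivity of copper, ohm*m
mu0 = 4 * math.pi * 1e-7   # permeability of free space, H/m
f = 2.4e9                  # Wi-Fi frequency, Hz
omega = 2 * math.pi * f

delta = math.sqrt(2 * rho_cu / (omega * mu0))
assert 1e-6 < delta < 2e-6   # ~1.3 micrometres: solid copper is opaque

# A seam acts as a slot antenna; it radiates efficiently once its
# length approaches half a wavelength.
c = 3e8                      # speed of light, m/s
half_wave = c / f / 2
assert 0.06 < half_wave < 0.07   # ~6.2 cm: an ordinary box seam qualifies
```

So a -10 to -15dB result from a seamed box is entirely plausible: the leakage is dominated by the gaps, not the metal.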

MRI rooms are well shielded for both RF and static B fields. Multiple layers of air and mu-metal give good B shielding.

As often happens: very nice, but of no practical use whatsoever. First you have to BOTH infect the PC AND place a "magnetic receiver" within 100 cm. But the "magnetic receiver" still needs to transmit the data somewhere, and that is IMHO the unresolved problem: this method of transmission would be easily detected (if electromagnetic/radio) or impossible to deploy (cable/wire) in 99.9999% of the "secure sites" where an air-gapped computer inside a Faraday cage is actually used.

This sort of exploit is attractive to state actors with virtually unlimited budgets. For them, it might be quite practical. The CIA has been toying with insect-sized remote controllable flying vehicles for decades. Stick a USB stick and magnetic receiver on one of them and you've basically got your exploit.

You and I wouldn't spend 20 man-years and $50 million to do that, but our government definitely would if there were a chance to, say, disrupt Iran's nuclear ambitions for a few months.

Easier to just write Stuxnet.

The magnetic receiver could just as well be under the floor, in a wall, in a picture, etc. It doesn't have to transmit immediately, either. It'd be easy enough to put 128GB of flash on it and then come collect it later, sneaker-net style.

This. Secure facilities might not be used 24/7/365 and so going in to collect the bug would be the mode used.

It is yet another clever way to extract information, and I expect CPU coolers with ferrite flux redirectors to appear at some point to counter it.

Seems to me that if you're talking about smuggling hardware into a SCIF and installing exploits on the servers then doing this exotic form of data collection seems like overkill. There should be faster and more reliable options at that point, like turning the keyboard cable into a crude transmitter by banging bit patterns over it.

I've never seen a SCIF where going in after hours is the "easy route". They're full of motion detectors and the entrance is typically a heavy metal door with a combination lock. The air ducts are too small to crawl through. :)

Currently the only way to exploit this would be a Hollywood-style infiltration by a human, at which point a USB stick might be easier to use to get in and out quickly... https://youtu.be/ar0xLps7WSY

> ... so going in to collect the bug would be the mode used.

Well, the whole point is that it is supposedly a "secure" facility, so you would rarely get the occasion to enter it again, after having exceptionally been able to access it once to install the software (since you cannot install it remotely, due to the air gap and Faraday cage, even if you can get access and bypass firewalls and/or data diodes) and deploy the receiver with flash under the floor (no pictures are allowed on the walls of a "secure" facility, and someone might notice a fresh patch of plaster on the wall [1]), in order to gather the data next year, on the 4th of July; not exactly "in a timely fashion".

[1] And no, it is not as if a "secure" facility adjoins an easily accessible area; the walls are normally reinforced concrete, no less than 30-40 cm thick.

First, I've never been in a SCIF as far as I know. That said, one of the history classes at USC was "Espionage and Terrorism" (which was a killer History elective if you were an engineering major) and the professor discussed various tradecraft strategies. One of which was local recording.

Because "bugs" that transmit are susceptible to being sniffed by RF signal detectors, opponents would disguise devices which could record their environment, and figure out ways to 'leave' them in places of interest and recover them later.

Because I've not been in a SCIF I don't know if it gets janitorial service or any regular maintenance at all but if it did, then that might be how this might work.

Mostly I was responding to the "it's only 100cm and you can't transmit anyway" assertion that the vulnerability was not exploitable. I expect someone could figure out how to exploit it.

I've been in a SCIF. Imagine walking into a large empty warehouse protected by armed guards, and in the middle of the warehouse is a building the size of a house raised off the ground. There are two pipes leading into the building, presumably electricity, networking, and water. The entrance is an airlock-setup with an outer and inner door - the type of doors you see on submarines in the movies. Inside, there is a mess of devices, screens, and folks that look like they do 14 hour shifts (if there were janitors, I didn't see any evidence that they'd ever been inside). There are machines that you can operate, and other machines that you are not allowed to be within several feet of. You're not allowed to leave with anything they don't allow you to leave with, and anything they do let you leave with is effectively destroyed (HDDs, etc).

Is this kinda what you're talking about:

http://www.primalunleashed.com/intel/SCIF.htm (seems to be from a videogame)

https://scifglobal.com/ (looks like a contractor that builds the things)

Kinda, but none of the pictures on either of those links look like an exact copy.

There are lots of different kinds of SCIFs. I've been in one that looks just like any other room in the office.

No doubt. I'm sure my experience was a little more X-Files than other folks.

>First, I've never been in a SCIF as far as I know.

Well, several years ago I actually built a couple of them (or rather, something very similar to them; the term SCIF is US) for the military here in Italy.

Though admittedly not - at least from the construction specs - to be used for this kind of air-gapped computing; they were either a "safe conference room" or a (very large) "storage safe" (to store paper documents and/or weapons).

I have no idea which security measures/protocols were later employed to limit access to them, of course, but there were a number of safety features that would make physical access by any means seemingly impossible.

Of course, if you could impersonate someone authorized, or bribe someone with legitimate access, then any construction/safety measure would be moot; but if you can obtain that kind of access, it would be easier to simply get the data from the computer.

I don't disagree. I was just assuming that either the computers were removed when being maintained (laptops?) or otherwise inaccessible / turned off.

I've been in a bunch of SCIFs, but they were all downrange in centcomia and basically were tents or sections of a warehouse with a bored-looking dude with an M4 checking IDs and passes before letting people through a tent flap or plywood door. That is really only permitted under contingency ops rules and required treating the entire base as a perimeter, then a compound within it with higher security, and so on. There was enough physical separation from likely threats that it was still mostly OK.

Of course, people still had to be repeatedly counseled for plugging the same USB flash drives from personal movies/internet/etc systems into red (S) and orange (TS) systems. It was sad.

Realistically security countermeasures adapt to the likely threats. In Iraq and Afghanistan the technical sophistication of the adversary in terms of SIGINT/ELINT was poor, so they rightly decided giving up security for wider distribution of information with friendly forces was the right call.

>Because I've not been in a SCIF I don't know if it gets janitorial service or any regular maintenance at all but if it did, then that might be how this might work.

Janitorial service, etc are usually limited to a sign telling whoever works there to take out their own trash and clean up their own spilled coffee. Everything else (like changing a light-bulb) is done in accordance with procedures specifically designed to make it damn near impossible for a single person to do anything they shouldn't.

Depends on how things come in and out, who built the facility, etc.

It's not the first time things like this have happened: http://www.cryptomuseum.com/covert/bugs/selectric/index.htm

Yes. And properly secured air-gapped devices don't get malware.

40bits/s from 100cm away, cool hack.

Why would they say 100cm instead of just... 1 meter...

The unit indicates the precision at which the scale is represented. For this, a resolution of whole metres (0m, 1m, 2m) wouldn't be very useful, since all the measured distances fall within the 0 to 1XXcm range.

One reason is to convey precision.

Because the other distances are less than 1m and measured in cm, like 20cm, 50cm, etc. See their paper.

So what, do I just set the BIOS to run my CPU at 100% all the time now?

I guess we need to upgrade our tinfoil hats.
