I had a friend in federal law enforcement who once described a vampire device that they used. It would clamp around a power cable and inject a UPS into the mix so that an electronic device could be removed without turning it off. Seemed like a useful little trick.
If nothing else, would let you move a Frogger machine.
More seriously, I have wondered if you can detect these kinds of external interference. Auto lock the machine if power/network/wifi/Bluetooth/USB conditions change.
Nabbing an unlocked laptop was how they got the Silk Road guy (though they probably already had sufficient evidence elsewhere).
One trick you could use is to abuse the fact that law enforcement often plugs a mouse wiggler into an unlocked desktop: kill your server the moment you see a new HID device. (Make sure to run some kind of desktop on your server so they think they can keep the session open; best to do it in a VM.)
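In case anyone wants to play with the idea, a minimal sketch of the HID trap, assuming systemd-udevd; the rule file path, the LUKS mapping name and the panic script are all made up for illustration:

# /etc/udev/rules.d/99-hid-trap.rules -- any newly added pointer or keyboard device fires the panic script
ACTION=="add", SUBSYSTEM=="input", ENV{ID_INPUT_MOUSE}=="1", RUN+="/usr/local/bin/panic.sh"
ACTION=="add", SUBSYSTEM=="input", ENV{ID_INPUT_KEYBOARD}=="1", RUN+="/usr/local/bin/panic.sh"

#!/bin/sh
# /usr/local/bin/panic.sh -- keep it short; udev kills long-running RUN+= tasks
cryptsetup luksSuspend cryptdata   # "cryptdata" is an assumed mapping name; this drops the key from RAM
echo o > /proc/sysrq-trigger       # immediate power-off via sysrq (needs kernel.sysrq to allow it)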
You could also monitor the ethernet link. They can move your server, but they can't move the entire network: set up an encrypted tunnel between two distant physical servers and self-destruct the moment that tunnel gets disrupted.
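Something like this would cover the "tunnel goes away" case; a rough sketch where the wg0 name, the peer address and the panic action are all assumptions:

#!/bin/sh
# tunnel watchdog: if the far end of the wg0 tunnel stops answering, assume
# the box has been moved or isolated and do something drastic
PEER=10.77.0.2                              # assumed overlay address of the distant server
while sleep 10; do
    if ! ping -c 3 -W 2 -I wg0 "$PEER" >/dev/null 2>&1; then
        cryptsetup luksSuspend cryptdata    # assumed LUKS mapping name
        echo o > /proc/sysrq-trigger        # hard power-off via sysrq
    fi
done

Obviously you'd want some grace period in there so a flaky link or a reboot on the far end doesn't nuke your data.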
Some computers come with gyros/accelerometers built in. My old HP laptop had some kind of head crash prevention that used that hardware. I know this because Gnome thought it was a tablet-style sensor and turned my screen upside down if I didn't disable it. Maybe getting an HP server already gets you a whole bunch of movement sensors.
You could probably figure out if the server is being moved by measuring the capacitance of the case, reading accelerometers, or maybe adding a GPS dongle. Or you could add an LTE connector and measure any signals you may receive that you shouldn't from inside a server room. You can probably measure _something_ in the server room, though, so to make sure your LTE dongle doesn't simply get cut off, also measure whatever reliable signal you can find so you can detect a Faraday cage.
Lastly, you could put video cameras in the case facing all sides and watch for changes. Detecting law enforcement badges probably isn't that hard with OpenCV if you're dedicated enough.
You have to hide your security measures and never tell anyone, though, or they'll just leave the server as-is and use the classic rubber hose exploit to make you give up the key material.
> Or you could add an LTE connector and measure any signals you may receive that you shouldn't from inside a server room.
Incoming Bluetooth Low Energy advertisements should have a receive power level (RSSI) associated with them. Stick a beacon (like, say, a standard BLE temperature/humidity sensor) somewhere, and you should be able to tell if the distance to it changes.
Maybe attack the problem from a different angle: use an accelerometer. Or spend a little bit more money to add a gyro and make a real, if very low accuracy, IMU.
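If the sensor shows up as a Linux IIO device, polling it is almost trivial. A rough sketch; the device index, axis name and threshold are guesses, so check /sys/bus/iio/devices/ on your own hardware:

#!/bin/sh
# crude movement detector: compare successive raw X-axis samples from an IIO accelerometer
DEV=/sys/bus/iio/devices/iio:device0
LAST=$(cat "$DEV/in_accel_x_raw")
while sleep 1; do
    NOW=$(cat "$DEV/in_accel_x_raw")
    DELTA=$((NOW - LAST))
    # ${DELTA#-} strips the sign, i.e. absolute value; 200 raw counts is an arbitrary threshold
    if [ "${DELTA#-}" -gt 200 ]; then
        logger "accelerometer delta $DELTA: machine moved?"   # swap in your lock/shred action here
    fi
    LAST=$NOW
done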
That is a great suggestion. I think Android just implemented a “snatch detection” system for phones. Although, I like the idea of not requiring additional hardware. I guess when I start running a drug empire I will have to pony up for the extra dongle.
Some HSMs I've used (payshields) have tamper sensors that can detect motion for this reason.
> The ADXL362 accelerometer in the PayShield 10K acts as a "Motion Sensor" detecting tilt movements. An alarm triggers an alert if the HSM is moved (for example, slid out of the rack)
Rotation itself isn’t a threat, but if you want to directly estimate displacement to distinguish between earthquakes and someone stealing the machine, without relying on heuristics, actual inertial measurement would do the trick. And inertial measurement involves tracking the direction of acceleration, which involves tracking rotation.
It is a secret one-way lock. Disturb the machine and it locks/encrypts/sheds data. Bringing the machine back to the safe zone would not decrypt the data.
Easily. Bolt the machine to the floor in such a way that the case has to be opened, and a trip sensor activated, to actually move the machine.
You can switch my power source without me noticing? Who cares. The attack is taking the machine where it is not supposed to be. That's a problem we've been solving since forever.
Wifi would probably be the easiest. Either hide a dummy AP in the house or use a combination of multiple neighbors' APs. If you don't see any beacon frames from the dummy SSID for a 30-second period, then lock/shred the computer.
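A crude version of that watchdog; the interface name, SSID and lock action are placeholders, and scanning with iw isn't literally watching beacon frames, but it's close enough for this purpose:

#!/bin/sh
# lock the machine if the canary SSID hasn't been seen for ~30 seconds
SSID="my-dummy-ap"
MISSES=0
while sleep 10; do
    if iw dev wlan0 scan 2>/dev/null | grep -q "SSID: $SSID"; then
        MISSES=0
    else
        MISSES=$((MISSES + 1))
    fi
    # three consecutive failed scans at 10s intervals ~= the 30-second window
    [ "$MISSES" -ge 3 ] && loginctl lock-sessions   # or something far more destructive
done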
Wifi 5/6 sometimes take up to a couple of minutes to get online (DFS and whatever), so 30 seconds is like smoking near an open can of gasoline: mostly fine, but when it's not...
Isn’t that kinda what they used for Ross Ulbricht’s computer? I know it was a laptop, but they probably didn’t want to take chances, given that if that thing shut down the entire thing would be encrypted.
I thought they had an attractive agent distract him for a moment while another agent grabbed his still-unlocked-and-open laptop to prevent him from locking it or closing it up. At least I think that was the cloak-and-dagger story I heard.
two agents posing as a couple feigned a raucous quarrel that distracted him, while a third agent sitting across the table yanked the laptop at the precise moment he was distracted
Someone successfully did this for copper gigabit ethernet and presented at one of the security conferences - but with a few milliseconds interruption in signal.
That is why you put in special outlets that communicate with the PC over the power line, encrypted.
You would need to drill holes in the concrete wall to get to the power lines in order to take the outlet along, and hope that there isn't an additional device in the breaker panel.
It's a parasitic tap that connects to the mains power cable going into the device.
It then phase locks an inverter with said mains power, allowing the mains power cable to be unplugged and the whole lot transported elsewhere on battery power.
Careful application of a box cutter for the outer sheath followed by something resembling a scotchlok connector for line and neutral.
Edit: If the machine is plugged into a power bar / power strip / whatever you want to call it, this is much easier still: Plug the vampire UPS into the power bar as well, wait for it to sync up to the grid, and disconnect the bar from the outlet. The UPS continues to feed power into the bar and thus keeps the machine powered.
Power strips make this easier of course, but every outlet usually has two plugs and most* of the time they are wired together. You just need to plug into the other plug.
* In case they are split for whatever reason (switched plug, different circuit), just take off the faceplate, pull out the outlet, and now you have direct access to the screw terminals and copper wiring on the outlet. You could wire into the outlet using the second set of terminals, or via the other connection method (one being the screw terminals, the other being the push-in holes, depending on which is in use), and take the whole outlet with you.
That would apply in North America yeah; that wouldn't apply over here (UK).
The insulation on plug pins prevents you pulling the plug far enough out of the socket to use a plug pin capture device; if it's far enough out of the socket to expose the uninsulated portion of the pins, it is no longer far enough into the socket to be receiving voltage, and you've just interrupted the power, which is precisely what you don't want.
The design of our wall sockets is such that there is no separate faceplate assembly; you'd have to take the entire socket off of the wall. Excepting some exotic sockets (like the MK Logic Plus Rapid Fix), there is only one recessed insulated screw terminal for line and neutral and no holes to push conductors into [1], and loosening that screw to put another conductor in would also risk interrupting the power.
Furthermore, most sockets are on ring circuits, and removing the socket from the wall creates a dangerous potential for an overcurrent condition on the now-incomplete ring, which the breaker will not respond to, as it can't know that the ring is no longer complete.
In order to safely do socket surgery in this scenario, you'd first have to connect both lines and both neutrals together using something like a scotchlok connector. Then you can cut one of the line and neutral conductors from those to the socket. Finally, you can crimp onto the flying socket line and neutral from the vampire, and then cut the other line and neutral when the UPS is ready to feed the socket. This leaves exposed mains-potential conductors behind the wall which should be capped off by some form of scotchlok or crimp connector for occupant safety, and an exposed mains-potential conductor which should be capped off for officer and technician safety. [2]
I dare say this is more involved and riskier than simply carefully cutting into the equipment power cord. Also, good luck finding enough slack conductor behind a wall socket in order to pull this off.
We do, but that doesn't help you much if e.g. they have two computer systems plugged into the same double socket outlet and you want to seize both of them without powering them off, or you fear that the computer system plugged into one socket will react badly to the loss of power of whatever device is plugged into the other one alongside it. Almost all of our sockets are also switched, so you're playing with fire every time you put your hands on it -- you might knock the switch and kill the power to that socket just by trying to take it off of the wall.
As far as I'm aware, oftentimes they just plug into an open socket on an existing power strip of the kind so often used for PCs; no vampire-ing required. You can then unplug the power strip from the wall and it stays powered, with electricity fed in through one of its sockets instead of being drawn out.
I guess a more elaborate version of the same idea can be done if the computer is plugged directly into an outlet with two sockets, by removing the socket from the wall.
The only time I can foresee vampiring the cable being a thing would be if the computer is plugged directly into a single-socket outlet on the wall?
This is a great writeup! Especially for those that may want to DIY it - the how and the why and all of that - and not have to shell out for carrier-quality Layer 1 encryption devices. Nice to see that even off-the-shelf components can do it with relative ease at those rates. Also nice to see sane sysctl tunes. Anything to make an adversary's day a bit harder. I low key love the explanation of old 10BASE5 taps, something that is so well and truly dead, but whose legacy carries on into everything new today.
This is actually a well-trodden area of datacenter interconnect (DCI) devices that do line-rate encryption (to crazy levels like 400G+) to protect those links that may have easily accessible fibers strung along poles, for instance, to prevent just the vampirism described in the post. Packetlight, Ciena, Infinera and others.
Really cool article; I enjoyed reading through all the details behind the decision-making.
Just spit-balling a little, but I wonder if Wireguard is the best tool here given that the author is only using it for a single point-to-point link and they control the devices on both ends. That CPU supports AES-NI and probably does it a lot faster than Wireguard's ChaCha20 (hard to get numbers for their server CPU, but the tiny little x86 mini PC I use as my router does AES XTS at 43Gbps according to `cryptsetup benchmark`).
You might see better performance by tunneling the vxlan connection using a different technology which can use AES-NI? Then again, Wireguard is definitely still a good tool for stuff like this, and maybe the performance penalty isn't a big deal here.
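If anyone wants to eyeball the gap on their own hardware, something like this gives a rough single-core comparison (userspace numbers only; kernel WireGuard uses its own ChaCha20 implementation, so treat it as indicative):

openssl speed -evp aes-256-gcm
openssl speed -evp chacha20-poly1305
# and the disk-flavoured benchmark mentioned above
cryptsetup benchmark --cipher aes-xts --key-size 512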
AES can only encrypt up to 64TB; after that you need to re-key. So you need a mechanism for rekeying anyway. Definitely a good idea to use a battle-tested tool like wireguard instead of rolling your own.
I think alphager is referring to the upper limits of AES before a birthday attack becomes a concern. In GCM mode there's a realistic chance of an IV being reused after around 64GB of data. Other modes have differing limits.
Truly. I think IPSec is practically more "battle tested" than wireguard ever could be, and IPSec offers more useful functionality than wireguard ever will.
Is there reason to think AES used appropriately would be any less secure here? Not trying to be argumentative, genuinely curious.
My understanding is that AES has some design warts that make it not ideal (basically, it's easy to both implement and use in ways that leak information if you're not careful) but that it's still essentially perfect symmetric encryption if you're using it as recommended. Is that wrong?
FWIW, the reason I brought up performance was because the OP spends a large chunk of the post talking about it, so I assume it's an important requirement for them.
It's not about AES, it's about the WireGuard protocol. AES is fine. It's possible that, if Jason had the decisions to do over again today, he might use XAES instead of ChaPoly (he didn't have an especially good AES construction to use at the time). The big thing with WireGuard is not doing ciphersuite negotiation, which is an extremely good decision that is definitely worth paying some cycles/byte for (if you must).
Maybe I'm missing something, but why would he have needed XAES rather than vanilla AES-GCM, which was certainly available at the time WireGuard was created? XAES gives you large nonces which is cool, but that's not something WireGuard needs AFAIK and it's not something regular ChaPoly gives you anyways.
Now I admit ChaPoly has some pretty nice advantages if you're implementing it in software. But with the trend of AES-GCM hardware support and the long-lived nature of WireGuard's crypto choices given the lack of ciphersuite negotiation (which I agree was a good decision!), I'm not sure AES-GCM wouldn't have been the best (albeit less cool) choice.
Although maybe on the other hand, ChaPoly can still be made to run pretty fast even just in software and it gives WireGuard the advantage of being more practical on very low-end devices that might lack AES-GCM hardware. Avoiding ciphersuite negotiation means a tradeoff needs to be made somewhere, at least with current algorithms, and I'd bet line-rate hardware encryption is probably the least likely place to see WireGuard for a while at least, so maybe WireGuard did make the best tradeoff at the time.
WireGuard is an instantiation of Noise, which slightly disfavors AES-GCM (see: the spec). I don't think it's a huge big deal, but at the time WireGuard was being designed it was pretty normal to tack away from GCM.
I agree in advance, Noise already uses counter-based nonces, the extended nonce wouldn't matter to vanilla Noise.
This has been nagging at me for a day, so just to clarify real quick:
I wanted to push back a little on the notion that Chapoly was "cool" and GCM was "lame" back in 2015-2016. At the time, GCM was coming off a pretty rough run of implementation bugs. It was the tail end of a period of time where a concern was that some mainstream architectures wouldn't be able to run performant constant-time GCM at all; like, the fast software GCMs had a table-driven multiplication? I forget the details.
But you could have done a secure WireGuard instantiated on AES-GCM. It's true that GCM was out of fashion and Chapoly was in fashion. I just want to say, that fashion had (has?) some real technical roots. That's all.
AES is probably fine as a cipher but the VPN protocols that aren't Wireguard tend to have various footguns available. In theory someone could create NoisyESP but I'm not aware of it.
That makes sense. I was thinking they could use something like DTLS [1] and tunnel just the one UDP port needed for their VXLAN connections, rather than use full-blown VPN software. I have never actually tried this myself though.
It genuinely might not matter, and it might make sense to use a weaker protocol, if the only threat model you're trying to deal with is someone physically tapping a campus-area network. You'd run the "real" secure transports on top of that, the same way you do on internal networks today. In which case, yeah, it might make sense to select your protocol/constructions purely based on encryption efficiency.
My solution ended up using tc's mirred[0] action for implementing a fully L2-transparent frame relay. I wonder if their setup achieves the same degree of transparency, because afaiui that's just not possible with an 802.1Q-compliant (Linux) bridge.
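For reference, the general shape of a mirred-based cross-connect looks something like this (interface names are placeholders, and this is the textbook form rather than necessarily my exact setup):

# cross-connect two ports at L2 without a bridge: everything that arrives on
# one interface is redirected out of the other (and vice versa)
tc qdisc add dev eth0 clsact
tc qdisc add dev eth1 clsact
tc filter add dev eth0 ingress matchall action mirred egress redirect dev eth1
tc filter add dev eth1 ingress matchall action mirred egress redirect dev eth0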
I spent close to a week optimizing my setup, looking at kernel flame graphs and perf results and reading adapter-specific tuning guides and driver source. The only really meaningful performance optimizations (in both the Broadwell- and Zen3/Vermeer-based implementations I tried) were disabling mitigations in the kernel (especially on Zen3, where that boosted performance by more than 20%) and getting CPU frequency scaling/idle states sorted out correctly (which yielded much bigger wins on the older Broadwell uarch, because power state transitions appear to happen much quicker on Zen3).
As for the solution presented in the (on the whole really great; I love it!) article, I have my doubts about the effectiveness of the cargo-culted "sysctl tuning" mentioned - TCP, for example, is simply not involved at all in the described setup, so "tuning" its buffer allocations cannot have any effect on the workload.
Kudos to the writers for solving their problem in a creative, cost-effective and maintainable way! :)
> I wonder if their setup achieves the same degree of transparency, because afaiui, that's just not possible involving a 802.1Q-compliant (Linux) bridge.
Can you elaborate on what is not transparent about 802.1q bridge in Linux?
I hear you on the system tuning. Whenever I change sysctl variables I always include a comment with what the default was and why the new setting is better. I don't trust sysctl copy pasta w/o decent explanations.
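For example, something like this in a sysctl.d drop-in; the values are just for illustration, and the default noted in the comment is whatever my kernel happened to report:

# /etc/sysctl.d/90-tunnel.conf
# default was 212992; raised so large UDP bursts on the encrypted link
# aren't dropped for lack of socket buffer space
net.core.rmem_max = 26214400
net.core.wmem_max = 26214400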
There's a number of "special" Ethernet addresses that a proper Ethernet bridge must never forward. The Linux bridge implements a mechanism to ignore _some_ of these constraints, but not all of them. If you need that, you can always resort to manually patching https://github.com/torvalds/linux/blob/d42f7708e27cc68d080ac... et al.
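The knob in question is the bridge's group_fwd_mask; iirc the kernel refuses to set the bits for 01:80:C2:00:00:00/01/02 (STP, MAC pause, LACP) no matter what you write, so something like this is as far as it goes:

# forward all the 01:80:C2:00:00:0X link-local addresses the kernel will allow;
# bits 0-2 remain blocked, hence 0xfff8 rather than 0xffff
ip link set dev br0 type bridge group_fwd_mask 0xfff8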
What mitigations did you disable, specific ones you know wouldn't be a risk to what the machines were doing (mostly network, mostly kernel space)..?
Like, by disabling the mitigations does that leave the servers slightly more open to someone nefarious finding a way to use some kind of timing attack to get some knowledge of your wireguard keys?
(Genuine question as someone with very little knowledge on both wireguard and *bleed CPU flaws)
No, I actually just booted with 'mitigations=off' and called it a day. We will employ Zen4 cores on the pre-prod setup soon enough, and I'll be looking into the benefit (if any) of disabling mitigations in a more fine-grained manner there.
To "fix" performance (i.e., increase throughput by close to 35%) one has to mess with the "energy performance bias" on the (Broadwell) platform, e. g. using x86_energy_perf_policy[0] or cpupower[1]. Otherwise, the CPUs/platform firmware will select to operate in a very dissatisfactory compromise between high-ish power consumption (~90W per socket), but substantially less performance than with having all idle states disabled (= CPU in POLL at all times, resulting in ~135W per core) completely. One can tweak things to reach a sweet spot in the middle, where you can achieve ~99% of the peak performance at very sensible idle power draw (i.e., ~25W when the link isn't loaded).
With Zen3, this hardly mattered at all.
I also got to witness that using IPv4 for the wireguard "overlay" network yielded about 30% better performance than when using IPv6 with ULA prefixes.
> if you can share anything related to your sweetspot
For Broadwell in particular, it is enough to avoid power states lower than C1E, in my experience.
And no, MTU plays no part in the degraded IPv6 performance. I think it's rooted in a less efficient route lookup mechanism (Linux 6.7 was what I tested with), but I did not take the time to check properly.
I can't believe they were under any memory pressure, so the first three presumably made no difference, but it's also quite surprising to me that the default ondemand cpu governor was responsible for such a dramatic performance hit. Not throttling up quickly enough leading to higher latency maybe? Very interesting anyway.
Did Cisco really invent MACsec?! I thought it was cooked up by the IEEE and supported in hardware from many vendors. I imagine they all have their own bugs though; it's quite a complicated spec. I know some switch/router vendors also now offer hardware-accelerated end-to-end encryption, similar to IPsec; Nokia call theirs anysec, but I'm sure the other players have their own. The benefit of those is you'd get full bandwidth (e.g. Tbps).
Usually one vendor prototypes a feature then they take it to IEEE/IETF for standardization. Probably half of all network protocols were invented by Cisco.
Why MACsec isn't the default is pretty crazy, given that it is extremely stateless (encrypting at the frame level) and counters should be pretty reliable (they only go up, since there are two parties); you could take advantage of some AES and GCM modes that would pretty quickly spot injection, replay, and other attacks.
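Linux can even do it in software. A static-key sketch adapted from the ip-macsec man page (keys and MAC address are placeholders, and real deployments would use MKA via wpa_supplicant rather than static keys):

# GCM-AES-128 MACsec on top of eth0 with manually configured keys
ip link add link eth0 macsec0 type macsec encrypt on
ip macsec add macsec0 tx sa 0 pn 1 on key 01 11111111111111111111111111111111
ip macsec add macsec0 rx port 1 address aa:bb:cc:dd:ee:ff
ip macsec add macsec0 rx port 1 address aa:bb:cc:dd:ee:ff sa 0 pn 1 on key 02 22222222222222222222222222222222
ip link set macsec0 up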
But getting back to the main topic of the paper: why not just S2S IPSec the link?
I don't recall the specifics of macsec but it's possible to build a link encryptor that adds essentially zero latency. (like... no more latency than the gate delay of a single xor gate... plus some once-an-hour packet-length delay of some rekeying traffic).
Missing attack: cause a disruption that obviously breaks the connection, buying yourself time to tap it properly somewhere further away.
"Oh, no, a truck run into the pole carrying the copper/fiber, it must be an accident and no intervention is going on undetected because of the outage."
What we really need is promiscuous connectivity, but fully untrusted connections. It's maddening how hard it is to get two wireless devices to communicate while they are literally sharing the same radio spectrum and multiple radios could be used to talk to each other.
Tapping is even easier if you have access to the cable end in a patch panel.
I have a computer setup with a one-way gige connection for reviewing potentially malicious content in an air-gapped manner. The transmit side transceiver needs to see an incoming signal, so I just use one of these to feed its own output back into it:
# set up an 8020 MTU on the wg0 interface to account for the 80 bytes of wireguard header overhead
# (20-byte IPv4 header or 40-byte IPv6 header, 8-byte UDP header, 4-byte type, 4-byte key index, 8-byte nonce, 16-byte authentication tag)
/sbin/ip li set dev wg0 mtu 8020
Shouldn't that be 8920, to go with the 9000-byte MTU on the outer interface above it (9000 - 80 = 8920)?