> Thunderbolt, uh, does not expose PCIe lanes directly. Thunderbolt is an MPLS-like packet switching network that can encapsulate PCIe TLPs over a PHY and MAC without a spec, chips without documentation, and software with barely any support. […] Ah, and let's not forget that the absurdly complicated MPLS-like system with topology written in EEPROMs in every node that, as far as I can tell, you have to bitbang via the half-configured packet path (what the actual fuck?), well, it's very general. You can describe a lot of different networks in it. But of course the Linux driver has two or three hardcoded ones, including indexes into the EEPROMs on the way, that it recognizes and loads a hardcoded configuration for. The entire thing is made *even worse than USB*, which is an achievement.
It pains me to hear people complaining about the lack of Thunderbolt in AMD Ryzen Mobile laptops, and wishing for it in the next generation now that the spec is no longer Intel-exclusive in some way. Lack of Thunderbolt is a feature.
Reading the rant you linked, there seems to be no real content or references to further information. It mostly complains about USB-PD negotiation of the Thunderbolt alt-mode (which doesn't have anything to do with Thunderbolt proper), and that the controller is proprietary (which sucks, sure, but so are most of the other controllers in the system).
IOMMU not being on by default, as it should be, is a motherboard manufacturer problem. Vulnerabilities found with it active should have CVEs created so that they can get fixed, like any other security issue.
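For what it's worth, on Linux you can at least check whether the firmware and kernel actually turned the IOMMU on. A rough sketch (the exact log strings vary by platform; DMAR is Intel VT-d, AMD-Vi is AMD's equivalent):

```shell
# Rough check for an active IOMMU on Linux (dmesg may need root).
# DMAR = Intel VT-d, AMD-Vi = AMD's IOMMU; log strings vary by kernel version.
if dmesg 2>/dev/null | grep -qiE 'DMAR|AMD-Vi'; then
    status="kernel reports DMAR/AMD-Vi activity"
elif [ -d /sys/kernel/iommu_groups ] && [ -n "$(ls -A /sys/kernel/iommu_groups 2>/dev/null)" ]; then
    status="IOMMU groups present in sysfs"
else
    status="no evidence the IOMMU is enabled (check BIOS and e.g. intel_iommu=on)"
fi
echo "IOMMU: $status"
```

If it comes back empty, the fix is usually a BIOS toggle plus a kernel command-line parameter, which is exactly the kind of thing motherboard and laptop vendors should be shipping enabled by default.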
As someone using Thunderbolt on my Linux laptop, I can inform you that it works quite well.
(EDIT: Also note that Intel technically has promised to make Thunderbolt open and royalty-free, although they have missed their deadline on that promise.)
If Dell's going to do silly things like solder Wifi chips down on the new laptops, then you're damn right I'm going to be looking for Thunderbolt for fast connectivity out of a very expensive laptop.
Designing a purpose-built PHY/MAC to handle long links and complex topologies is the obvious thing to do.
Probably also timing etc. - I would think making it packet-based is easier to make work over longer cables than raw PCIe.
The bit I really don’t get is why they keep all the specs secret. What do they think they have to gain? All that it has meant is less adoption...
I imagine the spec doesn't really exist because Intel has been too lazy to properly write it (I wouldn't be surprised if Thunderbolt is basically defined as whatever the original Ridge controller implemented) and it's probably also gross. Adoption seems to be limited by cost, not by secrecy (thinking back to Firewire, it was open but there were very few controller chips for it).
 Check out PCIe over RS-232: https://www.youtube.com/watch?v=QMiubC6LdTA
I think when Intel was doing Light Peak, they didn't have an IOMMU, which in my limited knowledge seems to be a solution to at least some of what is wrong with hot-plugging PCIe (aside from the somewhat complicated hardware design, I imagine). Of course the IOMMU was clearly never implemented correctly, so all was for naught.
I wanted to call that out; that's nifty!
I need to look up what they did for this - lots of the tools promise the ability to turn a pile of C into SystemVerilog, but require a bit of manual massaging.
Edit: "used the Arm Cortex A9 CPU on an Intel Arria 10 FPGA" - well, that's boring and much simpler.
It does not surprise me that drivers are not secure against maliciously-constructed messages from the "hardware". Most drivers are tested in a fairly limited fashion, and nobody's even got the infrastructure to fuzz them like this.
(There are lots of FPGA-on-PCIe boards, but they tend to be horribly expensive. This one is almost $5000. https://www.digikey.com/products/en?mpart=DK-DEV-10AX115S-A&...)
We're working on porting to a cheaper FPGA-on-PCIe board, but it's still ~EUR 800.
> virtual environment consisting of QEMU, LINUX and GHDL glued alltogether by a small TCP based protocol. It allows PCIE devices to be implemented as standard userland processes, answering actual PCIE requests coming from QEMU. It supports PCIE configuration headers, requests, memory read/write operations and MSI. Different abstractions are provided to simplify the implementation of PCIE devices.
https://github.com/KastnerRG/riffa & http://kastner.ucsd.edu/wp-content/uploads/2014/04/admin/fpl...
> RIFFA (Reusable Integration Framework for FPGA Accelerators) is a simple framework for communicating data from a host CPU to a FPGA via a PCI Express bus. The framework requires a PCIe enabled workstation and a FPGA on a board with a PCIe connector. RIFFA supports Windows and Linux, Altera and Xilinx, with bindings for C/C++, Python, MATLAB and Java. On the software side there are two main functions: data send and data receive ... Users can communicate with FPGA IP cores by writing only a few lines of code.
Second, the prompt says 'you attached a thing, do you want to allow it?'. The name is often not descriptive, eg 'CalDigit TS3' in our example, so users can't tell what rights are being asked for. As we've found on mobile, when apps pop up prompts about permissions users are conditioned to agree. Also, users can be deceived by putting a sticker on the malicious device with the name on the prompt - the writing on the plastic agrees with the prompt, so the device must be legit. An attacker can also play social engineering tricks - "say 'approve' to enable fast charging" for example.
Finally, it's possible to swap out the PCIe devices behind a Thunderbolt bridge without re-prompting. We took a commercial Thunderbolt dock, removed the PCB with the dock's PCIe peripherals and replaced it with another PCIe card. Windows did not notice that the device had changed and didn't prompt again.
1. Users cannot tell if a device is malicious before authorizing it.
2. Even if the user could tell, the device can be made malicious post-authorization.
Both situations seem like quite universal and unsolvable issues, in that you will never know the exact behavior of a device unless you tear it down atom by atom.
(Of course, #2 could be made harder if sub-devices were made part of the authorization, but it will always be possible to make a device malicious given physical access to it, even if a simple PCIe swap is no longer possible.)
However, assuming a proper fix for the bugs discovered, the IOMMU should make arbitrary DMA access pointless, reducing any attack to the simple case of a malicious device that can at most misbehave in the execution of its own functions (e.g. eavesdropping or traffic modification). Why would any peripheral receive device mappings outside of the buffers needed to operate it?
There are so many ways to mitigate or stop this, and keep the user in control. Android, at least, knows to prevent privileged device connections with the screen locked.
It also seems to be available on most distros.
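If the tool in question is GNOME's `bolt` daemon (the usual Thunderbolt-authorization stack on Linux), the flow looks roughly like this; the `enroll` line is commented out and its UUID argument is a placeholder you'd take from `boltctl list`:

```shell
# Sketch of Thunderbolt device authorization via boltctl (from the `bolt` daemon).
# Guarded so it degrades gracefully on machines without bolt installed.
if command -v boltctl >/dev/null 2>&1; then
    boltctl list                                  # show connected/enrolled devices
    # boltctl enroll <device-uuid>                # persistently authorize a device
    result="boltctl available"
else
    result="boltctl not installed"
fi
echo "$result"
```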
As it says in the article, it can also be done with desktops where you can just pop off the case panel and plug a malicious device into the PCIe bus.
I think the biggest issue with Thunderbolt specifically is that it works over the USB-C port that is also used for charging - so you could make a device that looks like a charger but also attacks the device (maybe with a cellular modem in it to exfiltrate data or something!), making it way easier to trick users to plug it in themselves.
I have already been using Cypress FX-3 to get 5 Gbps USB connection to an FPGA, but this is better.
There are EFI/BIOS-level “workarounds” like on Dell laptops: they have a setting to only negotiate Thunderbolt with appropriate Dell docks.
Sadly, their thunderbolt dock is entirely garbage because they used a really crappy USB3 controller which has the habit of dropping devices and corrupting CRC checksums on Ethernet packets. Additionally, this defies the _point_ of thunderbolt itself. But if we assume we can disable thunderbolt capability while the host OS is running then that’s already a huge win.
FWIW I already do this with USB, the ports are disabled until I run a command to enable them in Linux. Because I’m one of those “paranoid” types.
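For anyone curious, the Linux mechanism behind this is usbcore's per-hub authorization flag in sysfs. A sketch (needs root; the `1-2` bus path at the end is a hypothetical example of a device you later decide to trust):

```shell
# Make new USB devices start out deauthorized on every root hub (needs root).
# Uses Linux usbcore's sysfs authorization interface.
changed=0
for hub in /sys/bus/usb/devices/usb*/; do
    [ -e "${hub}authorized_default" ] || continue   # skip if glob didn't match
    echo 0 > "${hub}authorized_default"             # new devices: not authorized
    changed=$((changed + 1))
done
echo "hubs restricted: $changed"
# Later, to allow one specific device you trust (bus path 1-2 is an example):
# echo 1 > /sys/bus/usb/devices/1-2/authorized
```

Deauthorized devices still enumerate, but no interface driver binds until you flip `authorized` to 1, which is what the "run a command to enable them" workflow amounts to.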
As DisplayPort Alternate Mode over USB-C is nowadays widely used to connect external displays, instead of the approach Apple chose of tunneling DisplayPort over Thunderbolt, Thunderbolt is probably not even relevant for most users outside of the Apple universe.
Actually not the case -- the TB chipsets are from Intel and promoted by them.
TB drives are pretty fast and great to use if you can afford them. TB external GPUs are useful for ML too (not sure if anyone is using them for actual rendering; that's outside my field of experience).
Taking your laptop or phone away for ten minutes allows somebody with a malicious USB-C device to read out everything without rebooting or leaving any mark that anything had been done, bypassing filesystem encryption and passwords.
If I'm reading it correctly, any newer Mac with T2 chip and full disk encryption would not allow data to be exfiltrated unless a user were logged in.
I'd wager you'd probably be safe, because I think the bare USB keyboard support is provided by the UEFI BIOS and no OS-level drivers would be initialized yet (this is why you can use basic USB keyboard and mouse support in the BIOS), but this is my mildly educated speculation. This is definitely not my area of expertise.
For the same reason, note that if the computer was booted and then locked, it would be affected.
What about using their device to install something onto your system rather than retrieving data from it? Is it possible to do one of those mischievous flash ROM attacks with something like this?
And people who use GPU passthrough rely on IOMMUs, so it's not like those code paths are not being exercised.
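As a concrete example, the standard sanity check from GPU-passthrough guides walks sysfs and prints which devices share an IOMMU group (devices in the same group can't be isolated from each other, so this is the first thing those guides have you verify):

```shell
# List IOMMU groups from Linux sysfs; an empty list means the IOMMU is off
# (or the platform doesn't expose groups).
count=0
for dev in /sys/kernel/iommu_groups/*/devices/*; do
    [ -e "$dev" ] || continue            # glob may not match anything
    group=${dev%/devices/*}              # strip trailing /devices/<addr>
    group=${group##*/}                   # keep just the group number
    echo "group $group: ${dev##*/}"
    count=$((count + 1))
done
echo "devices in IOMMU groups: $count"
```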
You need to ensure that different network cards can’t see each other’s buffers, for example, and that will not be an easy change to most OSes.
USB-C MacBook Pros were only announced on Oct 27, 2016. Not sure what version of 10.12 they shipped with, but it seems like it was a pretty fast fix.