"For unknown reasons, Apple decided to block support for some categories of Thunderbolt 3 peripherals under macOS, and when you connect one of those peripherals, you simply get an "Unsupported" message under the Thunderbolt Device Tree. After digging around, it turns out the block is implemented at the software level, and it is possible to bypass it by patching the related kext. This patch modifies IOThunderboltFamily and allows "Unsupported" Thunderbolt 3 peripherals to work under macOS Sierra."
I wonder why they did that. Stability, probably?
Quite frankly, I'm not worried about physical access being a "vulnerability" --- I'm of the "if you physically own it, you should logically own it" mindset.
They don't physically own your device and yet you're possibly fucked.
"[device name] is requesting [scary sounding permission] Allow?"
It won't protect users from themselves, but it will protect against "BadUSB"-style attacks.
A conference coordinator hands me a T3-HDMI adapter (as in the above scenario), I plug it in to my laptop, and it says "This device is requesting direct memory access. Allow Y/N?" My talk starts in an hour. Should I ask for the source code to the adapter firmware so I can do a quick audit?
Unless maybe you meant the same scary message for every Thunderbolt device, in which case people become acclimated to ignoring it. And that may be after they've wasted support time asking why some device wants this scary access.
But are Thunderbolt devices really expected to be so common that a message like this would be treated like the Windows UAC dialog?
A security prompt seems very reasonable to me here; however, I'm not sure Intel or Apple are very interested in that, because they would be admitting a weakness and it would hinder the adoption of their own technology in some ways.
However for user trust this would be a very big win in my eyes.
If Thunderbolt gives even a slight advantage over USB-C, vendors will want to use it over USB-C. And the layperson will want it as well, which leads to "creep," making the fastest but most insecure method the default.
There would need to be a way to dissuade vendors from using Thunderbolt for everything... And sadly a warning message isn't enough.
There is Thunderbolt 2, but it never saw any adoption outside Apple, and it's basically dead.
But in all seriousness, I agree that it's far from perfect; however, I still think there is a way to allow the possibility without almost completely giving up on security.
I agree with the premise, but accepting that as a truism sure makes running an off-site server rough.
But it is. Imagine a dongle that installs a rootkit once you plug it in. And now give somebody your laptop to hold for a minute.
There were (still are?) very similar issues with Windows autorun on CDs and USB drives.
Full read access means you're pretty much screwed if the entity has bad intentions. No rootkit needed.
Apple, in macOS, uses the IOMMU to protect against FireWire/Thunderbolt attacks.
Here's a developer working on xhyve explaining how each PCI device has its own VT-d mapper/domain: https://github.com/mist64/xhyve/issues/108#issuecomment-3592...
edit: I finally came across the below link from Apple, which does seem to imply that IOMMU VT-d is enabled on Macs from 2012 and newer. On anything before that, though, DMA attacks could own you. So uh... don't run MacBooks that are 6 years old, I guess.
There are also hardware mitigations to make DMA safe(r) now. https://en.m.wikipedia.org/wiki/Input–output_memory_manageme...
I don't have much knowledge on how the block actually works, but I would assume it's not trivial to "just pretend to be one of the permitted devices."
> There are also hardware mitigations to make DMA safe(r) now.
From what I can tell, IOMMU is not supported on any of the CPUs used by Thunderbolt-capable MacBooks.
Your assumption is incorrect. Even for devices that are designed with the express purpose of being hard to emulate (auth tokens, DRM chips, iPhone cables), it’s at most a simple matter of a grad student or Shenzhen resident with access to a fume hood and an electron microscope finding some burned-in private keys. For devices that aren’t designed to resist emulation, which thunderbolt devices generally aren’t, it’s trivial. This is essentially one of the core messages you should take away from the field of hardware security.
> IOMMU is not supported on any of the CPUs used by Thunderbolt-capable MacBooks.
They all do. Intel calls it VT-d.
Every source that I can find regarding the 2016 DMA vulnerabilities disagrees with you. Most of them actually specifically require that Thunderbolt security features be turned off because otherwise signed drivers are required to be installed before the peripheral will even connect.
>They all do. Intel calls it VT-d.
Got a source for VT-d being supported on MacBooks? I've been looking pretty hard to find a definitive answer, but all I can find are random unverified forum posts, stackoverflow questions, blackhat presentations, etc, and all of them say that VT-d and IOMMU are not supported/enabled on recent MacBooks and MB Pros.
You’re on the wrong page here. I’m not really sure what your thought process is. The (easy) attack is to simply trick the host machine and driver into thinking you’re an approved DMA-capable device. It has nothing to do with host-side checks on the drivers.
> Got a source for VT-d being supported on MacBooks?
Intel’s website and my memory.
> So uh.. don't run macbooks that are 6 years old, I guess.
This thread is specifically talking about thunderbolt 3, so this precludes any such machines.
Why would you assume such a thing? There's no historical precedent for peripheral authentication in the PCI space, nor indeed for any of the common peripheral interfaces.
The closest you get is "you get the driver that claims to best service the thing you claim to be", which makes pretending to be something you're not less useful from a functional perspective, but does nothing for system security.
Which means that unrestricted Thunderbolt can be used to access DRM-protected data...
Their restriction does not prevent actual DMA attacks at all; it just specifically limits high-end TB3 devices.
Are you fucking kidding me? https://developer.apple.com/development-kit/external-graphic...
Apple has been specifically targeting competing TB3 devices: they do not whitelist, but rather blacklist, their competition.
And what is even more obvious is that they do not block TB2/TB1 devices at all, which are the ones used by all the publicly available DMA attack toolkits like Inception.
This has nothing to do with security.
This breaks the site guidelines. Please keep that sort of thing out of your comments here! The rest of your comment is just fine.
That Apple dev kit costs the same as buying the components separately, or have you not looked at that possibility?
They do not restrict TB1/2 devices at all (including a TB2 GPU enclosure like the one I’ve been using for over 3 years), which have the same DMA access as TB3; the only difference is the speed of the PCIe lanes.
But sure tell me how this is done for security.
If Apple cared about security they would implement the same port and DMA protection Microsoft has with Windows 10. But this is purely for business gain.
These third-party devices that Apple blocked were not just enclosures; there were other TB3 devices (e.g. dual display adapters) that Apple to this day has not announced or endorsed a competitor to. In fact those same companies (e.g. StarTech) then released Mac-compatible versions, which suggests Apple was just enforcing a standard it wanted OEMs to meet before allowing devices to work by default.
The blocks have been found to be based on an earlier chipset version; devices made using later versions of the TB3 chipset work just fine.
This is what a peripheral manufacturer said (Plugable):
The version of OS X on the new MacBook Pros (late 2016) will not work with existing Thunderbolt 3 docks and adapters that were certified for Windows prior to the release of the MacBook Pro. These existing devices use first generation of TI USB-C chipset (TPS65982) in combination with Intel’s Thunderbolt 3 chipset (Alpine Ridge). Apple requires the 2nd generation TPS65983 chipset for peripherals to be compatible. Certification of solutions across different device types is still in-progress for this 2nd generation chipset. From the Plugable product line, our dual display graphics adapters for DisplayPort and HDMI (TBT3-DP2X and TBT3-HDMI2X) are affected… We’ve also postponed our TBT3-UD1 Docking Station to update to the TPS65983 chipset and re-certify to make this docking station MacBook-compatible. Our Flagship TBT3-UDV dock with Power Delivery/Charging was already planned to use the next generation controller chip from TI, and will be compatible with the 2016 Thunderbolt 3 MacBooks.
Does this still fit with your argument?
Which has photos of the chipset which is labeled TPS65982.
Not malice anyway... From this project:
"Note there is likely a reason why IOThunderboltFamily considers a peripheral unsupported in the first place. Use at your own peril."
It assumes (with no verification) that the first twelve bytes of any kernel function you could want to patch form a single basic block and do not contain a jump target or a relative branch. These reasons ARE why live patching is hard and why proper support for it basically has to be built into the original code (making sure each function starts with a basic block at least twelve bytes long). Without that, it's quite a dangerous game to play in kernel space.
Furthermore, it assumes RAX is safe to clobber. This may not be so. Compilers are generally free to ignore the standard ABI when calling provably-static, provably-leaf functions. Who said RAX doesn't have a useful value on entry to the patched function?
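For what it's worth, those two assumptions usually go together: the standard 12-byte hook on x86-64 is a `mov rax, target; jmp rax` stub, which is exactly why the patch needs 12 contiguous bytes and exactly why RAX gets clobbered. Here's a sketch of the encoding (my guess at the scheme, not xnu_override's actual code; the target address is hypothetical):

```python
import struct

def make_trampoline(target: int) -> bytes:
    """12-byte absolute-jump stub: mov rax, imm64 (10 bytes) + jmp rax (2 bytes)."""
    mov_rax = b"\x48\xb8" + struct.pack("<Q", target)  # mov rax, target
    jmp_rax = b"\xff\xe0"                              # jmp rax
    return mov_rax + jmp_rax

patch = make_trampoline(0xFFFFFF8000A1B2C0)  # hypothetical kernel address
assert len(patch) == 12
```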
Additionally, it edits cr0 and cr3 with interrupts enabled, allowing it to be preempted during this action. If said preemption happens, another edit to those registers could happen in between. Scary, since what's being done is a non-atomic read-modify-write on CPU control registers.
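That third defect is a classic lost-update race. It can be replayed deterministically with an ordinary variable standing in for cr0 (a toy analogy, obviously not kernel code):

```python
# Simulate the non-atomic read-modify-write on cr0 under a fixed,
# worst-case interleaving. Bit 1 stands in for the WP (write-protect) bit.
cr0 = 0b011

# Patcher context: reads cr0, intending to clear WP...
snapshot = cr0
new_value = snapshot & ~0b010

# ...gets preempted here; another context sets bit 2 for its own reasons.
cr0 |= 0b100

# Patcher resumes and writes back its stale computed value:
cr0 = new_value

# The other context's update to bit 2 has silently vanished.
assert cr0 & 0b100 == 0
```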
I strongly advise anyone against running this on any system.
Oh come on. Your paragraphs 2-4 point out serious conceptual defects in xnu_override, but this is the kind of hacker shit you'd expect to find in something that will live-patch your kernel code. By the time you find yourself turning off SIP to load some code to hot-patch your kernel, you've long since crossed the Rubicon of caring about running particularly good code.
The odds are good that the functions this thing is set up to patch do hold with your first two invariants (RAX is scratch, there are 12 bytes in the entry BB). The third is the race, and you'll probably win this race most of the time.
"You'll probably win this race most of the time" shouldn't be in any serious engineering design document and is probably grounds for demotion or firing if you bring it up as a defense during a design or code review, but that isn't what's going on here. This is some hacker shit posted to github.
I bet, though, that 99% of the time, it works every time, and for its use case, that's "good enough." Hot-patching the kernel is scary, though, and hopefully no one walks away from looking at that GitHub repo thinking "oh yes, I know so many problems that can be solved by hot-patching the running kernel," because honestly, getting the patching of the code itself right is only the start of your troubles.
(As an aside, can I suggest posting an issue to GitHub? I might not have seen this thread, and xnu_override users who want to know of its shortcomings should also be able to find your comments easily.)
I hope that someone who has the know-how to reverse engineer the kernel, devise a patch to add desired behavior, and write a kernel extension to implement that behavior, will understand that patching the kernel is inherently a dangerous thing to do—and will actually test that their change works.
I wrote this as an alternative to other patchers which provide a string search and replace on the system kexts. Reverting these patches, dealing with the code signing fallout, and updating them for a new release are significantly more challenging with this style of patching.
Basic block detection isn't easy. You CAN find branches in the block, and even patch them to work when you copy them elsewhere (you'll need this since most x86 branches are relative). There is provably NO 100% certain way to make sure no branch anywhere else targets those 12 bytes though.
You can use heuristics to almost-reliably guess, though. If there is a "call", the instruction after it is likely a target of an indirect branch (ret). Also you'll have to disassemble the entire function (breadth-first search until every traversal path hits a "ret") and see if any jumps into these 12 bytes exist. This will cover 99.99% of cases. The remaining 0.01% is VERY hard.
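That scan can be sketched over a toy, pre-decoded instruction list (a real implementation needs an actual x86 disassembler, and still can't account for computed/indirect jumps, which is the hard 0.01%):

```python
PATCH_SIZE = 12

def unsafe_targets(insns):
    """Walk every control-flow path from the entry and collect branch
    targets that land strictly inside the first PATCH_SIZE bytes.
    Each instruction is a (offset, length, kind, target) tuple, where
    kind is one of "op", "jmp", "jcc", "call", "ret"."""
    by_offset = {off: rest for off, *rest in insns}
    seen, work, hits = set(), [0], set()
    while work:
        off = work.pop()
        if off in seen or off not in by_offset:
            continue
        seen.add(off)
        length, kind, target = by_offset[off]
        if kind not in ("ret", "jmp"):      # fall-through edge
            work.append(off + length)
        if kind in ("jmp", "jcc"):          # branch edge within the function
            work.append(target)
            if 0 < target < PATCH_SIZE:
                hits.add(target)
    return hits

func = [
    (0,  5, "op",  None),
    (5,  2, "jcc", 9),      # conditional jump back into the patch window
    (7,  2, "op",  None),
    (9,  3, "op",  None),
    (12, 1, "ret", None),
]
assert unsafe_targets(func) == {9}  # offset 9 would sit mid-trampoline
```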
I still don't think we know exactly why, but the allegations that they're doing it for profit or in some attempt to control the ecosystem don't seem to hold up, especially since most of these peripheral manufacturers have since updated their devices with the new chipset and they work with macOS just fine.
Them blocking battery extenders with headphone jacks on the iPhone is already bad enough, but doing it to PCs is a whole other level of walled garden.
Quite frankly, it doesn't affect me. The only hardware I plug into my MBP are 1.) the charger and 2.) sometimes a monitor. They're (usually) pretty good laptops.
Mind you, I've seen many types of HW, some of which indeed wouldn't work "with a PC." The expected course of action was always "complain at peripheral vendor," this "we know what you want, but nope because FU that's why" has historically been reserved for vendors trying to keep out competition.
So, while it is theoretically possible that an OS-enforced soft block is there to protect the user, a statistically more probable version is that it exists to protect the OS maker from competition; IMNSHO the ball is in Apple's court to prove otherwise.
I've noticed a disappointing and somewhat repulsive trend increasing over the past few years, one which Apple is far from alone in but is one of the most notable participants in: moving from "not supported" meaning "we won't help you with it" (fair enough) to "we will actively stop you from doing it" (FU for even trying to do something we didn't think of/don't want/etc.).
IMHO it's perfectly reasonable for a company to not offer support for literally every hardware combination out there, because that's a huge space; but to deliberately sabotage attempts at doing things outside of that boundary is incredibly hostile. I have no definitive evidence but this trend seems at least somewhat correlated with the rise of authoritarianism.
As a developer myself, and also someone who has done many "unsupported" things with no ill effect, I constantly try to fight this position from others, but it's difficult.
Based on what?
The comments from Plugable make a strong case that there are one or more unfixable issues with certain combinations of peripheral silicon that lead to an unacceptable customer experience.
As can be clearly seen in this thread, given the opportunity a significant slice of the population will choose to blame Apple, so the only sensible response is to avoid the situation.
Put yourself in their shoes; given these two scenarios:
a) some set of customers' peripherals stop working with "this peripheral isn't supported"
b) some set of customers suffer random hangs, crashes, freezes, data loss
And given that you only get to choose one of those two, which one demonstrates a real commitment to the customer?
I realise this doesn't fit the "Apple is always out to screw you" narrative, but I guess you have to ask... if it doesn't, perhaps your story's not so true?
Based on what statistics, exactly?
Apple has a long history of blocking many things to protect users, e.g. unrecognized app developers and unsigned kexts.
OTOH, blocking certain TB3 devices hardly seems like a move that puts their competitors at any disadvantage. Apple doesn’t make any TB3 peripherals.
BTW: Windows 10 also has pretty solid eGPU support on my MBP at least.
Nice thing though is my laptop gets power over the USB-C cable thunderbolt 3 uses, so it’s only 1 cable to plug in.
(No wonder it's unavailable...)
The razer core, that looks like this kind of system, costs 499 without a GPU...