Why do you need a chip for this? Can't you just break the circuit with a regular dumb physical switch? Or is this a much more complex problem than it first appears?
The T2 chip probably also acts as a controller for a lot of the built-in sensors, including the microphone and the lid-closed switch. As a result, adding a feature to turn off the microphone when the lid is closed becomes trivial: basically just add a logic gate between the two.
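In logic terms (a toy sketch, not Apple's actual netlist; the names are mine), the cutoff is just:

```python
def mic_enabled(lid_closed: bool, software_requests_mic: bool) -> bool:
    """Hard-wired cutoff modeled as an AND gate whose second input is the
    inverted lid-closed line. No software term appears on that second input,
    which is the whole point: no privilege level can override it."""
    return software_requests_mic and not lid_closed
```

The gate is fixed-function, so even code running on the controller itself can't change the truth table.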
This is just the way people design circuits these days, because it’s easier and cheaper. A single µC with a few GPIOs, rather than a bunch of logic chips scattered around the board.
The goal is to make this so it cannot be overridden in software. By designing hard-coded logic in a chip that already exists, you can get the same level of security for essentially zero cost.
And if an attacker is able to connect a chip programmer to your motherboard, you have a totally different threat model and bigger problems to worry about.
Have you ever seen a MacBook from the last 10 years? There are no physical switches; it's a Hall sensor that works with the magnets that keep the thing closed.
“This disconnect is implemented in hardware alone, and therefore prevents any software, even with root or kernel privileges in macOS, and even the software on the T2 chip, from engaging the microphone when the lid is closed,” said the support guide.
So in this respect, there's no difference between the T2 chip cutoff and a physical switch... except that there could still be a backdoor in the T2 chip cutoff that Apple hasn't admitted to.
I could credentialize in multimonitor usage (I have before), but now I'm getting used to something that's incompatible with portability.
Not to mention it would make the case of the computer microphonic — bump it hard enough and that’ll be transmitted through the reed relay.
Nothing to break, zero attack surface, what's not to like?
But if you're so against moving parts, a Hall sensor and a transistor would achieve the same result.
At least with the switch, if the mic doesn't work you just check for magnets around it and that's it.
The chip provides a hardware disconnect for the microphone, just like your idea. Except it's baked into the silicon instead of an additional large part in the body of the computer.
Regardless, if your attack model is "T2 chip is compromised", all bets are off. Worrying about the microphone would be the absolute least of your concerns. All your local and iCloud data can now be decrypted, your 2FA is compromised, so on and so forth.
The T2 chip and Secure Enclave provide the strongest security guarantees in today's computers and mobile devices respectively. I'm not worried about a compromise of the T2 chip, to be honest.
If someone can physically modify your laptop, you've already been hacked, T2 chip or not.
> if your attack model is "T2 chip is compromised"
I think the more realistic attack model is "Apple enables the undocumented 'turn on mic while closed' feature that it was required to add through a national security letter that they can't talk about."
And then that secret hack is discovered (or stolen) by another government.
Doing it on a chip sounds more reliable than adding a moving part.
The point (it seems) of the T2 chip is that it's isolated from the main system and has far fewer opportunities for exploitation, while leveraging efforts already expended for the iPhone. In my mind the eventual goal of the T1 / T2 etc chip line will be to do everything for a Mac that an iPhone does—storage, webcam control and encoding, audio in/out, fingerprint reader, sensors, battery monitoring, power management etc—and then expose it all to the main system en masse.
Is it easier to route the "turn off" signal from the T2 chip to each device than it is to route it from the lid-closed switch/sensor directly? Why does it have to pass through the T2 chip first?
If not, then you’re right it would be exactly the same.
People expect their computers not to listen to them when they're not actively using the mic. Your "switch" is the program that records you; a second switch is redundant and confusing.
Putting this functionality into a computer chip is ridiculous and unnecessary partially because there's no clear way for ordinary non-technical users to control that, and partially because this suggests that software (as I presume the software on this new chip will be alterable by Apple) can still affect whether the mic is hot. The headline "Apple's T2 chip will prevent hackers from eavesdropping on your microphone" should probably be interpreted to mean that Apple won't be prevented from listening in.
Camera access and wireless network access should also be physically disconnectable via a simple user-controlled piece of hardware that electrically disconnects the on-board webcam and wireless network device from the rest of the system -- a slider or rocker switch. This too is something I understand Librem laptops offer, but it's uncommon elsewhere.
Now, if you trust Apple, the T2 chip is perfectly secure for this use case. Apple claims this serves as a hardware disconnect baked into the silicon, in which case it is no different from a mechanical switch.
Finally, the whole argument about user control over various security features is interesting. I will say that Apple has, better than any other manufacturer, struck almost perfectly the balance between security and usability. Apple's software/hardware security does its job, does it well, and gets out of your way. Including 20 different hardware switches and 150 different security settings that mean nothing to the average user would not be in their design language.
It's all about your threat model. Perhaps yours is a bit too severe to use traditional consumer devices.
If Apple distributed free software trusting Apple wouldn't enter into the discussion. We could inspect what Apple distributed, we'd be allowed to change what we don't like about the software Apple distributed to us, and we could distribute improved versions of that software to help others. No need to buy into uninspectable code we're not allowed to change or distribute further.
And the answer to the question you didn't answer which the original poster asked -- do you need a chip to do this task -- remains "no". Purism's hardware is proof by existence. With Purism's hardware switch there's no software involved to accomplish this feature and they (as far as I know) distribute a free software distro called "PureOS" (a GNU/Linux system) which also happens to have earned an FSF-approved distro entry.
Seriously, putting cameras and mics in every device and then creating web browsers which potentially have access to them at all times? Yeah, I'm most worried about what happens when I close the lid.
Any bug or future exploit negates this feature.
I have the feeling this was just done because it was possible and then the marketing team found out about it and decided to make a big deal out of it.
Words don't describe how delusional this is.
I would love to see a light come on if the camera or microphone is receiving power, and then only turn on power to these devices when a program requests it.
This was proven false when malware was discovered that accessed the camera on Mac laptops without activating the LED light. It's mentioned in the article.
It’s possible that Wardle’s Black Hat paper does, but the link doesn’t back up the assertion in the TechCrunch piece.
Isn't the microphone a simple passive device, no different from a voice coil in a magnetic field? Or maybe it's not that simple, since a passive mic wouldn't need power to begin with.
My understanding is that there's often a reconfigurable circuit in the sound chip which determines where the lines in and out are routed. So even if you've physically disconnected the microphone, but left internal speakers connected, it's possible to turn the speakers into a microphone via software alone.
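A toy model of that kind of retasking (class and field names are invented; real HDA codecs expose a similar "pin widget control" register):

```python
class CodecPin:
    """Toy model of a retaskable codec pin widget. All names here are
    invented for illustration; real codecs use numeric verb/register writes."""

    def __init__(self, direction: str = "output"):
        self.direction = direction  # "output" drives a speaker, "input" feeds the ADC

    def retask(self, direction: str) -> None:
        # A single software register write; no physical rewiring involved.
        if direction not in ("input", "output"):
            raise ValueError(direction)
        self.direction = direction

# Even with the real mic physically disconnected, the speaker pin can be flipped:
speaker_pin = CodecPin(direction="output")
speaker_pin.retask("input")  # the speaker now acts as an ad-hoc microphone
```

The point being: if the cutoff only covers the mic's wires, a software-only path through the codec can still capture audio.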
So what, does the light turn on whenever the entire IC sees power? Because if it relies on a signal from within the software-controlled IC, that's not exactly trivial to audit.
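The difference between the two wirings can be sketched like this (function names are mine, and this is a simplification of the analog reality):

```python
def led_rail_tied(camera_powered: bool) -> bool:
    # LED wired directly off the camera's power rail:
    # lit exactly when the camera has power, no firmware in the loop.
    return camera_powered

def led_gpio_driven(camera_powered: bool, firmware_asserts_led: bool) -> bool:
    # LED driven by a firmware-controlled GPIO: the two can disagree,
    # which is exactly the no-LED webcam-malware case.
    return firmware_asserts_led
```

Only the rail-tied version is trivially auditable: you can verify it with a multimeter instead of a firmware dump.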
Entrusting a black box chip to do this on your behalf is more risky than there being a physical kill switch somewhere which all the voice coils pass through.
There's certainly going to be a way for the chip to be told via software to reconnect the coils, though you may not know how yourself.
We've come so far in our exchange of privacy for convenience, creating a surveillance nirvana... for those with access.
1) Fingerprint scanners
2) Facial scanners
3) Retina / iris scanners
4) Voice recordings, voice assistants.
5) GPS tracking (on iOS, Google Maps defaults to knowing your location even when the app is not in use).
It appears beautifully programmatic / orchestrated.
In most modern PCs with two audio jacks software will ask you what kind of device you just connected and reconfigure the sound chip to treat it as input or output. I wouldn't be surprised if clever software was able to convince Apple hardware to treat the built in speaker as a microphone.
Mordechai Guri, Yosef Solewicz, Andrey Daidakulov, and Yuval Elovici (2016), 'SPEAKE(a)R: Turn Speakers to Microphones for Fun and Profit'.
An inlaid switch would cost a few cents and, physical tampering aside, would always work. With Apple's design skills I doubt many would even see it.
People who boldly guarantee a product's security are doomed to relearn history.
Unless the user forgets to use the switch. Cue the XKCD comic about $5 wrenches breaking strong crypto.
The deeper philosophical question is whether to build, to quote the Arch Way, user-friendly or user-centric products (and, consequently, security). It should surprise nobody that Apple values user-friendly design over user-centric design.
Fwiw I also would prefer an inlaid physical switch.
Seem to remember an impressive one from this year using modern deep learning, but could not find it now.
Micro Snitch is another alternative, although it's more limited in its functionality since it doesn't try to detect what program is accessing your webcam or microphone. It also costs $3.99, while OverSight is free.
- image signal processing
- an audio controller
- a mass storage controller
- a dedicated AES engine for encryption
- a Secure Enclave for Touch ID
Someday we'll find that the image-processing function has a bug that lets you control the microphone when you open some stupid meme. Great separation of security.
And still certain things will be impossible, even if you control the software on the T2, because there is no hardware access to e.g. read the encryption keys, from what I understand.
Since the T2 does not need to run user software there are additional options available to make it more secure which are not available for ordinary CPUs. For example, you could completely prevent executable code from being loaded without resetting the chip.
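One way to picture that (purely illustrative; this is my sketch, not the T2's actual design): a one-way lock bit, set at the end of boot, that refuses all further code loads until a reset.

```python
class LockedDownMCU:
    """Toy model of 'no new executable code without a reset': a one-way
    lock bit set at the end of boot refuses all later firmware loads."""

    def __init__(self):
        self.locked = False
        self.firmware = None

    def load(self, image: bytes) -> bool:
        if self.locked:
            return False          # load refused once locked
        self.firmware = image
        return True

    def finish_boot(self) -> None:
        self.locked = True        # one-way: only reset() clears it

    def reset(self) -> None:
        self.__init__()           # a full reset is the only way back
```

A general-purpose CPU can't do this because users expect to launch new programs; a fixed-function controller can.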
We knew it was happening, or could be, or might be, but at what scale?
Apple gets ahead by actually securing mic access. Lots of people want it.
Plug in USB mic when you need one.
Tell that to the thousands of Fortune 500 employees who don't care one bit about security, as it's not their responsibility.
They'll plug in the USB mic and that will be it.
The laptop even goes into the bag with things still plugged into the ports, if there's space.
Of course, this is all assuming you keep the default of closing the lid => computer goes to sleep. Some Macs can keep running while the lid is closed, so only the display is powered off. And since Mavericks, some Macs can keep most of the kernel running to perform background tasks, a feature called Power Nap.
Closed-display mode: https://support.apple.com/en-us/HT201834
Power Nap: https://support.apple.com/en-us/HT204032
Absolutely. It was possible to trigger the mic/camera on older MacBooks without turning on the green LED. This led to people being spied on without their knowledge (by script kiddies, not nation-states). More recently, there's the unlocking of the San Bernardino shooters' iPhones.
Literally every model of iPhone ever released has been "jailbroken" if that's what you mean. Including of course multiple re-jailbreaks per model when Apple patched the original exploits.
There's an argument that the secure enclave is unhackable, although there have still been ways around it up until very recently (is it finally 100% secure or have the new exploits just not been discovered yet?)
I prefer to work on the assumption that no hardware / software can ever be 100% secure.