It seems like it wouldn't be a stretch to make a USB webcam that presented an "animated" infrared image -- would that defeat this fix?
What I'd really like is the system to consider every new USB device untrusted, and require specific approval before it's added as a device. This should apply to its capabilities too (eg: if a "keyboard" suddenly is presenting itself as storage, that causes a prompt). Think along the lines of "Acme WebCam XYZ wants to add a Camera and Microphone. Allow?"
And while the computer is locked this should absolutely be impossible.
I went looking for some commercial stuff, and there seem to be products aimed at businesses -- but it seems these are centrally managed, work by whitelisting specific devices ahead of time, and are more focused on data exfiltration than on preventing a rogue keyboard, BadUSB, or rubber ducky. Is there something that does this?
> What I'd really like is the system to consider every new USB device untrusted, and require specific approval before it's added as a device. This should apply to its capabilities too (eg: if a "keyboard" suddenly is presenting itself as storage, that causes a prompt). Think along the lines of "Acme WebCam XYZ wants to add a Camera and Microphone. Allow?"
There's USBGuard on Linux that seems to do some of this. Can't vouch for it, as I've never used it, though.
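For a sense of the plumbing a tool like USBGuard builds on, here's a minimal Linux-only sketch using pyudev; it only watches hotplug events and prints what each new device claims to be (the allow/block policy a real daemon would apply is left out):

    import pyudev

    context = pyudev.Context()
    monitor = pyudev.Monitor.from_netlink(context)
    monitor.filter_by(subsystem='usb')

    # Block on hotplug events; every 'add' of a usb_device is a candidate
    # for an allow/deny prompt in a USBGuard-style policy daemon.
    for device in iter(monitor.poll, None):
        if device.action == 'add' and device.device_type == 'usb_device':
            print('new device:',
                  device.get('ID_VENDOR_ID'), device.get('ID_MODEL_ID'),
                  device.get('ID_MODEL', '?'),
                  'serial:', device.get('ID_SERIAL_SHORT', 'none'))

Note that everything a policy could key on here (VID, PID, model, serial) is self-reported by the device, which is exactly the weakness discussed below.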
>if a "keyboard" suddenly is presenting itself as storage, that causes a prompt
I'm not clear on how you would know it was the keyboard changing identities and not just a new device. Does the USB protocol provide anything where you would know, assuming devices can change Base Class, VID, PID, and so on?
For that matter, I don't think it needs to change. It can just emulate a hub and present both.
What you described would require each USB device to cryptographically sign its communications with a unique key. AFAIK USB doesn't have this, but Thunderbolt does (it's called "secure connect"). Even if it does get implemented, though, it probably won't help much, because most users can be socially engineered into trusting the new device.
The common USB device classes (video device, HID keyboard/mouse, etc.) don't have this, but anybody can define a new device class that does.
It seems like maybe Microsoft should have required something like this for Windows Hello cameras, if they intended for people to use the hardware as a single-factor authenticator.
i.e. the camera generates internal crypto keys and tells Windows about them when you set up Windows Hello; then Windows does a challenge/response during login attempts to make sure it's getting an image from the authentic camera (a sketch follows below).
Apple did something like this with the fingerprint reader home buttons on iPhones, which is why you had to replace the motherboard + home button as a single unit on damaged devices. They could have provided a reprovisioning tool, but it's Apple, so they didn't.
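A toy sketch of that enrollment-plus-challenge/response flow, assuming an Ed25519 key generated inside the camera (the pyca/cryptography calls are real; the "camera" and "frame" here are stand-ins):

    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Enrollment: the camera mints a key that never leaves its hardware;
    # the OS stores only the public half when you set up Windows Hello.
    camera_key = Ed25519PrivateKey.generate()
    enrolled_pub = camera_key.public_key()

    # Login: the OS sends a fresh nonce so old signatures can't be replayed.
    nonce = os.urandom(32)

    # Camera: sign nonce + frame, binding this image to this login attempt.
    frame = b'\x00' * 64  # stand-in for the real IR frame bytes
    signature = camera_key.sign(nonce + frame)

    # OS: verify before the frame ever reaches face recognition.
    try:
        enrolled_pub.verify(signature, nonce + frame)
        print('frame came from the enrolled camera')
    except InvalidSignature:
        print('reject: not the enrolled camera')

A fake "camera" could still replay a printed photo of you through a real signing camera, but it could no longer impersonate the camera itself.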
There exists the USB Authentication Specification Rev. 1.0 [1] which states in section 3.3:
"A private key used by one Authentication Responder shall not be used by any other
Authentication Responders. For example, one instance of a USB PD power supply cannot have the same private key as another instance of the USB PD power supply, even if they are otherwise identical model."
Current USB host controllers aren't built in a way that lets them distinguish between a physical removal and an electronic self-reset.
Maybe if you're the NSA you could roll your own xHCI host controller and USB-A receptacle that could characterize and identify individual units down to the machine in China used to assemble them, but that would be light-years ahead of commercial USB host controllers.
My point was if the class, VID or PID changes then treat it like a "new" device. If it presents subdevices or is suddenly a hub, those are new devices.
You'd also have to detect VID/PID brute-force attacks. Short of a cryptographic key (as gruez talked about), a device being able to guess a valid class+VID+PID combo would be bad, so this would have to be treated like a password brute-force attack: too many "new" devices in a short time (especially ones unplugged without clicking the authorization UI) would cause the system to stop accepting new devices for a while (sketched below).
Ideally you could also detect something else -- eg: a serial number -- but I think that starts getting into manufacturer/device-specific implementations. (If only there were a company with billions of dollars, huge facilities, relationships with USB vendors, and the capability to test tons of devices...)
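Something like this, say (the window and thresholds are invented numbers):

    import time
    from collections import deque

    WINDOW = 60.0     # seconds of history to consider
    MAX_NEW = 3       # more unrecognized devices than this trips lockout
    LOCKOUT = 300.0   # seconds to refuse all new devices

    arrivals = deque()    # timestamps of unrecognized-device arrivals
    locked_until = 0.0

    def on_unrecognized_device(now=None):
        """Return True if the device may be offered for user approval."""
        global locked_until
        now = time.monotonic() if now is None else now
        if now < locked_until:
            return False                  # still refusing new devices
        arrivals.append(now)
        while arrivals and now - arrivals[0] > WINDOW:
            arrivals.popleft()            # forget arrivals outside window
        if len(arrivals) > MAX_NEW:
            locked_until = now + LOCKOUT  # smells like VID/PID guessing
            return False
        return True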
>My point was if the class, VID or PID changes then treat it like a "new" device
Yes, I understood that. What I'm saying is that USB wouldn't give you any context for that. A device changing would be indistinguishable from one being removed and a new one going in. Or future "rubber duckies" could emulate a hub and add multiple devices, all operating simultaneously.
> A device changing would be indistinguishable from one being removed and a new one going in. Or future "rubber duckies" could emulate a hub and add multiple devices, all operating simultaneously.
In all of these cases, the system should show a prompt asking the user for authorization -- hopefully in a way that makes it clear that, if they didn't just physically attach a new keyboard, they should be extremely suspicious about what's currently connected to their USB ports.
Yes, you could prompt for every USB device detection. You would have to do that on boot too, though, which would probably give users security notice fatigue. There's basically no persistence, serial numbers, etc.
Only initially. I guess maybe I wasn't clear in my original post, but my idea was always that the system would remember any authorized device (presumably by VID+PID+capabilities, but maybe also serial number or other identifying info if the device and OS support it). You'd only get the UI prompt the very first time you plug in a new keyboard -- after that, you can plug it in at will and it continues to work. This is also why I mentioned VID/PID brute-force attacks. (A sketch of the bookkeeping follows below.)
It would be rare to see multiple devices at once -- eg: the very first boot, or plugging in a new complex device like a dock -- but the experience of this hinges on having the right UI. Having a series of pop-ups like "Do you accept Generic USB Hub 06 (connected to Root USB Hub 2)?" is going to cause them all to be ignored, but, for example, showing a tree of devices and making it easy to accept all at once would probably be ok. And hubs in particular may be a special case, where you don't bother to prompt, but I don't know if that opens an attack vector.
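The bookkeeping could be as simple as hashing everything the device claims to be; this fingerprint scheme is made up, and prompt_user is a stand-in for the OS approval dialog:

    import hashlib
    import json

    authorized = set()   # would be persisted to disk in a real system

    def fingerprint(vid, pid, interface_classes, serial=None):
        """Hash the device's entire claimed identity. A "keyboard" that
        later also claims storage hashes differently, so it reappears
        as a brand-new, unauthorized device."""
        claim = json.dumps([vid, pid, sorted(interface_classes), serial])
        return hashlib.sha256(claim.encode()).hexdigest()

    def prompt_user(vid, pid, interface_classes):
        """Stand-in for the dialog: "Acme WebCam XYZ wants to add a
        Camera and Microphone. Allow?" """
        ans = input(f'Allow {vid}:{pid} with {interface_classes}? [y/N] ')
        return ans.strip().lower() == 'y'

    def on_connect(vid, pid, interface_classes, serial=None):
        fp = fingerprint(vid, pid, interface_classes, serial)
        if fp in authorized:
            return True                   # seen before: no prompt
        if prompt_user(vid, pid, interface_classes):
            authorized.add(fp)            # remembered from now on
            return True
        return False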
Plus you'd have to teach people to turn off their computers before plugging devices in, which would be hard with Windows these days, when Shutdown is actually Hibernate.
Seems like a recipe for severe headaches and cursing when devices fail while you're logged out and now you can't log in by just connecting a new keyboard.
Aside from the fact that already-present malware makes this irrelevant: if you didn't just physically attach a new keyboard, a message about a new keyboard is also going to seem very suspicious.
Now I'm curious if there's some device that presents itself as a keyboard, types some PowerShell script, and "copies" files off of the computer using numlock/capslock/scrolllock signals.
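The signalling side of that is at least plausible: the host sets lock-key LED state via a standard HID output report, which a malicious "keyboard" can read back. A toy encoding with Caps Lock and Num Lock carrying two data bits per tick (pure Python, no actual HID; the scheme is just illustrative):

    def encode(data):
        """Yield (caps, num) bit pairs, MSB first, 2 bits per tick."""
        for byte in data:
            for shift in (6, 4, 2, 0):
                pair = (byte >> shift) & 0b11
                yield (pair >> 1) & 1, pair & 1

    def decode(pairs):
        """Reassemble bytes from the observed (caps, num) LED states."""
        bits = [b for pair in pairs for b in pair]
        return bytes(int(''.join(map(str, bits[i:i + 8])), 2)
                     for i in range(0, len(bits) - 7, 8))

    assert decode(encode(b'secret')) == b'secret'

At two bits per toggle it would be a painfully slow channel, but for a small file it would do.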
If you're using that as your sole authentication mechanism, then you're not encrypting your data with a password. It's already game over.
These kinds of 'security'* features can't be considered protection for the valuable data on your computer, or for the e-commerce account you're currently signed in to.
This stuff is for preventing Steven from making a funny Facebook post in your name (he'll find a way anyways).
*roughly the same level of 'security' a "beware fluffy the furry menace" sign on your garden fence provides.
The problem is that Apple does facial recognition, and does it in a semi-secure way, which builds trust in the technology. Then Microsoft and Samsung jam in the feature without any of the security considerations and ride on the trust Apple built in it.
It's completely outrageous that MS thought it was acceptable to do facial recognition using a basic webcam.
That doesn't really mean anything. It's just another data point on how Microsoft rushes out features that are not secure. Android also had face unlock years ago, on the Nexus 7, and it was even less secure than the current systems.
Very few companies actually care about security over flashy marketing features.
Can you explain why Apple's implementation is semi-secure and others' implementations aren't? All of them have been bypassed at one point or another, IIRC, but that's probably not what you mean?
(Also: if it's only semi-secure, then the trust built on it is partly PR, which I guess is outrageous as well?)
Apple's Face ID captures a 2-dimensional infrared image of your face and projects 30,000 IR dots to form a 3D depth map of it. It feeds this into an NN in a separate Secure Enclave processor to determine whether the face is attentive and authorised. I believe they also implement specific NNs just to perform anti-spoofing, both physical and digital.
Contrast this with Samsung's and Microsoft's solutions, which take a picture and try to match it.
Apple's Platform Security Guide on Face & Touch ID [0] is an interesting read.
Microsoft's solution requires a (near-)infrared camera and will not accept a regular USB camera, though. But it does seem to rely on simple facial recognition rather than building a depth map; the main use of the near infrared seems to be working around lighting issues and simple spoofing.
If you have relatives who look like you, the chance goes down from one in a million. And they're also the people most likely to have access to your phone, and to be interested in what's on it.
Also, the younger you are, the less likely it is to differentiate.
Windows Hello does require an IR camera AFAIK, so a "basic webcam" is not enough. However, it's not clear whether this exploit would still work if the custom USB device presented moving IR video as opposed to a static image.
Facial and fingerprint authentication are the most successful and practical security features out there, protecting billions of computers that would otherwise be unprotected. They should not be dismissed like that, and I'm glad the work is being done to find and fix vulnerabilities.
I can think of many major security breaches related to poor or leaked passwords and honestly none come to mind with faked biometrics. Not to say that there aren't any, but passwords have a terrible security history and everyone should be glad that they're slowly becoming just another factor rather than the sole gatekeeper.
Catchy umbrella names for a set of security-related products/services cause more harm than good, see Google Titan. When just one facet of that gets compromised it sows doubt about the whole thing due to clickbaity titles.
Out of curiosity, why would showing a printed image of the user's face not have worked as well? Or, say, playing a video of the user's face from another device in front of the webcam? Does the biometric software look for glint or other characteristics of a replicating medium?
From what I read the infrared version was defeated because it accepted a static image while the other requires video. It sounds to me like either one could be fooled by an appropriate video?
Interesting. I thought Windows Hello was implemented with dot matrix hardware like on iPhone, but clearly it isn't. It's illuminated infrared camera tech.
The real problem is: how can we be sure that a device claiming to be a camera really is a camera and can be trusted? But yeah, once the device is already physically compromised, there is not much that can be done from the OS's perspective.
A "physically compromised" iPhone will still not let you in, no matter what devices you plug into it, including removing the camera module and replacing it.
I think this comment is being downvoted unfairly - yes, there are mechanisms to get into an iPhone with physical access (see GrayKey) - but those rely on leveraging a vulnerability to allow brute-forcing the PIN; they do not break Face ID.