
I believe that it’s not literally hardwired in the sense that powering up the camera also powers up the camera LED, and instead this relies on logic in the hopefully un-flashable camera+LED firmware. Someone correct me if I’m wrong.

You need some logic to enforce things like a minimum LED duration that keeps the LED on for a couple seconds even if the camera is only used to capture one brief frame.

I have a script that takes periodic screenshots of my face for fun and I can confirm the LED stays on even if the camera only captures one quick frame.




I happen to have some first-hand knowledge around the subject! In 2014 someone did a talk[0] on disabling the camera indicator LED on some older MacBooks. It was fairly trivial, basically just reflashing the firmware that controlled the LED. I worked on the security team at Apple at the time, and in response to this I attempted to do the same for more modern MacBooks. I won't go into the results, but the decision was made to re-architect how the LED is turned on. I was the security architect for the feature.

A custom PMIC for what's known as the forehead board was designed that has a voltage source that is ALWAYS on as long as the camera sensor has power at all. It also incorporates a hard (as in, tie-cells) lower limit for PWM duty cycle for the camera LED so you can't PWM an LED down to make it hard to see. (PWM is required because LED brightness is somewhat variable between runs, so they're calibrated to always have uniform brightness.)

On top of this the PMIC has a counter that enforces a minimum on-time for the LED voltage regulator. I believe it was configured to force the LED to stay on for 3 seconds.

This PMIC is powered from the system rail, and no system rail means no power to the main SoC/processor so it's impossible to cut the 3 seconds short by yoinking the power to the entire forehead board.

tl;dr On MacBooks made after 2014, no firmware is involved whatsoever in enforcing that the LED comes on when frames could be captured, and no firmware is involved in enforcing that the LED stays on for 3 seconds after a single frame is captured.

0: https://www.usenix.org/system/files/conference/usenixsecurit...
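
Purely as a toy illustration and not the actual logic, here's a rough C model of the three guarantees described above: the LED rail tracks sensor power, the PWM duty cycle has a hard floor, and a counter enforces the minimum on-time. The 20% duty floor is a made-up example value; the real part implements all of this in gates, not code.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define MIN_DUTY_PCT 20    /* assumed hard lower limit (tie-cells) */
    #define MIN_ON_MS    3000  /* minimum LED on-time after any capture */

    typedef struct {
        bool     sensor_powered;  /* camera sensor rail state */
        uint32_t hold_ms;         /* counts down the minimum on-time */
        uint8_t  duty_pct;        /* requested LED PWM duty cycle */
    } pmic_t;

    /* Called whenever the sensor rail comes up / a frame is captured. */
    static void pmic_on_capture(pmic_t *p) {
        p->sensor_powered = true;
        p->hold_ms = MIN_ON_MS;   /* (re)arm the minimum on-time counter */
    }

    /* Runs once per millisecond; returns the duty cycle actually applied. */
    static int pmic_tick(pmic_t *p) {
        if (p->hold_ms > 0)
            p->hold_ms--;

        /* LED rail is forced on while the sensor has power OR the
           minimum on-time counter is still running. */
        if (!p->sensor_powered && p->hold_ms == 0)
            return 0;

        /* Clamp the requested duty cycle so the LED can never be
           PWM'd down below the visible floor. */
        return p->duty_pct < MIN_DUTY_PCT ? MIN_DUTY_PCT : p->duty_pct;
    }

    int main(void) {
        pmic_t p = { .duty_pct = 5 };  /* attacker asks for a 5% duty cycle */
        pmic_on_capture(&p);           /* grab one quick frame...           */
        p.sensor_powered = false;      /* ...then cut sensor power          */

        for (int ms = 0; ms <= 3200; ms++) {
            int duty = pmic_tick(&p);
            if (ms % 800 == 0)
                printf("t=%4d ms  duty=%d%%\n", ms, duty);
        }
        return 0;  /* LED sits at the 20% floor for ~3 s, then goes dark */
    }

The real thing does this with a voltage regulator and fixed logic rather than a tick function; the sketch just shows why a single-frame grab still buys you roughly 3 seconds of clearly visible LED.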


There seems to be widespread anxiety regarding cameras, but hardly anyone ever talks about microphones. Are conversations not much more privileged information than potentially seeing someone in their underwear?


"All Apple silicon-based Mac notebooks and Intel-based Mac notebooks with the Apple T2 Security Chip feature a hardware disconnect that disables the microphone whenever the lid is closed. On all 13-inch MacBook Pro and MacBook Air notebooks with the T2 chip, all MacBook notebooks with a T2 chip from 2019 or later, and Mac notebooks with Apple silicon, this disconnect is implemented in hardware alone." [1]

[1] https://support.apple.com/guide/security/hardware-microphone...


That's what they said about the first-gen FaceTime cameras. "Oooh, don't worry, it's controlled in hardware!"

We have no way of verifying that anything they said in that document is true.


I'm inclined to believe it. If someone managed to prove Apple's lying about it, there would be serious reputational (and other) risks to their business. I also can't imagine how they would benefit from such a fabrication.

That said, I still use "Nanoblock" webcam covers and monitor for when either the camera or microphone are activated.


It's clear Apple defines "Hardware" as "Not using the main CPU". They've pretty much admitted it's firmware based; otherwise the T2 chip wouldn't need to be mentioned at all.


It is implemented in a dedicated small CPLD that cannot be flashed by any software means. My understanding of the relation to the T2/SEP is that this CPLD serves as a kind of "IO expander" for the T2/SEP, which also hardwires logic like this.


The T2 chip is mentioned in the quoted passage as an indicator of the architecture version, not necessarily an indicator that the T2 chip is directly involved


Obviously the camera is also 'disabled' when the lid is closed regardless of the controlling circuitry. So while that's a good feature, it's not relevant.


Depends what your threat model is?

Nobody but Abby and Ben care if Ben is caught admitting he cheated on Abby. But naked images of Abby can head off into the ether and be propagated more or less forever, turn up on hate sites, be detrimental to careers etc.

If your threat model is leaking company secrets then sure, microphone bad, as is anything having access to any hardware on your machine.

So sure, maybe people ought to be more concerned about microphones as well, rather than instead.


My point is that the threat model is backwards. The threat associated with a camera is the least severe compared to anything else a malicious person could do with access to your computer. Recorded conversations, chats and email, browsing history, etc. are all much more likely to result in harm if leaked than a recording of you innocently in your home.

> Nobody but Abby and Ben care if Ben is caught admitting he cheated on Abby.

That destroys families, standing within a community, and very often careers.


I don't think it is backwards, personally. The threat of public humiliation, and the capability for someone to spy on what you do in your own home, is worse with the camera.

> chats and email, browsing history, etc are all much more likely to result in harm if leaked than a recording of you innocently in your home.

This is far less of an intrusion for most people than recording what they are actually doing in their own home IRL. People know that information can be hacked; they don't expect, and react quite differently to, someone actually watching them.

> That destroys families, standing within a community, and very often careers.

Yes, but it doesn't stay on the internet forever in quite the same way.

Now I get to some extent what you're saying - aren't the consequences potentially worse from other forms of information leak?

Maybe. It depends on how you weight those consequences. I'd put (for example) financial loss due to fraud enabled by hacking my accounts as far less important than someone spying on me in my own home. Even if they didn't use that to then extort me, and were using the footage for ... uh ... personal enjoyment. I think a lot of people will feel the same way. The material consequences might be lesser, but the psychological ones not so much. Not everything is valued in dollars.


I think we may just be bumping into cultural differences here. I grew up in a household where being naked around family members was common. I spend time in clothing-optional spaces. I rarely draw the blinds on my windows, etc. I'm not concerned with what other people think in this way, and such images could never be used to extort me. Consider the case of Germany - people there are extremely concerned about their privacy and data protection. At the same time, public nudity is an entrenched cultural norm.

It's also known that people are not very good at assessing risk. People are more worried about dying at the hands of a serial killer than they are of dying in a car crash or slipping in the shower. I feel you're underplaying the psychological harm of having all of your data crawled through by a creep (that would include all of your photos, sites visited, messages sent, everything).

All I can really say is that if someone gained access to my machine, the camera would be the least of my concerns. That's true in nearly every context (psychological, financial, physical, etc).


Empirically, most low-level extortion does seem to be about leaking video. I would see a threat model based on 'criminal wants to extort me for money' as more reasonable than 'creep wants to look through my computer for creeping'. And it seems like extortion focuses on video, so that is the bigger threat, even if it is less invasive.

I presume the reason behind this is that video is much more likely to be re-shared. Sending Bob a zip of someone's inbox is unlikely to be opened, and even less likely to be shared with strangers. But send Bob a video of Alice, and he might open it. Heck, he might not know what the video is until he opens it. So even if he is decent, he might still see it. And if he is less decent and shares it, strangers are much more likely to actually view it.


I think extortion in the form of "I've encrypted your data, pay to get it back" is much more common. Ransomware. It's scalable, automatable. Extortion of video is harder to automate.


I think, though am prepared to be wrong, that you'll probably find yourself in the minority there.

It's not just about nudity and extortion, but someone having access to watch you, whenever they feel like, in your safe space. That sense of violation that people also feel when (for instance) they have been the victim of burglary - the missing stuff is often secondary to the ruined sense of security. There's a vast difference between leaving your curtains open and having someone spying on you from inside your own home.

Is it rational to put this above other concerns? That's a whole different debate and not one I'm particularly interested in. But it explains why people are concerned about cameras over 'mere' data intrusion.


I'm not arguing a point here, but I'm curious how many instances actually exist where someone is naked, or (accidentally, of course) exposed in some other extortion-worthy way, from the position of their webcam. I, too, would be much more concerned about my microphone, since I know I've had conversations in front of or next to my machine that I wouldn't want "out there". In terms of where my camera is, I would imagine they would catch me picking my nose every so often, but that's about it.


People watch porn on their laptops. Even just your orgasm face would be embarrassing for most people.


> Nobody but Abby and Ben care if Ben is caught admitting he cheated on Abby.

This isn't true at all, even for private citizens. Your friends, parents, children, and colleagues are all likely to care.


It's very limited, it's certainly not going to be passed around like naked pictures could be.


Yes, photos of naked people are used to extort them (usually into just paying the holder to delete them).

https://news.ycombinator.com/item?id=42261730


This raises a different but related question. In what world should a victim of a crime be extorted for doing innocent things in their home? If a peeping tom took a photo through a window, could that be used to extort someone?

When people are extorted for these kinds of things it's usually catfishing that leads to sexual acts being recorded. That's not related to cybersecurity.


Fear of harassment. You don't want your coworkers to see you naked, do you?

edit: s/baked/naked/ :D


They are, but people aren’t scared of those because they can’t see them staring at them.


> and no firmware is involved in enforcing the LED stay on for 3 seconds after a single frame is captured.

I may be the oddball here, but that 3 second duration does not comfort me. The only time I would notice it is if I am sitting in front of the computer. While someone snapping a photo of me while working is disconcerting, it is not the end of the world. Someone snapping photos while I am away from the screen is more troublesome. (Or it would be if my computer was facing an open space, which it doesn't.)


Right, so this is all defense in depth. That LED is sort of the last line of defense if all others have failed, like:

The exploit mitigations to prevent you from getting an initial foothold.

The sandboxing preventing you from going from a low-privileged to a privileged process.

The permissions model preventing unauthorized camera access in the first place.

The kernel hardening to stop you from poking at the co-processor registers.

etc. etc.

If all those things have failed, the last thing to at least give you a chance of noticing the compromise, that's that LED. And that's why it stays on for 3 seconds, all to increase the chances of you noticing something is off. But things had to have gone pretty sideways before that particular hail-mary kicks in.


OK, but then what? Leave the LED on for 24 hours after you've captured a single frame? At that point the LED isn't really indicating camera usage because you'll just get used to seeing it on all the time whether the camera is in use or not.


A random thought that probably won't cover all cases: a second LED or a colour LED. One LED/colour indicates the camera is active; the second can be on for a longer period of time (and perhaps dim as time goes on). I prefer the second-LED option since it is better for us colourblind folks, though I suspect there would be more resistance to the idea.

And, of course, covers are an option.


It's strange that none of these companies will include a closable cover for the camera. I got one aftermarket. It is very reassuring since no hacking or accidental misclicks on my part can move the cover.


I've seen HP desktops that have a closeable camera cover, and Lenovo does on some ThinkPads [1], so probably others do too. Laptops usually have very little depth available in the screen part though, which is why most laptop cameras are crappy (exceptions include Surface Pro and Surface Book, which have more depth available and so much better cameras than most, but no cover - at least their camera light is not software controlled).

[1] https://www.businessinsider.com/lenovo-thinkshutter-laptops-...


Higher end Lenovos and Dell Latitude / Precision tend to. Was one reason why I went for a Latitude 74XX rather than a 54XX or 34XX when looking at them last time.


I had a closable cover and someone shut my laptop with enough force that the cover caused the screen to break. Be careful when closing.


Was it a built-in camera cover, or a third-party one? Apple specifically (and possibly other manufacturers?) recommends against third-party covers because the tolerance is so close:

https://support.apple.com/en-us/102177


Sure, that is going to be true for anything with moving parts. Yet I would also imagine that design and materials are a factor here. Let's face it, these covers aren't exactly common on laptops. There is probably a lack of good design practices for them.


I also purchased a cover for mine, although in a pinch, the removable stickers on fruit work well.


I have a sticky piece of post it note more or less permanently affixed over my camera.


I can remember when someone spotted tape over Zuckerberg's laptop camera. Ref: https://www.theverge.com/2016/6/21/11995032/mark-zuckerberg-...


My Thinkpad does.


Thanks, this is the reason I browse Hacker News


Thanks for posting this interesting tidbit! I find this kind of knowledge absolutely fascinating!


Thank you for your work on this! I wish some other large companies took privacy that seriously.


Thank you for doing this.


I assume you're no longer working on it, but why not just wire it so that:

- The LED is in parallel, but with the sensor voltage supply, not the chip

- Camera sensor idle voltage = low voltage for the LED (be it with stepping if needed)

- Camera sensor active voltage = high voltage for the LED (again, stepping if needed)

- A little capacitor holds enough charge to run the LED for ~3 seconds after the camera goes back to idle voltage.

Good luck hacking that :)


That's basically how this works, but manufacturing electronics at a massive scale requires some more flexibility. For example, capacitors have a pretty large tolerance (sometimes +/- 20%) and LEDs have quite a bit of variety in what voltages they'll work at. So for some people the LEDs might last 3 seconds, for some they might last 5s. Using a capacitor also means the LEDs will fade slowly instead of just turning off sharply.

If the LEDs come from a different supplier one day, who is going to make sure they're still within the spec for staying on for 3 seconds?

(And yes, I have long since parted ways with Apple)

Edit:

And to add on: That capacitor needs time to charge so now the LED doesn't actually come on when the sensor comes on, it's slightly delayed!


Thank you for the clarifications. Armchair (well, workbench) engineering strikes again haha!


You can't drive an LED that way in production electronics: you need to use an LED driver circuit of some kind to ensure the LED has constant current, and also to protect against failure modes. Also, a capacitor large enough to power a daylight-visible LED for 3 seconds is not as "little" as you're thinking; there's likely not enough space in a laptop lid for one of those. A driver circuit would be smaller and thinner.

Agreed, however, that the LED should be controlled by the camera sensor idle vs. active voltage.
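
To put rough numbers on the two points above (the capacitor isn't as "little" as it sounds, and the ±20% tolerance moves the hold time noticeably): a back-of-the-envelope sketch, assuming roughly 10 mA of LED current and about 1 V of usable headroom before the LED visibly dims. Both figures are assumptions for illustration, not anything from the actual design.

    #include <stdio.h>

    int main(void) {
        /* Assumed values for illustration only. */
        double i_led  = 0.010;  /* LED current: ~10 mA                    */
        double t_hold = 3.0;    /* desired hold time after power cut: 3 s */
        double dv     = 1.0;    /* usable voltage drop before the LED is
                                   too dim to notice: ~1 V                */

        /* Treating the LED as a roughly constant-current load:
           C = I * t / dV                                                 */
        double c = i_led * t_hold / dv;
        printf("Nominal capacitance: about %.0f uF\n", c * 1e6);

        /* Hold time scales linearly with C, so a +/-20% part tolerance
           moves the 3 s target around quite a bit. */
        printf("Hold time at -20%%: %.1f s, at +20%%: %.1f s\n",
               t_hold * 0.8, t_hold * 1.2);
        return 0;
    }

Roughly 30,000 µF lands you in electrolytic/supercap territory, a hard sell inside a laptop lid compared to a driver circuit or a counter in the PMIC, and the ±20% spread alone turns "3 seconds" into "2.4 to 3.6 seconds".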


I've seen a million people parroting "oh now apple fixed it!" and not a single person who has actually verified/proved it. Go on, show me any third-party security researcher who has verified this claim by examining the actual hardware.

You'll pardon us all if we don't really believe you, because a) there's no way for any of us to verify this and b) Apple lied about it before, claiming the LED was hard-wired in blah blah same thing, except it turned out it was software controlled by the camera module's firmware.


I'd love for a third party to verify the claim! I'm just giving you an overview of the work that went into making this a thing, knowing full well you have absolutely no reason to trust me.

The LED being "hard-wired" is a tricky statement to make, and I actually wasn't aware Apple has publicly ever made a statement to that effect. What I can say is that relying on the dedicated LED or "sensor array active" signal some camera sensors provide, while technically hard-wired in the sense there is no firmware driving it, is not foolproof.


> Apple lied about it before, claiming the LED was hard-wired in blah blah same thing, except it turned out it was software controlled by the camera module's firmware.

Source?


A capacitor can hold enough charge to power an LED for a noticeable amount of time even if it's only powered for a brief moment, no logic needed.


I don't think they would waste a high-value capacitor just to keep an LED lit for longer; also, an LED lit directly by a capacitor would noticeably dim slowly as the capacitor discharges. It's more likely that the signal driving the LED comes out of a monostable implemented in code: pin_on() drives the LED on; pin_off() waits n seconds, then drives the LED off.
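
Something like this minimal sketch (pin_on()/pin_off() as above; the GPIO and sleep helpers are just stand-ins, not real camera-module firmware):

    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    #define LED_HOLD_SECONDS 3

    /* Stand-in for the GPIO write that actually drives the LED. */
    static void gpio_set_led(bool on) {
        printf("LED %s\n", on ? "on" : "off");
    }

    static void pin_on(void) {
        gpio_set_led(true);
    }

    /* The "monostable" part: the off request only takes effect after
       the hold time has elapsed. */
    static void pin_off(void) {
        sleep(LED_HOLD_SECONDS);
        gpio_set_led(false);
    }

    int main(void) {
        pin_on();   /* camera starts capturing              */
        pin_off();  /* capture ends; LED lingers ~3 seconds */
        return 0;
    }

Real firmware would arm a timer rather than block in pin_off(), but the effect is the same, and so is the attack surface, since it's still just code.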


This is Apple, so that assertion isn’t guaranteed valid like it would be for non-enterprise HP or Lenovo. They absolutely would invest in a capacitor if that’s what it takes, as they are maximally focused on camera privacy concerns and have made a point of that in their security marketing over time; or else they wouldn’t be allowing hardware security engineers to brag about it, much less talk publicly about it, at all.

EDIT: It’s not just a capacitor, it’s a full custom chip, that can’t be software-modified, that keeps the light on for 3 seconds. https://news.ycombinator.com/item?id=42260379


Logic on an already existing ASIC is going to be cheaper than a capacitor.


This is counter-intuitive enough to warrant further explanation.


If you are designing an ASIC for the camera, you can include all the required logic gates to control the LED for a cost that is close to zero. It wouldn't impact the production cost of the ASIC, whereas a capacitor is an additional item in the BOM (and to be charged it requires current, more than the LED, so the driver in the IC must be bigger).


The trick is to keep using the camera until that capacitor is discharged. I'm pretty sure most cameras can run at voltages below an LED's forward voltage nowadays.


See, then it's not hardwired at all. It is equally vulnerable to a reflash. Apple just did hardware security (i.e. signed firmware) better and is also relying on security through obscurity (it's not a publicly available part).



