I thought the whole point of these camera LEDs was to have them wired to/through the power to the camera, so they are always on when the camera is getting power, no matter what.
Having the LED control exposed through the firmware completely defeats this.
They are hardwired on Macbooks. From Daring Fireball, quoting an email from an Apple engineer.
> All cameras after [2008] were different: The hardware team tied the LED to a hardware signal from the sensor: If the (I believe) vertical sync was active, the LED would light up. There is NO firmware control to disable/enable the LED. The actual firmware is indeed flashable, but the part is not a generic part and there are mechanisms in place to verify the image being flashed. […]
> So, no, I don’t believe that malware could be installed to enable the camera without lighting the LED. My concern would be a situation where a frame is captured so the LED is lit only for a very brief period of time.
The LED should be connected to camera's power, or maybe camera's "enable" signal. It should not be operable via any firmware in any way.
The LED also has to be connected through a one-shot trigger (a transistor + a capacitor) so that it lights up for at least, say, 500 ms no matter how short the input pulse is. This would prevent single shots from going unnoticed.
Doing that, of course, would incur a few cents more in BOM, and quite a bit more in being paranoid, well, I mean, customer-centric.
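To make the timing behavior concrete, here is a minimal Python model of such a one-shot pulse stretcher (the 500 ms figure comes from the comment above; the real thing would be an analog transistor + capacitor circuit, not code):

```python
def stretch_pulses(samples, sample_ms=1, min_on_ms=500):
    """Model a one-shot pulse stretcher: any active input sample
    keeps the output high for at least min_on_ms afterwards."""
    out = []
    remaining = 0  # ms of forced on-time left
    for s in samples:
        if s:
            remaining = min_on_ms  # retrigger on every active sample
        out.append(1 if remaining > 0 else 0)
        remaining = max(0, remaining - sample_ms)
    return out

# A single 1 ms camera pulse still lights the LED for 500 ms.
blip = [1] + [0] * 999
led = stretch_pulses(blip)
print(sum(led))  # → 500
```

Any retrigger while the output is high simply restarts the window, so rapid single-frame captures can't produce an invisibly short blink.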
My previous HP Envy x360 had such a switch on the side of the laptop that would electronically disconnect the webcam; according to the system it was completely disconnected, and re-enabling it would show a new device being connected in `dmesg`.
Not a great laptop otherwise, but that was pretty good!
You can buy/print and stick a physical «webcam cover»[1] manually on your notebook or phone.
My current notebook, manufactured in 2023, has a very thin bar on top of the screen for the camera, so I'd need a thin, U-shaped attachment for the slider, which is hard to find.
Am I the only one that is not worried at all about the camera and super concerned about microphones ? The camera may see me staring into the screen, woo hoo. The microphones will hear everything I discuss, incl. confidential information.
There is no physical microphone cover there, is there?
Well, I have a Pinebook Pro and it's pretty much abandonware: Pine doesn't do any software and the OSS side lacks maintainers. Nobody wants it; it's an e-waste laptop. Take it as you will.
Don't they warn you on the product page that you are buying hardware that is fully reliant on the community for functionality? That's the reason it's so inexpensive.
Yeah, that's nonsense. Pinebook Pro is well supported by Linux kernel and you can thus put any aarch64 Linux distro on it. And it's been this way for the last 3-4 years at the very least.
I've been using it daily for 3 years, for watching movies and as my main notebook while traveling.
> I suppose still not ready to be a daily driver to replace my normal phone?
I'd say that depends on your definition of daily driver and/or how much compromises you're willing to take. I occasionally see members at my larger hackerspace running around with those or other seemingly "unfit" hardware and not complain too much about it ;)
I have an old iPhone 7 which has an audio IC issue, so the microphone is physically disconnected. Calls don't work, video records without sound, etc. I need to connect an external microphone to have one.
Apart from the inconvenience it was somehow liberating knowing there is no microphone physically active.
And then there's the claim, true or not, that Google or other apps are listening and you then see ads based on that conversation. I think it's true, since far too many times obscure things I've spoken about have appeared in ads soon after the conversation. So yes, I'd say a mic-blocking feature you can confirm is actually blocking is needed.
Recommendation engines work on vast amounts of data they have on you and whatever made you speak about thing X was likely preceded by your internet activity which is not very unique as a precursor to speaking about X. In other words, if other people do Y on the internet and then end up doing stuff related to X, the recommendation engine will show you X just because you also did Y.
The other explanation is that one of your contacts who was part of the conversation did things that either directly related to thing X, which you spoke about, or something the algorithm sees other people do that relates to X, and you got shown ads based on your affiliation with this person.
I've also worked at FAANG and never seen proof of such claims anywhere in the code, and with the number of people working there who care about these issues deeply, I'd expect this to have leaked by now if it happens but is siloed...
> I think it's true since far too many times obscure things I've spoken about appear in ads soon after the conversation
People have been making claims like this since at least the early 90s, about TV then, and no one has ever credibly claimed to have worked on something like this. I've worked with purchased ad data and I've never seen this data or anything that implies it exists. It seems far more likely that it's a trick of memory. You ignore most ads you see, but you remember the ones that relate to odd topics that interest you.
I agree with this sentiment: people talk about product X, then realise they are seeing ads for product X. Most likely the ads were there first, and people only start talking about it because the ads have been working.
That’s pretty much it. You see an obscure ad without realizing it and have a related conversation later. Then when you see the ad again and make note of it, it feels strange.
Yeah, we're well past the point where "phones" have NPUs powerful enough to locally process "sensor" input and produce decontextualized probabilities of potential interests.
It's going to happen sooner or later and people will accept it, just like they accepted training of AI models on copyrighted works without permission, or SaaS, or AWS/PaaS, or sending all their photos to Apple/Google (for "backup").
I really question the commercial value of that kind of data. Credit card data has a lot more to do with intent to make future purchases than any keyword you might spit out verbally or in a search engine.
Reminds me of the chrome bug I filed years ago that is still unfixed. An extension with access to all browsing tabs can open a hidden iframe to a website that commonly would have mic and camera permission (like hangouts.google.com), and then inject its own JavaScript into that hidden iframe to capture mic or camera.
For this to work hangouts.google.com had to not include the HTTP header to block iframing but thankfully if you make up a URL the 404 page served on that domain does not include that http header.
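For reference, this is roughly the header logic at play, as a simplified sketch (my own simplification, not how Chrome actually decides; real browsers evaluate more cases, such as CSP fallbacks and same-origin checks):

```python
def is_frameable(headers):
    """Decide whether a cross-origin iframe embed would be allowed,
    based only on the response headers (a simplification)."""
    xfo = headers.get("X-Frame-Options", "").upper()
    if xfo in ("DENY", "SAMEORIGIN"):
        return False
    csp = headers.get("Content-Security-Policy", "")
    if "frame-ancestors" in csp:
        return False
    return True

# The bug described above: the real page sends the header,
# but the 404 page on the same origin does not.
print(is_frameable({"X-Frame-Options": "SAMEORIGIN"}))  # → False
print(is_frameable({}))                                 # → True
```

Since permissions like camera/mic are granted per origin rather than per URL, the unprotected 404 page inherits the grant while escaping the frame-blocking header.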
Just a personal anecdote: I don't have a dog, but my grandma has two. Once, while I was visiting her, the dogs were barking a lot. Almost immediately I started receiving ads for dog food on my cellphone.
It is more likely your GPS placed you in the vicinity (regularly?) of another ad ID that regularly searches for, purchases from, or visits dog-centric locations. It's also entirely possible that the other ad ID's (your grandma's) dog-food schedule is predictable and you happened to be visiting within a time frame of dog-food purchases.
The camera privacy issue arises because teenagers and college kids often have their computer in their bedroom.
So a webcam hack that lets them watch my 16 year old daughter study would also let them watch her sleeping, getting dressed, and making out with her boyfriend.
It's not only a teenager or college kid issue. I've seen adults with a computer in their bedroom because it's a kind of private space where they don't expect anybody to inadvertently bump into it.
My laptop is in my bedroom in winter, right now, because it's one of the smallest rooms and I can heat it easily. I use it in other parts of the house in the other seasons. I do have a sliding cover on the camera. I bought it years ago. The main issue is the microphone.
When I have to do faux-2FA auth using numeric codes sent by text or email, I sometimes catch myself quietly saying the numbers. A microphone would be quite handy for an attacker, even if they couldn't see all my network traffic.
A picture of you with the subject "I know what you were looking at when I took this picture of you" is pretty good blackmail--I think there's an active campaign doing this even.
This would've been blackmail 20 years ago... nowadays it's just "of course you know, I shared my OF likes publicly" and won't even raise an eyebrow; or perhaps I'm living in too bohemian social circles.
I received a phishing email from this campaign or a similar one several months ago.
The email opened with my name and contained a Google Maps photo of a house where I'd lived 8 years before.
The author claimed to have hacked my laptop and captured videos of me doing embarrassing things. They would release the videos unless I paid them $1000 in Bitcoin.
I searched and it's an extremely common scam, but I did panic for a few minutes.
As someone who often speaks gibberish to myself due to ptsd, if someone recorded me in my room they could convince anyone I am utterly insane, beyond any hope. It is a great way to blackmail people with coprolalia or other verbal tics.
And yeah, if they had access to my webcam, they would just see a guy staring into the screen or walking back and forth in the room.
Eh, random utterances are more common than you think. Especially amongst older people. Most will know at least a couple family members who tend to mutter random things to themselves.
Nobody who is themselves sane is going to judge another for random crap they say when they think themselves alone.
If there is a discrete PA in the speaker path, then no. But I would not be that surprised if there is a single-chip codec + PA combination that can connect an internal ADC to pins that are primarily meant as outputs of the integrated PA.
> Am I the only one that is not worried at all about the camera and super concerned about microphones? The camera may see me staring into the screen, woo hoo. The microphones will hear everything I discuss, incl. confidential information.
All phones are suspect. We should go back to only carrying pagers.
Just to note: Apple will refuse to cover any screen damage under warranty if one of these sorts of things was in use.
I would not be surprised if the same is true for some other manufacturers too, but I can only speak definitively to Mac.
The issue is that lids now close very tightly, so anything more than a piece of tape winds up focusing all the pressure applied to the closed lid on that one spot on the glass, since the cover holds the display slightly off the base of the laptop when closed.
I find that the sticky part of a post-it works very well for this. Sometimes you have to clean the adhesive part off with 70% IPA, but not too often.
Not as pretty as a custom cover but cost-effective and can generally be done in under a minute with common office supplies (post-it + scissors) which has its own advantages.
Would a bit of Post-It Note (for minimal adhesion) damage the screen coating if left on most of the time? Would even that much thickness stress the screen when opened and closed thousands of times? Is there a better (self-service) material?
I’ve used one for years on various MacBooks and it’s very effective. The paper is very thin so it causes no real mechanical stress and also opaque, so all the camera sees is a field the color of that paper.
There’s been no damage to the screen from the adhesive although occasionally I’ve had to clean the residual adhesive with 70% IPA, but nothing worse than the typical grime that most laptop monitors pick up.
Plastic slide covers that stick on are pretty cheap if your laptop doesn't already have one. I also think that the open microphone issue is a greater problem, especially with the current ability of speech-to-text, but what you utter may not be as important as being seen "doing a Toobin" during an online meeting. YMMV :) (I won't expand that acronym!)
> Would a bit of Post-It Note (for minimal adhesion) damage the screen coating if left on most of the time?
Possible. I have one IPS monitor with a spot on the screen where the color is pale; I had a post-it note there and I guess something bad happened when I tore it off.
I used electrical pvc tape for many years on my macbooks, no damage but I got tired of them leaving glue residue. Switched to post-its about 10 years ago, works perfectly.
I've never tried them on a matte or coated screen though.
> The LED should be connected to camera's power, or maybe camera's "enable" signal.
Wiring it in like this is suboptimal because this way you might never see the LED light up if a still photo is surreptitiously captured. This has been a problem before: illicit captures that happen so quickly the LED never has time to warm up.
Controlling the LED programmatically from isolated hardware like this is better, because then you can light up the LED for long enough to make it clear to the user something actually happened. Which is what Apple does -- three seconds.
Which is not an adjustable method -- without changing the hardware design later in production to just tweak a delay -- and surely causes the LED to slowly fade out?
You can design a simple circuit such that both long and short pulses light up the LED for at least 500 ms. There is no tradeoff that needs to be made at all.
The mentioned one-shot circuit does precisely that, in hardware, for less cost and 100% non-overridable.
The only time that isolated-hardware approach is beneficial in terms of cost is when you already have to have that microcontroller there for other reasons, and the cost difference we are talking about is on the order of a few cents max.
I mean can't you just have the input signal to the light be a disjunction of signals? So it's on if the camera is on OR if some programmatic signal says turn it on?
Even if the LED were controlled by hardware, the mere fact that you can reprogram the camera firmware on this ThinkPad is troubling. Malicious things can be done without the ability to turn off the LED during recording, like capturing images during legitimate recording, or starting a recording with the LED on, banking on the user not noticing.
Firmware programming should require physical access, like temporarily installing a jumper, or pushing some button on the circuit board or something.
(I don't want to suggest signed images, because that's yet another face of the devil).
Cameras are now always on, to reduce the latency to taking a picture or scrubbing video feed. You’d need to wire the led to something tied to the data lines, perhaps.
Source? This seems extremely unlikely to me, running a camera all the time consumes a fair bit of energy and they don't take long to turn on. Unless that's because they're always on?
Regardless, that's a pretty strong claim. I'd love to learn more if you have a link that can back you up!
> The actual firmware is indeed flashable, but the part is not a generic part and there are mechanisms in place to verify the image being flashed.
That might make it harder to develop a hack, but one would hope that if the hardware team tied the LED to a hardware signal, it would not matter if the firmware were reflashed.
I believe that it’s not literally hardwired in the sense that powering up the camera also powers up the camera LED, and instead this relies on logic in the hopefully un-flashable camera+LED firmware. Someone correct me if I’m wrong.
You need some logic to enforce things like a minimum LED duration that keeps the LED on for a couple seconds even if the camera is only used to capture one brief frame.
I have a script that takes periodic screenshots of my face for fun and I can confirm the LED stays on even if the camera only captures one quick frame.
I happen to have some first-hand knowledge around the subject! In 2014 someone did a talk[0] on disabling the camera on some older Macbooks. It was fairly trivial, basically just reflashing the firmware that controlled the LED. I worked on the security team at Apple at the time and in response to this I attempted to do the same for more modern Macbooks. I won't go into the results but the decision was made to re-architect how the LED is turned on. I was the security architect for the feature.
A custom PMIC for what's known as the forehead board was designed that has a voltage source that is ALWAYS on as long as the camera sensor has power at all. It also incorporates a hard (as in, tie-cells) lower limit for PWM duty cycle for the camera LED so you can't PWM an LED down to make it hard to see. (PWM is required because LED brightness is somewhat variable between runs, so they're calibrated to always have uniform brightness.)
On top of this the PMIC has a counter that enforces a minimum on-time for the LED voltage regulator. I believe it was configured to force the LED to stay on for 3 seconds.
This PMIC is powered from the system rail, and no system rail means no power to the main SoC/processor so it's impossible to cut the 3 seconds short by yoinking the power to the entire forehead board.
tl;dr On Macbooks made after 2014, no firmware is involved whatsoever to enforce that the LED comes on when frames could be captured, and no firmware is involved in enforcing the LED stay on for 3 seconds after a single frame is captured.
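A rough behavioral model of what's described above, just to make the invariants explicit (the 3-second hold comes from the comment; the duty-cycle floor value here is a made-up placeholder, and the real enforcement is dedicated analog/tie-cell hardware, not code):

```python
class LedPmicModel:
    """Behavioral model: LED rail follows sensor power, with a
    minimum on-time and a hard floor on PWM duty cycle."""
    MIN_ON_S = 3.0     # minimum LED on-time after sensor power (per the comment)
    DUTY_FLOOR = 0.2   # hypothetical lower limit; real value not public

    def __init__(self):
        self.hold_until = 0.0

    def led_on(self, sensor_powered, now_s):
        """LED is on whenever the sensor has power, and stays on
        until the hold-down counter expires."""
        if sensor_powered:
            self.hold_until = now_s + self.MIN_ON_S
        return sensor_powered or now_s < self.hold_until

    def effective_duty(self, requested):
        # Requests below the floor are clamped, so the LED can't be
        # PWM'd down to invisibility.
        return max(self.DUTY_FLOOR, min(1.0, requested))

pmic = LedPmicModel()
assert pmic.led_on(True, 0.0)        # sensor powered -> LED on
assert pmic.led_on(False, 1.0)       # sensor off, still within 3 s hold
assert not pmic.led_on(False, 4.0)   # hold expired
assert pmic.effective_duty(0.01) == 0.2
```

The point of both invariants is that no host-side software state appears anywhere in the model: the only inputs are sensor power and time.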
There seems to be widespread anxiety regarding cameras, but hardly anyone ever talks about microphones. Are conversations not much more privileged information than potentially seeing someone in their underwear?
"All Apple silicon-based Mac notebooks and Intel-based Mac notebooks with the Apple T2 Security Chip feature a hardware disconnect that disables the microphone whenever the lid is closed. On all 13-inch MacBook Pro and MacBook Air notebooks with the T2 chip, all MacBook notebooks with a T2 chip from 2019 or later, and Mac notebooks with Apple silicon, this disconnect is implemented in hardware alone." [1]
I'm inclined to believe it. If someone managed to prove Apple's lying about it, there would be serious reputational (and other) risks to their business. I also can't imagine how they would benefit from such a fabrication.
That said, I still use "Nanoblock" webcam covers and monitor for when either the camera or microphone are activated.
It's clear Apple defines "hardware" as "not using the main CPU". They've pretty much admitted it's firmware-based; otherwise the T2 chip simply wouldn't need to be mentioned.
It is implemented in a dedicated small CPLD that cannot be flashed by any software means. My understanding of the relation to T2/SEP is that this CPLD serves as a kind of "IO expander" for the T2/SEP, which also hardwires logic like this.
The T2 chip is mentioned in the quoted passage as an indicator of the architecture version, not necessarily an indicator that the T2 chip is directly involved
Obviously the camera is also 'disabled' when the lid is closed regardless of the controlling circuitry. So while that's a good feature, it's not relevant.
Nobody but Abby and Ben care if Ben is caught admitting he cheated on Abby.
But naked images of Abby can head off into the ether and be propagated more or less forever, turn up on hate sites, be detrimental to careers etc.
If your threat model is leaking company secrets then sure, microphone bad, as is anything having access to any hardware on your machine.
So sure, maybe people ought to be more concerned about microphones as well, rather than instead.
My point is that the threat model is backwards. The threat associated with a camera is the least severe compared to anything else a malicious person could do with access to your computer. Recorded conversations, chats and email, browsing history, etc. are all much more likely to result in harm if leaked than a recording of you innocently in your home.
> Nobody but Abby and Ben care if Ben is caught admitting he cheated on Abby.
That destroys families, standing within a community, and very often careers.
I don't think it is backwards, personally. The threat of public humiliation, and the capability for someone to spy on what you do in your own home, is worse with the camera.
> chats and email, browsing history, etc are all much more likely to result in harm if leaked than a recording of you innocently in your home.
This is far less of an intrusion for most people than recording what they are actually doing in their own home IRL. People know that information can be hacked, they don't expect and react quite differently to someone actually watching them.
> That destroys families, standing within a community, and very often careers.
Yes, but it doesn't stay on the internet forever in quite the same way.
Now I get to some extent what you're saying - aren't the consequences potentially worse from other forms of information leak?
Maybe. It depends on how you weight those consequences. I'd put (for example) financial loss due to fraud enabled by hacking my accounts as far less important than someone spying on me in my own home. Even if they didn't use that to then extort me, and were using the footage for ... uh ... personal enjoyment. I think a lot of people will feel the same way. The material consequences might be lesser, but the psychological ones not so much. Not everything is valued in dollars.
I think we may just be bumping into cultural differences here. I grew up in a household where being naked around family members was common. I spend time in clothing-optional spaces. I rarely draw the blinds on my windows, etc. I'm not concerned with what other people think in this way, and such images could never be used to extort me. Consider the case of Germany - people there are extremely concerned about their privacy and data protection. At the same time, public nudity is an entrenched cultural norm.
It's also known that people are not very good at assessing risk. People are more worried about dying at the hands of a serial killer than about dying in a car crash or slipping in the shower. I feel you're underplaying the psychological harm of having all of your data crawled through by a creep (that would include all of your photos, sites visited, messages sent, everything).
All I can really say is that if someone gained access to my machine, the camera would be the least of my concerns. That's true in nearly every context (psychological, financial, physical, etc).
Empirically, most low-level extortion does seem to be about leaking video. I would see a threat model based on 'criminal wants to extort me for money' as more reasonable than 'creep wants to look through my computer for creeping'. And it seems like extortion focusses on video, so that is the bigger threat, even if it is less invasive.
I presume the reason behind this is that video is much more likely to be re-shared. Sending Bob a zip of someone's inbox is unlikely to get opened, and even less likely to be shared with strangers. But send Bob a video of Alice, and he might open it. Heck, he might not know what the video is until he opens it, so even if he is decent, he might still see it. And if he is less decent and shares it, strangers are much more likely to actually view it.
I think extortion in the form of "I've encrypted your data, pay to get it back" is much more common. Ransomware. It's scalable, automatable. Extortion of video is harder to automate.
I think, though am prepared to be wrong, that you'll probably find yourself in the minority there.
It's not just about nudity and extortion, but someone having access to watch you, whenever they feel like, in your safe space. That sense of violation that people also feel when (for instance) they have been the victim of burglary - the missing stuff is often secondary to the ruined sense of security. There's a vast difference between leaving your curtains open and having someone spying on you from inside your own home.
Is it rational to put this above other concerns? That's a whole different debate and not one I'm particularly interested in. But it explains why people are concerned about cameras over 'mere' data intrusion.
I'm not arguing a point here, but I'm curious how many actual instances exist where someone is naked, or exposed in some other extortionable way (accidentally, of course), from the position of their webcam. I too would be much more concerned about my microphone: I know I've had conversations in front of or next to my machine that I wouldn't want "out there". In terms of where my camera is, I would imagine it would catch me picking my nose every so often, but that's about it.
This raises a different but related question. In what world should a victim of a crime be extorted for doing innocent things in their home? If a peeping tom took a photo through a window, could that be used to extort someone?
When people are extorted for these kinds of things it's usually catfishing that leads to sexual acts being recorded. That's not related to cybersecurity.
> and no firmware is involved in enforcing the LED stay on for 3 seconds after a single frame is captured.
I may be the oddball here, but that 3 second duration does not comfort me. The only time I would notice it is if I am sitting in front of the computer. While someone snapping a photo of me while working is disconcerting, it is not the end of the world. Someone snapping photos while I am away from the screen is more troublesome. (Or it would be if my computer was facing an open space, which it doesn't.)
Right, so this is all defense in depth. That LED is sort of the last line of defense if all others have failed, like:
The exploit mitigations to prevent you from getting an initial foothold.
The sandboxing preventing you from going from a low-privileged to a privileged process.
The permissions model preventing unauthorized camera access in the first place.
The kernel hardening to stop you from poking at the co-processor registers.
etc. etc.
If all those things have failed, the last thing to at least give you a chance of noticing the compromise, that's that LED. And that's why it stays on for 3 seconds, all to increase the chances of you noticing something is off. But things had to have gone pretty sideways before that particular hail-mary kicks in.
OK, but then what? Leave the LED on for 24 hours after you've captured a single frame? At that point the LED isn't really indicating camera usage because you'll just get used to seeing it on all the time whether the camera is in use or not.
A random thought that probably won't cover all cases: a second LED or a colour LED. One LED/colour indicates the camera is active; the second can stay on for a longer period of time (and perhaps dim as time goes on). I prefer the second-LED option since it is better for us colourblind folks, though I suspect there would be more resistance to the idea.
It's strange that none of these companies will include a closable cover for the camera. I got one aftermarket. It is very reassuring since no hacking or accidental misclicks on my part can move the cover.
I've seen HP desktops that have a closeable camera cover, and Lenovo does on some ThinkPads [1], so probably others do too. Laptops usually have very little depth available in the screen part though, which is why most laptop cameras are crappy (exceptions include Surface Pro and Surface Book, which have more depth available and so much better cameras than most, but no cover - at least their camera light is not software controlled).
Higher end Lenovos and Dell Latitude / Precision tend to. Was one reason why I went for a Latitude 74XX rather than a 54XX or 34XX when looking at them last time.
Was it a built-in camera cover, or a third-party one? Apple specifically (and possibly other manufacturers?) recommends against third-party covers because the tolerance is so close:
Sure, that is going to be true for anything with moving parts. Yet I would also imagine that design and materials are a factor here. Let's face it, these covers aren't exactly common on laptops; there is probably a lack of good design practices for them.
That's basically how this works, but manufacturing electronics at a massive scale requires some more flexibility. For example, capacitors have a pretty large tolerance (sometimes +/- 20%) and LEDs have quite a bit of variety in what voltages they'll work at. So for some people the LEDs might last 3 seconds, for some they might last 5s. Using a capacitor also means the LEDs will fade slowly instead of just turning off sharply.
If the LEDs come from a different supplier one day, who is going to make sure they're still within the spec for staying on for 3 seconds?
(And yes, I have long since parted ways with Apple)
Edit:
And to add on: That capacitor needs time to charge so now the LED doesn't actually come on when the sensor comes on, it's slightly delayed!
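For a sense of the tolerance problem: the on-time of a capacitor-held LED scales with R·C (t = RC·ln(V₀/V_off) for a simple discharge to a turn-off threshold), so a ±20% capacitor moves the on-time by the same ratio. A quick calculation with hypothetical component values (the R, C, and threshold voltages here are illustrative, not from any real design):

```python
import math

def on_time_s(r_ohms, c_farads, v_start=3.3, v_off=1.8):
    """Time for an RC discharge to fall from v_start to v_off,
    treating v_off as the voltage where the LED stops being visible.
    (Idealized: a real LED load is nonlinear.)"""
    return r_ohms * c_farads * math.log(v_start / v_off)

nominal = on_time_s(100_000, 47e-6)        # hypothetical R and C
worst = on_time_s(100_000, 47e-6 * 0.8)    # same cap at -20% tolerance
print(f"{nominal:.2f}s nominal, {worst:.2f}s at -20%")
```

With these placeholder values the on-time drops by a full 20% at the low end of the capacitor's tolerance, which is the supplier-variation problem the comment is pointing at.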
You can't drive an LED that way in production electronics: you need to use an LED driver circuit of some kind to ensure the LED has constant current, and also to protect against failure modes. Also, a capacitor large enough to power a daylight-visible LED for 3 seconds is not as "little" as you're thinking; there's likely not enough space in a laptop lid for one of those. A driver circuit would be smaller and thinner.
Agreed, however, that the LED should be controlled by the camera sensor idle vs. active voltage.
I've seen a million people parroting "oh, now Apple fixed it!" and not a single person who has actually verified/proved it. Go on, show me any third-party security researcher who has verified this claim by examining the actual hardware.
You'll pardon us all if we don't really believe you, because a) there's no way for any of us to verify this, and b) Apple lied about it before, claiming the LED was hard-wired and so on, except it turned out it was software-controlled by the camera module's firmware.
I'd love for a third party to verify the claim! I'm just giving you an overview of the work that went into making this a thing, knowing full well you have absolutely no reason to trust me.
The LED being "hard-wired" is a tricky statement to make, and I actually wasn't aware Apple has publicly ever made a statement to that effect. What I can say is that relying on the dedicated LED or "sensor array active" signal some camera sensors provide, while technically hard-wired in the sense there is no firmware driving it, is not foolproof.
> Apple lied about it before, claiming the LED was hard-wired in blah blah same thing, except it turned out it was software controlled by the camera module's firmware.
I don't think they would waste a high-value capacitor just to keep an LED lit for longer; also, an LED driven directly from a capacitor would noticeably dim as the capacitor discharges. It's more likely that the signal driving the LED comes out of a monostable implemented in code: pin_on() drives the LED on; pin_off() waits n seconds, then drives the LED off.
This is Apple, so that assertion isn’t guaranteed valid like it would be for non-enterprise HP or Lenovo. They absolutely would invest in a capacitor if that’s what it takes, as they are maximally focused on camera privacy concerns and have made a point of that in their security marketing over time; or else they wouldn’t be allowing hardware security engineers to brag about it, much less talk publicly about it, at all.
If you are designing an ASIC for the camera, you can include all the required logic gates to control the LED for a cost that is close to zero. It wouldn't impact the production cost of the ASIC, whereas a capacitor is an additional item in the BOM (and to be charged it requires current, more than the LED, so the driver in the IC must be bigger).
The trick is to keep using the camera until that capacitor is discharged. I'm pretty sure most cameras can run at voltages below an LED's forward voltage nowadays.
See, then it's not hardwired at all. It is equally vulnerable to a reflash. Apple just did hardware security (i.e. signed firmware) better, and is also relying on security through obscurity (it's not a publicly available part).
The context from the article the parent comment linked is that Mac webcams made prior to 2008 both had the camera LED controlled in firmware and didn't verify the camera firmware blob when it was downloaded into the camera's RAM. The quote you're replying to simply says that Apple solved these security issues by tying the LED to a hardware signal AND verifying the camera firmware blob. The result is still that there's no way to turn on the webcam without making the LED light up.
AFAIK iOS devices use a tiny firmware on the camera and a larger one on the secure enclave chip.
If you successfully compromise the host OS and also the secure enclave firmware, that might be enough to let you turn on the camera (without vsync) and reconstruct the correct image via later analysis... but at that point you have committed tens of millions to the hack (so you'd better not overuse it or it'll get noticed & patched).
Many complex chips have GPIO signals rather than hardwired outputs. That way you can select any [5-10] of [20-100] functions for each pin. As a result, things that you think should be hardwired are controlled by firmware.
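As a sketch of how that pin muxing usually looks (the register layout below is invented for illustration; real SoCs document their own field widths and function numbers):

```python
def set_pin_function(mux_reg, pin, function, bits_per_pin=4):
    """Return a new pin-mux register value with `pin` routed to
    `function`. Each pin owns a small field in the packed register;
    whatever firmware writes here decides what the pad actually does."""
    width_mask = (1 << bits_per_pin) - 1
    shift = pin * bits_per_pin
    cleared = mux_reg & ~(width_mask << shift)
    return cleared | ((function & width_mask) << shift)

# Route pin 2 to function 5, e.g. plain GPIO instead of a dedicated signal:
reg = set_pin_function(0x0000_0000, pin=2, function=5)
print(hex(reg))  # 0x500
```

Which is exactly the problem the comment points at: if the "hardwired" LED pad is really a muxable pin, a firmware write like this can reroute it.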
I've also read of exploits which found ways to burn out the Macbook LED light by somehow messing with the power being supplied to the webcam without damaging the camera. Thus afterward, the LED light no longer powers on when in use but the camera still works.
While Apple made a laudable effort in this design, it sadly requires thoughtful care and design at every iteration. Typically, the iPhone team couldn't pull it off, and the only official claim is for MacBooks.
I think it's simpler to assume that most devices can be hacked and the LED indicator isn't infallible than to always keep in mind which device lines are supposed to be safe and which ones aren't.
Apparently it was purely in software on iPhone/iPad. However, starting with the iPhone 16 and M4 iPad Pro, the LED indicator is rendered by a separate secure exclave:
Do you know if the same occurs in iPhones? That was always my assumption, but seeing a Mac-only response makes me wonder if it is addressing a Mac/only question or if it’s applicable only to Macs.
An indicator light hardwired is nice but I apparently can't trust hardware manufacturers to design it properly. My work laptop (HP Dragonfly) has a physical blocker that closes over the camera when I haven't explicitly pressed the button that enables the camera. The blocker is black and white stripes so it's very obvious when it's covering the sensor. This should absolutely be the security standard we all strive for with camera privacy.
On my ThinkPad it’s instead painted with a red dot. Because, obviously, the conventional meaning of a red dot appearing on a camera is “not recording”.
Not just the weird meaning: on my last ThinkPad, the red dot and the slightly red gleam of the camera lens look surprisingly alike. Even worse, I managed to get the cover into a position where it looked closed, but the camera could still see.
I just looked up at my "Lenovo Performance" webcam and saw its red dot [1] looking at me... some product designers have a worrying lack of awareness of how de-facto standards and user expectations affect the UX.
Same on my Dell Latitude. It seems a very odd design decision. They've also centrally aligned the switch, so it's not immediately obvious from the switch position whether the cover is over the lens or not. Super annoying.
The Dell Latitude business laptops now have a wired LED and a wired switch. Besides the white LED, there's no indication of what is on or off, and I don't trust any of the software or firmware chain to be reliable. (Score one for Macs being transparent and prescient.)
Dell should go back to the basic design of the Latitude E6400, but with modern electronics and screen of course, and drop the optical drive. The keyboard on that laptop was fantastic, and the single captive screw on the back panel was great for serviceability.
For some inexplicable reason Dell has chosen to mark the button as "mute mic" (mic icon + X). So if the LED on the keyboard is lit up, the microphone is off, or rather, microphone muting is on. Brilliant design.
Yeah, the physical barrier is key. It's not that hard, and provides absolute certainty. As indicated by this thread, software experts (rightly) don't trust software by itself enough. It's by the same rationale software people are proponents of electronic voting machines printing physical, verifiable paper copies of votes.
My Latitude 7440 has a physical slider switch that covers the camera, in addition to turning it off in a software-detectable way (it shows "no signal" and not just a black screen once the slider is about 50% covering the lens). My only criticism of this is that it's subtle and at a glance hard to tell the difference between open and closed, but I guess you just get used to the slider being to the right.
I was just testing and the white LED comes on when I open something that wants to use the camera, even when the cover is closed. This seems like a useful way to detect something (eg malware) trying to use the camera, and is a good reason to not bluntly cut power to the entire camera module.
Probably the camera's power is always on, like any other microcontroller on the same board, and it's only active when addressed over the control bus or via an interrupt. Tying an LED to the power rail would keep it lit the whole time the laptop is on.
Then tie it to some signal or power rail that only gets enabled when the camera is in use, and that must be enabled for the camera to work, e.g. when there's power to the sensor itself.
I suspect most people don't want it. I can imagine lots of people calling customer service "Q: why doesn't my camera work?", "A: Did you open the cover?"
There's just as valid an argument to do the same for phones. How many phones ship with camera covers, and how many users want them?
You can get a stick on camera cover for $5 or less if you want one. I have them on my laptops but not on my phone. They came in packs of 6 so I have several left.
> I can imagine lots of people calling customer service "Q: why doesn't my camera work?", "A: Did you open the cover?"
In some over-engineered world, when the camera cover is engaged the webcam video feed would be replaced by an image of the text "Slide camera cover open" (in the user's language) and an animation showing the user how to do so.
We have that on the most recent generation of Framework Laptop. When the hardware privacy switch is engaged, the image sensor is electrically powered off and the camera controller feeds a dummy frame with an illustration of the switch.
And adding 2+2, the man being interviewed (Nirav Patel) is the same man who replied to my comment (HN user nrp), i.e. the man who actually did the overengineering.
If you rewind to 17:03, he talks about the changes of what the switch does (previously: USB disconnection, now: as he described in grandparent comment).
The cover on my laptop's camera is behind the glass. I suppose there is a chance that the slider itself could get damaged, but at least they minimized the exposed surface that could be damaged.
That said, I really can't comment on how durable it is. I only remove the cover about a half dozen times a year.
I had that exact discussion with somebody recently, and it took me a few minutes to realize that their laptop had a physical camera cover that somehow disables camera permissions in windows too. So yeah, happens a ton I would imagine.
For what it's worth, you could just power on the camera, take a pic, then turn it back off instead. Provided you can do this fast enough, an indicator LED is rendered worthless. So you'd need the indicator LED to stay lit for a minimum amount of time.
There's also the scenario where the LED or the connections to it simply fail. If the circuit doesn't account for that, then boom, now your camera can function without the light being on.
Can't think of any other pitfalls, but I'm sure they exist. Personally, I'll just continue using the privacy shutter, as annoying as that is. Too bad it doesn't do anything about the mic input.
I worked on this feature for Apple MacBooks around 2014 as the security architect. All MacBooks since then have a camera indicator LED that (barring physical removal of the LED) stays on for at least 3 seconds whenever the camera is powered. This feature is implemented in gates in the power management controller on the camera sub-board.
There's a LOT of pitfalls still (what if you manage to pull power from the entire camera sub-assembly?), this was a fun one to threat-model.
A minimum light duration seems pretty trivial to physically engineer.
For one the energy to take a picture is probably enough to power a light for a noticeable amount of time.
And if it isn't, a capacitor that absorbs energy and only allows energy through once it's full would allow the light to remain on for a couple of seconds after power subsides.
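A rough back-of-envelope for that capacitor (all numbers below are illustrative): the hold time is roughly the stored charge in the usable voltage headroom divided by the LED current, t ≈ C·(V0 - Vf)/I:

```python
def hold_time_s(cap_farads, v_charged, v_forward, led_current_a):
    """Approximate seconds a capacitor keeps an LED lit, assuming a
    roughly constant current until the cap sags to the forward voltage."""
    return cap_farads * (v_charged - v_forward) / led_current_a

# A fairly large 1000 uF cap charged to 5 V, ~3 V LED forward voltage,
# running the LED dimly at 2 mA:
print(hold_time_s(1000e-6, 5.0, 3.0, 2e-3))  # 1.0 (second)
```

So a second or so is cheap, but a bright multi-second hold needs a physically large capacitor, which is part of why doing the delay in logic gates (as described elsewhere in the thread) is the more likely implementation.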
Wasn't arguing that it's difficult, just that it's needed (and that I'm not expecting it to be done in practice. Because the indicator LED on my laptop doesn't do it either, despite being enterprise grade).
Trust me, I was using it semi-sarcastically too. This thing is slower than my old Pentium 4 would be, yet its battery discharges from 30% to 3% fast enough to make the speed of light itself blush.
> The main culprit is that anyone estimating battery life in percentages.
I thought this was a solved problem, like, decades ago? At least I remember even the first gen MacBooks having accurate battery percentages, and it’s a more vague memory but my PowerBook G4 did too I think.
The "accurate" charge level mostly holds for a specific number of charge cycles (i.e. when the battery is new). Laptop batteries suffer from high temperature (over 60 °C) and overcharging (over ~4.2 V per cell for most Li-ion chemistries).
"A third" is again a fraction/percentage: it's still a representation that depends on charge and charge cycles... and likely on previous overcharging and heat (Li-ion doesn't like heat).
To put it simply: the charge level, usually, is just a lookup table for voltage (not under load).
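That lookup-table approach can be sketched as follows (the curve points are generic illustrative values for a Li-ion cell, not any particular battery):

```python
import bisect

# (open-circuit voltage, state of charge %) for a generic Li-ion cell
OCV_TABLE = [(3.0, 0), (3.5, 10), (3.7, 40), (3.8, 60),
             (3.9, 75), (4.0, 90), (4.2, 100)]

def soc_percent(voltage):
    """Linearly interpolate state of charge from (unloaded) voltage."""
    volts = [v for v, _ in OCV_TABLE]
    if voltage <= volts[0]:
        return 0.0
    if voltage >= volts[-1]:
        return 100.0
    i = bisect.bisect_right(volts, voltage)
    (v0, s0), (v1, s1) = OCV_TABLE[i - 1], OCV_TABLE[i]
    return s0 + (s1 - s0) * (voltage - v0) / (v1 - v0)

print(soc_percent(3.75))  # 50.0
```

As the cell ages, the real curve drifts away from the table, which is exactly why reported percentages become nonsense on worn batteries.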
In case it was somehow magically unclear, it's not that I don't understand how batteries work, but that either that exact charge approximation mechanism is working exceptionally incorrectly, making it appear as if the battery suddenly lost so much charge, or the battery is a bust.
I do not know whether the battery is actually experiencing that sudden loss in charge, nor do I care, because in practice the end result is the same...
While true, the amount of power would be too low. LEDs also have a fairly high forward voltage (~3 V for blue ones) and are current-driven devices; that suggestion would require passing all the current through the LED. LEDs don't like being reverse-biased either. Overall it's a rather appalling idea, on top of the fact that LEDs can fail short.
Also, you'd want a hold-up time for the light (a few seconds at least), as taking single pictures would flash it for only 1/60 of a second or so.
They have a high forward voltage /drop/, which is a useful property. You drive them with constant current for constant brightness and improved lifespan, which is more pertinent for LED light-bulb replacements than for a simple status light. A fixed delay before standby isn't hard to enforce either.
Even so this whole attack vector isn't solved with this. How long should the light stay on for after the camera is put in standby before a user considers it a nuisance? 5 seconds? So if I turn my back for longer than that I'm out of luck anyways.
The anti-TSO means would be a hardware serial counter with a display on the camera. Each time the camera is activated the number is incremented effectively forming a camera odometer. Then if my previous value does not match the current value I know it's been activated outside of my control.
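That odometer reduces to a monotonic, non-resettable counter tied to the sensor-power signal; a toy model (entirely hypothetical, no such hardware exists in these laptops):

```python
class CameraOdometer:
    """Monotonic activation counter. In hardware this would be a
    non-resettable counter clocked by the sensor's power-on signal,
    with a small readout next to the camera."""

    def __init__(self):
        self._count = 0

    def on_camera_power(self):
        self._count += 1

    def read(self):
        return self._count

odo = CameraOdometer()
odo.on_camera_power()        # a use the owner knows about
noted = odo.read()           # owner writes down: 1
odo.on_camera_power()        # a covert activation later
assert odo.read() != noted   # mismatch reveals the unexpected use
```

Unlike an LED, this defeats the turn-your-back attack: the evidence persists instead of disappearing after a few seconds.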
I might be out of the loop, but I thought that was only for some machines - I remember the LED being wired that way being a selling point for MacBooks at some point, as a privacy feature. It definitely should be the standard, though!
I don't know whether the 2024 models have the LED or not, but there's an unmaskable/global overlay warning for Webcam / Microphone / Location services, and I think they are controlled at the kernel level. You can't bypass these indicators when any software accesses these devices.
These warnings have hysteresis and logging. They don't disappear the moment you close the device, and you can see which app is using which device.
...and no, ambient light sensor handles the true tone and brightness. It's not the camera.
Can you point me to a link? This is very disturbing to me as I thought they were wired together. I can’t find any source confirming or denying newer than like 2022…
I can't find it now, but recently I read how one company's design team added this feature to their laptops. A subsequent review by the team responsible for manufacturing found that they could change the circuit to cut down on the part count to save money. The light was still there, but it was no longer hardwired. The company continued to advertise the camera light as being hardwired even though it wasn't.
I stumbled on a forum once where it was just filled with people trying to modify the software for various laptops to disable the "tally lamp" (as it is called). There were people selling the mods and one guy claiming he was selling his cracks to three-letter agencies. The people on there seemed to be using this to extort people (mostly women) by being able to record videos without the owner knowing. Some really dark shit.
Yeah the first day I read about RATers... jesus. The camera LED seemed to be a major thing for them, because if they could bypass it then the chance their RAT would be discovered was much lower.
Really nasty world they've made for themselves, blackmailing, extorting and generally controlling other people (mostly women and girls, but some men too) with threats of releasing compromising material.
Since some sort of firmware is required, this seems like a "Turing tarpit" security exploit from my layman's perspective.
There's no standard I know of that, like "Secure Boot" (or whatever its exact name is), locks down the API of peripheral firmware and can statically verify that said API doesn't allow unintended exploits.
That being said: imagination vs reality: the Turing tarpit has to be higher in the chain than the webcam firmware when flashing new firmware via internal USB was the exploit method.
No firmware is required. Macbooks manufactured since 2014 turn on the LED whenever any power is supplied to the camera sensor, and force the LED to remain on for at least 3 seconds.
Thanks for your reply; getting a response from the source himself can only make me feel flattered.
> Macbooks manufactured since 2014 turn on the LED whenever any power is supplied to the camera sensor, and force the LED to remain on for at least 3 seconds.
That convinced me originally, I think; good old days! I'd almost forgotten about it. The way it was phrased, it sounded like a 50% OS-level concern to me.
But if the camera and LED really share a power supply, and the LED is always on without any external switch, good then!
I was not very popular with the camera firmware folks for a while. They had to re-architect a bunch of things as they used to occasionally power on the camera logic without powering the sensor array to get information out of the built-in OTP. Because the LED now came on whenever the camera was powered they had to defer all that logic.
Apologies. OTP is One-Time-Programmable. The physical implementation of this varies, in this specific case it was efuses (anti-fuse, actually). It's used for things like calibration data. For a camera it contains information about the sensor (dead pixels, color correction curves, etc.).
That's why many ThinkPads have physical covers over their cameras. You don't even need to worry about whether the LEDs are hardwired - relying on any electronic indicator is already a half-baked security measure. If you want real security, just go with a physical solution.
…until it isn’t: my ThinkPad P1 Gen 6 has the camera cover, yes - but it doesn’t have a cover for the depth-sensing camera, only the RGB cam, even though userland applications can get imaging data from that camera just as easily - which is potentially a bigger security issue: I imagine you could reconstruct my facial shape from the data and build a dummy head to get into my iPhone/iPad via FaceID.
(No, I’m not actually worried about this, I’m far too unimportant for anyone to make a targeted attack against)
In the past I've used microsnitch on macos which tells you when the mic or camera are activated, but macos seems to have support for this baked into the os now. In zoom calls the menu bar shows what is active. If this can be sidestepped and avoided in software, and the camera can be activated without any indicator, I do not know. If direct access can be done, and you don't need to go through some apple api to hit the camera, maybe.
We may already have this law. If the manufactures makes claims about this LED, then that this attack is possible mean a lawyer can force them to recall and fix everything.
The idea has been around for quite some time. But it is always dropped.
My guess is that, assuming the most basic and absolute physical design, the light would flash for silly things like booting, upgrading firmware, checking health, or stuff like that.
Flashing is easily fixed with a capacitor, and it's also not a bad thing if the light turns off immediately when it loses power. The only explanation that makes sense to me is that separate control is a feature, not a bug.
I agree on the capacitor fix for flashing, I pointed it out in another post.
In this case I was referring to false positives to the user.
This would mean we can't update the firmware without causing the user some paranoia.
Also: would an app requesting permission to use the camera itself send some power to the camera to verify it is available? In a similar vein, what about checking whether the camera is available before even showing the user the button to use it?
Maybe there are solutions to this; I'm just pointing out some reasons they may have gone the software route instead of the hardware route.
It could be something very simple, such as requiring less USB hub complexity for a camera that can be woken up via a command on the USB bus instead of needing to connect/disconnect the USB power rails (wired in parallel with the LED) to it.
Somebody here has also mentioned Apple using the camera for brightness and maybe color temperature measurement, for which they wouldn't want to enable the LED (or it would effectively always be on).
That doesn't automatically make that a good tradeoff, of course; I'd appreciate such a construction.
> Somebody here has also mentioned Apple using the camera for brightness and maybe color temperature measurement, for which they wouldn't want to enable the LED (or it would effectively always be on).
That is not true. MacBooks have separate light sensors. And the camera physically cannot activate without the LED lighting up and a notification from the OS. People say a lot of stupid things in the comments…
It isn't clear to me that webcam firmware ever powers down a typical camera module. See below for data about how the Sony IMX708 sensor is an I2C device with start and stop streaming commands.
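For context on what "start and stop streaming commands" means here: many Sony IMX-family sensors are configured over I2C with 16-bit register addresses, and streaming is conventionally toggled via a MODE_SELECT register at 0x0100 (treat that address as an assumption and check the actual IMX708 datasheet). The write is just three bytes on the bus, so "stop" leaves the sensor powered, in standby:

```python
def mode_select_write(streaming):
    """Build the I2C payload for a 16-bit-addressed register write:
    high address byte, low address byte, then the value.
    0x0100 / MODE_SELECT is the conventional Sony IMX location
    (assumed here; verify against the sensor's datasheet)."""
    reg = 0x0100
    value = 0x01 if streaming else 0x00  # 1 = streaming, 0 = standby
    return bytes([reg >> 8, reg & 0xFF, value])

print(mode_select_write(True).hex())   # 010001  start streaming
print(mode_select_write(False).hex())  # 010000  standby, still powered
```

Which supports the comment's point: a typical "stop" is a register write putting the sensor into standby, not a power cut.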
It's probably done to keep it in a low powered state and reduce the initialization delay. Maybe also to prevent the Windows USB plugging sound from playing upon turning the camera on, as it would seem weird to the user ("I don't have any USB devices plugged in...")
Most business class thinkpads have a physical cover in the screen that covers the camera with a piece of plastic.
Led, no led, who cares, plastic is blocking the lens. Move the cover away, say hi on zoom, wave, turn the camera back off, cover on, and stay with audio only, as with most meetings :)
I'm actually astonished about the same thing with the microphone-mute LED and the speaker-mute LED. Even without any attack, they sometimes malfunction. None of them seem remotely hardwired on my ThinkPad Z13.
"Add an LED next to the camera, our customers demand it!"
"Job done boss!"
That's it. That's what happens. Nobody ever reviews anything in the general industry. It's extremely rare for anyone to raise a stink internally about anything like this, and if they do, they get shouted down as "That's more expensive" even if it is in every way cheaper, or "We'll have to repeat this work! Are you saying Bob's work was a waste of time and money!?" [1]
[1] Verbatim, shouted responses I've received for making similar comments about fundamentally Wrong things being done with a capital W.
Lawyers review this after the fact. I expect one to start a class action: they will make millions, and everyone else who has this laptop will get $1. The real point is that the millions put every other company on notice that these mistakes hurt the bottom line, so the industry starts to review these things. As long as it doesn't hurt, they won't review.
I feel really dirty calling lawyers the good guy here, but ...
The exact words matter. If they call it an LED, they are maybe fine. If they call it a camera-on LED, they are sunk. Even if they just call it an LED, the implication that it indicates the camera being on is an argument the courts will not toss out, though how they rule is not as clear.
Enterprise organizations want to be able to watch their employees without them knowing.
Other organizations, like law enforcement, are also ambivalent about this.
The easy solution, of course, is a folded business card or piece of tape. But tbh I'm not surprised they didn't implement that approach, and likely deliberately.
Yeah, my understanding is that is how the light on MacBooks works, but I'm not sure about any other makes/models. Obviously, if this is possible that Thinkpad model doesn't do that.
> I thought the whole point of these camera LEDs was to have them wired to/through the power to the camera, so they are always on when the camera is getting power, no matter what.
This definitely happened on Mac in the past too; then they went into damage-control mode. Not only did Apple have access to turn off the LED while the camera was filming, but there was also a "tiny" company no one had ever heard of that happened to have the keys allowing it to trigger the LED off too. Well, "tiny company" / NSA, cough cough, maybe.
After that they started saying, as someone commented, that it requires a firmware update to turn the LED off.
My laptop has a sticker on its camera since forever and if I'm not mistaken there's a famous picture of the Zuck where he does the same.
I've got bridges to sell to those who believe that the LED has to be on for the camera to be recording.
I can see why some people might be concerned about the camera, but I'm far more concerned by the microphone. There's far more sensitive and actionable information that can be gathered from me that way! I'm glad that macOS started putting a light in the menubar when the microphone is in use, but I'd prefer to have unhackable hardware for that instead.
I believe it is possible to turn a speaker into a microphone. Found a paper which claims to do just that[0]. So, there is no safety anywhere?
SPEAKE(a)R: Turn Speakers to Microphones for Fun and Profit
It is possible to manipulate the headphones (or earphones) connected to a computer, silently turning them into a pair of eavesdropping microphones - with software alone. The same is also true for some types of loudspeakers. This paper focuses on this threat in a cyber-security context. We present SPEAKE(a)R, a software that can covertly turn the headphones connected to a PC into a microphone. We present technical background and explain why most of PCs and laptops are susceptible to this type of attack. We examine an attack scenario in which malware can use a computer as an eavesdropping device, even when a microphone is not present, muted, taped, or turned off. We measure the signal quality and the effective distance, and survey the defensive countermeasures.
Even where it works, speakers are much worse microphones than dedicated microphones, so the amount of data that can be gathered is low. Why bother when you probably have a microphone on the same PC that can capture far more sound?
I think there was a long period where a proper PC would frequently have only the cheap stereo speakers which are small enough to far outperform raw microphone leads. But I'm not sure this works that well in >=HDMI even if some monitor speakers might otherwise be ideal.
Despite this being a 2016 paper, it's worth noting that this is true in general and has been common(ish) knowledge among electrical engineers for decades. Highschoolers and undergrads in electrical engineering classes often discover this independently.
What's notable about this paper is only that they demonstrate it as a practical attack, rather than just a neat fun fact of audio engineering.
As a fun fact, an LED can also be used as a photometer. (You can verify this with just a multimeter, an LED, and a light source.) But I doubt there's any practical attack using a monitor as a photosensor.
Yes! LEDs as photometers is something that you don't really see around much anymore, but it is really cool. Even an LED matrix can be used as a self-illuminating proximity sensor with the right setup.
I recall in the early or mid 2000s using some cheap earbuds plugged into the microphone port of my family computer as a pair of microphones, in lieu of having a real microphone or the money for one. Then I used Audacity to turn the terrible recording into a passable sound effect for the video games I was making.
Not knowing much about how soundcards work, I imagine it would be feasible to flash some soundcards with custom firmware to use the speaker port for input without the user knowing.
You will still see DJs do this in NYC! Old school flavor. You can also see Skepta rapping into a pair in the music video for That's Not Me: https://www.youtube.com/watch?v=_xQKWnvtg6c
I've seen some theatrical DJs bring a cheap pair, snap them in half, and then use them like a "lollipop." Crowd eats it up. Even older school: using a telephone handset: https://imgur.com/a/1fUghXY
Yup it's wild to me how much anxiety there is about cameras while no mind is given to microphones. Conversations are much more privileged than potentially seeing me in my underwear.
That said the most sensitive information is what we already willingly transmit: search queries, interactions, etc. We feed these systems with so much data that they arguably learn things about us that we're not even consciously aware of.
Covering your camera with tape seems like a totally backwards assessment of privacy risk.
I’m just going to assume you’re a man, and don’t generally worry about things like revenge porn. Because that is a bigger concern to me than you, it seems. Sure, I don’t want my sound to be recorded either, but that’s why I put a cover on the webcam AND turn off the physical switch on my (external) microphone. They are both easy things to do.
> Yup it's wild to me how much anxiety there is about cameras while no mind is given to microphones. Conversations are much more privileged than potentially seeing me in my underwear.
It depends on the person, I don't think you could gain much from me? I don't say credit card numbers out loud, I don't talk about hypothetical crimes out loud. I don't say my wallet seed phrases out loud. I also don't type in my passwords. Yes you could probably find out what restaurant I'm ordering delivery for, but other than that I suppose my conversations are really boring.
The cost of feeding your entire year's speech to an LLM will be ~$0.50/person. I'm sure that, summarized and searchable, your conversations will be very, very valuable.
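Rough arithmetic behind a figure like that (every number below is a loose assumption, not a measurement):

```python
words_per_day = 10_000        # assumed average spoken words per person
tokens_per_word = 1.3         # typical English tokenization ratio
days = 365
price_per_m_tokens = 0.10     # assumed $/1M input tokens, cheap-model tier

tokens = words_per_day * tokens_per_word * days
cost = tokens / 1e6 * price_per_m_tokens
print(round(cost, 2))  # 0.47 dollars per person-year of speech
```

Transcription cost would dominate in practice, but the order of magnitude holds: processing a year of one person's speech is pocket change.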
The microphone also can't be covered with a $1 plastic camera cover off Amazon. It's so easy to solve the camera issue if you care about it, but there's really nothing you can do about the mic.
You can do it even cheaper with some painter's tape!
For the mic, perhaps you could disable it by plugging in an unconnected trrs plug into the audio jack. I'm not sure how low level the switching of the microphone source is when you do this, so maybe it's not a good method.
I tried that with some Sugru on an old phone (Samsung S10e) and it does a really good job of blocking the microphone.
If you have a case on your phone, it's a lot less destructive too, since you can just stuff the Sugru into the microphone hole in the case. The case I was using was soft rubber, so it was easy enough to pop out the corner of the case to be able to use the microphone for a call.
That wasn't my daily phone at the time though, so I'm not sure how well it would work in reality. I could see myself forgetting to pop out the case when I get a call, and the other person just hanging up before I realised what was going on.
It also doesn't work on every phone. I tried the same thing on a Pixel 5, but blocking the mic hole did nothing. That phone uses an under-screen speaker, so maybe there is something similar going on with the mic.
Plus one-ing this - I think the external monitor may be the kicker to keeping the mic active. This drives me up the wall when Google Meet decides to just default to the closed Macbook next to me instead of my already connected Air Pods when joining work meetings.
Are you sure it’s the MacBook (T2 or Arm) mic? I imagine you’d sound super muffled if you were trying to use it while closed anyway, so I can’t imagine it’s very usable to yourself?
I just tested this with Voice Memo and can confirm it works at least in that scenario. The recording didn't stop, the mic was just disconnected then reconnected when lid was opened. Using Amphetamine w/ script installed on M1.
I'm not sure if an attacker could get some additional sensitive information from me with access to the microphone or the camera, if they already have full access to my PC (files, screen captures, keylogger). Most things they would be interested in is already there.
Also, on Qubes OS, everything runs in VMs and you choose explicitly which one has access to the microphone and camera (none by default). The admin VM has no network.
If you're half-serious about this sort of opsec, you already have bluetooth disabled. Ideally your hardware wouldn't have support for it at all. Same for wifi.
M2 and newer MacBooks have an IMU on-board, which is just a funny way of spelling microphone. Admittedly a very low quality one; I'm not sure if you could pick up understandable speech at the 1.6kHz sample rate Bosch's IMUs seem to support.
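A quick Nyquist sanity check on that sample rate (the 1.6 kHz figure is from the comment above; the speech bands are generic numbers):

```python
def nyquist_hz(sample_rate_hz):
    """Highest frequency recoverable without aliasing."""
    return sample_rate_hz / 2

fs = 1600.0                # assumed IMU output data rate
print(nyquist_hz(fs))      # 800.0 Hz
# Speech fundamentals (~85-255 Hz) fit under that limit, but most of
# the consonant energy that makes speech intelligible sits above ~2 kHz,
# so at best you'd get a muffled, low-bandwidth signal.
```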
> M2 and newer MacBooks have an IMU on-board, which is just a funny way of spelling microphone. Admittedly a very low quality one; I'm not sure if you could pick up understandable speech at the 1.6kHz sample rate Bosch's IMUs seem to support.
Are there examples of using IMUs to get audio data you could point to? A quick search didn't reveal anything.
Going into full paranoid mode, I wonder if some other sensors / components can be used as a makeshift microphone. For instance, a sufficiently accurate accelerometer can pick up vibrations, right? Maybe even the laser in a CD drive? Anything else?
"We also explore how to leverage the rolling shutter in regular consumer cameras to recover audio from standard frame-rate videos, and use the spatial resolution of our method to visualize how sound-related vibrations vary over an object’s surface, which we can use to recover the vibration modes of an object."
A condenser microphone is just a capacitor. Your computer is full of them.
They produce a very low-level signal and generally need a pre-amp just to get it out of the microphone. Conceptually, though, they are there, so maybe someone could get it to work.
Well, unlike a camera, it doesn't need to be visible to work. Seriously though, there's no technological and almost no economic barrier preventing embedding a mic into every wireless communication chip.
Record, produce transcript, look for keywords, alert the puppeteer when something interesting is picked up - trade secrets, pre-shared keys, defense sector intelligence, etc.
Only works if there's labeled data from your prior keystrokes as training data. Unless there's some uniform manufacturing defect per key in a widely available keyboard like the MacBook Air's.
macOS is a proprietary binary blob, remotely controlled by Apple. So the light in the menu bar is not a reliable indicator of anything. There is no privacy on macOS, nor any other proprietary system. You can never be 100% sure what the system is doing right now; it could be doing anything it is capable of. Apple puts a lot of money into "teaching people" otherwise, but that is marketing, not truth.
> There is no privacy on macOS, nor any other proprietary system.
Nor is there on any free system for which you didn't make every hardware component yourself, as well as audit the executable of the compiler with which you compiled every executable. (You did self-compile everything, hopefully?)
> Nor is there on any free system for which you didn't make every hardware component yourself, as well as audit the executable of the compiler with which you compiled every executable.
If the components follow standards and have multiple independent implementations, you can be reasonably confident it's not backdoored in ways that would require cooperation across the stack. At least you raise the cost bar a lot. Whereas for a vertically integrated system, made by a company headquartered in a jurisdiction with a national security law that permits them to force companies to secretly compromise themselves, the cost of compromise is so low that it would be crazy to think it hasn't been done.
The root of all trust is eventually some human, or group of humans. See "Reflections on Trusting Trust." At least so far, Apple has convinced me that they are both willing and competent enough to maintain that trust.
Myself, I stopped trusting Apple. There are now too many dark patterns in their software, especially once one stops using their services. And, DRM was re-instantiated, when iTunes started streaming as Apple Music. On top of that, their lies, such as those about the Butterfly keyboards being fixed, cost me a fortune. They fuck up the keyboard design, and then they buy the computer back for 40% of its original price, due to a microscopic scratch nobody else could see. And that happened twice to me. They put a lot of money into advertising themselves as being ethical, but that is only marketing. These, of course, are my personal opinions.
> DRM was re-instantiated, when iTunes started streaming as Apple Music
Purchased music is DRM free. Streaming music was never DRM free, since you arguably do not "own" music that you have not purchased. Though I'm sure record labels would love if they could get DRM back on purchased music again.
But this is a pretty extremist take. Just because a company doesn't push source code and you can't deterministically have 100% certainty, doesn't mean you can't make any assertions about the software.
To refuse to make any claims about software without source is as principled as it is lazy.
Imagine an engineer brought to a worksite without blueprints: can they do no work at all? OK, good for you, but there are engineers who can.
Yes, I think all devices packed with sensors that live in our homes should be transparent in what they do; that is, their code should be available for everyone to see. And yes, it is an extremist take, given where we ended up today.
> There is no privacy on macOS, nor any other proprietary system.
Which is to say, every system in actual widespread use. All such CPUs, GPUs, storage devices, displays, etc. run closed microcode and firmware. It'd be funny if it wasn't so profoundly sad.
And even if they didn't, the silicon design is again, closed. And even if it wasn't closed, it's some fab out somewhere that manufactures it into a product for you. What are you gonna do, buy an electron microscope, etch/blast it layer by layer, and inspect it all the way through? You'll have nothing by the end. The synchrotron option isn't exactly compelling either.
Yes, ultimately, I want everything to be open. This is not a bag of rice. These are devices packed with sensors, in our homes. As for inspection, I do not have a problem trusting others. I just do not trust big corporations with remotely controlled binary blobs, no matter how much money they put into the safety and security ads. This is a personal opinion, of course.
Trusting someone doing the right thing when you purchase is different from trusting them not tampering things remotely in the future.
Companies can change management, and humans can change their minds.
The time factor is important
Hardware can be and is implemented such that it changes behavior over time too, or have undisclosed remote capabilities. There are also fun features where various fuses blow internally if you do specific things the vendor doesn't fancy.
There sure is a difference in threat model, but I don't think the person I was replying to appreciates that, which is kind of what triggered my reply.
Why do you think my stance is internally inconsistent?
For example, I completely trust Emacs maintainers, as I have yet to see any malice or dark patterns coming from them. The same applies to other free and open source software I use on a daily basis. These projects respect my privacy, have nothing to hide, and I have no problem trusting them.
On the other hand, I see more and more dark patterns coming from Apple, say when signed out of their cloud services. They pour millions into their privacy ads, but I do not trust them to act ethically, especially when money is on the table.
Thinking about it, I might have misunderstood what you wrote a bit. What I read was that you trust people, but then you also don't. That's not really a fair reading of what you wrote.
That being said, I have seen "patterns" with open source software as well, so I'm hesitant to agree on trusting it. But that's a different problem.
I also know how little hardware, microcode and firmware can be trusted, so that doesn't help either.
Thank you for the clarification. I certainly could have worded my comment better. I agree with you that we should never trust open-source software blindly. That said, we can at least audit it, along with every new patch, which is impossible with binary blobs. That is why I personally think open source should be preferred, for free and non-free software alike.
For something as compressible as voice, I do not know how you would feel confident that data was not slipping through. Edge transcription models (eg Whisper) are continuing to get better, so it would be possible for malware to send a single bit if a user says a trigger word.
Network traffic monitoring is routinely done at enterprises. It's usually part-automated using the typical approaches (rules and AI), and part-manual (via a dedicated SOC team).
There are actual compromises caught this way too, it's not (entirely) just for show. A high-profile example would be Kaspersky catching a sophisticated data exfiltration campaign at their own headquarters: https://www.youtube.com/watch?v=1f6YyH62jFE
So it is definitely possible, just maybe not how you imagine it being done.
I do believe that it sometimes works, but it's effectively like missile defense: Immensely more expensive for the defender than for the attacker.
If the attacker has little to lose (e.g. because they're anonymous, doing this massively against many unsuspecting users etc.), the chance of them eventually succeeding is almost certain.
On a ThinkPad X1 Carbon Gen 8, it's easily possible to record video with the webcam LED off. I have not verified newer generations of the X1 Carbon.
Lenovo put a little physical switch—they call it "ThinkShutter"—that serves to physically obstruct the webcam lens to prevent recording. It's supposed to have only two positions: lens obstructed or not. But if the user accidentally slides it halfway, you can still record video with the lens unobstructed but somehow the webcam LED turns off. It's because the ThinkShutter actually moves 2 pieces of plastic: 1 to cover the lens, 1 to cover the LED. But the piece covering the LED blocks it first, before the other piece of plastic blocks the lens. I discovered this accidentally yesterday while toying with a X1 Carbon... I am reporting it to Lenovo.
They fail to develop a reliable webcam indicator, and patch that with some half-assed attempt at physical view obstruction. The whole approach is a demonstration of bad engineering and unreliability. And that's just the part that became public.
I think that it's easier to compare the shutter to airplane windows.
The windows are there just to make the humans inside more comfortable, similar to how many people would be more comfortable without a camera pointed at them.
Flashing firmware is a big hill to climb for bad guys in most people's worlds.
I suppose covering the LED is a less expensive way to provide the same user experience as hooking an electrical switch to the ThinkShutter to electrically turn off the LED when it's in the "blocked" position.
The malware posts comments like OP to lots of hacker oriented message boards, prompting all curiously minded ThinkPad X1 Carbon Gen 8 owners to verify that the comment tells the truth by trying it for themselves.
Arguably a much, much bigger problem are the (many) microphones of modern devices.
These usually get neither an LED nor a switch, and unlike cameras can't easily be covered, nor pointed away from potentially sensitive topics/subjects.
And having a microphone in the same chassis as the keyboard would make creating a keylogger easier. A microphone in the same room as the keyboard can be made into a keylogger[1].
At the same time, we're at a point where synthesizing your voice is getting more trivial every day; you need only a few seconds of it and you can be made to say whatever someone wants.
Sure, but that doesn’t mean they learn everything I said: Passwords, personal details etc.
Also, getting a voice sample in the first place gets significantly easier that way: Not everybody publishes video or audio recordings of themselves online.
Which reminds me, to strengthen your point: it doesn't achieve 100% keystroke recognition, but there is work[1] on keylogging via audio, and 93% accuracy via Zoom-quality audio streams is concerning enough for me.
Lots of ThinkPads have a «Microphone is muted» LED. Not exactly what's requested (and it's bound to a software mute/unmute shortcut), but it's better than nothing regarding the state of the machine being observable at a quick glance.
That one seems to be software controlled. I'm fairly sure I remember having the mic working with the mute LED lit, which was confusing. That was on a x1 carbon gen9.
One can do whatever to one's computer, it doesn't matter if one is still carrying a microphone in one's pocket the whole day and sleeping next to it too.
For example, I'd not be happy having my voice auto-transcribed by some malware as I authenticate to my bank providing my SSN etc (which as an authentication method is of course horribly insecure, but that's a different discussion).
Of course, this will vary from person to person, but as mentioned above, just being able to mechanically cover a camera when required makes it less of an issue for me.
>>If someone drains my accounts, I'm definitely screwed.
You ring your bank and it's reversed almost instantly. As for your photos on the internet, you have no way of doing anything about them; they are there forever.
Not to downplay the ramifications, especially for a young person, but the leak of photos is a passing thing and it's quickly forgotten. Afterwards you're pretty immune; what are they going to do, leak the photos again?
A young Danish woman had nude photos leaked by an old boyfriend. She had her friend take better pictures and posted those herself so now no one can find the original photos. Not suggesting that as a solution, but I thought it was a pretty fun response.
If an attacker is at the point where they can turn on your microphone, or video, it doesn't really matter anymore, it's been game over for a long time.
This never really registered with me until a former colleague commented on the nonsense of people putting tape over their cameras. If an attacker has access to your camera or microphone, then they have access to pretty much everything else. The difference in damage is negligible; it's already total for most.
If people are truly concerned about the camera in their laptop, then keep the computer in a dedicated room and shut it down when you're done (or close the lid, if it's a laptop).
Sure, it's kinda dumb that the LED is software controlled and that there's no physical switch for turning off the microphone, but even having those things done correctly doesn't negate the amount of damage someone could do when they have that kind of access to your devices.
> If an attacker is at the point where they can turn on your microphone, or video, it doesn't really matter anymore, it's been game over for a long time.
This is obviously incorrect.
There is lots of software that can get access to your camera/microphone but not have access to anything else, like browser-based applications. And on Mac even locally installed applications are limited; getting access to user data directories requires a separate permission grant from webcam access.
You might also simply have nothing incriminating on your machine.
After GCHQ was discovered doing this back in 2014 with their 'Optic Nerve' program[0], I have tried to avoid computers with integrated webcams for use as my personal devices (exceptions are mobile devices).
An exception to that rule is if they have hardware switches for turning off the power supply to the camera and microphone.
Currently, I am very happy with my Framework, where the LED is hardwired into the power supplied to the camera[1].
I assumed that most if not all of these webcam LEDs are wired in series with the power to the camera itself. Which then makes it impossible to disable them. Who designs this LED to be software addressable?
Assuming Hanlon's razor, it's a Chesterton's fence situation: you just see an LED that indicates the camera is on. Assuming they ask the question at all, they think it's just there to remind you you're still streaming/in a meeting. Then someone asks any of the following questions:
Can we use it to indicate additional information?
Can we make it standard with the other LEDs?
Can we dim it so it's more pleasant to use at night or make it a customisable colour?
I'm sure plenty of other questions take you down the same path, and you've just destroyed one of the LED's most useful functions.
Hanlon's Razor: Don't attribute to malice what can be attributed to stupidity
Chesterton's Fence (worth googling, as it's a nice little parable, or it might be a derivation of the parable, can't quite remember): you can boil it down to: if you don't know what something does, assume it serves a purpose until you've figured out what purpose it used to serve. In this case I'm implying these people are playing the part of wanting to change the fence without knowing its purpose.
In series with the power to the camera would be odd. You would be passing the same amount of current through both the camera and the LED. Unless you meant in parallel, which still leaves the other issue that the camera is likely always powered even when not in use, so the LED would always be on.
I'm not an EE by trade, but I personally wouldn't want to put a CCD in series with an LED with god-knows-what Vf tolerances. Then again, I'd bet that nearly all laptop webcams come as off-the-shelf modules with their own internal regulators for the CCD anyway. So maybe it wouldn't matter.
I'll bet it went something like this: As originally specified, the user need was "LED privacy indicator for the webcam." Product management turns that into two requirements:
1) LED next to webcam.
2) LED turns on and off when webcam turns on and off.
Requirement 1 gets handed to the EEs, and requirement 2 gets handed to the firmware engineers. By the time a firmware engineer gets assigned the job of making the LED turn on and off, the hardware designers are already 1 or 2 board spins in. If the firmware engineer suggested that we revise the board to better fit the intention of the user needs, one of two things will happen:
1) They'll get laughed out of the room for suggesting the EEs and manufacturing teams go through another cycle to change something so trivial.
2) They'll get berated by management because it's "not the engineers' place to make decisions about product requirements."
Of course this is all spitballing. I've definitely never been given a requirement that obviously should have been a hardware requirement. I've definitely never brought up concerns about the need to implement certain privacy and security-critical features in hardware, then been criticized for that suggestion. And I've definitely never, ever written code that existed for the sole purpose of papering over bad product-level decision making.
I'd just add a resistor and stick it + the LED in parallel with the camera module. If it's a white LED and a 1.8V supply, you might not even need the resistor (you should probably still put a 0 ohm link in there though, just in case).
Yes, and it would die rather quickly if it were a typical low-power 2 mA LED. Camera sensors can consume quite a bit of power, certainly more than 6 mW at 3.3 V when active (easily 200-500 mW). Even if it didn't die, it would be irritating, because the light intensity would vary with the power consumption of the sensor module.
Nobody sane would hopefully design it this way. :)
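A rough sketch of the numbers behind that objection (the supply voltage, LED ratings, and camera draw are illustrative assumptions, not measurements):

```python
# Why a series indicator LED fails, and sizing the parallel alternative.
V_SUPPLY = 3.3     # V, assumed camera rail
LED_VF = 1.8       # V, assumed LED forward voltage
LED_IF = 0.002     # A, a 2 mA low-power indicator LED

# Series wiring: the LED carries the full camera current.
cam_power = 0.35                      # W, mid-range of the 200-500 mW above
cam_current = cam_power / V_SUPPLY    # A flowing through a series LED
print(f"Series: {cam_current * 1000:.0f} mA through a 2 mA LED")

# Parallel wiring: a ballast resistor sets the LED current independently
# of whatever the sensor module draws.
r_ballast = (V_SUPPLY - LED_VF) / LED_IF
print(f"Parallel: roughly a {r_ballast:.0f} ohm ballast resistor")
```

With these assumed figures the series LED would see about fifty times its rated current, while the parallel arrangement keeps it at a steady 2 mA regardless of what the sensor does.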
Production of the ThinkPad X230 stopped 10 years ago in 2014.
Would be more interesting to read something about a RECENT model.
In late 2014 was the last big webcam vulnerability "hype" I remember [1], which led to a wave of media attention, webcam covers, vendor statements that LED-control is / will be hard-wired etc.
I'm more interested in how this big attention impacted future laptop designs (like my cheap HP here, which has a built-in camera cover).
The X230 is still relevant for those who want a ThinkPad that supports Libreboot (an alternative firmware without proprietary components). I personally found this demonstration interesting; users of these devices often believe they're at risk of targeted attacks.
Just tried to programmatically take a picture on my MacBook Pro 2012. Managed to take a picture in under a second. The LED flashes briefly and you could easily miss it.
Would be good to keep that LED on well after the camera switches off. (Not sure what the minimum would be without causing inconvenience, but how about 15 minutes? Long enough to teach users to worry about their privacy, and perhaps to take breaks between video calls!) Just a thought.
I like the thought, but if it becomes an "oh, that light's always on, just ignore it" kind of experience, that might train people to think it is not an important signal.
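A minimum-on-time like this is cheap to get in hardware with a simple RC hold circuit on the LED driver. A rough sizing sketch (all component values and the turn-off threshold are assumptions):

```python
import math

# A capacitor charged to the rail keeps the LED driver on after camera
# power drops; it then discharges as V(t) = V0 * exp(-t / (R * C)).
V0 = 3.3       # V, rail voltage while the camera is powered
V_OFF = 1.8    # V, assumed threshold below which the LED goes dark
R = 100e3      # ohm, bleed resistor
C = 47e-6      # F, hold capacitor

tau = R * C
t_hold = tau * math.log(V0 / V_OFF)   # solve V(t_hold) = V_OFF
print(f"LED stays lit for about {t_hold:.1f} s after power-off")
```

With these values the LED holds for roughly three seconds; scaling R or C up gets you whatever minimum you think users need to notice.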
For what it's worth, my Lenovo laptop has a manual shutter slider on the side that physically covers the camera (and it must also do something driver-wise, because Windows considers it unplugged). It's so easy and convenient that I always use it to turn off the camera.
Many Lenovo laptops have that, even their gaming line (where it's actually even better and more convenient, thanks to the larger size available).
Doesn't solve the problem this article talks about, but if that's something that worries you, I would still trust it more than most (and it's a lot less weird-looking than taping your camera).
Taping your camera doesn't necessarily look like anything. I have a small piece of electrical tape over my webcam, and it blends in so perfectly with the background that other people probably wouldn't see it unless they were specifically looking for it.
(I personally just leave the tape there all the time, because if I need to videoconference, I’d rather connect my mirrorless camera with a much better lens and sexy bokeh.)
This exploit picks up audio, too. The shutter helps make sure you're not accidentally sending nudes to North Korea's hacking teams, but audio can still be hijacked unfortunately.
The Electronic Frontier Foundation sells a set of stickers exactly for this purpose [0]. I have a set and it works reasonably well. And it supports a good cause.
Note that someone somewhere made a decision not to hardwire the LED to the camera enable line. To me, this is far more of a scandal than the fact that another person decided to exploit it.
In their new upcoming webcam module for the Framework, they would still cut power to the sensor, but not to the USB interface, due to usability issues (e.g. in my experience Google Meet can detect the camera after the privacy switch is turned on, but Zoom and Microsoft Teams do not).
> Cameras and microphones and write enable must have physical switches, not software ones. When will people learn?
Your preferences are not everybody's. Personally, I'd be totally fine with a camera and microphone LED that is guaranteed to activate whenever there is power/signal flowing from either.
> Me, I unplug the camera and mike when not in use.
That's a bit hard to do on a laptop that has both built in.
> That's a bit hard to do on a laptop that has both built in.
The Framework laptops have two tiny switches near the camera that physically turn off the mic and camera, and it presumably wouldn't be difficult for other manufacturers to follow suit if enough people cared.
I used to design airplane parts and systems. A guarantee isn't worth squat. Being able to positively verify it is what works.
You're right that I don't use a laptop for videoconferencing. I wouldn't use the builtin mike and camera anyway, as a 5 cent microphone can make it hard for the other party to understand you. I use a semi pro mike. If you're in business, I recommend such a setup.
Heavy camera and mic users might want to be careful with the unplug-when-not-in-use approach, in case the camera/mic, or the computer/hub it connects to, has a connector that is only good for the minimum number of mating cycles required by the USB spec.
For type A connectors that is only 1500 cycles. Mini USB connectors raise that to 5000 cycles. Micro USB and USB-C raise it to 10000.
For a type A just plugging and unplugging twice a day every workday would reach 1500 cycles in a little over 3 years.
What I do now for things that I'm going to plug/unplug a lot where the thing is expensive enough that I don't want to risk the connector wearing out before I'm ready to replace the thing is use a short extension cable or an inexpensive hub. The extension cable or hub can be relatively permanently connected and the thing that is frequently plugged/unplugged connects to that.
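The type A arithmetic above generalizes to the other connector types (the workday count is an assumption):

```python
# Years until the spec's minimum mating-cycle rating is reached,
# at two plug/unplug mating cycles per workday.
min_cycles = {"Type-A": 1500, "Mini-B": 5000, "Micro-B": 10000, "Type-C": 10000}

cycles_per_day = 2
workdays_per_year = 230   # assumed

for name, rating in min_cycles.items():
    years = rating / (cycles_per_day * workdays_per_year)
    print(f"{name}: ~{years:.1f} years at the minimum rating")
```

Type A comes out at a little over 3 years, matching the estimate above, while Micro-B and Type-C clear two decades even at the spec floor.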
For everything except Micro-B, an average connector lasts far more than those minimum ratings (not so for Micro-B, as Micro-B has the socket as the wear item rather than the plug).
Some laptops (I've seen it on a lot of ThinkPads) include a physical cover that can be slid over the webcam when you aren't using it. While that doesn't cut power to the camera or mic, I figure it would be pretty straightforward for manufacturers to add contacts to the camera cover to use it as a power kill switch instead of just a privacy cover.
I think that's pretty standard outside the Apple ecosystem. HP seem to have this on most (if not all) the laptops I've seen at $DAY_JOB which uses HP for all laptops.
> Cameras and microphones and write enable must have physical switches, not software ones. When will people learn?
I feel like people were pleading for this when people were getting ratted and began taping over their cameras, and the tiny number of laptop manufacturers just ignored what would be a cheap easy change. Eventually, people just accepted that it must be impossible to install a switch. I couldn't ever think of any motivation for a lack of a switch other than government pressure, so I've always assumed that the cameras and microphones are backdoored.
I don't get how "some tape" became the standard solution for these thousand dollar devices.
I remember the repair book "How to Keep Your Volkswagen Alive: A Manual of Step-by-Step Procedures for the Compleat Idiot". On some Beetles the battery light would flicker dimly, though nothing seemed to be wrong. The recommended fix was to put enough tape over it to block the flicker, but not the light when fully on.
Black electrical tape was also the solution for the blinking 12:00 on consumer VCRs.
I am not a hardware engineer or anything of the sort. My laptop has a slide shutter over the webcam, but this obviously does nothing about the microphone. How difficult/error-prone would it be to route power to the microphone and camera over individual wires/traces, with a physical switch that breaks the power or data connection? Surely these are very low voltage, so the switch could be like the iPhone mute switch?
My Framework 13 has this - 2 physical switches next to the camera.
I would assume (but haven't checked), that they physically disconnect the camera/mic.
The Framework laptop does this for both microphone and webcam, and there are privacy focused Android phones which also have a microphone switch which cuts the power to the microphone. It's definitely possible.
This is why I like a self-built PC over laptops. Now I'm sure there's still some way to spy on me via a PC with no built-in camera or microphone, but I bet it's more difficult.
I do have a laptop, and it has a physical cover I can slide into place. Short of black Blu-Tack, I've not got a decent option for the mic though.
This is so widespread and simple that I basically don't have any respect for laptop manufacturers who refuse to add a simple webcam shutter to their designs.
What would be even better is PHYSICAL HARDWARE POWER SWITCHES for microphones, speakers, and webcams.
This ought to be a manufacturer regulation; no more ridiculousness.
Can we not require physical, electromechanical switches (like an old-fashioned light switch) for each of the following: camera, mic, cell/LTE, gps, bluetooth, wifi?
Each should have their own switch, otherwise they will group them all into one "privacy mode" switch that also includes something you basically can't live without. Like the keyboard doesn't work in privacy mode or something. Plus, I'd like to be able to leave some of these off by default, only switching them on when I want to use that feature.
I imagine a company good at design (e.g. Apple) could make these small, elegant and easy to use.
Wow, that's really awesome! I might buy one just to fool around with it.
I looked at pictures of the phone in their store, and cannot see the same switches as are shown on the page you linked to. I'll assume they're under the back cover or something. I hope at some point they'll move these to a quickly-accessible location on the outside. I'll leave them some feedback to that effect as well.
The top part of a sticky note, found in most offices, works great and can be taken off and put back on. Always assume that the company-provided laptop is a RAT, and that voice and video recording without notice is the norm.
> Always assume that the company-provided laptop is a RAT, and that voice and video recording without notice is the norm.
I... don't? Depends on the company, but I trust that my company has no override for the hardware-based LED on my Mac, nor for the software-based microphone indicator. If they did, I would consider it highly scandalous for Apple.
Don't even understand why laptops have cameras and microphones. If you're serious about video meetings you'll want an external camera anyway.
I keep covering them up with bits of paper (because like most people, I don't trust LEDs or switches) that look ugly and invariably get blown off by a gust of wind and have to be reapplied when moving.
It just seems like at some point around 2010 some cabal decided that every device with a screen needs to have a camera facing the user and a microphone.
> Don't even understand why laptops have cameras and microphones. If you're serious about video meetings you'll want an external camera anyway.
The whole point of a laptop is to be able to move around and travel with it.
FWIW you can still encounter laptops without webcams (MNT reform comes to my mind) and you can also choose to disable/load/unload the kernel modules for them dynamically on linux distros and BSDs
That would be yet another gadget (with yet another tangly wire) to add to your bag. One of the best things (and one of the worst things, if you're interested in repairability and upgradability) of a laptop is that everything (other than the external power adapter) is built-in.
When I was covering my webcam on a ThinkPad some 15 years ago, my coworker was laughing at me. Until he read about the Snowden revelations. We learned that everything can be compromised. Bioses, chips, compilers, everything. And just because something should not be the case, doesn’t mean it won’t ever happen.
We should always assume that everything is possible in the digital world. And act accordingly.
How does the MacBook check the ambient light to determine screen brightness? Does it have a separate ambient light sensor buried under the screen somewhere, or is it inside the camera? (A separate sensor would have been the best thing to do, I mean; using the camera for this would require it to grab frames now and then without the LED flashing.)
Turning off camera LEDs and recording video is an old hack and old news. This is for a specific firmware and computer model and attack surface via USB to update the webcam's firmware, so I am assuming that makes it news?
EDIT: I keep a piece of black electric tape over any of my notebook's webcams.
> Turning off camera LEDs and recording video is an old hack and old news.
This.
Some of the Linux webcam drivers have had the option to specify the behavior of the LED via a parameter since way back, including turning it completely off.
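One example, assuming the old Philips `pwc` driver, which exposed LED on/off timing as a module parameter (most modern UVC webcams have hardware- or firmware-controlled LEDs and offer no such knob):

```shell
# /etc/modprobe.d/webcam-led.conf
# pwc takes leds=<on_ms>,<off_ms>; 0,0 keeps the LED permanently dark.
options pwc leds=0,0
```

This is a config fragment rather than a universal recipe: whether any equivalent exists for a given laptop depends entirely on the specific driver and camera module.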
Technology Connections made a very sarcastic but entertaining video about the "stupid" design of being able to control the camera and the LED independently.
Saw that a decade ago, and let's not talk about the mic you cannot mechanically disable.
Deliverable computer security is something which does _NOT_ exist. If somebody is telling you otherwise, they are trying to sell you something.
Less-bad scenario: you need to strip down software (including the SDK) and hardware to the bare minimum, and aim for excruciating simplicity that is still able to do a good enough job (this is currently mostly denied by the "planned obsolescence" and "overkill complexity" from Big Tech, or brain-damaged "standards/SDKs"). Maybe then, and I say maybe, you have a little chance, but usually the chip (CPU) you are using is full of backdoors (or convenient bugs) anyway.
I wish laptop webcams were designed like this: when I want to turn off my webcam I slide, with my finger, a piece of plastic that physically covers the webcam. This simultaneously turns off the LED, which is always on unless the camera is covered by the plastic.
As an extra feature, firmware could hook up to the software that uses the webcam to send a "hang up" signal when I turn off the webcam. That way I don't have to find some button in whatever software the person I'm having a call with wants to use — I just "turn off" my webcam.
For context, these laptops are well supported with open-source BIOS and Management Engine disablers. So despite their age they are still favoured by security-sensitive users (if you can mitigate the branch-prediction issues).
I'm surprised how much these X230s are still being used. People who love real keyboards love them.
Personally I didn't think Lenovo's later keyboards were too bad. The one on my T490s was wonderful. However since my work moved to the T14s series, the keyboards have become terrible. The key movement range is too low now, and the feel is crap. It's too bad because Lenovo was the last holdout which still had decent keyboards. The T14s is also bad in other ways, the body got thinner but the screen got a lot thicker and heavier so it's actually worse to carry than the T490s.
Anyway, on topic: I'm not surprised these camera controller firmwares can be hacked. It's very specific to the controller, though.
However, most people I know that care about privacy close the cam door anyway, or put a sticker over it. I use the SpyFy. https://spy-fy.com/collections/webcam-covers . Good luck hacking that.
What worries me a lot more is the microphone. It doesn't have a light, and it's really hard to block. A simple sticker won't do much. These things are super sensitive. I can literally hear myself talking in the other room with the right boost settings.
100% agree with you on the keyboards. I had an X220, then a T490, and now I'm on an E16. The keyboard has gotten noticeably worse every time I've upgraded, sacrificing key travel and feel for flatness. It's such a shame - I would happily take a little extra body thickness for a nicer feeling keyboard.
Yeah there are at least two X230's powered on in my home as we speak. To me that was the last good keyboard. I got an X250 and it's not even close. Same with T480… really disappointing. Using the X230 (and X220) is always a pleasure with the solid-feeling keyboard that makes it easy to type accurately on. I mis-position my fingers on the newer keyboards and make typos on them frequently because it's just one big flat surface (or feels as such).
This is well known in the security community, when I was at Bromium around 2011 or 2012 this was shown in internal demos. And the X230 is a very old laptop, hopefully the newer ones have fixed this problem.
It's mind-boggling that this is not enforced by hardware: as other commenters have said, both by powering the LED from the same supply as the camera, and by routing that power's enable through a simple hardware pulse-stretcher, which would prevent an attacker from enabling the camera at such a low duty cycle that its flickering would be invisible to the user.
Ideally, the same should also be true of the microphone pre-amp, with its own LED separate from the camera one.
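The pulse-stretcher idea can be sketched in a few lines of Python; this is a simulation only (timings and the 500 ms minimum are illustrative, not from any real design):

```python
def stretch(pulses, min_on_ms=500):
    """Simulate a one-shot pulse stretcher: any camera-enable pulse, however
    short, keeps the LED lit for at least min_on_ms after it begins.

    pulses: list of (start_ms, end_ms) camera-enable intervals, sorted by start.
    Returns the list of (start_ms, end_ms) intervals during which the LED is on.
    """
    led = []
    for start, end in pulses:
        off_at = max(end, start + min_on_ms)  # LED stays on at least min_on_ms
        if led and start <= led[-1][1]:
            # New pulse arrives while the LED is still on: extend the window
            led[-1] = (led[-1][0], max(led[-1][1], off_at))
        else:
            led.append((start, off_at))
    return led
```

For example, a 1 ms frame grab at t=0 (`stretch([(0, 1)])`) still lights the LED for the full 0 to 500 ms window, so a single stolen frame can't go unnoticed.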
Just install an NSA-B-GONE, my janky open-source modboard that adds Thinklight-controlled USB hardware switches to the webcam and microphone! Designed for the X220, but the X230 is pretty similar so I bet it would work: https://github.com/zakqwy/NSA-B-GONE
Of course, if everyone does that, attackers will just start pulsing Thinklights and seeing if anything enumerates, I suppose.
>getting software control of the webcam LED on ThinkPad X230 without physical access to the laptop.
adding an LED-control implant by flashing it over USB to the internal "8051-based" CPU, where the relevant value's location depends on the behavior of a dynamic memory allocator.
going one step further with cron tasks scheduled at irregular intervals would be interesting.
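The irregular-interval idea can be sketched with randomized jitter around a base period; the `base_s`/`jitter_s` values below are illustrative, and the actual capture step is left hypothetical:

```python
import random

def jittered_delays(base_s=3600, jitter_s=1800, n=5, rng=None):
    """Return n sleep intervals centered on base_s, each offset by a random
    jitter, so that captures don't fall into an easily detectable pattern.

    A scheduler would sleep for each delay, then trigger the (hypothetical)
    capture, instead of using a fixed cron period.
    """
    rng = rng or random.Random()
    return [base_s + rng.uniform(-jitter_s, jitter_s) for _ in range(n)]
```

A plain crontab can't express this directly, which is why a loop (or a cron job that sleeps a random amount before running) is the usual workaround.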
used to be able to do the inverse with an old TV set using an RFID controller back in the early '90s.
Camera, meet square of electrical tape. Problem solved. Remove tape as needed for video conferences and etc. Why harbour doubts when you can apply solutions?
once it can read my keyboard, .ssh files, browser cookie files... i couldn't care much about the camera. and everything you run can already do all that, including stuff you npm/cargo/mvn/go/pip/mix install. not to mention the git hooks or build scripts of that project you just downloaded the source of in vscode, which is already running all of that for your convenience right away.
The odds of being targeted don't matter in what we're talking about. The fact that it's possible is what we're talking about. That high-value targets like Zuck are well aware of that fact, and take steps to guard against it, is just icing on the cake.
Could be useful, if this were the 2000s. These days you don't even need to hack the victim. They proudly give it all up via social media. Talk to a person long enough and they will spill every detail about their life: routine, job, social life, deepest desires.
I've seen them when shopping for certain CAD-certified laptops.
There are some of these out there, from major brands (HP?). Asus seems to have more. ( https://rehack.com/reviews/best-laptops-without-webcams/ ) They tend to be workstation grade, sometimes gaming, machines at higher price points. For new laptops, see if you can customize it out on their site.
While searching for one on amazon/ebay stinks, you can find ones without webcam (doublecheck for integrated microphone status in product details too though) by looking manually for terms like "no webcam"... vendors usually don't want returns due to surprises so it will be mentioned in the product title.
The ThinkPad X1 solved this problem by having a physical camera shutter, an embedded one. Nothing beats that (though there is the problem of a potentially transparent shutter material, so this needs to be checked as well).
Another demonstration that duct tape can fix almost everything. Put a little over the webcam and presto, no more malware spying on you through webcam video. Now about something for the microphone?
And arguably if one applies duct tape all over the laptop, the laptop can no longer be used, therefore no data can be input into it, preventing that data to be then exfiltrated by malware. A truly versatile product.
"I saw something in the news, so I copied it. I put a piece of tape — I have obviously a laptop, personal laptop — I put a piece of tape over the camera. Because I saw somebody smarter than I am had a piece of tape over their camera."
i have insta360 with physical shutter. it even disables the camera as a device, which is great. i have lenovo legion notebook and it too has physical shutter.
I am so disappointed that there are camera LEDs out there that aren't hardwired to the sensor. Especially when there are biometric sensors out there that can do a crap-ton of calculation all on-device so no privacy concerns arise. I wonder if any of them are vulnerable to a firmware attack.
As others mentioned, Apple has had either good circuit design or now 'attestation' (which has other concerns, but that's more of a state-actor worry).
That said it reminds me of the fun reversal of how a decade or so ago, Windows Phone 'lost' the ability to get the hot app SnapChat, because they did not want to give apps the ability to 'detect' a screenshot command in the name of privacy. Now, We have Copilot on windows, and LinkedIn tells me when I've screenshotted a post as a notification.
Speaking as someone who owns 3 Framework laptops, I was not disappointed to find Framework mentioned no fewer than 13 times in this comment thread. Assuming all goes according to plan with the moderation on this comment, now Framework is mentioned at least 16 times!
When those cameras first came out I said that of course the LEDs could be disabled with firmware only, and it was probably a government-mandated requirement to be able to do so, and I was called a conspiracy theorist.
Well who's laughing from within a tinfoil Faraday cage now?
This approach likely affects many other laptops, as connecting the webcam over USB and allowing its firmware to be reflashed is a common design pattern across laptop manufacturers.
WTF? Am I the only one who thinks webcams shouldn't even need accessible firmware in the first place? I have one which is over two decades old and it has worked perfectly fine (albeit at a low resolution) since the day it was bought, with no need for any firmware updates.
Fantastic, another nothingburger proof of concept for people to point to when arguing in favor of more manufacturer-lockdown-based "security". It's not a coincidence that this demonstration is on one of the last generations of laptops that can actually be secured against Intel themselves.
In reality, remote code execution should be considered game over, end of story. Trying to obfuscate to hide that fact just ends up creating more unknown places for malware to persistently hide. The same knowledge that allows one to write new camera firmware also allows one to verify it on every boot. Meanwhile the camera model that hasn't been publicly documented is an ever-present black box.
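The "verify it on every boot" step could be as simple as hashing a dumped firmware image against a known-good value. The path and hash below are placeholders, and actually dumping the firmware is device-specific; this only sketches the comparison:

```python
import hashlib

def firmware_matches(dump_path, known_good_sha256):
    """Compare the SHA-256 of a dumped firmware image against a trusted hash,
    reading in chunks so large images don't need to fit in memory."""
    h = hashlib.sha256()
    with open(dump_path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() == known_good_sha256
```

A boot script would call this with a hash recorded when the device was known to be clean, and alert on any mismatch.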
> for people to point to when arguing in favor of more manufacturer-lockdown-based "security"
I don't see why this is the first thing you think of, when the infinitely more obvious thing to point out is that the indicator LED should be impossible to address and be connected in series with the power pin of the camera instead. Case in point, most other comments in this very discussion thread.
Conversely, your comment (to me) reads like you're trying to derail conversation and argue in favor of weakening device security in whatever flavor you find compelling. Very intellectually honest of you to present those ideas this way.
Sure, you can solve this one particular thing with fixed hardware [0]. The problem is that for anything just slightly more complex, a designer isn't going to opt for hardcoded logic but rather go "we have a microcontroller sitting right here, of course we're going to use it". This path ends with firmware "security" that prevents straightforwardly reading/writing these devices, which is exactly what my comment is about.
> you're trying to derail conversation and argue in favor of weakening device security
No, I'm arguing in favor of analyzing security in terms of device owners rather than manufacturers. "Security" isn't simply some singular property, but is rather in the context of a specific party [1]. It's certainly possible to build hardware that verifies running software and also doesn't privilege the manufacturer with an all-access pass. Just no manufacturers have done it, because centralizing control in their favor is easier.
[0] even this case is borderline. Your series-LED suggestion isn't likely to work because it will drop at least 1.6 V and constrain the current draw of the camera. Also, if the firmware can be reprogrammed such that it can take pictures at a very low average current draw, you haven't actually solved the problem. Alternatively, an LED in parallel with the power supply will require at least an additional resistor (if not a diode and a capacitor), which costs real money in the eyes of a design engineer working at consumer volumes.
[1] eg how the TSA that drones on about "security", while they're actually making individual travelers less secure from having to unpack and splay their belongings out, making them easy targets for immediate or later theft. They're not talking about your security, they're talking about their operation's security.
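The parallel-LED point in footnote [0] comes down to the standard series-resistor formula, R = (Vs - Vf) / I. The values below (3.3 V rail, 2.0 V forward drop, 10 mA) are illustrative, not from any real schematic:

```python
def led_resistor_ohms(v_supply, v_forward, i_led_amps):
    """Series resistor needed for an indicator LED hung off a supply rail:
    drop the excess voltage (Vs - Vf) at the chosen LED current."""
    return (v_supply - v_forward) / i_led_amps

# e.g. a 3.3 V rail driving a 2.0 V red LED at 10 mA needs about 130 ohms
```

One extra resistor per unit is a fraction of a cent, but at consumer volumes even that line item gets scrutinized, which is the footnote's point.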
What if, for some reason, the webcam's LED failed (e.g., it physically stopped working)? In case of a malware attack, you might not even realize that your webcam is being used, even if the malware can't control the LED. In my opinion, the most effective way to protect yourself against webcam hacking is to physically cover the camera with a sticker or cover when it's not in use. Simply put, I believe physical safeguards are more reliable than software-based solutions whenever possible.