
Dear Apple: for Christmas I'd like physical, no-bullshit power shutoff switches for your camera and microphone on the MacBook Pro. Other devices too—if you can manage it, that'd be great. Sincerely, the people who put tape over their cameras, the people who don't because it's ugly and messes with closing the lid but wish they could, and the people who would be in one of those two camps if they understood the risk (so, altogether, 100% of your users).



From my understanding, Apple has sort of done what you're asking for: they've hooked up the physical wiring of the camera LED to the camera itself, so it is physically impossible to power the camera without the LED being turned on (as opposed to "turn on LED" being part of firmware logic that could be hacked).


This is a better solution overall, as it works "by default". A hardware switch relies on the user to be privacy conscious. An LED which is physically connected to the camera circuit (!) is immediately noticeable if it turns on unexpectedly.


As a layperson in this arena, I'm skeptical as to whether it's a great solution. Is it possible to turn the camera on and off very quickly? If so, a smart hacker could do exactly that, and if the owner ever notices, they would probably assume an electrical glitch rather than suspect they are being monitored.


> Is it possible to turn the camera on and off very quickly?

Not particularly. At least on my 2015 rMBP, using code that I wrote (so I know it's not doing anything extraneous), the light is on for about a quarter of a second before the first frame is returned from the camera. This is because the LED is literally showing you when the camera has power (which includes any sort of handshake with the system), not just when it's capturing frames.

Is that enough that a user who's really concentrating on the screen will nonetheless see the light come on? Not necessarily. But GP has a good point about this being a feature that doesn't rely on the user being proactive.
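(If you want to reproduce that kind of measurement, here's a rough sketch of one way to do it. It assumes Python with OpenCV installed and is not the original commenter's code; the exact delay will vary by machine and capture stack.)

    # Rough sketch: time from requesting the camera to the first delivered frame.
    # Assumes Python + OpenCV; not the commenter's original code.
    import time
    import cv2

    start = time.monotonic()
    cap = cv2.VideoCapture(0)      # opening the device powers the camera (and LED)
    ok, frame = cap.read()         # blocks until the first frame arrives (ok is False on failure)
    elapsed_ms = (time.monotonic() - start) * 1000
    cap.release()                  # releasing the device powers it back down

    print(f"first frame after ~{elapsed_ms:.0f} ms (success={ok})")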


It's a USB camera. It needs more time than a flicker to turn on and start producing frames. I don't think you could do what you describe and still have the camera work while keeping the LED dim.


The camera on Macs has actually been a PCIe peripheral for quite a few years now. But your point stands; it still takes a good second or so after the LED turns on for the camera driver to start producing frames to userspace.

How long did it take you, a self-proclaimed layperson, to come up with the idea of quickly pulsing the camera? Now, how likely do you think it is that someone who's actively trying to prevent camera shenanigans would think of this idea as well, and mitigate it by, e.g., introducing delays or latching the light on for a couple of seconds?

How likely? Who knows? It could have been an intern who implemented it, for all we know. I've seen more critical things implemented by interns at a medical device company I was previously at. Do you sincerely think Apple is more concerned with the secure operation of a camera (that people are going to put tape over anyway if they are that concerned with security) than a full-fledged (successful) medical device company is with medical devices?

Moreover, even if it wasn't an intern, how experienced do you think the engineer is at understanding human behavior in response to hacks? Many engineers I have met have difficulty conversing with other people and have even more difficulty actually understanding their behavior. I can almost guarantee you that even switching it on and off at slow rates will convince most people that there are electrical issues.

Also, do you honestly think the average electrical engineer is that well-versed in hacking paradigms? I would conjecture that software engineering is one of the leading gateways to understanding hacking, and during my electrical engineering degree, most EEs acted like writing software was a nuisance they had to endure to get through the degree. Hell, even most of the lab instructors we had from JPL looked down on software engineering and talked to bad EE students the same way a cliché high school instructor would talk to bad high school students; instead of telling them "you'd better like asking, 'do you want fries with that,'" they would say (in the same tone), "you'd better be good at writing software."

How do you even know what the budget is for the department the engineer is in? How do you know they have the budget to spend weeks on securing a camera that most security-minded people are going to put tape over anyway? How do you know it wasn't some off-the-cuff comment in a meeting, someone saying "I can implement this feature in an hour," everyone agreeing that's nice and they should do it, and the thought of security never going any further than that?

Unless you were there, you don't have the slightest clue as to how well thought out the whole thing is.


If you install Oversight, you can get persistent notification center alerts for most mic and cam activations (of course, it likely won't help if you have targeted malware that knows how to disable/uninstall Oversight) - https://objective-see.com/products/oversight.html



LED brightness is controlled by pulse-width modulation: at a low enough duty cycle, the camera LED would appear only dimly lit. A more sophisticated approach might add gaze detection and ramp the duty cycle down whenever someone is looking towards the camera/LED.
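(A back-of-the-envelope sketch of the duty-cycle arithmetic, assuming perceived brightness roughly tracks average power; that's a simplification, but it shows the idea.)

    # Back-of-the-envelope sketch: perceived LED brightness under PWM,
    # assuming it roughly tracks average power (i.e. the duty cycle).
    def brightness_fraction(on_us: float, period_us: float = 1000.0) -> float:
        """Fraction of full brightness for the given on-time per PWM period."""
        return on_us / period_us

    # e.g. 50 us on out of every 1000 us puts the LED at roughly 5% brightness
    for on_us in (10, 50, 100, 500, 1000):
        print(f"{on_us}/1000 us duty -> ~{brightness_fraction(on_us):.0%}")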


PWM reduces average power. If the LED is on the same circuit as the camera, I don't know how successful you will be at powering the camera while trying to dim the LED.


A momentary flicker would not be perceptible in a lit room.


A momentary click on a phone line was also imperceptible... until it wasn’t.

You might not even see the flicker but if you catch it in your peripheral vision often enough, or you found out someone else was caught by it or it hit the news big time, you’d suddenly become more suspicious about that momentary flash. Maybe even paranoid.


It is just like the small hacks that are possible with ANY of these “require UNRELATED user interaction” things.

Like being able to speak “” when the user clicks. Or something really short or kind of unpronounceable like “,,,,”. Apple could of course try to require the first speech to always be long enough to be unmistakably speech. But otherwise ANY user interaction is enough to enable ANY speech.

The alternative would be to have dialogs for everything: "Would you like to turn on the camera?" "Would you like to let this website use speech to text?" "Always remember my choice for this domain."

Seems like giving the user a master switch that overrides things, and letting websites detect this and complain, doesn't have many downsides but has tons of upsides.

And then of course there is browser fingerprinting. It's now really hard to turn off without breaking tons of sites that care about the width of your window (the size of your phone), your operating system, and so on.


This is not a better solution overall and there's no reason we can't have both, other than manufacturer design choices. How often are you looking directly at your camera? Even if you are, once the camera comes on unexpectedly, it's too late.

> How often are you looking directly at your camera?

On my Mac, I find the LED very noticeable when it comes on unexpectedly! It's bright and green and not part of my screen. And yes, this has actually happened to me!

> Even if you are, once the camera comes on unexpectedly, it's too late.

Nah, they saw a few frames—those are very unlikely to be useful. What's more important is knowing that it happened.

I agree we could have both, but each of these features does have a financial cost. I consider the LED significantly more important.


Doesn't really cover situations where the computer is in a person's bedroom (common with e.g. teenagers) and not powered off overnight.

I remember there was a story saying that it's possible, via software only (due to a "bug" and some "poor" hardware design), to turn on the camera without activating the LED.


It used to be that the LED was controlled by firmware. As far as I know this is fixed now but I haven't proven it myself.

There are leaked schematics of MacBooks online (that unofficial repair shops use) so if you want to investigate this I'd expect it to be a good place to start.



Published in 2013, but describes "the Apple internal iSight webcam found in previous generation Apple products including the iMac G5 and early Intel-based iMacs, MacBooks, and MacBook Pros until roughly 2008".


That's useful, but it's not as reassuring as a hardware switch where you can see it work.

(For video, anyway. I don't see any similar solution for audio.)


Two reasons why a switch is better:

1. If it does happen to light up, what would you do, turn off your computer? That's shitty.

2. What if you're AFK and aren't looking at the light?


A switch plus a notification (as opposed to modal prompt) if you try to get into video/audio calls but forget to flip it back. Just to remind you that you’re trying to enable video/mic but have the hardware switched off.


Arguably it's better to be aware of the breach than to merely prevent one sensor from leaking.


What about the mic?


FWIW... I bet you could power the camera, grab a still frame at 60 fps, shut it down, and not see the LED come on. It would be 1/60 of a second, about 16 ms, plus or minus the amount of overhead needed. Autofocus and exposure correction might make it impractical, but it's definitely not a bulletproof fix.


Apple could simply make the camera take 1 second to activate.


Slow spool up embedded devices... we’re moving backwards :)

If Google Calendar thinks it’s OK to set a 500ms animation on opacity for event edit dialog - then it doesn’t seem like a 1 sec spooling for a webcam is too much :)

How much would you bet?

Considering I have a device here that has an I2C camera and in firmware I can turn it on and off at right around 25ms, I’d consider betting quite a bit.

Instead of being snarky, how about you explain why this isn't possible? Even if it were 100 ms, almost no one would catch that.
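(To make the 25 ms figure concrete, here's a hypothetical sketch of that kind of firmware-level toggling, e.g. on a Linux board with an I2C-attached sensor. The bus number, device address, and register are made up for illustration; this says nothing about the Mac's own camera.)

    # Hypothetical: toggle an I2C-attached camera's power register from software.
    # Bus, address, and register values are invented for illustration only.
    import time
    from smbus2 import SMBus

    I2C_BUS = 1          # assumption: sensor on /dev/i2c-1
    SENSOR_ADDR = 0x3C   # assumption: made-up device address
    POWER_REG = 0x00     # assumption: made-up power-control register

    with SMBus(I2C_BUS) as bus:
        bus.write_byte_data(SENSOR_ADDR, POWER_REG, 0x01)  # power up
        time.sleep(0.025)                                  # ~25 ms on-time
        bus.write_byte_data(SENSOR_ADDR, POWER_REG, 0x00)  # power down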


I’m not being snarky, and we’re not talking about some random device you have. We’re talking about the camera on a Mac. The bet is that you can’t turn it on and off so fast as to be un-noticeable because there is a noticeable delay between that light turning on and getting an image from that camera. So I’ll gladly take the other side of that bet.

> From my understanding, Apple has done sort of what you're asking for, they've hooked up the physical wiring of the camera LED to the camera itself, so it is physically impossible to power the camera without the LED being turned on

Correct. And that was my reason for NOT covering the camera: I would be able to see if it was on due to some malware. However, I did not expect a vulnerability like Zoom's, where a simple website could trigger the webcam. Combined with using external monitors, the LED could potentially go unnoticed for a good amount of time. So I've reversed my position since then.


I've read this as well, but my MacBook senses ambient light and adjusts my backlight accordingly. Isn't this through the camera - with no LED?


The ambient light sensor is right next to the camera, I think it's usually to the left. It's a bit hard to test since they're close together. Macs have had ambient light sensors for a long time. For example, take an old iMac and put it to sleep. The power light will pulse with a period of about 2s, and the brightness will depend on the ambient light level.

Edit: Just tested on my MBP. Opened photo booth, covered the camera with my thumb, shone a bright flashlight at the point just left of the camera. The display got brighter but Photo Booth showed no changes in what the camera was seeing.


No, the ambient light sensing is done with a light sensor on the body - look for the tiny hole drilled into the chassis. It doesn't use the camera.


Remember when they did that for your battery power indicator! With a little button to trigger it.



I think I understand the risk, and I definitely would not bother with such a switch even were it available. So maybe I actually don't understand it.

What exactly is the risk? Have there been any actual cases of someone being spied on with their laptop webcam that would have been prevented by a switch? I'm only aware of cases where the webcam switch would not have helped (e.g. roommate sets up notebook to record owner naked). Even that is incredibly rare, or if not rare, almost never reported.


A quick search seems to turn up quite a few examples of webcam spying. I'd love to see actual numbers, but it doesn't seem to be "incredibly rare".

https://www.dailymail.co.uk/sciencetech/article-5228017/Hack...

https://www.dailymail.co.uk/news/article-2638874/More-90-peo...

https://globalnews.ca/news/2158281/what-you-need-to-know-abo...

https://www.telegraph.co.uk/technology/news/10131456/Hackers...

This site claims a guy made a business selling software to hack and remotely control webcams, complete with paid employees and $350,000 in income:

https://www.welivesecurity.com/2015/04/21/webcam-hacking/


OK. That has caused me to update my beliefs. I still think that there is relatively little risk -- like, you should be much more worried about being in a car accident -- but I no longer think it's on par with being struck by lightning.


Why would this be of little risk? The good thing about software is that it is automatable. That's also the bad thing.

Create malware (which, thanks to some big-company fuckups, can even be embedded in a webpage these days). Capture frames indiscriminately. Add some image recognition (from OCR to machine learning, depending on what you want to do) to flag interesting hits.

Voilà. Massive dragnet. Applications can range from simple blackmail (à la Black Mirror) to industrial espionage.
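(To give a sense of how little code the "flag interesting hits" step takes, here's a minimal sketch using off-the-shelf OCR; the keyword list is made up for illustration.)

    # Minimal sketch: flag captured frames whose OCR'd text contains a keyword.
    # Uses Pillow + pytesseract; the keyword list is hypothetical.
    from PIL import Image
    import pytesseract

    KEYWORDS = {"password", "confidential", "account number"}  # made-up triggers

    def is_interesting(frame_path: str) -> bool:
        """Return True if OCR of the frame contains any keyword."""
        text = pytesseract.image_to_string(Image.open(frame_path)).lower()
        return any(word in text for word in KEYWORDS)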


I'm not saying I can't picture it. Just saying it doesn't seem likely to happen on a large scale basis.

I get what you're saying, that the personal risk is low, especially compared with, say, driving or heart disease. Heck, I'm a middle-aged heavy guy and couldn't care less who sees my nudes.

But, I believe we (as technologists) have a responsibility to use and push for strong security practices. I don't want my kids to grow up in a world where creeps blackmailing them through their webcams is a possibility, or where a rogue politician has all the tools of absolute authoritarianism already set up and waiting for him.

A camera cover is a huge win. It's super easy and cheap (a piece of plastic), it's easy to understand (entirely mechanical), it works 100% when used, and its failure modes are obvious. Not all security controls are cheap, easy, and 100% effective, but this one is. And if you don't bother to use it in your bedroom, then that's fine, but every webcam should have one.


Considering that there have been Trojan malware programs out there that can secretly take over a user's webcam since webcams became popular, I'd say it's a given that it is happening on a regular basis.

Also, there are many security programs that can surreptitiously take photos or videos using the camera. Usually this is to help in recovery after theft.


Here's a recent 0-day involving Macs and Zoom (or RingCentral): https://medium.com/@jonathan.leitschuh/zoom-zero-day-4-milli...


Which is an exploit, not an attack.


When I was in high school, people would do this all the time to each other by attaching trojans to files to gain access to webcams. This was in the days of Windows 98, and security was almost non-existent for most users.

People would get someone infected, and then share the credentials so everyone could watch. So, I personally know of a handful of people that were spied on 20 years ago.


Here's a case that went to court about it,

https://en.wikipedia.org/wiki/Robbins_v._Lower_Merion_School...


It's for when someone pwns root on your computer, you can still turn off video and audio recording physically and they can't spy on you.

The use case is that you leave them turned off by default in case someone pwns you, and only turn them on when you need to use them.


One use case is protecting against non malicious but unexpected use of the camera. For example when you want to join a call with voice only but the app defaults to video and voice. You can have your camera off and mic on and you know for sure you won't be unexpectedly streaming video.

A while back it was in the news that school-issued laptops had their webcams remotely activated by the administration during non-school hours. So yeah, there's been at least one case.

Well, yeah. There's a reason why built-in webcam covers / physical switches are much more prevalent on laptops oriented towards business/government users (cf. ThinkPads).


It's more or less a feeling, because if someone can activate your camera without you knowing, you have bigger problems.


If anyone is looking for a nice webcam cover for laptops, I've found the super-thin ones (marketed as "0.7mm thin" by EYSOFT and others on Amazon, etc.) are fantastic. Very cheap, very thin, visually unobtrusive, and dead simple to use. It doesn't interfere with closing my MacBook at all; it protrudes less than the rubber bumper around the screen border.

I still miss the physical mic mute button on my old ThinkPad X230... it didn't have a matching webcam button, but we've _almost_ had all of the right features in the past...


FWIW I had one of these on my laptop and the screen now has a crack that started right where this was.


Also note that the webcam closes onto the large, glass trackpad on Apple laptops, increasing the risk of damage if any debris gets in between there.


You can find "slim" webcam privacy covers on Amazon. Since they can slide it they're a better solution than tape.

That's what the Purism laptops have. But they're Linux-based (although I hear macOS is based on the Linux kernel).



The kernel is not BSD; it's based on the Mach microkernel [1] with a BSD compatibility layer implemented on top. The whole thing collectively is called XNU [2].

[1] https://en.wikipedia.org/wiki/Mach_(kernel)

[2] https://en.wikipedia.org/wiki/XNU


It was developed from something that was based on Mach. However, it is also based on BSD, as your second link states.

> The BSD code present in XNU came from the FreeBSD kernel. Although much of it has been significantly modified, code sharing still occurs between Apple and the FreeBSD Project.


The description of "Mach with a BSD layer on top" was nonetheless accurate. When you look at actual BSD you will see it is rather un-Mac-like, so I don't think it is a highly relevant comparison. (The path drawn here from the Purism laptop to XNU is very dubious.)


macOS is based upon XNU, which was based upon the Mach kernel (with bits of BSD userspace), not Linux.


Mac OS is in no way based on the Linux kernel.


Perhaps it would be if the license was different.


They began development at NeXT in 1985, 6 years before Linux.


Unlikely. The Apple kernel guys are purists and would probably use the BSD kernel even if Linux were MIT licensed.


In the 90s they had a project called MkLinux that had Linux combined with Mach in a similar position to where BSD is in XNU. This was shortly after the NeXT acquisition.

I bet they were considering it.

https://en.wikipedia.org/wiki/MkLinux


The Pinebook too.

Why only Dear Apple? Dear Dell, dear Lenovo, and dear Asus: equip your products with physical switches too.

macOS already asks the user for permission the first time an app tries to use the camera/microphone. If you want to be asked each time, install OverSight.

Have you noticed that the people who tape up their laptop camera almost always still carry around a smartphone in their pocket 100% of the time?


A smartphone is significantly more secure than a computer. I install lord knows what NPM package from God knows where on a weekly basis. Only very recently has mic or camera access started to trigger any kind of system prompt on the Mac.

Smartphones, for all their faults, are at least far less vulnerable to viruses than PCs.

Or at least iOS vs. the Mac.


I consider my desktop computer to be far more secure than my phone, since it's harder for someone to access it physically and it isn't running Android. The things I install on it are more trustworthy as well, since they're mostly small, established unix tools.

Your device is as secure as you make it. Why are you installing "lord knows what npm package" on your laptop?


>Why are you installing "lord knows what npm package" on your laptop?

Probably because he installs lord knows what npm packages to his production servers too.


I don't get why people who admit that they don't trust these random npm packages can think it's okay to ship them in production and put all their users' data at risk. It's malpractice.


I'd love to know a metric of trust and its relation to customer data. How many trust points for how much PII? I'm assuming it's a logarithmic scale? And a Debian stable package gets, what, double the points of an npm package? Or I guess it depends on the weekly downloads? What about pip, gems, vim plugins, emacs packages (I'm looking at you, MELPA), quicklisp, ...

Then we can run an honest thought experiment: how many people satisfy that metric? Don't forget to correct for how many PII points one is actually handling.

If you don’t at least have some consideration of those factors, claiming malpractice seems fatuous.


It's not a question of establishing an absolute scale of trust. It's about admitting that you consider npm packages to be insecure, but you run them in production anyways.

Imagine you believed that steel had a 10% chance of spontaneous combustion. Regardless of whether that's true, if you believe it and still build a bridge out of it, that's malpractice.


The point being: where is the line? How high are the stakes? (Bridge: say 20 human lives at any time; very important.) How dangerous is it really? (A 10% chance of fire per year: extremely high.) Then you combine those two and see if they match.

Everything has a limit. Otherwise why do you trust your compiler, your computer, your eyes, your sanity?

Be careful with a word like malpractice, and analogies that suggest blithe endangerment of human lives. It doesn’t leave a lot of room for honest engagement and suggests you either don’t understand the human mind, or the value of a human life.


You continue to miss the point. It's not a question of _why_ I trust my compiler or my computer. If you trust npm packages and ship them, then that's not malpractice.

It's about admitting you _don't_ trust npm packages, but you go ahead and use them anyway. That is malpractice, because you admit you know better but take the action anyway.

"I know this procedure may do more harm than good, but I will perform it anyways because I'm too lazy to find an alternative"

That is textbook malpractice.


Trust? I don’t even trust my eyes.. :)

Though yes, if laziness is what makes it malpractice, then I’m the Jack Kevorkian of IT. I plead guilty.


Glad we're in agreement :)

The difference being that the smartphone camera isn't pointed at your face or room the entire time the device is in use.


Frankly I don't think it's the camera that people should be most concerned about - it's the microphone.


Why?


Can my smartphone see anything from inside my pocket?


I tape my phone cameras too.

If an attacker can access your webcam/microphone at will, it means they can run arbitrary code to capture your screen, display anything they want, and capture your keystrokes.

In this case the camera or microphone is the least of your worries.



