Camera and microphone require HTTPS in Firefox 68 (blog.mozilla.org)
648 points by johnnyRose on July 9, 2019 | 197 comments



It will still work on localhost, which is nice. It would be nice if it also worked on local IPs, like 192.168... Those do not work on Chrome, I think, which makes mobile testing a bit more cumbersome.


> It would be nice if it also worked on local IPs, like 192.168...

That would defeat the security purpose.

Anyone on your local network (which, practically speaking, very often means whatever Wi-Fi network your device happens to join) could attack you.


But how do you do local development when you can't get an SSL cert for your dev machine's server? No, self signed certs don't always do what you need, especially on mobile where you can't just add your cert as a trusted cert easily.


Without using any third-party service, you could use an SSH tunnel, with autossh for automatic reconnections.

    autossh -L 2080:localhost:80 192.168.1.14
And then, you'll be able to visit your dev website on http://localhost:2080

Firefox will believe that your service is local, and will allow the activation of the camera and the microphone even though you do not use https.


Browsers have weird behaviors on localhost, such as allowing webcam and microphone access over HTTP and, IIRC, permitting cross-origin resource access that is blocked on less trusted domains.


> and iirc, permit cross-origin resource access

Firefox does not do that.

Safari has some behaviors along those lines last I checked.

I can't recall for Chrome whether it does or not.


If you own a domain, you can add a subdomain that points to the local network IP, and get Let's Encrypt to give you a certificate using the dns-01 validation method (which doesn't require Let's Encrypt to actually access the IP address in the A record).

This is clearly more complicated than ideal, but it should work.

Edit: You can also use a custom CA root certificate, which can be installed on iOS etc. mkcert is a good starting point:

https://github.com/FiloSottile/mkcert
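
For example, a rough sketch of the mkcert flow (the hostname and LAN IP below are placeholders):

    mkcert -install                        # create a local CA and add it to this machine's trust stores
    mkcert dev.example.test 192.168.1.20   # issue a leaf cert/key for the names your dev server answers on
    mkcert -CAROOT                         # prints the CA dir; copy rootCA.pem from there to the phone and trust it

Point your dev server at the generated .pem files and the warnings go away on every device that trusts that root.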


> This is clearly more complicated than ideal, but it should work.

Exactly.

Imagine you're someone who just wants to play around with cool web technologies. Maybe you're fairly new to web dev; maybe you're fairly new to the world of programming in general and you're using the web to learn it, which has historically been one of the huge strengths of the web. You suddenly encounter a brick wall, where you figure out that programming isn't enough; you have to fork over money for a domain and learn how SSL works and how to set up let's encrypt and how to make root certs and how to install them on your phone, just because you wanted to play with something you found interesting.

The web looks like it's moving away from being a good platform to learn and play with programming, in the name of security. It will be annoying but workable for most professional programmers, who can just do whatever hacks they need to get by, but we're erecting some monumental barriers to learning this stuff. You already can't even include a fucking JavaScript module file from an HTML file without learning how to set up and configure a web server, because Chrome blocks modules when using file://.


I fundamentally agree, but I think the solution is to continue making HTTPS easier to use rather than giving up on security.

For anything on the public Internet, things are already incredibly better than they used to be: HTTPS is free, and there's a wide range of easy ways to set it up on your site, ranging from Caddy (a webserver with built-in Let's Encrypt support) to CloudFlare (who will proxy your site for free including SSL termination). There are still problems – e.g. for all that certbot (official Let's Encrypt client) tries to be easy to use, it's more fiddly than ideal. But the goal of "HTTPS just works" seems clearly within reach, and things can only continue to improve from here.

On the other hand, the situation with local network servers is a complete mess. This includes not just development environments, where "just don't bother with security" is a viable option, but also things like home routers and IoT devices which do want to be secure. Currently, routers tend to just use HTTP, which is insecure if you have anyone untrusted on your Wi-Fi network. IoT devices, of course, tend to route everything through centralized cloud services; there are a lot of reasons for that, and it's easy to blame invidious motives, but I suspect a significant part of the reason is just that it's really hard to make an easy-to-use device without a centralized service. At a bare minimum, you need to be able to:

1. Securely pair with the device;

2. Connect the device to the network; and

3. Access the device's services over the network, using the existing pairing record to establish identity and prevent a MitM.

(Ideally you would also be able to expose the device's services to the wider Internet, but that's another story.)

You can do this already with a custom protocol, but not with a browser. The closest browsers have to a "pairing record" is asking you to trust a self-signed cert for some random domain, but that's nowhere near the semantics you actually want. After all, it doesn't really matter whether the device controls such-and-such domain; what matters is whether it's the same device you paired with. Meanwhile, trusting random self-signed certs is fundamentally insecure, and (intentionally) difficult to do.

What we need IMO is an entirely new protocol to address this use case, and I think such a protocol might also work for local dev servers.

In the meantime... well, there are always plenty of workarounds.


Fully agreed on the last point (and the hypothesis that this is one of the factors driving the ridiculous "talk to the cloud for everything" design pattern in IoT.)

Seems the current push is making certain legitimate use cases not just hard but pretty much impossible, such as providing a local web server that is independent of the public internet.

Devices used to provide embedded web pages as an easy way to show config options. This seems to have become completely impossible: even if you'd jump through all the hoops of generating a unique subdomain and certificate for each device, you'd also somehow need to update that certificate on that device. So your (possibly fully local) device now needs internet access for the sole reason that otherwise the browser refuses to display the local configuration page.


This feels pessimistic to me: most people didn’t learn the web that way, instead using shared servers — and there were plenty of similar complaints that it was too hard to learn Unix/Windows admin stuff, too. Today, you can use glitch, github pages, jsbin & a million friends, zeit, etc. or the same cheap Dreamhost account people used 20 years ago and start practicing with HTTPS and many other amenities at minimal cost. JavaScript CDNs make it pretty easy to use a ton of stuff without even needing to learn how to install it, too, and increasingly you can do that as native modules.

I’d worry a lot more about how many people are being told they need a J2EE-scale tool chain to run hello world even though the native environment has never been richer.


I have to imagine the "just open a .html file and start playing" route is a huge vector for getting people into proper programming. I know it's what both I and my brother did. Maybe you don't agree, but I think it's a horrible shame that we're making that route less and less possible by disabling features for anything other than HTTPS.

Glitch honestly looks really good, but I'm a bit worried about telling people that the way to learn programming is to rely on a random for-profit corporation's computers rather than letting people realize the actual power they have over their own machine.


> I have to imagine the "just open a .html file and start playing" route is a huge vector for getting people into proper programming.

It may be, but "just open an HTML file and write an app using your camera and microphone" is not something that is typically the result of doing just that (nor should it be).

Putting some effort into figuring out how the pieces fit together is not a bad thing. You can still set up HTTPS if need be without having to rely on a for-profit corporation. It's trivial to install a self-signed cert in iOS and OSX the last I checked (and I seem to remember it wasn't so hard in Windows either). It was excruciatingly bad in Android (well, mostly missing IIRC) around Gingerbread — but that too is a good example of why using products built by people with no comprehension of how to secure things is bad.

A learning curve is not inherently bad, but beyond that, especially with something that has such huge security implications, some understanding of WHY you should be encrypting camera feeds is something I'd want any dev working on a camera/audio recording app to have. There's a reason that while CB radio is easily accessible, there are barriers to entry for Hams. With power comes responsibility.


"just open a .html file and start playing" is a route to making web pages, not get people into programming.

If they do want to get into programming, Scratch and other learning environments and/or Node/JS are much better paths than dealing with the layers of barnacles that have accrued over HTML to get to the SPA / Wasm / TS/JS etc. "web programming" environments.

.NET and VSCode are free downloads, provide a managed environment, and C# is a good imperative language to start with. It also supports F# if you want to get into functional programming.


I find "it's ok if the platform becomes worse because there are so many third-party services you can use" not a compelling argument.

(Not to mention, you need to get to know the services in the first place, while a browser is directly accessible to you)


How did the platform become worse when this is a new capability which didn’t used to be part of the platform? Similarly, while it’s true that you need to know services exist, that’s never been easier just as the documentation, available guides, and especially the developer tools have never been better for someone learning.

I’m not entirely in love with the needs driving this decision but I think it’s reasonable to make security and privacy decisions which benefit a billion people at the expense of making certain tasks slightly harder for a much smaller group.


> Imagine you're someone who just wants to play around with cool web technologies. Maybe you're fairly new to web dev;

Then... you can do it on localhost. I can't really imagine the new web dev who is using a separate computer on their local network as a dev server but can't figure out how to get a Let's Encrypt cert to use.


False. Usually it’s just as easy if not easier to configure your dev server to listen on 0.0.0.0 instead of 127.0.0.1, and these days most sites are mobile first, so it’s very natural to develop on your computer and test on a phone in the same LAN. Figuring out accessing your computer via 192.168.0.x is way easier than figuring out how to issue a LE cert (the easy part) and use it on your dev server (the hard part).
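
For instance, with nothing fancier than Python's built-in server (just an illustration; any dev server with a host/bind option works the same way):

    # explicitly bind to all interfaces so a phone on the LAN can reach it
    python3 -m http.server 8000 --bind 0.0.0.0
    # then open http://192.168.0.x:8000 on the phone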

Before you mention mobile device simulation in desktop browsers, I’ll point out that mobile browsers often have quirks that are not present in desktop browser simulations. For instance, mobile Safari is subtly different from desktop Safari responsive design mode in many ways, and the only way that I know to actually simulate mobile Safari is with full-blown Simulator.app.


> Figuring out accessing your computer via 192.168.0.x is way easier than figuring out how to issue a LE cert (the easy part) and use it on your dev server (the hard part).

It really isn't that hard. I still can't imagine this hypothetical "new dev" who is doing cross-browser testing of this specific feature but can't install a simple SSL cert or get help doing so.

This isn't a barrier for entry to new developers, this is a specific use case that requires learning a minor, otherwise useful skill to get around. I think that is a totally reasonable trade-off.


The burden is a little higher but not insurmountable and it obligates new folks into things they need to know. I think it's an acceptable forcing function. Especially given the security concerns.


What security concerns though? It's not like accessing the camera on a random attacker controlled HTTP page is less secure than on a random attacker controlled HTTPS page. If the user lets a malicious web page access the camera, that's game over regardless.

I'm all for doing stuff to tell users that the HTTP page they're visiting is insecure, but telling people who are new to web dev to get a domain and learn how the world of SSL and domains works is actually a pretty fucking big hurdle. They'll have to get into that if they want to get serious about it, sure, but there's no reason to unnecessarily front-load the frustrating and complicated and unrelated parts. You may think it's acceptable; I value the web's accessibility to new developers.

Surely there's better ways to convince sites to use HTTPS than to say they can't use getUserMedia on HTTP.


> What security concerns though? It's not like accessing the camera on a random attacker controlled HTTP page is less secure than on a random attacker controlled HTTPS page. If the user lets a malicious web page access the camera, that's game over regardless.

No. But accessing the camera on a non-attacker-controlled HTTP page is less secure than doing so on a non-attacker-controlled HTTPS page, because an attacker could MitM the former but not the latter. (Even if the camera data itself is sent securely, the attacker could just change the host page's JavaScript to send it to a different server instead.)


Another option is xca[0]. It's not as quick to get going as mkcert, but it's quite full featured. I used it to create an internal CA and certificates for my internal services and it works quite well.

[0] https://hohnstaedt.de/xca/



> especially on mobile where you can't just add your cert as a trusted cert easily

What do you mean? Do some Androids block it? Or iOS? I've got it easily available in settings.


VSCode in recent versions has the Remote Workspace function. It can forward ports if you press Ctrl-P, type "ssh forward", hit enter, then the port number, and enter again. You can also set up the ssh config for that workspace to automatically forward ports.
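
For example, an entry along these lines in ~/.ssh/config (host name and ports are made up) forwards the port whenever the workspace connects:

    Host devbox
        HostName 192.168.1.14
        LocalForward 2080 localhost:80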


Not sure if you've heard of ngrok, but this tunnels your server so it can be accessed outside of your network, with or without HTTPS.

https://ngrok.com/


Easy:

1) Get a signed certificate for a subdomain on a domain you own (e.g. dev.example.com)

2) Change your hosts file to point it at any local IP you wish, or set up a DNS entry for that subdomain that points to 127.0.0.1
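
Roughly like this (dev.example.com and the LAN IP are placeholders, and dns-01 validation means the CA never has to reach that IP):

    # point the subdomain at the dev machine's LAN address
    echo "192.168.1.20  dev.example.com" | sudo tee -a /etc/hosts

    # issue the cert; the DNS challenge only needs a TXT record on the real domain
    sudo certbot certonly --manual --preferred-challenges dns -d dev.example.com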


I know localhost is supported by Let's Encrypt but not sure about the local LAN. One possible workaround is SSH forwarding: create a tunnel. But I have not tried it myself.


For local development, Chrome has a flag that lets you force specific origins to be treated as secure:

  chrome://flags/#unsafely-treat-insecure-origin-as-secure
I don't think Firefox has anything equivalent though? This bug on the topic is unassigned: https://bugzilla.mozilla.org/show_bug.cgi?id=1410365


Interesting! Though I find that behaves rather strangely – seems to clear itself on every launch.


Yea that's annoying, but I believe that's intentional... you can pass that flag in as an arg when you launch Chrome though, so you don't have to set it up each time.
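
Something like this, roughly (the origin is an example, the binary name varies by platform, and Chrome generally only honors the flag alongside a non-default --user-data-dir):

    chrome --unsafely-treat-insecure-origin-as-secure="http://192.168.1.14:3000" \
           --user-data-dir=/tmp/chrome-dev-profile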


I don't imagine that works on mobile chrome, which is what the parent comment was talking about.


It does, on Android.


Ah, TIL.

Though it doesn't work on iOS Chrome, and you want to test on iOS too obviously.


iOS Chrome is just Safari with a Google skin, Google has no control over this type of thing.


That's both true and completely irrelevant.


Well, it's relevant insofar as you're complaining to the wrong company.

Does this behavior even exist in iOS Chrome? If it does, it exists in Mobile Safari as well.


I... wasn't complaining about Google though? I said you can't enable chrome://flags/#unsafely-treat-insecure-origin-as-secure (or an equivalent) on iOS, not that it's Google's fault that you can't.


Mozilla’s previous blog post on the topic says

> Mozilla will provide developer tools to ease the transition to secure contexts and enable testing without an HTTPS server.

https://blog.mozilla.org/security/2018/01/15/secure-contexts...

But the bugzilla entry they linked to with that has been unassigned for two years, so maybe they changed their minds or figure the localhost exception is sufficient.

https://bugzilla.mozilla.org/show_bug.cgi?id=1410365

The last comment proposes a whitelist for development domains, but there has been no response to it.


To easily bypass this guard (for rapid development / testing) when you don't have a localhost environment, this helped me (with the 68.0 Windows edition):

In about:config, set

media.getusermedia.insecure.enabled

from false to true


Good to know! I'm leaving it alone on my main FF installation, but I've set that in Developer Edition.


On the Developer Edition 69.0b3 it didn't work - maybe there are some more flags needed. But on the normal edition 68.0 it worked / it is actually working.


It's fantastic that it works with localhost (and I assume 127.0.0.1?), and it's fantastic that it doesn't work with anything else. This is the best middleground.


When it doesn't work on anything other than localhost, you can't host a web server on your dev machine and test how it works on your phone. I've been through the hell of trying to test WebRTC applications on mobile Safari, and it's horrible.

Specifically, you need HTTPS for WebRTC, but you obviously have to use a self signed cert because local IP. You can ignore the cert error and load the page, but connecting to the websocket for signaling will still fail because websocket on iOS requires a non-self-signed cert.

Non-HTTPS websocket would work, but not from a HTTPS host. So you're in a situation where you need HTTPS due to WebRTC, but you can't use HTTPS due to websockets.

In trying to push people to HTTPS by disabling features on HTTP, we're making development a _much_ worse experience. I'm not sure that's right.


You can probably just create your own root CA and install it on your mobile device for testing. I've done this for my internal stuff at home and it works well.

I use xca[0] to create/manage the root CA and the certificates, but there are other tools to do this.

[0] https://hohnstaedt.de/xca/


> you obviously have to use a self signed cert because local IP

Not true at all, SSL certs have nothing to do with the IP of the servers that use them; the servers just have to have the correct private key for that cert.

You can make any domain point to local IPs by using the hosts file or even editing DNS directly.


I see your point about mobile testing. (I don't do mobile work, so I didn't think of it.)


Would be better if it also supported a new warning / permission to request insecure camera access.


Too confusing. There is already a permission prompt for camera access, so now there'd be two prompts, or it's still one prompt but the text is different in either case, which users can't be expected to understand.

You can go into about:config and explicitly undo this setting if you're in some weird dev corner case where it's a problem, but you should definitely put "Stop doing AV stuff in an insecure context" near the top of your TODO list.


I wrote a script a while ago that automatically creates a subdomain on a domain I own and registers a Let’s Encrypt certificate for this exact purpose. I’ve found it very useful as it will work even for local addresses and dynamic addresses. (It will update the DNS record and renew the cert on further invocations.)

I do wish there was a public solution offering this type of easy dynamic DNS with https. (Sharing the script I wrote could cost a lot on dns hosting and increased server expenses.)


You could add a tunnel via iptables to just route one port on localhost to another IP.


iptables on your phone? I think you would need to root your phone, if it's Android.


No need to root, with android in developer mode, run this to forward localhost:3000 to the same port on the phone.

    adb reverse tcp:3000 tcp:3000


I think it's not unreasonable for a techie to be the admin of their own phone. (I still don't get how this is an unpopular opinion.)


As I get older and get other responsibilities, I want to be admin of fewer and fewer things. Same reason I replaced my Linux fileserver with a Synology. I'll let someone else stay on top of the security issue du jour.


I also think it’s not unreasonable to not want to have root access to your phone.


If you have a OnePlus (which come more or less rooted) or a Samsung (Which has a lot of community support), you can probably root it pretty easily. But a lot of other phones lack the manufacturer's blessing or the community's support, leaving you SOL unless you can find an exploit yourself.

Also... iphone?


> (I still don't get how this is an unpopular opinion.)

Not unpopular, just unrealistic. You can't do it on an iPhone, and if you're doing local web development you really ought to be checking it on an iPhone.


If you're doing local web development, then running the iOS simulator will give you the necessary Safari iOS Web view controls.


It won't get you the necessary touch interactions if you're doing something that uses them, though.

Broadly speaking, you want to test on the actual device your users are using, not an approximation of one.


Probably possible with a local VPN on android? I believe those can do anything they please.


Is generating localhost certs and then accepting them in your browser once that hard?

    openssl genrsa -out key.pem 2048
    openssl req -new -key key.pem -out certificate.csr
    openssl x509 -req -in certificate.csr -signkey key.pem -out certificate.pem
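
Or, if you'd rather skip the interactive CSR prompts, a single self-signed cert in one command (one-year validity; the CN is just an example):

    openssl req -x509 -newkey rsa:2048 -nodes -days 365 -subj "/CN=localhost" -keyout key.pem -out certificate.pem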


if all development servers for all frameworks just did that on first run of a project it'd be so much easier.


I just stopped using languages that require a "framework" or "development server" with monster json/yaml configurations to run a web server. In Haskell, a change from http to https is switching from warp.run to warp-tls.runTLS function (with certificate paths set).


In order to replicate or simulate a production environment I sometimes do something like this:

    ip route add local 192.0.2.123/32 table local dev lo

which makes your system act like this is a local address without actually being one.

EDIT: nvm, I just realized that won't solve your issues with mobile development...


To any browser developers out there, I beg you, please, please, please whitelist lvh.me. I am so tired of security restrictions making everything painful for lvh.me.


Why? That is just an ordinary domain name that someone pointed at localhost. I don't see why it should get any security exemptions.


But nothing ensures that domain name will always point to localhost. So why should browsers trust it more than other HTTP domains? It's owned by one person, so DNS registration could lapse even with the best intentions.


In general, I'm NOT happy with how camera/microphone, GPS and sensors like Gyro/accel/magnetometers and beacons (radio/wifi/bluetooth/nfc), screen size/resolution, battery level, hardware port identifiers etc are accessed by any website or app on my laptop/phone.

This developed over the years without any input or choice from the end-user. The device manufacturers, platform owners (Apple, Google, Microsoft, Mozilla) and app developers joined together and forced this surveillance apparatus on all end-users.

This power balance has to change.


There's no problem with having the capability in a web app. It's only a problem if those capabilities are not consented to by the user first.


There is definitely no problem in having the capabilities in the hardware. But appropriate assured controls should have been built and provided to the end-user. The challenge is these controls are not in hardware-switch-esque form. They are left to the whims and fancies of individual apps. That blame goes to platforms.


I disagree. Having things like this rolled into the browser means it's one security vulnerability or corporate decision away from hurting someone.


Dear Apple: for Christmas I'd like physical, no-bullshit power shutoff switches for your camera and microphone on the Macbook Pro. Other devices too—if you can manage it, that'd be great. Sincerely, the people who put tape over their cameras, the people who don't because it's ugly and messes with closing the lid but wish they could, and the people who would be in one of those two camps if they understood the risk (so, altogether, 100% of your users).


From my understanding, Apple has done sort of what you're asking for, they've hooked up the physical wiring of the camera LED to the camera itself, so it is physically impossible to power the camera without the LED being turned on (as opposed to the "turn on LED" being part of firmware logic that could be hacked).


This is a better solution overall, as it's "by default". A hardware switch relies on the user to be privacy conscious. An LED which is physically connected to the camera circuit (!) is immediately noticeable if it turns on unexpectedly.


As a layperson in this arena, I'm skeptical as to whether it's a great solution. Is it possible to turn the camera on and off very quickly? If so, a smart hacker could do that really quickly and if the owner ever notices they would probably think there is a problem with the electrical rather than thinking they are being monitored.


> Is it possible to turn the camera on and off very quickly?

Not particularly. At least on my 2015 rMBP, using code that I wrote (so I know it's not doing anything extraneous), the light is on for about a quarter of a second before the first frame is returned from the camera. This is because the LED is literally showing you when the camera has power (which includes any sort of handshake with the system), not just when it's capturing frames.

Is that enough that a user who's really concentrating on the screen will nonetheless see the light come on? Not necessarily. But GP has a good point about this being a feature that doesn't rely on the user being proactive.


It's a USB camera. It needs more time than a flicker to turn on and start producing frames. I don't think you could do as you said and still have the camera both work and the LED be dim.


The camera on Macs has actually been a PCIe peripheral for quite a few years now. But your point stands; it still takes a good second or so after the LED turns on for the camera driver to start producing frames to userspace.


How long did it take you, a self-proclaimed layperson, to come up with the idea of quickly pulsing the camera? Now, how likely do you think it is that someone who's actively trying to prevent camera shenanigans would think of this idea as well, and mitigate by e.g. introduce delays or latch the light on for a couple of seconds?


How likely? Who knows? It could be an intern that implemented it for all we know. I've seen more critical things implemented by interns at a medical device company I was previously at. Do you sincerely think Apple is more concerned with secure operation of a camera (that people are going to put tape over anyways if they are that concerned with security) than a full fledged (successful) medical device company is with medical devices?

Moreover, even if it wasn't an intern, how experienced do you think the engineer is at understanding human behavior in response to hacks? Many engineers I have met have difficulty conversing with other people and have even more difficulty in actually understanding their behavior. I can almost guarantee you that even switching it on and off at slow rates will convince most people that there are electrical issues.

Also do you honestly think the average electrical engineer is that well-versed with hacking paradigms? I would conjecture that software engineering is one of the leading fields to be a gateway to understanding hacking, and during my electrical engineering degree, most of them acted like writing software was a nuisance they had to do to get through the degree. Hell, even most of the lab instructors we had from JPL looked down on software engineering and talked to bad EE students the same way a cliche high school instructor would talk to bad high school students; instead of telling them "you better like asking, 'do you want fries with that'", they would say (in the same tone), "you better be good at writing software."

How do you even know what the budget is for the department the engineer is in? How do you know they have the budget to spend weeks on securing a camera most security-minded people are going to put tape over anyways? How do you know it wasn't some off-the-cuff comment in a meeting, saying I can implement this feature in an hour, and everyone was like that's nice, you should do that, and the thought of security never went further than that?

Unless you were there, you don't have the slightest clue as to how well thought out the whole thing is.


If you install Oversight, you can get persistent notification center alerts for most mic and cam activations (of course, it likely won't help if you have targeted malware that knows how to disable/uninstall Oversight) - https://objective-see.com/products/oversight.html



LED brightness is controlled by pulse-width modulation: at low frequencies, the camera LED would appear dimly lit. A more sophisticated approach might be to combine gaze detection to ramp-down frequency if someone is looking towards the camera/LED.


PWM reduces average power. If the LED is on the same circuit as the camera, I don't know how successful you will be at powering the camera while trying to dim the LED.


A momentary flicker would not be perceptible in a lit room.


A momentary click on a phone line was also imperceptible... until it wasn’t.

You might not even see the flicker but if you catch it in your peripheral vision often enough, or you found out someone else was caught by it or it hit the news big time, you’d suddenly become more suspicious about that momentary flash. Maybe even paranoid.


It is just like the small hacks that are possible with ANY of these “require UNRELATED user interaction” things.

Like being able to speak “” when the user clicks. Or something really short or kind of unpronounceable like “,,,,”. Apple could of course try to require the first speech to always be long enough to be unmistakably speech. But otherwise ANY user interaction is enough to enable ANY speech.

The alternative would be to have dialogs for everything: “would you like to turn on the camera?” “Would you like to let this website use speech to text?” “Always remember my choice for this domain”.

Seems like giving the user a master switch that overrides things, and letting websites detect this and complain, doesn't have many downsides but has tons of upsides.

And then of course there is browser fingerprinting. It’s now really hard to turn it off without breaking tons of sites that care about the width of your window (size of your phone) and your operating system, and so on


This is not a better solution overall and there's no reason we can't have both, other than manufacturer design choices. How often are you looking directly at your camera? Even if you are, once the camera comes on unexpectedly, it's too late.


> How often are you looking directly at your camera?

On my Mac, I find the LED very noticeable when it comes on unexpectedly! It's bright and green and not part of my screen. And yes, this has actually happened to me!

> Even if you are, once the camera comes on unexpectedly, it's too late.

Nah, they saw a few frames—they're very unlikely to be useful. What's more important is knowledge.

I agree we could have both, but each of these features does have a financial cost. I consider the LED significantly more important.


Doesn't really cover situations where the computer is in a persons bedroom (common with eg: teenagers), and not powered off overnight.


I remember there was a story saying that it's possible, via software only (due to a "bug" and some "poor" hardware design), to turn on the camera without activating the LED.


It used to be that the LED was controlled by firmware. As far as I know this is fixed now but I haven't proven it myself.

There are leaked schematics of MacBooks online (that unofficial repair shops use) so if you want to investigate this I'd expect it to be a good place to start.



Published in 2013, but describes "the Apple internal iSight webcam found in previous generation Apple products including the iMac G5 and early Intel-based iMacs, MacBooks, and MacBook Pros until roughly 2008".


That's useful, but it's not as reassuring as a hardware switch where you can see it work.

(For video, anyway. I don't see any similar solution for audio.)


Two reasons why switch is better:

1. If it does happen to light up, what would you do, turn off your computer? That's shitty.

2. What if you're AFK and aren't looking at the light?


A switch plus a notification (as opposed to modal prompt) if you try to get into video/audio calls but forget to flip it back. Just to remind you that you’re trying to enable video/mic but have the hardware switched off.


Arguably better to be aware of the breach than prevent one sensor from leaking too.


What about the mic?


FWIW... I bet you could power the camera, get a still frame at 60fps, shut it down, and not see the led come on. It would be 1/60, 16ms plus or minus the amount of overhead needed, plus autofocus and exposure correction might make it impractical, but it’s definitely not a bulletproof fix.


Apple could simply make the camera take 1 second to activate.


Slow spool up embedded devices... we’re moving backwards :)


If Google Calendar thinks it’s OK to set a 500ms animation on opacity for event edit dialog - then it doesn’t seem like a 1 sec spooling for a webcam is too much :)


How much would you bet?


Considering I have a device here that has an I2C camera and in firmware I can turn it on and off at right around 25ms, I’d consider betting quite a bit.

Instead of being snarky, how about you explain why this isn’t possible. Even if it was 100ms almost no one would catch that.


I’m not being snarky, and we’re not talking about some random device you have. We’re talking about the camera on a Mac. The bet is that you can’t turn it on and off so fast as to be un-noticeable because there is a noticeable delay between that light turning on and getting an image from that camera. So I’ll gladly take the other side of that bet.


> From my understanding, Apple has done sort of what you're asking for, they've hooked up the physical wiring of the camera LED to the camera itself, so it is physically impossible to power the camera without the LED being turned on

Correct. And that was my reason for NOT covering the camera. Because I would be able to see if it was on due to some malware. However, I did not expect a vulnerability like Zoom's, where a simple website would be able to trigger a webcam. Combined with external monitors, the LED would be potentially missed for a good amount of time. So I've reversed my position since then.


I’ve read this as well but my macbook senses ambient light and adjusts my backlight accordingly. Isn’t this through the camera - with no LED?


The ambient light sensor is right next to the camera, I think it's usually to the left. It's a bit hard to test since they're close together. Macs have had ambient light sensors for a long time. For example, take an old iMac and put it to sleep. The power light will pulse with a period of about 2s, and the brightness will depend on the ambient light level.

Edit: Just tested on my MBP. Opened photo booth, covered the camera with my thumb, shone a bright flashlight at the point just left of the camera. The display got brighter but Photo Booth showed no changes in what the camera was seeing.


No, the ambient light sensing is done with a light sensor on the body - look for the tiny hole drilled into the chassis. It doesn't use the camera.


Remember when they did that for your battery power indicator! With a little button to trigger it.



I think I understand the risk, and I definitely would not bother with such a switch even were it available. So maybe I actually don't understand it.

What exactly is the risk? Have there been any actual cases of someone being spied on with their laptop webcam that would have been prevented by a switch? I'm only aware of cases where the webcam switch would not have helped (e.g. roommate sets up notebook to record owner naked). Even that is incredibly rare, or if not rare, almost never reported.


A quick search seems to turn up quite a few examples of webcam spying. I'd love to see actual numbers, but it doesn't seem to be "incredibly rare".

https://www.dailymail.co.uk/sciencetech/article-5228017/Hack...

https://www.dailymail.co.uk/news/article-2638874/More-90-peo...

https://globalnews.ca/news/2158281/what-you-need-to-know-abo...

https://www.telegraph.co.uk/technology/news/10131456/Hackers...

This site claims a guy made a business selling software to hack and remotely control webcams, complete with paid employees and $350,000 in income:

https://www.welivesecurity.com/2015/04/21/webcam-hacking/


OK. That has caused me to update my beliefs. I still think that there is relatively little risk -- like, you should be much more worried about being in a car accident -- but I no longer think it's on par with being struck by lightning.


Why would this be of little risk? The good thing about software is that it is automatable. That's also the bad thing.

Create a malware (which due to some big company fuckups can be even embedded in a webpage these days). Capture frames indiscriminately. Add some image recognition algorithms (from OCR to machine learning, depending on what you want to do) to flag interesting hits.

Voila. Massive dragnet. Applications can range from simple blackmail (a-la Black Mirror) to industrial espionage.


I'm not saying I can't picture it. Just saying it doesn't seem likely to happen on a large scale basis.


I get what you're saying, that the personal risk is low, especially compared with say driving or heart disease. Heck, I'm a middle aged heavy guy and couldn't care less who sees my nudes.

But, I believe we (as technologists) have a responsibility to use and push for strong security practices. I don't want my kids to grow up in a world where creeps blackmailing them through their webcams is a possibility, or where a rogue politician has all the tools of absolute authoritarianism already set up and waiting for him.

A camera cover is a huge win. It's super easy and cheap (a piece of plastic), it's easy to understand (entirely mechanical), it works 100% when used, and its failure modes are obvious. Not all security controls are cheap, easy, and 100% effective, but this one is. And if you don't bother to use it in your bedroom, then that's fine, but every webcam should have one.


Considering that there have been Trojan malware programs out there that can secretly take over a user's webcam since webcams became popular, I'd say it's a given that it is happening on a regular basis.

Also, there are many security programs that can surreptitiously take photos or videos using the camera. Usually this is to help in recovery after theft.


Here's a recent 0-day involving Macs and Zoom (or RingCentral): https://medium.com/@jonathan.leitschuh/zoom-zero-day-4-milli...


Which is an exploit, not an attack.


When I was in highschool people would do this all the time to each other by attaching trojans to files to gain access to webcams. This was the days of Windows 98, and security was almost non-existent for most users.

People would get someone infected, and then share the credentials so everyone could watch. So, I personally know of a handful of people that were spied on 20 years ago.


Here's a case that went to court about it,

https://en.wikipedia.org/wiki/Robbins_v._Lower_Merion_School...


It's for when someone pwns root on your computer, you can still turn off video and audio recording physically and they can't spy on you.

The use case is that you leave them turned off by default in case someone pwns you, and only turn them on when you need to use them.


One use case is protecting against non malicious but unexpected use of the camera. For example when you want to join a call with voice only but the app defaults to video and voice. You can have your camera off and mic on and you know for sure you won't be unexpectedly streaming video.


A while back it was in the news that school issued laptops had their webcams remotely activated by administration during non school hours. So yeah there's been at least one case.


Well, yeah. There's a reason why built-in webcam covers / physical switches are much more prevalent on laptops oriented towards business/government users (cf Thinkpads).


It's more or less a feeling. Because if someone can activate your camera without you knowing you have bigger problems.


If anyone is looking for a nice webcam cover for laptops, I've found the super-thin ones (marketed as "0.7mm thin" by EYSOFT and others on Amazon, etc) are fantastic. Very cheap, very thin, visually unobtrusive, and dead-simple to use. Doesn't interfere with closing my Macbook at all, it protrudes less than the rubber bumper around the screen border.

Still miss the physical mic mute button on my old Thinkpad X230... and it didn't have a webcam button for that, but we've _almost_ had all of the right features in the past...


FWIW I had one of these on my laptop and the screen now has a crack that started right where this was.


Also note that the webcam closes onto the large, glass trackpad on Apple laptops, increasing the risk of damage if any debris gets in between there.


That's what the Purism laptops have. But they're Linux-based (although I hear Mac OS is based on the Linux kernel).



The kernel is not BSD, it's based on the Mach microkernel [1] with a BSD compatibility layer implemented on top. The whole thing collectively is called XNU [2].

[1] https://en.wikipedia.org/wiki/Mach_(kernel)

[2] https://en.wikipedia.org/wiki/XNU


It was developed from something that was based on Mach. However, it is also based on BSD, as your second link states.

> The BSD code present in XNU came from the FreeBSD kernel. Although much of it has been significantly modified, code sharing still occurs between Apple and the FreeBSD Project.


The description of "Mach with a BSD layer on top" was nonetheless accurate. When you look at actual BSD you will see it is rather un-Mac-like, so I don't think it is a highly relevant comparison. (The path drawn here from the Purism laptop to XNU is very dubious.)


macOS is based upon XNU, which was based upon the Mach kernel (with bits of BSD userspace), not Linux.


Mac OS is in no way based on the Linux kernel.


Perhaps it would be if the license was different.


They began development at NeXT in 1985, 6 years before Linux.


Unlikely. The Apple kernel guys are purists and would probably use the BSD kernel even if Linux were MIT licensed.


In the 90s they had a project called MkLinux that had Linux combined with Mach in a similar position to where BSD is in XNU. This was shortly after the NeXT acquisition.

I bet they were considering it.

https://en.wikipedia.org/wiki/MkLinux


The Pinebook too.


Why only Dear Apple? Dear Dell, dear Lenovo and dear Asus, enable your products with physical switches


You can find "slim" webcam privacy covers on Amazon. Since they can slide it they're a better solution than tape.


macOS already asks the user for permission the first time an app tries to use a camera/microphone. If you want to be asked each time, install OverSight.


Have you noticed that the people who tape up their laptop camera almost always still carry around a smartphone in their pocket 100% of the time?


A smartphone is significantly more secure than a computer. I install lord knows what NPM package from God knows where on a weekly basis. Only since very recently does mic or camera access cause any kind of system prompt on Mac.

Smartphones, for all their faults, are at least far less vulnerable to viruses than PCs.

Or at least iOS vs Mac.


I consider my desktop computer to be far more secure than my phone, since it's harder for someone to access it physically and it isn't running Android. The things I install on it are more trustworthy as well, since they're mostly small, established unix tools.


Your device is as secure as you make it. Why are you installing "lord knows what npm package" on your laptop?


>Why are you installing "lord knows what npm package" on your laptop?

Probably because he installs lord knows what npm packages to his production servers too.


I don't get why people who even admit that they don't trust these random npm packages can think it's okay to ship them in production and put all their users' data at risk. It's malpractice.


I’d love to know a metric of trust and its relation to customer data. How many trust points for how much PII? I’m assuming it’s a logarithmic scale? And a Debian stable package gets, what, double the points of an npm package? Or I guess it depends on the weekly downloads? What about pip, gems, vim plugins, emacs packages (I’m looking at you, MELPA), quicklisp, ...

Then we can play an honest thought experiment: how many people satisfy that metric? Don’t forget to correct for actually how much PII points one is handling.

If you don’t at least have some consideration of those factors, claiming malpractice seems fatuous.


It's not a question of establishing an absolute scale of trust. It's about admitting that you consider npm packages to be insecure, but you run them in production anyways.

Imagine you believed that steel had a 10% chance of spontaneous combustion, regardless of whether its true or not, if you believe that and you still built a bridge out of it, that's malpractice.


Point being where is the line? How high are the stakes (bridge: say 20 human lives at any time, very important). How dangerous is it really? (10% chance of fire per year: extremely high). Then you combine those two and see if they match.

Everything has a limit. Otherwise why do you trust your compiler, your computer, your eyes, your sanity?

Be careful with a word like malpractice, and analogies that suggest blithe endangerment of human lives. It doesn’t leave a lot of room for honest engagement and suggests you either don’t understand the human mind, or the value of a human life.


You continue to miss the point. It's not a question of _why_ I trust my compiler or my computer. If you trust npm packages and ship them then that's not malpractice.

It's about admitting you _don't_ trust npm packages, but you go ahead and use them anyways. That is malpractice, because you admit you know better but take action anyways.

"I know this procedure may do more harm than good, but I will perform it anyways because I'm too lazy to find an alternative"

That is textbook malpractice.


Trust? I don’t even trust my eyes.. :)

Though yes, if laziness is what makes it malpractice, then I’m the Jack Kevorkian of IT. I plead guilty.


Glad we're in agreement :)


The difference being that the smartphone camera isn't pointed at your face or room the entire time the device is in use.


Frankly I don't think it's the camera that people should be most concerned about - it's the microphone.


Why?


Can my smartphone see anything from inside my pocket?


I tape my phone cameras too.


If an attacker can access your webcam/microphone at will, this means they can run arbitrary code to capture your screen, display anything they want and capture your keystrokes.

In this case the camera or microphone is the least of your worries.


This sucks, my community[1] has a local offline-first video/audio call app that we run on a physical mesh network.

This will make it impossible for people to talk to each other without first needing to be connected online to some certificate authority, or without some extraordinarily difficult pre-installation process, which is often not even possible on a phone.

HTTPS was important, but now it's being used to shoehorn in dependency on a centralized, online-only authority. Perfectly ripe for censoring anyone.

1. https://gitter.im/amark/gun


A browser doesn't need to connect to the certificate authority to validate a cert; only the server hosting the app 'needs' to be online (at least long enough to obtain a signed certificate every so often).

The bigger problem is that there has to be a single server hosting the app in the first place, which IMO is a severe flaw in the Web's architecture. But this change doesn't really make the situation worse.


Subnet IPs are always different tho. Can I really get a cert for all subnet addresses? That'd be awesome! Please please educate me.

I want to be clear though, I need it so that the user doesn't have to install the cert themselves, or have to be online to approve.

Previously, a user would connect to the local wireless network, then the router would open them up to a directory listing of the local apps available on the network (like the video/audio call), they click the link (just points to the dynamic subnet IP of a static file server) to load the offline HTML page which then connects to call anyone in the network, including users on neighbor and neighbor-of-neighbors routers.

Basically our own decentralized telecom!


SSL certificates are typically issued for domains and aren't tied to a specific IP address. So you need a domain name, and clients need to be able to resolve that domain to your IP, which means the network needs a DNS server – but the DNS server itself doesn't have to be online at all times, it just has to know what IP to return for your domain. I am not sure how that works with your mesh setup.

Note that some domain validation methods involve the certificate authority resolving the domain to an IP address and trying to connect to it on the public Internet – but not all. Let's Encrypt, for example, supports the dns-01 method, which just requires a custom TXT record to be set on the domain. (But of course the TXT record itself needs to be on the public Internet.) That said, since your goal is to work offline, you may want to use a different CA that issues longer-lived SSL certificates, since Let's Encrypt only gives you 3 months at a time.
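
(For reference, the dns-01 "proof" is just a TXT record the CA looks up under a well-known label, so you can sanity-check it from anywhere; the domain below is hypothetical:)

    dig TXT _acme-challenge.mymesh.example.com +short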


Since we're talking about a Web Browser I'm going to assume we're talking about the Web PKI - certificates that are trusted in common software like web browsers that use TLS.

Leaf certificates in the Web PKI specify one or more SANs (Subject Alternative Names; the "alternative" is because this is the Internet's alternative to the way the X.500 directory system was designed, and you don't use the X.500 directory system so you don't care about this). Each SAN can be either an IP address (IPv4 or IPv6), any Fully Qualified Domain Name (like bobs-laptop.example.com) from the Internet DNS, or a "Wildcard" like *.servers.example.com, which is considered a "match" for any Fully Qualified Domain Name that has exactly one label (a name with no dots in it) in place of the asterisk, so it would match bigfiles.servers.example.com and www.servers.example.com but not www.example.com or bigfiles.servers.microsoft.com.

You can get software (such as "Certbot" or "acme.sh") to help obtain trusted certificates periodically from the Internet automatically (at no cost) for a machine which has a Fully Qualified Domain Name on the Internet and is connected to the Internet at least sometimes. You may need to write software yourself to manage actually installing such certificates if your server software is custom - if you use common server software like Apache the tools can do it for you. The no cost option is provided by a charity, ISRG. If you're not a charity and appreciate the service you might consider sending a few bucks their way so they can keep doing this.

If your servers are not ordinarily connected to the Internet, but you do own an Internet domain name (e.g. example.com) you can just make up names for them in that domain and you will be able to obtain certificates for those made-up names, since you control the domain they're your names to do with as you please. But doing this is a bunch more work than the scenario where they're on the Internet.


I'm not familiar with this exact setup, but I am assuming you have full control over the router software, but want to limit any installation or configuration of either the browser's computer or the local network fileservers.

> Subnet IPs are always different tho. Can I really get a cert for all subnet addresses?

SSL certs don't usually have anything to do with the IP address, that is usually handled by the hosts file / DNS entries.

There is no reason the non-profit can't get a domain and a free SSL cert and distribute that cert and its private key with the router software as a default, while allowing admins to install/configure their own domain and SSL cert.

The router can then MITM all requests to that domain using a SSL termination proxy for the file server.


Correct.

Probably can even configure local network file servers, but better if not.

If we don't ever need to use domains in the mesh (we have a separate directory / search system).

Wait, I only have to have the certs locally (offline) on the routers?

Ahh, hmm, cause you're saying I could MITM it. But browsers (especially on mobile) all usually freak out when they go to `https://subnetIPaddress`, saying "your connection is not private" / "back to safety" every single time, with freakishly small "proceed anyway" links on mobile. Either way, mobile or not, this warning totally just trashes the experience. How do I fix that?

Or you're saying they still type in the domain? But doesn't that require existing internet to then go through? Or you're saying, router still MITM that, but happens to have matching private key, so then it is able to locally (offline) proxy the traffic into the mesh? Hmmmmmmmmmmm!!!! This might be very helpful. Sucks we still have to buy certs to run our own offline system - who has the longest certs? (Let's Encrypt is like only 3 months?)

Super thanks to everyone for helping us!


You won't have to buy certs from LetsEncrypt, they're provided free. So you'd have to have an external DNS that allows you to provision DNS records for mymesh.example.com and request a wildcard certificate for that domain.

The script is automated and will ensure that the certificate is always up to date.

Inside the mesh you would need:

* Have an internal DNS that resolves myserver.mymesh.example.com to an internal IP address

* Distribute the private key and certificate to the internal servers of your mesh.

* Have the browsers/clients of your mesh use the DNS names instead of raw IP addresses. So users would have to learn to go to https://myserver.mymesh.example.com instead of https://a.b.c.d

What you will need to do is have an internal DNS server that resolves "myserver.mymesh.example.com" to an internal IP address. The server would use the *.mymesh.example.com private key and cert.
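
As a concrete sketch, if the router runs dnsmasq, that internal resolution can be one flag (names and address invented):

    dnsmasq --address=/mymesh.example.com/10.0.0.5

which answers *.mymesh.example.com with the internal server's address for every client on the mesh.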


To further clarify, running an internal DNS server doesn't require a MitM, as the DNS server address for a network is generally supplied as part of DHCP. (There is one reason you might want to do a MitM, but I really don't recommend it. Namely, some people change their settings to ignore the DHCP-supplied DNS server and hardcode an address, e.g. 8.8.8.8, which they would fail to reach if the network isn't connected to the Internet. In theory you could work around the issue by redirecting such traffic to your own DNS server.)

As for longest certs, the CA/Browser Forum Baseline Requirements (which all CAs have to follow) specify a maximum validity period of 825 days, or a little over 2 years. You should be able to find CAs offering certs with that period. (Why such a specific number? I have no idea.)


It is true, instead of MITMing the HTTP request, you can MITM the DNS request. The issue then is that you need to distribute and configure the private key and certificate on all the static file servers rather than just on the router.


> Or you're saying, router still MITM that, but happens to have matching private key, so then it is able to locally (offline) proxy the traffic into the mesh?

The router needs both the private key and a recent signed cert. Neither machine needs access to the internet to validate the cert. The client uses its preinstalled root certificate public keys to see if any of them signed the certificate provided by the server (any intermediate certs are also provided by the server).

The only online requirement is to have the clients receive SOME non-local IP address in response to their initial DNS query for the domain (the specific IP address doesn't necessarily matter, since the router will be intercepting the request before it is routed using that IP address).

You can get longer multi-year certs, but Let's Encrypt doesn't charge and allows you to script the automatic regeneration of new certs.


If you are running a network disconnected from the internet, it follows logically that you'll have to reconfigure the normally internet-anchored security mechanisms on end-devices. You always have the option of using another app for this too.

Are you sure you don't want confidentiality on the audio/video calls on your network? After all it's passing through all the mesh nodes and vulnerable to eavesdropping.


We do end-to-end encryption with WebCrypto in the browser, but WebCrypto is only available on HTTPS :P which (though the sister comment might be onto something!) makes it hard to access our subnet IPs offline in the first place!


It's been over a year, but I played around with PKI and installing your own self-signed root certs on iPhone and Android (for HTTPS) was not hard.


But having to start exchanging root certs is not trivial in mesh environments, it's also insecure if you have to accept someone else's root.


Does anyone still remember the times when MSIE actually warned you if you were POSTing anything over an unencrypted connection?


Yes and it basically showed how it doesn't work. Applications were designed around the assumption that the user would click through the warnings. It looks like Firefox and Chromium are doing far better by restricting features to SSL, though I would be happier if they were also trying to push something more resistant to nation-state abuse...


This page's content is in a div named "mobile-pusher" with a 400px padding-right, which then gets disabled when loading a JS script from a third-party domain (ffp4g1ylyit3jdyti1hqcvtb-wpengine.netdna-ssl.com). wtf?


that's wpengine's cdn, most likely the host of the blog.


It's funny how everyone feels empowered to be a digital sleuth these days but draws ridiculous conclusions from the same material everyone else has.


"wtf?" is not a ridiculous conclusion.


probably related to the hamburger menu on mobile, which pushes that div over 300px


Love it, although I'm sure you can still send video/audio in non-encrypted ways right?


This is very annoying for development in my eyes.

What is the preferred way to include https in your development flow? Have an nginx or apache running? What about automated tests against a running application?


localhost is considered "secure" and doesn't need HTTPS - this can be used by most development and automated testing flows. For remote development, I tend to set up a Caddy server, which makes it very easy to get an SSL certificate.
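
The Caddyfile for that can be tiny; a sketch in the v1 syntax from around that time (domain and upstream port are placeholders), with the certificate obtained automatically from Let's Encrypt:

    dev.example.com {
        proxy / localhost:3000
    }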

This is still mildly annoying.


What took them so long?


They can't even get Firefox to properly load (let alone let me pick) all 10 of my webcams.

How about making that work, first?


Chrome has been doing it since around 2016, it seems.


In Firefox 73, they'll require consent!


They already require consent. You probably know that. Not sure what the joke is.


Both of these permissions already require consent.



