That would defeat the security purpose.
Anyone within your local network (which, practically speaking, very often means anyone on the nearest Wi-Fi your device could find) could attack you.
autossh -L 2080:localhost:80 220.127.116.11
Firefox will believe that your service is local, and will allow the activation of the camera and the microphone even though you do not use https.
Firefox does not do that.
Safari has some behaviors along those lines last I checked.
I can't recall for Chrome whether it does or not.
This is clearly more complicated than ideal, but it should work.
Edit: You can also use a custom CA root certificate, which can be installed on iOS etc. mkcert is a good starting point.
Imagine you're someone who just wants to play around with cool web technologies. Maybe you're fairly new to web dev; maybe you're fairly new to the world of programming in general and you're using the web to learn it, which has historically been one of the huge strengths of the web. You suddenly encounter a brick wall, where you figure out that programming isn't enough; you have to fork over money for a domain and learn how SSL works and how to set up let's encrypt and how to make root certs and how to install them on your phone, just because you wanted to play with something you found interesting.
For anything on the public Internet, things are already incredibly better than they used to be: HTTPS is free, and there's a wide range of easy ways to set it up on your site, ranging from Caddy (a webserver with built-in Let's Encrypt support) to CloudFlare (who will proxy your site for free including SSL termination). There are still problems – e.g. for all that certbot (official Let's Encrypt client) tries to be easy to use, it's more fiddly than ideal. But the goal of "HTTPS just works" seems clearly within reach, and things can only continue to improve from here.
On the other hand, the situation with local network servers is a complete mess. This includes not just development environments, where "just don't bother with security" is a viable option, but also things like home routers and IoT devices which do want to be secure. Currently, routers tend to just use HTTP, which is insecure if you have anyone untrusted on your Wi-Fi network. IoT devices, of course, tend to route everything through centralized cloud services; there are a lot of reasons for that, and it's easy to blame invidious motives, but I suspect a significant part of the reason is just that it's really hard to make an easy-to-use-device without a centralized service. At a bare minimum, you need to be able to:
1. Securely pair with the device;
2. Connect the device to the network; and
3. Access the device's services over the network, using the existing pairing record to establish identity and prevent a MitM.
(Ideally you would also be able to expose the device's services to the wider Internet, but that's another story.)
You can do this already with a custom protocol, but not with a browser. The closest browsers have to a "pairing record" is asking you to trust a self-signed cert for some random domain, but that's nowhere near the semantics you actually want. After all, it doesn't really matter whether the device controls such-and-such domain; what matters is whether it's the same device you paired with. Meanwhile, trusting random self-signed certs is fundamentally insecure, and (intentionally) difficult to do.
What we need IMO is an entirely new protocol to address this use case, and I think such a protocol might also work for local dev servers.
In the meantime... well, there are always plenty of workarounds.
Seems the current push is making certain legitimate use-cases not just hard but pretty much impossible. Such as providing a local web server that is independent of the public internet.
Devices used to provide embedded web pages as an easy way to expose config options. This seems to have become completely impossible: even if you jumped through all the hoops of generating a unique subdomain and certificate for each device, you'd also somehow need to update that certificate on the device. So your (possibly fully local) device now needs internet access for the sole reason that the browser would otherwise refuse to display the local configuration page.
I’d worry a lot more about how many people are being told they need a J2EE-scale tool chain to run hello world even though the native environment has never been richer.
Glitch honestly looks really good, but I'm a bit worried about telling people that the way to learn programming is to rely on a random for-profit corporation's computers rather than letting people realize the actual power they have over their own machine.
It may be, but just opening an HTML file and writing an app that uses your camera and microphone is not something that typically works out of the box (nor should it be).
Putting some effort into figuring out how the pieces fit together is not a bad thing. You can still set up HTTPS if need be without having to rely on a for-profit corporation. It's trivial to install a self-signed cert in iOS and OSX the last I checked (and I seem to remember it wasn't so hard in Windows either). It was excruciatingly bad in Android (well, mostly missing IIRC) around Gingerbread — but that too is a good example of why using products built by people with no comprehension of how to secure things is bad.
A learning curve is not inherently bad, but beyond that, especially with something that has such huge security implications, some understanding of WHY you should be encrypting camera feeds is something I'd want any dev working on a camera/audio recording app to understand. There's a reason that while CB radio is easily accessible, there are barriers to entry for Hams. With power comes responsibility.
If they do want to get into programming, Scratch and other learning DEs and/or node/js are much better paths than dealing with the layers of barnacles that have accrued over HTML to get to the OPA / Webasm / TS/JS etc "web programming" environments.
.NET and VSCode are free downloads, provide a managed environment, and C# is a good imperative language to start with. It also supports F# if you want to get into functional programming.
(Not to mention, you need to get to know the services in the first place, while a browser is directly accessible to you)
I’m not entirely in love with the needs driving this decision but I think it’s reasonable to make security and privacy decisions which benefit a billion people at the expense of making certain tasks slightly harder for a much smaller group.
Then... you can do it on localhost. I can't really imagine the new web dev who is using a separate computer on their local network as a dev server but can't figure out how to get a Let's Encrypt cert to use.
Before you mention mobile device simulation in desktop browsers, I’ll point out that mobile browsers often have quirks that are not present in desktop browser simulations. For instance, mobile Safari is subtly different from desktop Safari responsive design mode in many ways, and the only way that I know to actually simulate mobile Safari is with full-blown Simulator.app.
It really isn't that hard. I still can't imagine this hypothetical "new dev" who is doing cross-browser testing of this specific feature but can't install a simple SSL cert or get help doing so.
This isn't a barrier for entry to new developers, this is a specific use case that requires learning a minor, otherwise useful skill to get around. I think that is a totally reasonable trade-off.
I'm all for doing stuff to tell users that the HTTP page they're visiting is insecure, but telling people who are new to web dev to get a domain and learn how the world of SSL and domains work is actually a pretty fucking big hurdle. They'll have to get into that if they want to get serious about it, sure, but there's no reason to unnecessarily front-load the frustrating and complicated and unrelated parts. You may think it's acceptable; I value the web's accessibility to new developers.
Surely there's better ways to convince sites to use HTTPS than to say they can't use getUserMedia on HTTP.
What do you mean? Do some Androids block it? Or iOS? I've got it easily available in settings.
1) Get a signed certificate for a subdomain on a domain you own (e.g. dev.example.com)
2) Change your hosts file to point any local IP you wish, or setup a DNS entry for that subdomain that points to 127.0.0.1
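Concretely, the hosts-file variant of step 2 is a single line (using the example subdomain from step 1):

```
127.0.0.1    dev.example.com
```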
Though it doesn't work on iOS Chrome, and you want to test on iOS too obviously.
Does this behavior even exist in iOS Chrome? If it does, it exists in Mobile Safari as well.
> Mozilla will provide developer tools to ease the transition to secure contexts and enable testing without an HTTPS server.
But the bugzilla entry they linked to with that has been unassigned for two years, so maybe they changed their minds or figure the localhost exception is sufficient.
The last comment proposes a whitelist for development domains, but there's been no response to it.
This helped me (with the 68.0 Windows edition):
from false to true
Specifically, you need HTTPS for WebRTC, but you obviously have to use a self signed cert because local IP. You can ignore the cert error and load the page, but connecting to the websocket for signaling will still fail because websocket on iOS requires a non-self-signed cert.
Non-HTTPS websocket would work, but not from a HTTPS host. So you're in a situation where you need HTTPS due to WebRTC, but you can't use HTTPS due to websockets.
In trying to push people to HTTPS by disabling features on HTTP, we're making development a _much_ worse experience. I'm not sure that's right.
I use xca to create/manage the root CA and the certificates, but there are other tools to do this.
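For instance, the same flow with plain openssl looks roughly like this (a sketch; the file names and subject CNs are placeholders, and 825 days is the maximum validity browsers accept):

```shell
# Create a local root CA (key + self-signed CA cert)
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -days 825 -subj "/CN=My Dev CA"

# Generate a key and CSR for a dev hostname
openssl req -newkey rsa:2048 -nodes -keyout dev.key -out dev.csr \
  -subj "/CN=dev.example.test"

# Sign the CSR with the CA to produce the leaf certificate
openssl x509 -req -in dev.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out dev.crt -days 825

# Confirm the leaf verifies against the CA
openssl verify -CAfile ca.crt dev.crt
```

You'd then import ca.crt into the browser/OS trust store, and point your dev server at dev.key/dev.crt.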
Not true at all: SSL certs have nothing to do with the IP of the servers that use them; the servers just have to have the correct private key for that cert.
You can make any domain point to local IPs by using the hosts file or even editing DNS directly.
You can go into about:config and explicitly undo this setting if you're in some weird dev corner case where it's a problem, but you should definitely put "Stop doing AV stuff in an insecure context" near the top of your TODO list.
I do wish there was a public solution offering this type of easy dynamic DNS with https. (Sharing the script I wrote could cost a lot on dns hosting and increased server expenses.)
> adb reverse tcp:3000 tcp:3000
Not unpopular, just unrealistic. You can't do it on an iPhone, and if you're doing local web development you really ought to be checking it on an iPhone.
Broadly speaking, you want to test on the actual device your users are using, not an approximation of one.
openssl genrsa -out key.pem 2048
openssl req -new -key key.pem -out certificate.csr -subj "/CN=localhost"
openssl x509 -req -in certificate.csr -signkey key.pem -out certificate.pem -days 365
ip route add local 192.0.2.123/32 table local dev lo
which makes your system act like this is a local address without actually being one.
EDIT: nvm, I just realized that won't solve your issues with mobile development...
This developed over the years without any input or choice from the end-user. The device manufacturers, platform owners (Apple, Google, Microsoft, Mozilla) and app developers joined together and forced this surveillance apparatus on all end-users.
This power balance has to change.
Not particularly. At least on my 2015 rMBP, using code that I wrote (so I know it's not doing anything extraneous), the light is on for about a quarter of a second before the first frame is returned from the camera. This is because the LED is literally showing you when the camera has power (which includes any sort of handshake with the system), not just when it's capturing frames.
Is that enough that a user who's really concentrating on the screen will nonetheless see the light come on? Not necessarily. But GP has a good point about this being a feature that doesn't rely on the user being proactive.
Moreover, even if it wasn't an intern, how experienced do you think the engineer is at understanding human behavior in response to hacks? Many engineers I have met have difficulty conversing with other people and have even more difficulty actually understanding their behavior. I can almost guarantee you that even switching it on and off at slow rates will convince most people that there are electrical issues.
Also, do you honestly think the average electrical engineer is that well-versed in hacking paradigms? I would conjecture that software engineering is one of the leading gateways to understanding hacking, and during my electrical engineering degree most of the EEs acted like writing software was a nuisance they had to deal with to get through the degree. Hell, even most of the lab instructors we had from JPL looked down on software engineering and talked to bad EE students the same way a cliche high school instructor talks to bad high school students: instead of telling them "you'd better like asking 'do you want fries with that'," they would say (in the same tone), "you'd better be good at writing software."
How do you even know the budget of the department the engineer is in? How do you know they have the budget to spend weeks on securing a camera that most security-minded people are going to put tape over anyway? How do you know it wasn't some off-the-cuff comment in a meeting, someone saying "I can implement this feature in an hour," everyone saying "that's nice, you should do that," and the thought of security never going further than that?
Unless you were there, you don't have the slightest clue as to how well thought out the whole thing is.
You might not even see the flicker but if you catch it in your peripheral vision often enough, or you found out someone else was caught by it or it hit the news big time, you’d suddenly become more suspicious about that momentary flash. Maybe even paranoid.
Like being able to speak “” when the user clicks. Or something really short or kind of unpronounceable like “,,,,”. Apple could of course try to require the first speech to always be long enough to be unmistakably speech. But otherwise ANY user interaction is enough to enable ANY speech.
The alternative would be to have dialogs for everything: "Would you like to turn on the camera?" "Would you like to let this website use speech to text?" "Always remember my choice for this domain."
Seems giving the user a master switch that overrides things, and letting websites detect this and complain, doesn't have many downsides but has tons of upsides.
And then of course there is browser fingerprinting. It's now really hard to turn off without breaking tons of sites that care about the width of your window (the size of your phone), your operating system, and so on.
On my Mac, I find the LED very noticeable when it comes on unexpectedly! It's bright and green and not part of my screen. And yes, this has actually happened to me!
> Even if you are, once the camera comes on unexpectedly, it's too late.
Nah, they saw a few frames—they're very unlikely to be useful. What's more important is knowledge.
I agree we could have both, but each of these features does have a financial cost. I consider the LED significantly more important.
There are leaked schematics of MacBooks online (that unofficial repair shops use) so if you want to investigate this I'd expect it to be a good place to start.
(For video, anyway. I don't see any similar solution for audio.)
1. If it does happen to light up, what would you do, turn off your computer? That's shitty.
2. What if you're AFK and aren't looking at the light?
Instead of being snarky, how about you explain why this isn’t possible. Even if it was 100ms almost no one would catch that.
Correct. And that was my reason for NOT covering the camera. Because I would be able to see if it was on due to some malware. However, I did not expect a vulnerability like Zoom's, where a simple website would be able to trigger a webcam. Combined with external monitors, the LED would be potentially missed for a good amount of time. So I've reversed my position since then.
Edit: Just tested on my MBP. Opened photo booth, covered the camera with my thumb, shone a bright flashlight at the point just left of the camera. The display got brighter but Photo Booth showed no changes in what the camera was seeing.
What exactly is the risk? Have there been any actual cases of someone being spied on with their laptop webcam that would have been prevented by a switch? I'm only aware of cases where the webcam switch would not have helped (e.g. roommate sets up notebook to record owner naked). Even that is incredibly rare, or if not rare, almost never reported.
This site claims a guy made a business selling software to hack and remotely control webcams, complete with paid employees and $350,000 in income:
Create a piece of malware (which, due to some big company fuckups, can even be embedded in a webpage these days). Capture frames indiscriminately. Add some image-recognition algorithms (from OCR to machine learning, depending on what you want to do) to flag interesting hits.
Voila. Massive dragnet. Applications can range from simple blackmail (a-la Black Mirror) to industrial espionage.
But, I believe we (as technologists) have a responsibility to use and push for strong security practices. I don't want my kids to grow up in a world where creeps blackmailing them through their webcams is a possibility, or where a rogue politician has all the tools of absolute authoritarianism already set up and waiting for him.
A camera cover is a huge win. It's super easy and cheap (a piece of plastic), it's easy to understand (entirely mechanical), it works 100% when used, and its failure modes are obvious. Not all security controls are cheap, easy, and 100% effective, but this one is. And if you don't bother to use it in your bedroom, then that's fine, but every webcam should have one.
Also, there are many security programs that can surreptitiously take photos or videos using the camera. Usually this is to help in recovery after theft.
People would get someone infected, and then share the credentials so everyone could watch. So, I personally know of a handful of people that were spied on 20 years ago.
The use case is that you leave them turned off by default in case someone pwns you, and only turn them on when you need to use them.
Still miss the physical mic mute button on my old Thinkpad X230... and it didn't have a webcam button for that, but we've _almost_ had all of the right features in the past...
> The BSD code present in XNU came from the FreeBSD kernel. Although much of it has been significantly modified, code sharing still occurs between Apple and the FreeBSD Project.
I bet they were considering it.
Smartphones, for all their faults, are at least far less vulnerable to viruses than PCs.
Or at least iOS vs Mac.
Probably because he installs lord knows what npm packages to his production servers too.
Then we can play an honest thought experiment: how many people satisfy that metric? Don't forget to correct for how many PII data points one is actually handling.
If you don’t at least have some consideration of those factors, claiming malpractice seems fatuous.
Imagine you believed that steel had a 10% chance of spontaneous combustion. Regardless of whether it's true or not, if you believe that and you still build a bridge out of it, that's malpractice.
Everything has a limit. Otherwise why do you trust your compiler, your computer, your eyes, your sanity?
Be careful with a word like malpractice, and analogies that suggest blithe endangerment of human lives. It doesn’t leave a lot of room for honest engagement and suggests you either don’t understand the human mind, or the value of a human life.
It's about admitting you _don't_ trust npm packages, but you go ahead and use them anyways. That is malpractice, because you admit you know better but take action anyways.
"I know this procedure may do more harm than good, but I will perform it anyways because I'm too lazy to find an alternative"
That is textbook malpractice.
Though yes, if laziness is what makes it malpractice, then I’m the Jack Kevorkian of IT. I plead guilty.
In this case the camera or microphone is the least of your worries.
This will make it impossible for people to talk to each other, without first needing to be connected online to some certificate authority, or without some extraordinarily difficult pre-installation process, which is often not even possible on a phone.
HTTPS was important, but now it's being used to shoehorn in dependency on a centralized, online-only authority. Perfectly ripe for censoring anyone.
The bigger problem is that there has to be a single server hosting the app in the first place, which IMO is a severe flaw in the Web's architecture. But this change doesn't really make the situation worse.
I want to be clear though, I need it so that the user doesn't have to install the cert themselves, or have to be online to approve.
Previously, a user would connect to the local wireless network, then the router would open them up to a directory listing of the local apps available on the network (like the video/audio call), they click the link (just points to the dynamic subnet IP of a static file server) to load the offline HTML page which then connects to call anyone in the network, including users on neighbor and neighbor-of-neighbors routers.
Basically our own decentralized telecom!
Note that some domain validation methods involve the certificate authority resolving the domain to an IP address and trying to connect to it on the public Internet – but not all. Let's Encrypt, for example, supports the dns-01 method, which just requires a custom TXT record to be set on the domain. (But of course the TXT record itself needs to be on the public Internet.) That said, since your goal is to work offline, you may want to use a different CA that issues longer-lived SSL certificates, since Let's Encrypt only gives you 3 months at a time.
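In zone-file form, a dns-01 validation record looks like this (the token value is issued by the CA during the challenge and is shown here only as a placeholder):

```
_acme-challenge.example.com. 300 IN TXT "<token-from-the-CA>"
```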
Leaf certificates in the Web PKI specify one or more SANs (Subject Alternative Names; the "alternative" is because this is the Internet's alternative to the way the X.500 directory system was designed, and since you don't use the X.500 directory system you don't care about this). Each SAN can be an IP address (IPv4 or IPv6), a Fully Qualified Domain Name from the Internet DNS (like bobs-laptop.example.com), or a "wildcard" like *.servers.example.com, which is considered a match for any Fully Qualified Domain Name that has exactly one label (a name with no dots in it) in place of the asterisk. So that wildcard would match bigfiles.servers.example.com and www.servers.example.com, but not www.example.com or bigfiles.servers.microsoft.com.
You can get software (such as "Certbot" or "acme.sh") to help obtain trusted certificates periodically from the Internet automatically (at no cost) for a machine which has a Fully Qualified Domain Name on the Internet and is connected to the Internet at least sometimes. You may need to write software yourself to manage actually installing such certificates if your server software is custom - if you use common server software like Apache the tools can do it for you. The no cost option is provided by a charity, ISRG. If you're not a charity and appreciate the service you might consider sending a few bucks their way so they can keep doing this.
If your servers are not ordinarily connected to the Internet, but you do own an Internet domain name (e.g. example.com) you can just make up names for them in that domain and you will be able to obtain certificates for those made-up names, since you control the domain they're your names to do with as you please. But doing this is a bunch more work than the scenario where they're on the Internet.
> Subnet IPs are always different tho. Can I really get a cert for all subnet addresses?
SSL certs don't usually have anything to do with the IP address, that is usually handled by the hosts file / DNS entries.
There is no reason the non-profit can't get a domain and a free SSL cert and distribute that cert and its private key with the router software as a default, while allowing admins to install/configure their own domain and SSL cert.
The router can then MITM all requests to that domain using a SSL termination proxy for the file server.
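A sketch of what that termination proxy could look like in nginx terms (the domain, paths, and upstream port are all made up; the router serves the bundled default cert and forwards to the plain-HTTP file server):

```nginx
server {
    listen 443 ssl;
    server_name files.router.example.com;          # the distributed default domain
    ssl_certificate     /etc/router/default.crt;   # shipped with the router software
    ssl_certificate_key /etc/router/default.key;

    location / {
        proxy_pass http://127.0.0.1:8080;          # the local file server
    }
}
```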
Probably can even configure local network file servers, but better if not.
If we don't ever need to use domains in the mesh (we have a separate directory / search system).
Wait, I only have to have the certs locally (offline) on the routers?
Ahh, hmm, cause you're saying I could MITM it. But browsers (especially on mobile) all usually freak out when they go to `https://subnetIPaddress`, saying "your connection is not private" / "back to safety" every single time, with freakishly small "proceed anyway" links on mobile. Either way, mobile or not, this warning totally just trashes the experience. How do I fix that?
Or you're saying they still type in the domain? But doesn't that require existing internet to then go through? Or you're saying, router still MITM that, but happens to have matching private key, so then it is able to locally (offline) proxy the traffic into the mesh? Hmmmmmmmmmmm!!!! This might be very helpful. Sucks we still have to buy certs to run our own offline system - who has the longest certs? (Let's Encrypt is like only 3 months?)
Super thanks to everyone for helping us!
The script is automated and will ensure that the certificate is always up to date.
Inside the mesh you would need:
* Have an internal DNS that resolves myserver.mymesh.example.com to an internal IP address
* Distribute the private key and certificate to the internal servers of your mesh.
* Have the browsers/clients of your mesh use the DNS names instead of raw IP addresses. So users would have to learn to go to https://myserver.mymesh.example.com instead of https://a.b.c.d
What you will need to do is have an internal DNS server that resolves "myserver.mymesh.example.com" to an internal IP address. The server would use the *.mymesh.example.com private key and cert.
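With dnsmasq (common on router firmware), that internal resolution step is a single directive (the hostname and address here are placeholders):

```
address=/myserver.mymesh.example.com/192.168.4.10
```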
As for longest certs, the CA/Browser Forum Baseline Requirements (which all CAs have to follow) specify a maximum validity period of 825 days, or a little over 2 years. You should be able to find CAs offering certs with that period. (Why such a specific number? I have no idea.)
The router needs both the private key and a recent signed cert. Neither machine needs access to the internet to validate the cert. The client uses its preinstalled root certificate public keys to see if any of them signed the certificate provided by the server (any intermediate certs are also provided by the server).
The only online requirement is that the clients receive SOME non-local IP address in response to their initial DNS query for the domain (the specific IP address doesn't necessarily matter, since the router will be intercepting the request before it is routed using that IP address).
You can get longer multi-year certs, but Let's Encrypt doesn't charge and allows you to script the automatic regeneration of new certs.
Are you sure you don't want confidentiality on the audio/video calls on your network? After all it's passing through all the mesh nodes and vulnerable to eavesdropping.
What is the preferred way to include https in your development flow? Have an nginx or apache running? What about automated tests against a running application?
This is still mildly annoying.