Zoom’s response to this[1] is a wonderful example of how not to respond to security issues. It includes the classic tropes:
* Our users don’t care about security.
> Our video-first platform is a key benefit to our users around the world, and our customers have told us that they choose Zoom for our frictionless video communications experience.
* We have no way of knowing if this has been exploited in the wild, so it’s probably fine
> Also of note, we have no indication that this has ever happened.
* Other products have the same vulnerability
> We are not alone among video conferencing providers in implementing this solution.
* We decided not to fix it
> Ultimately, Zoom decided not to change the application functionality
And also a lovely one I haven’t seen before:
* We tried to buy the researcher’s silence, but he refused
> Upon his initial communication to Zoom, the researcher asked whether Zoom provides bounties for security vulnerability submissions. Zoom invited the researcher to join our private paid bug bounty program, which he declined because of non-disclosure terms. It is common industry practice to require non-disclosure for private bug bounty programs.
> Zoom invited the researcher to join our private paid bug bounty program, which he declined because of non-disclosure terms. It is common industry practice to require non-disclosure for private bug bounty programs.
Is an NDA really "common industry practice" for bug bounty programs? I know NDAs are common for pen-testing but it seems like an odd (and kind of dishonest) requirement for a bug bounty program.
Some kinds of NDA terms are not unheard of, like a 1-3 month period in which the vendor can work on a fix before disclosures go out.
That said, there's a slight disconnect between Zoom's two statements here. The first is that the researcher declined out of concerns over Zoom's NDA. The second is that NDAs are common. What this doesn't say is whether Zoom's NDA is cookie-cutter, or what its specific terms are.
If I were to guess, Zoom was using some unusual NDA and attempting to buy permanent silence.
Thanks for the explanation. That makes sense and seems pretty reasonable. The company should certainly have the opportunity to fix the vulnerability before it's made public and could be exploited.
> If I were to guess, Zoom was using some unusual NDA and attempting to buy permanent silence.
Considering that Zoom ultimately decided not to correct the issue I suspect you're right.
> - Offered and declined a financial bounty for the report due to policy on not being able to publicly disclose even after the vulnerability was patched.
I'd have to guess this as well. I have dealt with a number of public and private bounties, and not one of the researchers has ever rejected an NDA or not allowed us time to remediate before they could disclose this information to 3rd parties. Unless you count Tavis tweeting critical findings I guess.
And to be fair, none of the times I've engaged a private bounty have been due to some massively critical bug that impacted privacy or could hijack parts of client systems. I could see that if the researcher worked with Zoom and didn't feel like they took it seriously they would refuse this and just disclose it due to the impact it has.
The researcher makes it clear that they rejected the NDA because it was a permanent gag on any discussion (even after patching). With that in mind, and this clearly being an intentional design, I can see why it might come off as Zoom not taking the issue seriously.
Will they pay the rest of your team and your spouse as well? "I've already sent these results to a few colleagues around the world to test out, but don't worry, they won't disclose anything for 90 days".
Also, they seem almost entirely focused on "unwittingly joining a meeting" as the real problem here, ignoring the fact that they have made the extremely poor choice of exposing a dodgy control API on your mac to the entire internet. What are the odds there are no bugs in this shitty little HTTP server they snuck onto everyone's machine? The fact that they came within five days of losing control of one of the domains that has the power to install arbitrary code on every mac running this thing is absolutely insane, and they should be asking themselves 1) how that happened, and 2) how utterly screwed they would have been if they lost control of that domain.
In a more amusing alternate universe, someone discovered the zoomgov.com vulnerability, waited until it expired, snapped it up, then published an "update" that uninstalls zoom entirely. In a nastier one, they used this idiotic design flaw to pwn every zoom client machine out there.
It's exactly how you want to respond if you plan on sharing it publicly on Twitter in the hopes of fooling those not in tech.
If my mom stumbled into that article, she would likely think they perfectly explained everything (well... she would likely contact me but, still).
Given this news is already not sticking near the top of Hacker News and is barely reported elsewhere, it feels like they are already getting away with it for the most part.
> All first-time Zoom users, upon joining their first meeting from a given device, are asked whether they would like their video to be turned OFF. For subsequent meetings, users can configure their client video settings to turn OFF video when joining a meeting.
> Additionally, system administrators can pre-configure video settings for supported devices at the time of install or change the configuration at anytime.
TBH, they're not as dismissive as you're making them sound
That part just doesn’t seem very responsive. Unless Zoom is recommending that everyone should turn it OFF, and urgently releasing a patch to make OFF the default, why does it matter that the vulnerability is in an optional feature rather than a mandatory one?
That is a pre-existing feature, and while it mitigates one specific aspect of the issue, it doesn't represent a security-focused response. Yes, I am saying that's not good enough: an appropriate, non-dismissive response would commit to writing code to deal with the issue raised, subject to the industry-standard 90-day embargo, depending on how much importance they place on their users' security.
Stayed on that call for over 3 hours and I just have to say that it was one of the best experiences I've had on the internet in years.
People behaved pretty well considering it was a random public Zoom call (except for a few trolls, but nothing really bad).
It just felt like the internet of yore where random people would meet and chat and just be nice to each other.
Lots of interesting topics, people from all over the world, lots of surprised faces, random camera sights out the window, someone with a unicorn mask...
It was a blast. Thank you Jonathan for a great time!
I listened for a long time, learned a lot as well.
This made me think: is there any website that facilitates public conferences like this on Zoom-like clients? Basically, a bunch of people interested in a certain topic could join and chime in, going from topic to topic. It could be a very healthy discussion. People could post and schedule meetings, and essentially anyone who wants to learn could join. I do listen to podcasts often, but such meetings would be pretty different from podcasts.
Does this already exist?
I am not aware of anything like what you describe, but I did see some people in that Zoom call suggesting the creation of Discord and/or Slack channels.
However what I fear is that they will become like any other modern forum in that you will need heavy moderation, people will try to troll, etc.
The beautiful thing about Jonathan's call was its spontaneity, I think, and that everyone was so excited to talk about the vulnerability that the group had a single focus.
I might be too cynical, so maybe it's a good idea, and if someone suggests a place/site/forum to have these kinds of discussions I would definitely try it out.
Interestingly, I implemented every mitigation listed in the article: kill the web server process, remove and add an empty directory at `~/.zoomus` to prevent it being re-added, remove Firefox's content type action for Zoom, and disable video turning on when Zoom launches. When I visit a Zoom join link or the POC link above, Firefox prompts me to open the Zoom client to join the meeting, and when I click "Open Link" the client opens just as it should and joins the meeting.
This seems to confirm that nothing about the seamless user experience actually requires the presence of the web server. If you don't have the client installed, the page can prompt you to download it, the same as it does the very first time you install it. You can ask your browser to remember the link association and not prompt you again about which app should open the link. These are minor steps, even for a regular user, and ones with which most users are likely already familiar.
To me this further illustrates that the web server is truly just a ploy on Zoom's part to keep their hooks in users' systems, and have a way in that the user isn't privy to. Any other excuse they are giving about "enhanced experience" is dubious at best and deceitful at worst.
I can confirm that this vulnerability exists in RingCentral for macOS, version 7.0.136380.0312.
I was taken into Miguel's meeting, but since the host wasn't present, it simply let me know it was waiting for him (it also had a friendly notice: "Your video will turn ON automatically when the meeting starts").
I've changed my settings in Video > Meetings, just like in Zoom, to turn off my vid when joining. Also confirmed that the server is running on port 19424 (via terminal command 'lsof -i :19424').
> You can confirm this server is present by running lsof -i :19421 in your terminal.
Might be good to specify what the output would be depending on whether the vulnerability is present, like this:
"If the server is running on your machine, you'll get a line specifying which process is listening on that port. If the command returns nothing, your machine is not vulnerable."
Huh, I'm on Windows and it auto-joined the meeting too, with video enabled. I wonder if this is because at some point in the past I opened a Zoom meeting and allowed Chrome to open the Zoom URI in the Zoom app?
Great chat, I think you were right when you said all vulnerabilities should have a video conference for Q&A after release. It was really helpful to get a better understanding of the platform and the threats facing it.
The key thing here is they think this is a fair trade-off because Safari asks if you want to open Zoom.
> This is a workaround to a change introduced in Safari 12 that requires a user to confirm that they want to start the Zoom client prior to joining every meeting. The local web server enables users to avoid this extra click before joining every meeting. We feel that this is a legitimate solution to a poor user experience problem, enabling our users to have faster, one-click-to-join meetings. We are not alone among video conferencing providers in implementing this solution.
I do not believe that this is a fair trade-off given that any website can act on this locally installed server.
EDIT: I think they need to be made aware that this isn't acceptable. My reply to their support team:
I do not believe this is a fair trade-off - allowing any arbitrary web site local control of privileged software installed on my machine - because Safari offers a security prompt (specifically so that any arbitrary web site does not gain control of privileged software on my machine). I will be switching ~/.zoomus/ZoomOpener.app off, and considering other options until it has been fixed.
I realised I had a paid account, so I've cancelled that too. And I've also reported them to Apple, after seeing that the ZoomOpener app reinstalls the client - which is completely and utterly unacceptable.
I'm not sure how this can be construed as Apple's fault (and I've never owned any Apple products). A general purpose OS runs what the user installs. This is purely on zoom for backdooring the system. I'm not sure how many Mac users bother running ps every once in a while, but it seems like it wouldn't be that hard to detect either.
That said, I have to say Zoom's the only businessy meeting client I've used that doesn't require jumping through hoops on Linux. Maybe I should check if there are any devious backdoors installed on my system...
Apple only lets you install verified applications by default. Zoom is in the damn AppStore.
The whole point of making the AppStore a walled garden is so that these things don't happen. If an AppStore app can install a server on your machine that remains there, reinstalls the app after it has been deleted, and can be used to spy on you via the camera or DDoS you, then... the AppStore sucks.
> I think they need to be made aware that this isn't acceptable.
Oh, definitely. I cancelled my subscription because of this, but I wonder if the reason will make it through the corporate fog.
What is worrying is that more and more companies think it is fine to install "helpers", "openers" and other cruft. I recently removed several, and I still have to use software that scares me sometimes (DYMO web printing, Brother web printing). This should not be considered OK.
> I wonder if the reason will make it through the corporate fog
I really doubt it. Given the change control policies of huge corps and how awful it is to get anything new/get rid of anything they'll just toe the zoom party line and keep it.
> This vulnerability leverages the amazingly simple Zoom feature where you can just send anyone a meeting link (for example https://zoom.us/j/492468757) and when they open that link in their browser their Zoom client is magically opened on their local machine. I was curious about how this amazing bit of functionality was implemented and how it had been implemented securely. Come to find out, it really hadn’t been implemented securely. Nor can I figure out a good way to do this that doesn’t require an additional bit of user interaction to be secure.
Does anybody understand (and have a moment to explain) why the author says this is difficult to do securely? macOS has a simple facility for handling custom URL schemes, so my impulse would be to have `https://zoom.us/j/492468757` do a server-side redirect to a URL like, say, `zoomus://492468757`, which would launch Zoom locally using the OS's built-in services. This wouldn't require a third-party daemon of any sort, and would just be a regular application that the user could trivially uninstall.
Is there a security hole there that I'm missing? Or have I misunderstood the author's point?
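To be concrete, here's the shape of what I'm imagining, as a sketch (the zoomus:// scheme is just my made-up example from above, and this is nothing like Zoom's actual code):

    // Sketch only: an https join link that 302-redirects to a custom URL scheme.
    import * as http from "http";

    http.createServer((req, res) => {
      const match = req.url?.match(/^\/j\/(\d+)$/); // e.g. /j/492468757
      if (match) {
        // The browser asks "Open Zoom?" and then hands the URL to whatever
        // app registered the scheme with the OS. No local daemon involved.
        res.writeHead(302, { Location: `zoomus://${match[1]}` });
      } else {
        res.writeHead(404);
      }
      res.end();
    }).listen(8080);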
A custom URI wouldn't work as seamlessly as zoom's UX team would have liked. If you hadn't installed zoom, either a nasty message would tell you the protocol wasn't supported, or it would redirect you to a google search.
Their answer was to send people to a URL they controlled that brought you through the install process as easily as possible, but the issue they needed to solve was determining whether you needed an install or could just be redirected into the app.
They broke so many security rules just to shave off a few inconvenient seconds, and those seconds helped propel them to the top.
Yeah, it's a tradeoff by nature. This applies to security in general, not just computers. Having to unlock the door to your house when your hands are full with shopping is annoying, but the alternative is leaving your house unlocked all the time and trusting nobody will walk in.
Depending on the context (location, is there usually someone home anyway, value of stuff within the house) you may or may not find the tradeoff makes sense and voluntarily opt for the worse 'UX'.
As I understand it, they tried to design a new plane that wouldn't require pilots to be re-trained on how to use it, if they'd already been trained on an older model. That's the UX I'm referring to.
The fun thing is users mistakenly recognise the tradeoff as a sign of the security. If it was annoying it must be secure. Why would somebody waste my time for no purpose? See also placebo effect - of course I feel better, you gave me pills and I took them, duh, it's medicine.
This is the pattern: applications continue to be deeply flawed and heavily advertised as long as the company can be bought for a billion by IBM/Microsoft/Google/Facebook/TechOverlordOfTheYear, finally getting into a stable enough state to be part of the infrastructure by the time a full-featured open source version emerges.
Ah, yeah, the flow for when the app isn’t installed makes particular sense (at least as a motivation for why someone would implement something so awful). Thanks!
If you want to really break down their viewpoint on the situation, let's translate their PR statement line by line:
> Zoom believes in giving our customers the power to choose how they want to Zoom.
Zoom believes if their app isn't convenient to use, their customers have the power to leave their ass, as they are in an incredibly competitive market.
> This includes whether they want a seamless experience in joining a meeting with microphone and video automatically enabled, or if they want to manually enable these input devices after joining a meeting.
This includes making sure that they aren't asked to provide confirmation to access their camera/microphone, which would impede the convenience of the app for all participants. Fewer clicks means less thinking.
> Such configuration options are available in the Zoom Meeting client audio and video settings.
Stop complaining about this as we have given ourselves a legally compelling user defined control hidden in a single tab deep within our preferences.
> However, we also recognize the desire by some customers to have a confirmation dialog before joining a meeting.
We can tell you aren't going to drop this.
> Based on your recommendations and feature requests from other customers, the Zoomteam [sic] is evaluating options for such a feature, as well as additional account level controls over user input device settings. We will be sure to keep you informed of our plans in this regard.
We don't care. We have lots of users, and lots of success having this option turned on by default. The support costs alone telling non-technical people how to turn on their cameras don't make it worth it.
Oh come on. There is no easy way to send people without the app to an installer page; that is the issue. And that is something every single person wants.
Good point. Maybe macOS/iOS should have a feature where, just like registering a custom scheme that launches an already installed app (such as zoomus://123456789), software vendors could register an install URL to which users who don't have the app installed are directed. Let the OS handle security, where it should be handled, and still make the first-install user experience good.
Bad behavior for unknown protocols is not a macOS-specific problem. Instead of registering things with Apple, a link to the handler should be included in the protocol link, and the OS should send the user there if a handler is not installed. Something like <a href="zoom://12345" handler="https://zoom.us/install">
Your proposal is the closest thing to the best solution I have seen. It still has at least several issues:
* When Zoom is already installed:
- should be able to handle most instances
- needs to account for version management, e.g. the installed Zoom could still be a version too old to process the URI correctly. The version could be included in the URI.
* When Zoom is not installed:
- an information dialog needs to be somehow shown to the receiving user, asking them if they want to install 'Zoom'.
- that screen must include the 'uri' and validate certificates etc to prevent abuse (hence must necessarily be 'ugly' and not 'seamless')
- the language on that dialog has to be provided by the OS/Browser, not the software vendor, to prevent abuse. For similar reasons the Windows UAC dialog text can't be written by the vendor.
- the language employed by the OS/Browser has, of necessity, to be fairly neutral, neither encouraging nor discouraging installation, to prevent abuse. This is necessarily at odds with the UI principle of leading the inexperienced user through clear steps to achieve their intended goal.
- the user of average-to-lower-quartile experience, as of 2019, for a product with a client base of 40 million+, is likely not in a position to meaningfully distinguish a legitimate Zoom install uri from a malicious / imposter one. Hence any popular software using this install-from-uri-handler becomes an appealing target for malicious actors to mimic, which they will.
- some proportion of users will likely install from malicious links, and whichever product (let's say Zoom for example) is the most likely software for malicious actors to masquerade as will become the name associated with the attack in the mind of the wounded public
Those are some interesting points. I'm not convinced that versions should be in scope for this sort of thing though. If I'm writing a protocol handler, I think it's my responsibility to make sure my software can update itself, and make the default behavior that it should check for updates if it is given a URI it doesn't understand.
Secondly, version checks assume that the user wants to run this specific protocol handler. I as the user might prefer to run an open source non-official zoom client. I think the OS should only be trying to help me if I don't have any handler.
They have the opposite starting with Catalina (and already on iOS): Universal Links, which let an app register to take the first pass at handling zoom.us URLs. Android has always had this with its intent system.
Well, presumably if that's the case, their ZoomOpener could simply be configured to respond that it exists. That would be enough to either direct the user to a download page or open the protocol-specific URI.
If I'm understanding it correctly, the reason it does more than that is to bypass the "protocol-specific URI opening" UX.
I'm unclear on what subset of users are desktop-only Zoom users who aren't also familiar with the same "Do you want to allow this app to access your camera/microphone?" dialogs on mobile devices. This can't be a large demographic, can it?
In fairness, I find the fact that I need to tell WebEx to use my computer's audio every damn time I join a meeting quite annoying.
If only there was some happy middle ground between never asking and always asking ...
You seem to imply that they have a UX team but not a security team, so nobody convinced anybody else that this wasn't a good idea.
Without genuine security orientation, even if an expert realizes there is a security problem, who wants to be the boring paranoid pessimist who wastes time and attempts to ruin products, only to be staved off by the efforts of more productive employees that focus on adding value?
A sustainable company isn't built on velocity, lack of conflict, and willful ignorance.
Decisions need to be made between strong opinions about the right path forward. There needs to be balance and respect between these aspects.
Reading the PR statement, I highly doubt the people who have those strong opinions about security are being given a fair voice. They are probably there, but they have zero power to change anything within their product.
I think literally every VC-backed company isn't built to be sustainable; they are designed as random jabs at the marketplace in search of a good investment bet. I wouldn't even expect them to listen to this kind of advice, it doesn't apply :)
I have some experience with this, you can use javascript on the https:// meeting link to detect if the app protocol (zoom:// or whatever) exists. If the app protocol exists then go straight to the app protocol link. If it doesn’t then prompt the user to download and install Zoom. The JS is a bit messy and requires a few different approaches but it works on all popular browsers on Windows and Mac (Linux support wasn’t needed, so not sure).
Of course, the browser will pop up a confirmation dialog to ask if you want to open the Zoom app but this is a feature not a bug.
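One of the approaches looks roughly like this (a hedged sketch from memory; the names and timeout are illustrative, and as I said, real code needs several browser-specific variants of this):

    // If navigating a hidden iframe to the custom scheme blurs the window,
    // a handler is probably installed; otherwise fall back to the download
    // page after a timeout.
    function openAppOrDownload(meetingId: string): void {
      const iframe = document.createElement("iframe");
      iframe.style.display = "none";
      document.body.appendChild(iframe);

      let handled = false;
      window.addEventListener("blur", () => { handled = true; }, { once: true });

      iframe.src = `zoomus://${meetingId}`; // triggers the browser's open-app dialog

      setTimeout(() => {
        if (!handled) {
          window.location.href = "https://zoom.us/download"; // no handler found
        }
      }, 1500);
    }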
The basic problem is that you enter a meeting by loading a URL, and loading URLs is something any website can do. There probably needs to be a confirmation step before joining a meeting.
A custom URL scheme would at least provide an opportunity to confirm launching Zoom, even if Zoom itself didn’t confirm joining a particular meeting (which I agree it should).
I didn't see that. For me it just failed, and to join a meeting now I need to open the Zoom client and copy the meeting ID manually. Not a big deal for me, just wondering...
If Universal Links were supported on macOS we could get the best of both worlds.
The web server basically presents metadata in a JSON file (in the .well-known directory), which Safari/iOS uses to launch the app if it is installed, and otherwise just renders the webpage [0].
The app contains information about which domains it allows itself to be opened from, which would fix this issue.
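For reference, the apple-app-site-association file Safari fetches from the site's .well-known directory looks roughly like this (the appID and paths here are placeholders, not Zoom's real values):

    {
      "applinks": {
        "apps": [],
        "details": [
          { "appID": "TEAMID.example.bundle.id", "paths": ["/j/*"] }
        ]
      }
    }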
Universal Links are better than their localhost webserver insanity, but don't really solve this. A malicious website can still redirect you to a zoom.us URL that will instantly join the meeting without confirmation.
The underlying problem is that they want a URL to join a conference call hosted by any random user and share your audio/video without confirmation. And it's simply not safe to trigger that kind of action from a URL.
Positively terrible... Kudos to this researcher. I liked Zoom when I used it a couple of times, but the reinstall “feature” is a huge violation of my trust. Software from the company behind it will not touch my system anymore. Too bad really, because properly working video chat is hard to find. The App Store model is not my favorite, but at times like these, a forced sandbox and inspection by a trusted third party start to look like the only way forward.
If you had a sandbox, you wouldn't even need anyone to inspect it - since all the app's files would be contained in one place, uninstalling it would remove everything, and there wouldn't be a way to leave a server behind.
Right, I agree! My point is that preventing this situation from happening in the first place, through better sandboxing restrictions, is both more fair and more effective than having each app be individually approved. If you try to mitigate this just with app review, then 1) you're going to miss apps that do bad things, and 2) It introduces huge conflicts of interest for the reviewer. But if you were to have effective sandboxing, it wouldn't be possible for Zoom or any other app to do this in the first place, so that you would be able to trust the apps that you install even if they haven't been reviewed.
In response to all of the well-deserved criticism, Zoom just made two updates to their blog post[1] to announce that they will be completely removing the web server for all macOS users in a new release tonight, and that a follow-up release this weekend will fix video being on by default:
> JULY 9 PATCH: The patch planned for tonight (July 9) at or before 12:00 AM PT will do the following: 1. Remove the local web server entirely, once the Zoom client has been updated – We are stopping the use of a local web server on Mac devices. Once the patch is deployed, Mac users will be prompted in the Zoom user interface (UI) to update their client. Once the update is complete, the local web server will be completely removed on that device. 2. Allow users to manually uninstall Zoom – We’re adding a new option to the Zoom menu bar that will allow users to manually and completely uninstall the Zoom client, including the local web server. Once the patch is deployed, a new menu option will appear that says, “Uninstall Zoom.” By clicking that button, Zoom will be completely removed from the user’s device along with the user’s saved settings.
> PLANNED JULY RELEASE: Additionally, we have a planned release this weekend (July 12) that will address another security concern: video on by default. With this release: 1. First-time users who select the “Always turn off my video” box will automatically have their video preference saved. The selection will automatically be applied to the user’s Zoom client settings and their video will be OFF by default for all future meetings. 2. Returning users can update their video preferences and make video OFF by default at any time through the Zoom client settings.
You know you've blown it when the following appears in a buzzfeed article about your software:
> open the application called, “Terminal.” Copy and paste this text: lsof -i :19421. Press enter. You’ll get a string of mumbo jumbo. Underneath the text “PID,” copy the string of numbers underneath. Then type “kill -9” (without the quotes), add a space after -9 and paste the PID string of numbers. Press enter. The server has been killed.
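For what it's worth, that whole dance collapses to a single line: `kill -9 $(lsof -ti :19421)` (lsof's -t flag prints just the PID, so it can feed kill directly).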
What I'd really like to see now is them addressing the fact that their initial response to this was terrible, as if whoever was making the decision had no idea how bad this design was from a security standpoint.
This whole thing reads like a security response driven by marketing and branding considerations. They put a lot of work into that seamless experience they're so proud of, apparently without security professionals being involved.
These factors point to a company that fundamentally doesn't take security very seriously. That's not a fast, easy, or cheap thing to change, and I suspect it won't change any time soon.
> We’re adding a new option to the Zoom menu bar that will allow users to manually and completely uninstall the Zoom client, including the local web server.
Including the local web server that definitely doesn't exist anymore anyway after this patch?
I uninstalled it via their new patch, but it doesn't remove all files. I think it's just caches and logs left, but who knows. If you want to purge this malware with fire you still need to follow the instructions at https://apple.stackexchange.com/questions/358651/unable-to-c...
HIPAA provides an effective strategy for holding Zoom’s feet to the fire in cases like this. Since the company markets compliant video conferencing for healthcare professionals, they are classified as a Business Associate. It is quite likely that a well-written complaint on the HHS Office of Civil Rights site would result in further investigation and regulatory action.
Only insofar as people usually do not complain. I’ve worked with software clients on OCR investigations that were prompted by far less substantial complaints.
Not sure I follow the CORS angle. The linked stackoverflow question mostly seemed to be someone who was confused about how CORS works, and the issue in the Google Chrome tracker was closed as WontFix because they couldn't reproduce it and said it should work.
I'm nearly positive that CORS from localhost works OK. I set this up all the time for local development. For example, I run a client CRA app on localhost:3000 and an API on localhost:3001. The API sets the CORS headers and the CRA app can make requests to it.
If this is correct, then I believe all Zoom needed to do was have their localhost application set CORS headers for their production domain. This would have allowed AJAX communication, and only for JavaScript running on their domain. Instead they did this totally hacky method that lets the whole world interact with the localhost server...
Maybe I missed something but if they could have done this the right way and didn't that is much worse IMO...
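Something like this is all it would take on the helper (a sketch assuming a Node-style server; the port is the one from the article, everything else is made up):

    // A localhost helper whose responses are only readable by the vendor's origin.
    import * as http from "http";

    const ALLOWED_ORIGIN = "https://zoom.us";

    http.createServer((req, res) => {
      if (req.headers.origin === ALLOWED_ORIGIN) {
        // Only scripts running on zoom.us may read the response.
        res.setHeader("Access-Control-Allow-Origin", ALLOWED_ORIGIN);
        res.setHeader("Vary", "Origin");
      }
      // ...handle a hypothetical /launch?confno=... here. Pages from any other
      // origin get no CORS headers, so their JS cannot observe the response.
      res.end(JSON.stringify({ status: "ok" }));
    }).listen(19421, "127.0.0.1");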
You're 100% correct, and while someone has pointed out the proper headers that need to be set on the bug report here: https://bugs.chromium.org/p/chromium/issues/detail?id=67743, it's been drowned out by people who don't seem to understand the issue:
CORS is set up to protect data from being given to a third party, e.g. to keep JS requests from obtaining and observing data they shouldn't have access to. Since images are loaded by the browser (the second party), there is no such protection, because a third party shouldn't be able to read them anyway (barring some other vulnerability). It's assumed the first party is correctly doing what it's supposed to; an example would be fetching an image from a CDN.
Hmm I still don't understand why they have to use the image hack. Since they control the server on localhost they can set the CORS headers to allow all domains, then JS from a site could access localhost right?
Yes. I don't think there is any good reason to use the image hack. Further, they could have made the CORS lock only the production zoom domain for better security...
> One potential hiccup I encountered was that Firefox blocked my XHR request due to a policy against "mixed active content". This was because my origin site was accessed through an HTTPS connection and the localhost server was only HTTP. That's one potential reason Zoom might have opted to use their <img> garbage; since <img> elements are passive not active content, they could avoid using HTTPS on the localhost webserver. That's not a good excuse, but clearly they weren't interested in finding a good solution -- whatever the problem that prompted the <img> hack was.
Very interesting, thank you! This is definitely no excuse for not filtering the origins -- they just don't get the filtering for free the way they would with active content, but they still need to do it, or, since they are a native app, generate and install a cert -- but it could be the motivation for the decision to go this route, which is really useful to know.
I'm trying to think of the real-world implications and how this would play out.
Normally this would be pretty obvious, wouldn't it? Users would see Zoom open into some weird meeting, and close it.
Presuming the exploit cannot avoid bringing the Zoom app to the foreground when it joins the meeting and activates the camera/mic. If it can do that and stay in the background, all bets are off.
In spite of its obviousness, it's still pretty darn scary --
Scenario 1: malicious website/app opens link while you're sitting there.
You're sitting in front of your computer, you see Zoom open, you're like "WTF?!", close that shit, uninstall Zoom; hopefully discover how to permanently remove it (it otherwise leaves a localhost http server running that can reinstall itself).
But crap, the hijackers have, even with a few seconds of video: your face, your surroundings, the audio of your surroundings, all of which can increasingly be fingerprinted. That alone is very scary. Just to be in an unintentional meeting for a moment is very disturbing. A violation of sorts.
Scenario 2: malicious website/app delays opening the link until some threshold of mouse/KB inactivity is reached.
Activate the Zoom link and hope the person is AFK. Spy on their home/office/whatever. Also a violation.
Are there other scenarios I am missing?
Personal note 1: I'm happy I switched to a Linux laptop after finding last year's MBPs disappointing (and the TB revolting; I have a physical escape key!).
Personal note 2: I do actually like Zoom a lot, it's an awesome video conferencing app. But this should be fixed for Mac users.
I wonder if this works in an electron app (like Slack maybe) displaying it?
Maybe you could intentionally send this link to someone shown as inactive on Slack, and have the Slack webpage-preview thing run enough JavaScript to pop open Zoom with the camera and mic running...
I'd test it myself, but I deleted Zoom and the sneaky localhost web server while I read the article...
It says that the server sends an image with certain dimensions back as an error code, so I wonder what you could do if you served some simple HTML that pulls from the local server via a tag that renders in the preview?
I imagine Slack would do that on the client, since it’s built on Electron.
> Activate the Zoom link and hope the person is AFK. Spy on their home/office/whatever. Also a violation.
I think this is the most likely scenario. There are ways you could potentially delay it (e.g. they leave a tab open and you don't open the link until a certain time)
Scenario 1 extended: Add this into an ad or a popover for a porn site and potentially capture some very compromising footage.
Scenario 3: Add it as a tracking pixel in an email.
I guess there are all kinds of scenarios since it's an unsecured API that responds with an image. You can trivially embed it in anything that renders HTML.
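The demo really is just an image tag pointed at the local server, something shaped like `<img src="http://localhost:19421/launch?action=join&confno=492468757" />` (parameters as I remember them from the post, so treat them as illustrative), which is why anything that fetches images from HTML (mail clients, chat previews, ads) can trigger it.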
Well, the company and product are dead to me now; gonna hassle our CTO to switch. I just really hope there's some dev at Zoom who hated this whole installing-backdoors idea and who's gonna have the greatest "I told you so" day at the office tomorrow.
We work with a lot of hospitals and find that video conferencing tools are often blocked by their IT departments for security reasons. I’m expecting Zoom to find itself on that list in short order.
“On Mac, if you have ever installed Zoom, there is a web server on your local machine running on port 19421.”
...
“All a website would need to do is embed the above in their website and any Zoom user will be instantly connected with their video running. This is still true today!”
This server is still running on my machine despite having "removed" Zoom a few months ago (macOS).
Guess I was a bit naive in thinking just trashing the .app and immediate artifacts in Library would do the trick.
EDIT: I missed the .zoomus directory in my home folder that had the culprit. Funny enough Zoom's instructions on how to uninstall the app on macOS just points to documentation from Apple and wikiHow (???) with standard methods that don't fully remove Zoom.
I'm sure Zoom intentionally failed to tell you how to remove the web server. After all, if it's still running, then it's just that much easier to reinstall Zoom on your machine.
I’m surprised more enterprise IT orgs haven’t flagged this behavior, or simply made it impossible via local machine policies that would prevent running a web server.
Does anyone know how this web server starts itself after restarting your machine? As far as I know, a `~/.zoomus` directory can't restart a web server after your machine restarts.
It doesn't start on boot, it starts on login. It appears as a Login Item named ZoomOpener in your local user account in the System Preferences -> Users & Groups.
Additionally, when you launch the main application, it will check whether ZoomOpener is running; if not, it will start it up. The main app will install and register ZoomOpener as a Login Item if necessary.
> This being said, I also recommend that any researcher that finds a vulnerability in Zoom’s software does not directly report the vulnerability to Zoom. Instead, I recommend that researchers report these vulnerabilities via the Zero Day Initiative (ZDI). The ZDI disclosure program gives vendors 120 days to resolve the vulnerability, the ZDI will pay researchers for their work, and researchers have the ability to publicly disclose their findings.
On my Mac, I have uBlock Origin installed in my browser, configured to always block 3rd-party requests and 3rd-party frames, and it prevents both of the POCs completely.
I have one browser that I use for work email and video conferencing, where the system grants the browser access to the camera/microphone and the browser allows Google Meet to access the camera.
I have another browser where the system does not grant access to any of the devices (camera, microphone, USB, etc.), and I use that for web surfing.
And I strictly don't install any plugins for video calls. I have refused to join meetings where people try to make me install random binary software on my machine. There's always phone call for such situations.
I feel better about dedicated apps on iPhone where again I can install and grant permissions before the call and then uninstall the app completely. On iPhone I don't do any web surfing. I have Firefox Focus for occasional emergencies to open the unknown web.
Refusing to install software for video calls is a good policy. Also, having a throwaway workstation for such things (also Skype, which is spyware of the worst kind) is useful too, for when it’s a pre-sales call and security can’t outweigh closing a six+ figure deal.
And I am in the crowd of Mac users who tape over their camera. When it comes to video conferences, the most I have ever seen shared is the desktop. What type of work do you do that uses video for portions other than the presentation?
You weren't asking me, but I run into this same thing.
In my business -- project management software -- I'm in online meetings a LOT (say, 20 hours a week?) because everyone in my company is remote. We have never, ever used video. It just doesn't come up. Nobody wants it internally, and none of our customers ask for it in external meetings. I don't think any of them use it internally, either (and many of our customers are large, distributed organizations with offices all over the place).
This seems normal to me.
My neighbor is an IT VP for a health care concern. She travels a lot (30-40%), and when she's home she's in online meetings pretty much all the time. And in her company, video is ALWAYS included. I have no idea why, and neither does she; it's a cultural thing.
The upshot, though, is that I work in t-shirts and cargo shorts, and she has to be "office ready" even though she works at home. However, I will note that, if I run into her outside when she's walking the dog, it's not unusual to see her in a nice blouse, hair and makeup done, but wearing yoga pants or whatever. Which is its own kind of hilarious.
Sounds like false security though. At least the camera has a light (which, last I heard, is synced with the camera at the hardware level), so you know if someone's watching; what you have no control over is the microphone.
Video meetings are so much more effective than audio meetings. I went from a company that always uses video to a company that rarely uses video and the difference is huge.
Often the most important parts of meetings are nonverbal.
To ubermonkey's point, some of this can be a company culture thing.
Certain teams at my workplace use webcams all the time, others never. My team leverages them quite a bit, as our team is all over the world. It helped solidify our team members not just as random voices on a phone line, but as actual people who we will likely never meet in person.
Another small thing (big for me) Zoom does is register their app as the handler for `tel:` links every time you launch it, with seemingly no way to disable that. Companies that make themselves the default for something on your machine by force are not to be trusted.
I’m not surprised they start a web server from under their users, and that their response to the vulnerability was lacklustre.
Yep, they have lost my trust too, especially with their terrible response on the blog. And I don't trust that they simply won't remove my mitigations if I have that app on my system again.
Why isn't zoom running fully in the web browser at this point? Meet does this, and as far as I can tell the quality is indistinguishable from Zoom. Can someone with a better understanding of the underlying protocols shed light on why Zoom continues to ship a separate desktop app?
Allowing meeting participants to use the web client to connect to a meeting requires the meeting host to explicitly enable the option in their advanced settings (it is disabled by default).
"as far as I can tell the quality is indistinguishable from Zoom"
In my experience, the quality is similar to Meet when all parties have great internet connections. But if one or more parties has high/variable latency or packet loss, then Zoom provides a much more smooth experience.
Not related to zoom, but I'm working with a team which uses 'highfive'. I can't, for the life of me, get the downloaded desktop app to ever work. There's this perpetual dance of "you need to be logged in" and "register now" and "log in". I was thinking it was something to do with the VPN, but it seems to be the same on or off. However, grabbing the full URL and pasting in to Chrome, works like a champ. I'd prefer to use the desktop app, but I can only get the browser version to work.
Why is the web client (on Chrome, etc.) so bad on Zoom? I mean, people are building Google Earth in the browser. It's not just the bad video experience; even the product experience is seriously broken.
For example, the default audio setting when you sign in to the web video client is to connect using PHONE AUDIO. And in case you figure out how to click the tab to use computer audio... it breaks down a couple of times in asking for browser permissions (camera, mic). It is unusually bad for something that is supposed to be that good.
Hangouts still rules when it comes to web-based video conferencing. And for countries with massive Linux-based usage (like India), Zoom is not a very viable option.
Jitsi Meet is also an excellent web-based video conferencing tool. They have a basic comparison of WebRTC vs Zoom [0], which actually has the same demonstration video as your second article. I posted a basic overview elsewhere in this thread [1], and I would strongly recommend it.
It's free. It has apps for iOS and Android and you can also phone in. I've never used it for such large meetings, but apparently it handles 100+ people fine [0], and they have a blog post about scaling [1].
The Zoom client on Linux used to (?) have a nasty command injection. The URL for joining a meeting got passed to some bash reinvocation (so they could set the library path if my memory serves me). A specially crafted URL could execute commands on the system. I haven't been too interested in using Zoom since seeing that.
At least that was patched. These sorts of issues are frustrating, because as a Linux user I really want to like Zoom -- I appreciate that they treat all platforms pretty equally (Mac, Windows, Linux, Android, iOS) with native apps. That is a rarity.
For the longest time the Linux client would just crash randomly. It also tends to heat up your laptop and use all of your cores at 100% if you're looking at someone's screen.
Just run `strace -f zoom 2> wtf.zoom` to see all of the shit it does (looks like it is polling for events like crazy).
I tested the repro steps and found that ZoomHelper was not listening, although it was installed. In this case I was prompted to download zoom.pkg rather than having video activated.
I'm guessing it was because I have the macOS firewall set to strict (no listening ports).
Also, here's a nice tip to show all listening apps (good habit while cleaning up)
"To shut down the web server, run lsof -i :19421 to get the PID of the process, then do kill -9 [process number]. Then you can delete the ~/.zoomus directory to remove the web server application files."
So a browser allows a random remote website access to stuff running on the localhost interface? Is this a good idea? Stuff like camera access I can at least disable...
The browser allows anything according to the CORS configuration of the target server. Perhaps it would be a good idea to prompt for access to localhost/127.* resources.
Hosting a web server on localhost is the equivalent of adding a backdoor on your customer's machines. How does a product team even reach this decision?
The most charitable interpretation of the Superhuman read-receipt problem is kinda the same thing: they had an idea, thought it was good, and then did some deeply shitty things to make it work. And nobody at Zoom or at Superhuman had the organizational power to stop it.
Do you know if this vulnerability might manifest in other ways, and permit remote participants to force you into sharing or viewing your screen silently?
One of the first times I'd ever used Zoom was in a call with a startup trying to pitch my company on something. The remote participant said something later in the call that was uncannily prescient and related to notes I had in a separate application window. I wrote it off as coincidence, but the phrasing used (and the fact that it was an answer to a question I hadn't asked) seemed nearly verbatim to my written notes.
This is crazy, heres a video what happens after deleting and opening your URL.
https://www.youtube.com/watch?v=DMY7Z9Fe0ic
Before testing that I had the app itself removed months ago...
These meeting apps feel like the browser plugins of the 2000's. There are so many that do almost the same thing they now resort to seriously insecure methods to make sure you have theirs installed and never remove it.
Apparently one less click is a competitive advantage, whatever the cost.
I don’t get Mozilla and Chromium’s responses. I can think of few cases in which a website should be allowed to issue requests (CORS, img, or otherwise) to an address on the local network and none whatsoever in which a website should be able to contact localhost.
The fix seems straightforward. Require user permission to access the local network (subject to appropriate heuristics as to what “local” means). Require a config option and user permission to access localhost. Problem solved.
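For the heuristic, I'm imagining something like this (a sketch; the function name is mine, and a real check would have to resolve DNS names first, since an attacker's domain can point at 127.0.0.1):

    // Loopback plus the RFC 1918 private ranges count as "local".
    function isLocalAddress(hostname: string): boolean {
      if (hostname === "localhost" || hostname === "::1") return true;
      const octets = hostname.split(".").map(Number);
      if (octets.length !== 4 || octets.some(o => !Number.isInteger(o) || o < 0 || o > 255)) {
        return false; // not a plain IPv4 literal
      }
      return (
        octets[0] === 127 ||                                        // 127.0.0.0/8 loopback
        octets[0] === 10 ||                                         // 10.0.0.0/8
        (octets[0] === 172 && octets[1] >= 16 && octets[1] <= 31) || // 172.16.0.0/12
        (octets[0] === 192 && octets[1] === 168)                    // 192.168.0.0/16
      );
    }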
The problem with asking permission is dialog fatigue and similar.
As far as supporting local content: Historically a lot of terrible (read: Enterprise, H&R Block tax software, etc) apps are glorified webpages, coupled with a local server that provides things like FS access and malware installation. Those apps use a kludge of remote and localhost urls, and generally expect to work.
I suspect at this point though that browsers will just start going for the "no access to localhost" route as this practice is mercifully dying out (alas in favor of Electron apps shipping full, but out of date, browsers).
To me the bigger problem is: Zoom installed a server on a machine, without consent, with the ability to install software (without consent). Removing the browser's access to that service doesn't mean anything because an attacker can always just directly attack the server.
Even if the server locks connections to exclusively coming from localhost they've provided a service that can install and launch software, which can therefore be used as a sandbox escape - e.g a super constrained network service gets compromised - the idea is that service can't modify the filesystem or what have you, but now it can just connect to localhost and get a file written to disk.
People keep complaining about Apple "locking down the system", but it's because of developers like Zoom that Apple needs to do this: the average user is not going to see this post, and Zoom has clearly decided that it is in their interest to leave a service running that can install software for them.
I hope that apple drops the XProtect hammer on the server binary, and the ban hammer on their signing cert.
They have a web server on your machine. If the browsers blocked this, Zoom would just find some other way to handle it, since they have a server running on your computer.
I don't think the browser vendors are to blame here.
> The problem with asking permission is dialog fatigue and similar.
That’s why I suggested config option and permission. There’s no dialog fatigue if you never see the dialog.
That being said, there really ought to be a little menu of permissions that can be granted to a website such that the website cannot make it blink, flash, or otherwise draw attention to it. Crud like “allow push notifications” could go there. Granting push notification permission to a site is fine, but I don’t think sites should be able to ask for push notification permission.
In my experience if you tell users "to use this awesome thing, you need to enable this option", a reasonable portion will do it.
It sounds like what you're saying is that there should be a dialog, but only if you've already enabled a setting, which raises the question of "if this feature is so bad you don't want it exposed, why would you have it available at all?".
What if you have uninstalled Zoom? It seems that it leaves a web server on your machine that will re-install Zoom if it receives a request to join a meeting.
macOS is gradually adopting that starting with Catalina, e.g. System Extensions (which will replace Kernel Extensions), and I assume DriverKit drivers too, are installed with app bundles and uninstalled when the app is trashed.
The whole reinstallation thing freaked me out, since I did try Zoom a while back, but apparently my own uninstall process kept the reinstallation hack at bay.
By this I mean:
I have no local web server running on 19421; and
Your link doesn't launch or reinstall anything for me.
Now, something I do that most people probably don't is periodically check StartupItems as well as the LaunchAgents and LaunchDaemons folders, so I can remove anything left over.
I do not mean to trivialize this problem, because what Zoom has done here is egregious and unforgivable, BUT is it accurate to say that the reinstall behavior depends on
1) usage of Chrome, and
2) the presence of a StartupItem / LaunchAgent / LaunchDaemon?
I ask because it didn't work for me, even though I still had the ~/.zoomus shit in place (obvs, I don't anymore).
I just want to make sure I understand it properly, and that I've taken the necessary steps to prevent Zoom's unwelcome return.
I use uMatrix, and I've seen localhost show up as a domain a site tried to connect to quite a few times. I never gave it too much thought since I block all non-first-party resources by default anyway, but I now realise it could indicate the use of tricks like this to attempt to communicate with some other process running on my computer. I'll now make sure to look closer whenever I see this. I bet Zoom isn't the only one doing things like this.
They're not wrong. Empirically, users explicitly preferred Zoom because it lacked the "ask the user" step before starting a session. Less security is a user visible advantage.
Same problem Microsoft faced when it added "UAC" in Vista. Admittedly the implementation might not have been the best from a usability perspective but I think any attempt at implementing proper privilege management in Windows would have had many users complaining and not seeing the point.
I guess the lesson here is not to give your users bad habits for the sake of convenience otherwise it'll backfire if you ever want to do things right later. MS had everybody run as root for decades before they finally decided that it might not be such a great idea after all, and then they had to face annoyed users and bad publicity.
That being said I can't really imagine how having a non-intrusive "do you want to start the call" dialog before initiating the call can be considered a deal breaker. I assume you could even reduce that annoyance further by adding a "don't ask me again for this website/user/whatever" checkbox. Do you really think that would hurt Zoom significantly? I've never used their product so I can't really form an educated opinion.
This is especially stupid because I have no doubt that now that it's been made public some people will abuse the vulnerability, if only for fun.
It wasn't bad habits. Up to Windows XP, which introduced user separation on consumer-oriented Windows (NT and 2K were meant for businesses, and businesses with networked PCs were really meant to use those), all personal computers were fully controlled by their users without any notion of privilege separation; this is a behavior that traces its lineage back to the original Altair 8800. Computers weren't networked, and those that were were either running a different OS (NT, Unix, whatever) and/or controlled entirely by a single entity (a company). Or people just didn't care and used Windows 9x.
And honestly I do not think it is a bad habit even today. UAC is intrusive; the main reason you do not see it as much as in the past is because applications nowadays work around it: see how Chrome or even VS Code saves the executable files for their updates to your %APPDATA% folder (where normally only regular data goes) to avoid the UAC annoyance of going through Program Files (which makes the UAC protection pointless), or how app stores like Steam change the permissions to "everything allowed" to be able to modify the folder contents.
People are using computers to do specific tasks they want to do, anything else is an annoyance and something they'll want to avoid.
Today's security issues come from things a lot of developers and companies simply do not want to acknowledge: trying to put everything online, connect all computers together, trying to have everything controlled by whoever writes the applications users use (putting everything online is a way to do that), trying to come up with monetization schemes where users pay nothing out of their own pockets, trying to make users pay subscriptions instead of one-off fees (the excuse is often that they have to somehow keep their servers going, willfully ignoring that the developers/companies are those who decided to make something run on a server in the first place and that by doing that they are the ones in control).
A lot of security issues would be gone if computers weren't so connected to each other. Sadly i do not see that happening any time soon since no developer wants to give up that sort of control (some developers nowadays do not even know how it is to not have it) and no company wants to get rid of the biggest excuse they have to ask for continuous payments.
Personal computers back in the 80s and 90s were very insecure, but that didn't matter because they weren't so connected as they are today. It isn't surprising that pretty much all famous security issues of the time (like the ILOVEYOU worm) happened exactly as that connectivity started getting widespread.
I think the only hope there is is that the IoT craze will blow up everyone's collective faces and realize that it might not be such a good idea to connect everything after all. Sadly the more cynical side of me thinks that what will happen instead is the introduction of more draconian user hostile measures which end up with the users losing every more control to big companies that control their devices and OSes in the name of security and usability (more like dumbability) and any voice against that would be marginalized as "you are a power user, you do not matter" (ok princess, then what are power users supposed to use after you lock down everything? - i guess the answer is somewhere between "expensive licensed workstations" and "nothing, now piss off").
I had viruses and anti-virus software years before I had internet access. Getting a virus was trivial in the '90s, when Windows had no security and any program could do anything.
Any program can do anything in modern Windows too; only special places like C:\Windows\System[32] are protected. I'm not against such protections, since they can easily be overridden if needed, and in day-to-day use they harm no one and don't negatively affect the usability of the system.
I'm not saying that we should go back to the 90s entirely; we have had a lot of good improvements over the years. I'm just hoping we'll tone down the "connect all the things" a bit, since that is the main source of a lot of security issues.
I agree that less connectivity is better for security, which is why I think rushing to IoT-everything is premature.
However, unless a computer physically cannot be connected to the internet, it must implement all the protections it can. Just having wifi disabled or the cable unplugged is a false sense of security.
The question is about the "all the protections it can" part - what does that imply? "All the protections" can include user-hostile (and not just in terms of usability) misfeatures that hand control to OS vendors in the name of security, even though the real purpose is controlling what users can do with their own devices (for a variety of reasons, with market segregation and forced obsolescence among the more benign ones).
All the protections that help the machine survive in a non-compromised state in a hostile environment. I'm thinking of things like not giving random users permission to write over system files, or not giving processes access to peripherals (camera, microphone) without explicit user consent.
Your comment is a bit ambiguous. Are you saying that even retail software could be considered a virus just because of what it can do on the system? Or was virus software making it onto the machine in other ways?
When I was a kid it was quite normal to pass around floppies, and later CDs, full of warez. These contained viruses more often than not, especially since an infected machine would auto-infect any writable media it got hold of.
I think you make good points, but to sum it up: privilege separation wasn't needed pre-internet because vulnerabilities and computer viruses weren't that big of a problem back then.
> A lot of security issues would be gone if computers weren't so connected to each other.
I mean, sure, but having computers connected together is pretty damn amazing.
I'm actually drawing the opposite conclusion from yours: I think UAC doesn't go far enough. You need more finely grained permissions. That seems to be the trend too: Android, SELinux, OpenBSD's pledge... It's all about giving every process only the privileges it needs and nothing more.
Finely grained permissions mean bad UX, and as Android has shown, you gain nothing practical from them: people learn to ignore the prompts, pretty much like they learned to ignore the UAC warning, while on the other hand you lose the flexibility, functionality and openness of the entire system (all significant pillars for ensuring user control).
Note that I'm not saying to disconnect computers entirely; I'm saying to rely less on connected computers. Simple stuff: use LibreOffice or MS Office instead of Google Docs; use a desktop calendar and other local tools instead of relying on "web apps"; instead of using a "cloud-based solution" for syncing data with your mobile phone, just connect it directly to your computer (via wifi, bluetooth, whatever - this is mainly a UX issue - it doesn't have to round-trip through someone else's server). Stuff that makes you and your computer less reliant on the network.
Not everything can work like that, of course, but then instead of trying to isolate applications from each other with fine-grained separation, we can simply treat the network itself as hostile and defend against it (e.g. applications that can access the network cannot access anything outside a designated folder - the OpenBSD pledge approach, but forced on all applications that access the network). I think that is a much easier, more flexible, more user-controllable and understandable approach than UAC on steroids or any other approach that relies on application segregation.
It does require a massive shift in developers' mindsets and in companies' profit incentives though, which is why I do not see such a thing happening.
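As a rough illustration of the designated-folder idea, here is a sketch using macOS's (now-deprecated) sandbox-exec; the folder path and app name are invented, and the profile is minimal rather than something production-ready:
# deny all file writes, then re-allow them only inside one designated folder
# (later rules take precedence in sandbox profiles)
sandbox-exec -p '(version 1)
(allow default)
(deny file-write*)
(allow file-write* (subpath "/Users/me/netapp-data"))' ./some-networked-app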
> we can simply treat the network itself as hostile and try to defend from it (e.g. applications that can access the network cannot access outside of a designated folder - the OpenBSD pledge approach but forced on all applications that access the network)
Won't work. Malicious actors (both malware developers and companies with user-hostile business models) will just work around it, for instance by shipping two applications, one connected to the Internet and one not. The first application acts as the C&C server, the second as the executor, and they talk to each other via e.g. files in the first application's folder.
Trying to block that would pretty much hose all utility in having a general-purpose computer. You'll be back to the crappy UX of a smartphone.
I honestly don't know how to solve this conundrum. You can't solve it technologically, as you quickly hit the Halting Problem. You can't solve it socially, because for any power user benefiting from the modicum of interoperability you leave in, you get 10 regular people who can be trivially social-engineered into selfpwning their device. It seems that in the end, you'll either have to lock down computers to near uselessness, or live with the risk of bad actors exploiting them.
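To make the two-application workaround concrete, here's a toy sketch (all paths invented; imagine process A is sandboxed with network access, process B without it, and both can see A's data folder):
# process A (network-facing) drops an instruction file into its own folder:
echo 'download-and-stage' > /tmp/appA-data/cmd.txt
# process B (no network access) polls the shared folder and acts on whatever appears:
while sleep 1; do
  if [ -f /tmp/appA-data/cmd.txt ]; then
    echo "acting on instruction from A"
    rm /tmp/appA-data/cmd.txt
  fi
done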
My comment isn't an ideal solution; it is what I consider a better solution, given how things are treated nowadays.
Ideally users would be wary of what they do with their computers, but considering how the world devolved from "you should never use your real name and address online" to modern social media, this is yet another ideal I do not see happening.
I don't see how it solves the selfpwn problem - that is, for any capability I can explicitly grant because I know what I'm doing, someone else can grant it because a malicious actor nicely asked them to. If you take away the ability to grant the capability, you're reducing usability.
Yeah, that's really an unsolvable problem I guess. But you could at least make it clear to the user what an app is requesting. If it's requesting the root capability / ambient authority (basically access to everything), that should be a big red flag.
This is what things like https://sandstorm.io and Google's Fuchsia OS are trying to solve. Of course it requires a huge shift in how you design applications, but it does not really impose any burden on the user's side. They just allow $APP access to some data or resource, and then it has access to only what it needs going forward, with no need to allow it every time (unless you revoke it). This can be done when the app is installed, so there's no real UX problem.
My original comment was about having computers be less connected, because a large share of today's security issues (and of their impact) arises from connectivity, so I do not see what sandstorm.io is solving there.
I'm not familiar enough with Google's Fuchsia OS to judge, though I do remember reading some months (a year?) ago about a clash between its developers and Google's advertising team that ended with the developers compromising Fuchsia's design. Which brings me back to "let's not rely too much on connected stuff and prefer stuff we have control over, shall we?"
They’re not incorrect. They are, however, wrong to think that users not caring about security means they don’t have to care either. Product makers have a duty of care beyond what their customers have.
No: less friction is a user-visible advantage, while less security isn't user-visible for most users until sometime after the vulnerabilities it exposes are exploited - and once it does become user-visible, it is very much not considered an advantage.
Also, most users of Zoom are job applicants - so they're more likely to care less about security, because they really need to be in that interview session.
This is not even remotely true. We use it every day at my workplace (education) - thousands and thousands of employees as well as students. All of our contemporary peer institutions do the same.
Micro Snitch is a small macOS menu bar application which runs in the background watching for access to the camera or the microphone. It visually indicates when either is in use and logs the activity to a file for later review.
Do people who understand networking better than I do (i.e., almost everyone) want to explain how to universally prevent this localhost garbage? Like, some kind of firewall, combined with a simple command line trigger to open up a port when I actually want to? There's gotta be an open-source firewall for this kind of thing, right?
The notion that some random app can just spin up a server on localhost without my permission is completely insane. Also, this is why Gatekeeper and the App Store "walled garden" are good: nothing should get the kind of permissions necessary to run a fucking localhost server that can reinstall a deleted app w/o user interaction!!
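One partial answer: macOS's built-in pf firewall can reject connections to the port the researcher reported for Zoom's local web server (19421). A sketch, not a vetted ruleset - note that loading it replaces the active ruleset, and a pf.conf containing "set skip on lo0" would bypass loopback filtering entirely:
echo 'block return in quick on lo0 proto tcp from any to any port 19421' | sudo tee /etc/pf.anchors/zoomblock
sudo pfctl -f /etc/pf.anchors/zoomblock # load the rule
sudo pfctl -e # enable pf if it isn't already
The obvious limitation is that this blocks one known port; an app can always pick another, so it's a point fix rather than the universal switch being asked for.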
Even if you have the camera disabled (and I never gave camera permissions to Zoom to begin with), it will still join a random stranger's meeting, which will leak your name (or whatever name you have configured). This may be important for some.
Yup, shocked me when I noticed. I changed the name and unchecked the "remember name" feature, but it turns out that unchecking it makes the app prefill the user account name, and the next time the feature is checked again. So the only two choices are: either remember the name used last or, if you uncheck it, have it fill in the account name.
Zoom's UX has always come off as invasive. An application default that allows hosts to enable automatic camera join is an overstep, and the lengths they go to in order to facilitate this - while ignoring long-standing, industry-standard appsec guidelines for preventing XSS - are relatively unsurprising, yet hopefully not inconsequential to their enterprise customers.
I guess he reported to Chrome first because that’s what everybody uses, found their answer (or lack of answer) unsatisfactory, and then went to report to Firefox.
It should be pointed out that an empty directory (even one owned by root) placed in your home directory can still be deleted by you, without requiring root. You need to place a file inside the directory.
Or if you want something drastic, run
chflags simmutable ~/.zoomus
as root. This will make sure that not even root can delete it.
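Putting the suggestions together, a sketch (assuming ~/.zoomus is the directory the client recreates, and the helper process has already been killed):
sudo rm -rf ~/.zoomus # clear out whatever the client left behind
sudo mkdir ~/.zoomus # recreate it, owned by root
sudo touch ~/.zoomus/keep # non-empty, so it can't simply be removed with rmdir
sudo chflags -R simmutable ~/.zoomus # set the system-immutable flag, as above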
Yeah, because removing a file or an empty directory only changes the table of the parent directory, so you only need write permission on the parent directory.
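A quick way to convince yourself (assuming a typical home directory without the sticky bit set):
sudo mkdir ~/testdir # root-owned and empty
rmdir ~/testdir # succeeds anyway: unlinking only needs write permission on ~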
In order to verify that the opener was running, I ran the following command.
ps aux | grep zoom
To kill the opener I ran the following.
killall zoom
Then I followed the rest of the instructions above to create a locked down version of the directory. You could also create a file called .zoomus instead (similar to the suggestions made farther down this comment thread).
UPDATE no need to do this any more. Zoom actually conceded they were wrong and pushed out an update that removes the local webserver: https://imgur.com/gallery/INvYaH4 (from the discussion below in the thread).
1. https://blog.zoom.us/wordpress/2019/07/08/response-to-video-...