But like all things, they can also be used to keep software insecure, hide issues, and instead buy off researchers.
I know companies don’t always respond right the first time, I know I haven’t, but Zoom had over 90 days to consider their responses and possible options / software changes. Instead, they were dismissive of the entire thing and only changed course after loud public pressure.
It wouldn’t. From the article:
> The move is a surprise reversal of Zoom’s previous stance, in which the company treated the vulnerability as “low risk” and defended its use
They’re backpedaling because of the bad press, not because they think this is better for users. And if they don’t believe what they did was wrong (if they did, they would have never done it or would have fixed it previously), it’s just a matter of time until they pull other crap like this. This is not the only user-hostile behaviour of their app, it’s just the most egregious we know of.
They didn't really seem to understand that it was a bug.
(i.e. "Your password contains spaces, which is disallowed by our policy. Please try again.")
I found it particularly egregious that Zoom's form auto-trims any spaces from the end of the string, so they are deleted as you type with no feedback (unless you happen to be watching the dots flicker).
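If trailing spaces really must be disallowed, the better behaviour is to reject with an explicit message rather than silently trimming. A minimal sketch of that check (the function name and messages are made up for illustration):

```shell
# Hypothetical validation: tell the user instead of silently trimming
check_password() {
  case "$1" in
    *' ') echo 'rejected: password ends with a space' ;;
    *)    echo 'accepted' ;;
  esac
}

check_password 'hunter2 '   # prints the rejection message
check_password 'hunter2'    # prints "accepted"
```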
Wow, you're optimistic :)
The reason I can empathize with your complaint is that it's highly unlikely they can keep those restrictions consistent across all password forms and login methods.
Apple usually needs to be shamed into admitting to and repairing any broken hardware design. They had to be sued in multiple countries to stop misleading customers to buy AppleCare and allow them to use the warranty guaranteed by law.
This is a perfectly workable system, one that does not require any party to be unselfish. It's a much better system than those that require somebody to be 'good' rather than 'rational'.
There's not really much "willing to accept responsibility" here as far as I'm concerned.
Do we want a world of people who change their ways, even if for somewhat impure reasons, or a world in which no one ever does because it's pointless?
There is no reason to extend this philosophy to corporations though. We can decide to use a competitor instead. No one needs to show loyalty to a supplier who screwed up.
You forgot another option: not making the mistake in the first place.
"Farley maintains that the relative security risk of the vulnerabilities that security researcher Jonathan Leitschuh disclosed yesterday were not as severe as Leitschuh made them out to be."
That's the CIO still unwilling to accept what a poor decision it was to subversively install a web server on users' computers. They need to shut the fuck up unless the words coming out of their mouth are "we're sorry and we'll do better in the future".
The mistakes I see here are:
- UX Dark Patterns – making uninstall hard/duplicitous
- Helper process having security vulnerability - unauthenticated requests, providing unnecessary privileged operations like update/reinstall etc.
- Providing the control of participant video on/off to meeting host
- Not acknowledging the mistakes quickly and fixing them fast. Being defensive and using 'others do it too' excuse.
Also, given the internal fight for resources/prioritization and just plain philosophical alignment between security vulnerabilities vs UX funnel optimization (reduce number of clicks), in a company like Zoom, I am not at all surprised that the UX side always won and the security side lost, and that it took public pressure to shift the balance. Anyone here who has been in this situation knows what I'm talking about.
Unless the cost equation changes, it is hard to get business users to change their priority – from their perspective, they didn't understand what the heck their internal security guy was talking about.
It would have been one person or security team that they normally don't interact with. So why would they listen to that person over the UX product guy they interact with daily, the one they see as having built the hockey-stick growth in their customer NPS scores, who wasn't happy about adding the extra click back?
So the only workable answer I see is public outrage like this (still not very scalable or consistent), and better yet, legal protections/regulations that make it extremely expensive for companies to ignore this stuff.
“Always allow zoom.us to open ‘Zoom’” within browsers.
Even Spotify runs a local web server for this.
On Ubuntu, xdg-open phrases the checkbox as something like "always allow X program to handle foo:// URLs?", which is probably not comprehensible to the average user; more accurate phrasing would be "always allow websites to open X program?", which I think indicates why I'm so skeptical that this is a good option to give to users.
Unless installing an always running service on my device is directly related to the intended functionality of your software, setting one up is unwelcome and deceptive. Especially when it is done to work around existing security controls.
In Zoom's case, if the user exits the app, the web server keeps running. When the user uninstalls the app, the web server still keeps running.
The user twice said "I don't want Zoom's software running on my computer," and both times Zoom ignored the user's request.
This behavior is both unethical AND malicious.
Edit: wrote up more thoughts on this: https://salibra.com/p/viral-growth-doesnt-mean-writing-virus...
Video conferencing has nothing to do with running a web server, or any server listening on ports.
When I install a video conferencing client, its only network activity should be connections that I initiate.
From a regular user point of view, it would be acceptable to have a helper agent as long as it follows:
- platform provided background process methodology (example: launchd could launch your process when you hit the socket),
- and it is made clearly apparent that such a thing is installed on your system (say, via system preferences panel, via status bar icon menu, and via in-app preferences panel),
- and it does cleanly uninstall as part of a simple standard regular uninstall.
And from a technical/security point of view, it would be acceptable if it:
- has minimal necessary privileges and proper separation of concerns.
- and does only what it needs to provide a user-expected functionality and doesn't do random egregious things.
- has secure ways to allow only expected/authorized caller to talk to it.
- does not violate any platform guidelines or tries to circumvent protections.
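As a sketch of the first point, a socket-activated launchd agent lets launchd own the listening socket and start the helper only when a local client actually connects, so nothing runs in the background otherwise. The label, binary path, and port below are hypothetical, not Zoom's actual configuration:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <!-- Hypothetical label and helper path -->
  <key>Label</key>
  <string>com.example.helper</string>
  <key>ProgramArguments</key>
  <array>
    <string>/Applications/Example.app/Contents/MacOS/helper</string>
  </array>
  <!-- launchd holds the socket; the helper is launched on demand -->
  <key>Sockets</key>
  <dict>
    <key>Listener</key>
    <dict>
      <key>SockNodeName</key>
      <string>localhost</string>
      <key>SockServiceName</key>
      <string>8765</string>
    </dict>
  </dict>
</dict>
</plist>
```

Installed under ~/Library/LaunchAgents, this is also visible and removable through standard means, which addresses the discoverability and uninstall points as well.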
It would not be. Stop pretending that acquiring consent from a statistical model counts as acquiring consent from the actual user.
Thing you wrote may make it acceptable for you, but certainly ain't sufficient for me.
I don't know what you are referring to here. Care to elaborate?
> Thing you wrote may make it acceptable for you, but certainly ain't sufficient for me.
This isn't about individual taste. Nothing I wrote above was about my personal taste. My point was about differentiating between the OS provided valid architectural mechanisms vs surreptitious dark patterns applied on top of it by an application developer.
You make assumptions about individual user's consent from whatever bulk experiences you might have measured. Either that, or you didn't even measure anything and therefore you're just making things up about what's "acceptable."
> This isn't about individual taste.
Who said anything about taste, it's about individual boundaries.
> My point was about differentiating between the OS provided valid architectural mechanisms vs surreptitious dark patterns applied on top of it by an application developer.
First, that's a word salad. Second, after untangling it, I'm pretty sure you mean "if there's a mechanism in the OS that enables this then it's okay", which is even more absurd than the usual "if it's legal then it's okay." Look, even Zoom's "let's leave a tray icon there when you thought you quit the app, without the honking huge notice a decent app usually gives" is more about having a way to disavow ("see, we did leave a notification, lol") than actually ethical design. That's the _essence_ of a dark pattern.
Seriously, though, you're being creepy and advocating pushing people's boundaries here.
My caveat is that a helper service is acceptable when it is doing something necessary for the basic function of the software.
Virus scanners, file sync, and things which are obviously servers fit the bill. Not much else I can think of does.
I don't want installing an office suite to permanently take away a percentage of my computer's resources.
My Linux desktops are also always full of processes whose purpose I have to dig to figure out; unless I build my own distribution, it's hard to make anything work in a way that feels satisfactorily under control.
Apache has permission to start at boot, run in the background, and listen to 0.0.0.0:80,443. Photoshop has permission to write to files in $HOME, and connect to network services while the application is running optionally with explicit permissions for each access. Adobe's update service can be disabled with a click.
With GDPR getting teeth (see recent fines of BA and Marriott) for security breaches, I think this is the beginning of that cost equation changing.
But also bear in mind this is a company who have someone with the title Chief Information Security Officer. If alarm bells didn’t start ringing for that person when this vulnerability was reported, then they likely aren’t the right person for the job. Especially as Zoom have customers in the EU so that person is also likely their nominated Data Protection Officer and should therefore be well aware of the privacy requirements imposed by GDPR and the penalties for a privacy breach (which someone secretly recording webcam footage would surely qualify as).
As for local helper agents accessible from the internet, you only need to browse google project zero to see what a bad idea that is.
Is that a common thing that programs do? Should I be expected to portscan myself frequently to see if software is unexpectedly running web servers? How much battery am I losing to this stuff?
From the article, a tweet
They are far from alone, a quick `lsof -i | grep LISTEN` shows that I have: Spotify, Keybase, KBFS, iTunes, Numi, https://t.co/MVSAJgN9yY… All running locally listening web servers.
— Matthew Gregg (@braintube) July 9, 2019
Edit: here you go http://cgbystrom.com/articles/deconstructing-spotifys-builti...
It's one thing to run a webserver while your software is running.
It's quite another to leave it installed and running even after the user has uninstalled your application.
And to actively evade the user's attempts to remove the webserver component. Until this update, if you removed ZoomOpener from your Login Items and via `rm -rf ~/.zoomus`, it would miraculously reappear every time you participated in another Zoom meeting. (To stop this, you had to touch .zoomus as a file or otherwise make it harder to recreate as a directory. But if they had chosen to, Zoom could have coded around these countermeasures thus leading to an arms race, at least for a while.)
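The `touch` countermeasure described above can be sketched as follows; this runs in a scratch HOME purely as an illustration, so the real home directory is untouched:

```shell
# Work in a scratch HOME so the real one is untouched (illustration only)
export HOME="$(mktemp -d)"

# Simulate the hidden directory the Zoom helper lived in
mkdir -p "$HOME/.zoomus"

# Remove it, then occupy the path with a plain file so a directory
# with the same name can't be silently recreated
rm -rf "$HOME/.zoomus"
touch "$HOME/.zoomus"

# A recreation attempt now fails, which is what blocked the reinstall
mkdir "$HOME/.zoomus" 2>/dev/null || echo "recreation blocked"
```

Of course, as noted, a determined vendor could just delete the file first, so this was only ever a stopgap.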
Unless they coded something very stupidly, a listening socket that nobody connects to is not going to be on the CPU. It will be asleep waiting to be woken up by actual activity.
Not sure if any operating system would use that socket as a reason not to enter a low power state but I kind of doubt that.
In this particular case, I don't think we can exclude that possibility.
In general of course I agree with you.
But I could be wrong.
But responsible disclosure absolutely does not mean “no disclosure”. It means give them a chance to fix it. If they choose not to you disclose so that people know that they need to take steps to protect themselves.
The important thing is that the disclosure must become public. It doesn’t matter that they pushed an update, as none of the victims who had deleted/“uninstalled” zoom will get the update, and without the update they’ll still be running the server.
The only way anyone would know about it is with the details being public.
I’m waiting for Apple to use xprotect to kill the server on all machines, as that’s the only true solution for the uninstalled victims
At least some of the reason for putting it there in the first place is so it can hang around and be insecure in case they want to use it later.
The really insidious part is that users who uninstalled it previously won't receive this update removing it now. And a vanishingly small percentage of those will see this and respond by removing it.
I certainly won't ever hit the install button on a video conferencing browser extension again. If Zoom was doing this, I have zero confidence in what other vendors are doing.
do `lsof -i | grep LISTEN` to find out what server is running on your machine.
To be fair, dragging an "app" to Trash does not constitute un-installation. It was a poor design decision to implement features using a local web server, but let's not be so quick to attribute covert, malicious intentions.
Dragging an app to Trash MUST constitute uninstallation. If it doesn't, it is a bug.
Leaving configuration files in home folder for easier on-boarding after a reinstall is not the same thing as leaving a self replicating rootkit running all the time.
> It was a poor design decision to implement features using a local web server, but let's not be so quick to attribute covert, malicious intentions.
Zoom, with all its advertised features, works like a charm from the user's standpoint. It is not easy to craft such a seamless video conferencing app, which makes me believe the team behind it is made up of really experienced people. If this assumption is true, "the poor decision" is actually the true intention, and is probably a feature in case Zoom needs to install extra software on my device when their business needs change. It feels more like a backup plan than a dirty hack.
In my humble opinion, experts ignoring the ethical consequences of such decision are dangerous to society and their intention can be considered malicious if not criminal.
I'm sick of seeing the blame always placed on the business people. Unless taken hostage and forced to act against their will, the developer should be held equally responsible. We don't excuse murderers, burglars, and scammers just because they were working under a boss.
P.S: I probably strawmanned your answer to express my own opinion. English isn't my native language, sorry if my words sounded offensive.
Unlike Windows, which has the Add/Remove Programs control panel, there isn't really a standardized way to uninstall things on a Mac. (I think you can make a .pkg uninstaller, but it's rare to see that.)
I went through the launch agents and launch daemons on my personal computer a few months ago and found plenty of obsolete stuff that was hanging around even after I no longer had whatever the associated app was installed.
`lsof -i :19424`
Am I reading this correctly, their CIO believes it's the "extra process" that people are concerned about -- not the webcam vulnerability?
If the reporter had agreed to the NDA required for the bug bounty, Zoom could have (and, based on their earlier responses, would have) continued to ship this malware. And because the researcher had signed an NDA, they wouldn't have been able to inform the at-risk public.
More seriously: I would guess no, as the GDPR is concerned with data collection and compromise, but I can’t imagine they store all the video they forward.
Of course I wouldn’t be surprised if someone sues them in the US (but given that the US sees companies as people for rights, but not punishment I imagine that they’ll be fine).
If this were done then even with the sneaky reinstalling, the user would be alerted by a system dialog requesting access to their webcam.
Read: anyone can write a kernel driver for macOS, you are too trusting of your software vendors. Get a hardware switch
No they can't. With SIP you won't be able to install it if it is not signed (and only a very few developers have certificates for kernel extensions), and in any case you will be alerted about it. Also, there are plans to completely disallow kernel extensions in the release after Catalina (since they can now run in userspace; I imagine that userspace will not get access to pre-installed hardware).
(Better still, reconsider whether you really need a daemon in the first place...)
> We’re adding a new option to the Zoom menu bar that will allow users to manually and completely uninstall the Zoom client, including the local web server. Once the patch is deployed, a new menu option will appear that says, “Uninstall Zoom.” By clicking that button, Zoom will be completely removed from the user’s device along with the user’s saved settings.
The exploit can then be divulged to the public, automatically, on the expiration of the 90-day window, regardless of whether it's fixed or not, as that may also be educational.
For example, after this Zoom issue other companies will hesitate to use a localhost webserver, but if the issue had quietly been fixed by Zoom other companies may still have been tempted to use similar approach.
Even if we don't do that, I think we should at least reveal the issue after it's been fixed in all the cases so that other entities can learn from that.
Is the exploit already being used in the wild? Should affected users be doing something asap to protect themselves? How quickly could someone transform your disclosure into something malicious? How much risk are we forcing the users to bear while we wait for a self-imposed arbitrary time limit to expire? Is there an actual benefit to be gained by delaying public disclosure? (Is the company using the time responsibly to implement a fix, or are they denying the problem exists?)
Handling a public disclosure means adapting to the specific risks of the situation.
I would say 30 days is more appropriate, with the option to delay the public announcement if the company has handled the situation well and is working on the fix, but needs more time for a legitimate technical reason.
I have the same criticisms as others do about a company like Zoom that only responds to security issues after they wait-and-see if it will impact the bottom line. And that quick peek behind the curtain where their own employees view this as a "PR crisis" (their exact words in the article) rather than something more tells me everything I need to know about their leadership's DNA. Buyer beware.
Good on Zoom to do a rapid course reversal here although naturally trust is now damaged given they only came to their senses under strong public pressure. Also a good case study of how putting “user experience” over security can come back to burn you.
For many tasks (probably most and certainly for most people) the iOS model is the best.
It is, however way too restrictive for a number of use cases. Prohibitively so.
Imagine two very different and isolated environments. Terminal, compilers, file managers in one, most other software in the other. With perhaps shared folders between them.
I currently have a separate limited user account just for meetings, and that’s where I install various meeting apps. So in my case is there any way to know if it affected all my accounts or just the one?
Incompetent company. Incompetent management.
Investors got their money, what’s the problem here?