There's an ocean of difference between can and will.
Plus, I don't really see a bright line between system-level software and an app when apps can access your video camera, mic, and all your files: basically your whole computer.
The setting ostensibly refers to the operating system, i.e. macOS, which I have no problems with Apple modifying if you've enabled that option, and which their EULA probably has a clause about. But from a legal perspective, modifying a third-party application which Apple does not own and did not install seems an overreach; unless their EULA explicitly grants them the right to do anything they want with the files of the system it's installed on, they could find themselves in legal trouble. (That notorious CFAA and the like.)
No more zoom for me.
Would you let it scan all your files and delete e.g. "suspected images of child abuse" (to use an old cliche)? Suspected copyrighted material or fragments thereof? "Extremist" content, or content which is contrary to current social norms? How authoritarian does it have to get before you start being creeped out?
If Apple starts being abusive, they'll get their hand slapped. If they don't, they don't.
There's no better company positioned to do anti-malware than the vendor of the OS itself. Which is why Apple and Microsoft both do it. You can disable updates on both platforms if, for some reason, you don't want anything to change on your system without your explicit action (pros and cons to that, obviously). But for most end users, the tradeoff of control vs. security is a very easy one, since the average user is in no way qualified to secure their own system or audit the code that runs on it.
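For reference, the toggles System Preferences flips are generally reported to live in the `com.apple.SoftwareUpdate` preference domain, with `ConfigDataInstall` being the key usually said to govern the silent "system data files" channel (XProtect/MRT definitions). A minimal sketch that reads those flags; the plist path and key names are assumptions and have varied across macOS releases:

```python
import pathlib
import plistlib

# Preference domain behind "Software Update" in System Preferences.
# Path and key names are assumptions that have varied across macOS
# releases; ConfigDataInstall is the key generally reported to control
# silent XProtect/MRT definition updates.
SU_PLIST = pathlib.Path("/Library/Preferences/com.apple.SoftwareUpdate.plist")
KEYS = ("AutomaticCheckEnabled", "AutomaticDownload",
        "ConfigDataInstall", "CriticalUpdateInstall")

def auto_update_flags(path: pathlib.Path = SU_PLIST) -> dict:
    """Return the auto-update booleans, or 'unset' where macOS defaults apply."""
    if not path.exists():
        return {k: "unset" for k in KEYS}
    with open(path, "rb") as f:
        prefs = plistlib.load(f)
    return {k: prefs.get(k, "unset") for k in KEYS}
```

A key that reads `unset` just means the system default applies; actually changing these is done through System Preferences (or `defaults write` with admin rights), not by editing the plist by hand.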
Contrary to that, the demands from governments and others for tech companies to "take responsibility" and become enforcers of all sorts of perceived virtues is reaching a crescendo.
And it's not just about clearly dangerous things like child porn or terrorism. The UK government seriously demands the takedown of "harmful but not illegal" content.
Just think about that concept of "harmful but not illegal" for a moment and you'll see that the sort of overreach that userbinator is talking about is anything but "some absurd extreme".
You have that trouble because you are focusing on the "critical vulnerability" part and ignoring the fact that Apple decided to uninstall a program they had nothing to do with from your computer without your consent.
The intentions might be noble; the implications, however, are less so.
Replace this instance with something you disagree with (imagine Apple removing VPN software from Chinese customers due to demands from China, or "fixing" existing VPN software with backdoors that let Chinese authorities wiretap Chinese people) and see what the issue is here.
(Whether that example would happen or not is irrelevant; I'm making it to help you see the issue in a context where I think you'd disagree with Apple, not for you to argue about whether it would happen.)
Apple engineers go to China. Anything they do to help the Chinese government can immediately affect their own workers. If they did that, and a bunch of people with Apple devices got thrown in jail (or whatever), their stock and moral standing would take serious blowback for it.
Google Chrome has a feature that pops up when it thinks you might be getting attacked or phished by somebody. I wouldn't mind if OS X terminated my connection and said "Hey, we don't think this is safe", especially if it was something the average person isn't likely to notice and that can cause them damage. (Also, the stakes for everything are generally higher in China than in the US: the US probably tracks you around; China for sure does that, and is actively nabbing people a lot more frequently too.)
> (Whether that example would happen or not is irrelevant; I'm making it to help you see the issue in a context where I think you'd disagree with Apple, not for you to argue about whether it would happen.)
It is in its own paragraph. The China part wasn't meant to be debated; it was meant as an example of an event that, if it happened, would make you disagree with Apple. The important part of this example is you disagreeing with Apple, not the reason why.
That's not equivalent; equivalent would be Apple doing something you don't realise. The point is about user agency: keeping users uninformed and, even for those who do get the information out-of-band, unable to exercise their own control over the situation.
This is not true. You can disable all the automatic updates in System Preferences.
Incorrect. Users who are vulnerable to this had already decided to uninstall Zoom and that's why it was a vulnerability. Zoom had decided to ignore the user's wishes and leave their server behind so that it could re-install the software. Apple's update simply enforces the users' past decision to uninstall the application.
It's like Google making an ad hoc decision to use Chrome autoupdate to silently patch a particularly bad vulnerability in Microsoft Word just because they can.
So what is the principle behind this kind of exception? It's simply this: If it's bad enough, normal rules can be suspended and anything goes. It's like declaring a state of emergency. It's not normal or mundane.
Now the question becomes what is bad enough and who gets to decide what is bad enough? People will point to incidents like this and ask questions like: Why was the San Bernardino attack not bad enough for Apple to suspend its usual rules? Why can people store tons of pirated music on their Macs without Apple taking action? Why does Apple allow criminals to hide behind end-to-end encrypted messaging software?
If Apple has decided to take responsibility for the security of all third party software on macOS then they should say so. They should change the rules instead of breaking them in an ad hoc fashion.
Then we can all decide whether or not we want to hand total control to Apple (and to those who have control over Apple).
There were a few instances in the last few years where the repos or built-in update systems of legitimate programs were compromised and malware (in one case, ransomware) was bundled along with their apps. In those cases, Apple also silently updated XProtect to remove the malware.
In this case, just because this was a web server and not something more traditional like a trojan doesn't mean that it isn't still malware. The Risky Business podcast asserted, before Apple jumped into action, the existence of an RCE that it says Zoom knew about for months. Given that the only way to remove the web server is to update Zoom (which won't help any user who has already uninstalled Zoom, which kindly left the insecure web server behind), this type of update makes perfect sense -- especially since Zoom itself is removing the server from its own application bundle.
This was malware, pure and simple. It wasn't third party software. It was malware left behind/included with a third-party app. It's not as if Apple removed the Zoom app -- it removed the piece of malware Zoom was including alongside its app. The fact that Zoom was including this malware as a way of bypassing Apple's access control in Safari (God forbid the user have to click a button confirming they want to open a meeting) is beside the point -- this was malware.
Additionally, users can turn off the auto system updates and they can disable Gatekeeper entirely.
I understand the broader concern of an OS maker being able to remove files a user chose to install -- but this is a very unambiguous case of malware. Just because the RCE wasn't actively exploited doesn't mean it wasn't malware.
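For anyone who uninstalled the Zoom client before Apple's update shipped, one way to check for the leftover component is to probe the localhost port the hidden web server was reported to listen on (19421, per the public write-ups). A sketch; treat the port number and the check itself as a heuristic, not proof either way:

```python
import socket

# Port the leftover Zoom web server was reported to listen on;
# treat this as an assumption from the public write-ups.
ZOOM_OPENER_PORT = 19421

def port_is_listening(port: int, host: str = "127.0.0.1",
                      timeout: float = 0.5) -> bool:
    """Return True if something accepts TCP connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising
        return s.connect_ex((host, port)) == 0
```

A listener on 19421 is only a hint that the server is still around (any local process could own the port); if it reports one, `lsof -i :19421` will show which process it actually is.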
What Zoom did was negligent and incompetent, but I don't see that there was malicious intent. I do agree, however, that what they tried to do is unacceptable even if implemented competently.
But even if it weren’t — and we can agree to disagree on the intent — the second the RCE is popped, it becomes a massive security issue and it becomes traditional malware. As I said, I’m convinced Apple would do the same thing if this was something left behind or associated with Java or Flash.
But I will admit that I'm starting to see the question of Zoom's intent a bit differently after thinking about what you have said.
Instead you defended Apple fixing security issues in third-party software (as I understood it, without user consent), and you compared any concerns about that to concerns about buses intentionally running over pedestrians.
So apparently our debate took a wrong turn, and that wasn't entirely my fault, although I will take some of the blame.
I agree that Zoom's intent (and even more so their methods) is icky. So perhaps we should have focused on that, because I can understand the reasoning that this makes Apple's actions look far more justified than I initially thought.
It is actually very competent of them, except for the security part.
I don't see anything in the article that suggests this - as I read it, it pretty much says the opposite. What else have you read that outlined these rules and the exception Apple made?
As far as I know, there is no system-wide update mechanism for third party software not distributed through the Mac App Store that does not require any user interaction. So apparently they (ab)used the system update mechanism.
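The mechanism generally identified in the coverage is the Malware Removal Tool (MRT), whose definitions arrive through the same silent "system data files" channel as XProtect. A sketch that reads the installed MRT version from its bundle's Info.plist; the candidate paths below are assumptions, since the bundle has moved between macOS releases:

```python
import pathlib
import plistlib
from typing import Optional

# Candidate locations for the Malware Removal Tool bundle; these paths
# are assumptions and have changed between macOS releases.
MRT_PLISTS = [
    pathlib.Path("/System/Library/CoreServices/MRT.app/Contents/Info.plist"),
    pathlib.Path("/Library/Apple/System/Library/CoreServices"
                 "/MRT.app/Contents/Info.plist"),
]

def mrt_version(candidates=MRT_PLISTS) -> Optional[str]:
    """Return the installed MRT version string, or None if not found."""
    for info in candidates:
        if info.exists():
            with open(info, "rb") as f:
                return plistlib.load(f).get("CFBundleShortVersionString")
    return None
```

Watching that version number tick up without any Software Update prompt is exactly the kind of silent change being debated here.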
Zoom is clearly not malware. It just has a bug. Is updating regular third party software documented behaviour of macOS? If so then I agree that it is not abuse. Otherwise Apple has some explaining to do.
That is not a bug; the server was left behind deliberately.
The problem is that Apple appears to have made an exception to its own rules in this particular case. If I understand correctly, they used a first party system update mechanism to change third party software.
I don't think any of these are established facts and I don't understand how you, a fellow fact-fancier, haven't acknowledged that before breezily moving on to a discussion of the precise definition of the term 'malware'.
There's a line between the OS and third-party software. There's a line between malicious software and accidentally vulnerable. Apple has just shown that it is willing to cross both those lines. Where is the line at which Apple will stop?
To make this look scary, you have to misrepresent what Apple actually did and then extrapolate to some frightening hypothetical to end up at nothing more than a risk inherent in all self-updating software.
If the position is 'all self-updating software is an unreasonable risk', fine. But at least argue that unvarnished (and, I imagine, to most people extreme and impractical) view instead of trying to dress it up as some novel and intricate argument about morality and creeping authoritarianism.
Point me to any other manufacturer who has gone to those lengths to protect their users. There was no reason for Apple to develop that tech; no one else in that space does. But they developed it anyway.
Can they do all this stuff? Sure, but I don't think they will. It does not seem to be in their interest.
I also purged Zoom. They've blown it in the trust department. They'd better start working on a web app, because that is as much access as they'll get from me in the future.