softwareupdate -ia --include-config-data
It will show up as MRTConfigData if you look under Apple Menu->About This Mac->System Report->Software->Installations. The latest version is 1.45; it was updated today and includes the Zoom mitigations.
softwareupdate -i MRTConfigData_10_14-1.45 --include-config-data
If I'm correct, MRT stands for Malware Removal Tool.
I'd feel pretty bad if anything I worked on had to be uninstalled by that.
MRTConfigData_10_14-1.45: No such update
No updates are available.
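The "No such update" result is likely a label mismatch: ConfigData labels embed both the macOS release and the data version, so a label copied from another machine may not exist in your feed. A minimal sketch of extracting the right label before installing; the sample line below is a hypothetical capture based on the label format quoted above, since `softwareupdate` itself only exists on macOS:

```shell
# Parse an MRT label out of `softwareupdate -l --include-config-data` output.
# Shown against a hypothetical captured line so the parsing can be illustrated
# without macOS; on a real Mac, pipe the live command output instead.
sample='* MRTConfigData_10_14-1.45'
label=$(printf '%s\n' "$sample" | grep -Eo 'MRTConfigData[A-Za-z0-9_.-]*' | head -n 1)
echo "label: $label"
# On macOS you would then install exactly that label:
#   sudo softwareupdate -i "$label" --include-config-data
```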
Update: According to this macworld article, there is a Zoom patch out that fixes this. https://www.macworld.com/article/3407764/zoom-mac-app-flaw-c...
There are also commands at the bottom to manually kill the Zoom localhost server and disable it. I have opted to run those commands regardless:
pkill ZoomOpener; rm -rf ~/.zoomus; touch ~/.zoomus && chmod 000 ~/.zoomus
pkill "RingCentralOpener"; rm -rf ~/.ringcentralopener; touch ~/.ringcentralopener && chmod 000 ~/.ringcentralopener
I would also set the ‘user immutable’ flag. If you want even better, set the ‘system immutable’ flag (see ‘man chflags’)
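Putting those pieces together, here's a sketch of the full kill-and-block sequence with the user-immutable flag added. The ~/.zoomus path comes from the commands above; chflags exists only on macOS/BSD, so it is guarded here for portability:

```shell
pkill ZoomOpener 2>/dev/null || true   # stop the hidden web server, if running
rm -rf ~/.zoomus                       # remove the directory it reinstalls from
touch ~/.zoomus                        # drop an empty placeholder file in its place
chmod 000 ~/.zoomus                    # strip all permissions so nothing can write it
# 'user immutable' flag: even the owner can't modify or delete the file until
# the flag is cleared (chflags nouchg ~/.zoomus). macOS/BSD only, so guard it.
if command -v chflags >/dev/null 2>&1; then
  chflags uchg ~/.zoomus
fi
```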
softwareupdate -l --include-config-data
Is there a book you can recommend? Or did you pick these up over the years
system_profiler SPInstallHistoryDataType | grep -A5 MRTConfigData
>To help distinguish Gatekeeper and XProtect updates from other updates in the software update feed, Apple marks them as being ConfigData updates.
>Marking these updates as ConfigData cues the App Store to not display these as available software updates in the App Store’s list of software updates. These updates are meant to be under Apple’s control and to be as invisible as possible.
MRTConfigData 1.45 2019-07-11, 13:10:59 softwareupdated
Apple: Hey, your app poses a threat to macOS security. We're going to remove your server app with the built-in macOS anti-virus.
Zoom: Oh crap. Okay, give us 2 sprints to release a new version that removes it.
Apple: We're killing it in 48 hours.
Zoom, after an all-nighter: HEyyy users, we have a patch for youu
The loading modal would come up, but the app window would never open and I would have to force kill the app entirely, since I couldn't close the modal.
I suspect that Apple had already closed the possibility of the loophole on Catalina, which is why it wasn't working.
So I suspect they had probably noticed it weeks ago.
rm -rf ~/.zoomus
The people who found the vulnerability were at least talking to the Chrome and Firefox security teams.
Quote from their blog post (https://medium.com/bugbountywriteup/zoom-zero-day-4-million-...):
"Apr 10, 2019 — Vulnerability disclosed to Chromium security team.
Apr 19, 2019 — Vulnerability disclosed to Mozilla FireFox security team."
So it's not entirely unrealistic that the other browser manufacturers also got a notice.
Makes me happy to be a customer. Hope they keep enforcing their own rules and protecting their users' privacy and security in this fearless manner.
Apple definitely does make some commendable decisions, but I think it's also important to distinguish between bravery and what Ben Thompson calls "Strategy Credits" (https://stratechery.com/2013/strategy-credit/):
> Strategy Credit: An uncomplicated decision that makes a company look good relative to other companies who face much more significant trade-offs.
Do they have any information about enterprise apps? As I understand it, Apple never phones home with app info (such as the identifier, name, etc.) when verifying or installing enterprise-signed apps, so the only thing they know is probably the IP address requesting to verify the enterprise-signed app and how frequently Apple devices do this certificate verification.
Considering FB and Google have many employees in all different parts of the world, it wouldn't be too suspicious to see a good amount of diversity between GeoIP regions.
Correct me if I'm wrong about what info Apple collects about enterprise apps.
Anyway, I wholeheartedly agree with you here, and I think Apple genuinely had no knowledge of this activity until news outlets reported on it. Or if they did, it did not make its way to the higher-ups who revoke developer certs.
In fact, the world around you will bend to meet your values whether you’re even aware of it. And that includes any companies you run.
The world does extend beyond your knowledge of it.
Otherwise, it only takes one person to short-circuit that value to set the ball rolling on a shift towards lower standards.
You need to run a super tight ship, which I think is not as hard as it sounds until you put VC, investment, and shareholders into the mix. You at least need to be super diligent about those people you bring in who are not accountable to you, but you are accountable to them.
Basecamp is an amazing example of a company that has succeeded without compromising itself a jot. They do all kinds of things that we might consider unthinkable because they won’t budge on their values. Probably the one company I’d drop everything to work for if I had a chance at getting through their hiring process.
This tends to degrade further as new employees are added, and whatever original vision there was continues to erode over time, especially as both the times and even the business models change.
Nonsense - this is a solved problem. You simply need to remove the ability to make defined classes of bad decisions by binding the company's future decision making capability with a Ulysses pact. Cory Doctorow gave a good talk about using Ulysses pacts in the tech industry.
>> It's not that you don't want to lose weight when you raid your Oreo stash in the middle of the night. It's just that the net present value of tomorrow's weight loss is hyperbolically discounted in favor of the carbohydrate rush of tonight's Oreos. If you're serious about not eating a bag of Oreos your best bet is to not have a bag of Oreos to eat. Not because you're weak willed. Because you're a grown up. And once you become a grown up, you start to understand that there will be tired and desperate moments in your future and the most strong-willed thing you can do is use the willpower that you have now when you're strong, at your best moment, to be the best that you can be later when you're at your weakest moment.
>> The answer to not getting pressure from your bosses, your stakeholders, your investors or your members, to do the wrong thing later, when times are hard, is to take options off the table right now.
This shouldn't be a problem for anybody who actually wants the moral outcome. Why would anybody insist on preserving the option to behave badly in the future unless that bad behavior is part of their future plans?
 https://www.youtube.com/watch?v=zlN6wjeCJYk (transcript: https://d3j.de/2016/06/24/cory-doctorow-how-stupid-laws-and-... )
Companies lack unified decision making. The founders can’t predict every moral choice any employee will make. And for any large organization the CEO has no idea what most of the day to day decisions involve.
Consider something as simple as an old brick company setting up an email server for the first time. That opens up a host of choices management likely had zero idea even exist.
I think once a company develops a reputation like that, most people just get desensitised to it. Also there is a degree to which lower cost products get a pass, because hey, they must be cutting corners somewhere.
We should still reward/praise companies who make decisions that are morally superior to their competitors, regardless of whether the morality itself was a primary motivation.
Humans might have too short lifespans, memories and limited rationality for the long term benefit of morality to be strongly in our individual self interest though... one of the possible benefits of anti-aging and cognitive enhancement tech is it might incentivize us to be more moral all other things equal as a side effect.
1) Public sentiment is hammering companies for perceived privacy violations
2) Our business model does not rely heavily on selling user data
3) Make public statements about how much we value privacy at literally no cost to us
4) Get in a good dig at our competition at the same time
1. Lists of people who have purchased [music genre] from iTunes & listened on Pandora is for sale by data brokers, and
2. App developers (they specifically call out Pandora) who use the MediaFramework API have access to iTunes library metadata that they can then collect.
I haven’t looked at Apple’s Developer Agreement recently but I suspect Pandora (and potentially others) hasn’t complied with the terms.
We recently went back to PCs, and it was immediately obvious we needed wall-to-wall antivirus protection, which was not always the case on Macs.
"We are not going to keep any data at all about you unless we are forced to do so legally. We are bound by that contract with you when you purchase our device."
Then followed that with
"We're going to make it as hard as humanly possible for anyone else to collect and keep data about you if you own one of our devices. Including both legal and technical solutions and we will sue them for breach."
As it is, we're praising "least worst", which is effing awful. Apple's excrement stinks less than some others', eat it up!
But of course Apple literally wrote the book on selling their customers as product to third parties. They've been wildly successful at it. Microsoft, IBM look on in envy at how they've managed to get away with it.
Since it has been massively profitable for them to turn their customers into their product, they see no reason to change and I guess why should they? Profit maximisation is their business, yours and my health and welfare is only of interest in service to maximising profit. If they did anything else they might be guilty of securities fraud(!) So yeah, they can be completely horrific and still win the PR battle because others seem even worse.
I see these statements of fact are always jarring for people to notice for the first time, especially if they quite like the machines (I do), and quite like liberal democracy and free market economics (again I do!) and more so that this utter hideousness is our best option right now because there is no option even remotely on the same planet as good. It is thoroughly depressing all round.
Sure, Apple has more leverage, considering their size, but that also comes with its own set of problems. Plus, their customers have nowhere to go to in protest - all other phones are full of conflict minerals too.
I feel sick when apple says they are deeply committed to upholding human rights, while they continue manufacturing electronics, because I need authenticity. I would like Apple to use more of their resources to figure out how to do conflict-free consumer electronics.
I would like that as well, but I understand how that's difficult for them to do, too: making public that you're working on it also makes public the deficiencies you currently have in that area, something many consumers are not aware of, and which they may think applies only to you.
That's why initiatives like Fairphone's are good. That said, I've followed their blog  for a while, and occasionally they've been part of initiatives of which other phone manufacturers have been part as well (I recall something about Nokia and Congo). I think they just don't publicise that for the reasons I outlined above.
Apple charges $1k for their monitor stands. I think they can afford to build their stuff at a factory that doesn't use modern slavery.
Apple certainly wasn't looking out for their users' privacy and security when they let an iTunes bug go unfixed for 3 years (see http://www.telegraph.co.uk/technology/apple/8912714/Apple-iT... for more). That bug was said to allow government spying. Apple's iPhone back door lets Apple delete a user's apps (per http://www.telegraph.co.uk/technology/3358134/Apples-Jobs-co...), but Steve Jobs said it was okay because we can trust Apple ("Hopefully we never have to pull that lever, but we would be irresponsible not to have a lever like that to pull."). Back doors aren't moral; they exist to grant another party power over the device the user bought and should own.
The root of all of this is the power of proprietary software (software the user can't inspect, share, or modify, and in some particularly restrictive cases can't always run). Proprietary software is unjust power over the user. There's nothing moral about proprietary software.
And in any case, there is a checkbox in the software update preferences labelled "Install system data files and security updates" which presumably allows you to opt out of these critical security updates.
And if you really wanted to have the zoom backdoor server run on your system, you could probably just strip the code signature and run it manually. Apple isn't stopping you from running whatever software you want on the Mac. Apple is helping all those users that don't follow Hacker News to keep their Mac safe.
That seems highly unlikely to me. Do you have evidence to support that assertion?
On first use "Do you want us to automatically remove apps we think might damage your system: Y/n."
Don't users need a notification, at least, to inform their choices when installing software?
I guess Apple would rather you just mindlessly relied on them, however, so anything that lets users know that Apple's system exposed them to risk is going to be avoided.
Every relative who never installs updates. I ask them why they are on an old version with major security holes that were on the news, but they just don't care. They always click "later".
Sorry, but this is absurd. Automatic security updates are a necessity, and no user reads through all the changelogs of all updated software (except on extremely critical systems).
Maybe you wanted to argue for the ability to downgrade and disable updates?
It should be up to the user to decide whether to take on updates, regardless of what you think because that's their computer and not yours and you each deserve control over the computers you own. Just as freedom of speech means sometimes people will say things you disagree with, free software computers means not everyone will keep up with the updates. But not offering software freedom is unethical and neither Zoom nor Apple are distributing software freedom. Apple has a clear record of using the power of a proprietor to expose their users to harm (more examples at https://www.gnu.org/proprietary/malware-apple.html ) and this story is an example of how Zoom apparently does as well.
What you and other posters are tellingly refusing to address is the immorality of software nonfreedom. As I wrote before, this is the core of the issue.
Which is why the user can CHOOSE to have automatic updates. Or not to. The default when buying a new Mac is that automatic updates are enabled, because that’s the product Apple wants to sell and that they believe most of their users want to buy. It’s secure, it’s practical, it’s fun.
If you want to be your own IT department you simply deactivate all or some automatic updates. If you want a secure computer and trust Apple you leave it on.
I don’t see how this is a big moral question at all. Let people organize their computing needs in a way that’s safe and practical for them, not in the way that’s safe and practical for you.
There's nothing accurate about this description.
The user can turn off all update checking, or use the granular permissions to just turn off silent security updates.
>To allow macOS to update automatically, go to System Preferences > Software Update, then check Automatically keep my Mac up to date. The Mac offers some more granular update options than iOS. If you click Advanced…, you see a number of options:
If you only want to turn off silent security updates, the option to uncheck is "Install system data files and security updates".
What Apple did here is also a dark pattern. We cannot commend them and normalize this behavior.
This is a dictatorial one-sided decision by Apple. What else can they do? Can nation state governments compel Apple to push stuff silently? Can this system be abused by hackers?
Why are we dependent on the good moral behavior of Apple business decision makers for the well-being of our digital lives? Haven't we learnt anything at all from all the incidents in the recent past w.r.t trust in corporate benevolence?
"By using the Apple Software, you agree that Apple may download and install automatic updates onto your computer and your peripheral devices. You can turn off automatic updates altogether at any time by changing the automatic updates settings found within System Preferences."
On the other hand, Apple itself is guilty of not addressing gatekeeper vulnerability in time (is still yet to fix this bug): https://9to5mac.com/2019/05/25/macos-gatekeeper-vulnerabilit...
Why are you using "we"? I for one am quite happy how Apple manages Gatekeeper.
You may agree with its decision this time, but will you always agree? Apple's wielding of power in this way is likely to attract the attention of groups such as copyright/IP lobbyists, which have an immense desire to have all "non-authorised" files/software erased from all users' machines.
As the saying goes, "two wrongs don't make a right".
In any case, the idea of the OS/platform vendor meddling with third-party software that it doesn't like just feels wrong. I know Apple has historically held tight control over its mobile platforms, but the Mac is meant to be different.
I am not an Apple customer, and I now feel even more reluctant to become one.
Any OS (and many other apps) that updates itself has the power to do what you're afraid of, and much more.
Plus, I don't really see a bright line between system-level software and an app when apps can access your video cam, mic, all your files: basically your whole computer.
This isn't third-party anything. No one even knew this was running on their machine and it was demonstrably abusable. Good riddance!
One: On the family iPad we had at the time, we hadn't ever uploaded music to it from the family library, because there wasn't enough space for it all. Whenever anyone opened control panel and accidentally pressed "play", something by U2 (with a not-appropriate-for-little-children album cover) would come on.
Two: It was hard to remove that darn album. I couldn't figure out how to get it to go away for the life of me.
There's also a fundamental difference between someone adding something like Clippy to your desktop (or a U2 album) and someone saying "you need to fix your stuff or you get kicked out."
It was an entire U2 album, a far greater offense.
There's an ocean of difference between can and will.
The setting ostensibly refers to the operating system, i.e. macOS, which I have no problems with Apple modifying if you've enabled that option, and which their EULA probably has a clause about. But from a legal perspective, modifying a third-party application which Apple does not own and did not install seems an overreach; unless their EULA explicitly grants them the right to do anything they want with the files of the system it's installed on, they could find themselves in legal trouble. (That notorious CFAA and the like.)
No more zoom for me.
Would you let it scan all your files and delete e.g. "suspected images of child abuse" (to use an old cliche)? Suspected copyrighted material or fragments thereof? "Extremist" content, or content which is contrary to current social norms? How authoritarian does it have to get before you start being creeped out?
If Apple starts being abusive, they'll get their hand slapped. If they don't, they don't.
There's no better company positioned to do anti-malware than the vendor of the OS itself. Which is why Apple and Microsoft both do it. You can disable updates on both platforms if, for some reason, you don't want anything to change on your system without your explicit action (pros and cons to that, obviously). But for most end users, the tradeoff of control vs. security is a very easy one, since the average user is in no way qualified to secure their own system or audit the code that runs on it.
Contrary to that, the demands from governments and others for tech companies to "take responsibility" and become enforcers of all sorts of perceived virtues is reaching a crescendo.
And it's not just about clearly dangerous things like child porn or terrorism. The UK government seriously demands the takedown of "harmful but not illegal" content.
Just think about that concept of "harmful but not illegal" for a moment and you'll see that the sort of overreach that userbinator is talking about is anything but "some absurd extreme".
You have that trouble because you are focusing on the "critical vulnerability" part and ignoring the fact that Apple decided to uninstall a program they had nothing to do with from your computer without your consent.
The intentions might be noble, the implications however are less so.
Replace this instance with something that you disagree about (imagine Apple removing VPN software from Chinese customers due to demands from China or "fixing" existing VPN software with backdoors that enable Chinese authorities to wiretap Chinese people) and see what the issue is here.
(if that example would happen or not is irrelevant, i'm making it to help you see the issue in a context i think you'd disagree with Apple about, i'm not making it for you to argue if that would happen or not)
Apple engineers go to China. Anything they do to help the Chinese government can immediately affect their own workers. If they did that, and a bunch of people with Apple devices got thrown in jail / whatever, their stock, and moral standing, would suffer some serious blow-back for it.
Google Chrome has a thing that pops up when it thinks you might be getting attacked / phished by somebody. I wouldn't mind if OS X terminated my connection and said "Hey, we don't think this is safe" to me, especially if it was something that the average person isn't likely to notice and can cause damage to them (also, in China [relative to the US], the stakes for everything are generally higher- the US probably tracks you around, China for sure does that and is actively nabbing people a lot more frequently, too.)
> (if that example would happen or not is irrelevant, i'm making it to help you see the issue in a context i think you'd disagree with Apple about, i'm not making it for you to argue if that would happen or not)
It is in its own paragraph. That China part wasn't meant to be debated, it was meant as an example of an event that if it happened would make you disagree with Apple. The important part of this example is you disagreeing with Apple, not the reason why.
That's not equivalent; equivalent would be doing something you don't realise. The point is about user agency: keeping users uninformed and, for those who get the information out-of-band, unable to exercise their own control over the situation.
This is not true. You can disable all the automatic updates in System Preferences.
Incorrect. Users who are vulnerable to this had already decided to uninstall Zoom and that's why it was a vulnerability. Zoom had decided to ignore the user's wishes and leave their server behind so that it could re-install the software. Apple's update simply enforces the users' past decision to uninstall the application.
It's like Google making an ad hoc decision to use Chrome autoupdate to silently patch a particularly bad vulnerability in Microsoft Word just because they can.
So what is the principle behind this kind of exception? It's simply this: If it's bad enough, normal rules can be suspended and anything goes. It's like declaring a state of emergency. It's not normal or mundane.
Now the question becomes what is bad enough and who gets to decide what is bad enough? People will point to incidents like this and ask questions like: Why was the San Bernardino attack not bad enough for Apple to suspend its usual rules? Why can people store tons of pirated music on their Macs without Apple taking action? Why does Apple allow criminals to hide behind end-to-end encrypted messaging software?
If Apple has decided to take responsibility for the security of all third party software on macOS then they should say so. They should change the rules instead of breaking them in an ad hoc fashion.
Then we can all decide whether or not we want to hand total control to Apple (and to those who have control over Apple).
There were a few instances in the last few years where the repos or built-in update systems of legitimate programs were compromised and bundled malware (and in one case, ransomware) along with their apps. In those cases, Apple also silently updated XProtect to remove the malware.
In this case, just because this was a webserver and not something more traditional like a trojan doesn't mean that it isn't still malware. Before Apple jumped into action, the Risky Business podcast asserted the existence of an RCE that it says Zoom knew about for months. Given that the only way to remove the webserver is to update Zoom (something that won't help any user who has already uninstalled Zoom, which kindly left the insecure webserver behind), this type of update makes perfect sense -- especially since Zoom itself is removing the server from its own application bundle.
This was malware, pure and simple. It wasn't third party software. It was malware left behind/included with a third-party app. It's not as if Apple removed the Zoom app -- it removed the piece of malware Zoom was including alongside its app. The fact that Zoom was including this malware as a way of bypassing Apple's access control in Safari (God forbid the user have to click a button confirming they want to open a meeting) is beside the point -- this was malware.
Additionally, users can turn off the auto system updates and they can disable Gatekeeper entirely.
I understand the broader concern of an OS maker being able to remove files a user chose to install -- but this is a very unambiguous case of malware. Just because the RCE wasn't actively exploited doesn't mean it wasn't malware.
What Zoom did was negligent and incompetent, but I don't see that there was malicious intent. I do agree, however, that what they tried to do is unacceptable even if implemented competently.
But even if it weren’t — and we can agree to disagree on the intent — the second the RCE is popped, it becomes a massive security issue and it becomes traditional malware. As I said, I’m convinced Apple would do the same thing if this was something left behind or associated with Java or Flash.
But I will admit that I'm starting to see the question of Zoom's intent a bit differently after thinking about what you have said.
Instead you defended Apple fixing security issues in third party software (as I understood it without user consent) and you compared any concerns about that with concerns about buses intentionally running over pedestrians.
So apparently our debate took a wrong turn, and that wasn't entirely my fault, although I will take some of the blame.
I agree that Zoom's intent (and even more so their methods) is icky. So perhaps we should have focused on that, because I can understand the reasoning that this makes Apple's actions look far more justified than I initially thought.
It is actually very competent of them, except for the security part.
I don't see anything in the article that suggests this - as I read it, it pretty much says the opposite. What else have you read that outlined these rules and the exception Apple made?
As far as I know, there is no system-wide update mechanism for third party software not distributed through the Mac App Store that does not require any user interaction. So apparently they (ab)used the system update mechanism.
Zoom is clearly not malware. It just has a bug. Is updating regular third party software documented behaviour of macOS? If so then I agree that it is not abuse. Otherwise Apple has some explaining to do.
That is not a bug
The problem is that Apple appears to have made an exception to its own rules in this particular case. If I understand correctly, they used a first party system update mechanism to change third party software.
I don't think any of these are established facts and I don't understand how you, a fellow fact-fancier, haven't acknowledged that before breezily moving on to a discussion of the precise definition of the term 'malware'.
There's a line between the OS and third-party software. There's a line between malicious software and accidentally vulnerable. Apple has just shown that it is willing to cross both those lines. Where is the line at which Apple will stop?
To make this look scary, you have to misrepresent what Apple actually did and then extrapolate to some frightening hypothetical to end up at nothing more than a risk inherent in all self-updating software.
If the position is 'all self-updating software is an unreasonable risk', fine. But at least argue that unvarnished, and I imagine to most people, extreme and impractical view instead of trying to dress it up as some novel and intricate argument about morality and creeping authoritarianism.
Point to me any other manufacturer who has gone to those lengths to protect their users. There was no reason for Apple to develop that tech; no one else in that space did, but they developed it anyway.
Can they do all this stuff? Sure, but I don't think they will. It does not seem to be in their interest.
I also purged Zoom. They’ve blown it in the trust department. They’d better start working on a web app, because that is as much access as they’ll get from me in the future.
Do you have a source for that? It doesn’t peek around my computer a little and/or send back any telemetry? I’m being serious, I’d like to know.
I had to install Zoom in school in 2014, I ended up uninstalling it the next week and reformatted after the quarter. I’m with Apple here. It’s shit insecure non-consenting software that wastes battery 99.99% of the time.
This is a good point; we shouldn't act as though users are necessarily making an informed choice or meaningfully consenting to all the software that's on their computers. Lots of people are forced to install software at economic gunpoint (and probably can ill afford a separate computer to isolate it on).
You can't depend on users and the marketplace to select against insecure software. The market is too distorted to function that way; the people forcing others to use shitty software are often isolated from the consequences themselves, so there's no effective feedback loop to stop it. Having the OS vendor step in is really the only good solution in the short term.
The part that freaks me out is you can’t uninstall it.
“The undocumented web server remained installed even if a user uninstalled Zoom.”
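One way to check whether the leftover server is still alive on your machine: public write-ups of the bug said ZoomOpener listened on a localhost port (19421 is the number from the disclosure, not from this thread), so you can probe it directly. A sketch using the shell's /dev/tcp device; shells without /dev/tcp simply fall through to the "closed" branch:

```shell
port=19421   # ZoomOpener's reported localhost port, per the public disclosure
# bash interprets /dev/tcp/HOST/PORT as a TCP connect; if the connection is
# refused (or the shell has no /dev/tcp at all), the redirection fails and
# the port is reported as closed. The subshell closes fd 3 on exit.
if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
  status="listening"
else
  status="closed"
fi
echo "port $port: $status"
```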
I’m not sure if this is common. When Sony got caught with their XCP rootkit (I’m not sure if they called it that at the time), you had to fill out an “uninstall request” form on their site with your email and location. I’m not sure if the uninstaller fixed the vulnerability.
So maybe a rootkit might describe this, if the vulnerable webserver is privileged. In Sony’s case, the side effects were unintentional (though their history with DRM is egregious). I think Zoom is just a polluted MVP in production.
Most of the insecure software that I run has enough grace to not silently leave behind a web server to automatically re-install itself after I dumped it in the trash can.
This is a horrifying bug. Is Facetime malware? Or do developers with earnest intentions sometimes write buggy code?
If you don't think that's scary then you're just incapable of long term thinking.
Yes, they can do that. So can Microsoft with Windows. So can Google with Android. Will any of them do that? Hopefully not, at the very least. Will all of them do that? Probably not- there's money to be made being the last company standing that actively protects privacy.
For the same reason Apple removed VPN apps from the Chinese iOS app store and multiple news organisations. Because they feel that the money from China is worth obeying oppressive regimes. I really don't think the backlash would be any worse.
Apple didn’t flex anything here, it removed malware from its users computers.
> You may agree with its decision this time, but will you always agree?
Yes, I will. At least I am not going to lose sleep over it until Apple does abuse that power.
I am actually even more happy to be an Apple user knowing that the mothership said hell naw to the naw naw naw naw to this horseshit Zoom has been pulling.
If I were Apple, I'd be taking this as a personal slight against my entire user base.
What if Apple abuses that power in ways that not everyone sees as "abuse", yet they are affected by it?
The reason you are not already seeing this act as abuse is that you happen to agree with it. What if you didn't agree? What if you were in the minority? What if the reason you were in the minority was that the majority simply didn't have the necessary understanding, experience, and/or knowledge to see the issue you see?
If something can be abused, it will be abused; it isn't a matter of if, it is a matter of when. With that in mind, it is better to try to avoid the possibility of abuse than to wait for the abuse to happen and see what you can do after the fact.
Is it appropriate for Microsoft and Apple to push updates that disable and remove those from infected computers? If so, what is the significant difference?
I remember many years ago when the first widespread worms for Windows started circulating. All MS did was publish news and a removal tool. It was publicised greatly, but the ultimate choice was left to the owners of the computers, and that's how it should be.
All the big tech companies (and even a lot of the smaller ones) are becoming increasingly authoritarian, and that's the most concerning thing about this.
I love this. This is why I'll keep buying Apple.
To be honest I'm kinda sick of this argument. Someone brings this argument up _every single time_ a tech company takes action against something malicious. It's a strawman argument at best and at worst a way to give people an out on acting against something that could harm the user.
> Apple's wielding of power in this way is likely to attract the attention of groups such as copyright/IP lobbyists, which have an immense desire to have all "non-authorised" files/software erased from all user's machines.
This will never happen.
This seems like overreach to me. It's annoying but it's not like the Zoom app was silently letting people watch me for hours through my webcam without anyone noticing - the app opens a full screen video sharing GUI for goodness sake. Is being joined to a VC without me wanting it when I click a link annoying? Sure. It also serves the attacker no real purpose and thus has never actually happened in the wild. It's also easily fixed. This is a storm in a teacup.
Moreover, it seems from the last discussion of this on HN that videocall firms do this for a reason: lots of users get confused by bad Safari permission GUIs and end up locking themselves out of the app by cancelling the URL open prompt without thinking (which is apparently persistent!). Then they can't join the call. So the only reason these firms are using such a bad workaround to begin with is that Apple screwed up their user interfaces: why is this not on Apple to fix?
This appears to send a message to Mac devs that a single troublemaking blogger can cause Apple to kneejerkingly nuke features in your app overnight, regardless of whether you are fixing them, whether they're serious or not, or whether it will result in legions of confused and stuck Mac users. Not a great message.
The argument isn't about taking action against something malicious, the argument is about the implications of being able to take that action and what sort of power the company has and if they should have it in light of past abuses (not necessarily but that company, but this is totally irrelevant since companies are made up of people that come and go, they do not have a single "brain" or morality).
> This will never happen.
You cannot guarantee that.
> > This will never happen
> You cannot guarantee that.
What I personally care about, privacy-wise, is the present and near future: will my family be safe on the internet with what I've set up for the next five years? Probably. Will the computer I'm using to type this reply on be replaced within a decade? Probably so. Will my family get a new PC within the next ten years? Yes.
Can you 100% guarantee that the Government of the United States will be intact in twenty years? No, the threats from Russia and China (both nuclear countries) and North Korea (armed or not, they're still dangerous), and space asteroids and epidemics and terrorists and politics and civil wars are not zero.
Can you 100% guarantee that California won't sink into the ocean in 100 years? That would make for some really bad real estate investments, yet people still buy and sell and build there.
People are still living in California, trading with each other, the US Government is still stable, and Apple is currently upholding and protecting user privacy. Also, we still have electricity and the internet. Now is a great time to be alive.
The setting is called "Install system data files and security updates": it's not just system components.
And it is a security update meant to annihilate a serious malware threat, not to mess with legitimate third party software.
I am an Apple customer, and I now feel even happier about being one.
You shouldn't want that. But at the same time I do not think it is a good idea to want Apple to be able to silently remove arbitrary applications they had nothing to do with from your computer.
At the very least they should ask the user about it or quarantine the software and inform the user about it. AFAIK this is what Windows Defender does when it finds malicious files.
So Apple can push a silent update to simply kill software on your machine which, as I understand it, wasn't installed through the App Store.
In this case I may be happy that it's no longer running, but the whole thing is disturbing. Looking at the Security & Privacy settings on my MacBook, I see nothing about running any anti-virus or anti-malware. The closest setting I can see that might be this is under software update, where I have the option to install automatically the system data files and security updates.
It's kind of a stretch for me to consider the ability to kill some software Apple might construe as malware at anytime the same thing as a "security update". To me, a security update would patch Apple code which had a vulnerability.
Where do I tell Apple to whitelist software I've chosen to install outside the App Store, in case they decide they don't like it in the future?
It's actually news to me that I'm running Anti-X on my Mac, I didn't think I was.
Considering the fact that I have to learn new places for all the buttons every time Microsoft gets bored and changes things for the "better" I'm really disappointed.
System76 is looking better and better.
Real principles would involve switching now.
b) Linux seems pretty solid, no...?
c) I'd argue that a better position might be 'support Linux now, in case you need it in the future'.
Oh, and a company that defended the insecure web server up until the moment the public outrage exploded and/or the RCE it had willfully ignored was about to be revealed.
Oh, and a public company at that, that’s trying to convince businesses to use its product as it primary video chat system.
Apple worked with Zoom insofar as Apple cleaned up Zoom’s mess because of Zoom’s poor/unethical software practices.
For example, if you classify potentially unwanted programs, from annoying toolbars to ransomware, on a scale of 1-3, it might be reasonable to provide a checkbox that lets the user switch between being warned about a harmful program, being given the option to uninstall it, and having removal happen automatically in non-critical situations.
If the default is on, then 99% of users will be protected.
Arguably stuff like ransomware shouldn't be optional lest the malware set the option.
Apple made the right call for this instance, especially after the completely insufficient excuses given by Zoom’s CIO.
> Apple's wielding of power in this way... the idea of the OS/platform vendor meddling with third-party software...
This is THEIR app store for THEIR operating system. Why in the world would they not be allowed to control their software's features or third party integrations? It reminds me of the ridiculous argument over Windows setting IE as its default browser (and I've been a web dev since the late 90s).
On MY computer.
> Why in the world would they not be allowed to control their software's features or third party integrations?
Because it is not THEIR computer but MY computer.
But Apple didn't install macOS on your computer. You chose to use THEIR platform.
It reminds me of the ridiculous argument over Windows setting IE as its default browser
What do you mean by that? Instead you reminded me that saying "$our_competitor's product is not secure, so we've helpfully removed it and recommend you use $our_equivalent instead" is likely to run afoul of antitrust laws.
At this point it is up to the user to decide what to do, and most non-technical users will leave it at that (and won't know what else to do), which should keep them safe.
That said, in the context of my original comment, AVs are a bit of a special case because their sole and expected purpose is to detect and remove software they don't like.
I am with the "two wrongs don't make a right" people here. Zoom was reckless and their casual disregard for the initial security report left a very bad taste in my mouth. I'm now highly unlikely to use one of their products willingly. But having Apple initiate unattended removal of software that a person willingly installed on their own workstation machine is also unacceptable, unless they specifically opted in and checked "enable" on something that is very similar to windows defender.
Do you think anyone would have installed Zoom if they knew that it would allow any random website to activate your camera?
The real issue I see is that Apple doesn't inform its users of the existence of this feature unless you really search for it. Having something as simple as a functional equivalent of Windows Defender, with its own icon in Control Panel, fully enabled in the default operating system installation, should be sufficient.
Personally, unless I am specifically aware of the existence and enabled status of some anti-malware application, I don't think it's a good precedent to set for operating system vendors to start silently removing software from peoples' machines. Really all it should take is apple making people aware of the feature's existence.
For a bunch of my family members, even simple errors mean almost nothing to them. They'll stop what they're doing and wait for help even on an error that (it seems to me) they could have simply read and addressed themselves. They've never examined the system tray, and dismiss any popups that come from it. Making them aware of systems like this only serves to confuse, because they don't really understand the problem it's addressing in the first place.
Machines for power users aren't going away. There are more operating systems than you can shake a stick at, and the number keeps growing. But for a lot of users information can be paralysing, and I wonder if having a strongly managed and simplified system akin to a phone isn't a better idea.
Yes, I'm quite confident that millions of "normal users" would still have installed Zoom knowing that.
Microsoft do exactly the same, and have done for over a decade now:
> Malicious Software Removal Tool is a freely distributed virus removal tool developed by Microsoft for the Microsoft Windows operating system. First released on January 13, 2005, it is an on-demand anti-virus tool ("on-demand" means it lacks real-time protection) that scans the computer for specific widespread malware and tries to eliminate the infection. [...] The program is usually updated on the second Tuesday of every month (commonly called "Patch Tuesday") and distributed via Windows Update, at which point it runs once automatically in the background and reports if malicious software is found.
> Having something as simple as a functional-equivalent to Windows Defender with its own icon in Control Panel
MSRT is independent from Windows Defender.
Yes, Apple does have the power to change every bit on your hard disk if you let it. Things like EULAs are supposed to govern to what extent they can use that power.
Please help me understand this. Where in the EULA or elsewhere have users allowed Apple to remotely install a U2 album, or delete third-party software? What of the fact that the EULA itself can change at any time without any posted notice? Or worse, what if Apple is coerced or compromised into delivering a "security update" that steals personal information or bricks your device? Why is this not a legitimate concern to technical professionals? Or, where is the documentation that at least explains this behavior to put curious minds at ease?
I wonder if there's a known exploit for the Zoom server specifically, or if Apple discovered one while looking into it. It seems strange for them to go to these lengths in this case when it sounds like other software has been using a similar technique too. Maybe it's just the reinstallation aspect that makes Zoom's case exceptional?
"Additionally, if you’ve ever installed the Zoom client and then uninstalled it, you still have a localhost web server on your machine that will happily re-install the Zoom client for you, without requiring any user interaction on your behalf besides visiting a webpage. This re-install ‘feature’ continues to work to this day."
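If you want to check whether that leftover server is still running on your own machine, the original disclosure reported it listening on localhost port 19421; treat the exact port as an assumption and cross-check by process name. A minimal sketch:

```shell
# Look for the orphaned ZoomOpener web server two ways:
# by listening port (19421, per the public disclosure) and by process name.
# No output from either command means nothing was found.
lsof -nP -iTCP:19421 -sTCP:LISTEN
pgrep -fl ZoomOpener
```

If either command turns something up, the pkill/chmod commands quoted earlier in the thread are the manual cleanup.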
> Zoom spokesperson Priscilla McCarthy told TechCrunch: “We’re happy to have worked with Apple on testing this update. We expect the web server issue to be resolved today. We appreciate our users’ patience as we continue to work through addressing their concerns.”
Yeah, I bet.
It looked like they decided to remove the server themselves (or at least, as a response to pressure), but maybe they didn't actually have a choice at all.
Zoom also didn't reverse their decision until there was a huge amount of public backlash.
There is, though it's not public yet.
Thank god for Apple putting down the law. This is why I happily pay premium prices...
They have a bunch of cool little apps (that are free) like BlockBlock that let you know when things are happening you wouldn't have otherwise allowed.
For example, BlockBlock warned me randomly about 30 minutes ago about an app that was being silently installed in the background.. something I hadn't seen before called MRT.app.
Turns out - that was Apple silently updating the OS to protect against Zoom. Wouldn't have known if it weren't for these apps.
The "nice folks" at Objective-See is Patrick Wardle, a former NSA rootkit expert who would like nothing more than to install various closed-source components on your computer.
Funny how both companies have "objective" in their names.
Occam's Razor, asking for root was probably just the path of least resistance for the Zoom developers.
What else have they pushed like this? Is there a transparent log? Can we verify if their track record is clean? How many times have they silently broken and fixed their own things? How do we know they won't abuse this?
Isn't this the same dark pattern that we criticized Zoom for? Did I consent to Apple making silent editorial changes to my system?
For having exercised this editorial privilege, will Apple take accountability for every thing that is done by every app on my computer?
It seems like we are being slow-boiled into accepting outrageous things as normal.
If you want complete control over your computer you have the choice of getting yourself a PC with some flavour of *nix, and combing through each update as it comes. I really don't like the future of Apple that you seem to want. Apple has made missteps, sure - like that idiotic U2 album - but I actively want things like this to happen, and I imagine the vast majority of Apple users do too (if they actually ever think about it).
When it comes to public-facing servers that we run it's a different matter of course, but then I'm performing (and delegating) the same task that I want Apple to perform for my MacBook Pro.
I'm the same as you. The whineyness exhausts me. Let the market decide. I just want shit to work.
sh-3.2# softwareupdate --history
Display Name                    Version   Date
------------                    -------   ----
Safari Technology Preview       87        07/10/2019, 21:40:18
Gatekeeper Configuration Data   171       07/03/2019, 14:00:23
Safari Technology Preview       86        07/02/2019, 01:27:12
MRTConfigData                   1.42      06/29/2019, 11:52:13
Gatekeeper Configuration Data   170       06/29/2019, 11:50:33
Safari Technology Preview       85        06/13/2019, 14:48:10
Safari Technology Preview       84        06/10/2019, 00:51:57
TCC Configuration Data          17.0      06/05/2019, 07:04:21
Gatekeeper Configuration Data   167       06/04/2019, 04:17:26
Safari Technology Preview       83        05/30/2019, 19:48:10
iTunes Device Support Update              05/15/2019, 16:27:15
Safari Technology Preview       82        05/15/2019, 16:27:15
macOS 10.14.5 Update                      05/15/2019, 16:27:15
Gatekeeper Configuration Data   166       05/14/2019, 02:36:07
Safari Technology Preview       81        05/03/2019, 00:47:53
MRTConfigData                   1.41      05/02/2019, 06:36:59
XProtectPlistConfigData         2103      05/02/2019, 06:36:37
You can view the history of these installs by running softwareupdate --history | grep -E "^MRT|^Gatekeeper"
I see an average of 2 updates per month.
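The same grep can count the silent definition updates. Here it is run against a saved sample of the listing above; on a real Mac you would pipe `softwareupdate --history` in directly instead of using the embedded sample:

```shell
# Count MRT and Gatekeeper definition updates in a history listing.
# The sample below is a trimmed copy of the output shown above.
history_sample='MRTConfigData 1.42 06/29/2019, 11:52:13
Gatekeeper Configuration Data 170 06/29/2019, 11:50:33
Safari Technology Preview 85 06/13/2019, 14:48:10
MRTConfigData 1.41 05/02/2019, 06:36:59'

printf '%s\n' "$history_sample" | grep -Ec '^(MRT|Gatekeeper)'
# prints 3
```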
FWIW Apple can also mark something as visible (but install automatically) with a certain config file key. A blog mentions this as being done for patching the NTP bug from a while back:
> Marking these updates as ConfigData cues the App Store to not display these as available software updates in the App Store’s list of software updates. These updates are meant to be under Apple’s control and to be as invisible as possible.
> Meanwhile, an automatically installed software update like OS X NTP Security Update 1.0 shows up as a normal software update, but has extra keys in its catalog listing to mark it as a critical update whose automatic installation is set to occur as soon as possible.
You can turn off silent security updates from System Preferences without affecting software update checks for other components.
The option is called "Automatically: Install system data files and security updates" in the Software Update advanced settings.
Because, as opposed to what Zoom did, this is a feature that you can turn off if you want in the "Updates" preference pane.
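For those who prefer the command line to the preference pane, the checkbox appears to correspond to the ConfigDataInstall key in Apple's software-update preferences; the key name is an assumption based on the plist, not a documented API:

```shell
# Disable automatic installs of config-data (XProtect/MRT/Gatekeeper) updates.
# This mirrors unchecking "Install system data files and security updates".
sudo defaults write /Library/Preferences/com.apple.SoftwareUpdate ConfigDataInstall -bool false

# Re-read the key to confirm; 0 means silent definition updates are off.
defaults read /Library/Preferences/com.apple.SoftwareUpdate ConfigDataInstall
```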