It sounds like silent updates from Apple, without automatic updates turned on, are also an undisclosed RCE - or an Apple backdoor, depending on how fine a point you wish to put on it.
Being my OS or hardware vendor does not entitle you to permanent RCE on the machine that now belongs to me.
Unless of course this is just a XProtect rules update or a Gatekeeper CRL update, then ignore what I said.
Of note, Apple has had its own malware detection and removal system in place since the Snow Leopard to Mountain Lion timeframe. Since this article speaks to removal, it sounds like the Zoom local server may have had its signature added to that system.
Beyond the unsolicited reinstallation (== malware) behavior other commenters rightly mentioned, the entire existence of the vulnerable server was a hack to work around a Safari security feature. Zoom wanted to eliminate an extra user click, required by Safari to confirm that it was OK to invoke a local application based on the public zoom link. This server was an implementation of that security "workaround".
That makes this server at least doubly malware. And "vulnerability" understates the case: a negligent implementation that utterly disregarded any security concerns should be considered beyond the pale. That times 1000 for a major software vendor like Zoom.
The server was intentionally left behind, and running, by the "uninstaller". The server would respond to requests by reinstalling the intentionally uninstalled software.
That's malware.
The server itself was deliberately added to work around a Safari security feature that was designed specifically to prevent what they wanted: allowing arbitrary web content to open an app without user consent. They literally added an always-on, persistent server to avoid a security dialog.
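The mechanics are simple enough to sketch. The client installs a tiny HTTP server bound to localhost (public reports put Zoom's on port 19421), and because requests to localhost aren't subject to Safari's confirmation dialog, any webpage can trigger it with an image tag or script. A minimal Python sketch of the pattern, not Zoom's actual code (the endpoint and response here are invented):

```python
import http.server
import threading

class LaunchHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # A real implementation would parse the meeting id out of the
        # query string and exec the client here -- with no user prompt.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"launching client")

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet

def start_server():
    # Port 0 picks a free port for this demo; the real server sat on
    # a fixed, well-known port so webpages could find it.
    srv = http.server.HTTPServer(("127.0.0.1", 0), LaunchHandler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv
```

An attacker's page only needs something like `<img src="http://localhost:19421/launch?confno=...">` to fire the handler with no user interaction at all.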
> The server was intentionally left behind, and running, by the "uninstaller"
FWIW, I don’t think there was an uninstaller? What I’ve seen people describe is that dragging the .app file to the trash wouldn’t remove the server, since it was installed in a different folder.
There was no uninstaller until the Zoom update this week added an uninstall option to the menu.
macOS users expect an app that does not come bundled with an uninstaller to be "uninstalled" by dragging the .app bundle to the trash. This generally leaves behind metadata, but that is not a big deal, as it is just data, not code.
Leaving behind code that continues to execute after the user has removed the application without an option to uninstall it is obviously something that never should have shipped in the first place from a customer trust perspective.
The server was deliberately left running: if it was unintentional it would just error out if Zoom had been removed. Instead it downloaded and installed the client again. That’s fairly clearly designed to override the user’s attempt to remove the software, and is exactly the kind of thing malware does.
There is a real problem: many (often primarily Windows) apps have this bizarre desire to “install” content scattershot across the OS. There is no reason to do this on macOS. OS X happily supports multiple binaries and services per bundle.
If you write an app that needs an uninstaller on OS X, and you don’t need to install some kind of driver, your app is doing things it should not be doing, and does not even need to do.
It is perfectly supported to leave daemons in your app’s bundle. There was no requirement to install the executable to a different directory. They appear to have done it to ensure the web server could silently reinstall the app after the user deleted it.
Yes, as is, dragging apps to the trash cannot do that.
Apple should probably implement an API that allows developers to tell the OS what other stuff should be removed if the app is dragged to the trash.
To prevent devs from removing other people's stuff, you could require that the subcomponents need to be cryptographically signed with the same key as the main app bundle.
edit: or, even simpler solution (potentially) - have a fairly basic API via which devs can install subprograms elsewhere on the system. When you use this API to install something, it adds the thing installed to an OS-level registry. When the user drags the app to the trash, the OS checks the registry and removes anything that was added by the application.
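As a sketch of what that OS-level registry might look like, here is a hypothetical Python model (all names are invented for illustration; no such macOS API exists today):

```python
import json
import os

class InstallRegistry:
    """Hypothetical OS-level registry of files an app installs outside
    its own bundle, consulted when the user drags the app to the trash."""

    def __init__(self, path):
        self.path = path
        self.entries = {}
        if os.path.exists(path):
            with open(path) as f:
                self.entries = json.load(f)

    def record(self, app_id, installed_path):
        # Called by the install API whenever an app drops a file
        # outside its own bundle.
        self.entries.setdefault(app_id, []).append(installed_path)
        self._save()

    def uninstall(self, app_id):
        # Called on trash: remove everything the app registered,
        # then forget the app entirely.
        removed = []
        for p in self.entries.pop(app_id, []):
            if os.path.exists(p):
                os.remove(p)
                removed.append(p)
        self._save()
        return removed

    def _save(self):
        with open(self.path, "w") as f:
            json.dump(self.entries, f)
```

The same-signing-key requirement mentioned above would slot in at record() time: refuse to register any path whose binary isn't signed with the main app's key.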
The real solution is to not install anything outside of your app bundle in the first place. In this case, instead of sticking a plist in ~/Library/LaunchAgents, you can use the SMLoginItemSetEnabled API to add a login item pointing to a helper .app inside the main app bundle.
Then the system will automatically disable the login item if the app is removed.
Edit: It seems like Zoom was using a login item, but using the "shared file list" API instead of the newer (but still dating back to 10.6) SMLoginItemSetEnabled.
An alternative is to just make your daemon check for itself whether the main app has been removed, and delete itself if so.
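A sketch of that self-check, with an illustrative bundle path and the launchd-removal step stubbed out as a callback (the path and callback names are mine, not from any real daemon):

```python
import os

def run_helper(app_bundle, do_work, remove_self):
    """One iteration of a well-behaved helper daemon.

    app_bundle  -- path to the main app, e.g. "/Applications/Example.app"
    do_work     -- the daemon's normal job
    remove_self -- e.g. unload the launchd job and delete its own plist
    """
    if not os.path.isdir(app_bundle):
        # The user removed the app: clean up instead of reinstalling.
        remove_self()
        return "removed"
    do_work()
    return "ran"
```

This is the exact inverse of what Zoom's server did: where a benign helper deletes itself when the bundle disappears, Zoom's re-downloaded and reinstalled the client.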
No, on OS X there is literally no reason to move anything out of your application bundle. Copying the server out of the application bundle had one effect: it made removing the app not remove the server.
There are apis to register for launch, there are launch service plists, etc which all correctly, and by default handle the user deleting your app bundle.
There are already “cryptographic mechanisms to detect modification of your app”: it’s the platform code signing mechanism, and Xcode will produce signed binaries by default.
Finally, your post comes across as saying that you believe leaving the server behind was unintentional due to a lack of APIs (ignoring that removing an application is a sign that the user doesn’t want your app to run any code). Their “unintentionally” persistent server explicitly checked for the app being removed and, if it had been, would redownload and install it without user consent.
I think your edit is describing a package manager :D Or the Mac App Store... (But yes, it would be good if this feature were available for apps that can’t come through the App Store)
There was a reference to an uninstaller in one post, but in any case the expectation is that uninstalling an app on macOS is simply a matter of deleting the app package.
But again, this was deliberately subverted by the client copying the server out of the app package and into a hidden folder in the user's home directory. Again, this is an intentional choice mimicking malware: the only reason to move the server binary is to break the standard app removal process.
But I maintain that the smoking gun for this being intentional is that the server will download and reinstall the client software, which only makes sense if you expect it to run when the user has uninstalled/removed your software.
Yes, because even if you purposely uninstalled the application, it would reinstall itself if you followed a link, even by accident, without really notifying you that it had done so.
That's Apple's walled garden: even when they allow you to sideload applications, they still have the ultimate decision. Of course it's not malware, but probably enough users have vulnerable software that could be remotely exploited that they decided to blacklist it.
Malware is software written to harm the user. Their purpose was not to harm users; they wanted to make their service more convenient. Bugs are bugs: every product has bugs, and many products have security bugs. That does not make them malware.
Intent doesn’t matter, only results. The result is that unless you like random websites being able to activate your camera without your permission, it did harm users. It wasn’t a “bug”. They purposefully hacked around a security feature. Do you really think it was a “bug” that it reinstalled itself?
Then you should call Chrome malware, because there were vulnerabilities with remote code execution (and there will be similar vulnerabilities), so every website could install anything. But that is absurd.
> The result is that unless you like for random websites to be able to activate your camera without your permission, it did harm users.
First of all, you need hard data showing this vulnerability was exploited in the wild. Otherwise it did not harm users; it only opened a way for malicious websites to harm users. And, again, every vulnerability could be counted as malware by that definition, which makes the term meaningless.
> Do you really think it was a “bug” that it reinstalled itself?
It is intended behaviour, and I don't see anything drastically bad about it. If you're opening their website via the corresponding link, you want to use that service. In order to use that service, you have to run additional software, and they are making it easier for you to run it. Only a few years ago every browser supported Java applets, and with Java applets every website could run arbitrary code on your machine. That feature was actually used a lot. Does it make all services which used that feature to overcome browser weaknesses malware? I don't think so.
They probably should have communicated better about that aspect and provided a proper uninstaller for security-conscious users. And not made those vulnerabilities in the first place, of course. But the world is not perfect.
> So would it also be okay for a credit bureau to post all of your information to a website? Would that be okay until it was “exploited in the wild”?
I'm not saying that it's okay. I'm just saying that it's an overreaction to call their software malware.
> Are you really saying it’s okay to run knowingly insecure software until there are reports of it being exploited?
Yes, it's okay. When the Windows security team receives a vulnerability report, they do not shut down all Windows systems in the world until the bug is fixed. Those systems continue to work insecurely until they get the update.
> Java applets ran in a sandbox.
Signed Java applets do not run in a sandbox and have full access to the computer. That's why they were widely used for things that JavaScript did not have access to, e.g. using USB secure tokens for website authentication.
There is tons of malware that doesn't harm the user. Take crypto mining, for example: it raises the temperature and the electricity bill, but those things aren't per se harmful (beyond the financial cost). Or how about a botnet client that harms some other entity but not the user who owns the machine it's on? Or adware? The list goes on and on... calling this nefarious behavior "harm" is a huge stretch, but calling it "malware" is not.
> A program that surreptitiously reinstalls software when you uninstall it is by definition malware.
Not quite. Malware is short for "malicious software" and malice has a specific legal definition: the intent to harm. By your standard, removing a Mac system daemon only to have it re-installed by a software update would classify the entire OS as "malware" - which is an unfair, emotionally-driven characterization.
Windows Defender wouldn't do it silently though, it would make sure you know what happened and get more information about it - even reverse its actions.
Yeah, you’re just looking at the version of the XProtect binary, not the malware signature data files, which don’t live in the bundle and get updated more regularly.
They also ship silently via system_installd; you’re not going to see anything in the Software Update GUI.
The signature files also live inside the XProtect.app bundle, unless in true Apple fashion they’ve got other stuff that’s lurking elsewhere in /System that I can’t locate.
I wonder if this was the real reason behind the Zoom backflip. It certainly cannot be good for business if your app gets marked as malware.
Seriously though, they should have owned the mistake, apologised, and reversed their decision rather than handling it with PR spin. It is a great product, but somehow it has left me with little trust for Zoom. It's probably still not too late.
Does anyone know if BlueJeans et al. are also removing this? Or does it require public shaming, like the Zoom case?
Wasn't there once a company with the motto "Don't be evil."? If that motto is abandoned, Apple should claim it, since they genuinely try to do their best to live by it.
Apple only does so when it's also convenient to their bottom line. They provide the Chinese government backdoor access to iMessage, remove VPN apps from their store to enable censorship, and have we all forgotten they are a PRISM partner? These actions seem pretty "evil" to me.
> They provide the Chinese government backdoor access to iMessage
No. What gave you this idea? iMessage is end-to-end encrypted. The keys are managed by the devices themselves. There is no facility to backdoor or intercept the messages.
Apple acts as a registration server, notifying your devices when a new device signed in as you joins the pool but the devices themselves tell you when this has happened. That’s all client-side. If the server didn’t tell the client about a new peer it would never encrypt a copy of the message for that peer and that peer wouldn’t get the messages.
> iMessage is end-to-end encrypted. The keys are managed by the devices themselves. There is no facility to backdoor or intercept the messages.
That is only a half-truth. Apple controls the key infrastructure; they may replace your keys with arbitrary ones under demand, coercion, or compromise by any number of bad actors. The software is closed source, making it impossible to verify any claims to the contrary. If they truly valued privacy, why not open source iMessage, allow users to verify iMessage keys, hire an independent third party to audit their infrastructure, or all of the above?
Moreover, Apple has moved iCloud infrastructure to Chinese data centers to enable spying on millions of innocent people. They have removed apps from their store which circumvent Chinese censorship. These are truly shameful acts which have appropriately drawn criticism from human rights watch organizations.
The government is bad; so is a trillion-dollar American company choosing to collaborate with it by enabling spying on their users just to make even more money. They're enabling a government to track down and torture or murder dissidents.
Regarding them as distinct actors is a mistake. Both enable each other: Apple enables pervasive surveillance; in turn China enables huge profits. How bottomless is the guile of a corporation willing to put lives at risk to make some money?
They are basically solving a self-inflicted problem. The real issue there is the fact that macOS doesn't provide a standardized way to completely uninstall an app.
Snapd, flatpak, appimage all can do this on Linux. Even docker/singularity can sort of do the same for some, if you pass through all the necessary devices and sockets from the host. When you remove the app (or container) all the files it brought with it or created during runtime are now gone.
Even the regular Linux package managers like apt, dnf, pacman track which files were installed by which packages, so they can be removed when the package is uninstalled. The downside to these is they don't track files created at runtime so a lot of the config or cache files created can be left over if the package itself doesn't remove them.
A sandboxed macOS app would offer similar protections as what snapd, flatpak, etc provide on Linux.
> Even the regular Linux package managers like apt, dnf, pacman track which files were installed by which packages, so they can be removed when the package is uninstalled.
Technically speaking, Zoom could have abused dpkg post-install scripts, or pulled similar tricks, to install their malware server and leave it behind after the package was removed. Linux distributions aren't invincible to these shenanigans.
> Technically speaking, Zoom could have abused dpkg post-install scripts, or pulled similar tricks, to install their malware server and leave it behind after the package was removed. Linux distributions aren't invincible to these shenanigans.
This is correct, but such a package should not make it into the distribution's package repositories.
The install scripts in rpm/deb can do all sorts of stuff that doesn't get tracked/reversed. There are package linters to help detect some of this, but it's largely a faith-based endeavor either way.
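The trick is easy to demonstrate without touching a real system. This Python snippet simulates, entirely in temp directories, a postinst copying a daemon to an untracked path, after which "removing the package" leaves the copy behind (all paths stand in for real ones):

```python
import shutil
import tempfile
from pathlib import Path

def simulate_postinst_trick():
    """Simulate a maintainer script copying its daemon to a path the
    package manager doesn't track, so removal leaves it behind."""
    tracked = Path(tempfile.mkdtemp())    # files the package manager tracks
    untracked = Path(tempfile.mkdtemp())  # stands in for e.g. a hidden dir in $HOME
    (tracked / "server").write_text("daemon payload")
    # What a malicious postinst would do:
    shutil.copy(tracked / "server", untracked / "server")
    # `dpkg -r` (or apt remove) deletes only the tracked files:
    shutil.rmtree(tracked)
    # The untracked copy survives the "uninstall":
    return (untracked / "server").exists()
```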
> all the files it brought with it or created during runtime are now gone.
How would this work with apps that create things that a user would expect to persist, like downloads (kept after uninstalling a browser) or office documents (kept after uninstalling the office suite), or media production apps, IDEs, etc.?
It could have some rule like "let it be if it's in the user home directory" or "only remove stuff in these system directories" but it seems kind of fragile, like what if you use a text editor to create a system config file and then uninstall the text editor?
> How would this work with apps that create things that a user would expect to persist, like downloads (kept after uninstalling a browser) or office documents (kept after uninstalling the office suite), or media production apps, IDEs, etc.?
I can't see how it would. Making the uninstall remove all the files created at runtime is the wrong solution to a real problem, which is better solved by forcing all packages to be self-contained and making their installation an idempotent operation. If we're able to do this, there's not much to clean up afterwards, and we ensure there's no privileged malware left (such as Zoom's one) as both the package install scripts and runtime would be able to run unprivileged.
I use NixOS, an OS built on Nix, a functional package manager that does this, and being able to set my system to a known state without going through the steps of formatting and reinstalling is really nice.
Such a system would not allow the application to write to any place outside of its sandbox or a designated user document volume. The system can't be touched. If you want to create a system config file, you are responsible to break the glass and move it and then all bets are off.
I can imagine an allowance to "break the glass" within the context of the app so long as the app invokes an obvious-to-the-user common dialog (like the typical file>save / file>open / choose-a-folder), but there would be no "breaking the glass" in an invisible / programmatic manner. That sounds quite nice.
That’s exactly how the macOS AppStore apps work. Your app gets permission only to those files that were intentionally opened by the user via the system open dialog.
Malware intends to harm you, with intent to steal or destroy or hold hostage your information, data, and computing resources.
Does the ZoomOpener app have malicious intentions towards you, your computing resources, or your data? Not just “could it be unintentionally exploited” - note the unintentionality! - but specifically “this was intended to harm”.
If not, then you need to reconsider your use of the word “malware” - a word shortened from “malicious software” - and find a better way to describe software design patterns that aren’t compatible with “drag it to the trash” but also aren’t intended to harm.
Adobe Creative Cloud, any $$$$ audio software DRM, and HP printer drivers all use similar patterns of “can’t just drag it to the trash”, and are all similarly annoying to remove - but they are not malware, any more than this ZoomOpener is.
>The real issue there is the fact that macOS doesn't provide a standarized way to completely uninstall an app.
And Windows does? Uninstallers are completely at the behest of application developers. No consumer OS but iOS actually provides any sort of true app level sandboxing.
Control Panel “add and remove programs” usually works?
There’s no equivalent on Mac. Yes, dragging the app to the trash is a thing but that leaves behind content in ~/Library/caches, ~/Library/Application Support, and ~/Library/Preferences . It’s been somewhat of an issue with Mac ever since they first put a hard drive on the original ones back in the 80s...
Edit: I literally cleared several GB of junk out of my application support folder left behind by just one app today, so it’s fresh on my mind. OmniDiskSweeper is great for finding this stuff.
The point is, all of those files are generally still left behind on Windows as well. Uninstalling a program in Windows is roughly equivalent to dragging it into the trash in macOS.
> all of those files are generally still left behind on Windows as well.
Citation needed. That was the case in the 90s, these days most apps (device drivers aside) uninstall fairly cleanly with only settings/configurations stored in the registry still resident afterwards. Startup programs remaining after an uninstall is straight-up a bug.
Windows installers are declarative and data-driven. An installer script does not simply use the file copy operations a usual app would use during runtime.
Instead an installer is driven by a number of "database" tables that specify the installer actions in a declarative way.
There are several benefits to this approach. The declarative actions are reversible and the uninstall actions can thus be inferred automatically. This removes the burden on install authors to create a script to reverse the install operations.
This includes registering services/daemons (will unregister on uninstall), registering protocol handlers, filetype/program associations, desktop shortcuts etc.
The author of the install script can (in non-Store installers) escape the declarative model and execute a specific program during the install process. This is rarely needed, though. So the default is that the uninstaller will uninstall by completely reversing the install actions.
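A toy model of that declarative idea, in Python rather than MSI tables: each action is plain data, so the uninstall sequence can be derived mechanically by inverting the actions in reverse order (the actions and file names here are invented for illustration).

```python
from pathlib import Path

# Each install action is data, not imperative script.
ACTIONS = [
    ("make_dir", "ExampleApp"),
    ("write_file", "ExampleApp/app.bin"),
]

def install(actions, root):
    for op, rel in actions:
        if op == "make_dir":
            (root / rel).mkdir()
        elif op == "write_file":
            (root / rel).write_text("payload")

def uninstall(actions, root):
    # Derived automatically: invert each action, in reverse order.
    for op, rel in reversed(actions):
        if op == "make_dir":
            (root / rel).rmdir()
        elif op == "write_file":
            (root / rel).unlink()
```

Because nothing happens outside the declared actions, the uninstaller needs no hand-written cleanup code, which is the property custom install scripts break.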
No, dragging an app from "Applications/" into Trash is equivalent to dragging a program folder from "Program Files\" into Recycle Bin. Actually that's exactly what it is.
Have you checked ProgramData, Local, and Roaming recently? Or your registry?
Also, Add/Remove only works if the program adds itself there. And remove only works if the program added an uninstaller. Also, even then, it still leaves crap behind.
At least there is a convention to have an uninstaller. If you create the installer through conventional toolboxes the uninstaller comes for free. That's how most Windows programs have both.
macOS has the same convention. Conventional toolboxes are mostly not-Windows and not-Microsoft toolboxes, indicating that even if it's a convention, it's apparently not a Windows-native convention.
Doesn't "Add and Remove Programs" just run whatever custom Uninstaller came bundled with the program? Is there anything it can do with programs that don't come with an uninstaller?
Which is completely frustrating, because Mac is totally in the position of using its built-in capabilities to deal with this. The Mac Bundle (.app) format could solve this entirely. All application specific data should be written inside of the bundle folder, so that when you delete the app, you delete the thing entirely.
I mean, maybe you need a "user data" bundle of sorts tied to the specific application. If you delete the app, it deletes all the user data bundles as well.
The default installer and bundle runners should be controlling the process. "XYZ App is attempting to write data files outside of its bundle location. These may not be cleaned up if you delete the application. Do you want to continue?"
The unix permissions system and the Mac bundle format should completely solve this problem. I honestly just don't get why this still happens. Doesn't iOS at least get this right?
As a Mac app developer, you mostly have to go out of your way to install things[1] outside of the app bundle, and you typically don't want to because that's only extra maintenance you have to do to update / version those files. Most apps don't, and that's why most apps actually are effectively removed when you drag the bundle into the trash.
There is already a concept of a "user data" directory for the app, which is determined just like on iOS. There are also other directories for things like caches that the system can clear if it's low on disk space.
Of course they could sandbox Mac apps just like on iOS, and Mac App Store apps essentially already work that way.
I can assure you, though, that any barrier they put in the way of letting non-App Store apps run however they always have will be met by strong resistance. It's easy to point to this occurrence with Zoom and call it unreasonable, but prompts like what you're describing will undoubtedly disrupt what other people see as totally valid use cases and ruin the UX.
[1] By "things", I mean things like a separate web server process. Apps do store files in app-specific folders like "Application Support" by convention, but not for separate processes.
But what I don't get, why even have an "Application Support" directory at all. There is absolutely nothing of value added to me (as a user) to have files stored there. It's just one more place I have to look to clean up after an application is deleted. So dumb and adds zero value.
I'll put my files into Documents (or whatever). And you (as an application developer) put your files in your app bundle directory. That should be the contract for most (all?) user space applications.
I agree my prompt idea is generally poor and wouldn't work, it was mostly just for discussion purposes. But the mechanics of a fix for this are in place, rogue daemons that can't be deleted are just unacceptable.
> But what I don't get, why even have an "Application Support" directory at all.
The bundle isn't normally writeable by the app itself. It's generally good security practice to not have your app capable of rewriting what itself can do, and iOS is the same way. You can't write into your app bundle, so anything at all that you want to persist needs to go somewhere else (typically in "Application Support").
That would include mostly anything the app persists to disk that the user didn't explicitly choose, like an sqlite database or something.
> But the mechanics of a fix for this are in place, rogue daemons that can't be deleted are just unacceptable.
I see where you're coming from, but I think this is really the only balance Apple can strike here. That balance is essentially that non-App Store apps/installers are not sandboxed and can mostly do whatever they want, but Apple can step in with X-Protect (like they did here) and remove anything egregious.
They've also moved forward recently with the concept of notarizing, which will still allow apps/installers to do whatever they want, but they'll at least need to be validly signed by a verifiable private key that Apple can then revoke if you (as the developer who signed things with it) do anything egregious.
Again--it seems unreasonable in this case, but there are undoubtedly numerous very popular unsandboxed Mac apps that do sketchier things for more valid use cases. Any prompts or restrictions are going to be very disruptive and prompt a new wave of blog and HN posts about Apple trying to kill app distribution outside the App Stores.
Again, good reply thank you. Just to continue a little more...
> The bundle isn't normally writeable by the app itself. It's generally good security practice to not have your app capable of rewriting what itself can do, and iOS is the same way. You can't write into your app bundle, so anything at all that you want to persist needs to go somewhere else (typically in "Application Support").
I don't buy this. If you are writing "anything at all" into Application Support, this is no different than writing to the bundle. An executable binary written into Application Support is just as effectively the same thing as an executable binary written into the bundle itself. I don't see a difference between a program rewriting its own code vs. writing an executable into another location.
There's no added security and no added user value to writing into Application Support vs. the App Bundle itself.
Not arguing with you, I appreciate you explaining the current state and conventions. Thanks for the discussion.
Mac OS is a multi-user operating system. Most applications that are installed are global to the system although each user also has their own Applications folder. The Application Support folder resides in the user's Library folder and contains information that the app needs when running for that particular user. Storing information in the .app bundle would affect every user on that computer.
You're saying that an application can't write user-specific information into the bundle and sort that out? There's no difference between these two (hypothetical) file paths: /Applications/Foo.app/Contents/UserData/alice/settings.plist versus ~alice/Library/Application Support/Foo/settings.plist.
These two file paths are effectively the same. And when the "global" application gets deleted, I most definitely want all the user data deleted with it as well.
And no, I'm not talking about saved output (documents, etc.) that are generated by the application. I'm saying there is just no need for Application Support at all; it adds no value and is just used by convention.
It’s a design choice, and of course it’s arguable, but it has its (good) reasons.
By design choice, user data of any app should only be readable and modifiable by the active user. If user data is stored somewhere under the home folder, this mechanism comes for free for the app developer and the OS handles it. If you wanted that same level of privacy in the app bundle, developers would have to handle it themselves (and well).
But then what happens when you uninstall? Either you are blocked because you can’t remove data belonging to another user, or you somehow require escalation to admin privileges to remove the data of all users at once. Which might or might not make the other users happy if they were unaware of the uninstall...
With the current data model (pretty much shared across the whole UNIX world and even Windows), if user A deletes the app but user B still needs it, you just have to reinstall to keep user B unharmed.
Your proposal is not bad for optimizing disk space reclaimed at removal; it just happens that (quite rightfully) OS vendors choose to optimize for data persistence and security instead.
- The Unix permission model makes it difficult to set that up in a way that doesn't allow someone to access someone else's user-specific data, or otherwise tamper with the app.
- Some setups sync home directories across the network, so everything related to a particular user needs to be under their home directory. Applications, on the other hand, would be part of the read-only OS image.
- If you delete a user in System Preferences, it gives you the option to delete their data or archive it. This works by deleting or archiving their home directory; things would be much messier if their data were scattered all over the disk.
- It breaks the concept of an app bundle as an immutable, sealed tree that can have its validity checked via code signature.
- If you're searching through your disk to see what's using up disk space, mingling user data with the app itself makes it harder to distinguish between the two.
- If the app decides to start storing its data in iCloud and syncing it across devices, it wouldn't make any sense to have it within the app bundle: among other things, you don't want the app's immutable data (i.e. what's in the app bundle today) to count towards your storage quota, and the data may be shared between, e.g., the macOS and iOS version of an app. (Application Support itself is not synced to iCloud, though.)
> And when the "global" application gets deleted, I most definitely want all the user data deleted with it as well.
Do you?
- What if you're just upgrading the app? If you download a new app bundle, drag it into Applications, and tell Finder to overwrite, it will delete everything in the old bundle.
- What if you're migrating to a new Mac? You may prefer to reinstall applications manually instead of copying everything, but that doesn't mean you want to lose your data.
- Even if you do want to delete the app entirely, does that actually mean you want to lose your data from that app? What if you're planning on reinstalling it in the future? iOS does automatically delete data when deleting an app, but it's not at all obvious to me that that's the best behavior.
All these are fair and good points. But all of these are still workable problems that should be very much in the capable hands and constraints of the operating system and "app bundle runner" (call it) to deal with.
Your points are all solved by using the user's home directory, very true. But the problem simply remains -- I think this is my main point -- that the "App bundle" has failed the user by not allowing for cleanup of everything that the application has created. If an application can simply write any executable into any user directory it wants, there's no code signing or integrity checks on the app itself that matters.
Applications should be treated as hostile, just like a user is treated in a multi-user system.
I think it's a failure of an operating system to not be completely in control of the limitations and installation of any program. It's also a failure of the developer community to not stand up and insist on this too. A sand-boxed model is what we should all be striving for here. Force the bad actors out.
I'm not a Mac developer (obviously). But I am an old Unix neckbeard. So I get all of your points; Unix invented this problem.
In a multiuser system, individual users are treated as hostile. Going forward, so too should applications. That's the failure in all of this (and it's been with us a long time now). Our security model is based on not trusting users, but in trusting applications. This thinking was born in the 60's when users couldn't install/execute any random download.
It's interesting that the unix model of security is hurting us more today than helping. The time for sandboxed applications is definitely overdue.
For the record, sandboxed macOS apps do exist (as someone mentioned upthread) and have a design somewhat similar to what you're describing, but uglier for the sake of backwards compatibility. For each app you have a directory like /Users/foo/Library/Containers/com.some.bundleid/Data, which is an entire virtualized home directory, containing not just a Library subdirectory but also Desktop, Documents, Downloads, etc. The latter directories shouldn't actually be used, but they're there in case some legacy code tries to access them. When an app presents an open or save dialog, the dialog is out-of-process and unsandboxed, so the user can pick a file from their real home directory or anywhere else; once they do so, the app is automatically granted the ability to access that particular file.
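A rough sketch of that path layout in code (the paths come from the description above; the redirect helper is my guess at how the legacy-path virtualization behaves, not Apple's actual implementation):

```python
import os

def container_data_dir(home: str, bundle_id: str) -> str:
    # Each sandboxed app gets a virtualized home directory under
    # ~/Library/Containers/<bundle id>/Data
    return os.path.join(home, "Library", "Containers", bundle_id, "Data")

def redirect_legacy_path(home: str, bundle_id: str, requested: str) -> str:
    # Hypothetical illustration: legacy code asking for ~/Documents (etc.)
    # gets the container's copy of that directory, not the real one.
    if requested.startswith(home + os.sep):
        relative = requested[len(home) + 1:]
        return os.path.join(container_data_dir(home, bundle_id), relative)
    return requested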
You're trading one problem for another. What if I want to delete a user instead? Now I have that user's crap in every application bundle.
However, the OS should perhaps insist on a particular location within "Application Support/" that each app can write to, and when the application bundle is deleted, provide a way to delete those support files as well, either for that user alone or for all users within permission (can be a system-wide configuration).
I would argue, I think most systems have more quantity and churn of applications than users. Meaning, it would be better to pay some overhead to deal with "user's crap" in every application bundle than to deal with "application crap" in every user's home.
Your second paragraph though is probably closer to a realistic solution. That is, the OS provisions an Application Support directory and restricts the application to using it exclusively. Any application uninstalls can (via admin prompting or configuration settings) then also delete the support directories as well.
You absolutely wouldn't want user data stored in app bundles. You can back up all user data by backing up /Users/ or /home/ on Linux.
As an application developer, I want to be able to drop a new bundle over the old version and NOT LOSE ANY USER DATA. The only way to do that is separate them completely.
True, for many uses it would be cleaner to write application data to the bundle, even though this isn't commonly writeable.
As someone who develops professional apps for Mac, I can think of a few circumstances where this definitely won't work, or at least will introduce other compromises or require a whole lot of extra effort from developers and/or users.
- You want to uninstall/re-install an app without removing the application's data
- The application's data can become large (think raw audio or video libraries) and users request to store it in a separate disk
- Users want to personally organize the data they make with your app (by project, client, personal/work, etc.), or use it with other apps
- Your "installer" is just a zip of the .app bundle, and there is no obvious opportunity to assume admin privileges and make the bundle writeable
Most of these could be solved by having a separate "sandbox" a la iOS or MAS that can be moved or, at the user's option, remain on disk when uninstalling; as far as I know neither system offers these capabilities.
Most of the time, the app's directory in Application Support has the per-user configuration files. And games put their saves there. In both cases, I really don't mind the files staying. I can always change my mind, reinstall the app and resume where i left off.
For example, Steam games stored in Application Support. Why?? If I install a game from Steam, it should be installed somewhere in the Steam app bundle. When I delete the Steam app, I delete everything related. So dumb.
Yeah but this is clearly a special case as Steam is not AppStore distributable AND it is basically an alternative to the AppStore.
They could have made other choices and they have their own logic, but it's clearly not Apple's role to oversee how their competitors operate.
Antitrust regulators, geeks, and the media would instantly gather pitchforks to run on Apple if they even dared to hypothetically mention it.
I'd argue that linux distros have this power, too, and they haven't either (unless you use a snap, which has compatibility and performance issues).
If you uninstall a .deb or .rpm or AppImage, the files you wrote into XDG_CONFIG_HOME (defaults to ~/.config) won't magically get cleaned up.
I'd love to be wrong here, BTW! I've had several PhotoStructure users try to reset their configuration by uninstall/reinstall, but that just removes the files in the installer, it doesn't do anything to files in user directories (and I'd be really surprised if that was ever a thing). Can you imagine the havoc from `apt remove vscode` and having it remove user's keybindings, extensions, and anything else?
That would only purge config files that came with the package (which would live in /etc/). Config files in the homedir of a user are not managed by apt.
Thanks, I was wrong; not sure why I believed that it worked. So in the end Linux needs a CCleaner-like tool as Windows has, and it should also contain a browser cache cleaner. I sometimes find a few GB of space in Chromium localstorage/cache and have to hunt them down and delete them manually.
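The manifest-based removal semantics described above are easy to demonstrate with a toy simulation (all names and paths here are invented; `root` stands in for the filesystem root):

```python
import os
import shutil
import tempfile

root = tempfile.mkdtemp()

def install(root):
    # The package manifest only knows the files shipped in the package.
    app_dir = os.path.join(root, "opt", "someapp")
    os.makedirs(app_dir)
    open(os.path.join(app_dir, "someapp.bin"), "w").close()
    return [app_dir]                      # the recorded manifest

def first_run(root):
    # At run time the app writes config under XDG_CONFIG_HOME --
    # which no manifest ever learns about.
    cfg = os.path.join(root, "home", "user", ".config", "someapp")
    os.makedirs(cfg)
    open(os.path.join(cfg, "settings.json"), "w").close()

def uninstall(manifest):
    # apt/dpkg-style removal touches only the declared paths.
    for path in manifest:
        shutil.rmtree(path)

manifest = install(root)
first_run(root)
uninstall(manifest)
# The binary is gone; the per-user config survives the uninstall.
```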
> The default installer and bundle runners should be controlling the process. "XYZ App is attempting to write data files outside of its bundle location. These may not be cleaned up if you delete the application. Do you want to continue?"
If you do that, the entire system stops working. Everyone will just click "ok" and then still gets mad when uninstalling doesn't fully clean things up.
So users who want crapware can get it, and users who want a clean secure system can get it, and app developers are pressured to build apps correctly. And App Store/Gatekeeper can prevent misbehavior for apps distributed through Apple's friendly marketplace. Win-win-win.
Well, that's fair. But hopefully in the process of getting mad, it starts to reflect negatively on the application vendors and/or Apple directly. Maybe that will be enough for them to change.
Maybe the app bundle runner should be logging files written outside of the bundle folder? Then the uninstall process will wipe those out?
Wipe out all the things that you create with the app? All the text you created when you uninstall a text editor, the photos you touched up and saved under a new name, the audio recordings you made?
> All application specific data should be written inside of the bundle folder, so that when you delete the app, you delete the thing entirely
Not even Apple follows that ideology though. I can't be the only one who has had to delete the gigs of GarageBand data from ~/Library/Application Support on an under-specced company laptop with a 128GB SSD.
Nor does Windows for that matter. Running installshield with some command line parameter doesn't count.
Linux package managers come close, but not all third party apps are installed like that.
> Having a standard installer toolkit that comes with the os and used by many os updates along with a centralized uninstall UI is nothing?
If we are talking about OS updates, OSX has the same thing.
You are not required to use Windows Installer. And even if you do, you are not guaranteed that everything will be removed, be it due to malice or incompetence.
Not even Linux can guarantee that. Something like the Nix package manager would be closer to what's required. Plus a sandbox.
Funny, it is also self-inflicted because Safari inspired Zoom to do this hack by breaking the correct behavior of protocol links.
> This is a workaround to a change introduced in Safari 12 that requires a user to confirm that they want to start the Zoom client prior to joining every meeting. The local web server enables users to avoid this extra click before joining every meeting.
I don’t understand how a “you’re about to jump out of the app” confirm panel is breaking protocol links. I actually want this behavior for zoom and any other app...
The first time you use that protocol, of course a warning is appropriate. To prompt the user on _every_ external protocol click seems.. hostile to the concept of linking
It needs to happen at least for every combination of source domain and protocol. Otherwise websites can drive-by open zoom, reminders, or whatever other app you have installed to achieve some marketing or malware goal (or just DoS your computer).
Edit: Once you consider social sites with user-submitted content, like reddit, it might be best if you’re prompted every time.
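The policy sketched above — remember grants per (source domain, protocol), but always prompt on sites with user-submitted content — is simple to state in code (the function name and shape are mine, purely illustrative):

```python
def should_prompt(grants, source_domain, protocol, user_content_domains=frozenset()):
    """Decide whether to show the 'open external app?' dialog.

    grants: set of (source_domain, protocol) pairs the user already approved.
    user_content_domains: sites like reddit, where user-submitted links
    mean we prompt every time regardless of prior approval.
    """
    if source_domain in user_content_domains:
        return True
    return (source_domain, protocol) not in grants

# First click from a domain prompts; approval is remembered per
# domain+protocol, so a different domain (or protocol) prompts again.
grants = set()
if should_prompt(grants, "example.com", "zoommtg"):
    grants.add(("example.com", "zoommtg"))   # user clicked "allow"
```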
I think "breaking the correct behavior" might be a bit of misleading term -- it's obvious that in some contexts we would want to be warned about the context switch and in others we'd be annoyed by it. I could totally understand why Zoom would, for frequent users, want the users not to get an annoying dialogue. On the other hand, if I'm a rare user or don't know a program is installed on my system, the context switch dialogue would be super useful alerting me that something is happening.
So I think you could argue Apple might want to let you override that, I don't know what the language is and whether there's a "click here to skip this next time" box on the dialogue. It's possible they got the annoyance versus security tradeoff wrong.
But imho they didn't break the correct behavior any more than Microsoft "broke the correct behavior" of privilege escalation by adding a dialogue box with UAC in Windows Vista.
> But imho they didn't break the correct behavior any more than Microsoft "broke the correct behavior" of privilege escalation by adding a dialogue box with UAC in Windows Vista.
Well, it did a shitton, didn't it? To this day, most people I know disable UAC because of how annoying it is.
What Apple should do, though, is provide an API that developers can hook into, wherein, when the user drags the app to the trash, it can also uninstall anything else the app placed elsewhere on the system.
The bigger question -- what other desktop apps have similar, latent daemons hanging around? I'm always wary of installing stuff like this (e.g. zoom, go2meeting, teamviewer).
Razer gaming keyboard drivers spin up a webserver for controlling the chroma, which I've always found scary. (Using the much more reasonable community open source drivers that don't do that.)
Why in the world would a keyboard driver need to run a webserver? Client software should just be able to call driver functions directly in order to configure the keyboard. It sounds like they hired a web developer to write their driver configuration tool and didn't give any architectural constraints or have someone managing the project who knows best practices or security principles.
I don’t have the keyboard, but it’s my understanding that application developers can customize the lights on the keyboard. For example if you die in the game your keyboard turns red.
To do that you need IPC, and a JSON endpoint is the most popular form of RPC. If the server listens on localhost, I don’t see any issue with it - any issue you would have with IPC, you would have with this style of RPC.
Now they could have provided a library to communicate directly with the keyboard - but I think the drawback was games developers didn’t want to integrate it into their games.
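For illustration, a minimal localhost JSON endpoint of the kind described — the route, payload, and behavior are invented, not Razer's actual API:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class ChromaHandler(BaseHTTPRequestHandler):
    """Toy localhost JSON endpoint: a game POSTs a lighting effect,
    the "driver" acknowledges it. Route and payload are made up."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        effect = json.loads(self.rfile.read(length))
        # A real driver would talk to the keyboard here; we just acknowledge.
        body = json.dumps({"applied": effect.get("color")}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet

# Binding to 127.0.0.1 keeps the port off the LAN -- but note that any web
# page your browser loads can still issue requests to it, which is the core
# of the Zoom problem discussed in this thread.
server = HTTPServer(("127.0.0.1", 0), ChromaHandler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
```

A client then just POSTs JSON to `http://127.0.0.1:<port>/...` — and so can any other local process, or any page with a bit of JavaScript, which is exactly the drawback.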
> Now they could have provided a library to communicate directly with the keyboard
They could have also opened a named pipe. Much cleaner, faster, less overhead than a web server, and way more secure (last time I checked, a website could not simply perform a request on a named pipe via JavaScript. With a local web server however...).
> they hired a web developer to write their driver configuration tool
This is the real problem behind all of these cases of "why the heck is tool/driver/app/whatever X running a web server locally?" - the market is full of developers only knowing HTTP, and when someone just has a hammer, every problem looks like a nail.
There is a real shortage of devs who know about all the other IPC techniques supported by modern OSes (of which practically all of them are much faster, lower latency, less overhead-y, more secure and come with less unintended side effects than a local web server).
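As a concrete example of one such technique: a Unix domain socket (the POSIX relative of a Windows named pipe) gets you the same local RPC, but gated by filesystem permissions and unreachable from a web page's JavaScript. A sketch, with an invented "driver" protocol:

```python
import os
import socket
import tempfile
import threading

sock_path = os.path.join(tempfile.mkdtemp(), "driver.sock")

srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(sock_path)
os.chmod(sock_path, 0o600)          # only this user may connect
srv.listen(1)

def serve_one():
    # Accept a single client and echo its command back with an ack.
    conn, _ = srv.accept()
    conn.sendall(b"ack:" + conn.recv(1024))
    conn.close()

threading.Thread(target=serve_one, daemon=True).start()

# A local client (e.g. a game wanting to set the lighting) connects by path;
# a browser cannot do this from page JavaScript.
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(sock_path)
client.sendall(b"set-color-red")
reply = client.recv(1024)           # b"ack:set-color-red"
client.close()
srv.close()
```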
Most devs of most OS X desktop apps are convinced their junk is important enough to pollute LaunchAgents with and none ask for permission. Be a "normal" user, install software you think is useful and you'll end up running a hosting service for a thousand "latent" daemons and "helper" programs.
Not as serious as leaving an httpd around and then letting sites hot-mic you with it -- obviously -- but on par in terms of a few select adjectives.
I don't know about the rest of you, but since crap like Zoom runs just fine in Firefox, that's where I'm keeping it. Ironically, I trust the browser's sandboxing way more than the vendor's app, which inevitably seem to open up my computer to some crazy vulnerability or phone home with my personal data or some other nonsense. I feel, perhaps wrongly, that I have more control over what the browser executes and what (web) applications can access, so Zoom, BlueJeans, Slack, Discord, and the rest are getting trashed.
Because zoom’s patch will only help users still using and updating zoom, while those who have uninstalled zoom are still vulnerable (because the uninstaller leaves the web server behind).
I’m sure Apple is as upset as anyone else. It effectively breaks their sandbox model so they’ll probably be working hard on a way to plug that hole gracefully.
The sandbox only applies to software devs that want to use it or those that wish to sell through the Mac App Store. I don't think Zoom is in the MAS at all (I don't see it in a quick search anyway), and a standalone installer is free to do whatever it wants and can convince users to go along with (up to and including, in principle, bypassing SIP though since that significantly raises the effort bar I've only ever seen niche stuff request it). And it's completely legitimate to want to run a server on your system too, there is no hole. Zoom simply acted as malware, taking actions without user permission.
SIP cannot be disabled by anything running in the current boot session. Once the root volume is mounted, the SIP flags are set in the mountpoint. The root volume obviously cannot be unmounted while it is booted from.
I don't fully understand the difference, but signed is different from notarized: notarized means you uploaded the binary to Apple, whereas previously you could sign without doing that.
I haven't looked enough to understand what is gained by notarization. Does Apple want to search your binary for maliciousness or rulebreaking (potentially even at a later date) so that it might revoke the notarization/signature?
Some of this stuff seems a tad disingenuous. Like preventing debugging. The debugger APIs on Mac already pop up a password prompt, limiting the usability in malware (and actual use, like trying to debug over ssh). Meanwhile, a culture of producing separate binaries for debug and for end users (debug builds lacking optimization, allowing additional permissions) is in my experience a great way to fail to reproduce legit customer-facing bugs during development and have greater difficulty diagnosing them when they occur on a real live user machine.
As far as I understand, notarization is intended to catch malware before it can be distributed. The traditional signing mechanism can protect users against malicious software because Apple can pull certificates used to sign malware.
Watch the security talk. macOS is on the path to adopt iOS permissions model and long term roadmap is to apply the sandbox to everything, with the option to explicitly disable it on per-case basis.
A path similar to how Windows 10 is now converging the Win32 and UWP sandbox models, or how ChromeOS sandboxes GNU/Linux processes.
The MacBook is a general purpose computing platform. The iPhone and iPad are not. Locking down the Mac will make it unusable for many, many people. It will indeed be the death of the platform, as most devs abandon it entirely.
As a Mac user who develops high performance scientific applications portable between all UNIXen (Linux/Mac/BSD), I still can write my code pretty easily on the platform.
I personally don't use Homebrew, et al. but, I have a Linux VM which handles that stuff pretty well.
I didn't develop a Mac specific "application" though.
My experience has been the opposite. Of all the people I've worked with using Macs, all of them were developing cross-platform open source software. I've yet to meet a single developer making MacOS applications.
There is no real sandbox model on Macs if you don’t go through the Mac App Store, only code signing to detect that the app hasn’t been tampered with and to validate the author.
There is, but even the control-click will only allow you to open signed software. Unless you build the software yourself (I'm not sure how Homebrew still works), you cannot run it if it hasn't been notarized by Apple.
Firefox was broken on Catalina for a while, even though the main app was notarized. Some internal binary wasn't notarized, and no amount of control clicking would get Firefox to work until Mozilla notarized everything in the build.
The users who really know what they're doing are going to refuse to disable system integrity protection. I paid a shitload of money for the T2 chip, secure signed boots, a virus-free environment and complete peace of mind from malware. No way I'm turning that off on a work machine.
I have a Raspberry Pi for hacking, I'm happy to root the hobby computers, not the work ones.
That's why I find postings like this dangerous. If an author is asking someone to run a command, they really need to explain what the command does and what the tradeoffs are.
I find it weird that you don't expect root privileges on devices you do work with.
It seems like this conflates the notion of having root privileges with turning off security. There is no meaningful connection between the two save in situations where there is no meaningful way to control said security layers save destroying them.
For example refusing to boot a bootloader that isn't signed doesn't require your oem to hold the only possible key that can be used to sign said bootloader.
I would not be surprised at all if this episode has some concrete ramifications in 10.15. It could take the shape of something akin to iOS’ location permissions for applications that want to run a server, or even a first-party framework for accomplishing the thing Zoom was trying to do, but it’s probably safe to say what Zoom was doing won’t be possible next year.
I mean. The issue at hand was that they purposely left the webserver behind to auto reinstall if a zoom link was clicked. This was an intended feature, and the same could have been done on Linux or Windows. Package management or containers are irrelevant to this conversation.
Sure, the "package managers" on Linux, Windows, and macOS all behave in pretty similar fashion: a manifest of files that the installer knew at install time. That doesn't stop a program from installing anything else at run time, or even in the installer (since they can define what to remove in a lot of cases). This wasn't an "accident"; it was purposely left behind with the intention of being used to onboard users easily even after they removed the client. This would have pretty much been an issue on every platform (had it been implemented on other platforms). And please, don't tell me "but Docker!" Docker, at present, isn't really usable with GUI applications yet.
A package manager is designed to remove everything a non-hostile package leaves behind; intentionally hostile behavior would be unaffected. One might hope that the packager - the person, that is - might have refused to include software from incompetent or hostile developers.
That's something an app store has a problem with, given its volume and its default-allow policy pending mostly automated checks.
As far as I know most traditional package managers only remove files and folders declared in the package. Not files installed somewhere else during the install script or created by the binary when it runs.
Not really. If you have a standard app that can be dragged into the trash, sure. But if you have a kernel extension, a launch daemon, or any application data stored locally, you cannot clean up after yourself without a custom uninstaller.
Windows is far ahead with its centralized Add & Remove Programs area.
> Not really. If you have a standard app that can be dragged into the trash, sure. But if you have a kernel extension, a launch daemon, or any application data stored locally, you cannot clean up after yourself without a custom uninstaller.
This is false. Kernel extensions can be part of the application bundle and will be unloaded and removed when the bundle is removed.
Installing KEXTs in an application bundle allows an application to register those KEXTs without the need to install them permanently elsewhere within the system hierarchy. This may be more convenient and allows the KEXT to be associated with a specific, running application. When it starts, the application can register the KEXT and, if desired, unregister it on exit.
For example, a network packet sniffer application might employ a Network Kernel Extension (NKE). A tape backup application would require that a tape driver be loaded during the duration of the backup process. When the application exits, the kernel extension is no longer needed and can be unloaded.
It is true that you cannot remove application data, but that is a feature (maybe users want to retain the data) and also does not happen in e.g. Linux package managers.
No, the dev intentionally circumvented the bundling and containerization that exists, and macOS couldn't prevent it. That said, Windows and Linux can't prevent it either, in common configurations.
macOS has a very good concept of package management and containerization. It's just optional because people get even more up in arms when their old software doesn't work any more.
Apple also makes a fantastic computing platform with very good mandatory isolation, namely, iOS. If you're interested in isolation in preference to compatibility with traditional desktop software, an unjailbroken iPad Pro with Smart Keyboard is a pretty good option.
I don't think it's just old software that might break. One must also consider the software not yet to be written. If the isolation is too constraining for some type of application that really needs a privilege, and isolation is mandatory, then some amount of innovation will just have to happen somewhere else or not at all.
Remember old school Mac was full of hacks upon hacks upon hacks, many by third parties, and there was cool stuff in there too. The App Store mentality has caused everyone to overreact and think that every third party app on the planet will turn into the worst conception of Win98 era malware overnight if unconstrained, when this is just one outcome among many possible.
I was thinking to myself “it is too bad Apple can’t just disable this like they could have on iOS, cause I suspect most people I know with Macs would be vulnerable to it and it is next to impossible to explain to a nontechnical user how to actually uninstall this”.
Apple is punishing Zoom, because they explicitly built this mess to get around Safari appropriately prompting users to decide whether they wanted to open the app on each meeting join. If you are a safari user, there was never a vulnerability. You’d be prompted. Why is no one talking about Chrome and Firefox’s lax security posture here? It’s frustrating.
Note that Dropbox also opens up three servers on your Mac, though when you exit the app they go away, so they are arguably discretionary. I assume they are for LAN syncing, though I don't know why that would require three ports.
They're blocked in my little snitch anyway so no problem.
You can disable it in Preferences -> Software Update -> Advanced -> Install system data files and security updates
From there you can manage the updates manually from the command line with the `softwareupdate` command. e.g. `softwareupdate --list --include-config-data` will show available updates, `softwareupdate -ia --include-config-data` will install them, etc.
As one of the posters already mentioned, it can be easily disabled through the OSX GUI by changing the setting in Preferences -> Software Update -> Advanced -> Install system data files and security updates
Windows Update leaves behind a lot of logs with KB entries, so I don't think they're trying to do anything secretly. If Microsoft changes your software, you know about it.
When I was at Microsoft, Windows' policy was only to kill applications like this with the explicit consent of the manufacturer (i.e. they were usually asking us to do it because they are unable to patch themselves, not the other way around), and only with a very specific version range.
I had a prospective client/customer, just today, schedule a Zoom meeting for tomorrow. I was aware of the issue this week, but I wasn't really going to make a lot of noise with someone who was just a prospect.
I figured.. this has probably evolved already to an acceptable situation.
So the calendar invite came in, I started up zoom to see what it would do. There's an update available, where zoom says they're abandoning the local web server.
Upgraded, should be good for tomorrow.
Came here, noticed the "softwareupdate -i MRTConfigData_10_14-1.45 --include-config-data" command and ran it. Checked last update -- back in June, to 1.42 ...
Updated that, ready to go.
Business continues, maybe I'll delete zoom another day, but not just yet.
Many users are unable to enable the video feature even after applying the recent patch released by Zoom. Also, Zoom has become a security joke/conversation topic when starting con calls!
It bothers me that I see an increasing number of apps that run a local web server. I've got half a dozen apps, mostly development tools like pgAdmin, that force me to run the app and then access the UI through a browser.
How many such apps am I running that I don't know about? And how many of them are exposing my system to malicious web sites, or to curious people in my office on the same subnet? I wish I knew.
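One crude way to find out is to probe localhost ports directly (tools like `lsof -i -P` or `netstat` give the authoritative answer; this sketch just shows the idea):

```python
import socket

def listening_localhost_ports(ports):
    """Return the subset of `ports` with something listening on 127.0.0.1."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.2)
            # connect_ex returns 0 if the TCP connect succeeds
            if s.connect_ex(("127.0.0.1", port)) == 0:
                open_ports.append(port)
    return open_ports
```

For example, `listening_localhost_ports(range(19400, 19500))` would have flagged Zoom's server, which was widely reported to listen on port 19421.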
Why do web browsers even allow access to localhost? Seems like developers just use this to abuse/violate user preferences anyway.
I think I'd be happy with a popup once-per-tab asking me for permission for a web page to talk to a local web server... Might even be okay if it's scary (Are you a developer?)
Does anyone know if issues like this would only affect the current user account?
I currently have a separate limited user account just for meetings, and that’s where I install various meeting apps. So in my case is there any way to know if Zoom or WebEx would install stuff on all accounts?
I haven't been following this issue, but I am very wary of Zoom since they automatically turn your camera on and broadcast your stream as soon as you click a Zoom link in your web browser, which seems like a major issue to me...
Once again, a popular site that is completely GDPR non-compliant. To opt out of tracking you have to go through six layers of obfuscation. And don't take a wrong turn, or you will just come to walls of text, meant to do nothing but make you throw up your hands and give up. Or you can just opt in to everything with one click.