For example, I can run "terraform apply" and it can take up to 5 minutes to start, leaving my computer almost unusable until it actually runs. The weird thing is that this only happens sometimes. In some cases, I restart the laptop and it starts working a little bit faster, but the issue comes back after some time.
For a few months now I've been running every command from a VM in a remote location, because I'm tired of waiting for my commands to start.
I have a MacBook Air from 2013 which never had this issue.
Any easy fix that I could test? Disconnecting from the internet is not an option. Disabling SIP could be tried, but I think I already did and it didn't seem to fix it; plus, it's not a good idea for a company laptop.
Don't we have some sort of hosts file or firewall that we can use to block or fake the connectivity to apple servers?
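For what it's worth, the hosts-file workaround that circulated for this kind of stall is to point Apple's OCSP endpoint at localhost so the lookup fails fast instead of hanging. Treat this as a diagnostic rather than a fix: it weakens signature-revocation checking, and `ocsp.apple.com` being the relevant endpoint is my assumption about your root cause (revert it once you've tested).

```
# /etc/hosts -- redirect Apple's OCSP endpoint to localhost so the
# check fails fast instead of stalling (diagnostic only; revert after)
127.0.0.1  ocsp.apple.com
```

Editing /etc/hosts requires sudo; if the delays vanish afterwards, you've at least confirmed the network check is in your critical path.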
Certain BSD-syscall-ABI operations like fopen(2) and readdir(2) are now not-so-fast by default, because the OS has to do a synchronous check of the individual process binary's capabilities before letting the syscall through. But POSIX utilities were written to assume that these operations were fast-ish, and therefore they do tons of them, rather than doing any sort of batching.
That means that any CLI process that "walks" the filesystem is going to generate huge amounts of security-subsystem request traffic; which seemingly bottlenecks the security subsystem (OS-wide!); and so slows down the caller process and any other concurrent processes/threads that need capabilities-grants of their own.
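To make the "walker" pattern concrete, here's a small sketch (plain Python, nothing Apple-specific; the syscall count is the point, not the API): every file visited costs its own metadata syscall, so a tree of N files generates on the order of N capability checks on 10.15.

```python
# Count how many per-file operations a naive directory walk performs.
# Each os.stat() here maps to a metadata syscall that, on 10.15, can
# trigger its own capability check in the security subsystem.
import os
import tempfile

def naive_walk(root):
    """Visit every file the way classic POSIX tools do: one readdir-style
    listing per directory, then one stat per entry -- no batching."""
    syscalls = 0
    for dirpath, dirnames, filenames in os.walk(root):
        syscalls += 1                      # one directory listing
        for name in filenames:
            os.stat(os.path.join(dirpath, name))
            syscalls += 1                  # one stat(2) per file
    return syscalls

with tempfile.TemporaryDirectory() as d:
    for i in range(1000):
        open(os.path.join(d, f"f{i}"), "w").close()
    print(naive_walk(d))  # prints 1001: one listing plus 1000 stats
```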
To find a fix, it's important to understand the problem in fine detail. So: the CLI process has a set of process-local capabilities (kernel tokens/handles), and whenever it tries to do something, it first tries to use these. If it turns out none of those existing capabilities let it perform the operation, then it has to ask the kernel to evaluate the request: collect the relevant information, build a firewall-like "capabilities-rules program" from it, and run that program to determine whether the process should be granted the capability. (This means that anything that already has capabilities granted from its code-signed capabilities manifest doesn't need to sit around waiting for this capabilities-ruleset program to be built and run. Unless the app's capabilities manifest didn't grant the specific capability it's trying to use.)
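A schematic of that flow, as I understand it (this is my mental model of the grant logic, not Apple's actual implementation; all names here are made up):

```python
# Sketch of the grant flow: check process-local capabilities first, and
# only fall back to the expensive ruleset build/run when they don't
# cover the operation. Hypothetical model, not Apple's API.
def check_access(process_caps, operation, build_and_run_ruleset):
    if operation in process_caps:               # fast path: token already held
        return True
    granted = build_and_run_ruleset(operation)  # slow, possibly blocking
    if granted:
        process_caps.add(operation)             # cache the grant locally
    return granted

caps = {"file-read:/tmp"}
slow_calls = []

def ruleset(op):
    slow_calls.append(op)   # stands in for the expensive evaluation path
    return True

check_access(caps, "file-read:/tmp", ruleset)    # fast path, no ruleset run
check_access(caps, "file-read:/Users", ruleset)  # slow path once...
check_access(caps, "file-read:/Users", ruleset)  # ...then served from cache
print(len(slow_calls))  # prints 1
```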
Unlike macOS app-bundles, regular (i.e. freshly-compiled) BSD-userland executable binaries don't have a capabilities manifest of their own, so they don't start with any process-local capabilities. (You can embed one into them, but the process has to be "capabilities-aware" to actually make use of it, so e.g. GNU coreutils from Homebrew isn't gonna be helped by this. Oh, and it won't kick in if the program isn't also code-signed, IIRC.)
But all processes inherit their capabilities from their runtime ancestors, so there's a simple fix for the case of running CLI software interactively: grant your terminal emulator the capabilities you need through Preferences. In this case, the "Full Disk Access" capability. Then, since all your CLI processes have your terminal emulator as a runtime ancestor-process, they will all inherit that capability, and thus not need to spend time requesting it from the security subsystem.
Note that this doesn't apply to BSD-userland executable binaries which run as LaunchDaemons, since those aren't being spawned by your terminal emulator. Those either need to learn to use capabilities for real; or, at least, they need to get exec(2)ed by a shim binary that knows how.
tl;dr: I had this problem (slowness in numerous CLI apps, most obvious as `brew upgrade` suddenly taking forever) after upgrading to 10.15 as well. Granting "Full Disk Access" to iTerm fixed it for me.
Is this actually new in macOS 10.15? I seem to recall this being a thing ever since sandboxing was a thing, even all the way back to when it was called Seatbelt.
> That means that any CLI process that "walks" the filesystem is going to generate huge amounts of sandboxd traffic, which bottlenecks sandboxd and so slows down the caller process.
Is this not implemented in the kernel as an extension? I thought the checks went through MAC framework hooks. Doesn't sandboxd just log access violations when told to do so by the Sandbox kernel extension?
> Unlike macOS app-bundles, regular BSD-userland executable binaries don't have a capabilities manifest of their own, so they don't start with any process-local capabilities (with some interesting exceptions, that I think involve the binary being embedded in the directory-structure of a system framework, where the binary inherits its capabilities from the enclosing framework.)
I am fairly sure you can just embed a profile in a section of your app's binary and call the sandboxing Mach call with that…
Maybe you're right; I'm not sure when they actually put the Seatbelt/TrustedBSD interpreter inline in the BSD syscall code-path. What I do know is that, until 10.15, Apple tried to ensure that the BSD-userland libc-syscall codepath retained mostly the same behavioral guarantees as it did before they updated it, in terms of worst-case time-complexities of syscalls. Not sure whether that was using a short-circuit path that went around Seatbelt or used a "mini-Seatbelt" fast path; or whether it was by hard-coding a pre-compiled MAC ruleset for libc calls that only relied upon the filesystem flag-bits, and so never had to do anything blocking during evaluation.
Certainly, even as of 10.12, BSD-userland processes weren't immune to being exec(2)-blocked by the quarantine xattr. But that may have been a partial implementation (e.g. exec(2) going through the MAC system while other syscalls don't.) It's kind of opaque from the outside. It was at least "more than nothing", though I'm not sure if it was "everything."
One thing that is clear is that, until 10.15, BSD processes with no capabilities manifest still had pretty much exactly the same default set of privileges that they had before capabilities existed, which means "almost everything" (and therefore they almost never needed to actually hit up the security system for more grants). I guess all Apple really needed to do in 10.15 to "break BSD" was to introduce some more capabilities, and then not put them in the default/implicit manifest.
I suppose what actually happened in 10.15 can be determined easily-enough from the OSS code that's been released. :)
> Is this not implemented in the kernel as an extension? // I am fairly sure you can just embed a profile in a section of your app's binary and call the sandboxing Mach call with that…
Yeah, sorry, you're right; updated my assertions above. I'm not a kernel dev; I've just picked up my understanding of this stuff from running head-first into it while trying to do other things!
They are definitely doing something way too slow.
This capabilities-ruleset interpreter is what Apple uses the term "Gatekeeper" to refer to, mostly. It had already been put in charge of authorizing most Cocoa-land system interactions as of 10.12. But the capabilities-ruleset interpreter wasn't in the code-path for any BSD-land code until 10.15.
A capabilities-ruleset "program" for this interpreter can be very simple (and thus quick to execute), or arbitrarily complex. In terms of how complex a ruleset can get—i.e. what the interpreter's runtime allows it to take into consideration in a single grant evaluation—it knows about all the filesystem bitflags BSD used to, plus Gatekeeper-level grants (e.g. the things you do in Preferences; the "com.apple.quarantine" xattr), plus external system-level capabilities "hotfixes" (i.e. the same sort of "rewrite the deployed code after the fact" fixes that GPU makers deploy to make games run better, but for security instead of performance), plus some stuff (that I don't honestly know too much about) that can require it to contact Apple's servers during the ruleset execution. Much of this stuff can be cached between grant requests, but some of it will inevitably have to hit the disk (or the network!) for a lookup—in the middle of a blocking syscall.
I'm not sure whether it's the implementation (an in-kernel VM doesn't imply slowness; see eBPF) or the particular checks that need to be done, but either way, it adds up to a bit of synchronous slowness per call.
The real killer that makes you notice the problem, though, isn't the per-call overhead, but rather that the whole security subsystem seems to now have an OS-wide concurrency bottleneck in it for some reason. I'm not sure where it is, exactly; the "happy path" for capabilities-grants shouldn't make any Mach IPC calls at all. But it's bottlenecked anyway. (Maybe there's Mach IPC for audit logging?)
The security framework was pretty obviously structured to expect that applications would only send it O(1) capability-grant requests, since the idiomatic thing to do when writing a macOS Cocoa-userland application, if you want to work with a directory's contents, is to get a capability on a whole directory-tree from a folder-picker, and then use that capability to interact with the files.
Under such an approach, the sandbox system would never be asked too many questions at a time, and so you'd never really end up in a situation where the security system is going to be bottlenecked for very long. You'd mostly notice it as increased post-reboot startup latency, not as latency under regular steady-state use.
Under an approach where you've got many concurrent BSD "filesystem walker" processes, each spamming individual fopen(2)-triggered capability requests into the security system, though, a failure-to-scale becomes very apparent. Individual capabilities-grant requests go from taking 0.1s to resolve, to sometimes over 30s. (It's very much like the kind of process-inbox bottlenecks you see in Erlang, that are solved by using process pools or ETS tables.)
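A toy model of that failure-to-scale (all assumptions mine: a single lock stands in for whatever the OS-wide serialization point actually is):

```python
# Toy model of the bottleneck: grant evaluation is serialized behind one
# lock, so N concurrent "walker" threads each see wait times that grow
# with N, while a Cocoa-style app asking for one directory-wide grant
# would pay the cost once. Hypothetical model, not the real subsystem.
import threading
import time

grant_lock = threading.Lock()

def evaluate_grant():
    with grant_lock:          # the OS-wide serialization point
        time.sleep(0.001)     # stand-in for ruleset build + run

def walker(n_files, latencies):
    for _ in range(n_files):  # one grant request per file visited
        t0 = time.monotonic()
        evaluate_grant()
        latencies.append(time.monotonic() - t0)

lat = []
threads = [threading.Thread(target=walker, args=(20, lat))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"requests: {len(lat)}, worst-case wait: {max(lat):.3f}s")
```

With 8 concurrent walkers, the worst-case wait per request is many times the 1ms cost of a single evaluation, which is the shape of the 0.1s-to-30s blowup described above.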
Apple should either have rethought the IPC architecture of sandboxing for 10.15, or made their BSD libc transparently handle "push down" of capabilities to descendent requests; it seems both were forgotten or deprioritized.
There is a bunch of new stuff in 10.15, mostly involving binary execs (and I don't understand all of it), but I'm pretty sure it doesn't match what you're describing.
I believe it is actually a Scheme dialect, and I would be very surprised if it is not compiled to some internal representation upon load.
> This capabilities-ruleset interpreter is what Apple uses the term "Gatekeeper" to refer to, mostly.
I am fairly sure Gatekeeper is mostly just Quarantine and other bits that prevent the execution of random things you download from the internet.
In the latter, Apple's sandbox rule set (custom profiles) is called SBPL - Sandbox Profile Language - and is described as a "Scheme embedded domain specific language".
It's evaluated by libSandbox, which contains TinyScheme! 
From what I could understand, the Scheme interpreter generates a blob suitable for passing to the kernel.
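For a flavor of what these profiles look like, here's a minimal SBPL example (the operation names are as documented in third-party write-ups of the sandbox, so treat them as illustrative rather than authoritative):

```scheme
;; Minimal SBPL profile: deny everything by default,
;; then allow a couple of specific operations.
(version 1)
(deny default)
(allow file-read* (subpath "/usr/lib"))
(allow sysctl-read)
```

Profiles like this can still be tried out via the deprecated-but-present `sandbox-exec -f profile.sb /bin/ls`.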
Running any kind of I/O during a capability check is a broken design.
There is no reason to hit the disk (it should be preloaded), much less the network (such a design will never work if offline).
On a clean Catalina install this does not happen. Does “terraform version” have the same delay? If not, check your remote configuration - maybe run with TF_LOG=trace. Terraform Cloud will definitely highlight the inherent performance problems of using a VPN.