This capabilities-ruleset interpreter is, mostly, what Apple is referring to when it uses the term "Gatekeeper". It had already been put in charge of authorizing most Cocoa-land system interactions as of 10.12, but it wasn't in the code path for any BSD-land code until 10.15.
A capabilities-ruleset "program" for this interpreter can be very simple (and thus quick to execute), or arbitrarily complex. In terms of how complex a ruleset can get—i.e. what the interpreter's runtime allows it to take into consideration in a single grant evaluation—it knows about all the filesystem bitflags BSD used to, plus Gatekeeper-level grants (e.g. the things you do in Preferences; the "com.apple.quarantine" xattr), plus external system-level capabilities "hotfixes" (i.e. the same sort of "rewrite the deployed code after the fact" fixes that GPU makers deploy to make games run better, but for security instead of performance), plus some stuff (that I don't honestly know too much about) that can require it to contact Apple's servers during the ruleset execution. Much of this stuff can be cached between grant requests, but some of it will inevitably have to hit the disk (or the network!) for a lookup—in the middle of a blocking syscall.
I'm not sure whether it's the implementation (an in-kernel VM doesn't imply slowness; see eBPF) or the particular checks that need to be done, but either way, it adds up to a bit of synchronous slowness per call.
The real killer that makes you notice the problem, though, isn't the per-call overhead, but rather that the whole security subsystem now seems to have an OS-wide concurrency bottleneck in it for some reason. I'm not sure exactly where it is; the "happy path" for capability grants shouldn't involve any Mach IPC calls at all. But it's bottlenecked anyway. (Maybe there's Mach IPC for audit logging?)
The security framework was pretty obviously structured around the expectation that applications would only ever send it O(1) capability-grant requests, since the idiomatic thing to do when writing a macOS Cocoa-userland application, if you want to work with a directory's contents, is to get a capability on the whole directory tree from a folder picker, and then use that capability to interact with the files.
Under such an approach, the sandbox system would never be asked too many questions at a time, and so you'd never really end up in a situation where the security system is going to be bottlenecked for very long. You'd mostly notice it as increased post-reboot startup latency, not as latency under regular steady-state use.
Under an approach where you've got many concurrent BSD "filesystem walker" processes, each spamming individual fopen(3)-triggered capability requests into the security system, though, the failure to scale becomes very apparent. Individual capability-grant requests go from taking 0.1s to resolve, to sometimes over 30s. (It's very much like the kind of process-inbox bottlenecks you see in Erlang, that are solved by using process pools or ETS tables.)
Either Apple should have rethought the IPC architecture of sandboxing in 10.15, or they should have made their BSD libc transparently handle "push-down" of capabilities to descendant requests; either way, it seems to have been forgotten or deprioritized.
There is a bunch of new stuff in 10.15, mostly involving binary execs (and I don't understand all of it), but I'm pretty sure it doesn't match what you're describing.
I believe it is actually a Scheme dialect, and I would be very surprised if it is not compiled to some internal representation upon load.
> This capabilities-ruleset interpreter is what Apple uses the term "Gatekeeper" to refer to, mostly.
I am fairly sure Gatekeeper is mostly just Quarantine and other bits that prevent the execution of random things you download from the internet.
In the latter, Apple's sandbox rule set (custom profiles) is called SBPL - Sandbox Profile Language - and is described as a "Scheme embedded domain specific language".
It's evaluated by libSandbox, which contains TinyScheme! 
From what I could understand, the Scheme interpreter generates a blob suitable for passing to the kernel.
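For a concrete flavor of SBPL: profiles are plain s-expressions. Here's a minimal example in the style of the profiles shipped on-disk (the exact operation names and rules vary by OS version, so treat this as illustrative rather than copied from any real profile):

```scheme
(version 1)
(deny default)                                  ; start from deny-everything
(allow file-read* (subpath "/usr/lib"))         ; allow reads under a subtree
(allow file-read-metadata (literal "/etc"))     ; stat() but not read
(allow mach-lookup (global-name "com.apple.system.logger"))
```

The (deprecated, but still shipped) sandbox-exec(1) tool can apply a profile like this to a child process, which is a handy way to experiment with the language.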
Running any kind of I/O during a capability check is a broken design.
There is no reason to hit the disk (it should be preloaded), much less the network (such a design will never work if offline).