It's a nice enough reminder about the limitations of abstractions, but the thesis isn't as strong as it's classically regarded to be.
Something that is inherent to any abstraction is giving up control; this is simply necessary in order for the abstraction to have implementation space. Take the example of SQL queries that are slow. The abstraction itself isn't leaky unless the queries give the wrong results sometimes. The abstraction isn't some guarantee about the run-time of a query.
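To make that concrete: two executions of the same query can return identical results while differing enormously in runtime, depending on implementation details like indexes. A minimal sketch using sqlite3 and a throwaway `users` table (the schema and row counts are invented for illustration):

```python
import sqlite3
import time

# In-memory database with a hypothetical users table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?)",
    [(i, f"user{i}@example.com") for i in range(100_000)],
)

query = "SELECT id FROM users WHERE email = ?"
arg = ("user99999@example.com",)

# First run: no index, so SQLite must scan the whole table.
start = time.perf_counter()
rows_scan = conn.execute(query, arg).fetchall()
t_scan = time.perf_counter() - start

# Add an index; the SQL text we run stays exactly the same.
conn.execute("CREATE INDEX idx_users_email ON users (email)")

start = time.perf_counter()
rows_index = conn.execute(query, arg).fetchall()
t_index = time.perf_counter() - start

# Same query, same answer -- the abstraction's actual guarantee holds...
assert rows_scan == rows_index
# ...but the runtime can differ by orders of magnitude.
print(f"full scan: {t_scan*1000:.2f} ms, with index: {t_index*1000:.2f} ms")
```

The results are identical either way; only the hidden execution strategy changed, which is exactly the kind of thing the abstraction never promised anything about.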
An abstraction abstracts away N of M parameters (N < M). Joel would presumably say that N has “leaked”. But more straightforwardly you can say that these concerns have been abstracted away.
Therefore the Law—always a good sign when the original author calls it a “law”[1]—can be restated as: all abstractions abstract.
If you see abstraction as allowing you to ignore the implementation details, then having to know those details to get things done can be seen as the abstraction failing: the implementation details that were supposed to be abstracted away are not, and are leaking out of the abstraction.
Either way, whether you consider it a leak or not, the results are the same: the abstraction becomes less useful or even useless.
The law states that it's unavoidable for the implementation details to show in some cases, since you can't abstract away everything without the abstraction becoming as complicated as the thing it abstracts -- a "trivial" abstraction.
For non-software examples, look at the history of the laws of physics. Newtonian gravity worked great for many things, but at some point scientists noticed, for example, that the orbit of Mercury didn't precess as predicted. This was fixed with the theory of General Relativity, which could be said to have been "leaking through" in the orbit of Mercury.
However, this doesn't mean that the abstraction is useless. Newtonian gravity is much easier to work with and works just fine for many applications, which is why it's still widely taught and used. But at some point you have to deal with the leaks, and for that you have to know relativity (or become a research physicist, since we now suspect relativity to have leaks of its own).
Those scientific theories are better described as models: models that get progressively less wrong.
Garbage collection is wrong only to the degree that it leads to memory unsafety, or to memory leaks that aren't caused by programmer error. But using more memory is not a "leak", because it has nothing to do with the abstraction's guarantees.
Again, Joel has pointed out a truism: abstractions abstract.
Nothing is leaking. Did someone guarantee that garbage collection would give you performance as predictable as manual memory management? No, because that guarantee was never part of the abstraction.
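That distinction between the guarantee and the non-guarantee is easy to demonstrate in Python: the language promises that unreachable reference cycles will eventually be reclaimed, but says nothing about when the collector runs or how long it pauses. A minimal sketch (object counts and pause times will vary by machine and interpreter):

```python
import gc
import time

# Disable automatic collection so all the cyclic garbage piles up
# and the collector's work becomes observable in one place.
gc.disable()

# Build a large number of reference cycles (a -> b -> a).
cycles = []
for _ in range(200_000):
    a, b = [], []
    a.append(b)
    b.append(a)
    cycles.append(a)

# Drop the only external references; the cycles are now unreachable,
# but reference counting alone cannot free them.
del cycles, a, b

start = time.perf_counter()
collected = gc.collect()  # the "hidden" work the abstraction does for you
pause_ms = (time.perf_counter() - start) * 1000
gc.enable()

print(f"collected {collected} objects in {pause_ms:.1f} ms")
```

The guarantee (the cycles get reclaimed) holds every time; the pause length is an implementation detail the abstraction never promised anything about.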
Either way, it's safe to say the abstraction has failed and isn't doing its job anymore if you have to know the implementation details.
To use your garbage collection example: whether you consider it a leak or not, the result is the same. If you have to take a deep dive into the workings of the garbage collector and how it manages memory, the abstraction has failed to do its job, which is to free you from the burden of managing memory.
His point is that all nontrivial abstractions break down at some point, under certain circumstances.
[1] cf. “Hyrum's Law”, which, notably, was not named by its original author.