Modern GC implementations offer an important advantage: very cheap allocations and batched deallocations, which make it possible to sustain high allocation traffic through the heap.
This comes at the cost of non-deterministic memory usage and object deallocation. A further tradeoff is made among GC pause times, allocation throttling (Go), and an even higher memory footprint (ZGC).
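To make the "very cheap allocations" point concrete: in a moving GC, a nursery allocation is often little more than a pointer bump plus a limit check, with deallocation deferred to a batched minor collection. A minimal sketch (the `Nursery`/`bump_alloc` names are my own, not from any particular runtime):

```cpp
#include <cstddef>
#include <cstdint>

// Toy bump allocator: this is roughly what a GC nursery allocation costs.
// Deallocation is "free" per object - the whole nursery is reclaimed at once
// during a minor collection, which is why allocation-heavy workloads can be
// cheaper under a moving GC than under malloc/free.
struct Nursery {
    uint8_t *next;
    uint8_t *limit;
};

void *bump_alloc(Nursery &n, size_t size) {
    size = (size + 7) & ~static_cast<size_t>(7); // 8-byte alignment
    if (n.next + size > n.limit) {
        return nullptr; // a real runtime would trigger a minor GC here
    }
    void *p = n.next;
    n.next += size;
    return p;
}
```

Compare this with a general-purpose `malloc`, which typically has to search free lists or size-class bins on every call and bookkeep every `free` individually.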
My knowledge of the exact overhead of Swift's reference counting is limited, but there's a good chance it carries a significant upfront cost that you don't have to pay in languages with GC or with compile-time-defined allocation semantics (Rust and C++ RAII). There's a reason Apple invested in making atomics as cheap[0] as they are on their ARM64 cores.
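The upfront cost in question is an atomic read-modify-write on every retain/release. A sketch of the shape of that cost, using `std::shared_ptr`-style counting as a stand-in for ARC (the `ControlBlock`/`retain`/`release` names are illustrative, not Swift's actual runtime API):

```cpp
#include <atomic>

// ARC-style refcounting pays an atomic RMW every time a reference is
// copied or dropped - even when no other thread ever touches the object.
// std::shared_ptr has the same cost profile, which is why copying
// refcounted handles in hot loops is slow relative to the moves/borrows
// that Rust and C++ RAII resolve entirely at compile time.
struct ControlBlock {
    std::atomic<long> strong{1};
};

void retain(ControlBlock &cb) {
    // Increment can be relaxed: no synchronization needed to take a ref.
    cb.strong.fetch_add(1, std::memory_order_relaxed);
}

bool release(ControlBlock &cb) {
    // acq_rel so that the final release synchronizes-with prior uses
    // before the object is destroyed. Returns true for the last release.
    return cb.strong.fetch_sub(1, std::memory_order_acq_rel) == 1;
}
```

On cores where these RMWs are expensive, this overhead shows up everywhere; making them cheap in hardware, as the [0] tables suggest Apple did, directly lowers the floor of ARC's per-reference cost.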
Overall, every time I find Swift benchmark numbers on the internet, they turn out to be far[1][2] from near-Rust performance, despite Swift being built on top of LLVM.
[0] https://dougallj.github.io/applecpu/firestorm-int.html (e.g. CAS, CASAL, DMB)
[1] https://github.com/ixy-languages/ixy-languages
[2] https://github.com/jinyus/related_post_gen#multicore-results