> That said, it is still the case that most programs aren't penalized by GC much. But people do still work on operating systems, games, databases, video editing tools and such where getting great performance is really important.
This statement simply isn't true, though, at least in the general sense. Sure, for old-school atomic RC with compilers that didn't do RC elision, it might have been true. But we can and have built smarter compilers, just as Rust has gotten smart about lifetimes.
Like @netbioserror I use Nim, but for embedded and real-time work. There is minimal overhead (one machine word per allocation), but using Nim's ARC memory management gives me results comparable to Zig or Rust with much less work. Technically, ARC without the cycle collector isn't a GC at all.
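To illustrate the atomic vs. non-atomic distinction, here's a sketch in Rust, whose `Rc`/`Arc` split mirrors the same trade-off (this is an analogy, not Nim's ARC itself):

```rust
use std::rc::Rc;
use std::sync::Arc;

fn main() {
    // Non-atomic RC: the count bump is a plain integer increment,
    // cheap in the way Nim's ARC is.
    let a = Rc::new(vec![1, 2, 3]);
    let b = Rc::clone(&a); // increments the refcount, no atomic op
    assert_eq!(Rc::strong_count(&a), 2);
    drop(b);
    assert_eq!(Rc::strong_count(&a), 1);

    // Atomic RC: the "old school" cost referred to above.
    // The increment is an atomic instruction, needed for cross-thread sharing.
    let x = Arc::new(vec![1, 2, 3]);
    let y = Arc::clone(&x);
    assert_eq!(Arc::strong_count(&x), 2);
    drop(y);

    // In both cases the one-machine-word-ish overhead is the refcount
    // header stored alongside the allocation.
}
```

A compiler doing RC elision removes clone/drop pairs like the ones above when it can prove the object outlives the borrow, which is where the "smarter compilers" point comes in.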
There's no reason D and other memory-managed languages couldn't adopt similar RC-based systems. As compilers get smarter, the result approaches Rust's manual lifetime analysis. In many cases a compiler can reason about memory better than a human programmer.
Even then, there are occasions in Nim where one can and will reach for manual memory allocation. It's easy to do when wanted.
> 2. There will be things the language won't let you do, because it is garbage collected
There will be things that systems like Rust's lifetimes make harder or less performant as well: https://ceronman.com/2021/07/22/my-experience-crafting-an-in...
> 3. There will be extra FFI costs, like when making OS calls or calling into other languages
For generational GCs this is true, but for RC systems not so much.
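One way to see why: with RC, an object's lifetime travels with the pointer itself, so handing it to foreign code needs no pinning or collector coordination. A minimal sketch in Rust (simulating the FFI boundary with a raw-pointer round trip rather than a real C callee):

```rust
use std::rc::Rc;

// Hand an RC-managed object "across FFI": the refcount keeps it alive
// and nothing will move it, so the raw pointer stays valid without
// pinning. A moving/generational GC would need pinning or handles here.
fn hand_to_ffi(data: Rc<String>) -> *const String {
    Rc::into_raw(data) // consumes one strong reference; pointer is stable
}

fn take_back_from_ffi(ptr: *const String) -> Rc<String> {
    // SAFETY: `ptr` came from Rc::into_raw and is reclaimed exactly once.
    unsafe { Rc::from_raw(ptr) }
}

fn main() {
    let s = Rc::new(String::from("hello"));
    let raw = hand_to_ffi(Rc::clone(&s));
    // ...foreign code could hold `raw` as long as it likes...
    let back = take_back_from_ffi(raw);
    assert_eq!(*back, "hello");
    assert_eq!(Rc::strong_count(&s), 2); // s + back
}
```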
> There will be cultural tendencies, if a language uses GC, the people who designed it and used it are less likely to be interested in compromising other things for extreme performance.
This is certainly a thing. However, cultures can vary a lot. For example, Nim provides zero-cost iterators as well as minimal-cost "closure" iterators. Nim's string libraries do a lot of copies by default, which makes it easy to ruin performance, but there's an open RFC to provide CoW strings to reduce this overhead, and there's the new `lent` type too. After all, Nim is used in games and embedded, so there's demand for that.
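For a feel of what CoW buys, here's a sketch using Rust's `std::borrow::Cow` (the same copy-on-write idea, not the Nim RFC's design): the read-only path borrows and allocates nothing, and a copy happens only when a modification is actually needed.

```rust
use std::borrow::Cow;

// Returns the input untouched when no change is needed (zero-copy),
// and allocates a new String only when a copy is actually required.
fn sanitize(input: &str) -> Cow<'_, str> {
    if input.contains(' ') {
        Cow::Owned(input.replace(' ', "_")) // copy only on this path
    } else {
        Cow::Borrowed(input) // no allocation
    }
}

fn main() {
    assert!(matches!(sanitize("no_spaces"), Cow::Borrowed(_)));
    assert!(matches!(sanitize("has spaces"), Cow::Owned(_)));
    assert_eq!(sanitize("has spaces"), "has_spaces");
}
```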
Certainly, if your language is tedious about memory management, you'll spend more time on it. Though as an end user, I'm not sure I've noticed a practical difference between Go and Rust programs compared to C#/Java applications. The latter I avoid running when possible because of their general bloat.