In contrast, in the ~10 years I spent working in Java, I frequently ran into programs that needed an excessive amount of memory to run. I spent countless frustrating hours debugging and "tuning" the JVM to try to reduce that footprint (never entirely successfully). And of course, as soon as your JVM needed to interact with native code (e.g. a quant library written in C++), you were even worse off, because there was no language support whatsoever for ensuring when, or even whether, some native memory would be freed. That made those leaks even harder to debug, because they were always very complicated to reproduce.
Garbage collection is an oversold hack - I concede there are probably some situations where it is useful - but it has never lived up to the claims people made about it, especially the claim that it increases developer productivity.
It doesn't always work. A dynamic tracing GC (or reference counting, or arenas) can be more efficient in ways that static approaches probably won't be able to emulate effectively. Rust is a great experiment for seeing how far we can go in this area, so we will see.
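To make the reference-counting case concrete, here's a sketch of mine using nothing beyond std::rc::Rc from Rust's standard library: the lifetime decision happens at runtime, when the last count drops, which is exactly the kind of thing a purely static analysis can't always predict.

    use std::rc::Rc;

    fn main() {
        // One heap allocation whose ownership is shared at runtime.
        let shared = Rc::new(vec![1, 2, 3]);
        let alias = Rc::clone(&shared); // bumps the refcount, no deep copy

        println!("strong refs: {}", Rc::strong_count(&shared)); // 2

        drop(alias); // refcount back to 1
        println!("strong refs: {}", Rc::strong_count(&shared)); // 1
    } // the vector is freed here, when the last Rc is dropped -
      // a decision made dynamically, not at compile time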
Solving memory fragmentation requires moving objects around. Usually you get that via a moving (or compacting) garbage collector, but there's no reason RAII-style resource allocation couldn't move things around too.
And there are plenty of garbage collection strategies that don't move objects around.
I'm a bit of a "GC hater", so obviously I'm heavily biased, but to me a GC just makes it easier to write sloppy, poorly architected code where you have a mess of a data-dependency graph, nobody knows who owns what or when things need to be deleted, and you let the GC (hopefully) figure it out. In my experience this leads to all sorts of issues independent of memory management, such as obsolete data lingering in certain parts of the program because it's never expunged properly.
IMO, GC or not, clear ownership semantics are necessary to properly architect anything more complicated than a script. And if you have ownership semantics, why do you need a GC anyway?
Ownership is not a fundamental architectural property - it is "only" a powerful technique for tracking data with a specific lifetime (of course, even in GC languages you still need proper ownership semantics for some pieces of data).
That's why Java and C# both had to introduce a way of (semi-)automatically closing resources (try-with-resources and the using statement); waiting for the GC was a catastrophe.
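For contrast, here's a minimal sketch of mine of the RAII approach those mechanisms approximate, in Rust; the LogFile wrapper is hypothetical, just to show where the cleanup code lives. Scope exit, not the collector, releases the resource:

    use std::fs::File;
    use std::io::Write;

    // Hypothetical wrapper type, for illustration only.
    struct LogFile {
        inner: File,
    }

    impl Drop for LogFile {
        // Runs deterministically when the value goes out of scope,
        // not "whenever the GC gets around to it".
        fn drop(&mut self) {
            let _ = self.inner.flush();
        }
    }

    fn main() -> std::io::Result<()> {
        {
            let mut log = LogFile { inner: File::create("app.log")? };
            writeln!(log.inner, "hello")?;
        } // `log` dropped here: flushed and closed right now

        Ok(())
    }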
Java and C# did nothing new there; they were only catching up with what was already the state of the art before they were born.
The basic assumption in a GC'ed language is that an object's lifetime ends once it is no longer referenced. You rarely have to deal with lifetimes at all, so those "logic errors" happen much less frequently.
If we look at the major real-world languages without GC (C/C++), it is your job to figure out the lifetime. You will fuck this up unless you structure your program in a very disciplined way, and that's frankly too much to ask of the vast majority of programmers.
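To make the failure mode concrete: the dangling reference that C/C++ leave to programmer discipline is exactly what Rust's borrow checker rejects at compile time. A small illustrative snippet of mine, not from the comment above:

    fn main() {
        let dangling;
        {
            let local = String::from("short-lived");
            dangling = &local;
            println!("{dangling}"); // fine: `local` is still alive here
        } // `local` is dropped here

        // println!("{dangling}"); // uncommenting this fails to compile:
        // error[E0597]: `local` does not live long enough
    }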
In other words, the DAG would never be traversed to allocate or free memory; each vertex would be held in a vector or other container, and the edges would be represented by an adjacency list/matrix.
It's faster and safer than pointer chasing.
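A minimal sketch of that layout in Rust (the Dag type and its method names are mine): vertices live contiguously in a Vec, a vertex "handle" is just an index, and the whole structure is freed when the containers are dropped, with no graph traversal:

    struct Dag {
        values: Vec<&'static str>, // payload of each vertex, stored by value
        edges: Vec<Vec<usize>>,    // adjacency list: edges[v] = successors of v
    }

    impl Dag {
        fn new() -> Self {
            Dag { values: Vec::new(), edges: Vec::new() }
        }

        fn add_vertex(&mut self, value: &'static str) -> usize {
            self.values.push(value);
            self.edges.push(Vec::new());
            self.values.len() - 1 // the index is the "handle" to the vertex
        }

        fn add_edge(&mut self, from: usize, to: usize) {
            self.edges[from].push(to);
        }
    }

    fn main() {
        let mut dag = Dag::new();
        let a = dag.add_vertex("a");
        let b = dag.add_vertex("b");
        let c = dag.add_vertex("c");
        dag.add_edge(a, b);
        dag.add_edge(a, c);
        dag.add_edge(b, c);

        for &succ in &dag.edges[a] {
            println!("a -> {}", dag.values[succ]);
        }
    } // the whole graph is freed here when the Vecs drop, no pointer chasing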
No, it solves the problem because resource lifetime is no longer a concern. A field advances as we decrease the number of irrelevant details we need to attend to.
It turns out that, most of the time, memory is one of those irrelevant details. You can argue that we need better approaches for the cases where memory is a real concern, but that doesn't discount the value of GC as the right default.
Until you need to close a socket or a file, unlock a mutex, or respond to an async response handler.
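The mutex case shows where scope-based cleanup shines: in Rust, std::sync::Mutex hands out a guard that unlocks when it's dropped, so the unlock is deterministic rather than tied to a future GC cycle. A minimal sketch:

    use std::sync::Mutex;

    fn main() {
        let counter = Mutex::new(0);

        {
            // `lock()` returns a guard; the mutex stays locked while it lives.
            let mut guard = counter.lock().unwrap();
            *guard += 1;
        } // guard dropped here: unlocked immediately, not at some future GC cycle

        println!("counter = {}", *counter.lock().unwrap());
    }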
Once upon a time, this statement may have been true, but it isn't any more. ASAN and Valgrind are widely available, and runtime test suites are much more prevalent than they used to be.