The advice is to consider memory layout when doing repetitive operations on your data.
> The usual application where GC is most relevant is connected to 10s of services over some network protocol, and often has to load each business object/entity it operates on into memory to check permissions, sometimes even issue another network call, etc. Loading that entity from a list of pointers is absolutely no bottleneck in such a setting, at all - the way you might speed up the app is doing the work on the database itself, which is already written in some low-level language, efficiently.
No, my example isn't a complaint about it using a GC language. It's that it loads and unloads business entities multiple times and doesn't take into account that the operations could be done linearly, much faster, instead of hunting for a field in an object graph to accumulate in various ways. That is how e.g. numpy works, if you make sure the array you operate on is all of the same type.
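A minimal sketch of that numpy point (the names here are illustrative, not from the discussion): the same numbers stored as an object-dtype array behave like a list of pointers, while a float64 array is one contiguous block of machine floats.

```python
import numpy as np

values = [float(i) for i in range(100_000)]

# Same numbers, two layouts:
boxed = np.array(values, dtype=object)       # array of pointers to Python float objects
packed = np.array(values, dtype=np.float64)  # contiguous 8-byte machine floats

# Both sums produce the same result, but the object-dtype version has to
# chase a pointer and unbox a Python object per element, while the float64
# version runs a tight vectorized loop over contiguous memory.
assert boxed.sum() == packed.sum()
```

Keeping the array homogeneous is what lets numpy take the fast path; as soon as the dtype falls back to `object`, you are back to the pointer-chasing pattern the comment describes.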
> Your anecdote is just a badly written application, I have plenty examples of very performant ones written in managed languages.
Yes, following what are considered SOLID principles. The objection isn't managed vs. unmanaged, it's going full OO and not giving a shit about how computers work.
OO encourages object graphs that scatter your data unpredictably in memory, while modern CPUs like to have their data in a contiguous memory area where they can do prefetching and predict where your data resides.
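A rough Python-level illustration of the contrast (hypothetical data, and note the gap is far larger in compiled code, where the contiguous case also benefits from hardware prefetching):

```python
import timeit
import numpy as np

N = 100_000

# "Object graph" style: each field value is reachable only through a
# per-object reference, so summing one field chases N pointers.
objects = [{"x": float(i), "y": 0.0} for i in range(N)]

# "Contiguous" style: the same field packed into one typed array.
xs = np.arange(N, dtype=np.float64)

t_objects = timeit.timeit(lambda: sum(o["x"] for o in objects), number=20)
t_array = timeit.timeit(lambda: xs.sum(), number=20)

print(f"pointer-chasing sum: {t_objects:.4f}s, contiguous sum: {t_array:.4f}s")
```

The absolute numbers depend on the machine, but the contiguous sum is reliably orders of magnitude faster for the same arithmetic.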
That's not really true anymore. RAM is so big these days that most databases can fit into it and be blazingly fast. It's all the classic shit that slows people down: loops in loops in loops with nested ifs, not understanding the correct way to structure if statements (common case first, etc.).
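The "common case first" point can be sketched like this (a hypothetical status-code handler with assumed frequencies, purely for illustration):

```python
def handle(status: int) -> str:
    # Common case first: in a typical service the vast majority of
    # responses are 200, so most calls return after a single comparison.
    if status == 200:
        return "ok"
    # Rarer cases are only checked once the common one is ruled out.
    if status in (301, 302):
        return "redirect"
    if status >= 500:
        return "server error"
    return "client error"
```

Ordering the branches by expected frequency keeps the hot path short; checking the rare cases first would make every common call pay for comparisons that almost never match.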
The point I was making is that a trip to disk is generally not occurring, and that having a soup of pointers in RAM isn't as bad as having badly laid out memory on disk that needs to be paged in. Most of the performance being lost isn't from memory layout patterns and not getting full pages. It's not even that much from bad branch prediction. Most of the bad performance these days is the stacks of extra app layers that exist for little benefit.
And I see comments saying that Casey is doing some strawman thing here, and yet here you are with the claim "1. No need".
I'm just coming here with personal experience and my own anecdote that memory layout does matter even on modern CPUs, and that it isn't niche. And because the app was designed with an OO mindset, there wasn't any quick fix available.