FWIW, I am pretty sure Java's HashMap has the same behaviour - it grows the table but never shrinks it. Even if you call .clear(), it just clears out the table rather than throwing the table away.
I imagine there are lots of scenarios in which this is what you want, because after emptying the map you're going to re-fill it, and it saves reallocating the table. But it would be frustrating in a scenario where that isn't what you want.
If a map has this behaviour, I would say that the most important thing is that it should be clearly documented (Java's isn't). The second most important thing is that there should be a way to get round it - either a .clearHarder() method which throws away the table, or a .compact() method which downsizes it while retaining the content.
Rust has shrink_to_fit(). Personally I've never had to use it, but you always end up scrolling past the backing-allocation management methods for all the standard collections when looking through the docs.
> Shrinks the capacity of the map as much as possible. It will drop down as much as possible while maintaining the internal rules and possibly leaving some space in accordance with the resize policy.
Wouldn't reallocating the map be very cheap when running in a JVM? Yes, it isn't something to do in the hot path, but surely it should be faster than a direct malloc(), right? I'm being very naive and ready to be proven wrong.
The main difference with Go is that the table of a HashMap in Java is a table of pointers, not a table of structs, so the overhead of not shrinking the table is less of an issue.
You're right that Java's HashMap only ever resizes upwards - the most common use case. However, if there's a need to release that memory after calling clear(), the typical approach is to do something that allows the GC to sort it out.
E.g.,
    var map = new HashMap<Bla, Bla>();
    // many things added to map
    // many, but not all, things removed from map
    // to shrink it to the size of the remaining items:
    map = new HashMap<>(map);
Yeah, it's not nice, but the people who wrote the JDK are far smarter than me, so I figure they optimised for the 99% use-case, not the 1%.
>The second most important thing is that there should be a way to get round it - either a .clearHarder() method which throws away the table, or a .compact() method which downsizes it while retaining the content.
Wouldn't just creating a new map be a pretty good way of doing it? If the reduction in size is significant, then the cost of the new allocation will be small compared to the cost of all the allocations you've done to build the original map.
> If a map has this behaviour, i would say that the most important thing is that it should be clearly documented
Iirc, most hash table implementations don't automatically shrink. More documentation is always nice, but isn't not automatically shrinking kind of the expected "default" behavior?
Yes, Go maps never shrink. This is good for most use cases in practice.
Because in practice, map entry deletions seldom happen.
And when map entry deletions are needed, users often hope maps don't shrink, to avoid unnecessary memory allocations and entry moves later on.
For example, I do map entry deletions in only one of my projects.
In that project, I clear all entries of a map and re-use the map to avoid making new allocations. The current design satisfies my needs well.
This is more an optimization than a memory leak.
To avoid the kind-of memory leak, just make a new map and discard the old one.
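Roughly, the two patterns (clear-and-reuse vs. rebuild-to-shrink) look like this - a minimal sketch, assuming Go 1.21+ for the clear builtin and a made-up value type:

    package main

    func main() {
        m := make(map[int][128]byte)
        for i := 0; i < 1_000_000; i++ {
            m[i] = [128]byte{}
        }

        // Reuse: clear removes every entry but keeps the bucket array,
        // so refilling the map later needs no new allocations.
        clear(m)

        // ... refill m ...

        // Shrink: copy the surviving entries into a fresh, right-sized map
        // and drop the old one; the GC reclaims the oversized buckets.
        shrunk := make(map[int][128]byte, len(m))
        for k, v := range m {
            shrunk[k] = v
        }
        m = shrunk
    }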
A few times I have wished for a way to grow maps, but generally to shrink maps I don't think there's any big advantage to a built-in rather than your own function making a new one. (It would only cover the case where you know you want to shrink but don't know if you need to shrink because you don't know how many elements you removed or overestimated by, which I think is pretty unusual.)
Personally, I am not against the proposal to add a "shrink(aMapOrSlice)" builtin function. In fact, there are more builtin functions on my wish list, such as "concat(slices)" and "decap(aSlice)".
It is just that the Go core team have their own philosophy.
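For what it's worth, a user-level shrink for maps is only a few lines with generics - a sketch, where Shrink is a made-up name, not anything standard:

    // Shrink returns a copy of m whose bucket array is sized for the entries
    // m actually holds; once the caller drops the original (oversized) map,
    // the GC can reclaim its buckets. Use as: m = Shrink(m).
    func Shrink[K comparable, V any](m map[K]V) map[K]V {
        out := make(map[K]V, len(m))
        for k, v := range m {
            out[k] = v
        }
        return out
    }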
The parent said “most” so both can be true: most maps don’t need to shrink and the example in the article is a valid case where shrinking a map would be desirable. This seems obvious so I’m confused by your confusion. :)
> However, let’s say we want to store one hour of data. Meanwhile, our company has decided to have a big promotion for Black Friday: in one hour, we may have millions of customers connected to our system. But a few days after Black Friday, our map will contain the same number of buckets as during the peak time. This explains why we can experience high memory consumption that doesn’t significantly decrease in such a scenario.
> What are the solutions if we don’t want to manually restart our service to clean the amount of memory consumed by the map?
I don’t see that as a strong argument for making the implementation of maps more complex by adding some auto-shrinking. If your process survived Black Friday with a map full of items, why would it run out of memory after it with a map that’s mostly empty but didn’t return bookkeeping memory to the heap? Your process likely will keep running. There may be a slight performance drop because cache lookups become more likely to lead to cache misses, but otherwise, the typical system won’t care much about the use of virtual memory space.
What happened to a memory leak being some memory that was allocated but had no reference to it, so couldn't be freed? If you can copy the map, release the original, and the memory usage drops, there is no leak?
This is also why valgrind classifies the leaks it reports with stuff like "still reachable" or "possibly still in use" (I might be remembering the exact phrasing incorrectly). It would be pretty hard to programmatically determine whether the memory that's still kept around was intended to be kept around or not, which is why valgrind supports generating "suppressions" (and specifying them in subsequent runs to be ignored).
This is the use case for weak maps, which both Java and JavaScript have. In the latter case, the map is not iterable, so one cannot observe JavaScript GC (through WeakMap at least).
A memory leak has always meant "this program keeps allocating more memory as it runs, even though it's not being asked to store anything new". That is equivalent to saying that a program has a memory leak when it fails to free memory that is no longer needed, not just memory that is no longer reachable.
For example, a common example of a memory leak is adding items to a "cache" without any mechanism that evicts items from the cache in any scenario. The "cache" is thus not a cache, but a memory leak (a common implementation of this leaking scenario is that items are put in a map, but never removed from the map).
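Sketched out (names invented for illustration), that leaking "cache" looks something like:

    var cache = map[string][]byte{} // no eviction anywhere

    func lookup(key string) []byte {
        if v, ok := cache[key]; ok {
            return v
        }
        v := make([]byte, 1<<20) // stand-in for the real, expensive work
        cache[key] = v           // inserted, but never removed
        return v
    }

With an unbounded key space, memory use only ever grows, even though every byte is still reachable from the map.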
Memory leak has never, as far as I know, referred to the specific case of memory that is no longer accessible from program code to be freed. In fact, this definition doesn't even make sense from a runtime system perspective, since even in C, the memory is actually always still reachable - from malloc()'s internal data structures.
Those pretty much can't happen in garbage collected languages, so the usage of the term has been widened to include things like this. I agree it's a shame.
Memory leaks have always meant "failing to free memory that is no longer needed".
Garbage collection literature often stresses the difference between "no longer needed" and "not reachable", noting that the former is not automatically decidable (it amounts to solving the halting problem) and that the latter is only a heuristic for it. So the fact that garbage collectors can't prevent all memory leaks is itself a point the literature keeps making.
I read about this in the Garbage Collection Handbook [1], an excellent overview of the entire field (at least up to ~2016) that discusses the distinction at length. I don't have it on me to quote, but a very clear distinction is made between "live objects" and "reachable objects", with reachability acting as a computable proxy for the uncomputable property of liveness. Liveness is defined as "this object will be used again by the program in some way", and a memory leak is defined as "failing to free an object that is no longer live". An unreachable object can't be live, but there are many ways of having a reachable object that is not live.
To prove that this is used in the literature at large, here is the abstract of a random GC paper I found [0]:
> Functional languages manage heap data through garbage collection. Since static analysis of heap data is difficult, garbage collectors conservatively approximate the liveness of heap objects by reachability i.e. every object that is reachable from the root set is considered live. Consequently, a large amount of memory that is reachable but not used further during execution is left uncollected by the collector.
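To make the distinction concrete, here is a contrived Go sketch (the names and sizes are made up) of an object that stays reachable while no longer being live:

    package main

    import "time"

    var results = map[string][]byte{} // global, so reachable for the program's lifetime

    func main() {
        results["boot-report"] = make([]byte, 64<<20)
        println(len(results["boot-report"])) // last use, ever

        // From here on the 64 MiB value is still reachable (the global map
        // points at it) but no longer live (nothing will ever read it again),
        // so a reachability-based collector can never reclaim it - a leak
        // under the liveness definition above.
        time.Sleep(time.Hour) // the rest of the long-running program
    }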
Although, it's pretty useless as an exploit, since it requires you to be able to run arbitrary Go code to begin with (the author admits as much). It's _very_ unlikely that a remote attacker could exploit a data race in a regular Go program.
Every GC language is by definition memory safe; memory safety in programming does not mean that accessing the same resource from two threads is safe.
I don't know how it works in other languages, but accessing a partially overwritten slice in Go (as will happen in the presence of data races) can cause your code to access out-of-bounds memory. And as we all know, once you have read/write access to arbitrary areas in memory, you've basically opened up Pandora's box.
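A hedged sketch of the mechanism (timing-dependent, and only an illustration, not a practical exploit): a slice header is three words - pointer, length, capacity - written non-atomically, so a racing reader can pair the short slice's pointer with the long slice's length and sail past the bounds check:

    package main

    var shared []byte

    func main() {
        short := make([]byte, 1)
        long := make([]byte, 1<<20)

        go func() {
            for {
                shared = short // racy write
                shared = long  // racy write
            }
        }()

        for i := 0; i < 1_000_000_000; i++ {
            s := shared // racy read: may observe a torn header
            if len(s) > 1 {
                _ = s[len(s)-1] // bounds check passes, yet this may read far
            }                   // outside short's 1-byte allocation
        }
    }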
I don't think you can have data races (but certainly you can have race conditions) in Python because of the GIL. I imagine Ruby is similar. Otherwise, no, the other languages you listed are not "memory safe". Once you start reading and writing to arbitrary locations in a process, almost anything can happen. But certainly you can say that there are different degrees of memory safety. All of the languages you mentioned are leaps and bounds above C/C++.
The same goes for Rust and most other "safe" languages. They all have synchronization primitives that make it safe, but you need to use them - the compiler won't always tell you.
For Rust specifically, the compiler does force safe programs to have no data races. That's actually what the ownership system, Send and Sync are about. If you manage to corrupt memory or have undefined behavior in safe Rust, that should be a compiler or library bug.
That is basically the entire shtick of Rust: data is "owned", and only the owner can write. You can "borrow" something for read access, but while something is borrowed it can't be written to.
There are of course workarounds for this like reference counted wrappers and so on.
This all looks like reasonable implementation behavior which will give optimal runtime performance in most "common" cases. If one really wants or needs a map that'll free memory as it shrinks (and sure, for some folks, that'd be super useful), one's always free to just implement one's own.
Why is that? Or rather, what limitations enforced by the Go compiler would prevent someone from implementing their own hash map that can free memory, with the same generic possibilities, especially given the recent introduction of generics?
Seems like I was wrong about my assumptions about Go's generics - it does seem like they're specialized at build time and it is possible to operate over generic values rather than fat pointers. So it is possible to implement a fully featured hash map without extra pointer hopping now. I stand corrected.
You probably won't be able to match the built-in's performance without a similarly large surface area of unsafe usage. And Go's maintainers won't maintain your unsafe code for you as they do the default, nor consider the impact of other language changes on your micro-optimizations.
> Also, in this case, the amount of required memory is less significant during peak times due to some optimizations to reduce the memory consumed.
Pretty sure it’s the same: because of collisions, maps always have a certain fraction of empty slots (much more so than vectors). I’ve not heard of Go maps using very high load factors, so they probably resize at around 50% full. I didn’t check any of the numbers; I’ll assume a 50% load factor, doubling on resize, and entries of {hash, key, value} with a word-sized (8-byte) hash, though some, and likely all, of those assumptions are probably off.
Anyway with a load factor of 50% and doubling at any point you’d have 2n to 4n slots in the map total (depending whether you’re nearing resize or just went through one). And thus if you make entries smaller, you make all those overhead slots smaller as well.
hash + int + blob would be 8+8+128 = 144 bytes; a million of those is 144MB, x2 is 288MB and x4 is 576MB, so the observed bounds are pretty close. IIRC Go maps use some form of separate chaining (though not “trivial” linked lists); I assume those chains do get reclaimed as entries are removed even if the map itself does not get shrunk, which would explain why memory consumption goes down as elements are removed from the initial map.
With a pointer each “entry” is 24 bytes instead, for a map overhead of 48~96MB (so clearly Go maps use either a higher load factor or a more efficient storage than the trivial scheme being assumed here, possibly both).
The actual data accounts for 128MB plus some; there’s a 144MB difference between the full and the emptied pointer-maps, which would be accounted for by some chain links being removed, and maybe some object headers / allocator overhead for the on-heap arrays.
> 50% load factor would be incredibly low for any hash table.
Might be that I’m more used to open-addressing maps? IIRC circa 50% is a pretty good starting point for open addressing unless the collision resolution is specifically designed for higher occupancy (e.g. swisstable). Not to say it can’t get higher, but for some schemes that’s where the performance starts going down (e.g. non-bucketed cuckoo hashing).
Sure, if you grab an algorithms 101 textbook and implement its example of an open addressing table, 50% is a performance inflection point, but nobody is really using such things. And even then I recall seeing numbers more like 70% before actually doubling due to the high cost of rehash; 50% was only for the initial allocation.
Let's be kind and assume that prior to removal, the map has just trebled in size and the extra space wasn't used. Doesn't this imply that a map has an overhead of about 100 bytes per key/value pair? How can this be so?
There's some waste involved in not pre-allocating the map to begin with. Check the output of this variation (https://go.dev/play/p/vQwg3GajzXx -- n shrunk to 1,000 to stay within the playground's memory constraints), which fills the map again after the GC. You'll see the map doesn't grow back to the max size.
And if you specify the size of the map up front in the `make()` function, it never grows or shrinks (or not significantly): https://go.dev/play/p/AGW35kMMOc5
IIRC, what's happening is that whenever the map hits what was pre-allocated, it expands by a certain growth factor of the current size. So at some point between 1 and 1,000,000 a big chunk was allocated that went over what was necessary for 1,000,000 entries. In my testing it happened around i == 851_500.
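In sketch form (just illustrating the size hint, not the playground code above):

    package main

    func main() {
        const n = 1_000_000

        grown := make(map[int][128]byte)       // starts small; grows by doubling as entries arrive
        presized := make(map[int][128]byte, n) // bucket array sized once, up front

        for i := 0; i < n; i++ {
            grown[i] = [128]byte{}    // each doubling allocates a new bucket array
            presized[i] = [128]byte{} // never grows, so no intermediate arrays to discard
        }
    }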
I don't see any issue here. The same behavior can be achieved with slices:
    foo := make([]int, 0, 1000)
    for {
        // bar is some map being processed each iteration
        for k := range bar {
            foo = append(foo, k)
        }
        foo = foo[:0] // reset the length; the backing array (capacity) is kept
    }
The slice will grow to hold the largest dataset, and a map will behave the same way: you need to let go of it (so it can be GC'd) and create a new one.
The point is that you can remove the entries from the map, and the map won't ever shrink. If you're using a large value type in the map [1], the map's dead storage will be large - by a functionally unbounded amount. Most sane collection libraries shrink their backing store after some sufficiently large portion becomes dead.
[1] I would argue a general purpose hash table/map should really switch to using a hash code=>index mapping automatically if the value type size is sufficiently large.
The map will shrink, just not by as much as you might expect. You could also argue that they have picked a pathological case for their example because they use 128 byte entries ("If a key or a value is over 128 bytes, Go won’t store it directly in the map bucket. Instead, Go stores a pointer to reference the key or the value" - which probably leads to less memory consumption).
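For illustration, the two layouts in question (using the 128-byte threshold from the quote; the exact types are just examples):

    // Values of 128 bytes or less are stored inline in the buckets, so the
    // never-shrinking bucket array holds the bulk of the memory.
    var inline map[int][128]byte

    // Values over 128 bytes are stored behind a pointer; the buckets keep only
    // 8-byte pointers, and the pointed-to values are collected individually as
    // entries are deleted, so far less memory stays stuck.
    var indirect map[int][256]byte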
It does, and as the article mentions, it’ll collect the map elements. But it doesn’t collect the map infrastructure that grew to accommodate the elements, and doesn’t shrink when the elements are removed, because the map never stops referencing that infrastructure.
[1]: https://github.com/golang/go/issues/54766