The thing about DRAM is that it isn't SRAM; cost matters. You struggle to find deployment environments with less than 1 GB of DRAM available per core, because at that point ~95% of the HW cost is typically the CPU anyway. Shrinking memory further is kind of pointless, so people don't do it. Hence, when utilizing 16 cores, you get at least 16 GB of DRAM with them, whether you choose to use it or not. If you use only 10% of that memory by removing the garbage from the heap, then while lower seems better and all that, it's not necessarily any cheaper in actual memory spending if both configurations fit within the same minimum 1 GB/core shape you can buy anyway. It might just underutilize the memory resources you paid for in a minimum-memory-per-CPU shape, which isn't necessarily a win. Utilizing the memory you bought isn't wasting it.
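To make the shape arithmetic above concrete, here is a small sketch. The numbers (1 GB/core minimum shape, each GB/core at ~5% of CPU cost) are the ballpark figures from this comment, not real prices:

```python
CORES = 16
MIN_GB_PER_CORE = 1          # smallest memory shape you can typically buy
CPU_COST_PER_CORE = 1.0      # normalize CPU cost to 1.0 per core
MEM_COST_PER_GB = 0.05       # each GB/core is ~5% of the CPU cost (ballpark)

def hw_cost(cores, heap_gb):
    # You pay for at least MIN_GB_PER_CORE per core, whether you use it or not.
    billed_gb = max(heap_gb, cores * MIN_GB_PER_CORE)
    return cores * CPU_COST_PER_CORE + billed_gb * MEM_COST_PER_GB

# Using all 16 GB vs. only 10% of it: the bill is identical, because both
# configurations fit within the same minimum 1 GB/core shape.
print(hw_cost(CORES, 16.0))   # 16.8
print(hw_cost(CORES, 1.6))    # 16.8
```

So below the minimum shape, a smaller heap changes nothing on the invoice; the savings only start once you drop whole GB/core off a larger shape.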
Each extra GB per core you add to your shape actually costs something, so every GB/core that can be saved results in actual cost savings. But even then, each extra GB/core is usually ~5% of the CPU cost. Hence, even going from 10 GB/core (sort of a lot) down to 1 GB/core saves only ~45% of the CPU cost, which is a ballpark ~30% reduction in total HW cost. Since they did not mention how many cores these instances have, it's hard to know what GB/core was used before and after, and hence whether there were any real memory cost savings at all, and if so how big they were relative to the CPU cost.
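The savings arithmetic, again using the ~5%-of-CPU-cost-per-GB/core figure from above (an approximation, not a price list):

```python
CPU = 1.0                 # CPU cost per core, normalized
GB_COST = 0.05 * CPU      # each extra GB/core costs ~5% of the CPU cost

def total_per_core(gb_per_core):
    # Total HW cost per core: the CPU plus the memory attached to it.
    return CPU + gb_per_core * GB_COST

before = total_per_core(10)   # 1.50 per core at 10 GB/core
after = total_per_core(1)     # 1.05 per core at 1 GB/core
savings = (before - after) / before
print(f"{savings:.0%}")       # 30%
```

Even a 10x reduction in memory footprint caps out around a ~30% smaller HW bill, because the CPU dominates the cost.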
I'm building this JEP for automatic heap sizing right now to address this when using ZGC: https://openjdk.org/jeps/8329758
I did in fact run exactly 200 JVMs, running a heterogeneous set of applications, and it went totally fine. By totally fine I mean that the machine got rather starved of CPU and the programs ran slowly, due to having 12x more JVMs than cores, but they could all share the memory equally without blowing up anyway. I think it's looking rather promising.