If you do a lot of cross-node talk with, say, Phoenix channels, then lowering the total number of nodes helps (a full mesh has n*(n-1) connections). And since the BEAM scheduler will use all available cores by default, past a certain node count it makes more sense to run fewer nodes with more cores per node. When you also have, say, an observability agent eating 1000 millicores, doubling the instance size makes a real difference. At my last shop, we would idle on 4-core instances, jump to 16-core instances for the morning rush, and drop back to 8 cores for late-afternoon traffic. I had tried scaling horizontally with 4-core machines, but saw a lot more instability and performance issues.
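The quadratic growth is easy to see with a quick back-of-the-envelope calculation (a minimal sketch; the node counts are illustrative, and note that since distributed Erlang uses one TCP connection per node pair, the undirected link count is half the n*(n-1) figure):

```python
def mesh_links(n: int) -> int:
    # Ordered-pair count for a full mesh: each of n nodes
    # talks to the other n - 1, giving n * (n - 1) connections.
    return n * (n - 1)

# Fewer, bigger nodes shrink the mesh dramatically:
for n in (16, 8, 4):
    print(f"{n:2d} nodes -> {mesh_links(n):3d} mesh connections")
# 16 nodes -> 240 mesh connections
#  8 nodes ->  56 mesh connections
#  4 nodes ->  12 mesh connections
```

Halving the node count cuts cross-node chatter to roughly a quarter, which is why consolidating onto larger instances pays off so quickly.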
This is one of the advantages of using Elixir or Erlang. You simply don't get these kinds of operational characteristics with Ruby, Python, or Node.js.