We started out with a three-node cluster on the smallest (1 vCPU/2GB RAM) VMs you can get. Our initial requirement was zero-downtime deployments and a nice web UI to perform them (alongside the other advantages you get from a distributed system). We have since resized the nodes to the next tier (2 vCPU/4GB RAM).
The hardware requirements depend on your workload. We process about 200 requests/sec right now, and 600-700/sec on Saturdays (thanks to a larger client), and the nodes handle this load perfectly fine. All our services are written in Go, and a single central Redis instance caches sessions.
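The session caching itself is nothing fancy. A rough sketch of that pattern in Go, using go-redis (the library, key prefix, and TTL here are just illustrative choices, not necessarily what we run):

```go
package main

import (
	"context"
	"time"

	"github.com/redis/go-redis/v9"
)

// SessionCache is a toy session store backed by a single central Redis instance.
type SessionCache struct {
	rdb *redis.Client
	ttl time.Duration
}

func NewSessionCache(addr string) *SessionCache {
	return &SessionCache{
		rdb: redis.NewClient(&redis.Options{Addr: addr}),
		ttl: 30 * time.Minute, // made-up TTL for illustration
	}
}

// Put stores session data under a prefixed key with an expiry.
func (c *SessionCache) Put(ctx context.Context, id, data string) error {
	return c.rdb.Set(ctx, "session:"+id, data, c.ttl).Err()
}

// Get returns the cached session data, or an error if it expired or never existed.
func (c *SessionCache) Get(ctx context.Context, id string) (string, error) {
	return c.rdb.Get(ctx, "session:"+id).Result()
}
```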
Our database server is not part of the cluster and has 32 physical cores (64 threads) and 256GB RAM.
I'd say start out small, scale as you go, and try to estimate the RAM usage of your workloads. The HashiStack itself basically runs on a calculator.
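One cheap way to get that RAM estimate for a Go service is to log the runtime's own memory stats under real traffic for a while before you pick resource limits. A minimal sketch (the interval and the fields logged are arbitrary choices):

```go
package main

import (
	"log"
	"runtime"
	"time"
)

// logMemUsage periodically prints the Go runtime's view of memory use,
// which gives a rough baseline for sizing a workload's memory allocation.
func logMemUsage(interval time.Duration) {
	for range time.Tick(interval) {
		var m runtime.MemStats
		runtime.ReadMemStats(&m)
		log.Printf("heap=%d MiB sys=%d MiB goroutines=%d",
			m.HeapAlloc/1024/1024, m.Sys/1024/1024, runtime.NumGoroutine())
	}
}

func main() {
	go logMemUsage(30 * time.Second)
	// ... start your HTTP server etc. here; the empty select keeps the sketch runnable.
	select {}
}
```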