
Both the old and new systems were using licensing based on processor cores, not VMs or instances.

If I remember correctly, my version had something like 8 + 8 cores in an active/passive configuration, where the passive node is free. There was also a single dev/test server with 8 cores, but that was free too.

The replacement used a few hundred cores shared by the various instances and environments. If I remember correctly, they had something like 10-20 databases per virtual machine, and then about 5 virtual machines per physical host. The cores in the physical host were licensed, not the logical layers on top. (I can't remember the exact ratios, but the approach is the point, not the numbers.)

The "modern" cloud approach of dedicating a VM to every single thing is actually terribly inefficient, and it would have bloated the above out to thousands of VMs instead of "merely" a few hundred.
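To make the difference concrete, here is a rough back-of-the-envelope sketch of the licensing arithmetic. Every number is an assumption picked to match the approximate ratios above (databases per VM, VMs per host, core counts), not a real figure:

```python
import math

# Assumed ratios, loosely based on the description above -- illustrative only.
DBS_PER_VM = 15              # "10-20 databases per virtual machine"
VMS_PER_HOST = 5             # "about 5 virtual machines per physical host"
CORES_PER_HOST = 40          # assumed physical cores per host
CORES_PER_DEDICATED_VM = 4   # assumed size of a one-database-per-VM instance

def licensed_cores_consolidated(n_databases: int) -> int:
    """License the physical hosts, not the logical layers stacked on them."""
    dbs_per_host = DBS_PER_VM * VMS_PER_HOST
    hosts = math.ceil(n_databases / dbs_per_host)
    return hosts * CORES_PER_HOST

def licensed_cores_dedicated(n_databases: int) -> int:
    """One VM per database: every VM's cores need a license."""
    return n_databases * CORES_PER_DEDICATED_VM

print(licensed_cores_consolidated(1000))  # a few hundred licensed cores
print(licensed_cores_dedicated(1000))     # thousands of licensed cores
```

With these made-up but plausible ratios, 1,000 databases land on "a few hundred" licensed cores when consolidated, versus thousands when each gets its own VM.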

The correct architecture for something like this -- these days -- might be to use Kubernetes. That provides the required high availability and instancing while efficiently bin-packing the workloads and deduplicating the storage.
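The bin-packing idea can be sketched with the classic first-fit-decreasing heuristic: sort workloads by core demand and place each on the first host with room. The Kubernetes scheduler is far more sophisticated (it also weighs memory, affinity, and spreading), but the core principle is the same. The demand and capacity numbers below are made up:

```python
def first_fit_decreasing(demands: list[int], capacity: int) -> list[list[int]]:
    """Pack workloads (core demands) onto hosts of fixed core capacity."""
    hosts: list[list[int]] = []
    for demand in sorted(demands, reverse=True):
        for host in hosts:
            if sum(host) + demand <= capacity:
                host.append(demand)  # fits on an existing host
                break
        else:
            hosts.append([demand])   # no host had room; open a new one
    return hosts

# Hypothetical per-workload core demands, hosts with 8 cores each.
packed = first_fit_decreasing([4, 2, 7, 1, 3, 6, 2], capacity=8)
print(len(packed))  # number of hosts needed
```

Packing many small workloads onto fewer big hosts is exactly what keeps the licensed physical core count down, versus giving every workload its own dedicated VM.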

Still, you can't Helm-chart your way out of an inefficient application codebase.

Again, for comparison, my version could run on a laptop and had about half a dozen components, not thousands.



