The default sort is price per vCPU, but all the ones that show up initially are actually fractions of a CPU. Granted, getting the actual fraction is not made easy by the vendors.
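To put a number on that: a sketch of what "price per vCPU" looks like once you fold the actual CPU fraction back in (the instance names, prices, and fractions below are made up, not from any vendor's spec sheet):

    # Sketch: normalize advertised price-per-vCPU by the actual CPU fraction.
    # All figures are illustrative, not real vendor numbers.
    instances = [
        {"name": "tiny-shared", "price_per_hr": 0.006, "vcpus": 1, "cpu_fraction": 0.125},
        {"name": "small-dedicated", "price_per_hr": 0.040, "vcpus": 1, "cpu_fraction": 1.0},
    ]

    for inst in instances:
        per_vcpu = inst["price_per_hr"] / inst["vcpus"]
        per_full_core = per_vcpu / inst["cpu_fraction"]   # what a whole core effectively costs
        print(f'{inst["name"]}: ${per_vcpu:.4f}/vCPU-hr listed, '
              f'${per_full_core:.4f}/hr per full-core equivalent')

On those made-up numbers the "cheap" fractional instance is actually the more expensive one per unit of CPU.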
Some providers, like DigitalOcean and Linode, include some bandwidth with the instance. If you have any egress at all, choosing one of those may yield large cost savings.
Thanks for making this tool, cloud pricing is a maze, and things like this help a lot.
Great tool. It would be good to also include other providers like Advania, Linode, and Hetzner. Often these are cheaper and have more RAM, and some even offer DDoS protection. (AWS charges about $6000 for its DDoS protection.)
When your initial goal was to find cheap GPU instances, did you look at Genesis Cloud (https://www.genesiscloud.com/)?
They currently offer Nvidia GTX 1080 Ti, and apparently soon RTX 3080 and RTX 3090, plus some AMD cards.
It would be nice to filter by region too. Currently, when you select a global area (like North America) you end up with lots of duplicate machine types for the various regions within. I know the prices can vary between Central US and East US but it would be helpful to narrow these down when searching.
Google Cloud applies sustained use discounts, so the charges decrease as you progress through the month. Your chart looks accurate for one on-demand hour of use, but you're missing roughly a 20% discount on GCP if you run the machine all the time.
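Roughly, for a machine that runs the whole month (a sketch only; the hourly rate is made up, and GCP's real sustained-use discount is tiered rather than a flat percentage):

    # Sketch: effective monthly cost with a flat ~20% sustained-use discount,
    # per the comment above. Hourly rate is illustrative; GCP's real discount
    # is tiered over the month, so treat this as an approximation.
    on_demand_hourly = 0.0475        # hypothetical $/hr
    hours_in_month = 730
    sustained_use_discount = 0.20

    full_month_list = on_demand_hourly * hours_in_month
    full_month_effective = full_month_list * (1 - sustained_use_discount)
    print(f"list: ${full_month_list:.2f}/mo, effective: ${full_month_effective:.2f}/mo")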
Nice work including some general network capacity in your chart. That's something that's often missed in comparing machine types.
Vertical scaling, I assume. For some services that may even be a sensible option, given the development cost and flakiness of many horizontally scaled systems.
But as usual, what AWS charges (1x hardware cost per month) seems a bit excessive.
Mostly SAP, specifically HANA. That's what AWS markets them for anyway. The price of the machine at this scale is probably not even the main cost, as HANA licenses are based on RAM allocated.
This is really neat and I've been looking for something like this for a while. It'd be really useful if there were benchmarks for each instance so that it'd be easier to compare the vCPUs. I know that benchmarks aren't 100% representative but they might make the comparisons easier.
(I can't edit my comment as the timer has expired) What I'd find ultra useful would be to have common workloads, like CI/CD builds. For example, building a medium-sized node project and comparing the results.
Of course, that makes things even more subjective and makes burstable instances more difficult to compare but the results could be interesting.
The use case is different. For the price of a cloud instance, on top of the instance itself, you are paying for:
* availability. On AWS, you can start a couple dozen or a couple hundred instances on demand, for a limited time. You are paying for that spare capacity. VPS/dedicated server providers generally have much less spare capacity, and you're booking by the month, not by the minute.
* reliability. Most real cloud instances live on networked drives, and your risk of losing data is very low. On root servers, you have to handle data reliability yourself much sooner.
(you should do backups either way, but you're more likely to actually need them on VPS/dedicated offerings than in the cloud)
* surrounding services. Private networking, security features, etc.
You pay a premium for all of that, so for the same raw compute performance, cloud prices will be at least 2-3x the price of a basic root or dedicated server. On the other hand, VPS/dedicated servers typically include bandwidth in the price. The best choice depends on your requirements, but most people will blindly go for cloud servers.
Which tips the balance even further in the direction of the root server/dedicated server providers, since not only is the server cheaper but so are the bandwidth charges (usually by a lot).
I push terabytes of data across 10 Netcup servers for less per month than some really, really underpowered AWS instances would cost. Also, zero failures or maintenance in over a year.
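A back-of-the-envelope way to see where the balance tips, assuming made-up instance prices and a roughly typical cloud per-GB egress rate (none of these are real quotes from AWS or Netcup):

    # Sketch: monthly cost including egress. All prices are placeholders.
    def monthly_cost(instance_price, egress_tb, per_gb_egress, included_tb=0.0):
        billable_gb = max(egress_tb - included_tb, 0.0) * 1000
        return instance_price + billable_gb * per_gb_egress

    egress_tb = 5.0
    cloud = monthly_cost(instance_price=70.0, egress_tb=egress_tb, per_gb_egress=0.09)
    dedicated = monthly_cost(instance_price=30.0, egress_tb=egress_tb,
                             per_gb_egress=0.0, included_tb=20.0)
    print(f"cloud: ${cloud:.2f}/mo, dedicated: ${dedicated:.2f}/mo")

With any meaningful egress, the per-GB charges dominate the instance price itself.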
Awesome project -- bookmarked. One thing I'd love would be to load test the CPUs, get a baseline, and divide that number straight into the price per minute or hour to come up with a simple figure. The formula probably won't be that straightforward, but I'd love to be able to characterize CPU price/performance as a single number and use that for comparisons.
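Something along these lines, assuming you already have a benchmark score per instance (the scores and prices below are invented for illustration):

    # Sketch: collapse benchmark score and price into one price/performance figure.
    # Scores and prices are invented, not measured.
    instances = [
        {"name": "vendor-a-2vcpu", "price_per_hr": 0.085, "bench_score": 410},
        {"name": "vendor-b-2vcpu", "price_per_hr": 0.096, "bench_score": 560},
    ]

    for inst in instances:
        # dollars per hour per unit of benchmark score: lower is better
        inst["price_per_perf"] = inst["price_per_hr"] / inst["bench_score"]

    for inst in sorted(instances, key=lambda i: i["price_per_perf"]):
        print(f'{inst["name"]}: {inst["price_per_perf"] * 1000:.3f} $/hr per 1000 points')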
For those out there who may not know, there's also an EC2-specific instance comparison site that's pretty useful (it's been around a while and I use it often):
This is great work. I know it would be a herculean task to reconcile across different clouds, but a reservation cost comparison would be really useful too.
Just make sure you have a plan to move from the tX instances to something non-burstable as soon as load gets to 30% or so. When those machines run out of burst credits, the pain is instant and severe.
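The credit math behind that cliff, as a single-vCPU sketch (the accrual rate, balance, and baseline here are illustrative, not AWS's actual credit tables; look up the real values for your instance type):

    # Sketch: when a burstable instance runs out of CPU credits (1 vCPU case).
    # Numbers are illustrative, not taken from AWS's credit tables.
    credits_per_hour = 12                   # credits accrued per hour (1 credit = 1 vCPU-minute)
    baseline_util = credits_per_hour / 60   # the load the accrual can sustain forever (20%)
    starting_balance = 144                  # credits banked when load ramps up
    actual_util = 0.35                      # sustained CPU utilization

    burn_per_hour = actual_util * 60        # credits consumed per hour at this load
    net_drain = burn_per_hour - credits_per_hour
    if net_drain > 0:
        hours_until_throttled = starting_balance / net_drain
        print(f"balance drains at {net_drain:.0f} credits/hr; "
              f"throttled to {baseline_util:.0%} baseline in {hours_until_throttled:.1f} h")
    else:
        print("accrual covers this load; no throttling")

In this example a sustained 35% load burns through the bank in 16 hours, and then the instance drops to baseline with no warning.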
They're not even necessarily comparable within a vendor. The underlying hardware changes, and newer generations will (hopefully) perform better than previous ones. E.g., Azure publishes CoreMark perf stats for some of their VM types, and the scores can vary by 60%. (Now, I don't know how CoreMark scores translate into raw CPU perf.) But it's hopefully enough to show that 1 vCPU of yesteryear's model != 1 vCPU of this year's model.
I think EC2 had some sort of "ECU" unit to try to normalize them? I don't know of any other such attempts, though.
Azure used to sell physical cores while AWS was selling hyper-threaded cores. This made a big difference in the pricing and performance of certain workloads like Oracle. These days, everything seems to be sold under the term vCPU, which is an abstraction representing the performance you can expect and is roughly equivalent to a hyper-threaded core. https://azure.microsoft.com/en-us/blog/introducing-the-new-d...
For DigitalOcean at least, there is quite a large difference between the "basic" droplets and the ones with dedicated vCPUs. In this table, they both look the same. I've seen 2-4x differences between the two: https://rob006.net/en/blog/how-more-efficient-is-cpu-optimiz...
They can be very different. I recently benchmarked this with a simple Python-based benchmark (as I use Python most often), and in some cases the differences were as large as 2x.
These came mostly from the fact that some providers have updated to the latest-generation AMD chips, which makes a big difference compared to 3-4 year old Intel chips.
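For reference, a crude benchmark along those lines can be as small as this sketch (not the actual script used above, just the idea):

    # Sketch: a crude single-core CPU benchmark, similar in spirit to the
    # comparison described above. Not the original script.
    import time

    def workload(n=2_000_000):
        # integer-heavy loop; crude, but enough to expose large generational gaps
        total = 0
        for i in range(n):
            total += (i * i) % 97
        return total

    runs = []
    for _ in range(5):
        start = time.perf_counter()
        workload()
        runs.append(time.perf_counter() - start)

    print(f"best of 5: {min(runs):.3f}s")   # compare this number across instance types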
Interruptible, "spot", or preemptible VMs are sold at a much lower cost, with the caveat that they can be terminated at any moment. For workloads that pull from a queue for data processing, or other "pausable" systems, they can save a huge amount. The cloud vendors get to collect roughly 1/3 to 1/2 of on-demand rates instead of $0 while their hardware would otherwise sit unused, effectively monetizing incremental compute capacity. Usually they are priced via auction, so it's a truly efficient market.
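For the pull-from-a-queue pattern, the usual trick is to poll the provider's interruption notice and stop taking new work once one shows up. A rough sketch against EC2's spot instance-action metadata endpoint (IMDSv1-style; IMDSv2 needs a session token first; fetch_job and process are placeholders for your own queue logic):

    # Sketch: a spot-friendly queue worker that stops pulling jobs once an
    # interruption notice appears. fetch_job()/process() are placeholders.
    import urllib.request
    import urllib.error

    INTERRUPT_URL = "http://169.254.169.254/latest/meta-data/spot/instance-action"

    def interruption_pending():
        try:
            with urllib.request.urlopen(INTERRUPT_URL, timeout=1):
                return True       # the path only resolves once termination is scheduled
        except urllib.error.URLError:
            return False          # 404 / unreachable: no interruption scheduled

    def run_worker(fetch_job, process):
        while not interruption_pending():
            job = fetch_job()
            if job is None:
                break
            process(job)          # keep jobs short enough to finish within the warning window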
I created this originally to be able to find the cheapest interruptible GPU resources.
It has grown to cover all possible instance types for major cloud providers.
Current features:
- Data refreshed every week
- Seven major cloud providers covered
- CPU or GPU instance hunting
- On-demand or interruptible pricing
- Filter by CPU, RAM, Region or Vendor
- Sort by Price, Price per CPU, Price per GB RAM, Price per GPU and many others
The final goal is to allow one-click deployment of the desired instance.