
Show HN: EC2 prices per GB of memory - bzz01
https://instaguide.io/#priceCalc=ram&period=mo
======
nodesocket
Pricing is yet another area where Google Compute Engine is superior: automatic
sustained-use discounts[1], the ability to commit long term to a certain
number of CPUs and amount of memory and get a discount without any upfront
cost[2], and extended memory[3], which basically lets you craft a VM instance
of any shape. Need just 1 CPU but 32 GB of memory? No problem.

AWS on the other hand... A labyrinth of pricing tables, spot instances, EBS
optimized, enhanced networking... Complexity.

[1] - [https://cloud.google.com/compute/docs/sustained-use-discounts](https://cloud.google.com/compute/docs/sustained-use-discounts)

[2] - [https://cloud.google.com/compute/docs/instances/signing-up-committed-use-discounts](https://cloud.google.com/compute/docs/instances/signing-up-committed-use-discounts)

[3] - [https://cloudplatform.googleblog.com/2017/05/Compute-Engine-updates-bring-Skylake-GA-Extended-Memory-and-more-VM-flexibility.html](https://cloudplatform.googleblog.com/2017/05/Compute-Engine-updates-bring-Skylake-GA-Extended-Memory-and-more-VM-flexibility.html)
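For concreteness, the sustained-use scheme in [1] applied discount multipliers
to successive quarters of the month; a rough sketch (tier numbers are from the
2017-era docs and should be treated as illustrative, not a current quote):

```python
# Usage-tier multipliers from the 2017-era sustained-use scheme in [1];
# each tier covers 25% of the month. Treat the numbers as illustrative.
TIERS = [1.00, 0.80, 0.60, 0.40]

def effective_hourly(base_rate, fraction_of_month_used):
    """Average hourly rate after automatic sustained-use discounts."""
    cost = 0.0
    remaining = fraction_of_month_used
    for mult in TIERS:
        used = min(remaining, 0.25)
        cost += used * mult * base_rate
        remaining -= used
        if remaining <= 0:
            break
    return cost / fraction_of_month_used

# Running a VM the whole month nets a 30% discount with no commitment:
print(round(effective_hourly(1.0, 1.0), 2))  # 0.7
```

No upfront action is needed; the tiers kick in automatically as usage accrues.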

~~~
zjfroot
Google Cloud may be superior in compute pricing, but it is actually more
expensive for network egress traffic, object storage and block storage.

I recently created a simple tool
([http://theprice.cloud/](http://theprice.cloud/)) to compare egress traffic,
object storage, and block storage cost among AWS, Google Cloud and Azure. It
seems like Google Cloud is the most expensive one for egress traffic and object
storage.

~~~
boulos
Our egress pricing (from all sources) is certainly higher because we operate a
private backbone rather than just dumping your packets straight onto the
internet.

Your site seems to be hugged to death (so I can't see it), but there are lots
of gotchas with S3 pricing (like rounding up file sizes with Infrequent Access
and Glacier) that in our experience mean our customers come out ahead.
Glacier and Coldline also aren't really comparable, in the sense that GCS
_always_ responds within milliseconds. We only use economics to discourage
frequent access to Coldline, not delays. As above, our egress is more
expensive because it's better (we hear you, though, if your response is "I
don't care! Give me cheaper instead, then!").

Disclosure: I work on Google Cloud.

~~~
rpedela
AWS also has their own backbone. Starting at about 7 min:
[https://www.youtube.com/watch?v=bqPfQgatMko](https://www.youtube.com/watch?v=bqPfQgatMko)

~~~
vgt
Google's backbone is a little bit different. Google Cloud shares this backbone
with YouTube, Maps, and the rest. Not only are there DC-to-DC cables; the
backbone also extends to a vast number of edge POPs (more than AWS and Azure
combined), detailed at [0]. Our DC-to-DC network is pretty great too, allowing
things like Spanner to exist (minimizing the P in CAP, etc.); see [1].

Here's what it means for you in practice:

\- Google's Load Balancer supports a single global anycast endpoint.

\- When your packet heads for the Google network, it enters at the nearest
POP, which acts as an on-ramp; from there it traverses only Google's network,
never touching the public internet.

\- Similarly, when data is en route to a customer, the Google network carries
it on its private backbone all the way to the nearest DC or POP.

\- Google by default gives you a global software-defined VPC. No need to
create VPN tunnels between zones/regions/etc.

(work at G)

[0]
[https://peering.google.com/#/infrastructure](https://peering.google.com/#/infrastructure)

[1] [https://cloudplatform.googleblog.com/2017/02/inside-Cloud-Spanner-and-the-CAP-Theorem.html](https://cloudplatform.googleblog.com/2017/02/inside-Cloud-Spanner-and-the-CAP-Theorem.html)

~~~
colmmacc
I work at AWS, and I think there are definitely some similarities and
differences. We do share our backbone with CloudFront, and hence with our
video traffic, of which there's quite a lot these days. We also advertise our
network ranges broadly; it's our mission to carry the traffic as much as
possible ourselves. So those aspects are very similar.

But a genuine difference is that we don't try to operate a global "seamless"
network. The reason is that we optimize for the "A" in CAP. Our experience is
that at the low-ish level of a network, it can be too easy for outages and
availability issues to spread quickly. For example, with global networking, a
misconfiguration or error can more easily propagate globally and bring
everything down.

Instead, we have autonomous uncoupled regions and it's one of our core
principles that faults and errors stay within these regions (or better yet,
availability zones). That does mean that partitions can happen, but we find that
most customers use active-standby configurations (where it makes no real
difference) for key data, and we also build the tools that work with
partitionable networks at a higher level. For example Route 53 supports multi-
region routing and failover, and does it measurably better than simple anycast
routing can achieve.

Over time, we're offering more and more multi-region services, such as cross-
region replication for data, but the coordination is done at higher levels
where we can achieve higher levels of availability in simpler ways, built on
top of a more solid foundation.

~~~
delhanty
>For example, with global networking then a misconfiguration or error can more
easily propagate globally and bring everything down.

This sounds like Nassim Taleb's antifragile meme [1].

If I were running IT for some large enterprise (which I'm not!), I might
replicate services on both AWS and Google Cloud, a bit like Apple tries to
have more than one supplier for its hardware components.

[1]
[https://en.wikipedia.org/wiki/Antifragile](https://en.wikipedia.org/wiki/Antifragile)

~~~
delhanty
You don't always need to have enterprise $$$ to take an antifragile approach
though.

I'm mirroring my private Git repositories between Gitlab.com and Bitbucket,
which can be done for $0.

I might even end up paying for the bottom end Github.com ($7 per month) and
Gitlab ($4 per month) accounts and have three way redundancy.

------
bzz01
I built this page since I often just want to pick the spot instance type that
gives me the biggest bang for the buck. Amazon's own pricing page is still very
confusing, and ec2instances.info is great, but it is a static website and
adding spot prices there is nontrivial.
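For anyone wondering what the page computes: the per-GB figure is just the
instance price over its memory, roughly like this (the hourly prices and RAM
sizes below are illustrative, not live quotes):

```python
HOURS_PER_MONTH = 730  # average hours in a month

def price_per_gb_month(hourly_usd, ram_gb):
    """Monthly $ per GB of RAM for one instance type."""
    return hourly_usd * HOURS_PER_MONTH / ram_gb

# Illustrative on-demand numbers (not live quotes):
instances = {
    "r4.xlarge": (0.266, 30.5),
    "m4.xlarge": (0.20, 16.0),
}
for name, (hourly, ram) in sorted(instances.items()):
    print(name, round(price_per_gb_month(hourly, ram), 2))
```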

I've also fixed a few minor things that annoyed me, such as correct sorting by
instance type (so that r3.16x comes after r3.4x), and added a mode to display
spot savings vs. on-demand.
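The sorting fix amounts to a natural sort on the size suffix; a minimal
sketch, assuming the usual EC2 size names:

```python
import re

# Map the small named sizes onto a numeric order; anything "xlarge"-ish gets
# its multiplier parsed out (xlarge == 1, 2xlarge == 2, 16xlarge == 16).
SIZE_ORDER = {"nano": 0, "micro": 1, "small": 2, "medium": 3, "large": 4}

def instance_key(name):
    """Sort key so r3.16xlarge comes after r3.4xlarge, not before it."""
    family, size = name.split(".")
    if size in SIZE_ORDER:
        return (family, 0, SIZE_ORDER[size])
    m = re.fullmatch(r"(\d*)xlarge", size)
    if m:
        return (family, 1, int(m.group(1) or 1))
    return (family, 2, 0)  # anything unrecognized sorts last

names = ["r3.16xlarge", "r3.4xlarge", "r3.large", "r3.xlarge", "r3.2xlarge"]
print(sorted(names, key=instance_key))
```

A plain lexicographic sort would put "16xlarge" before "4xlarge"; keying on
the parsed multiplier fixes that.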

Would appreciate any feedback!

~~~
jbverschoor
Best bang for the buck is not at Amazon.

~~~
chx
Right. So the Show HN page says you can barely go below $5 per gigabyte with
on-demand pricing. OVH RAM instances consistently cost $1.33/GB for a machine
with 30, 60, 120 or 240 GB of RAM.

~~~
nodesocket
I haven't looked at OVH lately, but are they really a cloud? Do they provide a
full VPC with a central firewall? Dynamic network disk storage that can be
mapped to servers?

~~~
mschuster91
How does EBS work, anyway? Is it some form of NFS with caching under the hood?

~~~
eropple
It's not NFS; it's a block storage mechanism. NFS presents a file-based
interface, while EBS presents a block/device-based interface. The underlying
implementation isn't widely publicized, but if you think about how a block
device works on Unix and about the AWS bigger-is-faster model for non-PIOPS
EBS, you can probably draw some reasonable inferences.
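The interface difference is easy to see in code: a block device is just
fixed-size reads and writes at byte offsets, no filenames involved. A runnable
sketch (a temp file stands in for a real device node like /dev/xvdf):

```python
import os
import tempfile

BLOCK = 4096  # a typical device block size

# A temp file stands in for the device; a real EBS volume would show up as
# something like /dev/xvdf, but the offset arithmetic is identical.
fd, path = tempfile.mkstemp()
os.pwrite(fd, b"\x00" * (BLOCK * 4), 0)  # a "device" with 4 blocks

def read_block(n):
    """Block interface: a fixed-size read at block offset n; no filenames."""
    return os.pread(fd, BLOCK, n * BLOCK)

print(len(read_block(2)))  # 4096
```

A file interface (NFS) hands you named files and paths; a block interface
only ever sees numbered blocks, and a filesystem on top supplies the names.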

~~~
loeg
PIOPS?

~~~
snuxoll
Provisioned/guaranteed IOPS.

------
mayank
There's also [http://ec2instances.info](http://ec2instances.info) which has
RDS pricing as well.

------
dzdt
Forgive me for a dumb question from someone who doesn't do cloud computing
currently : the units are really $/GB/time period, yes? What is the time
period? One hour? day? month?

~~~
boulos
AWS bills instances hourly; both Azure and Google Cloud bill by the minute
(albeit with a minimum of 10 minutes). For data analytics where you want fast
turnaround, this ends up being a big deal: a 30-minute job is literally half
the cost of "round up to an hour". It also shows up in autoscaling of
services, since a standard business day of traffic has only a fairly short
actual peak, which is ultimately short-term load.
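The 30-minute example works out like this (rates are illustrative; the billing
rules are as described above, as of this writing):

```python
import math

def hourly_billing(minutes, hourly_rate):
    """Round usage up to whole hours, as AWS did at the time."""
    return math.ceil(minutes / 60) * hourly_rate

def per_minute_billing(minutes, hourly_rate, minimum_minutes=10):
    """Bill by the minute, with a 10-minute minimum."""
    return max(minutes, minimum_minutes) * hourly_rate / 60

rate = 1.0  # illustrative $/hour
print(hourly_billing(30, rate))      # 1.0: a 30-minute job pays for an hour
print(per_minute_billing(30, rate))  # 0.5: exactly half
```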

Disclosure: I work on Google Cloud.

------
floatboth
Would be nice to see $/GB at the same time as the absolute $ price.

