
Introducing E2, new cost-optimized general purpose VMs for Google Compute Engine - raybb
https://cloud.google.com/blog/products/compute/understanding-dynamic-resource-management-in-e2-vms
======
mythz
Trying to find out how much cheaper these new E2 instances are. These are the
monthly prices taken from their Compute Pricing page [1]:

    
    
        Machine type  vCPUs Memory Price   Preemptible price
        n1-standard-2 2     7.5GB  $48.55  $14.60
        e2-standard-2 2     8GB    $48.92  $14.67
    

So how does E2 offer "31% savings compared to N1"?

Whatever it is, there seems to be some kind of disconnect as it's not obvious
from their pricing pages and they should provide better transparency on
exactly where these savings are.

[1] [https://cloud.google.com/compute/all-pricing](https://cloud.google.com/compute/all-pricing)

~~~
boulos
Disclosure: I work on Google Cloud.

That should have been made more clear, sorry about that. On a per-second basis
_without_ a sustained use discount (so for burstier workloads, autoscaling,
etc.), E2 is ~31% cheaper (hence the "up to" in the marketing).

Clarifying your example: n1-std-2 is currently 9.5c/hour (which would be
$69.35/month without sustained-use discounting), while e2-std-2 is 6.7c/hour
regardless of how many hours per month you use it, which is about 30% cheaper.
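A quick sketch of that math, using the hourly rates quoted above (730 is the average number of hours in a month):

```python
# Hourly rates from the comment above.
HOURS_PER_MONTH = 730

n1_std_2_hourly = 0.095  # $/hr, no sustained-use discount applied
e2_std_2_hourly = 0.067  # $/hr, flat regardless of hours used

print(n1_std_2_hourly * HOURS_PER_MONTH)  # ~69.35 $/month
print(e2_std_2_hourly * HOURS_PER_MONTH)  # ~48.91 $/month

savings = 1 - e2_std_2_hourly / n1_std_2_hourly
print(round(savings, 3))  # ~0.295, i.e. roughly 30%
```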

~~~
mythz
I'm not a fan of the manual inefficiencies of AWS's Reserved Pricing, but at
least their pricing is clear.

I see all these different prices being floated around, but I'm still not clear
on how much GCP compute would cost for the most popular scenario of running a
_website 24/7_.

It looks like AWS's t3.large may be the most comparable with 2x vCPU/8GB RAM,
which costs $426 for 12 months ($35.50/month).

Or for m5.large (2x vCPU/8GB RAM) the cost is $501 for 12 months
($41.75/month).

What would the n1-std-2 and e2-std-2 compute cost for 12 months be?

~~~
boulos
Sorry for the lack of clarity (I've tilted at this windmill, and failed).

E2-std-2 != t3.large. A t3.large only has a "baseline performance" of 30% [1].
That's more like our e2-small though they have more memory.

Instead, I'd compare e2-std-2 to m5.large, as you started to do. An m5.large
_on-demand_ is $.096/hr => $70/month, while the e2-std-2 is $48/month. I think
your $41.75/month is from the 1-yr Standard RI (a 40% discount). For that, the
most direct comparison would be to use a Committed Use Discount on our side,
which comes with a similar percentage discount (I can't find this on my phone
right now), so that's like $29/month.

Does that make sense?

[1] [https://aws.amazon.com/ec2/instance-types/t3/](https://aws.amazon.com/ec2/instance-types/t3/)

~~~
mythz
As stated, I'm only trying to get a comparison of the cost for the very
popular use-case of running a "website 24/7 for 12 months". I use this simple
core metric as a baseline for comparing hosting costs amongst different
hosting providers.

This is trivial to work out in AWS: I just go to their Reserved Pricing page
[1] and look for the total cost for an m5.large (2x vCPU/8GB RAM) instance for
12 months, which is $501 (12 x $41.75).

I'd like to be able to do the same for GCP. I see the committed use page [2],
but I don't see any easy way to work out the cost for 12 months at 24/7. It
mentions things like "discount is up to 57% for most resources like machine
types or GPUs", but what does "up to" mean? Is that the discount for running
24/7? So is the e2-standard-2 monthly price $48.92 * (1 - 0.57) = $21.04?

All I see back on GCP's compute pricing page related to "committed use" is a
"1 year commitment price" of "$10.03 / vCPU month", but this says it's for "E2
custom vCPUs and memory". Does this apply to e2-standard-2 instances? If so,
is the cost for 2x vCPU = 2 x $10.03 = $20.06, plus 8GB = 8 x $1.34 = $10.72,
for a total monthly cost of $30.78?

If it's not, how am I supposed to work out what e2-standard-2 costs for 12
months at 24/7? It's frustrating that there's no clear/easy way to determine
the pricing of a simple and popular hosting scenario like this.
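For what it's worth, the arithmetic above checks out; whether those listed rates actually apply to e2-standard-2 is exactly the open question:

```python
# Assumed committed-use rates from the pricing page quoted above:
# $10.03 per vCPU-month and $1.34 per GB-month (listed for "E2 custom").
vcpu_rate, gb_rate = 10.03, 1.34
vcpus, memory_gb = 2, 8  # e2-standard-2 shape

monthly = vcpus * vcpu_rate + memory_gb * gb_rate
print(round(monthly, 2))       # 30.78 $/month
print(round(monthly * 12, 2))  # 369.36 for 12 months, vs $501 for the m5.large RI
```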

[1] [https://aws.amazon.com/ec2/pricing/reserved-instances/pricing/](https://aws.amazon.com/ec2/pricing/reserved-instances/pricing/)

[2] [https://cloud.google.com/compute/docs/instances/signing-up-committed-use-discounts](https://cloud.google.com/compute/docs/instances/signing-up-committed-use-discounts)

~~~
013a
> This is trivial to work out in AWS, I just go to their Reserved Pricing Page
> [1], look for the total cost for an m5.large (2x VCPU/8GB RAM) instance for
> 12 months which is $501 (12 x 41.75).

Well, nowadays, you don't buy reserved instances; you buy compute savings
plans. And be sure to compare the strengths and limitations of an EC2 Savings
Plan versus a general Compute Savings Plan; they have differing
characteristics concerning instance type convertibility, regional transfers,
and applicable products.

And with Compute Savings Plans, it's not like an RI where you say "I'm paying
for one instance upfront, give me 30% off". You instead commit to a level of
spend, in dollars per hour. Then they convert that spend into fungible credits
that have differing exchange rates depending on the instance type, region, and
even compute product. Then, through the magic of the AWS billing system, you
save money.

Very rarely do you come out the other end of AWS compute consumption with a
good understanding of the exact trace of "which dollar was spent on which
compute product?" With products like Fargate, it's even worse. Don't get me
started on Fargate and its billing characteristics.

I'm nitpicking here. But only because: Nothing is ever as simple as it seems.
GCP is just different; I wouldn't classify it as more or less complex.

~~~
PetahNZ
Aren't savings plans brand new? And can't you still just buy typical reserved
instances?

~~~
013a
Yes, and Yes. Though, as far as I know, there is no financial reason to buy
RIs at this point. Check out this comparison table [1]; you get the same
savings, but _far_ more flexibility.

AWS very rarely removes features. People may still buy RIs because they have
corporate or technical processes in place where they make sense. But, from a
pure financial standpoint, RIs are inferior to Savings Plans.

[1] [https://docs.aws.amazon.com/savingsplans/latest/userguide/what-is-savings-plans.html](https://docs.aws.amazon.com/savingsplans/latest/userguide/what-is-savings-plans.html)

------
devhwrng
The technical details about the E2 instance class are really interesting:

[https://cloud.google.com/blog/products/compute/understanding-dynamic-resource-management-in-e2-vms](https://cloud.google.com/blog/products/compute/understanding-dynamic-resource-management-in-e2-vms)

Rather than guaranteed cores and RAM as with N1/N2, resources on the
underlying host can be dynamically balanced through live migration, which GCP
has already been using for years. Cool solution, and it should save money for
most workloads.

~~~
bullen
I wonder when we will get instances that can scale dynamically at runtime!

That would be so cool, just adding cores if the load goes up!

You would have to make sure your code has enough threads ready to fill those
cores though! (if you use non-blocking async. stuff)

Or is this what they mean it already has?

Edit: thinking more about this, it must be really hard and require kernel
fixes?

I mean, how would Linux behave when you add/remove cores and RAM, for example?

~~~
synack
CPU hotplug has been supported for a long time. I once managed some Sun boxes
that allowed replacing/upgrading CPUs without shutting down... They don't
build em like that anymore.

~~~
boulos
Disclosure: I work on Google Cloud.

Yes, _but_ most workloads are fairly unprepared for this sadly. And they're
really not ready for _memory_ unplug. (I also miss the days of my multi socket
boxes and plugging in CPUs and memory).

~~~
derefr
> And they're really not ready for memory unplug.

What do VM-guest memory-balloon drivers do right now when the host suddenly
attempts to reserve more memory than the guest has free? I'd presume the
kernel would just consider itself to be in an OOM condition and start killing
processes to free up memory until it can return OK to the balloon driver,
no?

Because, from what I understand, that's closer to the scenario we're talking
about here: you're not abruptly yanking DIMMs (like physical memory hotplug);
rather, you (the hypervisor) are gracefully letting the guest know that some
memory is about to go away, and since you (the hypervisor) have your own
virtual TLB, you can let the guest OS decide _which_ "physical" memory (from
its perspective) is going away, before it happens.

~~~
boulos
Yep! I was just responding to the explicit "how come you don't do hotplug" :).

------
frew
It looks like the play here is to get a bunch of small, committed workloads
that GCE can move around where they've got spare capacity. On-demand pricing
is very similar to the existing n1 type, but 1yr committed discounts are 30%+
cheaper.

More details from when I was working through it:
[https://twitter.com/fredwulff/status/1204861220165017600](https://twitter.com/fredwulff/status/1204861220165017600)

~~~
boulos
Disclosure: I work on Google Cloud.

Your analysis is close, but the on-demand (per second) pricing is also a lot
less expensive. You should think of it as:

- Less than a full month w/o commitment => E2 up to ~30% cheaper
(particularly for, say, 273 minutes per month or something).

- Full month w/o commitment => Roughly identical.

- Full month with a 1-year or 3-year commitment => E2 ~30% cheaper.
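A numeric check of the first two cases, using hourly rates quoted earlier in the thread and assuming N1's full-month sustained-use discount nets out to ~30% (my assumption, not stated in this comment):

```python
HOURS = 730  # average hours per month
n1_hourly, e2_hourly = 0.095, 0.067  # n1-std-2 and e2-std-2, from this thread

# Partial month, no discounts: E2 is ~30% cheaper per hour.
print(round(1 - e2_hourly / n1_hourly, 3))  # ~0.295

# Full month without commitment: N1's sustained-use discount (assumed ~30%)
# brings the two roughly level.
print(n1_hourly * HOURS * 0.70)  # ~48.55 $/month for N1
print(e2_hourly * HOURS)         # ~48.91 $/month for E2
```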

~~~
amq
I wish it were simply a flat 30% cheaper. It is very misleading that 0.99% of
a month will be 30% cheaper than a full month, considering that Google Cloud
advertises sustained usage discounts everywhere.

~~~
brianwawok
Sustained usage has tiers, so 0.99% of a month would only be like 1% cheaper.
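To make the tier effect concrete, here's a sketch assuming the incremental N1 sustained-use schedule (each quarter of the month's usage billed at 100%/80%/60%/40% of the base rate; the exact schedule is my assumption, not from this comment):

```python
# Assumed sustained-use tiers: (fraction of month, billed fraction of base rate).
TIERS = [(0.25, 1.00), (0.25, 0.80), (0.25, 0.60), (0.25, 0.40)]

def effective_rate(usage_fraction):
    """Average billed fraction of the base rate for a given share of the month."""
    billed, remaining = 0.0, usage_fraction
    for width, rate in TIERS:
        portion = min(width, remaining)
        billed += portion * rate
        remaining -= portion
        if remaining <= 0:
            break
    return billed / usage_fraction

print(1 - effective_rate(0.0099))  # ~0.0: almost no discount for 0.99% of a month
print(1 - effective_rate(1.0))     # ~0.30: the full discount only at 100% usage
```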

------
dang
Since commenters are saying that the technical post is more interesting, we
switched to that from [https://cloud.google.com/blog/products/compute/google-compute-engine-gets-new-e2-vm-machine-types](https://cloud.google.com/blog/products/compute/google-compute-engine-gets-new-e2-vm-machine-types),
which is the announcement post. Maybe we'll keep the original title so it's
clear it's a new thing.

------
Someone1234
Just because the page didn't actually include pricing...

E2-Standard:

- 2x vCPU, 8GB: $48.92/month.

- 4x vCPU, 16GB: $97.83/month.

- 16x vCPU, 64GB: $391.35/month.

E2-HighMem:

- 2x vCPU, 16GB: $65.99/month.

- 8x vCPU, 64GB: $263.97/month.

- 16x vCPU, 128GB: $527.94/month.

E2-HighCPU:

- 2x vCPU, 2GB: $36.11/month.

- 8x vCPU, 8GB: $144.45/month.

- 16x vCPU, 16GB: $288.90/month.

I didn't list Preemptible Pricing because it is very workload specific/niche.

------
seriesf
The accompanying technical blog is more interesting than the announcement. It
implies they may have ported or adapted Borg’s antagonistic workload
scheduling features to cloud. Huge if true, as they say.

------
dmix
> Performance-aware live migration

Does this mean your VM gets dynamically migrated to a new server as
performance issues start happening?

Sorry if it’s a silly question, I’ve never dug into how VPS stuff works in
practice.

------
Copenjin
They say that the performance is similar to N1 and the price is lower, but
they are not talking about preemptible price, which is more or less the same
for both types.

------
fortytw2
Both EPYC and Intel chips behind these?

So is there potential for much different performance between the same sized
instance based on chance?

~~~
rbanffy
I would assume the better the core performance, the more vCPUs it can offer,
vCPU being an arbitrary performance reference.

~~~
fortytw2
Ah, I've been assuming that a vCPU == a CPU hyperthread since forever, hadn't
realized that changed.

~~~
boulos
Disclosure: I work on Google Cloud.

Your mental model is correct: vCPU means hyperthread (except for shared core
things like the f1-micro, g1-small, etc.).

We had a different measure of "relative performance" called GCEU (GCE Units)
but stopped publishing that, as it's pretty meaningless for most people. We do
our platform qualifications at Google to ensure that users who "don't care"
which CPU platform they're on get improving performance/$ and so on. But for
GCE, we clearly document the platforms and base/all-core/single-core
frequencies we use [1].

tl;dr: if you want to choose your processor, stick with N2/C2 and our upcoming
AMD machine types. If you're okay with us deciding for you and want a big
discount, give E2 a spin!

[1] [https://cloud.google.com/compute/docs/cpu-platforms](https://cloud.google.com/compute/docs/cpu-platforms)

~~~
rbanffy
Thanks for the correction. When did you stop using GCEUs?

------
londons_explore
I'm imagining the average workload within GCP VMs to be 95 percent idle time.
From oversized VMs, to machines sized for peak loads, to machines where the
developer has just used a standard machine size for a 3-seconds-per-week cron
job, to machines that are forgotten about and idle, to machines spun up as a
hot spare, to machines that are part of build infra and idle between builds
and every weekend. There's a lot of idleness.

If machines really are idle 95 percent of the time, why is the price only
discounted 30%?

~~~
qeternity
> If machines really are idle 95 percent of the time, why is the price only
> discounted 30%?

Because that's apparently what Google believe the market will bear.

------
paule89
Can somebody help me figure this out? I want a VPS with 2 cores and 2-4GB of
RAM in Europe. How much would it cost per month? Also, how much does storage
cost? And if I were to, say, put a Minecraft server there, how would it be
able to dynamically ramp the machine up and down if needed? Only via the
interface, or after trying to connect to a specific port? I am not the typical
target audience for these kinds of server deals, but I want my small cheap
server for myself.

~~~
rat9988
You won't find a better deal than hetzner. I have my production server on
their cheapest vps :)

Ovh is quite good too.

------
londons_explore
I see these machine types have a virtio balloon memory driver so the host can
reclaim memory from the VM.

What's in it for me financially to allow that? Why should I give up memory
I've paid for unless I get a discount/refund? That memory is useful even as
caching of disk pages, so giving it up makes my application slower for no
financial benefit.

~~~
remus
That's why these instances are 30% cheaper.

~~~
londons_explore
Yeah, but I could just unload that balloon driver and get to keep all my ram
all the time while still paying 30% less?

------
amq
Strangely, the E2 type seems to be available when checked with "gcloud beta
compute machine-types list", but not with "gcloud beta compute machine-types
list --zones". Launching also doesn't work.

------
benbro
If anyone from GCP is watching, are there updates about the following?

- Global load balancer for UDP.

- GCS signed URLs for a prefix instead of only per object.

- Better latency between Europe and India.

~~~
boulos
Disclosure: I work on Google Cloud.

I don't wanna distract from the E2 launch, but we definitely have gotten the
message on all of those and they're at various stages of in-flight / complete.
As an example, Policy Documents should let you do prefix-based matching for
GCS: [https://cloud.google.com/storage/docs/xml-api/post-object#policydocument](https://cloud.google.com/storage/docs/xml-api/post-object#policydocument)

~~~
benbro
It seems that Policy Documents only work for uploads but I'm talking about
downloads.

------
rubyn00bie
More opaque pricing... awesome.

As someone who just set up services on Google Cloud, I could not be more
disappointed and outraged at their billing and performance. It's outrageously
high for even small services (and I'm comparing it to Heroku of all places),
and the documentation is even worse. Yes, there are examples, but the docs are
outdated and make it almost impossible to relate what you're paying to what
you're doing until you get the bill.

The $300 credit promo they offer is a joke. It's not $300 in the sense that
you'll get to try it out; it's that you're likely to rack up at least $300 in
bullshit charges before you're even aware...

------
maxdo
Google Cloud pricing and prediction is a complete mess. I'm moving all our
instances away from gcloud just for this reason. Their billing prediction can
jump up/down 1000% in a few days with no usage change, and they don't dare to
even say sorry for that. But yeah, they will invite Gwen Stefani and drive you
to Alcatraz, spending tons of money instead of hiring engineers who can
calculate the basics of billing. Their CEO will tell a fairy tale about the
best AI for billing... and after one year it doesn't work at all. I just don't
feel OK with this customer approach and these priorities.

------
ik8s
I wonder if they will make these available for GKE node pools at some point;
sounds like being able to autoscale with these would make sense.

~~~
amq
They should be available to GKE as soon as they are available for the usual VM
instances. If you can't launch a node pool, I bet you also can't launch a VM.

------
cerberusss
But what do they cost? Isn't there a monthly price tag, so I can compare with
other VPS providers?

~~~
bullen
old - f1-micro $0.0076 - .6GB 1 core @ .2 fraction

new - e2-micro $0.0083 - 1GB 2 cores @ .125 fraction per core?

Pretty cool if your code can use multiple cores efficiently. Especially if
each virtual core is guaranteed a separate physical core, this is really good:
if one core gets a congestion peak, maybe the other won't.

For less than a buck more per month you get better parallelism and 0.4GB more
RAM!

Sadly still no pre-purchased committed usage discount for the shared CPU
instances!

----

For those with big budgets:

old - n1-standard-1 $0.0475 - 3.75GB 1 core

new - e2-standard-2 $0.06701 - 8GB 2 cores (0.00001 really?)

I also wish we had some computation power comparison metric so that we could
stop looking at apples and bananas without committing.

For monthly you multiply by 730 I think.
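The multiply-by-730 rule applied to the hourly rates in this comment (all figures are the ones quoted above; 730 is roughly the average hours in a month):

```python
# Hourly rates from this comment, converted to approximate monthly prices.
prices_per_hour = {
    "f1-micro": 0.0076,
    "e2-micro": 0.0083,
    "n1-standard-1": 0.0475,
    "e2-standard-2": 0.06701,
}
for name, hourly in prices_per_hour.items():
    print(f"{name}: ${hourly * 730:.2f}/month")
```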

~~~
bullen
Replying to myself again: so you will NOT get two physical cores; both
hyperthreads will be on the same physical core, so there's no real benefit to
these smaller shared instances giving you "2 cores"?

See my comments for feedback from AWS engineer. I'm guessing since both AWS
and GCE run the same hypervisor now that GCE will have the same "feature"?

------
nicoburns
What servers are they using with 200 physical threads?

~~~
seriesf
Not sure but a 9200-series Xeon with two sockets would be 224 threads in a
box.

~~~
Dylan16807
4 socket boards (or 8) make a lot more sense than those abominations, though.
Two entirely separate CPUs in one package for a vastly increased price...

------
mrwnmonm
does it have auto vertical scaling?

~~~
ZeroCool2u
> Flexibility: You can tailor your E2 instance with up to 16 vCPUs and 128 GB
> of memory. At the same time, you only pay for the resources that you need
> with 15 new predefined configurations or the ability to use custom machine
> types.

I didn't think so, but this sentence almost seems to imply that you pay for
the performance ceiling when you're using it, but not when your application is
idle. Would be nice to have this clarified if that's not what this means.

------
TysonGersh
So cool!

------
TysonGersh
Go Tyler!!!!

------
wyldfire
Still no ARM VMs? AWS is (still) out ahead of GCE.

~~~
raybb
I'm ignorant here. Why does it matter to you if your code is running on ARM or
x86?

~~~
bryanlarsen
One advantage of Amazon's ARM is that it doesn't do hyperthreading, so when
you rent a "thread" you actually get a full core rather than half a core.

~~~
user5994461
Which ironically would probably prevent adoption of ARM. People care about and
compare core counts.

Get a single-core ARM instance for almost the price of a dual-core x86
instance. Not a great marketing pitch.

~~~
wmf
No, it's the opposite. An ARM core is much cheaper than an x86 core so you can
get more cores for the money. And an ARM core is cheaper than an x86 thread
while providing more consistent performance.

------
blaisio
It seems like this cements Google Cloud's lead on the hardware/infrastructure
side. But the real problem with Google Cloud, the lack of software feature
parity with AWS, is not addressed. If only there were a provider with AWS
services and reliability and Google Cloud infrastructure.

~~~
9nGQluzmnq3M
What AWS products do you find to be missing on GCP?

~~~
amq
MySQL 8, for starters.

