Hacker News
EC2 Price Reduction (C4, M4, and T2 Instances) (amazon.com)
66 points by jmgtan on Nov 15, 2016 | 55 comments



Software for bare metal is catching up - you can buy a couple of dedicated servers, install something like Kubernetes, and you'll be shaking your head in disbelief at how much you overpaid for AWS.


AWS is not a virtual machine company. Virtual machines are just one of its offerings. At AWS you must architect your software to use other AWS services as much as possible, and spin up a minimal number of virtual machines, only when you really need them. Only that way do you really pay for what you use, and in most cases you'll find the final cost is lower than any other infrastructure.

This applies to most companies at small and medium scale. If you are at huge scale it might not apply to you.


Until you need to add another server, or 10 or 100... Not to mention it's another set of skills you need to have. It's a tradeoff. (I'm not talking about "we may need 100 servers next year because we'll have all this traction by then" -- I'm talking about "our load is growing at 1.5x every month or next month we need X capacity")

I just finished booting up two new clusters with 5 and 15 nodes, respectively, and cycled them a couple of times after making changes to the AMI. The clusters are in ASGs and will scale based on resource usage. I can't do that with bare metal.
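For context, the target tracking that drives this kind of ASG scaling is conceptually simple arithmetic: grow or shrink the fleet in proportion to how far the observed metric is from the target. This is an illustrative sketch of that logic, not AWS's actual implementation:

```python
import math

def desired_capacity(current_nodes, observed_cpu, target_cpu, min_size, max_size):
    """Roughly how target tracking sizes a fleet: scale the node count
    proportionally to observed/target utilization, clamped to the ASG's
    min and max bounds."""
    desired = math.ceil(current_nodes * observed_cpu / target_cpu)
    return max(min_size, min(max_size, desired))

# 10 nodes running at 80% CPU against a 50% target -> grow to 16
print(desired_capacity(10, 80.0, 50.0, min_size=5, max_size=20))
```

The clamping is what keeps a noisy metric from scaling a cluster to zero or to infinity.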


This argument always comes out when someone points out how much bare metal stomps AWS. Here are some counterpoints:

a) A large portion of AWS hosted stuff probably doesn't need that level of sudden, burst scaling

b) With something like SoftLayer/IBM you can scale physical servers, usually within 30 - 60 minutes

c) If your burst-scaling requirements are temporary and you're located in a decent DC, you can probably spin up some infra in AWS, access your physical stuff over a private network connection, and get the best of both worlds.

As always, use what's best for your environment.


a) I don't really have a good enough sample size but I'd imagine a lot don't.

The biggest selling point of AWS is everything around it. You don't just get EC2, you get Route53, ELB, VPC, RDS, S3, CloudFront (although it's kinda expensive), ECS, etc... If I can pay AWS to do something instead of building it, I'll do it.


Most startups hope that they'll suddenly need to increase capacity by 100x, but it nearly never happens. Most vendors can provide dedicated servers within a few minutes (if you don't order too many at once), so scaling is still possible in the vast majority of cases.

Even if you always have to scale up for 1-2 hours per day, using dedicated hardware that's idle the rest of the day is probably cheaper in most cases.
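Back-of-envelope math bears this out. With hypothetical prices (a dedicated box at a flat monthly rate vs. an on-demand hourly rate; neither number is a real quote), the break-even point comes well before the end of the month:

```python
def break_even_hours(dedicated_monthly, on_demand_hourly):
    """Hours of use per month at which a flat-rate dedicated server
    becomes cheaper than paying by the hour. Prices are hypothetical."""
    return dedicated_monthly / on_demand_hourly

# e.g. $60/month dedicated vs. $0.24/hour on demand:
# past 250 hours (~8 hours a day), the idle dedicated box wins.
print(break_even_hours(60.0, 0.24))
```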


Oh, for sure. A lot of startups don't need that ability. I work for a pretty infra heavy startup so AWS is simply required at this point. But we've hit AWS capacity limits during the worst times (one of our clusters processing 20k events/sec hit 100% utilization) and they literally had no capacity left for that instance type. It's not a perfect thing all the time.

But in the end, the pros significantly outweigh the cons. Our resource consumption is naturally extremely elastic. While we'll always need to slightly over-provision to maintain some headroom, adding/removing nodes throughout the variance saves quite a bit of $.


There are other benefits also:

1. You can get started dirt cheap or, in some cases, free

2. There's a common API for requesting new instances and performing maintenance tasks

3. There are extra services available to help build your apps, such as SES, S3, and RDS, to name a few I found very helpful.

I'm not saying anything in this thread is wrong. But in software engineering, we say "write the code that only you can write", which is a suggestion (but not a rule) to use pre-built libraries instead of trying to make your own. Perhaps we should also say, "run the instances that only you can run".


>2. There's a common API for requesting new instances and performing maintenance tasks

Only true if you commit to vendor lock-in. If you use a higher-level cloud-agnostic library, it likely works with OpenStack as well, so you can manage on-prem and off-prem instances the same way.


At a high enough scale, you have a lock-in _somewhere_. Spending time trying to abstract yourself from any lock-in can be wasteful.


You can also temporarily rent VPSes that are still cheaper than AWS and add them to the cluster whilst waiting for dedicated hardware.


Unfortunately, mixing and matching ends up really complicating things especially with security in mind. Many people run within a VPC and bridging to another private network is, well, I don't really want to think about it at this time.


We've found OpenVPN to be our friend here: create an overlay network that doesn't really care if nodes are bare metal or "cloud".


I thought about that too, but as far as I can see, with OpenVPN you have the single OpenVPN server as a single point of failure, and all the traffic goes through it, which quickly becomes a chokepoint. If I needed this again, I'd try out tinc first. It does not appear to have the single-point-of-failure issue.
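For the curious: tinc's mesh setup looks roughly like this. Each node lists peers it can dial directly, and tinc routes around unreachable ones, so there's no central hub. Node names, addresses, and subnets below are made up; check the tinc manual before relying on this:

```
# /etc/tinc/mesh/tinc.conf on each node
Name = node_a
ConnectTo = node_b
ConnectTo = node_c

# /etc/tinc/mesh/hosts/node_b (host files are exchanged between all nodes)
Address = 203.0.113.10
Subnet = 10.99.0.2/32
# ...followed by the node's public key
```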


We have multiple standby servers to prevent the SPOF issue.

One problem we HAVE seen is a reduction in maximum bandwidth. Since we're CPU limited, however, it hasn't really been an issue.


That's the thing: it is much easier nowadays. Kubernetes requires your containers to run on a flat, shared network, so your new machine just joins that network. It is like running within a VPC. Software like Rancher makes adding a new server a matter of executing a one-liner on it.


"Sure, you could just buy a Toyota Corolla and get to and from work without much hassle. However, I commute in a Lamborghini Gallardo in case I need to get from 0-60 mph in 2.8 seconds to snag a narrow spot on the expressway from an onramp. I can't do that with a Toyota Corolla."

I can't wait until the day that we mature as an industry enough to consider running any kind of baseline workload on EC2 negligent.


I'm talking about _my_ use case for using AWS. I'm sure other people have similar requirements. We manage hundreds of servers, process over 50 billion events/month and losing data is unacceptable.

In the HN echo chamber, you might think everyone just has a SPA and just needs a Digital Ocean droplet. Everyone has different requirements and AWS fits those for many people.


>you can buy couple of dedicated servers

You can rent them too, like here: https://www.hetzner.de/us/hosting/ (not affiliated; would like to know of other decent hosts too). I think this is the best of both worlds (you don't need to deal with HW, and you get bare metal perf at reasonable prices).


You cannot really compare cloud providers to classic bare metal providers like Hetzner.

FWIW we're in the middle of going 100% AWS after having used a dual setup with Hetzner for base load and AWS for dynamic load. Using spot instances for base load and autoscaling groups for dynamic load proved to be competitive in machine cost with Hetzner.

Plus there's a lot less management overhead with AWS on our scale (50+ machines). And the security of VPC and IAM cannot be replicated on Hetzner. Plus as always, it's the integration of all AWS services that's hard to beat. And we really need to be multi-region anyways (5 regions and counting).


OVH is another decent host at similar prices. They have a larger range of servers and more datacenters, but are slightly more expensive. They also rent out older servers at Kimsufi (similar to Hetzner's server bidding).


Online.net also offers great value for money


'Friends don't let friends use OVH.' Just google OVH and how horrible they are; every few months they have some gigantic failure. Last I heard, they were taking people's money and not delivering servers for MONTHS.


I have no clue about the reliability of OVH, but I would definitely not say AWS is short on failures. At a scale of ~2000 instances, it's really not uncommon to encounter a bunch of AWS instance failures every week. Multiple times per year we see a massive failure that affects a large number of servers. The saving grace with AWS is that usually the problems are limited to a single availability zone so as long as you're redundant across multiple availability zones you can survive.


'Friends don't confuse friends with anecdotal evidence'. My anecdotal evidence is that I am a satisfied customer. Great prices, good customer service.


That would be anecdotal if it hadn't happened to me twice. I won't ever use them again, nor recommend them to anyone, unless they're masochists by nature ;)


Same as all replies. 3 years using OVH and not a single regret. Don't confuse dedicated OVH servers with their sister brands Kimsufi and SoYouStart, which have weaker uptime guarantees. Also, OVH VMs suck, in my experience. But their dedicated server line is completely reliable. I use T3 servers and will start testing T4 servers (everything redundant) in the next few months.


Counterpoint: I've been delighted with the last couple of years at OVH, as have several people I know who use them for dedicated servers.

I also host a few servers at Hetzner and have nothing but praise for them. I've found network performance to be way better at OVH; I think it may be a your-mileage-may-vary thing, though.


Just one data point: I have been using OVH for 4 months with no problems yet. I use a large memory, multiple core VPS for development, not production.


~3 year OVH user here, multiple dedicated servers. No problems whatsoever, and DDOS protection that actually saved us a few times.


I actually meant renting - thanks for pointing that out.


And shake your head in disbelief further when you find the new setup performs much better.


Docker Cloud allows you to deploy to AWS/DigitalOcean. It is hosting-agnostic: if DigitalOcean is cheaper, you can simply switch away from Amazon.

This will be a deal breaker in the future.

Amazon is being disrupted with the same tech it used to disrupt the other players.


So is Kubernetes


'til your RAID array fails...


Has anyone switched to Google cloud & found it cheaper overall? It seems like there was a lot of interest in Google before but not so much as of late.


Yeah. Love it. It's way cheaper than AWS because of automatic discounting if you use it for a full month. Performance, boot-up time, and the dashboard UI/UX are better. (Regarding performance and boot-up time, my experience is a bit outdated, so things might have changed.)

Downside: some documentation (Stackdriver, for example) can be really confusing. Also, opening up port 80 took more than a firewall rule; you had to apply the http label. (Haven't tested this recently.)
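The "automatic discounting" here is GCE's sustained use discount. As Google documented it at the time, each successive quarter of the month is billed at a lower rate (100%/80%/60%/40% of list price), so a full month nets out to 70% of list. A quick sketch of that math:

```python
# 2016-era GCE sustained use tiers: (fraction of the month, billing rate)
TIERS = [(0.25, 1.00), (0.25, 0.80), (0.25, 0.60), (0.25, 0.40)]

def effective_rate(usage_fraction):
    """Average multiplier on the list price for running an instance
    for `usage_fraction` of the month."""
    paid, remaining = 0.0, usage_fraction
    for width, rate in TIERS:
        used = min(remaining, width)
        paid += used * rate
        remaining -= used
    return paid / usage_fraction

print(effective_rate(1.0))   # full month -> 0.70 (30% off)
print(effective_rate(0.5))   # half month -> 0.90 (10% off)
```

The discount applies automatically, per instance, with no reservation to buy up front, which is most of why people call the pricing model "superior."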


I'm very happy with them and believe it is cheaper, although it's hard to compare considering the explosion of services on both platforms. I initially chose them because I/O performance on GCE was ahead by about a factor of 10 (may have been workload-specific, may have changed – this was about a year ago, may only apply to the smaller machine types with SSD I tested).

I've also had a much easier time getting started, but my AWS experience may be out of date now. But both the web UI as well as the cli client are excellent.

I also prefer google because of their excellent contributions to OSS, their advocacy for an open internet, their lack of sweatshop-warehouses, and their investments in hard problems. (and I know altruism may not be the motive, but still...)


  their lack of sweatshop-warehouses
You are now hitting below the belt.


I admit writing that – if I were at Amazon – I'd spend the rest of the week making sure every future package to "that unfair guy on the internet" gets thrown against the wall an extra four times :)

And, more seriously, I didn't want to imply that Amazon deserves any hate – they're probably a net positive for the world. And Google is no saint. Just that, on balance, I'm still more inclined to be loyal to the latter.


To be fair, I'm indifferent which side you take.

[I was perhaps amused to see it mentioned in the first place.]


Yup. No more reserved instance nightmare and you can play with the custom instances a lot to optimize pricing (like low CPU/high mem etc).

Overall it's much easier to predict than AWS. There are some aspects where documentation could be better. And obviously no managed PostgreSQL.


Actually, I don't think Google Cloud is cheaper; their pricing model is just superior.


Google is 10 to 60% cheaper last I checked ;)

They're not even playing in the same league.


After an in depth analysis:

Google instances are 50% cheaper on average; your mileage may vary => http://imgur.com/g6Tz7K7


Honestly, the new prices start from December 1, and there's only a 5% price drop for US datacenters. The 20-25% drops are mostly for Singapore and Frankfurt, where the original price was higher by that same 20-25%.


There are also these guys, Packet: https://www.packet.net/ They have a lot of what you need from a "cloud": bare metal servers and very competitive prices (even compared to Hetzner).


Well, that almost makes up for our price increases over the last few months thanks to the worse GBP/USD exchange rates...


Now is the time to check your reservations and account again for how much was saved or lost.


Arggg, does not apply to west coast instances (us-west-1 and us-west-2).


Asking as an ignorant foreigner: Does anyone know what makes us-west more expensive than us-east? Is it just land or electricity prices or something like that?


It's near rich SV customers, so maybe customer value due to latency?


Our Amazon rep tells us it's all about the cost of electricity.


11c/kWh in the Bay Area, 6c/kWh in Oregon and Nevada. Old data here but salient: http://www.npr.org/sections/money/2011/10/27/141766341/the-p...
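That rate gap compounds across a fleet. Assuming a hypothetical server drawing a steady 300 W (the wattage is made up for illustration), the electricity delta alone is about $11/month per machine:

```python
def monthly_power_cost(watts, cents_per_kwh, hours=720):
    """Electricity cost in dollars for a machine drawing a constant
    load over a 720-hour month. The 300 W figure below is a made-up
    example, not a measured server draw."""
    kwh = watts / 1000 * hours
    return kwh * cents_per_kwh / 100

bay_area = monthly_power_cost(300, 11)   # ~$23.76/month
oregon   = monthly_power_cost(300, 6)    # ~$12.96/month
print(round(bay_area - oregon, 2))       # ~$10.80/month per server
```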


I think it does -- "Price cuts apply across all AWS Regions."

The regions listed were just examples. Check back Dec 1.



