EC2's M3 Instances Go Global; Reduced EC2 Charges and Lower Bandwidth Prices (aws.typepad.com)
98 points by jeffbarr on Feb 1, 2013 | 34 comments



I wonder how many people actually use on-demand instances on a regular basis. I purchase reserved instances in the marketplace and never touch on-demand pricing, so the past two price "reductions" have not affected me; the reserved pricing remains the same. For someone who does not want to sign a one- or three-year lease, I find spot pricing more attractive than on-demand: in exchange for the small chance that your instance gets terminated, you get the same performance at about 1/10 the on-demand price. If you need reliability or a short-term lease, I would use the Amazon marketplace and find other people selling their reservations in the 1-11 month range (availability not guaranteed).
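
As a rough illustration of how you'd sanity-check that spot discount yourself, here's a minimal sketch using the boto3 library (which postdates this thread); the instance type, region, and on-demand rate are placeholder values, not figures from the comment above:

    # Sketch: compare recent spot prices against a known on-demand rate.
    # Assumes boto3 and configured AWS credentials; m3.xlarge, us-east-1
    # and the on-demand price are example values only.
    import boto3

    ON_DEMAND_PRICE = 0.450  # hypothetical $/hr for the example instance type

    ec2 = boto3.client("ec2", region_name="us-east-1")
    history = ec2.describe_spot_price_history(
        InstanceTypes=["m3.xlarge"],
        ProductDescriptions=["Linux/UNIX"],
        MaxResults=20,
    )
    for entry in history["SpotPriceHistory"]:
        spot = float(entry["SpotPrice"])
        print(f"{entry['AvailabilityZone']}: ${spot:.4f}/hr "
              f"({spot / ON_DEMAND_PRICE:.0%} of on-demand)")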


Agreed. I've moved my projects to reserved instances.


There is a reason Amazon is one of the most "overpriced" companies out there according to the stock market: every decision is made for the user. And I just LOVE Amazon!


This is a pet peeve of mine, and I wonder how many HNers share this feeling. Why doesn't Amazon provide hard spending limits? Back in 2006 [1] this feature was supposed to be "in the works", but clearly it's not.

Just as with the low security of many domain registrars, I guess there are not as many horror stories as my paranoid mind would lead me to believe. Any thoughts?

PS: Yes, I understand the dangers of enabling these limits, which ideally should be accompanied by previous alerts, etc.

[1] https://forums.aws.amazon.com/thread.jspa?threadID=10532


Technically, using the billing alarm, you can use the APIs to shut down the respective services (or all of them).
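
A minimal sketch of that idea, assuming boto3 (which postdates this thread), a placeholder SNS topic, and an arbitrary $500 threshold; it is one way to wire it up, not an official AWS recipe:

    # Sketch: alarm on estimated charges, then stop all running instances.
    # The topic ARN, account number, and threshold are placeholders.
    import boto3

    # Billing metrics are only published in us-east-1.
    cw = boto3.client("cloudwatch", region_name="us-east-1")
    cw.put_metric_alarm(
        AlarmName="monthly-spend-cap",
        Namespace="AWS/Billing",
        MetricName="EstimatedCharges",
        Dimensions=[{"Name": "Currency", "Value": "USD"}],
        Statistic="Maximum",
        Period=21600,                 # the billing metric updates slowly
        EvaluationPeriods=1,
        Threshold=500.0,              # example cap in USD
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
    )

    def stop_everything(region="us-east-1"):
        """Called by whatever consumes the alarm notification."""
        ec2 = boto3.client("ec2", region_name=region)
        running = ec2.describe_instances(
            Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
        )
        ids = [i["InstanceId"]
               for r in running["Reservations"] for i in r["Instances"]]
        if ids:
            ec2.stop_instances(InstanceIds=ids)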


This is one of those ideas that makes all the sense in the world, right up until someone turns on the limits and "NewHotness.com" goes down on Black Friday because of the hard-limits feature. Then it's a big news story about how AWS should never offer a feature like this, because "you never know how much you might scale in a flood of users" and "that is the whole point of cloud computing!"

Amazon is just saving themselves the heartache of all the irony :)


I imagine that must be their thinking. It's clearly not a lack of technical resources. Moreover, Amazon has already shown [1] good will when negligence led to an unexpectedly high bill.

I'm surprised (though, empirically, without much reason) not to hear stories from NewHotness et al. about "how AWS burned five times our monthly budget in one day of frenzy". Even if those new visitors may very well be welcome for the business, I imagine there are many folks who would like some control (say, to double the limit), even at the risk of leaving the site offline for a few minutes.

[1] http://www.behind-the-enemy-lines.com/2012/04/google-attack-...


Would a spending limit require terminating the user's EC2 instances, deleting their data in S3 and DynamoDB, etc.? If not, these things could continue to accrue cost.

Not sure if it was mentioned in this thread, but AWS launched a billing alerts feature: http://aws.amazon.com/about-aws/whats-new/2012/05/10/announc...


Amazon's not amazing at announcing useful timelines for their products/services. There are presumably a bunch of internal factors at play, but spending limits aren't the only thing that's been "in the works" for ages. If you're holding out for something that may or may not be coming (e.g. DevPay in your region, hard spending limits, long-term queues, scheduling, the ability to CloudFront S3 'subfolders', etc.), it's frustrating; on the other hand, they do tend to release well-thought-out services that make sense.

If it's any consolation, I have heard of individuals getting huge bills and having them cleared by Amazon when it became apparent that they hadn't willingly racked up that many fees (like the guy who included his S3 images in a google spreadsheet causing several TB of requests - http://www.behind-the-enemy-lines.com/2012/04/google-attack-... ).


Why not take the approach of monitoring it instead and triggering alarms based on specific traffic? So if you see too much of protocol foo vs. HTTP, trigger an alert. This is one of the things we use Boundary for.
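
Boundary's own API isn't shown here, but the same idea can be sketched with plain CloudWatch: alarm on an instance's traffic metric. The instance ID, threshold, and SNS topic below are placeholders:

    # Sketch: alert when an instance's outbound traffic spikes.
    import boto3

    cw = boto3.client("cloudwatch", region_name="us-east-1")
    cw.put_metric_alarm(
        AlarmName="unexpected-egress",
        Namespace="AWS/EC2",
        MetricName="NetworkOut",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        Statistic="Sum",
        Period=300,
        EvaluationPeriods=1,
        Threshold=5 * 1024 ** 3,      # ~5 GB out in 5 minutes
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:traffic-alerts"],
    )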


Do these price drops ever trickle down to, say, users of Heroku? I've not used Heroku for very long, but this is the second time since I started that AWS has dropped their prices, yet I've heard nary a tale from Heroku.


Heroku is a big enough user to negotiate their own prices with Amazon, so I'd guess not.


Just to expand on this: if you spend $2-5 million on reserved instances, you get a standard 20% discount [1]. If you spend more than $5 million, Amazon just say "Contact Us". Heroku certainly qualify for the "Contact Us" discount: at an average of $1,000 per reserved instance, they'd only need 5,000 servers to hit that, and they're hosting well over 1 million sites.

Heroku provide a service that significantly simplifies the deploy experience, though you pay a premium for that privilege. If you're willing to take the complexity hit of deploying to EC2 instead of using Heroku due to price, you're not really Heroku's target market. I'd expect a price decrease eventually, though likely in reaction to direct competitors rather than the drop in cost of the EC2 infrastructure itself.

[1]: http://aws.amazon.com/ec2/pricing/#reserved-volume-discounts


Heroku only has 5 cents an hour to work with.... their back is kind of against the wall unless they use a whole other billing method.

I would bet the dynos themselves have improved though.


It would be really useful if you included the M1 and M3 on the pricing page. It's annoying the way information and pricing are kept so separate.



The massive drop in inter-region pricing is going to be really nice for folks running apps with failover to other regions. We're working on cross-region replication for our new infrastructure, so this comes at a perfect time for us...


This drop in inter-region data transfer prices is huge, and frankly, long overdue. Distributing across regions might actually be viable for companies that don't have money coming out of their ears.


It's a nice incentive to get people to fail over across regions so when US East goes down again your application has less of a chance of embarrassing them. :)


I do a lot of number crunching on the CPU. I don't need security or reliability.

Is EC2 a good choice if you just want to do that? Or what would you guys use for that?


Amazon offers several instance types targeted at high-CPU usage. They typically come with recent CPUs, sometimes GPUs as well. They are decent platforms for CPU intensive stuff.

I recall some benchmarks showing Google's cloud offering (Compute Engine) had better CPU performance than EC2. There are other providers (e.g. Digital Ocean) that might be worth a look -- I expect their hardware is on average much newer than Amazon's.

It really depends on how much your time is worth vs the cost of the jobs you want to run. EC2 is the IBM (or Microsoft, depending on your age) of cloud offerings -- you won't be fired for choosing it, but you might be able to shave some time and $s off by going elsewhere.


In my experience CPU of cloud servers is generally... bad.

If you do regular CPU work (24/7), you may be better off getting dedicated servers (with raw CPU grunt) at a far lower cost per month.


dedicated hardware? come on. that might have been an option in the 90s. i want to be able to use as many cpus as i want, when i need them, not have a bunch of machines rotting away idle.


I would actually recommend a hybrid approach. If you don't need too much memory, there is a CPU-heavy Amazon instance type that you can scale up and down, with a VPN between it and a dedicated co-lo machine that handles the base load. If you need memory and CPU, it's going to be costly to use EC2 unless you pay upfront for dedicated instances, and in that case the economics are probably not as favorable as getting some co-lo servers and letting them rot away. In short, you will need to do a cost-benefit analysis to see what makes sense.

For our service we are going to use a co-lo server for our processing with an Elastic Beanstalk frontend and use a VPC + OpenVPN setup to bridge the two. We will incur some bandwidth charges because of this, but the cross talk between the boxes is actually minimal, since the client will post directly to the co-lo box(es) when it needs to upload, etc.


I worked on the LHC's CMS data team about 2-3 years ago. We had thousands of machines crunching data. Comparing the cost to EC2 pricing, we laughed at the deal we were getting with disposable hardware. Even buying in bulk from Amazon, the dedicated hardware was cheaper.

Amazon is a huge premium over physical hardware. You use them when a) you want to scale immediately, b) you have financial reasons for not buying your own gear for long-term use (1-3 years), and c) you don't mind paying the premium.


The bigger instances (like high CPU) tend to be virtualized less aggressively, so they have good performance.

It's really cheap to try and see if it's good enough for you.


so you ask a question saying you do "a lot of number crunching", then complain that dedicated hardware will do what you need if you actually process a lot of data?

ass.


If you don't have fixed demand, you may want to consider spot instances. The upside is that they can be much cheaper, but the downside is that they can disappear at any second. If you design your app around high availability, it may work for you.

This HNer runs core services on spot instances and claims at least a 70% savings:

http://news.ycombinator.com/item?id=4662442
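
For concreteness, a bare-bones spot request looks roughly like this (boto3 sketch; the AMI, instance type, and maximum bid are placeholders, and surviving interruptions is left to your application design):

    # Sketch: one-time spot request with a price ceiling.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    resp = ec2.request_spot_instances(
        SpotPrice="0.05",             # max you're willing to pay per hour
        InstanceCount=1,
        Type="one-time",
        LaunchSpecification={
            "ImageId": "ami-12345678",
            "InstanceType": "m3.xlarge",
        },
    )
    print("spot request:",
          resp["SpotInstanceRequests"][0]["SpotInstanceRequestId"])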


The value of EC2 depends a lot on your use case.

I frequently want quick and easy access to a bunch of instances, but I don't expect to use any single instance for very long. For example, I might partition my data into 30 pieces and want to operate on each partition simultaneously for five minutes. Since Amazon charges a 1 hr minimum for each instance you set up, I'd pay for 30 hours.

I'm experimenting with a switch to PiCloud. PiCloud charges twice as much per hour, but they only charge for the time I use (150 minutes in this example).
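
Spelling out that arithmetic (the hourly rate is invented; only the "twice the price, billed per minute" relationship comes from the comparison above):

    # Sketch of the cost comparison: 30 five-minute jobs.
    EC2_RATE = 0.10          # hypothetical $/hr per instance
    PARTITIONS = 30
    MINUTES_EACH = 5

    # EC2: one-hour minimum per instance, so 30 instance-hours.
    ec2_cost = PARTITIONS * 1 * EC2_RATE

    # PiCloud-style: twice the hourly rate, billed only for minutes used.
    per_minute_cost = (PARTITIONS * MINUTES_EACH / 60) * (2 * EC2_RATE)

    print(f"EC2: ${ec2_cost:.2f}  vs  per-minute billing: ${per_minute_cost:.2f}")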

PiCloud has the additional advantage of being vastly easier to use than ec2.

There are use cases for ec2 (and elastic map-reduce)...


If you're going to use them a lot, you're better off with consumer-grade computers built around the most expensive Intel processors available. What I usually do is take the Passmark CPU list, add $500 for the cost of the rest of the system, and sort by score/price.
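
A minimal sketch of that ranking (the CPU names, scores, and prices below are made up; in practice you'd paste in the real Passmark list):

    # Sketch: rank CPUs by Passmark score per total-system dollar.
    SYSTEM_OVERHEAD = 500  # rough cost of the rest of the box

    cpus = [
        # (name, passmark score, cpu price) -- placeholder data
        ("cpu-a", 16000, 550),
        ("cpu-b", 12000, 320),
        ("cpu-c", 9000, 200),
    ]

    ranked = sorted(cpus,
                    key=lambda c: c[1] / (c[2] + SYSTEM_OVERHEAD),
                    reverse=True)
    for name, score, price in ranked:
        print(f"{name}: {score / (price + SYSTEM_OVERHEAD):.1f} points per $")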


It depends on your utilisation vs. idle time. EC2 is more expensive than most other virtual server providers, but they let you start up or shut down many servers within minutes, and bill hourly instead of monthly.

If your load requirements are mostly fixed and spread out evenly, alternatives including buying your own hardware may be cheaper.

If you do a lot of number crunching in short (hours or days) bursts, EC2 can be significantly cheaper, as you are not paying for the idle time. Of course, if the numerical computation is scalable across computers, it can be made bursty - instead of running one average server for a month, you could use 30 servers for a day and get the results faster for the same total cost.


The easy way: go for the high-CPU instances (don't bother with the 'regular' ones; other providers offer cheaper and better machines).

The slightly harder way: go for technologies that help you spread the load, so you spawn several instances that run simultaneously, run for some time, and then stop, costing you maybe what one instance/month would cost but giving you the results faster.

The even harder way: go for spot instances and orchestrate the sharding and reassembly of results.


For your req's I'd go with dedicated servers on something like ServerBeach.


Very excited for the lower S3 -> CloudFront prices. That change alone is going to save my site probably 25-35% on our CDN bandwidth charges.



