
Comparing Bandwidth Costs of Amazon, Google and Microsoft Cloud Computing - xref
https://www.arador.com/ridiculous-bandwidth-costs-amazon-google-microsoft/
======
Lazare
Meh.

The big cloud platforms offer a rich selection of different offerings, which
(just like in every other industry) cross-subsidize each other.

When I go to a restaurant, I don't expect that they will be making the same
profit margin on every item on the final bill, and in fact, they almost never
do. Drinks tend to have a very high profit margin, some labour intensive items
may be a break even at best, and the complimentary bread sticks or chips and
salsa (if offered) will certainly be a loss.

I guess I could write a very upset article about how my local Mexican
restaurant is SERIOUSLY SCREWING ME OVER with their drink prices, but if I
don't write the companion piece about their cheap burritos (subsidized, of
course, by the drink prices), it would only show half the picture.

The reality is that I'm buying a whole package (at AWS _or_ a restaurant) and
I should evaluate the whole picture. Yes, I can get bandwidth cheaper outside
AWS (or a can of coke a lot cheaper from a big box retailer). But I _can't_
really get the total package of integrated, managed services outside AWS
(certainly not for the cost they charge), any more than I can get someone else
to show up in my kitchen and cook a three-course meal and then do all the
dishes. (Which is to say, I totally could hire a chef to do that, but it would
cost me a lot more. I could BUILD an internal SQS clone if I had to, but my
employer would never break even on the cost of getting me to do so.)

AWS is very cheap for some things and very expensive for others. Depending on
your usage and workload it may or may not be economical to buy the package
they offer. If it is, go for it. If not, don't. Just like, you know, every
other good or service you purchase in both your personal and professional
life.

~~~
kyledrake
There's a site on Neocities we host for free that would cost $560/mo to host
on Amazon's CDN.

It costs me less than $5 with my current configuration (which is also a global
CDN). It's way over our "soft limit", but it's an awesome site so I don't
care. The important part is, I don't have to care.

This isn't about slightly more expensive tacos. It's about spending $560 on
tacos instead of $5. "Meh" wouldn't exactly be my first reaction to getting
that bill at the taco cart (or the fanciest taco place in the world, for that
matter).

I can get IP transit in datacenters from $240-$600/Gb right now. So even his
$960 transit cost for datacenters is off by quite a bit. He's comparing with a
pretty high price and it still looks ridiculous.

~~~
paulddraper
It's the drinks that are expensive, not the tacos.

And soda has a 1,150% markup [1], so it actually is like that.

[1] [http://www.businessinsider.com/products-high-markups-2014-7](http://www.businessinsider.com/products-high-markups-2014-7)

P.S. You could host it on Github for even less, if you were _really_ price
conscious.

~~~
kyledrake
Alright, so $560 for a soda instead of a $5 soda.

> P.S. You could host it on Github for even less.

I'm not going to attempt to save an inconsequential amount of money by moving
a site I'm already hosting to another host.

But that strategy probably wouldn't end well. The $560 site uses 25x more
bandwidth than Github's fairly low soft limit of 100GB
([https://help.github.com/articles/what-is-github-pages/](https://help.github.com/articles/what-is-github-pages/)) and is likely
above the 1GB site limit (which is also just the sum of any changes ever,
because git) of a Github pages site.

From seeing who their CDN provider is (one that charges basically the same CDN
rates as AWS), my guess is that Github is paying a lot more for CDN transit
than I am. It's perhaps not a coincidence that the 100GB BW limit was quietly
introduced after they started using the CDN.

~~~
boundlessdreamz
Yeah, Fastly (GitHub's CDN) is ridiculously expensive, and they charge for
requests just like CloudFront, which can become very expensive. They are the
only major CDN provider other than the cloud services that charges for requests.

We went from cloudfront -> edgecast -> keycdn and our bill dropped from $3000
-> $500 -> $120

Edgecast, when you buy it through a reseller, is quite cheap, but we moved
because we needed custom domain SSL, which is quite expensive on Edgecast.

------
mikiem
As a provider of IaaS cloud, dedicated servers, and colo, I hear this
argument all the time. No one ever seems to include the network engineers,
monitoring systems, the routers (better have more than one!), the switches
(distribution and access layers), the maintenance, software licenses (where
applicable), customer support, the cost of IP addresses, accounts payable, ARIN
membership, RADB membership, cross-connects, optics, spares and/or support
contracts, etc. And finally, you do not use 1Mbps at 100% for 24 hours a
day, so while 1Mbps for a month is ~320GB, in reality, the way most people
transfer data, 320GB would look more like 3Mbps at 95th percentile (the way
burstable bandwidth is billed).
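The flat-rate and 95th-percentile arithmetic above can be sketched in a few lines (a rough illustration; the bursty traffic pattern below is made up, not real billing data):

```python
# Illustration of burstable (95th percentile) billing math.
# A flat 1 Mbps for a 30-day month moves ~324 GB; bursty traffic
# moving far less total data can still bill at a much higher rate.

SECONDS_PER_MONTH = 30 * 24 * 3600  # ~2.59 million seconds

def mbps_to_gb_per_month(mbps):
    """GB transferred if a link runs at `mbps` 100% of the time."""
    return mbps * SECONDS_PER_MONTH / 8 / 1000  # Mbit -> MByte -> GB

def percentile_95(samples_mbps):
    """Billable rate: sort the 5-minute samples and discard the top 5%."""
    ordered = sorted(samples_mbps)
    idx = len(ordered) * 95 // 100 - 1
    return ordered[max(idx, 0)]

# Hypothetical bursty month: idle 90% of the time, bursting to 6 Mbps.
samples = [0.0] * 90 + [6.0] * 10   # average is only 0.6 Mbps
print(round(mbps_to_gb_per_month(1)))   # -> 324 (GB at a flat 1 Mbps)
print(percentile_95(samples))           # -> 6.0 (billed as 6 Mbps)
```

So a month averaging only 0.6 Mbps (roughly 194 GB of transfer) still bills at 6 Mbps under 95th-percentile billing, which is the effect described above.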

A basic 1Gbps commit on a 10Gbps port in a data center might cost you from
$0.50/Mbps (something like Cogent) to maybe $1.50/Mbps (let's say Level 3),
other providers could be $4+/Mbps. By the time you factor in all of the above
overhead costs, the true cost of the bandwidth is much much higher on a per
Mbps basis.

Don't forget to significantly over-build your stuff, or you might get knocked
off-line for anomalies or DoS attacks.

Admittedly, the scale of Google, AWS, and Azure makes the cost per Mbps much,
much lower, but, as others have pointed out, AWS, Google, and Azure don't need
to charge less than they do.

~~~
mmaunder
This is bull. I've used colo and been paying 95th percentile billing for a
decade. We've run our own hardware. Much of your list didn't even come into
it. E.g. you don't need your own router, and you get awesome DDoS mitigation
from upstream providers like NTT super cheap. It's been a major competitive
advantage to get bandwidth this way.

Cost of IPs? Memberships? Wtf are you talking about? Cross-connects? This
stuff is all either free, not needed, or cheap with a one-time fee.

~~~
king_phil
It really depends on your scale. If you just run half a rack of equipment, a lot
of these costs are factored in by your hosting company. But you come to a
certain point where you need to invest in a RIPE membership, where you need a
router that costs 100k€. Plus a spare one. Where you need someone sitting in
the DC watching the monitoring 24h a day, just in case. Even if a lot of this
stuff is free, it needs to be set up and managed all the time.

Let alone the cost of DDoS mitigation. That's easily 300k€ worth of equipment,
plus, let's say, 2x40 GBit/s links at around 40k€ each per link and month.
Over the course of two years that's easily 1 million euros, just to handle DDoS.
Even if you only need 10 GBit/s of capacity, you might need 10x that capacity in
a DDoS situation.

~~~
ethbro
Here's the reason no one thinks about those things: no one makes their numbers
public.

Or at least I've never stumbled across a blog post with anything like a line
item cost range for everything that goes into DC or cloud networking.

And in the absence of transparency, yeah, people are going to assume they're
getting screwed.

(I understand why this is. The networking free market seems to do a decent job
at fulfilling needs, but it turns all those things into secret sauce that
shouldn't be shared.)

------
Terretta
Router? Gateway? Firewall? Network Access Control? VLANs? Ability to manage
all this through declarative version controlled code w/ rollback?

The costs of doing those (well) yourself are not cheap.

Getting them from a provider that's certified to do them well while giving you
software control also isn't cheap.

You're comparing the cost of gas per gallon to the expense of miles driven.
Pretty sure on your IRS or corporate expense report those aren't the same.

~~~
paulddraper
Exactly. Olive Garden charges $10 for a pasta plate. I could have bought the
ingredients for less than $2.

It takes time and skill to turn the ingredients into something useful.

Amazon, Google, and Microsoft are IaaS/PaaS, not ISPs.

~~~
mod
Want to disrupt the restaurant industry?

Sell food at cost!

/s

~~~
dbenhur
Restaurants operate on pretty thin margins. You're confusing the cost of raw
materials with the cost of prepared and served meals.

------
shiftpgdn
I have posted a few times about how absurdly expensive all the cloud providers
are. If you have a baseline load you should be co-locating bare metal. Any
excess capacity you need should spill over into your AWS/GCE/Azure account.

For example: A dedicated m4.16xlarge EC2 instance in AWS is $3987/month. You
could build that same server for $15,000 through Dell, lease it at $400/month
(OpEx), and colo it with a 1Gbps blended bandwidth connection billed at the
95th percentile for $150/month.
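Taking the figures above at face value (they are the commenter's estimates, not verified prices), the monthly comparison is simple arithmetic:

```python
# Monthly cost comparison using the figures quoted in the comment above.
# All dollar amounts are the commenter's estimates, not verified prices.

ec2_dedicated = 3987   # dedicated m4.16xlarge, $/month
colo_lease = 400       # leased Dell server, $/month (OpEx)
colo_bandwidth = 150   # 1Gbps blended, 95th percentile, $/month

colo_total = colo_lease + colo_bandwidth   # $550/month
savings = ec2_dedicated - colo_total       # $3437/month
print(f"colo ${colo_total}/mo vs EC2 ${ec2_dedicated}/mo "
      f"({savings / ec2_dedicated:.0%} cheaper)")
```

On these numbers the colo option comes out roughly 86% cheaper per month, before counting the $15,000 up-front hardware cost (covered here by the lease) or any admin labor.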

~~~
didip
1. Will Dell servers magically appear in your racks, all mounted correctly?
Shipping delays are a very serious and real problem.

2. Will Dell servers arrive with zero problems in their components? Any time I
ordered a meaningful number of servers, I usually got about a 5% failure rate.
VMs aren't perfect, but you can destroy and recreate them in a different
region almost in real time.

3. Ever had to deal with a difficult-to-work-with network admin? The cloud is
significantly less of a pain in the ass.

~~~
MBCook
#3 is why we went to Amazon, and it's one of the reasons we were successful.
Sad as that was. No weeks/months of delays/hassles/fights/sabotage. Just
success, as fast as we could do our part and figure out the few small hurdles.

~~~
_jal
...That sounds more like an HR problem than a technical problem.

There are network engineers in the world who don't live in caves and gnaw at
the bones of administrative assistants.

~~~
Spooky23
There are, but they are hard to find.

Networking is a great place for assholes to build empires and exert control.
In 20 years of professional engagement with distributed and data center
networks, most of these guys (and they are almost always guys) running network
orgs are an impediment and spend more time helping out their vendor of choice
than anything else.

I've run into awesome network guys in positions of power only a few times, and
they are 100x employees. The last major rollout of a service that I did was
literally 3 months ahead of schedule solely because of the efforts of an
awesome network dude who was recently put in charge.

~~~
MBCook
We saw some good people, but they generally left because they didn't want to
work in that environment.

When they were eventually replaced with someone helpful, it was amazing what
we started to get done. We got lucky there: we found the replacement, and he
stayed long enough to be there when needed.

------
thebestman
This analysis is way too oversimplified. It completely ignores the shape of
the traffic (real apps have peaks and valleys of usage - they don't pump
exactly 100 Mbps every second of every day). Cloud providers charge the same
amount regardless of how bursty your app is, and they have to provision
capacity so that all customers get good performance even under unusual spikes
(the more spiky your traffic, the better a deal per-MB pricing is for you).
And of course it ignores all the ancillary networking HW and SW that supports
these services, and all the labor you save by not having to manage that stuff
yourself.

I've analyzed the cost of cloud services to death (I've worked for a couple of
them) and the only way they aren't great deals is if you don't need high
quality operations (i.e. if you can deal with slow-downs or occasional outages
then you can do better elsewhere). Otherwise, if you're small-scale then these
marginal cost differences don't matter, and if you're larger scale then call
up these cloud providers and get yourself a discount off the list price.

------
desdiv
(Bandwidth is ambiguous in this context so I'll use "data transfers" instead)

I personally don't see the outrage. AmaGoogSoft overcharges for data transfers
because they know they can get away with it and that lowering it won't attract
more customers.

Customers with transfer-heavy applications will always buy their servers from
providers with unlimited transfers like OVH[0][1], where you can do hundreds
of terabytes a month with no extra charges (1.5 Gbps * 3600 * 24 * 30 / 8 ≈
486 TB). Even if AmaGoogSoft lowered their transfer prices 100-fold, their
pricing still couldn't compete with OVH.

Companies with enough engineering resources can always go with the best of
both worlds: transfer-heavy servers on OVH, and "regular" servers on
AmaGoogSoft. The expensive data transfers will only hit smaller outfits, but
these customers won't switch because it's not worth the hassle to split your
hosting across two providers.

[0] [https://www.ovh.com/us/private-cloud/options/bandwidth.xml](https://www.ovh.com/us/private-cloud/options/bandwidth.xml)

[1] [https://www.ovh.co.uk/web-hosting/unlimited_traffic](https://www.ovh.co.uk/web-hosting/unlimited_traffic)

~~~
FTA
> I personally don't see the outrage. AmaGoogSoft overcharges for data
> transfers because they know they can get away with it and that lowering it
> won't attract more customers.

I know a handful of scientists, myself included, who would consider cloud
computing were it not for the expensive egress costs. The ease of spinning up
lots of computing power for scientific modelling is useless if retrieving the
vast quantities of raw output data is costly. I will have to investigate OVH
though as a potential opportunity--thanks.

~~~
lowbloodsugar
May I ask why you want to retrieve it? Wouldn't you just leave it there, and
download only the conclusions?

~~~
FTA
I can think of a few off the cuff.

You're likely not running a model just for yourself. You have collaborators at
other institutions that need data to compare or combine with something else.

Your grant may require you to distribute the output for use by other researchers.
That could either entail being hosted by you or by an agency or other entity.
But you still have to get the data to them.

A reviewer for your publications may request the data.

That brings up another point: the review process can take upwards of a year.

Oftentimes you have to go through an exploratory data analysis, where you
don't even know what your final analysis will entail.

------
slackingoff2017
How is this a surprise to anyone? The big players are all pushing their
clouds because it's a cash bonanza. It's the SaaS model for hardware: make
money forever because your customers never own anything.

I've done the math many times and it's orders of magnitude cheaper to colocate
as long as you can afford an IT guy and the upfront cost of hardware.

~~~
mostly_harmless
There are other hidden costs as well.

What happens when a hard drive fails in your colo? Unless you have
best-practice backups, you will lose customer data and trust.

GCloud abstracts the issue of hardware so you can focus on real business
development. And to some, that's worth the cost until they can properly afford
all the hidden costs of independent hosting.

~~~
slackingoff2017
These are solved problems though. RAID works great around 99% of the time, and
for the rest you can use off-site backup. It's extremely rare that something
needs true 100% uptime. Salesforce and Reddit still go down for maintenance
windows in 2017. For one project I set up log streaming with Postgres to the
cloud (since inbound BW is free :) ) and ran a hot backup there just in case.

Really though in my experience the cloud is far less reliable than colocation.
I've had AWS VMs "degrade" or randomly die countless times but I can't
remember the last time my Colo boxes went down, probably not since I last
upgraded the OS.

------
benwilber0
Worth noting that Digital Ocean doesn't _actually_ bill for bandwidth. They
say they do in their Droplet template descriptions, but they really don't.
I've pushed many many terabytes to/from my Droplets and never received a bill
for it. But you need to cap your individual Droplet bandwidth at around
~400MB/s using something like tc[0], or they'll shut off the network interface
(DDoS detection).

[0] [http://tldp.org/HOWTO/Traffic-Control-HOWTO/intro.html](http://tldp.org/HOWTO/Traffic-Control-HOWTO/intro.html)

~~~
tribby
Indeed. I'm surprised how buried this comment is, because it's a _really_ good
deal (until they change things). Anecdotally, DO sent me an email when one of
my droplets was sustaining around 200MB/s, but it wasn't a "knock it off"
email, it was an "is this intentional" email. Is the ~400MB/s figure from your
own experience?

------
QUFB
The bandwidth is the soda:
[https://news.ycombinator.com/item?id=12270129](https://news.ycombinator.com/item?id=12270129)

------
cobookman
I don't think it's fair to compare GCP's egress costs to a colo's. A colo
simply sends your packets straight to the internet, whereas GCP routes your
packets over private fiber to the POP closest to your user, giving you better
latency.

~~~
slackingoff2017
This is only true of Google as far as I know. And I'm fairly sure that all
cloud providers only have maybe 10 edge routers in the US, so the utility is
limited. You can't have too many edge routers, because they rely on IP anycast,
which is dangerous to do if your routes are too similar.

------
elevensies
Back of the envelope calculation:

- assume $100/TB for cloud data transfer

- assume one employee full-time equivalent to manage colo'd servers
($10,000/mo), plus $30/TB data transfer

The break-even point for the colo'd setup from a networking perspective is:

    
    
        10,000 + 30X = 100X
        X = 10,000/70 ≈ 143 (TB/month)
    

At 1MB per "request", this works out to about 55 requests per second on
average to reach this traffic level.

Weaknesses of this model:

- Data transfer only. Depending on what else you're doing, you could also save
a lot on compute and storage.

- I don't know that much about how colocated data transfer would be priced,
i.e. whether you need to overprovision to guarantee availability, etc.

- One employee to handle servers to replicate the Amazon AWS experience
could be highly variable depending on what AWS features you are using.
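The back-of-the-envelope numbers above can be checked in code (same assumptions: $100/TB cloud, $30/TB colo transit, $10,000/mo for one FTE; the 1MB-per-request figure is the commenter's):

```python
# Break-even traffic for colo vs. cloud under the assumptions above.
cloud_per_tb = 100   # $/TB cloud egress (assumed)
colo_per_tb = 30     # $/TB colo transit (assumed)
labor = 10_000       # $/month, one full-time equivalent (assumed)

# labor + colo_per_tb * X = cloud_per_tb * X  =>  X = labor / (100 - 30)
break_even_tb = labor / (cloud_per_tb - colo_per_tb)   # ~142.9 TB/month

# At 1 MB per "request", the average request rate needed to hit break-even:
seconds_per_month = 30 * 24 * 3600
requests_per_sec = break_even_tb * 1_000_000 / seconds_per_month

print(round(break_even_tb), round(requests_per_sec))   # -> 143 55
```

So roughly 55 sustained requests per second at 1MB each, consistent with the estimate above.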

~~~
lumost
This calculation is an all-or-nothing affair, though. The reality is that you
can avoid most AWS bandwidth charges by direct connecting to an internet
exchange. It's still only worth the effort if you're looking at a 6 figure
bandwidth bill, but it doesn't require new employees.

~~~
elevensies
I guess what I'd like to know is: at what point is it worth seriously
evaluating alternatives to just using cloud services and paying the sticker
price? That means evaluating the labor cost as well as the cost of the
service, whereas this submission and many others I've seen quantify the
cost of the services and say that bare metal is so much cheaper while ignoring
the cost of managing it.

~~~
lumost
The general break even point is circa 200k USD per month, assuming the company
has spare capital, and the ability to commit to using the same gear for 3
years.

It's useful to keep in mind that in this type of scenario the goal is not to
replace a cloud provider but to leverage capital to get better bang for the
buck on select services, e.g. using in-house compute farms and leveraging S3
for storage.

------
mrkurt
When you buy in Mbps, you're actually billed based on 95th percentile usage.
So this comparison is way off: depending on traffic patterns, 1Mbps committed
can work out to about 120GB in a month on average. If you use reasonable GB
per Mbps numbers, the cloud providers don't look all that bad.
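A quick way to see the gap between committed Mbps and monthly GB is to compute the implied utilization (the 120GB figure is the comment's own estimate):

```python
# A 1 Mbps commit running flat-out for a 30-day month moves ~324 GB, so
# ~120 GB/month on a 1 Mbps commit implies fairly bursty traffic.
SECONDS_PER_MONTH = 30 * 24 * 3600

def flat_gb_per_month(mbps):
    """GB moved if the link runs at `mbps` continuously."""
    return mbps * SECONDS_PER_MONTH / 8 / 1000  # Mbit -> MByte -> GB

utilization = 120 / flat_gb_per_month(1)   # fraction of committed rate used
print(f"{utilization:.0%}")   # -> 37%
```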

------
dddchk
I cannot agree more with the title. I did a comparative model of AWS bills and
colo bills in the context of companies of different sizes
([https://blog.paxautoma.com/buy-or-rent-the-cost-model-hosting-the-cloud-and-in-a-colo-2/](https://blog.paxautoma.com/buy-or-rent-the-cost-model-hosting-the-cloud-and-in-a-colo-2/)).
It turned out that frequently overlooked costs for bandwidth and provisioned
IOPS can be responsible for a large chunk of the EC2 bill.

------
deafcalculus
I suspect the high bandwidth price is a targeting tool, primarily for
repelling those wanting to host seed boxes, porn sites, and the like. You can
probably get a much better deal if you're paying > $10k/mo.

Even then, cloud bandwidth is insanely expensive. For example, Hetzner offers
$1.30/TB (if you happen to exceed their generous 30 TB quota). In comparison,
Amazon is 70x more expensive at $90/TB.

~~~
mad182
Hetzner is great. I'm running a site with ~45TB monthly traffic on 3 physical
servers and it costs me just over $150. I would have to pay many thousands to
any cloud hosting provider and it wouldn't pay off in my case.

Sure, if high availability is critical for your business and money is not an
issue, it may be worth it. YMMV.

------
llukas
Pitchfork mode on: Outgoing bandwidth should be even more expensive! Then
maybe, just maybe my mobile data cap wouldn't be drained that quickly by
bloated webpages and stuff.

;)

------
mmalone
Keep in mind that with cloud providers you're also paying for the SDN that
makes dynamic provisioning of VMs and logical network segmentation possible.
Scalable SDN is much harder / more expensive than traditional networking.

~~~
wmf
Yet they charge zero for the SDN and crazy high prices for Internet bandwidth.

~~~
mmalone
True, and it's still probably overpriced... but a lot of the SDN expense is in
the stateful routing between networks (e.g., traffic going to/from the
internet).

------
havetocharge
Screwing their customers? What kind of entitled attitude is this? This is a
highly competitive market and the customers are voting with their own dollars.
Don't like it? Don't eat it.

------
kev009
FWIW, cost for a small biz at most major metro US facilities is closer to
$1/Mbit for multi-carrier (which means generally multi-route and high
quality with some caveats), and $0.20 if you do something like he.net. For
higher volume customers you can easily cut both of those rates in half right
now. And you can also participate on public peering switches for generally
just a low setup fee at the best facilities.

AWS, GCE, and Azure seem like the platforms of yesteryear in all dimensions
when compared to something like packet.net. I think these providers could be
between a rock and a hard place due to the unsuitability of native Linux
containers for secure multi-tenancy. This does leave a nice runway for Joyent,
as both a provider and a software vendor, for at least a little bit, but I
think packet.net is really going to change the economics of infra.

------
nodesocket
While public bandwidth is indeed significantly more expensive on clouds, not
all traffic needs to be public and different clouds charge different amounts.

I wrote a blog post on Google Cloud latency and pricing across zones and
regions which may be useful for others:

[https://blog.elasticbyte.net/comparing-bandwidth-prices-and-network-latency-between-google-compute-zones-and-regions/](https://blog.elasticbyte.net/comparing-bandwidth-prices-and-network-latency-between-google-compute-zones-and-regions/)

------
geetfun
Suffice it to say that the cloud providers have a different set of customers
in mind. I have servers on OVH/Linode-style service providers as well as one
single app running on AWS. For the products I run on OVH/Linode, I sell the
service at less than $20/month. The one on AWS sells for $200+ per month.
Again, it's because of the requirements/SLAs. Based on experience, AWS is a
lot more robust for what I'm using it for.

------
radimm
As others have pointed out, this view is extremely short sighted.

To add to the mix: what if you need multiple data-center
deployment/replication? Both Amazon and Google will give you greatly
discounted traffic there ($0.02 / $0.01). And that's only the start. You can
easily migrate from one data center to another, with little or no cost
attached (try that in a colo).

------
clhodapp
I've always figured that the point of this was to let overall costs generally
scale with the "size" of the customer while simultaneously creating a
sticker-shock effect on migrating out of whatever cloud. For example, looking
at Google, the cost to transfer a terabyte out of their Cloud Storage product
is six times the cost of just keeping it there for a month. Of course some of
this collapses if you really look into it (e.g. you are going to pay that
egress anyway if people are actually accessing the data), but I'm not sure
that is always clear to execs doing back-of-envelope math. I think that to
some degree this desire for lock-in is explicitly visible in the asymmetric
ingress/egress pricing, but if I'm right it's a little bit underhanded,
because it would mean that slightly deflated prices elsewhere (e.g. on
instances) are subsidized by lock-in.

------
sbov
This subject is never productive on HN because almost every reply argues with
a certain use case in mind but people never actually outline that use case.
Those who read your post and reply do so with their own use case in mind and
obviously what that other person suggested is madness (in your scenario that
you never actually verbalize). Ultimately no one learns anything because they
all think everyone is in exactly the same scenario as they are. Or maybe they
think their personal choice is a "silver bullet". Probably depends on the
poster.

Then it all repeats next week.

------
kernelsanderz
I would imagine contention ratios would also factor into pricing. At least in
Australia you might have an 8:1 ratio of actual bandwidth available for a
consumer plan, and 3:1 for a business plan. I'd imagine that you pretty much
get all the bandwidth you pay for in a data centre. I've certainly saturated
100mbps connections on servers in the past, 24x7. But perhaps people more
knowledgeable than me could comment on this?

------
abalone
Fundamental methodological error: It compares _provisioned_ capacity (colo &
Google Fiber) to _utilization_.

In order for this comparison to be valid you'd need to get 100% utilization of
your colo or Google Fiber pipe. You only pay for what you use with AWS et al.
And quite obviously the pricing of GF and Amazon Lightsail assumes less than
100%. Nobody's getting "screwed".

------
bkruse
I mentioned this in the S3 price-reduction announcement.

People fail to realize the true cost of operating on S3, specifically when
hundreds of TB of usable data are in play:

"By putting the "tax" on bandwidth, a lot of these business cases are solved.
I see why Amazon does that. AWS is great, but as you get into high scale
(specifically in storage - 2PB+), it becomes extremely cost prohibitive."

------
Arador
As the author of this post I need to clarify something. I love Amazon AWS, and
I love the flexibility and awesomeness of cloud computing. I just don't like
the bandwidth pricing ;) Sorry for the interruption, feel free to continue
crucifying me. P.S. If someone has more accurate data I'd be happy to update
the post or add a guest post. Cheers, Love Arador xoxox

------
Dylan16807
If colocation actually cost that much, it would make sense for a connection
that allows extreme bursting to charge 3x as much per byte.

The real number to compare to is the Google Fiber for Business rate. You can do
lots of colocation in that price range. And _that_ is why the cloud prices are
unreasonable.

------
FLUX-YOU
New fun:

Take credit card churning and apply it to cloud data. Build tools to
seamlessly move apps between cloud providers.

------
venning
Since Lightsail is mentioned, it's probably worth including Digital Ocean
since they offer almost identical network transfer for the money [1].

[1]
[https://www.digitalocean.com/pricing/](https://www.digitalocean.com/pricing/)

~~~
RKearney
It's also worth noting that Digital Ocean gives you half the amount of RAM
that Linode[0] and Vultr[1] offer at the same price point.

[0] [https://www.linode.com/pricing](https://www.linode.com/pricing)

[1] [https://www.vultr.com/pricing/](https://www.vultr.com/pricing/)

------
throwaway-1209
Amazing to see the number of people who try hard to justify this blatant
rip-off pricing. This is coming from the same group of people who complain
endlessly about the cost of wireless data and telco data caps.

~~~
overcyn
With telcos, there are only a few companies to choose from. If you want to run
your own dedicated servers, nothing is stopping you. So what if people want to
pay more for the ease of cloud platforms?

------
m-j-fox
> Google Fiber for Business

Maybe worth renting an office in Provo just to get the deal.

~~~
allemagne
SLC if your employees like the mountains but might want a beer or caffeinated
beverage once in a while ;)

~~~
m-j-fox
I'm not cruel enough to locate actual people in Provo, just servers.

------
zero_intp
As an ISP architect, I think you overlook a great many cost centers in your
apples-to-doughnuts explanation.

------
zengid
So is Lightsail really worth jumping into?

------
sdenton4
Houses are such a rip-off! Just look how much more expensive they are than a
pile of wood!

------
mankash666
"Amazon EC2, Microsoft Azure and Google Cloud Platform are all seriously
screwing their customers over when it comes to bandwidth charges."

Disagree. There's no false advertising here; they're making you pay for the
service and convenience of using a combined [PaaS, IaaS, SaaS, etc.]. It's
unfair to view these services as a singular function; you typically touch MANY
features/products in production. The cost includes the convenience of having
everything under one roof, because, face it, doing everything by yourself at
the SLAs provided by the giants is no trivial task.

Unless you're a BIG company that likes to distract itself with infrastructure
instead of building and sharpening the core offerings, chances are that you
will NEVER really build anything as reliable, inter-operable, configurable and
manageable at cost.

~~~
slackingoff2017
I disagree strongly with this. Before all these cloud providers, websites were
not noticeably less reliable than they are now. I was around back when Apache
and CGI were all there was; even then uptime was so good that it was rare to
hit a website that was down.

There's a lot of koolaid being thrown around by the companies with cloud to
sell. Unfortunately these also happen to be the big "market leaders" so it's
hard to deny what they're saying and get taken seriously.

It's like six sigma, agile, or stack ranking. The big guys are doing it so it
must be the right thing to do... Right? Until everyone realized it's mostly a
ploy to make money selling books and conference tickets, or in this case rent
out a bunch of excess capacity for huge profit.

I disagree with the reliability claim as well. Most of my bare metal and
colocated machines have uptimes of many years. Most AWS VMs die after a year
or two. With the cloud you have to worry a lot more about fault tolerance,
whereas with dedicated equipment simple offline backups are often enough to
meet any reasonable SLA.

~~~
mbesto
> Before all these cloud providers websites were not noticeably less reliable
> than they are now.

Before all of these providers existed:

1) The volume of internet traffic that exists today didn't exist then. Mobile
devices didn't exist. Mobile devices that are always connected to the internet
and consume hours of our day didn't exist. Large downloads (1GB) didn't exist.
They couldn't exist, because only the infrastructure that exists now can
properly support it...at scale.

2) Websites that had large volumes of traffic had top tier expensive admins to
maintain them (surprise surprise, Amazon did and turned it into a service)

> Most of my bare metal and colocated machines have uptimes of many years.
> Most AWS VMs die after a year or two.

3) What's your definition of "most"? Sources for those numbers please?

~~~
slackingoff2017
As we've gotten more devices, computers have gotten more powerful. Server
software has gotten better. HTTP has keepalive and multiplexing now.
Encryption and networking are offloaded to hardware, and we have a lot more
cores.

I would guess a single server can handle thousands of times as many users as
it could years ago. HAProxy, Netty, Nginx, and others can handle over a
million (simple) HTTP requests per second. That's more requests than
Google.com gets.

Most as in: I've been watching over 100 AWS VMs and maybe 30 on Azure for
years, and they die or crash far more often than the VMs hosted here, at the
colo, or on our old bare metal machines. It's anecdotal, but it seems like AWS
doesn't really care about warning you before shutting off your machine. Azure
is slightly better but still goes down regularly.

I know everyone says "it's okay! Just make your servers fault tolerant!" Well,
that works great for load balancers and frontends, but it doesn't work at all
for SQL databases. ACID-compliant transactions require a single source of
truth, and a true multi-master SQL database is impossible. Failover, yes, but
you always risk losing data in the switchover unless you use two-phase commit,
which actually makes your multi-master database slower than a single system.
In practice the failover almost always causes some data loss and log conflicts
you have to diddle with later. And God help you if the replica falls behind by
more than a couple of seconds.

Anyways, for SQL databases system reliability is as essential as ever and it's
a lot easier to get high SLA numbers when you control the hardware and the
power switch. The closest you can get to the Holy Grail is running KVM VMs
locally and doing live machine migrations when hardware starts to fail, but
even that won't keep your database running if something really bad happens.

~~~
garyclarke27
Thanks for the info, very useful; I didn't realise how unreliable AWS is for
database servers. You're correct, random unannounced shutdowns of db servers
are just not acceptable for critical data. I will be launching a new business
based on Postgres soon, and the thought of this is terrifying. I'm not keen on
the RDS-type services or colo, so this is an unexpected problem I need to
overcome. Do you know whether a VPS provider such as Digital Ocean or
CloudSigma would be more reliable?

~~~
slackingoff2017
I would say colocation is the most reliable. I'm sure a dedicated VPS is
better, but most still reserve the right to pull the plug for hardware
replacements. Colo isn't terribly expensive if you buy used equipment.

Really consider how important 100% uptime is, though. Google and S3 have gone
down multiple times without killing the internet or losing a ton of customers.
Plenty of large SaaS providers still use maintenance windows. Heck, GitHub
went down today. Not sure if you use ADP, but that goes down for a couple of
days a week!

I know it's not the popular thing to do, but you can get much better relative
reliability by running a single database per tenant and a limited number of
tenants per VM.

------
bitmapbrother
Did he factor in latency? If you want to ride in first class, you have to pay
more.

------
cagenut
This is a super irritating feature of "enterprise" vendor pricing. What almost
everyone on these platforms does is move most of their bandwidth out through
one of the CDN services (like, say, CloudFront) and then negotiate custom
pricing on that bandwidth, which is often as much as a full decimal place
cheaper as long as you sign a couple-grand-a-month yearly commit. There's
still this massive pricing cusp between using the cloud as a utility and
jumping into the suits & drinks & lunches sales-guy game.

