
One major thing this doesn't consider is the technical limitations: latency and bandwidth.

(1) You save a ton of money on bandwidth when you move data from AWS to AWS

(2) Your stack, in most cases, needs to be near each other to minimize latency. Databases get wrecked by this.

This is why cloud database providers often have to show you transparently which cloud you're launching on [1], which effectively means AWS is going to get a good share of it anyway. My uninformed guess is that EC2 and S3 are by far their biggest moneymakers, and those are what unbundlers are going to target.

I'm all for the unbundling and will probably take part in some of it, but I don't think it will be that easy.

[1] https://www.cockroachlabs.com/product/cockroachcloud

> You save a ton of money on bandwidth when you move data from AWS to AWS

This is only because AWS grossly overcharges for bandwidth. If you move all services that have high bandwidth requirements to providers with reasonable prices you'll save a significant amount of money.

Within a single AZ, that is; inter-AZ traffic is charged. It's also frustrating that AWS infra doesn't optimize for same-AZ traffic (falling back to another AZ only when one is impaired). For example, a client in us-east-1a can hit the Aurora reader endpoint and be directed to an instance in 1b even though there's a healthy instance in 1a.
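One way teams work around this (a sketch, assuming you're willing to bypass the reader endpoint and handle failover yourself) is to enumerate the cluster's readers and prefer one in your own AZ. In practice the reader list would come from boto3's RDS describe_db_instances call and your AZ from instance metadata; both are stubbed here with made-up names so the selection logic stands alone:

```python
# Sketch: prefer an Aurora reader in our own AZ, falling back to any healthy reader.
# In real use, `readers` would be built from boto3 rds.describe_db_instances()
# and `my_az` read from EC2 instance metadata; both are stubbed here.

def pick_reader(readers, my_az):
    """Return the endpoint of a healthy reader in my_az, else any healthy one."""
    healthy = [r for r in readers if r["status"] == "available"]
    same_az = [r for r in healthy if r["az"] == my_az]
    chosen = same_az or healthy
    return chosen[0]["endpoint"] if chosen else None

# Hypothetical cluster members (names are illustrative).
readers = [
    {"endpoint": "reader-1b.example.rds.amazonaws.com", "az": "us-east-1b", "status": "available"},
    {"endpoint": "reader-1a.example.rds.amazonaws.com", "az": "us-east-1a", "status": "available"},
]
print(pick_reader(readers, "us-east-1a"))  # picks the 1a instance
```

The tradeoff is that you now own health checking and failover logic that the reader endpoint would otherwise handle for you.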

A consultant on Twitter recently posted how a client got a bill for more than $100,000 because they unwittingly had a multi-AZ Kubernetes setup and moved a ton of data that way.
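For scale: inter-AZ transfer is billed at roughly $0.01/GB in each direction, so about $0.02/GB round trip. A rough sketch of what a $100,000 bill implies:

```python
# Back-of-the-envelope: how much cross-AZ traffic does a $100K bill imply?
# Inter-AZ transfer is billed roughly $0.01/GB in each direction ($0.02/GB total).
bill = 100_000           # dollars
per_gb = 0.02            # dollars per GB, both directions combined
gb = bill / per_gb
print(f"{gb:,.0f} GB, i.e. roughly {gb / 1_000_000:.0f} PB of cross-AZ traffic")
```

Around 5 PB/month of chatter between AZs, which a busy multi-AZ Kubernetes cluster can generate without anyone noticing until the bill arrives.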

Third parties like Snowflake get around this by having you pick where their service is hosted so that bandwidth and latency aren’t a concern.

Further, Snowflake is a good example of an unbundled service that has more capabilities than AWS Redshift (for instance, zero-copy clones). They use AWS for infrastructure - the entire warehouse is stored on S3 - and add value on top of it.

If a company needs near-commodity software, as most companies do, it's clear that over the long term AWS will win on reliability and probably price over an AWS startup.

So why choose differently?

Because AWS knows that many of their customers think like this and pursues ever more minimal Minimum Viable Products accordingly. Their quality is dismal and it stays that way over the long term.

AWS's killer feature has nothing to do with tech, it's the smooth billing process (techies choose, management pays and supervises). If you can put together a smooth process for paying for 3rd party software in your organization, you can unlock massive improvements in quality for a pittance.

There is nothing smooth about AWS's billing; it's notoriously byzantine, and so difficult to interpret that there are multiple third-party services whose entire offering is based on ingesting and parsing your AWS bill.

The value of AWS billing isn't that it's easy, it's that the spending decisions are in the hands of the right people so software development moves faster.

Yes, exactly. The billing tools are awful, have always been awful, and will always be awful, but bad tooling can be worked around much more readily than bad bureaucracy.

I would argue the killer feature is actually IAM: technically better than anything any other vendor offers, even if there are a million and one problems with it and not a single annoyance has been fixed since 2013.

I could see that.

I almost mentioned IAM in my post as a co-killer-feature, but I've only ever witnessed its power in AWS learning material / conferences, not being leveraged IRL, so I decided not to speculate. I'm more dev than ops, so even though I've seen a decent number of AWS environments the fact that they have all used IAM in a clumsy, coarse manner doesn't really mean much.

Do you have a feel for how frequently ops manages to actually leverage the fine-grained power features in IAM?

Essentially every large scale environment I’ve seen (as an ex-core maintainer of Terraform, that is quite a few) either does an ok job of this or has projects to do better.

The issue I have with IAM is that it is not possible to be sufficiently fine-grained - for example, I cannot grant an instance permission to read its own tag values but not those of other instances, since the EC2 IAM API is stuck at the 85%-done state at which most AWS services eventually seem to plateau.
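To make that concrete, here's the best you can actually attach today (a sketch; the Sid is made up). Because ec2:DescribeTags, like most EC2 Describe* actions, doesn't support resource-level permissions, Resource has to be "*" - there is no supported way to scope it to the instance's own ARN, so any instance with this role can read every instance's tags:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadTagsCannotBeScoped",
      "Effect": "Allow",
      "Action": "ec2:DescribeTags",
      "Resource": "*"
    }
  ]
}
```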

> AWS will win on reliability and probably price over an AWS startup

From my experience, Snowflake is both more performant AND cheaper than Redshift or other RDS options from AWS.

Bandwidth/latency was my first thought too. Still, for higher-level services, it is entirely possible to compete with AWS running on lower-level AWS infrastructure, rendering the network issues moot.

Good points.

Would add that edge compute, running cloud paradigms (code instead of config; automation; management abstractions), partially addresses these limitations for many use cases.

Costly to move the data off, but longer-term ROI for those orgs that are willing to make long-term decisions.

Meanwhile, as edge matures, greenfield apps should be edge-centric, rather than cloud-centric (doesn't mean they won't have cloud components...they will do the processing and storage where it best makes sense).

I thought the article was talking about avoiding cloud vendor lock-in by using software that keeps you portable between clouds. (Instead of software that dictates that you buy from a certain cloud provider.)

Couldn't you build your software stack in a cloud-portable way but, at any given time, still have 100% of it on whatever your current cloud provider is? And then switch to another cloud provider from time to time if/when the costs make it worthwhile.

Yes you can, but it’s very difficult and not worth the effort IMO.

The first problem is that you wind up using the least common denominator. You’re paying public cloud providers a premium, so this essentially equates to throwing away money.

The second is that there are some cases where what seems like the same thing isn’t, or best practice is wildly different between providers. Get in a room with an AWS expert and an Azure expert and talk about what an account is.

You raise a good point. Upon re-reading, portability to a different cloud is probably not the point. It's more likely about being on the cloud you're already on but having the choice to take only parts of the platform, instead of taking Amazon everything at every layer of the stack. So you're still on Amazon or whoever's cloud, but you may not use (for example) their database if you like some other database.

In other words, it's not about moving your entire stack off Amazon's cloud, it's about moving parts of your stack off Amazon's software even if all of it may still run inside Amazon's cloud.

>avoiding cloud vendor lock-in

You're simply moving from being locked in by AWS to being locked in by a bunch of much smaller cloud vendors.

1) Doesn't Amazon charge for ingress and egress transit, and isn't this unusual, and isn't it more expensive that way? I could have sworn I recently calculated AWS transit to be something like 20x the cost of that in a normal colo datacenter. Maybe there's something in AWS billing where this only comes into play when you cross certain technical boundaries, I don't know enough to say, but I'm thinking it would have to be something like that in order to "save a ton of money."

2) Contemporary site/app development renders the latency of individual requests irrelevant. The whole page is going to boop and bounce around for 20 seconds anyway, why make a big deal about it?

The web is a measurably worse experience now than it was 10 years ago (maybe even 5), so it's ironic that Goodhart's Law has led so many astray toward measuring smaller and smaller trees within a growing forest. "Well but we have to get the request for number of friends below 10ms" while the other 400 requests on the page are dilly-dallying and experiencing their own latencies. Then the CSS gets applied.

AWS egress prices are so insanely high that for bandwidth heavy apps I've had consulting clients where we cut their hosting costs by 90 percent by moving them off AWS. That's extreme, but their bandwidth costs are easily 5x-50x that of alternatives, so it doesn't take much egress before it dominates the cost.

If you don't mind me asking, which cloud provider did you move your client to? Or, moved them on-premise, you mean?

Depends on their need; I tended to move them to managed hosting, which usually cut both their hosting costs and their devops costs massively. Hetzner for anyone with most traffic in Europe. DigitalOcean in some cases. Sometimes we just put caching proxies (external to AWS) in front of EC2.

I've managed racks for customers too, but managed hosting at Hetzner is now usually cost-effective vs. colo-hosting in London where I am. Since they also offer cloud services (though pretty basic) now, there's the option of mixing and matching.

Amazon consistently charges for egress only. Ingress is free.

Live and learn, that seems to be the exception. That sounds like it might be an accident though?

Their inter-AZ pricing is certainly no accident. I'm betting that for a large chunk of customers it nets them something on par with egress.

Amazon and Azure both charge vastly higher rates for egress than colo datacenters, but I believe both charge only for egress and not ingress.

Well, almost every other AWS service is built on top of EC2/S3, so it's safe to say those two drive all of AWS's revenue either directly or indirectly. Some services probably even use EC2 instance hours as their revenue measure.

I think the real moneymaker is that AWS keeps making its servers more efficient but doesn't lower its prices to match.

Isn't AWS fairly widely reputed to have lowered its prices dozens of times since first launching? https://aws.amazon.com/blogs/apn/new-research-from-tso-logic...

Aren't there many providers with unmetered bandwidth in nearby datacenters?

AWS always charges for outgoing bandwidth. Unfortunately, they aren’t part of this: https://www.cloudflare.com/bandwidth-alliance/

All of the unmetered bandwidth offers I've seen come with low link speeds like 100Mbps or sometimes even 10Mbps.

Paying for use isn't awful, as long as you're not paying AWS list prices which are rather high.

All Hetzner servers are guaranteed 1 Gbit.

Yes - Hetzner supposedly pushes 324 TB/month for something like $30/month INCLUDING the server itself. So let's say $15/month for the bandwidth.

AWS is, let's say, $0.08 per GB, which works out to roughly $26K/month for the same bandwidth?
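The arithmetic, using the thread's rough figures (real AWS egress pricing is tiered, starting around $0.09/GB, so treat this as order-of-magnitude only):

```python
# Back-of-the-envelope: 324 TB/month of egress at the thread's rough prices.
# Figures are the commenter's ballpark, not official price-list quotes.
tb_per_month = 324
gb_per_month = tb_per_month * 1000      # decimal TB for simplicity

aws_price_per_gb = 0.08                 # ballpark AWS egress rate
hetzner_monthly = 30                    # server price INCLUDING bandwidth

aws_monthly = gb_per_month * aws_price_per_gb
print(f"AWS egress: ${aws_monthly:,.0f}/month")
print(f"Hetzner:    ${hetzner_monthly}/month, ratio ~{aws_monthly / hetzner_monthly:,.0f}x")
```

Even if the Hetzner figure is generous by an order of magnitude, the gap is still enormous for bandwidth-heavy workloads.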

And if you believe you can run your business on their $15/month network then they would own the market - but oddly they peer with basically NO ONE of any quality - because THEY don't actually pay for the bandwidth either and just totally oversaturate their peering links.


A quick search turns up some of the common whining, at least from the past.

My experience is a bit different. Peering works pretty well to many providers, especially with Cloudflare. The big disadvantage of Hetzner is that they're located in Germany and Finland only, so not great if your customers sit in the US. But OVH Canada provides reasonable options for that.
