AWS’s anti-competitive move hidden in plain sight (lastweekinaws.com)
261 points by mooreds on March 18, 2023 | 152 comments



In my opinion there are better examples, such as how AWS' managed products are deliberately hobbled and handicapped to make it complicated to migrate away from them once you're in. Two examples that I have first-hand experience with:

It's easy to set Aurora up as "slave" for accepting replication data from an external MariaDB/MySQL server in order to make a smooth low-downtime migration into AWS, but there are... oddly peculiar kludges... when wanting to set Aurora up as "master" and have it feed its binlog outside in order to migrate away in the same smooth fashion.
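
(For contrast, the inbound direction is just a couple of stored procedure calls. A minimal sketch with pymysql; the mysql.rds_* procedures are the documented RDS/Aurora MySQL interface, but the endpoint, credentials, and binlog coordinates below are placeholders -- check the argument list against your engine version:)

  import pymysql

  # Connect to the Aurora cluster endpoint (placeholder host/credentials).
  conn = pymysql.connect(host="aurora-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com",
                         user="admin", password="hunter2", autocommit=True)
  with conn.cursor() as cur:
      # Point Aurora at the external source's binlog position, then start replicating.
      cur.callproc("mysql.rds_set_external_master",
                   ("external-db.example.com", 3306, "repl_user", "repl_pass",
                    "mysql-bin.000002", 120, 0))
      cur.callproc("mysql.rds_start_replication")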

For their managed Redis ("Elasticache") they have intentionally disabled a few functions that relate to exporting/migrating live data from one Redis instance to another. In particular the "MIGRATE" function, which is used specifically to make a fast and accurate dump of all data when you want to supplant (read: migrate) a Redis instance. Instead you have to hack manual outbound migration together yourself, key by key, database by database.
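
(To give a sense of what that manual hack looks like: a minimal redis-py sketch of a SCAN plus DUMP/RESTORE loop standing in for the disabled MIGRATE. Endpoints are placeholders, and it ignores real-world wrinkles like huge keys and writes arriving mid-copy:)

  import redis

  src = redis.Redis(host="my-cache.xxxxxx.use1.cache.amazonaws.com")  # Elasticache
  dst = redis.Redis(host="self-hosted-redis.internal")                # migration target

  for key in src.scan_iter(count=1000):  # walk the keyspace incrementally
      payload = src.dump(key)            # serialized value MIGRATE would have sent
      if payload is not None:
          ttl = src.pttl(key)            # remaining TTL in ms; -1 means no expiry
          dst.restore(key, ttl if ttl > 0 else 0, payload, replace=True)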


IMO these two examples might just be cases they were not optimizing for, and didn't want to invest engineering time (and future support) into something seldom used.


They are seldom used because they are difficult to use.

They are difficult to use because using them means loss of money.

Combined with other entrapping features, this means that all the features that help you escape the ecosystem are seldom used. So any individual product's off-boarding features are used even less.

These are also features that you don't use more than once.

So of course when a product manager looks at usage metrics, they see that the feature is seldom used and continue to deprioritize improving it.

Self-fulfilling prophecy.


“Made it easier for customers to migrate to our competitors” isn’t a great look on your review.


“Made our customers happy by allowing them to easily migrate to a self-hosted redis instance in AWS so they were able to build their business and brought in twice as much traffic”.

Your comment echoes the crux of the article. Yes, this probably wasn’t an explicit goal for AWS, but the end result is that people aren’t using Amazon’s products because they’re the best, they’re just using them because it’s harder to change or more expensive not to (when it could be cheaper if the playing field was level).


> Your comment echoes the crux of the article. Yes, this probably wasn’t an explicit goal for AWS, but the end result is that people aren’t using Amazon’s products because they’re the best, they’re just using them because it’s harder to change or more expensive not to (when it could be cheaper if the playing field was level).

I think you hit the nail on the head, at least regarding my own experiences.

Most of the time when a colleague says “hey we can use X service because Y” in AWS, the “Y” is something along the lines of “because it’s just easier to use Z with it,” where “Z” is another AWS service.

AWS has many impressive tools. That doesn’t mean they’re built with friendliness in mind. Their tools are meant to provide value for money, but a lot of that value is predicated on using other tools they provide. They’re building a platform, not -your- platform.

We learned that lesson early on with my business. We ultimately replaced all but 2-3 AWS services with third parties that were actually best-in-class, and kept the 2-3 from AWS that AWS can honestly say are best in class. It was ultimately more expensive and kludgy to build everything on AWS rather than diversify our stack.


When you see the same pattern across many cloud providers (data easy to get in, hard to get out), you start to get the point.


It’s “data gravity,” as Dave McCrory put it a long time ago. Make it easy to put your data and workloads on a cloud, and the “gravity” makes it hard to escape. The more you have, the harder (more expensive) it is to break the pull.


It's not simply a case of more data = more work to migrate, which by the way is a fallacy in many cases. This is an artificial type of "gravity", because the migration procedure itself was hampered by deliberate disabling of the necessary tools.


See also: egress fees


They invested engineering time into specifically removing these things and more.


> It's easy to set Aurora up as "slave" for accepting replication data from an external MariaDB/MySQL server

Huh. I tried setting up AWS RDS MySQL to replicate from an external MySQL a while ago, and the experience was almost comically poor. Is Aurora better?

If you generate a mysqldump and have the command line options set to correctly store the binlog or the GTID data to allow replication to start in the right place, RDS chokes.
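
(For reference, the dump I mean is something like the following; a sketch with placeholder host and schema. It's the embedded replication metadata -- the binlog coordinates from --master-data=2 and the GTID set from --set-gtid-purged=ON -- that RDS seems to trip over:)

  import subprocess

  # Consistent dump that records where replication should resume.
  subprocess.run(
      ["mysqldump", "--host", "source-db.example.com", "--user", "repl",
       "--single-transaction",      # consistent snapshot for InnoDB
       "--master-data=2",           # binlog file/position, written as a comment
       "--set-gtid-purged=ON",      # include the GTID set
       "--result-file", "dump.sql", "mydb"],
      check=True)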


>"Is Aurora better?"

This was back on Aurora 1 (based on MySQL 5.6), and setting it up as "slave" was identical to how one would go about it for regular MariaDB/MySQL.


What kinds of kludges are you referring to about reading Aurora binlog? I’ve been using Debezium to read the binlog to transition off of aurora and it works as expected.
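
(In case it helps anyone else off-board the same way: registering the connector is one POST to Kafka Connect. A sketch with placeholder endpoints and credentials; the property names follow the Debezium MySQL connector, though newer Debezium releases renamed a few of them:)

  import json, requests

  connector = {
      "name": "aurora-offboard",
      "config": {
          "connector.class": "io.debezium.connector.mysql.MySqlConnector",
          "database.hostname": "aurora-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com",
          "database.port": "3306",
          "database.user": "debezium",
          "database.password": "hunter2",
          "database.server.id": "184054",  # must be unique per replication client
          "topic.prefix": "aurora",        # "database.server.name" on older versions
          "schema.history.internal.kafka.bootstrap.servers": "kafka:9092",
          "schema.history.internal.kafka.topic": "schema-changes.aurora",
      },
  }
  resp = requests.post("http://connect.internal:8083/connectors",
                       headers={"Content-Type": "application/json"},
                       data=json.dumps(connector))
  resp.raise_for_status()  # success means the binlog is now streaming to Kafka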


Free ingress, expensive/hard egress.


> Free ingress, expensive/hard egress.

Let's call AWS's "free ingress, expensive/hard egress" business model the AWS roach trap pattern.


I’ve heard it called the Hotel California pattern.


"Roach trap pattern" is actually a great name.


"The first one's on me, friend."


Pretty much every single network provider offers free ingress or egress. That is the industry standard pricing model for paying for bandwidth.


speaking of Elasticache -- I noticed that the "append-only file/AOF" feature for recovering from crashes/reboots is only available on AWS redis up to version 2.8.22, and we're now at like redis version 7 in the rest of the world..


I'm convinced that, too, is disabled deliberately, in order to coerce you into buying one more overpriced Elasticache instance to use as fail-over. This type of "funneling" design pattern is recurring across their managed products.


Both AWS and Azure's managed SQL Server database solutions are hobbled like that. They can be replication targets but not sources.

At least RDS for SQL Server lets you do native backup/restore, so while it's slow there's still a good way to get your data out. Azure SQL, you're stuck with something like the Elasticache solution. (Or theoretically BACPACs, but they aren't transactionally consistent and don't work for large databases.)
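
(The native backup route is a stored procedure that writes a .bak straight to S3. A rough pyodbc sketch, assuming placeholder endpoint/ARN and that the instance has the SQLSERVER_BACKUP_RESTORE option group attached:)

  import pyodbc

  conn = pyodbc.connect(
      "DRIVER={ODBC Driver 17 for SQL Server};"
      "SERVER=mydb.xxxx.us-east-1.rds.amazonaws.com;DATABASE=msdb;"
      "UID=admin;PWD=hunter2", autocommit=True)

  # Kick off a native backup to S3; progress can be polled via rds_task_status.
  conn.execute(
      "EXEC msdb.dbo.rds_backup_database "
      "@source_db_name = ?, @s3_arn_to_backup_to = ?",
      "mydb", "arn:aws:s3:::my-bucket/mydb.bak")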


True, but you can't blame them for adding some friction to things that lose them money while making the profitable features more convenient. What's better is to know these things, like you do, and not use AWS if the above areas are deal breakers. Unfortunately they don't play nice, since they can get away with it, which is anti-competitive, but hey, they'll give you the 1st year free lol.


Some other things:

1. Deprecation of the NAT instance AMI in favor of managed NAT

2. No EKS free tier


How are these anti-competitive?


Makes it harder for you to roll your own on EC2 for lower cost.


You have the choice of running redis on an EC2 instance.


Yes, which we're doing these days. It's how we ran into this "enterprise feature".


OP was a better example than yours by a large margin.


At a former gig, the Prometheus server cost 10% of the total AWS bill in cross-AZ bandwidth charges alone. We were in a single region.

I have long thought it would be interesting to build a highly economical workload that is cross-region, but which only uses a single AZ in each. Inter-region bandwidth costs a little more, but if all your heavy traffic (monitoring etc.) is local you will save a ton. And since your only failover is cross-region, it should be easier to maintain the ability to do so as an organization. Also no extra exposure to whole-region failures :)


This is why their new load balancer variants are insidious, as they require you to have instances in multiple availability zones :/.


But you don't need to enable cross-AZ traffic. Or to be precise you can disable it (if you give up on some features like stickiness).
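
(For an NLB this is a load balancer attribute; a quick boto3 sketch, with the ARN as a placeholder:)

  import boto3

  elbv2 = boto3.client("elbv2")
  elbv2.modify_load_balancer_attributes(
      LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/my-nlb/abc123",
      Attributes=[{"Key": "load_balancing.cross_zone.enabled", "Value": "false"}],
  )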


Only helps if your instances don’t need to talk to each other


Instances can still do cross-AZ traffic, this just prevents the incoming load balanced traffic from crossing AZs.


They did this with managed Kafka too.


Ummm...

Quick back of the envelope [1] shows that the EC2 pricing is still only about 60% of the Aurora price, and something less than half of the RDS/PostgreSQL price, even counting the intra-region transfer.

There may be some transfer patterns and database sizes that cause the Aurora implementation to end up less expensive, but I think under most use cases Aurora is _more_ expensive, and RDS _much more_ expensive.

Just because the intra-region transfer isn't itemized doesn't mean that you aren't paying for it. Heck, the fact that the mark-up is so _low_ is pretty impressive, to be honest.

[1] Assumptions: [db.]r6g.large, x3 for ec2 instances, 1 year reserved capacity with monthly payments, 100GB storage, 1TB of intra-region transfer/month.
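
(The same envelope in code, if anyone wants to rerun it; the unit prices here are illustrative assumptions rather than current list prices, so plug in your own region's numbers:)

  HOURS = 730                # hours in a month
  EC2_HOURLY  = 0.067        # r6g.large, 1-yr reserved -- assumed
  RDS_HOURLY  = 0.187        # db.r6g.large RDS PostgreSQL -- assumed
  XFER_PER_GB = 0.02         # cross-AZ: $0.01 out + $0.01 in

  ec2 = 3 * EC2_HOURLY * HOURS + 1024 * XFER_PER_GB  # self-managed + 1 TB/mo replication
  rds = 3 * RDS_HOURLY * HOURS                       # managed; transfer not itemized
  print(f"EC2 ${ec2:,.0f}/mo vs RDS ${rds:,.0f}/mo ({ec2 / rds:.0%})")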


Intra-region markup just isn't that much anyway. It's typically $0.01 per GB across AZs. [0] If you ingest 1 TB of data (a lot of data) into a MySQL instance and replicate it to a replica in another AZ, that will run you $10.24 per month.

That's why a lot of services don't call it out.

[0] https://aws.amazon.com/ec2/pricing/on-demand/#Data_Transfer_...


Except the entire premise is not true...

I am not sure what Corey has done to get to these numbers, but inter-AZ traffic within the same VPC is free.

Perhaps the problem is that the pricing page is very confusing [1].

Probably the confusing part is that VPC peering between one VPC and another that crosses an AZ is not free, but you do not need to do that to compete with RDS availability.

> The problem that I want to highlight today is that if I spin up MySQL myself on EC2 instances, I’ll have to pay 2¢ per GB that I replicate between AZs, whereas I will pay nothing if I use RDS.

This sentence specifically, is not true. If your EC2 instances are in the same VPC but different AZs the data transfer is still free.

There's a better blog post which explains what is and isn't free here [2].

[1] https://aws.amazon.com/ec2/pricing/on-demand/

[2] https://aws.amazon.com/blogs/architecture/overview-of-data-t...


I can tell you from experience of optimizing those bills that it's very much NOT free in the same VPC for inter-AZ traffic.

From the link you provided:

> Data transferred between a Local Zone and an Availability Zone within the same AWS Region, “in” to and “out” from Amazon EC2 in the Local Zone: data transfer in $0.01, data transfer out $0.01


A Local Zone is not the same as an Availability Zone, a Local Zone is an edge computing location.


Hm, actually you're right about this one (and the fact it's confusing af), but they do charge you for cross-AZ traffic, as anyone who has tried to run a reasonably loaded multi-AZ Kafka cluster will attest


> This sentence specifically, is not true. If your EC2 instances are in the same VPC but different AZs the data transfer is still free.

Your assertion is not true. We run intra-VPC services across AZs and they are billed for network traffic. It's also shown as incurring cost in your second source.

I had made this same comment then deleted it in order to check for sure. You are right that the Amazon docs are horribly confusing. The pricing page is up there with the descriptions of security on S3.


This diagram within your source 2 seems to directly contradict what you're saying: https://d2908q01vomqb2.cloudfront.net/fc074d501302eb2b93e255...


This diagram from a third party suggests that the traffic between different AZs in the same region is "No cost", but the written text suggests otherwise.

https://www.cloudbolt.io/guide-to-aws-cost-optimization/aws-...

Granted it is a third party website.

I wonder if something has changed with their pricing.


The author makes some glaring assumptions that these managed services are just sort of managed EC2 with no further optimizations. I find it very hard to believe that this is the case. Just on networking (which is his focus here) there’s a ton of opportunity to optimize networking between physical locations if you’re running a managed database service. It’s very reasonable for AWS to pass those savings onto their customers.

Corey likes to bash AWS for sport. Sometimes he has a point and other times he’s off the mark. Here he’s missed IMHO. He’s swatting in the air trying to find a controversy where there isn’t one.


AWS spun out of Amazon realizing the value of dog-fooding, which is using your own products to build other products, and selling every step along the way. That is, the base premise was Amazon Retail being a client of AWS like any other. It is not a stretch to imagine the same goes for many AWS managed services being built atop EC2. There are many products that are prime candidates for this: API Gateway, Kubernetes, Lambda (which is likely built atop the managed Kubernetes), etc

Heck, you can look at other cloud providers, and in some cases they are very open about stuff "being built atop public compute". One example that comes to mind is GCP's VPC interconnect for getting reachability between serverless and the rest of the VPC: it uses plain old compute to route the traffic; you can even pick which type and how many to provision.


You’ve said so many things that are factually incorrect it’s hard to know where to start.

There are very few services on AWS that started based on what Amazon Retail used. Even today, much of Amazon’s infrastructure is not on AWS.

Lambda is also not built on top of Kubernetes.

The one service that I can think of that came from Amazon Retail is Amazon Connect. It was originally used as Amazon’s own call center. You can tell it wasn’t built specifically for AWS because there was no public API for automated provisioning and no IAC support for years.

Google’s infrastructure is also not built on top of GCP. Very little of Google uses GCP and it’s mostly unimportant things.


Much of Amazon's infra is on AWS. The capacity you request from the capacity manager is just EC2 instances underneath, including cloud desktops.


Saying it’s EC2 has a very specific meaning. Does it use the EC2 APIs underneath? Are you running on servers managed by the AWS organization (where I work)?


> Lambda (which is likely built atop the managed kubernetes)

https://news.ycombinator.com/item?id=34964197 (Firecracker internals: Inside the technology powering AWS Lambda (2021)) may interest you, but the short version is no, not on top of kubernetes

Cloud Functions from GCP run on top of knative; one can see it in the metadata keys, but it's unknown whether it's "vanilla" knative or they have their own knative provider that does further trickery under the hood


GCP doesn’t run knative on top of k8s. They implemented the API on top of their own multi-tenant virtualization software.


Lambda was built on top of crosvm, which has been known for a very long time but was only acknowledged recently in their 2021 paper.


Bro, there is not a lot of dog-fooding. From Apollo to LPT to CDO, retail is not just another customer.

It’s changing though, slowly


> Let’s say I want to run something open-source in my AWS account; call it MySQL for this thought experiment. I can set up MySQL on an EC2 instance, or I can use AWS’s own managed service (in this case, RDS) to do it for me. I’ll pay slightly more for RDS, but that’s fair; there’s value in having AWS’s operational expertise applied to running infrastructure for me.

The author acknowledges that they are already paying a premium for the managed service. With that premium you are also paying for the traffic between instances. If the price were the same I'd see how this is anti-competitive, but as it is I don't understand the argument.


> With that premium you are also paying for the traffic between instances.

No, you aren't. You're paying for the service following an arbitrary pricing model that bears no relationship with their business costs.


You could've shortened that to: you're paying for the cloud integration and the management software.

Most paid software has no relationship between its price and business costs. That's what makes software special from the business perspective.


You have no idea how that’s calculated.


It's fairly easy for them to defend though. They can offer "free" data transfer on products they control for good reason. Meaning they can tweak how efficient the synchronization is, what network paths it takes, predict when they need to scale it, etc.

Offering the same cost for customer managed tech would mean they take on risk where they have fewer controls and less visibility to manage and mitigate it.


Who are these engineering suckers spending all of this dollar-labor on solutions when YOU KNOW full well that at every opportunity the vendor will want to find a way to lock you in, or to obscure engineering costs to make a deal look cheap, all without realizing that leaving the lock-in costs more than writing a cloud-independent solution to begin with?

I blame this lazy attitude of not knowing how to use your tools and instead asking cloud providers to know how to do things for you. Every single time I see an engineering team fail to understand how to use their tools and do their homework, they take the easy and long-term expensive way out. Oh and by the way, those solutions engineers you talk to barely know any better either. You will STILL eventually have to learn how to actually use the infrastructure you chose.

I write solutions for enterprises that are entirely independent of cloud specific offerings and then make sales engineers fight for volume business knowing they have no leverage.

Oh, you price hiked on us without notice? We already had deployment scripts to move providers and perform db redirections same-day and had already gone through procurement with other providers for insurance. Oh darn. Guess you shouldn't screw around with your customers.

As soon as you enter the pool, they WILL pull the ladder from you.


If you fully believe this, why are you even using a cloud provider? Why not buy your own hardware and set up your own server racks?

Of course it's because you agree on some level that pure DIY ("knowing how to use your tools" instead of "asking cloud providers to know how to do things for you") is simply not worth it. Companies who use these premium managed services are just setting the bar one level higher than you do.

Even if these premium services end up costing $200K/year more than doing it yourself on commodity VMs, that's not even the salary of 1 FTE engineer at many companies. It's simply not worth the time and opportunity cost to care about those things if it's distracting engineers from working on the core product.


Yeah, I thought this conversation was open and shut many years ago and the business model of cloud provider is very clear.


> I blame this lazy attitude of not knowing how to use your tools and instead asking cloud providers to know how to do things for you.

If you want to have such a strong attitude towards people who use Aurora instead of running Postgres on an EC2 instance maybe you should be collecting some silica sand instead of worrying about cross-cloud compatibility.


It's typically people whose incentives in no way align with lowering long term costs for their organization. When you are building something new, the incentives are often on speed of delivery, plus whatever you can learn for yourself. Evaluating whether you'd be saving a lot of money 3 years from now is something that's rarely going to lead to raises or bonuses. You might not even be working at the same place by the time the problem is visible. And even if you do, it's often far better for yourself to build something that will catch fire and then put it out than to build something better in the first place.

This happens in basically every organization out there, even in the best ones. I've saved millions a month in AWS costs that came from decisions of people that are major tech leaders today. The original architect is now a CEO in a company you know, and I looked great for my efforts: Everyone won, other than the largest shareholders, who weren't looking into the decisions at all, as they had bigger fish to fry.

If incentives are such that the extra effort to be more cloud portable are useless to the person spending the effort, of course they are going to be "lazy". I've never worked at a place where I couldn't massively cut hardware or cloud costs by trying a little bit. It's just rarely been something that was anywhere near the top of the priority list.


We’ve also been living in a QE bubble for 15 years. While it doesn’t look like QT is going to hold much longer, we’re also probably unlikely to return to “spend $$$ as fast as you can” mode for a while, so we might see (are actually already seeing) a resurgence of cloud expatriates


I correctly guessed this would have to do with bandwidth pricing, but I didn't successfully anticipate which specific aspect of bandwidth pricing; this quirk had never occurred to me before, but yeah... that sucks :(.


Yes and it's also not true, same VPC traffic is free.

The blog post is misleading or misinformed.


Intra-region traffic (i.e. inter-AZ) being free would make way more sense to a customer.

I’d really love to know what the marginal cost to AWS for this is though. I reckon there’s a very healthy margin in that 2c/GB.


> I’d really love to know what the marginal cost to AWS for this is though.

I'd wager that AWS's marginal cost for intra-region traffic is zero. They own their infrastructure and other than maintenance and upgrades they have no cost linked with traffic volume.

Some cloud providers charge a round zero for intra-region traffic, and they don't have anything resembling AWS's scale. I doubt it's a loss leader.


> Some cloud providers charge a round zero for intra-region traffic

That's likely a selling point to attract people to the competition. "Look, our bandwidth is free!"

Also there is quite a lot of cost for internal, inter-zone traffic. You need all the routing equipment and the fiber links for that. And such traffic does not go through the same links as the public one, because you'd be paying the cost of public bandwidth through transits or peerings. You want your own backbone, but this has a cost that needs to be amortized. Furthermore, properly designing such a network has a cost in engineers.


It's not just a selling point, it's a reflection on how providers see customers. I've been struck when using other platforms (OCI, some PaaSes) that it feels a lot more like a partnership, where everybody makes money when I succeed, than a short-horizon extraction machine. And sure, maybe they'd change their minds in AWS's position...but they're not, and maybe nobody should be allowed to be as big as AWS?

OCI also doesn't charge for NAT gateways, which makes sense because it's a configuration in a router that already exists and is already doing things. AWS, on the other hand, brings out the knives for you.


Do you think OCI would keep that pricing if they were the leader? Just check Oracle's history of milking customers


What does that have to do with literally anything, except excusing anticompetitive and extractive behavior because somebody else might hypothetically do it?

They aren't and so they can't. AWS can and should be prevented from doing so.


A single 800 Gbit/s port can do 259,200,000 GB a month. That's $5,184,000 a month. Their traffic prices are ridiculous and there is no excuse for it.
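
(The arithmetic, for anyone checking; the $0.02/GB is the cross-AZ rate quoted in the article:)

  port_gbits = 800                    # one 800 Gbit/s port
  secs       = 30 * 24 * 3600         # a 30-day month
  gb_month   = port_gbits / 8 * secs  # bits -> bytes: 259,200,000 GB
  print(gb_month * 0.02)              # at $0.02/GB: 5184000.0 dollars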


You're not buying the gigabytes, you're buying the systems engineering that carries those gigabytes.


> you're buying the systems engineering

Systems engineering is not measured in dollars per byte transferred.


AWS engineers must be horrible because others are able to do it for at least an order of magnitude less.


Or they make more profit.


> That's likely a selling point to attract people to the competition. "Look, our bandwidth is free!"

But bandwidth is free in intra-node traffic, isn't it? I mean, in some scenarios this traffic doesn't even leave the server. What does localhost traffic cost you?


ssssh! If the FTC did their job, the way Cloud providers priced bandwidth would drastically change across the board.

How much does Google pay for bandwidth? What percentage of the backbone do they own/lease/run?


They will just charge you with some other fee.


What's the cost of not having routing equipment and a backbone? At that point they would just be two separate regions which are too geographically close to actually offer redundancy.


You think amzn pays zero for inter-AZ transfer costs over fiber/whatever they don't own?


> You think amzn pays zero for inter-AZ transfer costs over fiber/whatever they don't own?

Some of the traffic doesn't leave the server, let alone the server rack.

I'd wager these costs have zero to do with operational or infrastructure costs, and are just a way to price gouge customers.


The stated purpose of AZs (the A stands for "availability") would suggest that this is not true. If AWS is running AZs foo and bar on the same server rack, then they're not being honest about what an AZ is. I'm not saying that's impossible, but if they're lying about this, that's a bigger problem.


Maybe? The whole thing with running large parts of the internet is that no one can refuse to carry your traffic, and you are able to dictate very favorable conditions.

(Also, whatever they pay, it is not charged in GB-increments.)


They have to pay for fiber running across data centers, the connection costs, switches, etc. It's definitely not free and no the conditions aren't favorable. In a lot of places there's a monopoly on who owns the fiber etc.


It is free if you're in the same VPC, unfortunately nobody else in the thread seems to have pointed that out.


Nobody pointed it out, because it's not true.


This is not an "anti-competitive move"

This is called a bundle

On the third-party side, we have: Compute + aws compute margin + network + aws network margin + storage + aws storage margin + software management + third-party margin

AWS is able to lower each of the margin parts, because they know you're taking the bundle

Why would you even expect to get a cheaper but overall equivalent service on someone else's infrastructure... THAT does not make any sense.


They lower the price because they own the whole pie? Sounds anti-competitive to me.

https://www.ftc.gov/advice-guidance/competition-guidance/gui...


You did see the part about it only being relevant if the company is a “monopolist”?

There are 5 cloud providers in the US and according to Jassy (my skip * 7 manager) in public statements, less than 5% of all IT spend is on any cloud provider.

Did you even read your own citation?


> Did you even read your own citation?

-1, violates hn discussion guidelines.

I did. AWS is being anti-competitive, this isn't a board game with narrow definitions.

EU Monopolist or Chicago School Monopolist?

I won't respond.


The point being that you have misunderstood what a monopolist is, even according to your own source.

No, it is not a singular company, but yes it does require significant market power.

If there are 5 major cloud providers then this is pretty strong evidence against having significant market power, and yes the FTC definition and US court systems agrees with this definition, and disagrees with you.


I never talked about monopoly, I talked about anti-competitive behavior. You and scarface74 have put words in my mouth and twisted the conversation to meet your narrative.

AWS behavior is anti-consumer, and puts the house brand at an unfair advantage.

https://www.ftc.gov/advice-guidance/competition-guidance/gui...

From the link above

> Courts do not require a literal monopoly before applying rules for single firm conduct; that term is used as shorthand for a firm with significant and durable market power — that is, the long term ability to raise price or exclude competitors.

Where did I ever make the argument about monopoly? My citation was to point to an FTC page on bundling services; the way AWS "bundles" has the same effect. They charge everyone the premium and then remove it if they buy the house brand.

This isn't shopping for a brake job, AWS already has an egress moat.

> If there are 5 major cloud providers then this is pretty strong evidence against having significant market power, and yes the FTC definition and US court systems agrees with this definition, and disagrees with you.

This doesn't hold. AWS already has leverage over their customers and the cost of leaving to another cloud provider (which probably does something similar) is not worth the remedy, exactly the kind of behavior the FTC would be looking for.

I'll make my joke more clear next time.

What is the speed of an unladen monopolist?

EU Monopolist or Chicago?

Your comment comes off as demeaning and uncivil. It doesn't appear that you discuss in good faith.


> I talked about anti-competitive behavior.

Literally anything that any company does to try to compete can be labeled as anti-competitive. That’s why there are real lawyers…

> My citation was to point to an FTC page on bundling services, the way AWS "bundles" is the same effect

So now are you saying that all bundles are illegal even though your citation says just the opposite?

> This doesn't hold. AWS already has leverage over their customers and the cost of leaving to another cloud provider (which probably does something similar) is not worth the remedy, exactly the kind of behavior the FTC would be looking for.

So what do you think the “FTC would be looking for” when AWS and all of the cloud providers combine account for a minuscule amount of IT spend?

> EU Monopolist or Chicago?

The EU take is completely irrelevant - especially when citing the FTC.

> Your comment comes off as demeaning and uncivil. It doesn't appear that you discuss in good fair.

Honestly, you just got called out because your citation does nothing to bolster your argument and you’re just really throwing things against the wall.


> I never talked about monopoly

> Where did I ever make the argument about monopoly

Oh awesome! So then since you are not arguing that it is a monopoly, you should know that US law would not define this behavior as being illegally anti-competitive, because of that fact alone.

Yes, some behavior can still be illegal as anti-competitive even if it is not a monopoly, but not in this situation! For bundling, in the US, it requires a monopoly for the behavior to be illegal.

> My citation was to point to an FTC page on bundling services

And bundling is legal if the company is not a monopoly, for the US, and you cited the FTC, which is about the US.


It violates the HN guidelines to ask whether someone read the submitted article. You posted your own citation, and it doesn’t say what you seem to think it says.

“Words mean Things”. A “monopoly” isn’t something that has 4 major competitors, and where there’s another alternative: 95% of all IT spend is not on any major cloud provider.

And just because you don’t like the legal definition doesn’t mean it isn’t the real definition.

And since the citation is from the American Federal Trade Commission. The EU definition is meaningless.


I’m completely not getting why any of this is anticompetitive. They offer a discounted rate to run their own services but their own services lack certain features and flexibility you would get if you run them yourself. That seems the very definition of the traditional IT infrastructure tradeoff - cost vs functionality. Having options in that space is good.

TFA basically says it’s anticompetitive because if you don’t use managed services you pay fees for interzone network traffic. Yes, but if you don’t use managed services you don’t pay for them so you save on that cost. Which cost tradeoff is best for you is going to depend on a lot of factors (how much interzone traffic, your use case etc) and you’re going to need to make a sensible decision that works for you.

In general I’m not sympathetic with the ethos of complaining about cloud service providers. They are great for many things but not all things and there are many other hosting options you can explore that may be better depending on what you’re trying to do. In many case it’s a cost vs convenience or cost vs functionality type tradeoff.


I don't see how this is anti-competitive. It's a crappy business practice designed to promote lock-in, but Google Compute and Azure both exist and are run by large, well-known companies that have the marketing and technical prowess to compete with AWS (and they do, even if not as effectively as they--or the consumer--would like).


>It's a crappy business practice designed to promote lock-in

Making it harder to switch is by definition anti-competitive

The whole point is that companies should not be trying to trick customers or stifle them in any way from making a rational decision that is in their own best interest, including leaving their service.

The whole game here is "what can we get away with" and playing that game to start with is unethical.


"Making it harder to switch is by definition anti-competitive"

It isn't. Anti-competitive would be using one's market power to drive out competition or keep competition from arising. There is no reasonable definition of "anti-competitive" for Amazon in this case, since Google and Microsoft are (arguably) even better positioned to compete (since they are tech companies, where Amazon is not). To avoid the lock-in, simply choose a different provider, or self-host. There is literally tons of competition in AWS' market.


> Making it harder to switch is by definition anti-competitive

Every provider out there optimizes the integration between its own services and makes sure the products in its ecosystem work nicely together. Meanwhile they cannot account for all other usages outside their systems. That's far from "making it harder to switch".


A market dominated by three large players is not particularly competitive, especially since all their offerings are mostly the same, cost mostly the same, and there's very little to differentiate between any of their services.


If only that were true. The only way to remain cloud-agnostic seems to be to use Kubernetes and some kind of managed SQL database. Once you factor in things like Pub/Sub, SNS, DynamoDB or Firebase, you quickly lock yourself in. Running the same serverless application across clouds gets even more complicated


Anyone who thinks that using K8s and running everything on your own makes “cloud agnosticism” realistic has never dealt with large scale migrations and the institutional complexity behind it. (Yes I’m agreeing with you)

Yes I work for AWS Professional Services now. But I’ve seen the same with Azure and on prem in previous jobs. It hardly ever makes business sense not to go all in on whatever your infrastructure decisions are.


You’re still paying for transfer, it’s just not itemized like hand rolled solutions.

The article should cover this, so maybe the author just isn’t knowledgeable enough to realize.

It’s like complaining that a buffet doesn’t charge for dish washing fees.


The actual cost of transfer is a minuscule fraction of what they charge, especially inside the region where it’s very near free. Cloud bandwidth markup is absolutely massive and is definitely strategically applied, such as the classic free ingress and expensive egress.

It’s all designed to get you in and then herd you into lock in. Why would they build it differently?


If you've seen the graphs, you'd understand why in is free and out costs money. Because it's almost all out already. It's also not nearly as free as you think to run massive clouds.


What does Amazon pay for outbound bandwidth?

I know what we pay for data center bandwidth. It’s priced by pipe size not transfer and works out to well under one cent per gigabyte, and that is with no scale compared to Amazon. Everything gets cheaper at scale unless it’s labor intensive.


Amazon does have a quality connection with multiple providers blended together. It's definitely better than the cheap networks with only 1 or 2 lower-end providers.


At most good data centers there are many peerings. We have infrastructure servers that have had near zero down time for years, including as measured by remote pings. Down time will be on the order of a minute or two a month (average) worst case. Sometimes it can be less.

The cost is less than a thousand dollars per month per drop with guaranteed 5 Gbps sustained bidirectional throughput. We can saturate this 24/7 at no additional cost. For a few thousand a month you can go as high as 20 Gbps sustained. How much would that (fully saturated) cost at AWS?
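
(Putting those numbers side by side; a sketch that assumes the pipe is fully saturated and uses AWS's commonly cited ~$0.09/GB internet egress tier for comparison:)

  secs     = 30 * 24 * 3600   # a 30-day month
  gb_month = 5 / 8 * secs     # 5 Gbps saturated ~= 1,620,000 GB
  print(1000 / gb_month)      # flat $1k/mo drop ~= $0.0006 per GB
  print(gb_month * 0.09)      # same volume at ~$0.09/GB egress ~= $145,800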

Providers like Vultr and Digital Ocean charge 1/8 to 1/10 what AWS and Google charge per gig and make a profit. They are still much more expensive than raw connectivity.

My point is simply that cloud bandwidth pricing is massively inflated and that anyone who knows a tiny bit about tier-1 bandwidth knows this is almost pure profit for cloud providers. Bandwidth is very very cheap at peering points.

I’d be totally shocked if Amazon didn’t pay far less per unit for bandwidth than a typical colocation tenant due to their scale and negotiating power. The really big clouds even own some of their own tier-1 fiber.


It's clear y'all have no idea how much it costs to connect that amount of bandwidth everywhere around the planet. It's not just the edge costs that are huge; just carrying that bandwidth around internally and between devices requires massive infrastructure.


And the redundancies around it.

A lot of comments compare it to other providers (or themselves), but I do wonder if it's really the same, i.e. whether you actually get zero downtime or an incident just hasn't occurred "yet". The surprise kicks in when you realize there's a single router, switch or dark fiber path.


The author in this case is an expert in AWS billing and runs one of the most well known and respected AWS finops consulting firms in the world.


I was assuming best intent, so maybe the author knows and is purposely not mentioning it to mislead readers into agreeing with their point.


This is just an extension of the egress fees, which are also anti-competitive.

Want to start a company called Snowflake, or Crunchy, or any of a number of similar providers? You’d better host in AWS (and Google Cloud, and Azure), because the respective cloud users wouldn’t want to pay for egress to use your service hosted elsewhere. Similarly, even Google can’t usefully undercut AWS’s pricing to get these providers to move out of AWS, because AWS would still charge the users.


Even without the egress fees, I'd prefer running software from 3rd party providers on the same cloud that my product also uses, since a) latency and b) integration with other services (VPC, for example).


If your customers are running their services in the cloud, it seems natural to run instances of your service in many clouds.


I'm up for AWS bashing more often than not, but I don't get what this post is proposing. Should AWS overcharge for the RDS traffic that is handled differently than generic customer connections? Or should they do deep packet inspection on all traffic to pick out MySQL replication streams and bill them at lower rate because... reasons? Or something else?

What does the better version of this situation look like?


It isn't a coincidence that they take their margins on bandwidth instead of compute. Makes you do the computations locally at AWS instead of sending it to some other datacenter.

The better version, which will never happen, is that AWS makes their margin evenly on all purchases. But that won't happen, since pricing bandwidth so highly encourages people to do everything at AWS instead of sending data between providers.


Security groups and NACLs are free. The way I see it, the margins on things that are easy to measure pay for things that would drive bad behavior if the margin was applied evenly.


Compared to pricing of non-cloud providers, I'm sure AWS also takes their margin on compute. It might be a magnitude less than for bandwidth, but it's still there.


Even compared to cloud providers AWS charges more for compute.


Why do you think they make huge profit on bandwidth? Larger customers all have PPAs with very large discounts. Smaller customers get the benefit of a stellar network with elasticity in the Tbps-plus range for a small $/GiB fee.


> Why do you think they make huge profit on bandwidth.

Because I have a rough idea of what bandwidth costs me at a data center, I can make some conservative extrapolations for their economies of scale, and I can read the AWS price page.

(I have also negotiated discounted bandwidth fees for AWS, and while they can be substantial, AWS is not going hungry.)


> Smaller customers get the benefit of a stellar network with elasticity in the Tibps plus range for a small $/GiB fee.

Are you AWS sales? This seems unrelated to the discussion to me.


> I'm up for AWS bashing more often than not, but I don't get what this post is proposing.

The discussion topic is about anticompetitive practices.

AWS charges a premium on intra-region traffic, when other cloud providers do not. The thesis is that these arbitrary charges are designed to force users to consume their managed services which further leads to vendor lock-in.

Can these arbitrary intra-region traffic charges be interpreted as anticompetitive practices? I believe there's a good case to be made. Don't you agree?


The traffic between AZs is probably being priced anticompetitively and they should reduce that cost. It is very likely marked up enormously over the cost of the circuits.


> RDS traffic that is handled differently than generic customer connections

Is this documented anywhere? I would assume it's handled the same.


I believe locking customers in to proprietary services is called "going full IBM."


The argument would carry more weight if it included data on pricing: how much more do AWS managed services cost vs. bare EC2, and what are typical bandwidth costs for the non-AWS managed options? Just because AWS rolls the pricing under a single price tag doesn't automatically make it anti-competitive


How many services were saved by HA? The cost of HA is loss of performance and increased complexity, in addition to monetary cost. If your application is not highly critical, I am not sure you are benefiting from HA. Is Hacker News HA? Nope.


> I’ll have to pay 2¢ per GB that I replicate between AZs, whereas I will pay nothing if I use RDS.

This is surprising as I remember that data transfer on EC2 within the same region was free. Maybe I always got it wrong.


I think you misremembered. It's 2 cents per GB across AZs: https://www.lastweekinaws.com/blog/aws-cross-az-data-transfe...


Overall, the egress/bandwidth fee structure is probably more of an anticompetitive move hidden in plain sight...

I've been directly involved in multiple systems being deployed on AWS solely to avoid egress fees


> I’m also not saying that this is some kind of mustache-twirling conspiracy on behalf of AWS to advantage their own services; I suspect this arose organically over time.

And so what if it was? All of what's described in the article are services offered by a company. It's hard to call that anticompetitive when it's their network and their services. It's like saying Toyota's car-building divisions are anticompetitive because they don't allow Honda to build their engines.

If you don't like how AWS's pricing works you HAVE to vote with your wallet and support another vendor in their efforts.


As far as a list of moves by AWS that seem fishy goes, this one really isn't. This to me is closer to artificial scarcity pricing that matches a customer's need and willingness to pay. It's no different than putting SAML in the “enterprise” column of SaaS pricing.

AWS had way more advantages, and while this price is much higher than the underlying costs, it's sort of run-of-the-mill business.


Agreed, the article is a stupid take. AWS must be able to differentiate their services; otherwise, what reason would one have to buy vs. build yourself?


> what reason would one have to buy vs. build yourself

I dunno--AWS's could be better without you spending effort on it? The same way every other cloud provider incentivizes using their managed services?

Don't get me wrong--a number of managed AWS services are very good. RDS is, even, for a lot of use cases. But AWS forces you to consider cost versus quality when you fall in the RDS gaps, and that's a shitty way to do business. AWS relies on being the only network transit provider in town to incentivize continued use of horizontally related offerings.

This is literal anti-competitive behavior, just as browser bundling was for Microsoft. Like that's the definition of it.


I mean, if you are building it yourself but on top of their cloud, why does it matter to them? And, if you are building it yourself and you aren't using their cloud, then you are already being charged egress bandwidth fees as your awkward differentiator (which is an entirely different issue).

Like, the premise of these offerings has absolutely no need to try to compete with stuff running on AWS as they are clearly barely taking any excess margin over running it yourself. I would always have said the goal of these offerings was to make it easier to use AWS so you didn't have to build it yourself (and some of this stuff is complex to get right).


> I mean, if you are building it yourself but on top of their cloud, why does it matter to them?

It matters when they see a profitable business opportunity in optimizing the bundled costs of the service offerings. And who can say "clearly" what the real margins are, especially at scale? All we know for sure is whatever those are, they are good enough for them to continue to offer these managed services.


Maintenance overhead. If I buy, they manage configuration, updates, backups, etc.


Which, depending on your business, may not be good enough if not coupled with a compelling price point that beats doing it in house or with another vendor.


Another victim of the ridiculous prices for egress traffic that most VPS providers impose.


He's not a victim - he mostly benefits from them. He runs https://www.duckbillgroup.com/


I have a suspicion that cloud vendors are using bandwidth pricing as a way to deter abuse. Can't use them for ddos, tunneling, file sharing or mass scraping that easily anymore. The lock-in is a nice side effect.

Though Corey is talking about internal network use and I am talking about bandwidth to the internet


I wonder how much of this is a primary goal or just a nice side-effect for AWS. I imagine the custom internal forks of all this stuff, and their own data management systems are simply not compatible with the native variations that the products themselves have. From AWS's perspective it might seem wasteful to pro-actively support outbound data, and as a nice side-effect it makes it harder to leave or host an alternative.

For us, this is just part of the calculation; if we have special cases we run it ourselves, but the case also has to justify the inter-AZ replication and other side-effects.

The way we use AWS services also assumes migration paths outside of what you would normally do (binlog or CDC options) where we don't migrate 'the database' but 'the service' that relies on the persistence offering. This is of course much harder if some of your functionality relies on AWS-specific things, but for the average persistence system (ES, RDBMS, NoSQL, Redis, Memcache, object storage, Kafka, MQ) it really isn't all that exciting and the AZ cost is hardly a significant factor of the showstopper kind.

Perhaps this also is due to the way we designed our persistence systems; we only allow single-owner persistence, so no shared redises, databases, tables or schemas. We also don't do any binlog PITR; we use Kafka rewinds in between RDBMS snapshots, since the database isn't the only factor in our consistency and partition tolerance conundrum. And Kafka itself is so 'thin' on the server side that binary exports/imports are only tied to the major version and highly dependent on what the client does (which is something AWS does not control - if they did, their MSK offering would be useless).

Using AWS as a 'virtual datacenter' or 'managed version of the thing you already have' is bound to turn into a PITA. That applies to GCP as well, but almost 10x for Azure. Ironically, you're better off not using the big three if that is what you want, you don't get a lot of the features, but you also pay a fraction of the cost. And if you're not using the features anyway (or see them as a problem), Linode, Vultr, DO, even Scaleway are all a much better fit anyway.

In my cases where we use AWS, it's almost always for the same reasons:

  - We use enough of it to get discounts
  - We use enough of it to allow us to do 10x with the same people, it is a very significant force multiplier
  - We require some of the cross-service facilities like IAM, KMS, ACM, Security Groups

In a small number of cases it's also for compliance reasons, but I try to stay away from that since that usually comes with a checkbox clipboard manager/auditor regime as well, and I don't enjoy managing or working for those.


TL;DR: “AWS networking costs are a racket”


If I understand correctly, the author is upset that a business offers various pricing models to drive consumers towards certain decisions?

Doesn’t like, every company do this?


Why the downvotes? lol. Funny, and weak, how so many here won't engage when they disagree.

Oh well, haters gonna hate; you defend AMZN and they cry.


That's how I also interpreted the article. The author makes a decent argument for how this is an anti-consumer practice, but doesn't highlight how the very intentional business design decision to invite consumers into a walled garden for a lower cost is anti-competitive.



