
When they say "running a datacenter" they almost certainly mean "buying servers to put into rented colocation space".

Just about anyone who has significant network connectivity has a footprint in an Equinix datacenter. In the Bay Area you want to be in Equinix SV1 or SV5, at 11 and 9 Great Oaks in San Jose.

If you're there, you can order a cross connect to basically any telco you can imagine, and any other large company. You can also get on the Equinix exchange and connect to many more.

But Equinix charges you a huge premium for this, typically 2-3x what other providers charge for space and power. They also charge about $300 per month per cross connect.

So your network backbone tends to have a POP here, and maybe you put some CDN nodes here, but you don't build out significant compute. It's too expensive.

On the cheaper, but still highish-quality end you have companies like CoreSite, and I'm pretty sure AWS has an entire building leased out at the CoreSite Santa Clara campus for portions of us-west-1. (Pretty sure because people are always cagey about this kind of thing.)

I also know that Oracle Cloud has been well known for taking lots of retail and wholesale datacenter space from the likes of CoreSite and Digital Realty Trust, because it was faster to get to market. This is compared to purpose-built datacenters, which is what the larger players typically do.

In the case of AWS, I know they generally do a leaseback, where they contract with another company who owns the building shell, and then AWS brings in all their own equipment.

But all these players are also going to have some footprint in various retail datacenters like Equinix and CoreSite for the connectivity, and some extra capacity.

Zoom is probably doing a mix of various colocation providers, and just getting the best deal / quality for the given local market they want to have a PoP in. Seems like they are also making Oracle Cloud part of that story.




So many people forget that running a datacenter is a super complex business, not just in terms of technology but also in terms of operations.

I have known people who tried to set up a datacenter in India, and it took them around 2 years to get the first rack installed. The biggest hurdle was getting a license to store fuel in large tanks for their generators. Not to mention, many of those permissions have to be renewed annually, and renewal can take months; if you fail to renew, you are out of compliance and hence can't use the generators.

In India you cannot simply run your own power generation plant, and electricity may only be sold to the government. Depending on the situation, you technically have to register a separate entity, get licensed as a "power company", sell the electricity to the government on paper, and then buy it back from the government for your own use.


It's difficult, but not prohibitively so, and it's well worth the investment. When I was flirting with founding a company I looked at this trade-off, and with hiring, overhead, etc., it was significantly cheaper to roll our own than to use AWS. AWS was only cheaper at the earliest stage.

The only real reason to use AWS is to avoid diverting any energy away from scaling the company. The problem is that by the time you are at a reasonable scale, AWS has you pretty locked in.


You can get around some of these operational constraints with technology. For example, Google had a server design with its own built-in backup battery, which incidentally could be cheaper than diesel generators. So backup power for your servers is solved, but you still might need to figure out backup power for other parts of the datacenter.


Batteries don’t have anywhere near the energy density of hydrocarbons. Batteries are good for a few hours, but if you want to be able to run for days off-grid you will need hydrocarbons.
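Rough numbers bear this out. Here's a back-of-the-envelope sketch in Python; the load, battery specific energy, fuel energy content, and generator efficiency are all illustrative assumptions, not vendor specs:

    # Back-of-the-envelope energy math for riding out a grid outage.
    # All figures are rough illustrative assumptions, not vendor specs.
    LOAD_KW = 500                # assumed critical load for a small facility
    LIION_WH_PER_KG = 200        # typical lithium-ion specific energy
    DIESEL_KWH_PER_L = 10.0      # energy content of diesel fuel (~38 MJ/L)
    GENSET_EFFICIENCY = 0.35     # assumed fuel-to-electricity efficiency

    def battery_mass_kg(hours: float) -> float:
        """Battery mass needed to carry the load for `hours`."""
        return LOAD_KW * 1000 * hours / LIION_WH_PER_KG

    def diesel_litres(hours: float) -> float:
        """Diesel volume needed to carry the load for `hours`."""
        return LOAD_KW * hours / (DIESEL_KWH_PER_L * GENSET_EFFICIENCY)

    print(f"4 hours on batteries: {battery_mass_kg(4):,.0f} kg of cells")
    print(f"3 days on batteries: {battery_mass_kg(72):,.0f} kg of cells")
    print(f"3 days on diesel: {diesel_litres(72):,.0f} litres of fuel")

Under these assumptions, a few hours of battery is about 10 tonnes of cells and three days is 180 tonnes, while three days of diesel is roughly 10,000 litres in a tank.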


This. The certification barrier is steep, and clients want a reliable DC (with ISO certifications, redundant and working power, connectivity, etc.).


At what scale does colocation make sense as opposed to a long-term commitment on AWS or other clouds, where you get a discount?


If you're building an infrastructure company, or any bandwidth-intensive product, it makes sense pretty early. For example, it would be impossible to build a competitive CDN or VPN company on AWS infrastructure. It would also be hard for Zoom to offer a free tier if they were paying per gigabyte for bandwidth, since it would essentially mean every extra second of a meeting cost them money directly.

The fundamental problem is that AWS (or any major cloud) charges you for the amount of "stuff" you put through the pipe ($/GB), but with colocation you pay a fixed cost for the size of the pipe ($/Gbps). This allows you to do your own traffic shaping and absorb bandwidth costs without needing to pass them on to your customers.

This is the dirty, open secret of cloud pricing models. It’s also their moat, which makes it infeasible to do something like “build AWS on AWS.”
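To make the "size of the pipe" model concrete: once you pay a flat rate for capacity, staying under your committed rate is just a shaping problem. Here's a minimal token-bucket shaper sketch in Python (illustrative, not any particular vendor's implementation):

    import time

    class TokenBucket:
        """Minimal token-bucket shaper: caps egress at a fixed line rate,
        so a flat $/Gbps pipe is never pushed past what you pay for."""
        def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
            self.rate = rate_bytes_per_s
            self.capacity = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def try_send(self, nbytes: int) -> bool:
            # Refill tokens for elapsed time, up to the burst capacity.
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return True
            return False  # caller queues or drops; rate is never exceeded

    # Shape to a 1 Gbps pipe (125 MB/s) with a 1 MB burst allowance.
    bucket = TokenBucket(rate_bytes_per_s=125_000_000, burst_bytes=1_000_000)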


Yeah, this is really where they get you.

For context, if you were to buy 10 Gbps of dedicated internet transit, he.net is currently advertising that for $900/month.

If we convert that to GB per month, it's 3,240,000 GB, so we can calculate what AWS would charge based on list prices.

Using their pricing calculator: https://calculator.aws/#/createCalculator

Outbound from CloudFront or US West (Oregon) to the internet:

$165,891.11

That's 184x the price!

So yeah, you have to buy networking gear and other stuff, but you can get quite a bit of gear for $165K/month. Now, you don't really want to run that 10 Gbps link flat out like that, but you get the point.
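A quick sketch of that arithmetic (the blended ~$0.05/GB is implied by the calculator figure above, not an official rate):

    # 10 Gbps of flat-rate transit vs AWS metered egress, per the
    # numbers quoted above ($900/mo he.net, $165,891.11 AWS calculator).
    GBPS = 10
    SECONDS_PER_MONTH = 30 * 24 * 3600            # 2,592,000 s
    gb_per_month = GBPS / 8 * SECONDS_PER_MONTH   # 3,240,000 GB

    transit_cost = 900.00
    aws_cost = 165_891.11

    print(f"{gb_per_month:,.0f} GB/month through a saturated 10 Gbps pipe")
    print(f"Blended AWS rate: ${aws_cost / gb_per_month:.4f}/GB")
    print(f"Markup vs transit: {aws_cost / transit_cost:.0f}x")  # ~184x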

The AWS markup on bandwidth costs is absolutely insane.

Pro tip: if you have a large enough cloud provider spend, you can negotiate the bandwidth prices down quite a lot; given their markup, they have some room to move.


In China, the cloud provider pricing model gives you a slider bar to select how many Mbps your instance can scale up to (maxing out at 100 to 200 Mbps). But that has more to do with controlling customers' ability to burst, whereas most other providers bill on GB transferred. Some of the Chinese providers have adopted GB-transferred billing recently, though.


So the way the slider works is that you pay for the max Mbps irrespective of data transferred?
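That's the implication of the model as described above: the bill is fixed by the cap, while metered billing scales with volume. A toy sketch with made-up rates (no real provider's pricing implied):

    # Toy comparison of cap-based vs metered billing.
    # All rates are hypothetical, purely for illustration.
    CAP_MBPS = 100                  # slider setting: max burst rate
    PRICE_PER_MBPS_MONTH = 0.50     # hypothetical flat price per Mbps of cap
    METERED_PER_GB = 0.08           # hypothetical metered price

    cap_bill = CAP_MBPS * PRICE_PER_MBPS_MONTH  # fixed, whatever you move

    for gb in (100, 10_000):
        print(f"{gb:>6} GB moved: cap model ${cap_bill:,.2f}, "
              f"metered ${gb * METERED_PER_GB:,.2f}")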


My swag would be that around the $500K/mo opex and $5-10M/year capital expenditure mark, it's worth a conversation. I haven't been deeply involved in infrastructure for a few years now, but I did work in compute farms, networking, CDNs, etc. for about 15 years previously. In my old comments you can find more detailed math on swagging out actual network & infra costs for "equivalent to cloud" infrastructure.


This sounds about right to me. The last time I was doing that kind of work it was at a slightly larger scale than that, and it always modeled out cheaper to stay on our own gear, in colocation space.


Any scale where you get good utilisation of the metal. You can rent a dedicated 8-core server for $100/mo or so, with 1 Gbps unlimited usage. If you use it at full throttle it is literally under 1/10th the price, often under 1/100th, of what you will pay for a cloud host and the thousand little cost add-ons.
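For a feel of the numbers, here is a rough sketch of the effective bandwidth cost at different utilisation levels (the $0.09/GB metered rate is an assumption, roughly AWS's first-tier egress list price):

    # Effective $/GB of a $100/mo dedicated server with an unmetered
    # 1 Gbps port, vs an assumed $0.09/GB metered cloud rate.
    MONTHLY_FEE = 100.0
    PORT_GBPS = 1
    CLOUD_PER_GB = 0.09

    full_gb = PORT_GBPS / 8 * 30 * 24 * 3600  # 324,000 GB at 100% usage

    for util in (1.0, 0.10, 0.01):
        gb = full_gb * util
        print(f"{util:4.0%} utilised: ${MONTHLY_FEE / gb:.5f}/GB "
              f"(metered cloud would bill ${gb * CLOUD_PER_GB:,.0f})")

Even at 1% utilisation the flat-rate box wins on bandwidth under these assumptions.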

But the trick is you have to actually use it and need it in real time. An AWS instance costs you nothing if you don't use it, and almost nothing if you let them kill it at their whim.

Zoom's strategy looks pretty optimal to me. Take the 100-fold price reduction on your predictable load, and farm the rest out to the lowest bidder.



