
Even taking into account the cost of skilled staff, AWS was never price competitive with in house, at least in my experience. I simply could never make the numbers work.

In my industry (telco) we had two teams: my team ran our own hardware, the other team ran less than 10% of our workload on an AWS stack that cost as much per month as we paid per year - including annualised capital costs.

They also had double the ops team size (!!), they had to pay for everyone to be trained in AWS, and their solution was far more complex and brittle than ours was.

Assuming Oxide would have been price competitive with what we were already using, I would have jumped at the chance to use them, I could have brought the other team on board, and I think it would have given us a further cost and performance advantage over our AWS based competitors.
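
For anyone who wants to sanity-check that kind of comparison, here is a rough back-of-envelope sketch (every number below is made up for illustration, not our actual figures): annualise the hardware spend over a depreciation period, add the running costs and salaries, and put that next to the cloud bill plus its own ops team.

    # Illustrative only - hypothetical numbers, not the figures from this post.
    # Annualise capex, add opex and staff, then compare against the cloud spend.

    HARDWARE_CAPEX = 400_000           # servers, switches, storage (one-off spend)
    DEPRECIATION_YEARS = 4             # write the gear off over four years
    COLO_AND_POWER_PER_YEAR = 60_000   # rack space, power, connectivity
    OPS_TEAM_PER_YEAR = 300_000        # in-house ops salaries

    AWS_BILL_PER_MONTH = 450_000       # hypothetical cloud invoice
    AWS_OPS_TEAM_PER_YEAR = 600_000    # double the ops headcount, also hypothetical

    in_house_per_year = (HARDWARE_CAPEX / DEPRECIATION_YEARS
                         + COLO_AND_POWER_PER_YEAR
                         + OPS_TEAM_PER_YEAR)
    aws_per_year = AWS_BILL_PER_MONTH * 12 + AWS_OPS_TEAM_PER_YEAR

    print(f"in house: {in_house_per_year:,.0f} per year")
    print(f"aws:      {aws_per_year:,.0f} per year")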




Were you using cloud-scale-style purchasing in house, or were you on enterprise servers with enterprise switches and enterprise storage?

The cloud is good for many users, especially if they migrate to cloud-native system design, but as a telco you would probably have facilities and connectivity already, which helps out a lot.

Companies like Mirantis, by choosing a technology completely inappropriate for distributed systems (Puppet), left a bad taste in the mouths of many people.

I implemented OpenStack at one previous employer to just convince them that they could run VMs, intending it to allow for a cloud migration in the future.

As they ran a lot of long-lived, large instances, it was trivial to make it cheaper to run in our own datacenter. Well, until I moved roles and the IT team tried to implement it with expensive enterprise gear and, in an attempt to save money, used FEXs despite the fact that I had documented they wouldn't work for our traffic patterns.

Same thing during the dot-com crash. I remember our cage was next to one of the old Hotmail cages with their motherboards on boards. We were installing dozens and dozens of Netras, and Yahoo was down the hall with a cage full of DEC gear... we went under because we couldn't right-size our costs.

A lot of the companies who saved a lot in cloud migrations were the same: decked-out enterprise servers, SANs, and network gear that was wasted in a private cloud context.

"Enterprise _" is often a euphemism for "we are fiercely defending a very expensive CYA strategy irrespective of its value to the company or the material risk."


We provided SaaS billing services to telcos. We were very successful in our market, but not very big. Just a couple of racks of gear.

Our production workload was pretty homogeneous, and we were super cheeky - we’d use previous-generation servers to keep capex down. We didn’t even use VMs, just containers; docker-swarm was good enough (barely). Our bottleneck was always IOPS, so we’d have a few decked-out machines to run our redundant databases. It worked fine, but I have subsequently enjoyed working with k8s a lot more.

We did use enterprise gear, but previous gen stuff is so much cheaper than current gen. So our perf per watt was not great, and we’d often make the decision to upgrade a rack when we hit our power limits, since we rarely had other constraints.
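
To make the power-limit point concrete, here is a tiny illustrative calculation (numbers invented, not ours): when the per-rack power budget is the binding constraint, perf per watt is what decides how much capacity a rack upgrade actually buys you.

    # Illustrative only - invented numbers. With a fixed rack power budget,
    # compare how much aggregate performance fits with old vs. new servers.

    RACK_POWER_BUDGET_W = 8_000   # what the facility allows per rack

    OLD_SERVER_W = 400            # previous-gen: cheap, worse perf per watt
    OLD_SERVER_PERF = 1.0         # arbitrary performance units per server

    NEW_SERVER_W = 500            # current-gen: pricier, better perf per watt
    NEW_SERVER_PERF = 2.2

    old_count = RACK_POWER_BUDGET_W // OLD_SERVER_W
    new_count = RACK_POWER_BUDGET_W // NEW_SERVER_W

    print(f"old gen: {old_count} servers, {old_count * OLD_SERVER_PERF:.0f} perf units per rack")
    print(f"new gen: {new_count} servers, {new_count * NEW_SERVER_PERF:.0f} perf units per rack")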

As mentioned elsewhere, we did use AWS spot instances for fractional loads like build and test. It’s not that we didn’t use cloud, it’s that we used it when it made sense.

All of that said, I do suspect the equation has changed - not with AWS, but with Vultr. I’ve deployed some complex production systems there (Nomad, Kafka+ZK, PG) and the costs are much closer to in house. They have also avoided the complexity of binding all their different services together. They also now provide K8s and PG out of the box, charging only for the cost of the VMs - as opposed to the wild complexity of AWS billing.

So maybe I’m coming around.


> in an attempt to save money used FEXs despite the fact I had documented they wouldn't work for our traffic patterns.

What is a FEX? Feline Expedited eXchange?


Cisco Fabric EXtender. Like a ToR (top-of-rack) switch, but dumber.


At my previous employer (which was telco-adjacent, in a way) we basically came to the same conclusion. Big plans were drawn up to locate only specific hardware in physical data centers and move 90% of the load to the cloud. It never left the planning stage because our VP could never get the numbers to work. Once you cross a certain threshold of services, scale, and reliability, you’re paying a premium to be in the cloud.


This has been my (admittedly limited and small-scale) experience - it's hard to make the cloud competitive unless you have something like vastly changing requirements, huge burst needs, etc.


Right - we made a lot of use of spot instances for building and testing. It’s great for that kind of fractional use, for sure.



