I'm talking about his statement:

"Connecting to AWS managed services (s3, kinesis, dynamodb, sns) don't have this overhead so you can actually perform some task that involves reading/writing data."

That is due to network and colocation efficiencies. The overhead of managing such services yourself is another matter.




Not just the network overhead, but also the maintenance and setup overhead. I can spin up an entire full stack in multiple accounts just by creating a CloudFormation template.
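As a rough sketch of what that looks like with boto3, where the template file and stack name ("stack.yaml", "demo-full-stack") are placeholders, not details from this thread:

    import boto3

    cfn = boto3.client("cloudformation", region_name="us-east-1")

    # Read a template that describes the whole stack (VPC, databases,
    # autoscaling, etc.). The file name here is purely illustrative.
    with open("stack.yaml") as f:
        template_body = f.read()

    # One API call provisions everything the template describes;
    # tearing it all down again is a single delete_stack call.
    cfn.create_stack(
        StackName="demo-full-stack",
        TemplateBody=template_body,
        Capabilities=["CAPABILITY_IAM"],  # needed if the template creates IAM resources
    )

    cfn.get_waiter("stack_create_complete").wait(StackName="demo-full-stack")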

I’ve done stress testing by spinning up and tearing down multiple VMs, played with different database sizes, autoscaled read replicas for performance, ran a spot fleet, etc.
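The spot fleet part, for example, is a couple of API calls. A minimal sketch, where the AMI, instance type, fleet role ARN, and capacity are all assumed placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    # Request a fleet of spot instances. Every value below is illustrative.
    response = ec2.request_spot_fleet(
        SpotFleetRequestConfig={
            "IamFleetRole": "arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-tagging-role",
            "TargetCapacity": 10,
            "AllocationStrategy": "lowestPrice",
            "LaunchSpecifications": [
                {
                    "ImageId": "ami-0123456789abcdef0",
                    "InstanceType": "c5.large",
                }
            ],
        }
    )
    fleet_id = response["SpotFleetRequestId"]

    # Tearing the fleet back down is one call as well.
    ec2.cancel_spot_fleet_requests(
        SpotFleetRequestIds=[fleet_id],
        TerminateInstances=True,
    )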

When you need things now, you don’t have time to requisition hardware and get it sent to your colo.


As far as spinning up and down goes, a lot of this is solved with Docker, which also keeps you relatively platform independent.
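For example, a quick sketch with the Docker SDK for Python; the image tag and password here are placeholders, not anything from the thread:

    import docker

    client = docker.from_env()

    # Spin up a MySQL container; tearing it down later is
    # container.remove(force=True).
    container = client.containers.run(
        "mysql:8.0",
        detach=True,
        environment={"MYSQL_ROOT_PASSWORD": "example"},  # placeholder password
        ports={"3306/tcp": 3306},
    )
    print(container.id)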


So Docker allows me to scale up MySQL read replicas instantaneously? And I still have to manage the infrastructure.


Well, you can use a container service, or still use EC2.


And then you still have more stuff to manage now, based on the slim chance that one day, years down the road, you might move your entire multi-AZ redundant infrastructure, your databases, and all of the read replicas to another provider...

And this doesn’t count all of the third-party hosted services.

Aurora (MySQL) redundantly writes your data to six different storage devices across multiple Availability Zones. The read replicas read from that same shared storage, so as soon as you bring up a read replica, the data is already there. You can’t do that with a standard MySQL read replica.
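Adding a reader is just attaching another instance to the existing cluster's shared volume. A rough boto3 sketch, where the cluster name and instance class are made-up placeholders:

    import boto3

    rds = boto3.client("rds")

    # The new instance attaches to the cluster's shared storage volume,
    # so there is no data copy or replica seeding step.
    # "my-aurora-cluster", the identifier, and the instance class are
    # illustrative placeholders.
    rds.create_db_instance(
        DBInstanceIdentifier="my-aurora-reader-1",
        DBClusterIdentifier="my-aurora-cluster",
        Engine="aurora-mysql",
        DBInstanceClass="db.r5.large",
    )

    rds.get_waiter("db_instance_available").wait(
        DBInstanceIdentifier="my-aurora-reader-1"
    )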



