
As https://twitter.com/DEVOPS_BORAT says, "At conference you can able tell cloud devops by they are always leave dinner for respond to pager."

Also, "What is happen in cloud is stay in cloud because nobody can able reproduce outside of cloud."

(And many other relevant quotes.)



Possibly the most relevant:

"Source of Amazon is tell me best monitoring strategy is watch Netflix. If is up, they can able blame customer. If is down, they are fuck."


So true: "In devops is turtle all way down but at bottom is perl script."

https://twitter.com/DEVOPS_BORAT/status/248770195580125185


To be fair, AWS downtime always makes the news because it affects a lot of major websites, but that doesn't mean an average sysadmin (or devops, whatever) would do better in terms of uptime with his own bay and his toys.


SoftLayer, which we use, seems to be much more reliable than Amazon. At least more reliable than that particular Amazon datacenter in Virginia.


I agree. I've been with SL for 3 years and never had an outage, apart from a drive failure one time, and even that was fixed within an hour.


But this is part of the problem: we have multiple web properties, and the fact that AWS issues can affect all of them at once is a huge downside. Certainly, if we ran on metal, we would have hardware fail, but failures would likely be better isolated than at Amazon.


@override: You are hellbanned.


When our gear is down, we can actually get into the datacenter to fix it.

What do you do when Amazon is down other than sweat?


1. Calculate the odds that a company with the resources of Amazon will be able to provide you better overall uptime and fault tolerance than you yourself could.

2. Calculate the cost of moving to the Oregon AWS datacenter.

3. Reassure your investors that outsourcing non-core competencies is still the way to go.

4. Try to, er, control your inner control freak.

;)


> we can actually get into the datacenter to fix it.

But better and faster than Amazon?

I'd rather spend three hours at home saying "Shit. Well, we'll just wait for Amazon to fix that" than drop my dinner, drive to the datacenter, and spend three hours setting up a new instance and restoring from backup.


When our datacenter is down (the cause of our last two outages), we can actually get into the datacenter ... and watch them fix it. Or not.


There's not much sweating to do, as it always comes up relatively quickly.


How long was Amazon AWS "degraded" today?


2 minutes if you checked the "multi-AZ" box on your RDS instances or ELBs.
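(For reference, that checkbox just tells RDS to keep a synchronous standby in a second availability zone and fail over to it automatically. A minimal sketch with boto3 of what enabling it looks like at provisioning time; the instance name, class, and credentials here are placeholders, not anything from this thread:

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    rds.create_db_instance(
        DBInstanceIdentifier="example-db",   # hypothetical identifier
        DBInstanceClass="db.m5.large",
        Engine="mysql",
        MasterUsername="admin",
        MasterUserPassword="change-me",      # placeholder; use a secrets store
        AllocatedStorage=100,
        MultiAZ=True,  # the "multi-AZ" box: synchronous standby in another AZ
    )

ELBs get a similar effect by spreading traffic across whichever availability zones you attach them to.)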




