
My Raspberry Pi at home has had less downtime in the past 3 years than AWS.

Local datacenters in the city had even less.

I'm not sure where AWS is supposed to get its famous reliability from, but it's not in uptime. (I can't comment on storage reliability, because I only write a few terabytes of data a month; but otherwise there's RAID 5 or other RAID setups to keep data intact.)

AWS has its advantages in its immense scalability within seconds, and in its convenience.

But its uptime isn't much better than that of most home connections.

Home statistics:

Power downtime since 2006: 29 minutes.

Internet downtime since 2006: 6 hours in 2014, plus two 30-minute outages in 2016.

This is on a 100/40 DSL line nowadays (all but one of the outages happened while switching ISPs), without any uninterruptible power supply, battery, or generator.

For comparison, this works out to an uptime of about 99.99%: the same as AWS advertises, and better than what they actually delivered this year or last.
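
A rough back-of-the-envelope check (a minimal sketch; the ~11-year window and the combined outage total are just my reading of the numbers above):

  # Availability estimate for the outages listed above, assuming ~11 years since 2006.
  hours_per_year = 24 * 365.25
  total_hours = 11 * hours_per_year             # ~96,400 hours
  downtime_hours = 29 / 60 + 6 + 2 * 0.5        # power (29 min) + internet (6 h + 2 x 30 min)
  availability = 1 - downtime_hours / total_hours
  print(f"{availability:.4%}")                  # ~99.992%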



You probably do not get how this works. Let me try to explain: when you talk about the uptime of your Raspberry Pi, you are looking at a single, very simple instance of a computer. It's really easy to get an insane uptime out of a single machine.

Here's one for you:

  > uptime
    02:52:56 up 714 days, 16:53,  1 user,  load average: 0.00, 0.00, 0.00
Which is pretty average for a small, underutilized server. Essentially the uptime here is a function of how reliable the power supply is.

But that's not what AWS is offering.

They offer a far more complex solution which, by the very nature of its complexity, will have more issues than your - and my - simple computers.

The utility lies in the fact that if you tried to imitate the level of complexity and flexibility that AWS offers, you'd likely not even get close to their uptime.

So you're comparing apples and oranges, or more accurately, apples and peas.


Agreed. What I question is whether a lot of that complexity is actually needed for many of the systems being deployed. For example, people are building Docker clusters with job-based distributed systems for boutique B2B SaaS apps with a few thousand users. Is the complexity needed? And how much complexity needs to be added just to manage the complexity?


How am I comparing apples and oranges?

The previous posters said that I should use AWS, because anything I set up myself will have more downtime than AWS.

Now. I've actually set up a few systems.

Some on rented dedicated servers, some on actual hardware at home.

Including web apps, databases backing dozens of services, etc.

As mentioned above, all of them have better uptime than AWS.

How am I comparing apples with peas if this is exactly the point made above — that even for simple services I should use AWS?


> How am I comparing apples with peas if this is exactly the point made above — that even for simple services I should use AWS?

Because a single instance of something simple outperforming something complex does not mean anything when it comes to statistical reliability. In other words, if a million people do what you do, in general more of them will lose their data / have downtime than if those same people hosted their stuff on Amazon. The only reason you don't see it is because there is a good chance that you are one of the lucky ones if you do things by yourself.

And that's because your setup is extremely simple. The more complex it gets, the bigger the chance you'll end up winning (or rather, losing) that particular lottery.
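
To put a rough number on that lottery (the 2%-per-year incident rate below is purely illustrative, not a measured figure):

  # Survivorship sketch: any one self-hoster will probably look fine,
  # but across a large population the incidents reliably show up.
  p_incident_per_year = 0.02    # assumed annual chance of serious downtime or data loss
  years = 3
  hosters = 1_000_000

  p_clean_run = (1 - p_incident_per_year) ** years
  print(f"chance a single hoster sees no incident: {p_clean_run:.1%}")             # ~94%
  print(f"expected hosters with an incident: {hosters * (1 - p_clean_run):,.0f}")  # ~59,000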


> The only reason you don't see it is because there is a good chance that you are one of the lucky ones if you do things by yourself.

Or maybe because I have less complexity in my stack, so it’s easier to guarantee that it works.

Getting redundant electricity and network lines, and getting redundant data storage solutions is easy.

Ensuring that at least 2 of 3 machines behind a load balancer are working is also easy.

Ensuring that, in a complex system of millions of interconnected machines with services that have never been rebooted or tested in a decade (see the AWS S3 post-mortem), nothing ever fails is a lot harder.
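
A quick sketch of the difference (the 99% per-machine and 99.99% per-service figures are assumed purely for illustration):

  # "At least 2 of 3 machines up" vs. a long chain of interdependent services.
  from math import comb

  def at_least_k_of_n(p, k, n):
      # Probability that at least k of n independent machines are up, each with availability p.
      return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

  def serial_chain(p, n):
      # Availability of n services that all have to be up at the same time.
      return p ** n

  print(at_least_k_of_n(0.99, 2, 3))   # ~0.9997 -- three mediocre machines behind a load balancer
  print(serial_chain(0.9999, 50))      # ~0.995  -- fifty excellent but interdependent services

Three cheap machines that are each only up 99% of the time already beat a chain of fifty 99.99% services that all have to work at once.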


You're right. If you run fairly low-volume services that don't need significant scale, you can possibly achieve better uptime than Amazon. You'll probably spend significantly more to get it, though, since your low-volume service could probably run on a cheap VM instead of a dedicated physical server.

You're also likely rolling the dice on your uptime, since a hardware failure becomes catastrophic unless you are building redundancy (in which case you're almost certainly spending far more than you would with Amazon).


Actually, I've calculated the costs: if you only need to build for one special case, even with redundancy you tend to always be ~3-4 times cheaper than the AWS/Google/etc. offerings for the same thing.

But then again, you have only one special case, and can’t run anything else on that.



