
I don't disagree with your point per se, as I do love working with physical gear, but I do think you're grossly missing the point in places:

> "My take: a single machine (or two for HA) will be enough"

2 bare metal instances isn't HA. Not even close.

> "if you really want to go big separate the web server from the database but that's it."

I would always recommend separating the web server from the database server on anything professional. It gives you a clear path for scaling sideways (since you've already separated your back end, i.e. the database, from your application), it lets you tighten security (e.g. only allow access to the DB server from the web servers, via an unprivileged DB user), and it makes maintenance easier. Even if you're only running on one physical box, put the web server and the DB each in their own VM or LXC/Zone/Jail container.
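
To make the unprivileged-DB-user point concrete, here's a rough sketch (Python with psycopg2; the hostnames, database, and role names are made up for illustration, and the pg_hba.conf line is just an example of restricting where that role may connect from):

    import psycopg2  # third-party PostgreSQL driver: pip install psycopg2-binary

    # The web servers connect as a dedicated, unprivileged role. In PostgreSQL
    # you'd also restrict *where* this role can connect from in pg_hba.conf,
    # e.g. only from the web servers' subnet (example line):
    #   host  appdb  webapp  10.0.1.0/24  scram-sha-256
    conn = psycopg2.connect(
        host="db.internal",   # hypothetical private DB host
        dbname="appdb",       # hypothetical application database
        user="webapp",        # unprivileged role: granted only SELECT/INSERT/
        password="...",       # UPDATE/DELETE on the app's tables, nothing else
    )

    with conn, conn.cursor() as cur:
        cur.execute("SELECT id, title FROM posts ORDER BY id DESC LIMIT 10")
        for row in cur.fetchall():
            print(row)
    conn.close()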

> The other day I saw quad E7-4870 (yeah won't win any single thread contest but has 40 cores and 80 threads) 512GB RAM servers for $299 a month, with 1TB RAM for $499. Had a low end 2TB SSD for boot and you could add 8x1TB HDD w/ HW RAID for $40...

I work with both bare metal servers matching your description and with self-hosted and private clouds. Frankly, I think your rant misses one of the most important points of working with AWS, and that's the convenience and redundancy the tooling offers. AWS isn't just about single instances; it's about having redundant availability zones with redundant networking hardware, about being able to have disaster recovery zones in whole other data centres, and about having all of the above work automatically.

Getting our self-hosted stuff to even close to the level of tooling that AWS offers took months of man hours and considerably more initial set-up cost. Having to buy at least two of every piece of kit for redundancy, having BT lay two dedicated internet links (we have 3 now) just in case a builder accidentally cuts one of our lines, and having core infrastructure replicated off site all add considerably to both the set-up time and cost. So yeah, for small businesses and personal blogs AWS is a bit overkill. But you cannot use the "high availability" argument and say "2 physical machines is enough".
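
As a trivial illustration of the multi-AZ point, a sketch using boto3 (assumes AWS credentials are already configured; the region name is just an example):

    import boto3  # AWS SDK for Python: pip install boto3

    # Each AWS region is made up of multiple isolated availability zones,
    # so spreading instances across AZs survives a data-centre-level failure
    # without you buying, racking, or cabling anything yourself.
    ec2 = boto3.client("ec2", region_name="eu-west-1")  # example region
    for az in ec2.describe_availability_zones()["AvailabilityZones"]:
        print(az["ZoneName"], az["State"])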

Disclaimer: I've worked for clients such as Sony, UEFA and News International, as well as many smaller but still sizable national publications. Our infrastructure has consisted of both scaled-up physical hardware and scaled-sideways virtual machines, and frankly I/we wouldn't be able to offer the kinds of services we do, nor the kind of uptime we do, without running a fleet of virtualized web servers.




Let's be real on high availability. If you are honest with yourself, on the cloud that doesn't mean 2 AWS regions but 2 cloud providers. It's a yearly occurrence now that 90% of SaaS products stop working because AWS is broken, and it's not any of the actually redundant parts like power supplies that fail, but because a human pushed software or configuration and the whole thing came crashing down.


Idealistically I agree with you, but pragmatically I think more than 1 cloud provider isn't really worth the effort. It's not often that a whole region goes down, and even then I can't recall when a whole cloud platform last became inaccessible - usually it's just a region.

But once again it comes back to SLIs and client expectations.


But then, "more than 1 cloud providers isn't really worth the effort" and "2 bare metal instances isn't HA. Not even close." are really incoherent.

Two machines with two internet connections and a good UPS easily match the availability of AWS.
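
Back-of-an-envelope maths on that (the per-machine availability figure here is an assumption, not a measurement):

    # If each box is independently up 99.5% of the time (an assumed figure),
    # and failures really are independent, an active/active pair gives:
    per_machine = 0.995                # assumed single-machine availability
    pair = 1 - (1 - per_machine) ** 2  # P(at least one box is up)
    print(f"{pair:.4%}")               # -> 99.9975%

    # The independence assumption is the hard part: shared power, a shared
    # uplink, or a shared switch collapses both boxes into one failure domain.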


Not really, as HA on physical hardware would be more like 4 machines if you factor in the DB, plus 2 stacks of switches. But really, if you're running HA then you'd probably want 3 web servers rather than 2, so you can perform maintenance and still have redundancy. Which means you'd also need 2 load balancers and some method of code deployment, which will usually mean at least one other box or a SAN. If your application is database heavy with lots of reads then you might also want memcached/Redis, or maybe other caching servers like Varnish. Bear in mind that if your site is slow and unresponsive then it's as good as unavailable.

This is all in one physical location as well so you'd need to double this spec again.

Then once you've built all of that, you'd probably want to put it behind a CDN as leased lines are expensive.

Only then are you starting to reach feature parity with what I've described in my first post, and there will be lots of kit I've not even touched on.

However, even if you do just run 2 VMs (web and DB) on each of the 2 physical boxes and don't need Redis etc., you still need to double your spec just for the multi-region point I raised earlier.
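
To make the "3 web servers rather than 2" point above concrete, a rough sketch of the check-and-skip logic a load balancer does for you (hostnames are hypothetical):

    import itertools
    import urllib.request

    # Three backends so one can be pulled for maintenance while the other two
    # still provide redundancy (hostnames here are made up).
    BACKENDS = ["http://web1.internal", "http://web2.internal", "http://web3.internal"]

    def healthy(url: str) -> bool:
        """Very crude health check: does /health answer 200 within a second?"""
        try:
            with urllib.request.urlopen(url + "/health", timeout=1) as resp:
                return resp.status == 200
        except OSError:
            return False

    def next_backend(pool=itertools.cycle(BACKENDS)):
        """Round-robin over the pool, skipping backends that fail the check.

        The default argument is evaluated once, so the cycle persists
        between calls.
        """
        for _ in range(len(BACKENDS)):
            candidate = next(pool)
            if healthy(candidate):
                return candidate
        raise RuntimeError("no healthy backends")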


> So yeah, for small businesses and personal blogs AWS is a bit overkill.

Even then the answer is: it depends what you need. I run blogs on S3 + CloudFront. That's effectively content versioning plus a geo-distributed caching CDN for pennies. AWS is not just EC2.
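
For the curious, a minimal sketch of what publishing to such a setup looks like with boto3 (bucket name and paths are made up; the bucket, the CloudFront distribution, and DNS still need setting up separately):

    import boto3  # AWS SDK for Python: pip install boto3

    s3 = boto3.client("s3")

    # Upload one page of a static site; CloudFront then caches it at edge
    # locations worldwide. Bucket and key names here are hypothetical.
    s3.upload_file(
        Filename="site/index.html",  # local build output
        Bucket="my-blog-bucket",     # example bucket name
        Key="index.html",
        ExtraArgs={"ContentType": "text/html", "CacheControl": "max-age=300"},
    )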


True. One of my personal sites is a static site hosted on S3 and it costs me literally just $1.50 a month.



