Ask HN: Anyone hosting their site on servers in their garage?
23 points by iamelgringo on Aug 2, 2010 | 87 comments
Why aren't more startups doing this?

I'm all done with slow disk on cloud hosts. I have no budget to pay for a colo or a dedicated host. I'm planning on building a server with commodity components and an Intel SSD. I figure I can build a server with a quad core AMD chip, an Intel SSD, and extra memory and disks I have lying around for about $500.

A Comcast Business Broadband package gets me 22 Mbps/5 Mbps for $100 a month. Double that for 50Mbps/10Mbps.

I don't need 5 nines of reliability. I'm willing to deal with a couple of hours of downtime in an emergency while we're getting off the ground.

Why aren't more startups doing this? I understand that you might want to buy decent "enterprise" hardware at some point if your site takes off. But, hardware is dirt cheap compared to the crap offerings you get on cloud hosts. Am I missing something?




Over a year this costs you about $1200 for bandwidth + $500 for hardware + electricity/service. Let's say an even $1600, or ~$135/month.

$135/month buys you two 1.5GB Linode instances or one 2GB Slicehost instance (I use Slicehost but they seem to be much more expensive than Linode).
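For what it's worth, the arithmetic above can be sketched as a quick script. The electricity figure below is my own guess (a ~100W box at typical residential rates), not a number from the thread:

```python
# Back-of-the-envelope self-hosting cost, using the numbers in this thread.
# The electricity estimate is an assumption (~100W box, ~$0.14/kWh).

bandwidth = 100.0        # Comcast Business, $/month
hardware = 500.0         # one-time: commodity quad-core box + Intel SSD
electricity = 10.0       # assumed $/month

months = 12
first_year = bandwidth * months + hardware + electricity * months
print(f"first-year total: ${first_year:.0f} (~${first_year / months:.0f}/month)")
# → first-year total: $1820 (~$152/month)
```

That lands a bit above the $135/month estimate; amortize the hardware over two years instead and it drops to roughly $131/month.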

The reason I see renting VPSs, especially at the beginning, as a smart thing is that you are postponing the actual hardware costs towards a later time, when you are (presumably) profitable. And if you decide to cancel the whole thing, it's just a matter of shutting down the instances and that's it. It just seems much more convenient.

Now, if you already have the hardware and you trust the electricity/bandwidth provider you could self-host. But investing time and money in hardware and self-hosting might just be the one thing that stops you from doing the right job software-wise. (Of course, unless your service depends precisely on the fine-tuned hardware that you might manage yourself, but I don't think you are in this situation -- it's just a cost discussion).


Plus your time to spec and build the boxes and to maintain them. If your time is worth $100 an hour that is going to be a lot more per month than $135.


When you're bootstrapping a startup, you're more likely to have time than the money that your time is worth.


You're possibly double counting the bandwidth costs if he's also using the connection for his workstations.


That's absolutely correct. I'm already paying that for my connection at home, so the bandwidth costs are going to be a wash.


It's not really about cost, although that's part of it. I didn't mention it in my original question, but I also need quite a bit more performance than I'm getting with EC2.

It's about getting an order of magnitude more CPU (quad core at 3 GHz rather than a single compute instance on EC2, which is spec'd at a single-core 1.0 to 1.2 GHz Opteron or Xeon from 2007; ref: http://aws.amazon.com/ec2/faqs/#What_is_an_EC2_Compute_Unit_...). I need the extra CPU for what I'm doing.

But the killer feature for me is getting several orders of magnitude better disk IO by using SSDs. I'm doing a ton of relational queries for my site, and the IO speeds on EC2 are killing me. Linode or Rackspace Cloud might be a bit faster than EC2, but the speed differential between SSDs and rotating disk is huge.

Running my datastore off of SSDs rather than spinning disks allows me to stick with Postgres as my db. While NoSQL is sexy and quite fun, I'm very leery of choosing a NoSQL solution: I really don't want to spend the extra development time bolting the relational features I need onto a non-relational database.
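To make the "relational features" point concrete, here's the kind of thing SQL gives you for free - a join plus an aggregate - shown with sqlite3 standing in for Postgres. On a key-value NoSQL store you'd reimplement this in application code. The schema is hypothetical, purely for illustration:

```python
# A join + aggregate that any relational database handles natively.
# Hypothetical schema: users and their posts.

import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE posts (id INTEGER PRIMARY KEY, user_id INTEGER, title TEXT);
INSERT INTO users VALUES (1, 'ann'), (2, 'bob');
INSERT INTO posts VALUES (1, 1, 'hello'), (2, 1, 'again'), (3, 2, 'hi');
""")

# Posts per user, computed by the database rather than in app code.
rows = db.execute("""
    SELECT u.name, COUNT(p.id)
    FROM users u JOIN posts p ON p.user_id = u.id
    GROUP BY u.id ORDER BY u.name
""").fetchall()
print(rows)  # → [('ann', 2), ('bob', 1)]
```

On a plain key-value store, that's a scan over all posts plus a hand-rolled counting dict - exactly the "extra development time" being avoided here.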

So, while it might take me some time to maintain the server, it's going to save me a crapload of time.


Well, if it's very high disk IO that you need then you either self-host or colocate. Go with self-hosting and see how far along you get; you can always colocate afterwards.


Also Linode and Slicehost handle power and network reliability issues better than your local service providers (probably).


It's fine until your garage burns down / floods, or someone digs through your street's connection.

Those things are unlikely, but even with a backup you're going to have pretty horrendous amounts of downtime. You could lessen the impact by having some off-site secondary system ready to go at any moment, but then at that point you might as well be renting a server.


More than calamities, I'd worry about simply going away for a week or two and having something happen. With Linode/whoever, even if I'm somewhere remote, I suppose I can at least call or write email and get some help. If everything's at my house, what would I do? Leave the keys to a friend? "Can you feed the cats and take care of the servers in case they have a problem at 3AM?"


> "Can you feed the cats and take care of the servers in case they have a problem at 3AM?"

LOL now this would be a friend.


And depending on where you live, you could add: until someone walks in and walks away with your server (and all of your users' data on it).


As opposed to some government agency taking a hosting provider's server, and you happen to be on the same VPS as the "suspect"?


Well, according to this article [1] a burglary happens every 15 seconds. I highly doubt any government agency is that efficient, but even if it did happen, with a backup you could be back live in a couple of minutes. Just like the scalability comments below allude to, the key is getting the infrastructure right; then everything else is easier.

[1]: http://www.washingtonpost.com/wp-srv/artsandliving/homeandga...


Actually, I think the key is what you just mentioned...

backup.


A backup won't do you any good if you don't have something to restore it to, so that is why I said infrastructure is the key.


Infrastructure won't do you any good if you don't have a backup, like you mentioned in your GP post.


Worst comes to worst, I can walk into Fry's and throw together something shitty. (Note: I don't recommend running prod on Fry's hardware... just saying, if you said I needed a server in two hours, I could do it.)

if your data is gone, well, I can't help you.


Besides fire/flood/theft/etc, is the garage clean and climate controlled? I used to run a server in a basement during college, and burned through 2 disks and one motherboard before I finally figured out that warm, damp, and dusty basements with dirty power supplied by 1920's wiring weren't a great place to run servers.

For me, it's a peace of mind thing. I have better things to worry about than hardware.

Also, read the fine print on that Comcast plan. Their residential service is now capped at 250GB (not sure about business-class). In my experience, Comcast is the last company I would want involved anyways. Our service goes out frequently enough that I now have a backup WiMAX modem for my laptop.

Edit: thanks olalonde


piece of mind -> peace of mind


Probably the startups that do something like this don't really talk about it.

The negative replies here are one reason. Competitive advantage may be another.

Personal example: I had a dedicated rackspace server for about $650/month. I was doing basic web hosting, but also streaming video via Flash Media Server.

Now the video server and web server need not be on the same physical hardware. No more than 50 users simultaneously stream video. However, bandwidth overages pushed that monthly fee to over $1600 several months ago (and that was only two weeks into the month).

So, I switched from Rackspace to some commodity php/mysql all-you-can-eat webhost. I then set up additional video servers on a Verizon FiOS Business package, with a 35/35Mbps connection.

Now the video is not crucial to the success of the website, more like icing on the cake, so some downtime is acceptable in my case.

However, I do have a UPS that can run the server for approx. 14 minutes under load, which is enough time for me to manually switch to a backup generator.

Of course, I had all these things already (UPS, generator).

For me, it was a no-brainer. $150/month with 5 fixed IPs and I provide my own physical hardware PLUS unlimited bandwidth, versus $650/month.

The money saved can be spent on hardware or software. As traffic increases beyond the capabilities of the FiOS connection, I may consider Rackspace again.

OTOH, I have several buddies that would be willing to colocate additional hardware at their FiOS-enabled locations.

It simply depends on your needs.

Be careful with Comcast business, though. I believe they have some pricey install fees for business accounts, unless you sign a 3-year contract.

BTW, it is not in the garage, it is in a side room with its own AC unit.


Right on - it is a good way to test a concept for some time, without suffering from high costs.

If/when it catches on, it will be time to externalize. I did something similar (~same costs) for a project that was a huge gamble - risky, but with excellent returns if it succeeded. Turns out it didn't work, which was the most predictable outcome, but I lost much less with home hosting than with dedicated hosting.

Some people will say that's not trusting one's project. I disagree - like you allot time, you should allot a budget to any given project.

It should succeed or fail within the given budget (which can always be adjusted depending on the situation) unless you want to go bankrupt for a bet that failed.


$100/mo? That's $40/mo. more than I charge for 1U of real datacenter space with a 5Mbps (shared) connection. Plus, if you actually start generating a lot of traffic I'm not going to randomly cut you off or throttle your connection like Comcast might.


you know what bothers me? when people quote co-lo price in rackspace without power. If you are willing to go with 1/40th of a 15a circuit, 33.7 watts usable, I could sell you 1u for $20/month and I'd be doubling my costs. Of course, running almost anything in that power envelope would be nigh impossible. that's barely enough power to run three 3.5" disks and nothing else. the OP is talking about a quad-core CPU, so we're probably talking at least 120 watts. If you are selling 120 watts and a reasonable amount of bandwidth for $60/month, you are significantly below market (and I'd like more information)
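(As an aside, the 33.7-watt figure above roughly works out if you assume a 120V, 15A circuit derated to 75% for continuous load - my reconstruction, not stated in the comment:)

```python
# Rough reconstruction of the per-1U power share quoted above.
# Assumption: 120V, 15A circuit, derated to 75% for continuous load.

volts, amps, derate = 120, 15, 0.75
usable_watts = volts * amps * derate      # 1350W usable on the circuit
per_slot = usable_watts / 40              # 1/40th of the circuit per 1U
print(f"{per_slot:.2f} W per 1U slot")    # → 33.75 W per 1U slot
```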

but yeah, you do have a point, if he's willing to spend that kinda scratch on connectivity he can get a box co-located. From what I've seen of the market around here, he's going to be a bit over $100/month (but if you are way cheaper than that... seriously, lemme know. I get people pestering me all the time about co-lo and I usually send them away.)


I include 1 UPS'd outlet. I don't get overly concerned about power draw, it averages out. There is only so much power you can draw in 1U. People coming in with a beefy-ass server are going to tend to want dedicated bandwidth and a small subnet of their own (if not immediately, then when their operation grows), which moves them out of the $60/mo package and in to something more custom.

But yes, power IS one of the more costly bits of running a datacenter, and I could see where some sites would expect you to run no more than a night-light on the power that is included.


>There is only so much power you can draw in 1U

that sounds like a challenge to me, and is the opposite of my experience.

I used to run a bunch of these:

http://www.supermicro.com/Aplus/system/1U/1021/AS-1021TM-T_....

with high power CPUs, the goddamn thing would draw four amps. with the advent of socket g34, quad-cpu systems now cost no more than 2x dual-cpu systems, even without the 'two in 1u' bit:

http://www.supermicro.com/Aplus/system/1U/1042/AS-1042G-TF.c...

where are you located? I'm in the bay area, (where power is super expensive) and my experience has been that power doesn't average out. people keep wanting to bring in ancient dual netburst Xeon space heaters, "It's only 1u!" even though the goddamn thing eats two and a half amps.

this, you see, is why I don't do co-lo anymore.

Now, it's possible that power is just so much cheaper for you than it is for me that it doesn't matter... either way, I want more details. seriously, right now I'm recommending other co-lo places on my co-lo order page... http://prgmr.com/san-jose-co-location.html I know the guy running rippleweb is in sacramento, and is paying around 1/3rd of what I do for power.


> where are you located?

Massachusetts.


It's been a few years, but I used to do some webhosting from my house on "business class" cable. It wasn't that much of a sysadmin task once I had things set up. But what killed us was our connection's downtime. I think over a year we were disconnected at least 5 times that we noticed, ranging from a couple of hours to a 2-day stretch over one weekend.

One tip: if you're not a cable TV customer (just data), when they connect you, see if you can get the tech to physically label your connection in the junction box out at the street with something like "Business Broadband Customer". We were disconnected at least twice by random Larry the cable guys working on neighbors' stuff. He'd see us connected, but we wouldn't show up as a cable TV subscriber so he'd disconnect us. After the second time, I got our guy (same company, but different division) to label our connection and it stayed connected. That solved our longer outages, but we still had shorter ones.


I did the same a while back in early 2007, when FiOS was introduced in my city in France by Orange.

I needed a good server for a custom-made webapp (8G RAM minimum, 500 GB with a spare, quad core 64-bit - that was a little expensive in 2007). The cost analysis revealed that a low-end Dell with a decent CPU, loaded with RAM, plus dedicated "home" hosting was the most reasonable option, compared to a VPS or rewriting the app.

I paid 80 Eur/month for 100 Mbps symmetrical - add to that power and lost rent costs (it was in a dedicated room of my apartment) for ~150 Eur/month total.

The big advantage was full control - I could do whatever the hell I needed with my hardware. It doesn't take too much time to do the administration, as you reported.

The only drawback was the Orange Livebox - a brain-dead box sold to home users (100 Mbps/20 Mbps for 50 Eur/month) that had far too many problems for any real use.

Seeing that downtime in the first couple of months was approaching 10 minutes a day, I hooked the optical fiber adapter's RJ45 directly to the server. With some reverse engineering, running PPPoE on a specific VLAN worked like a charm until this June, when I decided to pull the plug.

The project didn't catch on; it was very innovative in 2006 but is only moderately innovative now - time to kill it (parts of it will be reused for a nonprofit helping Haiti).

Current hardware costs make it more logical to rent instead of build/host/maintain at home.


Way back in 2007 I launched my first successful Facebook app on a shared server from asmallorange.com that cost me $3 a month. Said Facebook app was making $50 a day when it outgrew that "server". I upgraded to a dedicated server with Softlayer that cost $400 a month. Said app was generating $3k a day in ad rev when it outgrew that server. Also, I'm not some rockstar coder. Back in those days I didn't even know to index the join columns in MySQL. Had I done that, I could have squeezed more traffic through one box. After this experience I've decided that the majority of web apps can be launched for free.


Because you are in the software business, not the hosting business.

Over the long run, a hosting company should be able to keep your website up and running "in less time", "higher quality", and "cheaper" than you can.

If your business can host websites more efficiently and effectively than a hosting firm, maybe you are in the wrong business and should become a hosting company for other firms?


Well, holy cow. If he can cook a frozen pizza cheaper than ordering from dominoes maybe he should be in the restaurant business instead of the software business.


Not exactly - what is being proposed is more along the lines of making your own dough and sauce and then a pizza, versus ordering a pizza or cooking a frozen one. Your homemade pizza might taste a bit better, but why are you spending your time making pizzas when all you want to do is eat them?


Because they taste better.


Which brings us back to the very first point made by kitcar. If you can make pizzas that taste better than Dominos and have a comparable price, then why aren't you a pizza company?


Now I understand. Anyone who is a good cook, should quit whatever job they have, unless they work in a restaurant as a cook, and start a restaurant.


Interesting point. But, I don't think it applies. We're in the computing business. Computing is not just software. It's hardware. Developer time is expensive, hardware is cheap. Throwing cheap hardware at a problem rather than spending weeks or months coding around a VPS's performance limitations seems like a no-brainer to me.

My particular use case is that I need performance and cheap computing power, and I'm willing to sacrifice a bit of reliability to get there.

And, I'm sure that over the long run, you're right. A hosting company will be a better choice.


Core competency ESP?


I'm always baffled by this question. What on earth gives anyone the perception that data services are only about a connection and a server? How many concurrent TCP/IP sessions do you think that "free" Comcast cable router will support? I can guarantee you that you can overrun its capabilities with even a single dedicated server. You might have a few spare UPSs lying around, but what about HVAC? What are you going to do when a squirrel blows the only circuit servicing your home? Sure, your servers and connection might stay up, but how long can you run without overheating? When you enumerate the things that data centers have and your garage does not, I can't see how it makes any sense:

* Redundant electricity/generators
* Real HVAC systems (also on generators)
* Fire suppression systems
* Redundant connectivity
* Connections that can handle high levels of concurrency
* Secure facility

Never underestimate the impact of downtime on your business' viability. At any given moment, your service will be "critical" to someone. Oftentimes the loss of confidence can't be regained, and to be brutally honest, that loss of confidence would be justified given that you think it's OK to run your hosting out of a garage.

Given how small the cost delta is between hosting infrastructure at your home versus paying for something like a VPS, you're assigning a very small value to your customer's importance. As a customer, how am I supposed to feel about that? That is probably the single greatest reason more startups aren't doing this. You have to fight and claw for every customer you get. I'm not willing to have them walk out the door because I decided I could save $70/month on hosting by doing it from my garage. If you can't sustain the hosting costs required to run your business today, you won't be able to do it six months from now. Time to reassess your model.


I did it for my company. Except replace garage with basement. I have a business ISP account with a dedicated staff rep and 24/7 on-site repair. It costs just $77 a month for 10/5 with 5 static IPs.

I purchased an older HP ProLiant server off eBay with 12 drives for just under $600 shipped for everything. I installed VMware ESXi on the machine and then created 5 virtual machines: one for an email server, one for a database server, one for web, and two staging machines for database/web. Total cost for those machines was zero, and thanks to virtual appliances I can expand.

At that point I have redundant power supplies, raid hard drives, and plenty of backup swaps. I also can configure the server whenever to allocate resources as needed to the machines running. I can quickly copy and create a new server whenever I need one, which is really nifty.

In the end, the only downtime I have is when my power is totally out. I usually see about 97% uptime, excluding regular maintenance. The machine running constantly makes less of an impact than my Dell XPS 720.


Brilliant. I've been looking at VMware ESXi as well as Microsoft's virtualization offering. Microsoft's offerings aren't nearly as nice, but I'm a BizSpark member, so the licenses are free for me right now. And you're able to migrate live servers from one box to another, which is a huge plus.

Thanks for the comment.


Awesome.

You could add an auto-switchover generator to that setup for less than $1.5k.


What users notice most is latency, not bandwidth.

For example, if every page asset has an extra 1 second latency, then you are adding several seconds to your base page load time as all those 1 second delays stack up. First the main frame has to load, then if you have a subframe, the subframe has to load, then finally the content in it. Nest frames or other content, and suddenly your site seems sluggish for a single user and you're nowhere near your bandwidth limit.
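The stacking effect is easy to model: with nested, dependent resources, per-request latency adds serially instead of overlapping. A toy illustration, using the example's own 1-second figure rather than any measured numbers:

```python
# Toy model: each nested resource can only be requested after its parent
# loads, so per-request latency adds up serially.

extra_latency = 1.0  # extra seconds per request, from the example above

# main frame -> subframe -> content inside the subframe
chain = ["main frame", "subframe", "subframe content"]

total = extra_latency * len(chain)
print(f"{len(chain)} dependent requests add {total:.0f}s before content shows")
# → 3 dependent requests add 3s before content shows
```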

I've had FiOS and Comcast, and on both latency wasn't great. Latency on their networks varies widely by location and is harder to keep consistent and build a good marketing message around, so they don't sell on latency.


the latency difference between fios and co-lo is going to be tiny. 10ms vs 50ms at most. (yeah, 5x difference, but it's still fifty friggen milliseconds)


Same for cable - on TWC I can ping 1,500 miles away in ~60ms.

L3 latency is usually a small fraction of the initial website load time. Inline the js + css, strip whitespace, enable gzip, use memcached, get a server that can fit the whole DB in RAM, etc.
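Of that list, gzip is probably the cheapest win. A quick demonstration of why it matters on markup-heavy pages (the sample HTML is synthetic, so the exact ratio will vary):

```python
# Gzip on repetitive HTML: the transfer size shrinks dramatically, which
# usually matters more than raw L3 latency. Sample page is synthetic.

import gzip

page = ("<html><body>" +
        "<div class='row'><span>hello world</span></div>" * 500 +
        "</body></html>").encode()

packed = gzip.compress(page)
print(f"{len(page)} bytes -> {len(packed)} bytes "
      f"({len(page) / len(packed):.0f}x smaller)")
```

Real pages are less repetitive than this sample, but several-fold reductions on HTML/CSS/JS are routine.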

Preventing packet loss is very important, though (a TCP retrans adds 2-3 seconds). I do think datacenters have a real edge there.


packet loss is pretty huge, and data centers, generally speaking, really are better in that regard.


there is no sense to plan for scaling when you are small.

you are wasting valuable resources for things that might not happen. Just spend $10/mo on a shared host and you'll be good to go.

if your product is in such a high demand, that it crashes from the traffic...then you can quickly spend a few bucks on a dedicated server. Until then preparing for millions of users is just stupid, since chances are, you'll only get a few hundred users that first week/month


"there is no sense to plan for scaling when you are small."

Agreed. And what's the marketing strategy? If there isn't one, then the business will stay small.


I've done this before when testing out micro business ideas, and if they take off then I get a more professional hosting setup.

Like you, I built a fast box myself from parts and made sure I had a good broadband provider with consistent upstream speeds.

If your idea's successful, you'll soon know about it.


While hosting anything too serious in your garage probably isn't a great idea, I've hosted stuff at non-traditional places before and been just fine. Nothing I'm doing this with is mission-critical.

We've got a rack at a local university that's in a large (8x10) closet with its own power, HVAC and fire suppression. The facility is fairly secure and you'd have to know two layers of door codes to get in there.

Yet we've got 6 servers in a rack. The servers in total cost us around $3000 (could have done it for less, but one of them we bought new and went overkill on), and we pay nothing for monthly hosting.

If you sniff around local universities, you can likely find someone that is cool with you throwing something in their rack for free or cheap. I know many of the dorms at MIT have several 42U racks on each floor and people throw stuff in those all the time. I know of others that have cheap hosting for affiliated members.

Time to call your alma mater?


Fire and thieves do happen (I've seen both a couple of times at customers who were hosting their servers themselves, unfortunately).

It's an option, though, if in your context you could tolerate a week (not a couple of days) of downtime and buying everything again!

Oh - and energy/peace of mind is, I believe, the scarcest resource for an entrepreneur, too :)


I host my sites on a server in my living room.

I use my HTPC at home as a server for my SVN and sites which haven't started making money for me yet. What I like about this setup is that I can deploy large amounts of data in an instant. I have a mostly static site with a few million pages I plan to update daily.

More importantly, now I can justify paying for the fast internet connection and keeping the PC running 24hrs.

The problem area is keeping backups. My server is a retired P4 I got from eBay for $50, which is not really reliable. I had to go through the pain of setting up all my backup scripts, which I guess I would have had to do anyway.

The P4 has enough processing power to handle all my sites' traffic while I'm watching youtube on it.

I also use the server to download anything and everything I need. It also serves as a file server and hosts some of my personal ssh tunnels and proxies.

Having said that, I would not recommend using a home server unless you really don't want to spend money on hosting or you are worried about bandwidth caps. Startups can benefit from a home server, but plan to move it to a real server once you start growing. Don't invest in home server hardware or try to upgrade; what you already have is probably good enough.


Yes, but I have an audience of at most twelve.

However, while I use extremely cheap hardware (I built a server for about $300 retail, quantity one), the power supplies will fail, and the time they choose to fail will not align with any time you would choose. If you live in an area subject to power failures, you are down if the outage lasts longer than, say, 30 minutes. If someone in your neighborhood sucks too much bandwidth, it is likely to pull from your allotment. If your lawnmower or the neighbor's lawnmower cuts the cable, who knows how long it will take to fix.

I have several UPS units, essentially for transient protection, two or three internet providers (cable plus DSL), and Verizon cellular internet access.

If I were looking for users in the hundreds or thousands or more, I would certainly go for a low-cost provider with a good stack. These are not that expensive and I hear that many do an excellent job.


You could also just rent the smallest server at hetzner.de for 59 EUR/mo and have them add a 128G SSD for 44 EUR/mo.

That way you pay 103 EUR/mo (about 134 USD) and skip a lot of headache.


This is the 2nd day in a row I've heard of hetzner.de. If they remove their one-time setup fee of 149 eur I might give them a try.


One-time setup fees seem like one of the dumbest things a hosting company can do. I can think of few more effective ways to keep me from trying a service.


Yeah, it is annoying; they do sometimes have specials where the setup fees are waived or reduced. I have to say their service has been excellent for me though (I've used them for years).

Note that you might want to check if hosting in Germany has any legal implications for you - German data protection laws are vastly stricter than in the US, say.


It's not just annoying, it's counterproductive. In my head, setup fees trigger warning bells in two areas:

1. The company is so un-automated that it actually needs considerable human intervention to prepare a server, and doesn't bother to include this basic operation in their total cost (they sell servers, after all).

2. It's a trick to squeeze at least 150 euro from you, because if I find out in the first months that their service is sub-par I can just cancel my subscription, but I'm not getting my 150 euro back.


As lsc explained, the setup fee is not normally for human labor but part of the host's cost calculation.

Serious hosts will give you free trials (before you pay the setup fee) any time, unless you ask for special hardware that they themselves have to buy first.


speaking as a hosting company, setup fees are /really really nice/ - I've got to go buy a rather expensive server and get it to you. Now, obviously, I've priced it so that I make a profit long term, but I need to go buy another server /now/, not long term.

Personally, I address this by having pre-payment discounts: cover your account for a year and I knock off 20% (and usually use the cash to go buy a new server). I also allow people to sign up and pay monthly (at a higher rate, of course), but with my current price/discount setup, the majority of my income is from pre-paid customers.


I wouldn't know what the margin is in this business, but I assume you buy a bit over capacity, specifically to allow new customers to come in.

Perhaps setup fees /kinda/ make sense if you are some high-quality name in the business and people will know they are staying long-term. But if I see a well-known company without setup fees and your company with a setup fee it won't take long to decide.

What you mentioned is far more decent: a bonus for payment in advance. But I don't see how they expect to charge me the equivalent of 3 months in advance just to test them out.


Yeah, well, I did what I did mostly 'cause I recognize that I don't have the name recognition to carry off the setup fees... your points as to the customer's perception of setup fees are all very valid.

Really, it's less about margin, though, and more about credit. If I had more, it wouldn't really matter... but I don't, so it does.

for a while I was renting out whole servers at nearly co-lo rates... with the catch that the customer needed to pay a setup fee that would cover the parts cost of the server up front.

It was pretty popular, actually; I only stopped doing it because it put me in a bad place as a provider... thing of it is, sometimes you really do need to fire your customers. I don't want to be in a position, though, where I get a bonus for firing my customers.


The smallest root server at hetzner is the EQ-4 package for 49 EUR/mo: http://www.hetzner.de/en/hosting/produktmatrix/rootserver-pr...


Also, if you explain that you are an American company (if you are), they will waive the VAT, which brings down the cost per month to 44 usd or something.


The reason is because it doesn't scale, and you will end up spending most of your time managing the server stack rather than deploying and selling the product.


We have a couple of servers at the office handling various duties like file serving, staging environment, backup email and support website in case our data center goes down. We used to run a few sites on them before we migrated to fully dedicated hosting.

It's not necessarily a bad idea, but I'd probably not do it again now that cheap VPSes are a commodity. Outsourcing the physical management allows me to not worry about assembling hardware, backups, power outages, network outages, burglary, fire and flooding. Neither will the server leave users hanging every time someone on the network downloads a lot of data.

Unless you're doing some serious processing, a cheap Linode VPS should hold you over until you're profitable enough to upgrade to some real hosting.


I'm done with slow disk on cloud hosts too. But actually, the one with slow disk was Slicehost, and I AM fed up to hell with that. They caused me no end of problems until I figured out they were the problem. It's a bit deceptive, really, since you're promised your fair share of resources - then it turns out your disk I/O is as low as 3 MB/sec. W T F.

Instead of going nuts and hosting in my garage, though... I just changed hosts. A rack on a colo would make more sense than the garage route, too.


3MB/sec isn't a fair share of a disk? I think it is.

to start, your cheap sata disk is going to max out at 100MB/sec for /sequential/ transfers. Now, cut that in quarters, because two different guests streaming data to the disk turn sequential into random, so you have 25MB/s. Double it, as they are probably raid-10ing the disks (at least, that's what I do), so we have a best case 50MB/s to share.

Now, they are probably in a box with 32GiB ram. say they give a gig to the dom0, and you are on a 512MiB guest. that means you are sharing with 62 total guests. So your fair share is less than 1MB/sec of hard drive transfer.

Of course, best case, nobody else is using the drive and you get all 200MB/s, but with 60 guests, that won't happen often.
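Spelled out, the back-of-envelope math above looks like this (all the inputs are guesses about a typical Xen host, not Slicehost's actual setup):

```shell
# fair-share disk math, assuming: one cheap SATA disk does ~100MB/s
# sequential, contention makes it random (/4), RAID-10 mirroring
# roughly doubles reads (*2), and 62 x 512MiB guests share the box
seq=100
random=$((seq / 4))          # 25 MB/s once access patterns go random
pooled=$((random * 2))       # 50 MB/s best case across the RAID-10 pair
guests=62
echo "pooled: ${pooled} MB/s"
echo "fair share: $((pooled * 1000 / guests)) KB/s per guest"   # ~806 KB/s, i.e. under 1MB/s
```

Change any one assumption (bigger slices, fewer guests, more spindles) and the per-guest number moves a lot, which is why measured throughput swings so wildly.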

Hard drives suck. Sharing hard drives makes them suck a lot more.

Note, I know almost nothing about the actual Slicehost setup; what I'm describing here is guesses about their setup based on what I know about Xen and what I know of my own setup.

My understanding is that Linode is perceived by light users to have better I/O throughput, but this is because they monitor and take action against heavy I/O users. (I think this is a good idea, and may implement it myself)


Your calculations assume I am talking about a bog-standard $20/mo VPS. Nope, I'm talking about a $160 a month system with 2GB RAM, 72GB of HD space, etc. As far as I know 4-8 of those fit on one of their systems. If I had a 256/384 slice, yeah, I doubt the rest of the system could handle enough traffic that the I/O would matter.

My 2-year-old, nothing-special home desktop gives me 72 MB/s. I expect a pricey VPS to perform at least as well. If not, I could get a dedicated server for $40 more.

Running hdparm, Slicehost gave me 3-48 MB/s. When it was down under 20, my server was choking on the simplest tasks - like, say, logging into ssh while actually serving HTTP requests. Performance was suffering. Customers complained. I tried tuning my DB. Nope, it wasn't my fault. System load ranged from 3 to 50 for no apparent reason.

The same test on Linode yields 98-165 MB/s. I moved the server to Linode (saving $80 a month!!), have added new features and increased usage, and performance remains excellent. No mysterious slowdowns, no high system load. Everything hums along without a hitch, proving to me it is Slicehost's problem, not mine. Thank god it is theirs to deal with now. This caused me a lot of stress and I'm so glad to be on a host that is giving me what I am paying for.

I'm sure you know more about this from that side, but this is my experience as user.

3MB/sec is unusable for a moderately busy server. The fact that other hosts give better performance says all I need to know.

If they gave a steady 30-40 MB/s, then going by your calculations, that's what I'm paying for and yeah, that's probably all I need. The unsteadiness, though, and the dips to near nothing - that is unacceptable. I'm not a hobbyist. This is a business, and other businesses depend on us.


Yeah, like I said, from what I hear, Linode is managing disk I/O well.

Have you ever gotten a warning about using too much IO from linode?

ok, so let's go back and look at the math again... so there were 14 or so guests on that 32GiB system, so if everyone was going at the disk, you'd get what, 3.5MB/sec - about what you were seeing.

I'm just pointing out, when you 'virtualize' a hard drive, you are sharing a resource that really doesn't share very well. you were getting your 'fair share' of the server at slicehost; it sounds like linode just does a much better job of keeping everyone from trying to use all available disk at all times.


Our I/O isn't really very high. It's under 1 MB/s on average, actually! So I have no idea what the issue is. You're right, 3 MB/s SHOULD be enough, because according to our Linode graph, the I/O hasn't been over 2 MB/s in the past week. I can only imagine that at certain times Slicehost must have been providing even less than when I tested it.

That's what is frustrating for me. We don't have a demanding application. It's well written and tuned, but I was having these problems, and as far as I knew I was paying enough to not have such problems. Maybe we're on a spacious server at Linode or were graced with idle neighbors... all I know is, the system load on Linode is muuuch more stable, (and low!) even though our system is doing twice as much. And, it costs half as much.


I didn't say 3MB/sec was enough... just that it was fair. there's a difference, you know.

I suspect the differences you saw after moving had to do with how active the other users of the disk were. this may have been chance, or it may have been Linode's policy of proactively doing things about heavy disk users.


Thanks for your insights on this.


Downtime doesn't sound so bad when your site is relatively unknown... What happens when you're fortunate enough to get your first piece of good press that directs a flood of traffic to your site? Seems to me that's when downtime is most likely to happen, and when you'd have preferred to pay professionals to prepare you for it.


I'm confused. I figured the new best way, with the insanely cheap prices, was the cloud. It saves you redeploying in the cloud later. Need more CPU, RAM, etc.? Click a button, clone a server.

Looking into Amazon even, their prices are fractions of a penny on bandwidth. And I see reddit dump all their servers, drop employee load, and it seems the home garage is rad for testing, but maybe not that great, as one day, if it all works out, you have to retool for the cloud anyway.

P.S. I am hosting non-critical stuff myself on a Comcast 50/10 line which is really 60/15, on a few Mac minis, which use next to no power and seem to stay up pretty well. I am just toying around with my personal ideas mostly. I like the fast local LAN access, and I jump to a VPN so I can see it like the rest of the world would. But it is only temporary, until the idea proves viable, and then off to the cloud I go.


Because disk IO on cloud hosting solutions generally suck. What I'm doing requires a lot of disk IO.


I have an unused Dell Pentium 4 box running my website at my parents' house :D

It just hosts an info site and blog for my android app on a pretty basic wordpress site that I have backed up. It's also a nice dev/filehost server. I figure I'll deal with scaling if I ever get to it.

The most visitors I've ever had is maybe 50 a day and who knows how many of those are robots. I wouldn't want to run any extensive server side code or an actual web app on it though.


You can get a root server for €50 upwards (eg. at http://hetzner.de). They have great hardware, a 100Mbps connection and a couple of TB of data transfer included.

See also http://news.ycombinator.com/item?id=1564897


why oh why do people not look around at dedicated host prices before going to that much trouble?

see this for example http://www.kimsufi.co.uk/ks/ (note: I don't work for OVH)

that's about $25/month; look at the specs for yourself.

the pros:
* if the hardware goes down in flames, they replace it. period.
* a nice choice of distros (Linux / BSD / Solaris / Windows)
* the fcking bandwidth (agreed, it will mainly be fast for Europe)

the cons:
* you will have to do the admin yourself (for the lower prices), and that can take time depending on what you need/do
* after using this you will find any broadband offer awfully slow
* you will not be able to live without a dedicated server ;)

I'm not trying to be negative here, but this kind of hosting is much cheaper than hosting the hardware yourself.


That box has only one disk, which is inappropriate for most uses. Sure, they'll put up another server for you, but that's not the hard part; the hard part is restoring your backup. Until you are large enough to have redundant everything, mirrored disk, I think, is a must.


so?

there is a 100GB external drive accessible via FTP, and rsync is your friend

at the very worst, when your server is on a 100Mbps link it's dirt cheap (in time and money) to do the backup to the cloud


if you do that, and you test your backup, and you are okay getting paged, waking up, following the procedure, then dealing with the half day of downtime, sure, that's fine.

but a mirror is still a whole lot easier. it just keeps chugging through the drive failure.


i would argue that if you are only spending $500 on hardware (I'm assuming 8GiB of ram there?) you are right on the borderline of co-lo vs VPS.

you can get reasonable co-lo for a hundred bucks a month, and ridiculous amounts of bandwidth for two hundred bucks.

also, total up your power costs for the server. the thing is, California charges you more the more you use; your top marginal kWh can be pretty expensive, especially if you live with other people.
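As a rough sketch of what that adds up to, assuming a 150W box running 24/7 billed at a $0.30/kWh top marginal tier (both numbers are placeholders; check your own bill's tiers):

```shell
# marginal power cost of an always-on home server, placeholder numbers
watts=150
kwh_month=$((watts * 24 * 30 / 1000))        # ~108 kWh/month
echo "${kwh_month} kWh/month"
echo "~\$$((kwh_month * 30 / 100))/month at \$0.30/kWh"   # ~$32/month
```

That alone is a meaningful slice of the gap between self-hosting and a cheap co-lo or VPS.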


Do you have a UPS in your garage?


Hosting is a commodity now. Stop wasting your time and focus on your code.


It's too early for you to need to scale, but it seems to me that a service like App Engine eliminates:

1- The cost. Right off the bat you're spending $1,700 that would probably cost you nothing if you hosted it with google.

2- The administration time - why spend any time configuring or administering servers? It doesn't benefit the product when you get this essentially for free.

3- The concerns about security, reliability, single point of failure issues, etc. These may not be critical now, but eventually they will be important and that means you'll have to move. Why not start on App Engine where this is taken care of?

4- Scalability. Again, not something you need now, but rather than re-architect your site and move it later, why not use App Engine from the beginning?

From my perspective, App Engine is such a compelling value proposition that there's no reason not to build your site there. If Google somehow becomes untenable, you can host App Engine sites on Unix using open source software... and the App Engine SDK is itself open source. So you're not locked in, other than the "lockin" of free service until you get big, and a great platform.




