This of course ignores both that it's much easier to get your hands on a cluster of average machines than one massive bloody server, and all the non-performance-oriented benefits of running a cluster (availability, etc.).
It's much easier to request that a client provision 20 of their standard machines, or to get them from AWS. People don't like custom hardware, and for good reason.
Yes. Just yesterday in some other story everyone was arguing for the cloud because who wants to maintain their own hardware? This morning the hue and cry is "slap more RAM in that puppy".
I just spec'd out a 6TB Dell server. Price? It's already at $600K, and I haven't fully spec'd it out yet (just processor, memory, drives). Maybe that memory requirement is high (though it is about what I would need); 1TB alone is somewhat over $200K.
For the right situation that sort of thing maybe makes sense, though I'm SOL if I need high availability (a power outage, flaky internet, a failed RAM chip, etc. leaves me dead in the water).
So I would need to stand up a million dollars or more in equipment in several places, or just use AWS and suffer the scorn of someone saying "you could have put that in RAM". Yes. Yes I could have.
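Back-of-envelope on that, in case anyone wants to check the arithmetic (the per-server figure is my partial spec above; the number of sites is just an assumption for illustration):

    # rough cost of replicating a big-RAM box across sites for availability
    cost_per_server = 600_000   # USD, the partially spec'd 6TB Dell above
    sites = 3                   # assumed: enough locations for basic redundancy
    print(f"~${cost_per_server * sites:,} in servers alone")
    # ~$1,800,000 before networking, racks, power and people --
    # hence "a million or more in equipment in several places"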
> everyone was arguing for the cloud because who wants to maintain their own hardware?
Well, I keep arguing against that, because you still get 90%+ of the maintenance work, plus some new maintenance work you didn't have before, to avoid some relatively minor hardware maintenance. And you can get most of the benefits of non-cloud deployment with managed hosting, where you never have to touch the hardware yourself.
I work both on "cloud only" setups and on physical hardware sitting in racks I manage, and you know what? The operational effort for the cloud setup is far higher even considering it costs me 1.5 hours in just travel time (combined both ways) every time I need to visit the data centre.
For starters, while servers fail and require manual maintenance, those failures are rare compared to the litany of issues I have to protect against in cloud setups, because those happen often enough to be a problem. (The majority of the servers I deal with have uptimes in the multi-year range; the average server failure rate is low enough that the maintenance cost per server is a single-digit percentage of server and hosting costs.) Secondly, I have to fight all kinds of design issues with specific cloud providers that are often sub-optimal and require extra effort (e.g. I lose the flexibility to pick the best hardware configurations).
Cloud services have their place, but far too many people just assume they're going to be cheaper, and proceed to spend three times as much as it would cost them to just buy or lease some hardware, or rent managed hosting services.
Even if you don't want to maintain your own hardware, AWS is almost never cost-effective if you keep instances alive more than 6-8 hours a day. Your mileage may vary, of course.
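Rough sketch of that break-even, with placeholder prices (neither number is a real AWS or hosting quote, just round figures to show the shape of the comparison):

    # break-even: on-demand cloud instance vs. flat-rate dedicated/leased server
    on_demand_per_hour = 0.50     # assumed hourly price of a comparable instance
    dedicated_per_month = 120.00  # assumed monthly price of a dedicated box

    breakeven_hours_per_day = dedicated_per_month / (on_demand_per_hour * 30)
    print(f"on-demand wins only below ~{breakeven_hours_per_day:.0f} hours/day")
    # with these placeholders: ~8 hours/day, in line with the 6-8 hour rule of thumb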
"The cloud" is basically a new non standard OS to learn.
I am reasonably happy configuring an old school Linux box. Heroku is much more of a pain in the arse to deploy to in my experience, despite much of the work being done for you already. Debugging deployment issues is particularly painful.
Depends on what you mean by massive bloody server. You can get a server with a terabyte of RAM for a price that's insignificant compared to the cost of developing software to run on a cluster.
> You can get a server with a terabyte of RAM for a price that's insignificant compared to the cost of developing software to run on a cluster.
This assumes that a) you're in the Valley, where the average developer salary is $10k a month or more, and b) you're a large company paying developers that salary.
There are lots of other places where a) developers are cheaper, or b) you're a cash-strapped startup whose developers are the founder(s) working for free.
The comparison still holds, because if you buy a cluster with X amount of RAM, the price will be roughly the same as a single server with X amount of RAM. Except that for some large X there won't be any off-the-shelf servers you can buy with that much RAM (let's say 2000GB), but let's be honest here: 99% of companies' needs are under that X, especially if we're talking about startups.
If you can do the same job on a fleet of 8-16GB servers, you can get a lot more CPU for a lot less money. It depends on whether you really need everything on one machine or not (since of course nothing beats a single machine for memory locality).
Not true: 8x16GB costs as much as 1x256GB on Amazon. The issue here is that Amazon is hilariously expensive in general. Hetzner will rent you a 256GB server for €460 per month. Or you can buy one from Dell for $5,000. These are not high numbers; in 1990 you paid more than that for a "cheap" home computer. For the price of a floppy drive back then you can now get a 32GB server.
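To put the rent-vs-buy numbers side by side (treating EUR and USD as roughly equal and assuming a 36-month life for the bought box, both of which are simplifications):

    # the 256GB options mentioned above, normalised to a monthly figure
    hetzner_rent = 460     # EUR/month, rented 256GB server
    dell_price = 5_000     # USD, bought outright
    months = 36            # assumed useful life

    print(f"Dell amortised: ~${dell_price / months:.0f}/mo vs Hetzner ~€{hetzner_rent}/mo")
    # ~$139/mo owned (plus hosting, power and admin time) vs €460/mo rented --
    # either is far below typical on-demand pricing for a 256GB cloud instance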
Those Amazon prices are on-demand. Pre-pay, or use a term discount, and it's cheaper.
Build it yourself: you can build a Dell or similar box with two Xeon E5-series processors; your main limit is getting good prices on 16x 32GB DIMMs. But let's say you can buy the RAM for ~$6,500; then it just depends on the rest of your kit, say $10,000 flat for the whole server. That's $277.78/mo over 36 months. You still need network infrastructure, and you might want a new one in 12 months, but you get the general idea.
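Same amortisation, spelled out (all figures are the rough ones from above, nothing measured):

    # amortising the DIY 2-socket Xeon build over 36 months
    ram = 6_500            # ~cost of 16x 32GB DIMMs
    whole_server = 10_000  # flat estimate for the box, including the RAM above
    months = 36

    print(f"~${whole_server / months:.2f}/mo for the box itself")
    # ~$277.78/mo, before network gear, colocation/power, or a mid-life refresh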
And FWIW, costs at Amazon scale linearly with resources: the one beefy box with 256GB of RAM costs about as much as 16 boxes with 16GB of RAM each.