
Should people also have their own power plants, wells, car factories and farms?

There are certain advantages when you "outsource" your basic infrastructure. As long as inter-device connections are still problematic (NAT, STUN, ...) and as long as hard drives die, I really enjoy being able to trade money for free time rather than setting up a redundant off-site backup system.

The electricity company doesn't try to tell me what devices I can plug into my power points. The water company doesn't call me a thief for pouring a glass of water for my friend. The car factory doesn't weld closed the bonnet of its vehicles and the farmer doesn't whine about lost profits because I dare to have a herb garden in my back yard.

They actually do - via the volt and amp ratings offered to residential consumers. Also, a big dream of the utilities is to be able to reach in and "hint" to your appliances about when they should sleep. Power companies really don't want many people to buy electric cars, because the grid has far too little capacity to handle charging them.

Give them time, and they will.

To stretch the cloud/powercompany analogy (perhaps too far)…

The power company only provides a "standard" volt and amp rated supply, and just like cloud computing, there are various standards to choose from[1]. You're probably thinking "120V 60Hz AC with type-B plugs", whereas for me the default assumption is "220V 50Hz AC with type-I plugs". Fortunately, half my electronics don't actually care what voltage/frequency/current is available, thanks to modern power supplies. (I've seen my iMac happily keep running during a brown-out where the wall sockets were measuring just 90V AC, while all the routers/modems/hard drives with less capable power supplies were flickering and rebooting continuously.) For other devices I own, I can use a transformer to step my 220V down to 120V, though still at 50Hz. If I needed to (and I never have), I could use a US-targeted UPS or inverter to provide 120V AC @ 60Hz.

The "cloud" market is kinda the same. Cloud storage, for example, expects 8-bit bytes delivered over TCP (or perhaps UDP). In general, everything just deals with that: pretty much every modern-ish 8/16/32/64-bit device, regardless of native endianness, will happily emit and receive the same 8-bit bytes that S3 stores (converting on the fly, much like a switchmode power supply does for voltages). If you're an edge-case customer, perhaps wanting to use Amazon S3 to store data for the 12-bit "bytes" of your PDP-8, you just convert them on the way in and out.

And much like the end result of electricity consumption is pretty much all the same - electricity is mostly just converted into heat, with side effects varying from cooling your beer to blasting pixels onto your screen at 100fps to making your coffee machine hot - ultimately cloud storage is all just ordered bytes, more or less reliably stored and retrieved. Whether that's plain-text passwords, or massively de-duped mp3s in Dropbox or Amazon's/Apple's cloud music storage, or cryptographically secure blobs which no-one can tell whether they contain your bank records or your secret research project data or contraband - it's all just bytes. There's no "vendor lock-in" at the "it's just a bunch of ordered bytes" level. You might need to "change the plugs" if you want to switch your cloud storage from S3 to BigTable to Dropbox to Tahoe-LAFS, just like I need to switch cables or use a plug adaptor for US electrical equipment.
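To make the "convert on the way in and out" idea concrete, here's a minimal sketch of how you might pack 12-bit PDP-8-style words into an 8-bit-byte store and get them back. The function names and the two-words-into-three-bytes layout are my own illustration, not any standard PDP-8 interchange format:

```python
# Hypothetical sketch: store 12-bit words in an 8-bit-byte store
# (e.g. S3) by packing two 12-bit words into three bytes.

def pack_words(words):
    """Pack pairs of 12-bit words into 3-byte groups."""
    assert len(words) % 2 == 0, "pad to an even count first"
    out = bytearray()
    for a, b in zip(words[::2], words[1::2]):
        out.append(a >> 4)                       # top 8 bits of a
        out.append(((a & 0xF) << 4) | (b >> 8))  # low 4 of a + top 4 of b
        out.append(b & 0xFF)                     # low 8 bits of b
    return bytes(out)

def unpack_words(data):
    """Inverse of pack_words: 3 bytes back into two 12-bit words."""
    words = []
    for i in range(0, len(data), 3):
        x, y, z = data[i], data[i + 1], data[i + 2]
        words.append((x << 4) | (y >> 4))
        words.append(((y & 0xF) << 8) | z)
    return words

words = [0o7777, 0o1234]  # two 12-bit values (octal, as a PDP-8 fan would)
assert unpack_words(pack_words(words)) == words  # round-trips cleanly
```

The store in the middle never knows or cares that these bytes encode 12-bit words - which is the point of the analogy.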
It's the same with cloud compute resources. Sure, EC2 and Linode and CloudNine and AppEngine have different interfaces, but you can view all that as just the plumbing on the way into the "remote universal Turing machine". Much like the topologists who can't tell the difference between their coffee mug and their donut, in spite of their interface and language differences all the programmable remote computing offerings are identical: if you can compute anything on one of them, you can - in theory - compute it on any of them.

[1] http://en.wikipedia.org/wiki/Mains_electricity_by_country

Why not?

I mean, there are certainly folks doing "off the grid" experiments that are moving this way, and I can certainly see a future where you generate enough power for most of your home needs locally, including using some version of a 3D printer to build common items out of stock you have around the house (once printers can accept such feedstock). Things that are too costly to build at that scale, or that require special feedstock, could be built by a neighborhood machine (much like mills used to serve their neighborhoods), with the feedstock being ordered as needed.

Yes, as long as it is massively inconvenient, there is a benefit to trading money for free time. But I would like to spend some of that free time making it possible to have a choice of my own - rather than leaving the choice to my provider (who will likely eventually decide they need to make a profit).

We're just seeing economies of scale in action.

It is cheaper per unit of energy to build a natural gas power plant and sell the power than it is to build a small-scale generator to power one home on natural gas. Once that model is established, there is resistance to local power generation. The same scaling applies to neighborhood-level power generation (or construction, or manufacturing).
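The per-unit argument is just fixed-cost amortization. A toy sketch, with entirely made-up numbers chosen only to show the shape of the trade-off:

```python
# Toy amortization model: per-unit cost = fixed cost spread over all
# units produced, plus marginal cost per unit. All figures below are
# hypothetical, for illustration only.

def cost_per_unit(fixed_cost, marginal_cost, units):
    return fixed_cost / units + marginal_cost

# A big plant: huge fixed cost, but vastly more units to spread it over.
plant = cost_per_unit(fixed_cost=1e9, marginal_cost=0.03, units=1e10)

# A home generator: small fixed cost, but very few units.
home = cost_per_unit(fixed_cost=5e3, marginal_cost=0.05, units=1e4)

assert plant < home  # economies of scale win on a per-unit basis
```

Even with a slightly higher marginal cost, the large producer wins as long as its volume is big enough to amortize the fixed cost away.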

Cost is the difference now between using the cloud and running your own hardware. It is cheaper, in both time and money, to let a company like Google run your e-mail than to run your own server. So people do it. It is a very simple matter of convenience. I don't expect a backlash until people are mistreated to the breaking point.

I am an enthusiast of 3D printing and I like the vision you've laid out, but I see it always being cheaper per unit to make 10,000 of something than 1 of something. So delegation will continue unless the underlying behavioral incentives, cost and convenience, change.

It isn't cheaper ... yet.

Such visions are the stuff of startups.

Mature commodity items and services tend to be well regulated, because that's generally better for customers and therefore for economic activity.

We barely have SLA standards for uptime, privacy, data breaches, etc.

It's cool to be a pioneer, but you're largely on your own. A trade-off.

In my experience, on-premises vs. cloud/hosted is a trade-off between engineering talent (keeping the servers up) and SLA management (filing tickets and contacting providers when the SLA is not being met or bad stuff happens).
