"Thus, we decided to only run computations when the outdoor temperature near the user is below a certain level (currently 16 degrees Celsius). When it's that cold outside, we assume that the computer's room is being heated anyway. All of the electricity used to do computations gets turned into heat, according to the laws of physics. So the heat generated by the computations displaces the need for heat generated by a heater, eliminating or minimizing the net electricity usage."
Doesn't work in the USA. Natural gas heating in the US averages $10.80/thousand cubic feet ~= $10/gigajoule [1]. Electricity averages $0.1179/kWh ~= $33/gigajoule [1] -- more than three times as expensive for raw heat. Natural gas heating is twice as common in US homes, with about 55.6 million using it vs. 28.4 million using electric resistance heating [2] -- not including 9.8 million electric heat pumps.
In the common case, you're displacing cheap gas heating with expensive electric resistance heating; the cost "savings" on heating is small. At 100W power consumption, you're spending 1.2 cents/hour on electricity, saving 0.4 cents/hour on gas, for a net loss of 0.8 cents/hour. Meanwhile your CPU is being sold for 0.1 - 0.3 cents/hour [3] -- far less than the electricity needed to run it, apparently (?).
Break-even is 0.2 cents/hour per core for a 4-core system that uses 100 watts. A full i7-3770K system with integrated graphics uses 102 watts under heavy load [1], and when each of its 4 cores sells for 0.2 cents/hour, that covers the 4*0.2 = 0.8 cents/hour difference from natural gas to electricity. 0.3 cents/hour covers the entire electricity usage, providing free heat.
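The arithmetic in the two comments above can be checked directly. This is a minimal sketch using the figures quoted there ($0.1179/kWh electricity, ~$10/GJ gas, a 100 W 4-core system); the constant names are mine, not from any source.

```python
# Sanity check of the heating-displacement arithmetic above.
# Prices from the parent comments: electricity $0.1179/kWh, gas ~$10/GJ.
POWER_W = 100
ELEC_CENTS_PER_KWH = 11.79
GAS_CENTS_PER_GJ = 1000.0

kwh_per_hour = POWER_W / 1000                      # 0.1 kWh each hour
elec_cost = kwh_per_hour * ELEC_CENTS_PER_KWH      # cents/hour of electricity
heat_gj = POWER_W * 3600 / 1e9                     # 100 W for 1 h = 0.36 MJ = 3.6e-4 GJ
gas_saving = heat_gj * GAS_CENTS_PER_GJ            # cents/hour of displaced gas heat
net_loss = elec_cost - gas_saving                  # cents/hour, vs. gas heating
per_core_breakeven = net_loss / 4                  # 4-core system

print(round(elec_cost, 2), round(gas_saving, 2),
      round(net_loss, 2), round(per_core_breakeven, 2))
```

This reproduces the quoted figures: ~1.2 c/h of electricity, ~0.4 c/h of gas displaced, ~0.8 c/h net loss, and ~0.2 c/h per core to break even against gas heating.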
From [1] - "Our users volunteer their idle compute resources because they want to be part of something valuable to the world. We believe that this is a motivator far more powerful than paying people a small amount of money every month."
Yeah, I don't think that is going to work, especially because you are a for-profit organization, not SETI.
There's a surprising behavioral economics result that says people will do things for free that they won't do for small amounts of money. A check in the mail for a dollar or two every month isn't as compelling as being part of a group effort.
And empirically, it seems to be working. The number of contributing machines is currently in the six figures. The main risk right now is on the demand side, i.e. whether or not people want this type of compute.
Your "giving back" paragraph about what motivates your users is completely dishonest. You know your users are clueless and installed your client because it was bundled with an application they actually wanted to use.
Hah, doh. I thought the article linked to the homepage. I still think they should have a link from the /compute/ page; people might be interested in both using and being used for this.
I can think of precisely zero ethical ways you could already have more than 100k machines participating. An explanation from you would be very, very nice to see.
Well, people won't do things for free, if someone takes the money for their effort. What is the number of contributors who know exactly what's going on?
Wow, really, six figures? 250 results for "gridspot.com" on Google, and yet you have 100,000+ computers running this software?
I want to hire you! I'll pay you 10 figures/month!
This comment is quite rude, but I still haven't seen an answer for this yet. How did you get all these users to donate their computers even though nobody seems to have heard of you yet? Something seems fishy, and I wish you'd answer.
Fake it till you make it? I don't know for sure, but could they be subsidising the launch by running instances on EC2? ... When they get enough hosts signed up, switch over to their distributed model, or even run a hybrid combination to handle demand spikes.
Google shows results for "gridspot.exe" on ID-this-process sites dating from mid-March. So it's out there and has been for a few months. The question is whose machines it's running on.
An obvious model for this is how consumers can sell solar power back to the grid and have it come off their bill.
With enough people involved, the compute version of this could be self-sustaining, and the infrastructure guys just take a wee percentage. They are unlikely to be replaced by purely free infrastructure, because money is involved (even if in the form of credits). If there were a shortage at a particular time, some people might supply that themselves, for extra cash. Distributed compute supply = local compute supply, with latency advantages.
People install their program, contribute the power of their computers for free and Gridspot takes money.
Why would anyone do this? Why would anyone contribute their computer resource for other people to make money from that? I would rather, I don't know, mine bitcoins or set up a Tor proxy.
This seems too good to be true, but it's working great for me!
With no credit card, I got an instance with 3 GB RAM running in less than 60 seconds. It costs $0.002 per hour.
One glitch: Both the UI and the API say there are 2 instances running, but they both have the same IP address and port. If there really are 2 instances, how do I ssh to the second one?
How embarrassing (but fixed now). That was introduced when I added the ability to boot five instances without a credit card. Thanks for letting me know!
Ok!, the two instances you were seeing were duplicate listings of the same instance. That bug was introduced a few days ago by some database optimizations, and is now fixed! Thanks for reporting it.
The 1.56 core-hours is mostly from inst_KGN6tZdFgoeL2C9KPKBsNA which has four physical cpus and has been running for almost an hour at this point.
You are of course absolutely right! There was a bug (now fixed) that was causing only one cpu to show up in the VMs. All instances started after now should have the proper number of cpus visible.
(Note, the number of cpus might be higher than the number of physical cpus if the host cpu supports hyperthreading.)
This is great - it looks to have some very sensible answers to many of the questions of how a distributed cloud would work.
The thing that excites me most about a distributed cloud is that it could turn the notion of elastic computing on its head. You can buy your peak compute requirement, and sell back the surplus. So you can get elastic computing not by re-engineering your own systems, but by selling excess capacity to those whose workloads are a good match.
Here's how you sustain this model: package the VM with free software just like the toolbar guys do. You pay the software guys a cut of the profit for every setup they have running (referral fee) and users get a new way to "pay" for software. I like this idea much better than all the spyware/crapware software vendors have to package into their product to earn a living.
Consumers don't care about $2/month but a business would care about getting paid $2/month thousands of times.
Edit: If a VM is too big to download all at once with a freeware product, you just install the seed software. Then, over the next few days, the seed software downloads the VM and slowly sets it up so there's no disruption to the customer's activity on the computer.
I would also suggest you reserve the space needed on the disk with an empty file, roughly the size of the VM file. This way it's clear just how much space this newly installed software is taking on the customer's computer.
This is all based on the idea that the customer agreed to this during the installation. To what extent "agreed" is left up to you to decide. I'd suggest a dedicated page during the installation with a prechecked box that explains what this is and what it's about to do.
I disagree. I don't think many companies that have 1000 computers would do this for $2/mo per node. That's like $24,000 a year, also known as a rounding error for those kinds of companies. Besides, those companies tend to frown on these suspect applications.
No, I don't mean for companies to run it on their own servers, I mean for them to bundle the software with freeware software. The kind you find on download.com.
Often, freeware software developers bundle toolbars and other spyware/crapware with their software to make money. My suggestion would be to allow them to include the VM with their software instead of spyware/crapware toolbars. These freeware authors would earn a commission for every VM install that makes money.
Freeware developers need to earn money but their main option is to bundle their code with crapware now. This would be a nice alternative to that awful option.
Sorry if I missed it, but: who are you? Please put some information about yourself somewhere on your site. And if you can, please answer these questions on your page:
- What is your company name and legal type?
- What is your location?
- How can I contact you? No, support@gridspot.com doesn't count; list a landline number too.
- Who are your team members?
And please don't use WhoisGuard.
Edit: Looks like downvoting is easier than answering the questions. Seriously, can you explain to me: why would you trust a company that doesn't give any information about itself?
No, it's Adam, but you're right that he should write something about himself on the company website. In the meantime, you can see him answering feedback in the comments of this discussion. Here's one of them: http://news.ycombinator.com/item?id=4226286
So the customer pays Gridspot and Gridspot does not pay anything to the person whose computer is actually doing the work? Great business model if you can make it work.
The difference is that most Skype users don't even know they are carrying traffic for the network. The client silently punches a hole in their router and makes itself a pain to close (for example by reconfiguring what the red X does) without ever explaining why it wants to keep running so badly.
My understanding is that after the MSFT acquisition, they don't use their customers' devices as supernodes anymore [1]. As for "silent hole punching" with UPnP, that is very much business as usual for any VoIP application, as well as other applications such as Windows' Teredo IPv6 thingy.
I don't know about 'legitimate', but I can think of plenty of specific ones. I say build it (you just might want to incorporate it separately... possibly in Sweden).
Increase the attack surface area. With anycast, the nodes nearest to the attackers "sink" all of the traffic. The rest of the nodes continue to serve other customers with no ill effect.
With this product you will have a very wide base of unicast addresses. Treat each node/address as disposable. Use fast failing health checks and an out of band control plane. As each node falls to attack remove it from your service discovery layer (DNS/http 302/etc). See "fast flux dns" for implementation ideas. The attacker will spend a disproportionate amount of resources (packets/s) attacking each of your disposable nodes. The majority of your "good" customers will continue to be served.
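The evict-on-failure scheme described above can be sketched as a small pool that drops a node from rotation after a few consecutive failed health checks. The class name, the node addresses, and the failure threshold are all illustrative, not from Gridspot.

```python
# Hedged sketch: treat each unicast node as disposable and evict it from the
# service-discovery layer (DNS / 302 rotation) once health checks fail
# repeatedly. Names, addresses, and thresholds are hypothetical.
from collections import defaultdict

FAIL_THRESHOLD = 3  # consecutive failed checks before eviction

class DisposableNodePool:
    def __init__(self, nodes):
        self.live = set(nodes)
        self.failures = defaultdict(int)

    def report_check(self, node, healthy):
        """Record one health-check result; evict after repeated failures."""
        if node not in self.live:
            return
        if healthy:
            self.failures[node] = 0  # any success resets the counter
        else:
            self.failures[node] += 1
            if self.failures[node] >= FAIL_THRESHOLD:
                self.live.discard(node)  # drop from the rotation

pool = DisposableNodePool(["10.0.0.1", "10.0.0.2"])
for _ in range(3):
    pool.report_check("10.0.0.1", healthy=False)  # node under attack
print(sorted(pool.live))  # the attacked node is gone; the other still serves
```

The attacker burns resources taking down one disposable address while the rest of the pool keeps serving, which is the asymmetry the comment describes.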
In a traditional tiered architecture the L3->L7 routing layer (LB/Proxy) is very expensive to scale vertically. Your data store & compute end are clustered behind these choke points. Remove/minimize shared state and you can have independent units of compute. Remove those L3->L7 choke points and you can get a wider & flatter L1->L3 network fabric. Besides increased durability you get more aggregate bandwidth per $.
"SETI but paid" is a very tempting idea, but we could never get the back of the envelope math to work out to the point where we could pay the contributors anything compelling (e.g., enough to offset their power cost).
I guess you can get a few people to sign-up for fun, but we could never come up with a reason other than serendipity that would make people donate their CPU time for us to resell. I'll be interested to see how/if they respond to this challenge, or if there's something I'm not seeing that makes this point moot.
Facilities management is the worst part of "cloud" services. Getting this outsourced for low/no cost is brilliant. Check out the OnAPP CDN implementation.
Please provide a standard-ish xen/kvm/vmware/vagrant client image. Being win32 exe only is probably costing you a lot of resource providers (like me). Clients running xen/kvm/vmware are also more likely to be providing higher-value resources, like a colo'd host.
Move your API to a different (sub)domain ASAP. At some point you'll need to change your DNS architecture. Having your API tied to your zone apex is going to cause no end of grief. If it's in a (sub)domain you can easily delegate control to another authoritative name server. You might want to use a directional DNS product, CNAME to another resource, add another product api, add another API endpoint, etc.
Provide some sort of initialization hook. Every node I launch should be able to auto configure my stack without a "push" action, ssh or otherwise. I much prefer when each node can bootstrap and poll my instance/config management stack. For example EC2 provides this functionality through the UserData param of RunInstances.
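The pull-style bootstrap hook suggested above could start from something as simple as user data in key=value form that each instance fetches and parses on boot. The format, endpoint URL, and keys below are assumptions for illustration, not part of any Gridspot or EC2 API.

```python
# Hedged sketch of a pull-style bootstrap hook: each instance polls a config
# endpoint on first boot instead of waiting for an SSH "push". The key=value
# user-data format and the example endpoint/keys are hypothetical.
def parse_user_data(raw: str) -> dict:
    """Parse key=value user-data lines into a config dict, skipping comments."""
    config = {}
    for line in raw.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")  # split on the first '=' only
        config[key.strip()] = value.strip()
    return config

user_data = """
# fetched from e.g. http://config.example.com/bootstrap on first boot
role = worker
queue_url = https://queue.example.com/jobs
"""
print(parse_user_data(user_data))
```

Once parsed, the node can poll the configured queue for work on its own, which is exactly the no-push-action behavior the comment asks for.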
"Sign in" doesn't have an option or link to "Sign up". From the docs I had to go back to the home page to find the sign up.
Support payment methods other than credit cards. PayPal etc. is nice in that I can create a balance without extending as much trust to an unknown party.
Don't get suckered into providing 1:1 IPv4 with your proxy model. If the product's successful you'll quickly discover that even a /16 is expensive/impossible. A 1:1 mapping with IPv6 is operationally plausible. As a bonus you'll get PR for a "shiny" feature.
Stay away from domU egress through the dom0's IPv4 for now. I guarantee you'll attract bad actors. A Chinese gambling/porn site hosted on those domUs is going to get DDoSed all the livelong day. If that takes out the dom0's internet connectivity you'll have a revenue-generating customer who's upset.
Terminate long-running domU instances. You can use this to shape customer expectations about the ephemeral nature of your product. Expose this to the instance owner the same way as a dom0 going offline. Try something like max(mean dom0 availability, 12 hours) + weighted random to get each domU's lifetime.
Provide a way to request & inspect network locality or latency. Three use cases here, I think. 1) Launch instances near $foo, to get decreased latency to a centralized endpoint, like a scheduler. 2) What is the location of $instance? I can determine the nearest S3 region for faster GET/PUTs. 3) Launch instances within $n ms of each other. If instances have shared state or exchange messages during the compute phase, this can increase throughput.
They'd have to download your custom image, probably hundreds of megabytes or gigabytes, each time upon starting a new instance. This makes no sense over a WAN. It's much better for both contributors (less stuff to download) and clients (less time wasted waiting) to only download your application stack.
It's not an EC2 alternative; this is a different kind of service.
I think you miss my point. Currently (AFAIK) the provider/dom0 must download and run a Windows executable. Many people have no Windows hosts, but do have existing xen/kvm dom0s running. If Gridspot provided their service as a xen/kvm client image I would be able to host it.
There's no requirement for the host to download the client image multiple times. I'd imagine you'd have something like an ephemeral domU disk and use kmods to provide a control plane and network tunnel.
> Many people have no Windows hosts, but do have existing xen/kvm dom0s running. If Gridspot provided their service as a xen/kvm client image I would be able to host it.
They should provide their image instead of the binary. Theirs, rather than their clients'. Got it now!
But I don't think their audience will have much use for it. Well, it's unlikely to be a priority, anyway.
> Don't get suckered in to providing 1:1 IPv4 with your proxy model.
BTW, he doesn't. It's all proxied through a single IP on different ports. E.g. this is what I got:
There are many applications for this, including fighting DDoS attacks. There are still huge technical challenges with that particular application, but as I understand it... the only way to fight a DDoS is to a) get bigger guns, or b) lie down and take it.
Having something like this, can (in theory) give you bigger guns.
Do you guys pay the people who have idle resources?
Or cheaply launching them. For a mere $100 you get 100,000 machines for an hour, that's more than enough to cause problems for a sad number of websites out there. Edit: Or not, given that all traffic goes through a centralized node.
Yeah... I was about to say that I doubt that, given the architecture. The difference is that there is a for-profit company that controls the nodes, so I doubt they would allow that. Completely different than a real botnet, where the botnet owner controls everything.
Something else just occurred to me... by not paying users to get access to their computers, it removes the business-model headache of squabbling over revenue and violating ISP TOS. I believe it is against the TOS of most (if not all) ISPs for a consumer to resell their internet access.
That was another consideration that I looked at two years ago, which makes the model not as attractive. Although, I imagine that the holy grail is actually reaching a place where users are making money from their internet connection - I am sure that would drive adoption at an increasing rate.
"While we allow Internet traffic from software doing computations on our platform, it is all proxied seamlessly through a small set of computers operated by Gridspot."
Seems that that small set of computers and IP addresses could be used to do the ill deeds of a bad actor.
"Run anything"
"Get SSH root access to each Linux instance."
So exactly what is to prevent someone from spamming here and how would gridspot even know that was happening? To mention only one possibility.
Based on my understanding, there's a tightly locked-down outbound firewall: you only get the HTTP, HTTPS, and SSH ports. So sending mail over SMTP (spamming) can't happen.
If someone is doing something evil over e.g. HTTP, presumably GridSpot can also block that and/or ban them from the system.
If you're sending spam over e.g. Gmail via HTTPS, Gmail will shut you down.
The people running the nodes don't need to worry about e.g. someone downloading illegal content and getting them in trouble, because it is proxied through GridSpot.
It is obviously a VM, so the virtual network interface that you get will be proxified. The fun thing will be to see if someone can do some VM escaping and own 100k machines for free.
If the data is not confidential, but you're worried about someone messing with your computations, it still might be worthwhile to use a defensive strategy and just run the same task multiple times on separate instances, comparing results before accepting output. Since the tasks can be scheduled in parallel, there's no significant overhead and it's still way cheaper than Amazon/Google.
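The redundancy strategy described above amounts to a quorum check over results from untrusted instances. A minimal sketch, with the function name and quorum size as illustrative choices:

```python
# Hedged sketch of the redundancy strategy: schedule the same task on several
# untrusted instances and accept a result only when a quorum agrees.
from collections import Counter

def accept_result(results, quorum=2):
    """Return the majority result if at least `quorum` instances agree, else None."""
    if not results:
        return None
    value, count = Counter(results).most_common(1)[0]
    return value if count >= quorum else None

# Two honest instances agree; one tampered instance is outvoted.
print(accept_result([42, 42, 7]))   # -> 42
print(accept_result([1, 2, 3]))    # no quorum -> None
```

Because the replicas run in parallel, the wall-clock overhead is roughly zero; you pay only the extra (cheap) instance-hours.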
It’s not exactly replacement for EC2, but there’s a lot of applications that would fit Gridspot just fine.
But I guess one has to be much more aware of all the security issues when writing for a platform like this. The Gridspot folks should perhaps consider creating a solid list of best-practice guidelines.
Now can someone also create Gridspot for storage? :)
Same here. It would be great to get end-to-end encryption, but I don't think that this is currently practical.
Some recent research [1] has shown that it's theoretically possible to delegate the ability to process your data without giving away access to it. But from what I remember, in practice the computation becomes orders of magnitude slower than on the unencrypted counterpart.
Couldn't someone create a market-exchange type service, where the price is set by supply and demand? People get paid for idle compute time, and other people buy it.
The price will go as high as it needs to for suppliers to think it's worthwhile.
It would be interesting to see what the resting price is, i.e. what suppliers think is worthwhile. It will probably be at most slightly higher than Amazon, because people will do arbitrage with Amazon.
In the bigger picture, there are a whole lot of interesting things that would need to be built to allow for non-batch processing on a cloud like this... a smart web proxy / front-end that could cope when your instances go down being #1 on the list.
Presumably you are using the results of the distributed system's work for something. Can you trust those results from people who can't keep a web server alive?
I'm not sure what you mean by "can't keep a web server alive" but I think the answer to your question is to run your computations on multiple instances at once to validate your results.
Give them a break. They are probably still building it. People frequently post to HN before things are ready so that friends can review and give feedback.
My immediate response was "surely this is less than the cost of the electricity to run the CPUs" but it turns out it's equal to the marginal cost of heating a room with CPUs relative to conventional heating (natural gas or electricity) -- which is pretty genius.
In the long run you would expect this would drive up the bidding for idle CPU time, meaning that having a couple of idle computers in your house would generate you a small amount of positive revenue, in much the same way that contributing electricity back to the grid using solar cells could. Of course, most people have no use for solar cells (which are expensive), whereas a lot of us have use for local CPU.
How do they handle security? If the software they're using to virtualize these instances is exploitable, then you could break into the host machine and do whatever you like with it.
I agree; there's a level of trust when you use hosting providers like Amazon and Heroku. With this, there's no way I'd do anything remotely sensitive with one of these boxes.
Yes, but Amazon operates out of a datacenter that will /strive/ to be online for all but a few minutes of each year. Transient machines spread over the internet on home or office PCs /do not/ have the same operational parameters.
At this point I'm somewhat unsure of their actual business model. Are they reselling VPS instances from real providers, or are they sourcing computational power from home/office PCs? This seems entirely unclear based on the reactions in this thread and the mostly ambiguous stuff written on their website.
If they are sourcing computational power SETI@home style, my question remains, and I encourage it to be upvoted and answered on a technically competent site such as this. Marketing hype is cool, but I'm still an engineer when it comes to making choices.
> Yes, but Amazon operates out of a datacenter that will /strive/ to be online for all but a few minutes of each year.
Spot instances can be killed any time if your bid is too low. This will, of course, happen less often at Amazon, but the principle is the same. You must also be significantly more proactive about unreliable performance and poor security. But this may be worth it sometimes.
> At this point I'm somewhat unsure of their actual business model. Are they reselling VPS instances from real providers or are they sourcing computational power from home/office PC's?
They're computational power scavengers :) Seems to be desktop PCs for now. Maybe in the future it would make sense to also utilise smartphones plugged into a charger? I hope they will be successful, because unreliable and cheap computational power is a pretty cool resource, despite all of its shortcomings.
Have a fallback to EC2 if GridSpot can't provide the capacity you need at the moment. The same is true for any other cloud service, actually. E.g. Amazon was having problems issuing new instances in some regions following the most recent power outage. In theory you should try falling back to other regions/other clouds. It remains to be seen how much of a problem this will be in practice.
Is it possible for someone to lend the idle time of their VPS / dedicated web servers in return for cash?
People could run their websites without issue, as most of the time they may not be using them to full capacity. And you would get a better connection (better than DSL speed) and better-performing machines (server grade). Could be a win-win for everyone.
I use PayPal or a debit card if I actually buy something online. Credit-card culture isn't so great where I live, and I personally don't have one -- hence the question.
We're about to launch a service that allows comparison of benchmarks from cloud providers (UnixBench/IO & BW), would love to run it on your plans to compare against Amazon etc.
I've also tried 'sudo shutdown now' and while it did shut down the OS, the site says they are still running and the times are still increasing. How will I know when they've actually stopped? Also +1 for an API to end instances, please.
If I understand it right, the 'cloud' VM you get is running whole or in part on desktop user's machines out there somewhere in the world (and connected to the internet) who have donated their machine's spare CPU power to GridSpot.
Wouldn't work. As soon as you move the incentive into the world of financial norms, people start wondering why they're bothering for a couple of bucks a year.
They will get more people doing it to be part of a community if they can keep that community seeming "cool" in some way.
I work in the field too. These are exactly my thoughts on the matter, and I'm sorely disappointed that so few comments in this thread have hinted at this so far.
https://gridspot.com/gridspot_safe
[1] http://www.eia.gov/forecasts/steo/report/prices.cfm (residential retail prices, 2011)
[2] (.xls) http://www.eia.gov/consumption/residential/data/2009/xls/HC6...
[3] https://gridspot.com/compute/
European costs for comparison:
[4] http://www.energy.eu/