
10 servers with 100G each will use a lot more power and will require distributing your algorithm right along with your dataset, so instead of 10 servers you will probably end up with a fairly high multiple of 10.



Isn't this what the cloud is for?

Surely you just rent an instance with the computing power you need for an hour or two, upload the data and a script, and wait.
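
Just to make that concrete, here's a rough sketch of that workflow against AWS via boto3 (my assumption; the bucket name, AMI, and instance type below are placeholders, and the same idea works with any provider):

    import boto3

    s3 = boto3.client("s3")
    ec2 = boto3.client("ec2")

    # 1. Push the dataset and the analysis script to object storage.
    s3.upload_file("dataset.csv", "my-analysis-bucket", "dataset.csv")
    s3.upload_file("analyze.py", "my-analysis-bucket", "analyze.py")

    # 2. Boot a big-memory instance that pulls both down, runs the script,
    #    writes the results back, and shuts itself down when it is done.
    #    (Assumes the instance role has S3 access and the AMI ships the AWS CLI.)
    user_data = """#!/bin/bash
    aws s3 cp s3://my-analysis-bucket/dataset.csv /tmp/dataset.csv
    aws s3 cp s3://my-analysis-bucket/analyze.py /tmp/analyze.py
    python3 /tmp/analyze.py /tmp/dataset.csv > /tmp/results.txt
    aws s3 cp /tmp/results.txt s3://my-analysis-bucket/results.txt
    shutdown -h now
    """

    ec2.run_instances(
        ImageId="ami-xxxxxxxx",           # placeholder AMI
        InstanceType="r5.24xlarge",       # ~768 GiB of RAM in a single box
        MinCount=1,
        MaxCount=1,
        UserData=user_data,
        InstanceInitiatedShutdownBehavior="terminate",
    )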


That really depends on your use case. Not all analysis is 'one-shot', and not all businesses are free to upload their data into 'the cloud'.


Not only that, but uploading a 1 TiB dataset implies a certain quality of connection that not all businesses want to take on...
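
Back-of-the-envelope (assuming, purely for illustration, a dedicated 100 Mbit/s uplink):

    # Rough upload time for a 1 TiB dataset over a 100 Mbit/s link.
    dataset_bits = 2**40 * 8            # 1 TiB expressed in bits
    uplink_bps   = 100 * 10**6          # assumed 100 Mbit/s uplink
    hours = dataset_bits / uplink_bps / 3600
    print(f"{hours:.1f} hours")         # ~24 hours, ignoring protocol overhead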


As opposed to 10 x 100 GiB that can be uploaded over a 56K modem, right? ;)


It's a reasonable cloud vs. on-premise argument. Obviously the scale of the data transfer to the cloud depends on the dataset size, not on the number of instances.


How often are you planning to do that? The bandwidth it takes to ship a dataset that fits in RAM somewhere else is fairly expensive.
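
Very rough numbers (the per-GB rate and run frequency below are purely illustrative, not any particular provider's or ISP's pricing):

    # Back-of-the-envelope recurring transfer cost for a RAM-sized dataset.
    dataset_gb     = 1024               # ~1 TiB
    price_per_gb   = 0.09               # illustrative per-GB transfer rate, USD
    runs_per_month = 10                 # how often the analysis is repeated
    monthly = dataset_gb * price_per_gb * runs_per_month
    print(f"${monthly:.0f}/month")      # ~$920/month just to move the data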



