
> There's no single simple answer, but sure, whenever fewer computers are enough, fewer should be used.

This - if you're using this website as your sole input to the decision, then something has gone seriously wrong.

It was intended more to provoke discussion and thinking around overengineering things that can easily be done with, say, awk or a few lines of <scripting language>.
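For example, the classic "count events per key" job that people sometimes spin up a cluster for is an awk one-liner. Just a sketch - the file name and field number are made up, assuming a whitespace-delimited log with a status code in field 9:

    # Tally requests per status code; count[] is an in-memory hash table
    # keyed by field 9, printed once the whole file has been read.
    awk '{ count[$9]++ } END { for (k in count) print k, count[k] }' access.log

On a single laptop that will chew through millions of lines per second, with nothing to provision.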

If you have large CPU requirements, then sure, use Hadoop/Spark/some other distributed architecture. But if you have a >6TB dataset and only need 1GB of it at a time, then, well, maybe you still only need one computer.
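That works because tools like awk stream their input: memory use is bounded by the working set, not the file size. A sketch (file name and column position are made up, assuming a gzipped CSV with a numeric third column) of filtering and averaging a multi-TB file without ever holding more than one line in memory:

    # Stream the file through a pipe; only sum and n stay resident,
    # regardless of how big the input is. Guard against an empty result.
    zcat huge_dataset.csv.gz |
      awk -F',' '$3 > 100 { sum += $3; n++ } END { if (n) print sum / n }'

And if the dataset is split across files, xargs -P or GNU parallel gets you "distributed" parallelism on one box.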
