Good. I've always thought throwing distributed brute force at a problem is a sign of laziness. You can get a 32-core system with 128 GB of RAM for less than $10K, and it will have orders of magnitude faster turnaround time than your average Hadoop or Spark cluster.
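
For scale: a lot of jobs that get thrown at Spark are just an embarrassingly parallel scan, which one box can do with stock Python. A minimal sketch of that single-node approach (the file name and the search predicate are made up for illustration):

  import multiprocessing as mp
  import os

  def count_matches(args):
      # Scan one byte range of the file; a line belongs to the
      # chunk in which it starts.
      path, start, end = args
      hits = 0
      with open(path, "rb") as f:
          if start > 0:
              f.seek(start - 1)
              f.readline()  # finish the line owned by the previous chunk
          while f.tell() < end:
              line = f.readline()
              if not line:
                  break
              if b"ERROR" in line:  # hypothetical predicate
                  hits += 1
      return hits

  if __name__ == "__main__":
      path = "events.log"  # hypothetical input file
      size = os.path.getsize(path)
      n = 32  # one worker per core
      bounds = [(path, i * size // n, (i + 1) * size // n) for i in range(n)]
      with mp.Pool(n) as pool:
          print(sum(pool.map(count_matches, bounds)))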


Exactly. Compare the inter-core bandwidth and latency of a beefy multicore system against those of a distributed system, then use that ratio (for whatever resource is actually your bottleneck) to estimate how many nodes a comparable distributed system would need. People usually assume they're compute bound when they're really memory latency/bandwidth bound, and they skip this comparison when speccing out a distributed system.
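
Concretely, a back-of-envelope version of that ratio (all hardware numbers below are assumptions, not measurements):

  # Assumed figures: ~100 GB/s local DRAM bandwidth vs. a 10 GbE interconnect.
  local_bw = 100e9        # bytes/s: DRAM bandwidth on one multicore box (assumed)
  network_bw = 10e9 / 8   # bytes/s: 10 GbE link, ~1.25 GB/s (assumed)
  ratio = local_bw / network_bw
  print(f"if bandwidth-bound, ~{ratio:.0f} nodes to match one box")
  # -> ~80. The latency ratio is worse: ~100 ns DRAM access
  #    vs. ~50 us network round trip, about 500x.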


I would say the opposite: vertical scaling is lazier than horizontal, and it's also much less scalable.


The point is that it's often not nearly as necessary as it seems: instead of doing computation on large chunks of organized data, lots of the cluster's machines end up stuck on latency-bound tasks.
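
To put numbers on "latency-bound" (the latencies below are assumptions, chosen only to show the orders of magnitude): a million dependent small lookups served from local RAM versus over intra-cluster RPCs:

  # Assumed latencies; the point is the ratio, not the exact figures.
  n_lookups = 1_000_000
  ram_latency = 100e-9    # ~100 ns per dependent DRAM access (assumed)
  rpc_latency = 500e-6    # ~0.5 ms per intra-cluster round trip (assumed)
  print(f"local RAM: {n_lookups * ram_latency:.1f} s")  # 0.1 s
  print(f"over RPC:  {n_lookups * rpc_latency:.0f} s")  # 500 s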


I think you're missing the point the article is making, or at least obstinately not addressing it.


What kinds of problems have you worked on where you compared a single machine against Hadoop/Spark?



