
It's obvious that not every developer needs to solve the 10M problem on commodity x86 hardware, of course. But there are clearly particular cases where it's worth it.
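For reference, and assuming the "10M problem" here means C10M-style concurrent connections, this is roughly the conventional in-kernel path that bypass techniques are trying to beat: every accept/recv/send below crosses the syscall boundary. A minimal sketch, not production code:

    # Non-blocking echo server on the kernel's socket API (epoll on Linux
    # via selectors.DefaultSelector). Sketch only; assumes the "10M problem"
    # refers to C10M-scale concurrent connections.
    import selectors
    import socket

    sel = selectors.DefaultSelector()

    def accept(server_sock):
        conn, _ = server_sock.accept()
        conn.setblocking(False)
        sel.register(conn, selectors.EVENT_READ, echo)

    def echo(conn):
        data = conn.recv(4096)
        if data:
            conn.send(data)  # a real server would buffer unsent bytes
        else:
            sel.unregister(conn)
            conn.close()

    server = socket.socket()
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", 8000))
    server.listen(1024)
    server.setblocking(False)
    sel.register(server, selectors.EVENT_READ, accept)

    while True:
        for key, _ in sel.select():
            key.data(key.fileobj)  # dispatch to accept() or echo()

The per-connection syscall and copy overhead in this loop is exactly what kernel-bypass stacks eliminate at 10M-connection scale.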

"Debug normally" isn't the case with BigTable or any other MapReduce implementation, is it? Do we think Google and big bunch of others made a mistake with it? Obviously not.




Distributed software has its own pain points in development and testing, but SFAIK BigTable and MapReduce nodes run on commodity hardware running a flavor of Linux.[1]

[1] This paper seems to support that: http://int3.de/res/GfsMapReduce/GfsMapReducePaper.pdf
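To sketch the programming model from that paper: word count, the canonical example. This is a toy, single-process illustration of the map/shuffle/reduce shape only; the real system shards these calls across many commodity nodes, which is what makes the debugging story hard:

    # Toy in-process illustration of the MapReduce programming model
    # (word count). Not how GFS/MapReduce actually schedules work.
    from collections import defaultdict

    def map_phase(doc):
        # emit one ("word", 1) pair per occurrence
        for word in doc.split():
            yield word, 1

    def reduce_phase(word, counts):
        return word, sum(counts)

    def mapreduce(docs):
        shuffled = defaultdict(list)
        for doc in docs:
            for word, count in map_phase(doc):
                shuffled[word].append(count)  # "shuffle": group by key
        return dict(reduce_phase(w, c) for w, c in shuffled.items())

    print(mapreduce(["the cat sat", "the cat ran"]))
    # {'the': 2, 'cat': 2, 'sat': 1, 'ran': 1}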


That was my point exactly. Running Big Data on commodity hardware nodes (instead of Oracle/SunFire) was just as crazy in Google's early days as OS kernel bypass is today.


I see. So the argument is that because something once considered crazy panned out against the odds, everything considered crazy is actually a good bet?

That seems like dubious logic.

Also, unlike Google's bet on commodity hardware, I don't see the millions of dollars in savings if the micro-kernel idea works out. At best you get an incremental performance benefit and easier debugging. It'd be different if there were no open-source OSes and rolling your own were the only alternative to paying millions in licensing fees.



