
The answer is right there in the article: "Yet, what you have is code running on Linux that you can debug normally, it’s not some sort of weird hardware system that you need custom engineer for. You get the performance of custom hardware that you would expect for your data plane, but with your familiar programming and development environment."
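
To make that concrete, here's a toy sketch in Go of the "just a Linux process" part: a tight receive loop you can run under an ordinary debugger. The port number and counters are invented for illustration, and a real kernel-bypass data plane would poll the NIC directly via something like DPDK or netmap instead of a UDP socket, but the point is the same: it's ordinary userland code.

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Ordinary UDP socket standing in for a data-plane receive path.
        conn, err := net.ListenUDP("udp", &net.UDPAddr{Port: 9000})
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        buf := make([]byte, 65536)
        var pkts, bytes uint64
        for {
            // Busy receive loop: read a packet, count it, repeat.
            n, _, err := conn.ReadFromUDP(buf)
            if err != nil {
                continue
            }
            pkts++
            bytes += uint64(n)
            if pkts%1000000 == 0 {
                fmt.Printf("%d packets, %d bytes\n", pkts, bytes)
            }
        }
    }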

Maybe Mirage will pan out, but right now it's at the pre-alpha stage. We've heard about the glories of microkernels before (GNU Hurd, anyone?).

-----




It's obvious that not every developer needs to solve the 10M problem on commodity x86 hardware. But there are clearly particular cases where it's worth it.

"Debug normally" isn't the case with BigTable or any other MapReduce implementation, is it? Do we think Google and big bunch of others made a mistake with it? Obviously not.

-----


Distributed software has its own pain points in development and testing, but AFAIK BigTable and MapReduce nodes run on commodity hardware running a flavor of Linux.[1]

[1] This paper seems to support that: http://int3.de/res/GfsMapReduce/GfsMapReducePaper.pdf
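
For what it's worth, the programming model itself is small enough to sketch. Here's a toy, single-process word count in Go that mirrors the map/reduce shape (purely illustrative; the hard parts Google solved are distribution and fault tolerance, not this):

    package main

    import (
        "fmt"
        "strings"
    )

    // Map phase: per-document word counts.
    func mapPhase(doc string) map[string]int {
        counts := make(map[string]int)
        for _, w := range strings.Fields(doc) {
            counts[strings.ToLower(w)]++
        }
        return counts
    }

    // Reduce phase: merge the per-document counts into a global count.
    func reducePhase(partials []map[string]int) map[string]int {
        total := make(map[string]int)
        for _, p := range partials {
            for w, c := range p {
                total[w] += c
            }
        }
        return total
    }

    func main() {
        docs := []string{"the quick brown fox", "the lazy dog", "the fox"}
        var partials []map[string]int
        for _, d := range docs {
            partials = append(partials, mapPhase(d))
        }
        fmt.Println(reducePhase(partials))
    }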

-----


That was exactly my point. Running big data on commodity hardware nodes (instead of Oracle/Sun Fire big iron) was just as crazy in Google's early days as OS kernel bypass is today.

-----


I see. So the argument is that because one thing considered crazy panned out against the odds, everything considered crazy is actually a good bet?

That seems like dubious logic.

Also, unlike Google's bet on commodity hardware, I don't see the millions of dollars of savings to be had if the microkernel idea works out. At best you'll get an incremental performance benefit and easier debugging. It'd be different if there were no open source OSes, so that rolling your own was the only alternative to paying millions in licensing fees.

-----



