

NYTimes.com Announces Map/Reduce Toolkit - bdotdub
http://open.blogs.nytimes.com/2009/05/11/announcing-the-mapreduce-toolkit/

======
misterbwong
I have to give a lot of respect to NYT. Whether this project sticks or not,
it's things like this that make me think NYT is going to be one of the few
newspapers to survive the crisis hitting the papers (albeit in a much smaller
and much different form). They're one of the few newspapers at least _trying_
to get it. Others are just complaining while hemorrhaging money.

~~~
patio11
I don't care for the NYT's editorial stances or reporting for the most part,
but they've _really_ been pushing the envelope in terms of using the
newspaper's website as more than a paperless version of a dead-tree product. I
could point to any number of their news-related projects. The election had a
number of great ways to present election results. Their Faces of the Dead
feature was also... how to avoid breaking the etiquette here... I'm going to
go with "technically well-executed".

------
SwellJoe
Interesting that it's built in Ruby. I was under the impression that NYTimes
did pretty much all of their dynamic language work in Perl.

~~~
bkudria
No, they do quite a bit of PHP now too, and the Interactive News team also
uses Rails and Django a lot.

------
matrix
Anyone know why they built this rather than use Hive or Pig? One thing that
drives me nuts is that all of these MR tools are very slow because they don't
take advantage of indexes and use inefficient storage (e.g. in this case,
plain text files), both of which would likely improve query performance
considerably.

~~~
vicaya
MR tools, especially high-level wrappers like Cascading (which makes it easy
to do joins on Hadoop), are very good for building indices for querying. You
can use these tools to process the (log) data once, load the results into a
scalable DB like Hypertable (or even MySQL if the result set is small), and
query them there.
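
The process-once-then-load pattern described above is easy to sketch. This is a hypothetical illustration, not anything from the NYT toolkit: the log format, field positions, and table schema are all made up for the example, and sqlite3 stands in for a scalable store like Hypertable or MySQL.

```python
import sqlite3
from collections import Counter

def aggregate_hits(log_lines):
    """One MR-style pass over raw log lines: count requests per URL path.

    Assumes a common-log-style format, e.g.:
      1.2.3.4 - - [11/May/2009:00:00:00 +0000] "GET /index HTTP/1.1" 200 512
    so the request path is the 7th whitespace-separated field.
    """
    counts = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) > 6:
            counts[parts[6]] += 1
    return counts

def load_results(counts, db_path=":memory:"):
    """Load the aggregated result set into a SQL table for ad-hoc querying."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE hits (path TEXT PRIMARY KEY, n INTEGER)")
    conn.executemany("INSERT INTO hits VALUES (?, ?)", counts.items())
    conn.commit()
    return conn
```

Once the heavy aggregation has happened in the batch pass, the query side is just indexed SQL (`SELECT n FROM hits WHERE path = ?`), which sidesteps the slow-scan complaint above.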

------
mlLK
I think it's interesting that this was released by the New York Times. . .this
could prove to be an interesting new model/trend for newspaper publishers
trying to remain viable competitors in the 21st century, given the bad rap
they seem to be giving themselves these days as news _paper_ publishers. D=

------
earle
This is pretty meaningless -- there's already a Thrift interface that allows
easy job creation and control, as well as Hadoop Streaming, which lets you
write map/reduce jobs in anything that speaks stdin/stdout.
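
For context, the Hadoop Streaming contract mentioned above really is just lines on stdin/stdout with tab-separated key/value pairs. Here is a hypothetical word-count sketch in that style (the script name and stage-selection convention are made up; this is not NYT's toolkit):

```python
import sys

def mapper(lines):
    """Emit one tab-separated '<word>\t1' pair per word, as Streaming expects."""
    for line in lines:
        for word in line.split():
            yield f"{word}\t1"

def reducer(lines):
    """Sum counts per key; assumes input arrives sorted by key,
    which is what Hadoop's shuffle/sort phase guarantees."""
    current, total = None, 0
    for line in lines:
        key, value = line.rsplit("\t", 1)
        if key != current:
            if current is not None:
                yield f"{current}\t{total}"
            current, total = key, 0
        total += int(value)
    if current is not None:
        yield f"{current}\t{total}"

if __name__ == "__main__":
    # Hypothetical usage, mimicking the streaming pipeline locally:
    #   cat input.txt | python mr.py map | sort | python mr.py reduce
    stage = sys.argv[1] if len(sys.argv) > 1 else "map"
    stream = mapper if stage == "map" else reducer
    for out in stream(line.rstrip("\n") for line in sys.stdin):
        print(out)
```

The same two functions run unchanged under `hadoop jar hadoop-streaming.jar -mapper ... -reducer ...`, which is earle's point: the stdin/stdout protocol already makes any language usable.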

This has dubious benefit, and just adds another unnecessary layer into this
process. I'm not sure why this is news.

~~~
aaronblohowiak
This is a convenience layer, and it comes with a way to run your map/reduce
without having to have Hadoop on your dev box.
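
Running map/reduce without Hadoop on the dev box amounts to something like the following sketch, where an in-process sort stands in for the shuffle phase (this is a generic illustration of the idea, not the toolkit's actual runner):

```python
from itertools import groupby

def local_mapreduce(records, map_fn, reduce_fn):
    """Run a map/reduce job in-process: map every record to (key, value)
    pairs, sort by key (the 'shuffle'), then reduce each key's group."""
    pairs = sorted(kv for rec in records for kv in map_fn(rec))
    return {key: reduce_fn(key, [v for _, v in group])
            for key, group in groupby(pairs, key=lambda kv: kv[0])}

# Example: count words across lines.
result = local_mapreduce(
    ["to be or", "not to be"],
    map_fn=lambda line: [(w, 1) for w in line.split()],
    reduce_fn=lambda key, values: sum(values),
)
# result == {"be": 2, "not": 1, "or": 1, "to": 2}
```

Because the map and reduce functions see the same inputs either way, a job debugged locally like this can then be shipped to a real cluster unmodified.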

------
adw
last.fm did something similar, called Dumbo, for writing your Hadoop jobs in
Python.

------
grandalf
the least they could do is post a link to the code on github to help out a
startup -- why use google code?

