But I am not in favour of creating another stack as a Hadoop replacement. Spark fills the role of distributed computation framework very well. It uses Scala, a functional language that provides neat abstractions for map, reduce, filter, and other operations. It introduces the RDD to abstract data. All the components (GraphX, Spark SQL, BlinkDB) are built in Scala.
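To illustrate those abstractions, here is a minimal word-count sketch using plain Scala collections; Spark's RDD API exposes the same method names (map, flatMap, etc.) over distributed data, so the style carries over directly:

```scala
// Word count in the functional style Scala makes natural.
// Plain Scala collections here; an RDD would be used the same way.
val lines = List("spark is fast", "hadoop is a stack", "spark uses scala")
val counts = lines
  .flatMap(_.split(" "))                 // map phase: split lines into words
  .groupBy(identity)                     // shuffle: group identical words
  .map { case (w, ws) => (w, ws.size) }  // reduce phase: count each group
println(counts("spark"))  // prints 2
```

On an actual Spark RDD the `groupBy`/`map` pair would typically be a single `reduceByKey(_ + _)`, but the functional shape of the computation is identical.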
Also, building a new map-reduce engine is a difficult task. The computation engine sits in the middle of the stack, not on top: libraries sit on top of it, so if you build a new engine you will have to rebuild everything above it as well.
See the Spark ecosystem: https://amplab.cs.berkeley.edu/software/