
Can you explain what Tachyon does that's different from what Spark already provides?

I haven't used either Spark or Tachyon. I thought the Spark solution was to just put my dataset in memory, but the Tachyon page seems to say the same thing.




There's a slide deck[1] that explains it rather well.

Basically, Tachyon acts as a distributed, reliable, in-memory file system.

To generalise enormously, programs have problems sharing data in RAM. Tachyon lets you share data between (say) your Spark jobs and your Hadoop Map/Reduce jobs at RAM speed, even across machines (it understands data-locality, so will attempt to keep data close to where it is being used).

[1] http://www.cs.berkeley.edu/~haoyuan/talks/Tachyon_2014-10-16...
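To make the sharing part concrete, here's a minimal sketch in Scala of what that looks like from Spark. The hostnames, ports and paths are made up, and it assumes a Spark build with the Tachyon client on the classpath; the point is just that Tachyon shows up as another Hadoop-compatible filesystem URI scheme.

    import org.apache.spark.{SparkConf, SparkContext}

    object TachyonShareSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("tachyon-share"))

        // Write a filtered dataset to Tachyon via the tachyon:// URI scheme.
        // A Hadoop MapReduce job (or another Spark job) can read the same
        // path afterwards, served out of cluster RAM where possible.
        val events = sc.textFile("hdfs://namenode:8020/logs/events")
        events.filter(_.contains("ERROR"))
              .saveAsTextFile("tachyon://tachyon-master:19998/shared/errors")

        // Reading it back from a different job is just another textFile call.
        val errors = sc.textFile("tachyon://tachyon-master:19998/shared/errors")
        println(s"error lines: ${errors.count()}")

        sc.stop()
      }
    }

Since it speaks the HDFS filesystem API, anything else in the Hadoop ecosystem can read or write the same paths without bouncing the data off disk first.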


Neat, thanks for the link, the code examples towards the end make it clear that this is pretty simple to use.


Yeah, most things coming from the Spark team are excellent in that respect.

I've never used Tachyon, but based on the wonderful "getting started" experience Spark gives I'd be confident it would be similarly well thought out.


As others have explained, Tachyon does the "put dataset in memory" part.

Spark started off as the "in-memory MapReduce" but has now become a platform for running Scala, Java, Python, HiveQL, SQL and R code. It is the most active Apache project and is getting more and more powerful by the day.
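For anyone who hasn't seen it, here's a small Scala sketch (Spark 1.x style, table and class names invented) of what "platform" means in practice: the RDD API and SQL hitting the same engine in one job.

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    object SparkPlatformSketch {
      case class Purchase(user: String, amount: Double)

      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("sql-sketch"))
        val sqlContext = new SQLContext(sc)
        import sqlContext.implicits._

        // Build a DataFrame from an ordinary Scala collection...
        val purchases = sc.parallelize(Seq(
          Purchase("alice", 9.99),
          Purchase("bob", 20.00),
          Purchase("alice", 4.50)
        )).toDF()
        purchases.registerTempTable("purchases")

        // ...then query it with plain SQL; the Python, R and HiveQL front
        // ends go through the same engine underneath.
        sqlContext.sql(
          "SELECT user, SUM(amount) AS total FROM purchases GROUP BY user"
        ).show()

        sc.stop()
      }
    }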

Given how easy it is to get running, it wouldn't surprise me to see it being used in the years to come as the primary front end for all data needs.



