
Previously I posted a link to Arc (https://news.ycombinator.com/item?id=26573930), a declarative method for defining repeatable data pipelines that execute against Apache Spark (https://spark.apache.org/).

Today I would like to present a proof-of-concept implementation of the Arc declarative ETL framework (https://arc.tripl.ai) against Apache DataFusion (https://arrow.apache.org/datafusion/), an ANSI SQL (PostgreSQL dialect) execution engine built in Rust on top of Apache Arrow.

The idea of providing a declarative 'configuration' language for defining data pipelines was planned from the beginning of the Arc project, so that execution engines could be changed without rewriting the base business logic (the part that is valuable to your business). By defining an abstraction layer, we can swap the execution engine and run the same logic with different execution characteristics.
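To make the idea concrete, here is a rough sketch of what such a declarative pipeline definition looks like. This is an illustrative example only: the stage types and field names follow the general shape of Arc's JSON stage configuration, but the paths, names, and SQL are hypothetical and not taken from any real deployment.

```json
{
  "stages": [
    {
      "type": "DelimitedExtract",
      "name": "load lineitem from CSV",
      "inputURI": "/data/tpch/lineitem.csv",
      "outputView": "lineitem"
    },
    {
      "type": "SQLTransform",
      "name": "aggregate revenue per order",
      "sql": "SELECT l_orderkey, SUM(l_extendedprice) AS revenue FROM lineitem GROUP BY l_orderkey",
      "outputView": "revenue"
    },
    {
      "type": "ParquetLoad",
      "name": "write results",
      "inputView": "revenue",
      "outputURI": "/data/output/revenue.parquet"
    }
  ]
}
```

Because the pipeline is expressed as data rather than engine-specific code, a job like this can, in principle, be handed to either the Spark-backed runner or the DataFusion-backed runner unchanged.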

The benefit of DataFusion over Apache Spark is significantly faster execution with lower resource requirements. Even through the Docker-for-Mac inefficiency layer, the same job completes in ~4 seconds with DataFusion vs ~24 seconds with Apache Spark (including JVM startup time). Without the Docker-for-Mac layer, end-to-end execution times of ~0.5 seconds are possible for the same example job (TPC-H). (The aim is not to start a benchmarking flamewar but to provide some indicative data.)

The purpose of this post is to gather feedback from the community: would you use a tool like this, what features would it need for you to adopt it (MVP), and would you be interested in contributing to the project? I would also like to highlight the excellent work being done by the DataFusion/Arrow (and wider Apache) community in providing such amazing tools to us all as open source projects.

Edit: fix links
