Kafka is still a pretty chunky dependency.
Have you thought about getting this to work with the new streams stuff in Redis? https://brandur.org/redis-streams
Redis is SO MUCH less hassle to get running (in both dev and production) than Kafka.
Yes, we're definitely interested in supporting Redis Streams!
Faust is designed to support different brokers, but Kafka is the only implementation as of now.
What are the advantages of using Faust, feature-wise or otherwise? Are you planning on feature parity with Flink (triggers, evictors, a process function equivalent, etc.)? I can definitely see its value in better compatibility with ML/AI environments and the rest of the Python toolkit. But specifically regarding ML/AI, I can usually export those things into the JVM. And with regard to Flask/Django, I can use Scala's http4s.
Sorry if that seemed like a lot! Trust me very excited about this project!
But when it comes to streaming, I don't think Spark Structured Streaming is very close, although Spark 2.3.0 might be sufficient for your use cases; I use Spark Structured Streaming only when I have to deploy a Spark pipeline/ML model.
Besides the features I listed, I think Flink is just better built for streaming.
For example, defining checkpoints and state management in Flink is so much more expressive and easier to write (and perhaps more performant from what I saw at Flink Forward).
Flink's Async I/O is amazing!
The Flink table/sql API is not as nice as Spark SQL for batch right now, but it is getting there.
Flink also seems to perform better than Spark Structured Streaming when you turn on object reusability, at least according to Alibaba and Netflix.
And then, the features I listed are amazing. Evictors and triggers are super helpful. Process Functions let me write custom aggregations without a lot of mental overhead.
Time Characteristics are really nice in Flink.
Flink also has a lot of nice connectors already written (although Parquet files are troublesome if they come from Spark, because of the way Spark handles its read and write support classes). By contrast, you have to write your own Kinesis connector in Spark Structured Streaming (or use the one Databricks put out on their blog).
Sorry if this seems all over the place. I just posted what came to mind. And I should add, Databricks is doing a lot of work to make Spark Structured Streaming really comparable, so who knows what stuff looks like in 2-3 years. Like I said above, I use Spark Structured Streaming (2.3.0+) for some very specific ML stuff that my Flink environment cannot handle nicely yet.
My only complaint is that there isn't a lot of community written code out there, but the Data Artisans team is super active, helpful, and nice.
- A Stream iterates over a channel
- A Channel implements: `channel.send` and `channel.__aiter__`.
- A topic is a "named" channel backed by a Kafka topic
- Further, the topic is backed by a Transport
- The Transport is very Kafka-specific
To implement support for AMQP, the idea is that you only need to implement a custom channel.
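As a rough sketch of what that interface boils down to (an in-memory stand-in, not real AMQP code):

```python
import asyncio

# Minimal channel implementing the two methods a Stream needs.
# A real AMQP channel would talk to the broker instead of a queue.
class InMemoryChannel:
    def __init__(self) -> None:
        self._queue: asyncio.Queue = asyncio.Queue()

    async def send(self, value) -> None:
        # AMQP version: publish the value to an exchange.
        await self._queue.put(value)

    def __aiter__(self):
        return self

    async def __anext__(self):
        # AMQP version: consume the next message from a broker queue.
        return await self._queue.get()
```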
If you open an issue we can consider how to best implement it.
Simplicity is of course a goal, but this may mean we have to sacrifice some features when Redis Streams is used as a backend.
I'm glad you like Celery; this project in many ways realizes what I wanted it to be.
https://github.com/closeio/tasktiger (supports subqueues to prevent users bottlenecking each other)
https://github.com/Bogdanp/dramatiq (high message throughput)
If you have a Faust app that depends on a particular feature we strongly suggest you submit it as an integration test for us to run.
Hopefully some day Celery will be able to take the same approach, but running cloud servers costs money that the project does not have.
These seem like pretty important details to acknowledge up front.
Are there plans to support idempotent processing, or is this already implemented and I just missed it?
The developer-facing API looks very nice!
We do plan on adding support for Kafka's exactly-once semantics to Faust. As @dleclere pointed out, streaming apps are simply a topology of consumers and producers. We will be adding this as soon as the underlying Kafka clients add support for exactly-once semantics.
Quite the /faustian/ bargain you propose...
Other than their being cumbersome, perhaps, but that is upfront, so I don't see the Faustian-ness there.
As a corollary though - the stronger the alignment with reality, the better the comedic payoff - all other things being equal.
It is called Functional Audio Stream for a reason.
I'm assuming the entrypoint would need a `django.setup()`, and then you'd import the modules that have the Faust tasks? A how-to in the docs would be very useful.
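Something like this is what I'm picturing (module names made up):

```python
# entrypoint.py -- my guess at the shape of it; module names are made up
import os

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')

import django
django.setup()  # configure Django before any models are imported

# Import modules defining Faust agents only *after* setup, since
# they may import Django models at module level.
from myproject.streaming import app  # noqa: E402

if __name__ == '__main__':
    app.main()  # hand control to the Faust command line
```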
I see in one of the other comments something about acking due to an exception. This is one of the major issues with Celery IMO (at least before version 4, when the new setting was added). I'd hope that exception acking would at least be configurable.
One question I had is how Robinhood might be doing message passing and persistence. For example, I might have a stream that creates more messages that need to be processed, and then eventually I would want to persist the "Table" to an actual datastore.
Thanks for pointing this out. Fixed the links. You can also find the docs here: http://faust.readthedocs.io/en/latest/
Faust uses Kafka for message passing. The new messages you create can be pushed to a new topic and you could have another agent consuming from this new topic. Check out the word count example here: https://github.com/robinhood/faust/blob/9fc9af9f213b75159a54...
Also note that the Table is persisted in a log-compacted Kafka topic. This means we are able to recover the state of the table in the case of a failure. However, you can always write to any other datastore while processing a stream within an agent. We do have some services that process streams and store state in Redis and Postgres.
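Trimmed down, the shape of that example is roughly (broker URL is a placeholder):

```python
import faust

app = faust.App('word-count', broker='kafka://localhost:9092')

texts_topic = app.topic('texts', value_type=str)
words_topic = app.topic('words', value_type=str)

# Backed by a log-compacted changelog topic, as described above.
word_counts = app.Table('word-counts', default=int)

@app.agent(texts_topic)
async def split_words(texts):
    async for text in texts:
        for word in text.split():
            # Derived messages go to a second topic, keyed by word
            # so each word lands on a stable partition ...
            await words_topic.send(key=word, value=word)

@app.agent(words_topic)
async def count_words(words):
    # ... which another agent consumes to update the table.
    async for word in words:
        word_counts[word] += 1
```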
I think Storm/Flink have the concept of a "tick tuple": a message that comes every n seconds to tell the worker to flush the table to some other store. I've been looking over the project, and I'm not sure how I would do this in Faust yet; as far as I understand, the "Table" is partitioned, so you'd have to send a tick to every worker.
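My best guess so far: Faust seems to have an `@app.timer` decorator that fires on every worker, so each worker could flush just the table partitions it holds locally. Something like this, if I'm reading it right (the datastore write is hypothetical):

```python
import faust

app = faust.App('flusher', broker='kafka://localhost:9092')
counts = app.Table('counts', default=int)

async def write_to_datastore(key: str, value: int) -> None:
    ...  # hypothetical sink: Postgres, Redis, etc.

@app.timer(interval=10.0)
async def flush_counts():
    # Assuming each worker's local table view covers only the
    # partitions assigned to it, this flushes just the local shard.
    for key, value in counts.items():
        await write_to_datastore(key, value)
```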
Very interesting project!
Is it an apples-and-oranges comparison? When would one use this instead of Celery?
Found the following for those curious:
It's important to remember that users had difficulty understanding the concepts behind Celery as well; perhaps it's more approachable now that you're used to it.
Using an asynchronous iterator for processing events enables us to maintain state. It's no longer just a callback that handles a single event, you could do things like "read 10 events at a time", or "wait for events on two separate topics and join them".
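For instance, a rough sketch of the "read 10 events at a time" case, using the stream's `take` helper:

```python
import faust

app = faust.App('batcher', broker='kafka://localhost:9092')
events = app.topic('events', value_type=str)

@app.agent(events)
async def process(stream):
    # Buffer up to 10 events (or however many arrive within 30s)
    # and handle them as one batch -- state spans loop iterations.
    async for batch in stream.take(10, within=30.0):
        print(f'processing {len(batch)} events')
```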
What's interesting about the Python angle is that I've noticed data science teams who want to manage streams in production, but there tends to be a heavy dependency on Python and its ecosystem (NumPy, SciPy, TensorFlow).
I haven't used RocksDB. Does being a LevelDB fork give it similar corruption issues?
We use RocksDB only as a cache and use log-compacted Kafka topics as the source of truth. In the case of RocksDB corruption, we can simply delete the local RocksDB cache and the Faust app should recover (and rebuild the RocksDB state) from the log-compacted Kafka topic.
Ex ante Python seems like a poor choice for a large distributed system where network and CPU efficiency are important. Existing solutions have good performance, are easy to use, and thanks to broad adoption and community support, are very well maintained.
I can't speak for the developer, but a few things stood out to me:
1) It's Python, which means there's no impedance mismatch in using numerical/data science libraries like NumPy and the like. For data engineers trying to productionize data science workloads, this is quite compelling -- no need to throw away all of the Python code written by data scientists. This also lowers the barrier to entry for streaming code.
2) I had the same reservations about CPU efficiency, but it looks like they're using best-in-class libraries (RocksDB is C++, uvloop is C/Cython, etc.). I was at a PyCon talk where the speaker demoed Dask (a distributed computation system similar to Spark) running on Kubernetes, and it was very impressive. Scalability didn't seem to be an issue; Dask actually outperformed Spark in some instances.
I wonder if Kubernetes is the key to making these types of solutions competitive with traditional JVM type distributed applications like Spark Streaming, etc.
3) Not all streaming data is real-time. In fact, streaming just means unbounded data with no inherent stipulation of near real-time SLAs. Real-time is actually a much stricter requirement.
This answer sums up why we wanted to use Python for this project: it's the most popular language for data science and people can learn it quickly.
Performance is not really a problem either: with Python 3 and asyncio we can process tens of thousands of events/s. I have seen 50k events/s, and there are still many optimizations that can be made.
That said, Faust is not just for data science. We use it to write backend services that serve WebSockets and HTTP from the same Faust worker instances that process the stream.
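Roughly, that pattern looks like this (a trimmed sketch; note that each worker only sees its own table partitions):

```python
import faust

app = faust.App('views', broker='kafka://localhost:9092')
page_views = app.topic('page-views', value_type=str)
view_counts = app.Table('view-counts', default=int)

@app.agent(page_views)
async def count(urls):
    async for url in urls:
        view_counts[url] += 1

# The same worker process also answers HTTP from the table it owns.
@app.page('/total/')
async def total(web, request):
    return web.json({'total': sum(view_counts.values())})
```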
Not every component in a distributed system needs to be "efficient". There is a broad class of software solving real problems that can benefit from Python having a stream processor on top of robust infrastructure.
Also, the fact that Robinhood uses this in production suggests it performs reasonably well and is battle-tested enough to handle operations behind a popular app such as theirs.
Definitely excited to try this out to see how this fares compared to Kafka Streams and Spark Structured streaming, both of which we use.