
Realtime Data Processing at Facebook
https://research.facebook.com/publications/realtime-data-processing-at-facebook/
======
levbrie
Facebook's targeting of seconds rather than milliseconds, and their reasoning
behind it, ought to serve as a template for companies with similar
requirements. Discussions of latency in Spark often center on micro-batching
and just how low you can go. In the context of Spark, that makes perfect
sense: if your system needs to guarantee sub-80-millisecond latency, you
don't want to commit to a system that isn't designed for it. But in the
larger context, there are plenty of systems that can provide close-to-ideal
performance with a latency of a few seconds. We've divided up our realtime
stream processing infrastructure in a similar way: features that require
millisecond latency go through a completely different pipeline than the
realtime features that involve heavy processing and deliver all of their
intended benefits with a few seconds of latency. What really matters is
knowing which is which.
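The split described above can be sketched as a simple dispatch layer. This is a hypothetical illustration, not the commenter's actual system: the feature names, the tier registry, and both pipeline functions are invented stand-ins for whatever low-latency and micro-batched backends a real deployment would use.

```python
from enum import Enum


class LatencyTier(Enum):
    MILLISECONDS = "milliseconds"  # synchronous, in-memory path
    SECONDS = "seconds"            # micro-batched / queued path


# Hypothetical registry: each feature declares its latency requirement
# up front, so routing is explicit rather than incidental.
FEATURE_TIERS = {
    "fraud_check": LatencyTier.MILLISECONDS,
    "trending_topics": LatencyTier.SECONDS,
}


def process_low_latency(event: dict) -> str:
    # Stand-in for the millisecond-scale pipeline.
    return "low-latency"


def enqueue_for_microbatch(event: dict) -> str:
    # Stand-in for the seconds-scale, heavy-processing pipeline.
    return "micro-batch"


def route(feature: str, event: dict) -> str:
    """Send an event down the pipeline matching the feature's declared tier."""
    tier = FEATURE_TIERS.get(feature, LatencyTier.SECONDS)  # default to the cheaper path
    if tier is LatencyTier.MILLISECONDS:
        return process_low_latency(event)
    return enqueue_for_microbatch(event)
```

The point of the explicit registry is the comment's closing line: knowing which feature belongs to which tier is a design decision made once, not rediscovered per event.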

