Nearly all data sources, including the changelogs of databases, are polling-based APIs, so you’re getting data from the source in (small) batches. If your goal is to put this data into a data warehouse like Snowflake, or a system like Materialize, the lowest latency thing you can do is just immediately put that data into the destination. I sometimes see people put a message broker like Kafka in the middle of this process, thinking it’s going to imbue the system with some quality of streamyness, but this can only add latency. People are often surprised that we don’t use a message broker at Fivetran, but when you stop and think about it there’s just no benefit in this context.
Author here. 100% agreed.
As an aside, I just came across your post about how Databricks is an RDBMS. I recently wrote a similar article from a slightly more abstract perspective.
Having worked heavily with RDBMSs in the first part of my career, I feel like so many of the concepts and patterns I learned about there are being re-expressed today with modern, distributed data tooling. And that was part of my inspiration for this post about data pipelines.
Another, closely related article is "The Log: What every software engineer should know about real-time data's unifying abstraction" by Jay Kreps.
So much this.
I work on Flow at Estuary; you might be interested in what we're doing. It offers millisecond latency continuous datasets that are _also_ plain JSON files in cloud storage -- a real-time data lake -- which can be declaratively transformed and materialized into other systems (RDBMS, key/value, even pub/sub) using precise, continuous updates. It's the materialized view model you articulate, and Flow is in essence composing data lake-ing with continuous map/combine/reduce to make it happen.
I was asked the other day if Flow "is a database" by someone who only wanted a 2-3 sentence answer, and I waffled badly. The very nature of "databases" is so fuzzy today. They're increasingly unboxed across the Cambrian explosion of storage, query, and compute options now available to us: S3 and friends for primary storage; on-demand MPP compute for query and transformation; wide varieties of RDBMSs, key/value stores, OLAP systems, even pub/sub as specialized indexes for materializations. Flow's goal, in this worldview, is to be a hypervisor and orchestrator for this evolving cloud database. Not a punchy elevator pitch, but there it is.
Have you read ThoughtWorks material on the "data mesh"? Sounds like your product is looking to be a part of that kind of new data ecology.
It is definitely anticlimactic after data warehousing, data lakes, data lakehouses, etc. to just throw up your hands and say "data whenever you want it, wherever you want it, in whatever form you want it" (at whatever price you're willing to pay!). So I feel your pain on marketing your product, but I think the next 5 years or so will be heavily focused on automating data quality and standardized pipelines, computational governance and optimizing workloads, and intelligent "just in time" materialization, caching, HTAP, etc.
Your big play in my mind is helping customers optimize the (literal financial) cost/benefit tradeoff of all those compute and query engines.
Is it basically to set up one data lake per analytics team, instead of a big centralized data lake?
In my experience this is already being done. Each team has their own S3 bucket in which they store their data products, and other teams can consume them using their favorite data engines, as long as they can read Parquet files from S3.
Or am I missing something?
Could you share a bit about how engineers express data transformations in Flow?
From a quick look at the docs, it doesn't look like you are using SQL to do this, which is interesting since it bucks the general trend in data tooling.
It's only on you to write mapper functions -- combine & reduce happen automatically, using the reduction annotations of the collection's JSON schema. Think of it as a SQL GROUP BY where the aggregate functions have been hoisted into the schema itself.
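To make the idea concrete, here's a rough Python sketch of schema-driven reduction. This is not Flow's actual API or annotation syntax; the field names and strategy labels are purely illustrative of "aggregate functions hoisted into the schema":

```python
# Illustrative sketch (not Flow's real API): reduction strategies are declared
# alongside the schema, so merging two documents needs no user-written code.
REDUCTIONS = {
    "rides": "sum",             # numeric counters are summed
    "last_seen": "maximize",    # keep the greatest value seen
    "station": "lastWriteWins", # the later document wins
}

def reduce_docs(left, right, reductions=REDUCTIONS):
    """Merge two documents keyed on the same entity, field by field,
    according to each field's declared reduction strategy."""
    out = dict(left)
    for field, value in right.items():
        strategy = reductions.get(field, "lastWriteWins")
        if strategy == "sum":
            out[field] = out.get(field, 0) + value
        elif strategy == "maximize":
            out[field] = max(out.get(field, value), value)
        else:  # lastWriteWins
            out[field] = value
    return out
```

The point being: because the merge behavior lives on the schema, the runtime can combine partial aggregates anywhere in the pipeline without invoking user code.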
Here's a worked-up narrative example using Citi Bike system data:
> it bucks the general trend in data tooling
You can say that again. It's a bet, and I don't know how it will work out. I do think SQL is a poor fit for expressing long-lived workflows that evolve over joined datasets / schemas / transformations, or which are more operational vs analytical in nature, or which want to bring in existing code. Though I _completely_ understand why others have focused on SQL first, and it's definitely not an either / or proposition.
Pure batch systems are those where the processing window is infinite and no state is preserved: everything is recomputed from scratch on every run. This seems to be the preferred way to do ETL, because dragging state around risks polluting it if it isn't handled properly, so it's better avoided.
More useful for real-world data processing would be an "incremental batch" model, in which the processing system remembers what it has processed so far, compares that against the source data, and determines what to run in the next update batch.
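A minimal sketch of that model, assuming partitions are named by date and a local JSON file stands in for real state storage (file name and structure are made up):

```python
import json
import os

STATE_FILE = "pipeline_state.json"  # hypothetical location for pipeline state

def load_state():
    """Remember what has been processed so far (a high-water mark)."""
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    return {"last_loaded": None}

def save_state(state):
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)

def plan_batch(source_partitions, state):
    """Compare source partitions against remembered state; only the
    partitions beyond the high-water mark need to run."""
    done = state["last_loaded"]
    return [p for p in sorted(source_partitions) if done is None or p > done]

def run_incremental(source_partitions, process):
    state = load_state()
    for partition in plan_batch(source_partitions, state):
        process(partition)      # idempotent per-partition load
        state["last_loaded"] = partition
        save_state(state)       # persist progress after each partition
```

The framework-shaped part is only `plan_batch` plus state persistence; the actual transformation stays an ordinary idempotent function, which is exactly the separation of concerns the comment is asking for.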
Sadly, the industry is plagued with either pure streaming solutions, even though most data problems are not of this nature, or ETL and workflow systems that think in terms of a pure batch model. This leaves me implementing the logic for incremental loads myself, while not finding these ETL frameworks very useful.
I've honestly had more luck writing scripts myself than relying on the excessively complicated ETL frameworks out there. They only seem to convolute everything together, like Ruby on Rails back in the day, instead of separating concerns like a small HTTP library or web microframework.
Is there anything out there on the horizon which focuses on incremental batch processing, or, as the article points out, on updating materialized views that I manage myself?
In real life most people prefer taking a full snapshot each day because they don't have good solutions to these problems in batch systems (CDC is another story).
The whole point of ETL is to bring data from one database to another. The comparison of source and destination primary keys can be done in Python, outside the db, and it should be done on entire partitions instead of individual rows. E.g. you only consider which 'day' partitions have been loaded, not which rows have been loaded.
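That partition-level comparison boils down to a set difference over partition keys, something like this sketch (the two lists would come from, e.g., an S3 listing and a destination metadata query; names are illustrative):

```python
def partitions_to_load(source_days, loaded_days):
    """Day-level diff: which source partitions are missing at the destination.
    The comparison happens outside the database, on partition keys only,
    never on individual rows."""
    return sorted(set(source_days) - set(loaded_days))
```

Because the diff is over day keys rather than rows, it stays cheap even when each partition holds millions of records.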
"Negotiating with data provider" is never going to happen - SAP or IBM or whatever vendor of whatever you're integrating is not going to change how their systems work to make your life easier - more likely they would use it as an opportunity to pitch their reporting solution instead.
Meaning that instead of simple data movement, you end up needing CDC on the source end, then the incremental movement itself, then deduplication on the target end, and that last step is pretty computationally expensive.
For small data and low refresh frequencies (like singular gigabytes in source size, so hundreds of megabytes in columnar, updated daily), this dance might not be worth it compared to daily full snapshots.
I wish you were right though, my life would be hella easier.
Data providers usually wait to collect a day's worth of data or so before validating and releasing it to customers.
For integrating some OLTP database updating in real time on the other hand, yes you will need CDC.
Most of data engineering is just incrementally adding new data to an existing corpus and then running a big batch job to dedup, sort or partition it. This last step surely is computationally expensive, but at least it is conceptually simple and can be solved by throwing hardware at it. The first part, the incremental updates, is what imo causes more trouble.
The last step being computationally expensive kinda puts a lower bound on the latency of streaming replication from CDC, especially if you're trying to do it at scale and can't fine-tune each individual table's partitioning.
Boy, how do I wish all data was incremental events.
I'd beg to differ; having a message broker as part of the streaming pipeline allows you to set up multiple consumers (e.g. Snowflake and Elasticsearch and something else). Also, depending on the settings, you can replay streams from the beginning.
That's why we see >90% of Debezium users running with Kafka or alternatives. There is a group of point-to-point users (e.g. caching use cases) mostly via Debezium engine, but it's a small minority.
Gazette consumers use this strategy: https://gazette.readthedocs.io/en/latest/consumers-concepts....
It is interesting when it works. :)
Things started out in a scholarly vein, but the rush of commerce hasn't allowed much time to think where we're going. — James Thornton, Considerations in computer design (1963)
Here's the WordNet RDF Linked Data for "jargon"; like the "Jargon File":
A Semantic MediaWiki Thesaurus? https://en.wikipedia.org/wiki/Semantic_MediaWiki :
> Semantic MediaWiki (SMW) is an extension to MediaWiki that allows for annotating semantic data within wiki pages, thus turning a wiki that incorporates the extension into a semantic wiki. Data that has been encoded can be used in semantic searches, used for aggregation of pages, displayed in formats like maps, calendars and graphs, and exported to the outside world via formats like RDF and CSV.
Google Books NGram viewer has "word phrase" term occurrence data by year, from books: https://books.google.com/ngrams
1. You need to colocate both stream processing for the data pipeline and real-time materialized view serving for the results.
2. You need one paradigm for expressing both of these things.
Let me try to describe a bit why that is.
1. You virtually always need both stream processing and view serving in practice. In the real world, you ingest data streams from across the company and generally don’t have a say about how the data arrives. Before you can do the sort of materialization the author describes, you need to rearrange things a bit.
2. Building off of (1), if these two aren’t conceptually close, it becomes hard to make the whole system hang together. You still effectively have the same mess—it’s just spread over more components.
This is something we’re working really hard on solving at Confluent. We build ksqlDB (https://ksqldb.io/), an event streaming database over Kafka that:
1. Lets you write programs that do stream processing and real-time materialized views in one place.
2. Lets you write all of it in SQL. I see a lot of people on this post longing for bash scripting, and I get it. These frameworks are way too complicated today. But to me, SQL is the ideal medium. It’s both concise and deeply expressive. Way more people are competent with SQL, too.
3. Has built-in support for connecting to external systems. One other, more mundane part of the puzzle is just integrating with other systems. ksqlDB leverages the Kafka Connect ecosystem to plug into 120+ data systems.
You can read more about how the materialization piece works in a recent blog post I did: https://www.confluent.io/blog/how-real-time-materialized-vie...
Also, although built-in materialized views don't allow partial updates in Postgres, you can get a similar thing with normal tables and triggers. Hashrocket discussed that strategy here: https://hashrocket.com/blog/posts/materialized-view-strategi...
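As a minimal illustration of that trigger strategy (shown here with SQLite via Python for brevity rather than Postgres, which the Hashrocket post targets; table and trigger names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE transactions (account TEXT, amount INTEGER);
CREATE TABLE balances (account TEXT PRIMARY KEY, balance INTEGER);

-- Incrementally maintain the "materialized view" on every insert,
-- instead of recomputing the aggregate from scratch.
CREATE TRIGGER maintain_balance AFTER INSERT ON transactions
BEGIN
  INSERT OR IGNORE INTO balances VALUES (NEW.account, 0);
  UPDATE balances SET balance = balance + NEW.amount
   WHERE account = NEW.account;
END;
""")

conn.executemany("INSERT INTO transactions VALUES (?, ?)",
                 [("a", 100), ("b", 50), ("a", -30)])
print(conn.execute(
    "SELECT account, balance FROM balances ORDER BY account").fetchall())
# [('a', 70), ('b', 50)]
```

The same shape works in Postgres with a trigger function; the key point is that each write updates only the affected row of the summary table, giving you the partial-update behavior built-in materialized views lack.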
As early as 2004, developers using Oracle were figuring out how to express complex constraints declaratively (i.e. without using application code or database triggers) by enforcing them on materialized views.
It's quite impressive, but this level of sophistication in what materialized views can do and how they are used does not seem to have spread far beyond Oracle.
A long time ago, I tried to use SQL Server indexed views to maintain a bank account balances table based on transaction history.
I forget what I ended up doing, but I remember that one of the downsides of using indexed views was that they didn't support any constraints. There are many restrictions on what you can and can't put in a SQL Server indexed view.
In this regard, I think Oracle has a more mature materialized view offering, though I personally haven't used Oracle much and don't know how well their materialized views work in practice.
A materialized view is nothing more than a snapshot cache, a one-time ETL job. So it can abide by any constraints and is completely untethered from the data that created it, but you have to create your own maintenance cycle, including schema validation and any dynamic / non-deterministic aspects of the MV.
An indexed view is modified just like the clustered index of any table object upon which it depends, as an affected "partition" of the DML. That's what the SCHEMABINDING keyword is for: binding the view to any DML statements on its underlying base table(s).
So there's no need to maintain it at all, at the expense of conforming to a fairly rigid set of constraints to ensure that maintenance stays ..umm ..maintainable.
In practice most views' logic is perfectly simpatico with the constraints of an indexed view; the tradeoff is write performance vs the "cache hit" of your view.
I do way more OLAP/HTAP engineering in my day job so indexed views are less common vs. Columnstores, but indexed views are a highly underutilized feature of SQL Server.
Data engineering really is just the maintenance of incrementally updated materialized views, but no tool out there recognizes it yet.
At best they help you orchestrate and parallelize your ETLs across multiple threads and machines. They become glorified makefiles, at the cost of introducing several layers of infrastructure (e.g. Airflow) for what should have been solved by simple bash scripting.
Yet at best these tools only help with stateless batch processing. When it comes down to stateful processing, which is necessary for maintaining incrementally updated materialized views and idempotent loads, I have to couple the logic of view state management (what has been loaded so far) with the logic of the actual data transformation.
The industry's response to the difficulties of batch ETL is usually: batch data processing systems are resource hungry and slow, all you need from now on is streaming.
No, actually I don't. For data analytics, pure streaming has almost no application. Data analytics is essentially compression of big data into something smaller, i.e. some form of group-by. I have to wait for a window of data to close before computing anything useful. Analytics on unclosed windows of real "real time" data is confusing and useless.
So all data analytics will ever run on groups, windows and batches of data. Therefore I need a system that will help me run data transformations on batches; more precisely, on a stream of smaller batches. I need it to react to incoming daily, hourly or minutely batches, and I need it to backfill my materialized view in case I decide to wipe it and start again.
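A toy sketch of that "stream of smaller batches" idea: buffer events into fixed windows and only emit an aggregate once a later event shows the window has closed (the watermark logic and names are illustrative, and real systems must also handle late data):

```python
from collections import defaultdict

def closed_window_aggregates(events, window_size=3600):
    """events: iterable of (unix_ts, value), roughly time-ordered.
    Buffers values into fixed windows and only emits a (window_start, total)
    once a later event proves the window can no longer receive data."""
    open_windows = defaultdict(int)
    watermark = float("-inf")
    for ts, value in events:
        window = ts - ts % window_size
        open_windows[window] += value
        watermark = max(watermark, ts)
        # sorted() copies the keys, so popping while iterating is safe
        for w in sorted(open_windows):
            if w + window_size <= watermark:     # window is closed
                yield (w, open_windows.pop(w))
    for w in sorted(open_windows):               # end of stream: flush the rest
        yield (w, open_windows[w])
```

Each emitted pair is one "small batch" result; wiping the view and replaying the full event history through the same function is the backfill case.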
You can literally do this in what was supposed to be the original system for orchestrating a bunch of programs: shell scripting. And you'll be happier for it than using the current complex frameworks. The only things you will miss are something to run distributed cron and something to distribute load to multiple machines. At least the latter can be handled by GNU parallel.
This article hits the nail on the head in describing what the conceptual model for ETL actually is, and once others follow, we might finally see new frameworks, or just libraries, that greatly simplify ETL. Perhaps one day data engineering will be as simple as running an idempotent bash or python or sql script, or even close to nonexistent.
1. It is only for structured SQL, not for arbitrary data. I can't use it to unpack raw zipped data for example
2. It couples the logic for data transformation and view state management. Actually, it makes you do the latter yourself, so it doesn't really help at all. You'll get burned by storing view state together with your data, e.g. when a batch increment contains no data.
3. It is not built with "incremental materialized views" in mind. It still thinks in terms of a batch refill mode and an incremental mode, according to this.
It is certainly an improvement over managing sql scripts by hand, but far from the ultimate goal of maintaining materialized views in a declarative way.
Maybe I am just too used to application developer workflows where models are defined in code and then there are ORMs and schema migration tools to help manage all that.
RavenDB, on the other hand, seems like an OLTP NoSQL database.
"Turning the database inside-out with Apache Samza" (2015): https://www.confluent.io/blog/turning-the-database-inside-ou...
previous discussion https://news.ycombinator.com/item?id=9145197
Having spent 15+ years writing big data pipelines and building teams who do the same, I couldn't agree more... the conceptual model we're all quite comfortable with is this notion of cascading, materialized views. The challenge, however, is that they are expensive to maintain in a big data pipeline context -- paid either in system resource cost, or developer cost. The only reasonable way to achieve this is a fundamental shift away from imperative pipelines, and to declarative orchestration (a few folks mention this as well). We've seen this in other domains with technologies like React, Terraform, Kubernetes, and more to great success.
I've written about this in tldr form @ https://www.ascend.io/data-engineering/, namely the evolution from ETL, to ELT, to (ETL)+, to Declarative. I also gave a more detailed tech talk on this topic @ https://www.youtube.com/watch?v=JcVTXC0qPwE.
For those who are interested in a longer white paper on data orchestration methodologies, namely imperative vs declarative, this is a good read: https://info.ascend.io/hubfs/Whitepapers/Whitepaper-Orchestr...
That being said, it's very powerful, I love it
It's pretty non-trivial to try to make an effective-dated slowly changing dimension with materialized views.
A good tool makes the medium-difficulty stuff easy, and the complicated stuff possible. Materialized views do only the former.
I would love to be wrong about this.
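For context, the bookkeeping a type-2 (effective-dated) dimension needs on every change, which is exactly what plain materialized views struggle to express, looks roughly like this hedged sketch (column names and the high-date sentinel are made up):

```python
def scd2_apply(dimension, key, new_attrs, as_of):
    """Type-2 update: close out the current row for `key` and open a new one.
    Rows carry valid_from / valid_to; '9999-12-31' marks the current row."""
    HIGH_DATE = "9999-12-31"
    for row in dimension:
        if row["key"] == key and row["valid_to"] == HIGH_DATE:
            if all(row.get(k) == v for k, v in new_attrs.items()):
                return dimension          # no real change, nothing to do
            row["valid_to"] = as_of       # effective-date the old version
    dimension.append({"key": key, **new_attrs,
                      "valid_from": as_of, "valid_to": HIGH_DATE})
    return dimension

dim = [{"key": 1, "city": "Oslo",
        "valid_from": "2020-01-01", "valid_to": "9999-12-31"}]
scd2_apply(dim, 1, {"city": "Bergen"}, "2021-06-01")
# dim now holds Oslo (valid until 2021-06-01) and Bergen (current)
```

The awkward part for a materialized view is that an update must both mutate an existing row and insert a new one, conditioned on whether the attributes actually changed, which is imperative history-keeping rather than a pure query over the source.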
Are you thinking of a specific implementation of materialized views? Most implementations from traditional RDBMSs would indeed be too limiting to use as a general data pipeline building block.
The post doesn't argue that, though. It's more about using materialized views as a conceptual model for understanding data pipelines, and hinting at some recent developments that may finally make them more suitable for more widespread use.
From the conclusion:
> The ideas presented in this post are not new. But materialized views never saw widespread adoption as a primary tool for building data pipelines, likely due to their limitations and ties to relational database technologies. Perhaps with this new wave of tools like dbt and Materialize we’ll see materialized views used more heavily as a primary building block in the typical data pipeline.
I'm not a master, but I believe a core piece of truth for data is:
Move the data as little as possible
If you can use federated query (and the cost/performance is acceptable), do so. If you can use materialized views, do so. Data replication has tons of issues: you almost always have the two generals problem and reconciliation / recompute procedures. If you don't move the data, the original storage is the source, and it is always right.
I went a bit beyond responding directly to you there, but I do think materialize.io and Delta Lake and declarative pipelines are the solution to 95% of the data problems out there.
I'm speaking conceptually about materialized views as system implementations differ.