Hacker News
Ask HN: Is there a way to efficiently subscribe to an SQL query for changes?
200 points by vaughan 22 days ago | 104 comments
I know [RethinkDB][1] used to do this with their SQL-like ReQL language, but I looked around a bit and can't find much else about it - and I would have thought it would be more common.

I'm more interested in queries with joins and doing it efficiently, instead of just tracking updates to the tables that are modified and re-running the entire query.

If we think about modern frontends using SQL-based backends, essentially every time we render, it's ultimately the result of a tree of SQL queries (queries depend on results of other queries) running in the backend. Our frontend app state is just a tree of materialized views of our database which depend on each other. We've got a bunch of state management libraries that deal with trees, but they don't fit so well with relational/graph-like data.

I came across a Postgres proposal for [Incremental View Maintenance][2] which generates a diff against an existing query with the purpose of updating a materialized view. Oracle also has [`FAST REFRESH`][3] for materialized views.

I guess it's relatively easy to do until you start needing joins or traversing graphs/hierarchies - which is maybe why it's avoided.

EDIT: [Materialize][4] looks interesting in this space: "Execute streaming SQL Joins" but more focused on event streams rather than general-purpose DML/OLTP.

[1]: https://github.com/rethinkdb/rethinkdb_rebirth

[2]: https://wiki.postgresql.org/wiki/Incremental_View_Maintenance

[3]: https://docs.oracle.com/database/121/DWHSG/refresh.htm#DWHSG8361

[4]: https://materialize.com/




I'm not sure I would call this relatively easy even without joins. Without joins, Google Cloud Firestore does exactly what you're describing: the initial query runs against the DB, then the client gets a filtered stream of updates to only that query. It's distributed and scales logarithmically with the number of queries, as it doesn't need to keep the query results in memory/materialized.

The fun part of this problem is that it's really inverted from most traditional database literature out there. Mostly the problem is you have a query and need to find the matching rows. To make updates efficient you need to do this the other way around - if there is a row (that has changed) find all the matching queries.

With joins (or any data-dependent query, where you can't tell whether a row is in the result set without looking at other data) you need to keep the query results materialized; otherwise you don't have enough information without going back to disk or keeping everything in memory, which isn't really feasible in most cases.
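To illustrate the inversion (a toy sketch, nothing like the real Firestore data structures): index the subscriptions by their predicates, so that a changed row can look up which queries it satisfies.

```python
# Toy "reverse query" matcher: instead of finding rows for a query,
# find the subscribed queries that a changed row satisfies.
from collections import defaultdict

class ReverseQueryIndex:
    def __init__(self):
        # (field, value) -> set of query ids with an equality predicate on it
        self.eq_index = defaultdict(set)
        self.predicates = {}  # query id -> list of (field, value)

    def subscribe(self, qid, predicates):
        self.predicates[qid] = predicates
        for field, value in predicates:
            self.eq_index[(field, value)].add(qid)

    def matching_queries(self, row):
        # Candidate queries share at least one predicate with the row...
        candidates = set()
        for field, value in row.items():
            candidates |= self.eq_index[(field, value)]
        # ...then every predicate is verified against the changed row.
        return {q for q in candidates
                if all(row.get(f) == v for f, v in self.predicates[q])}

idx = ReverseQueryIndex()
idx.subscribe("q1", [("type", "customer")])
idx.subscribe("q2", [("type", "customer"), ("region", "eu")])
idx.matching_queries({"type": "customer", "region": "us"})  # -> {"q1"}
```

Equality predicates only, and everything in one process's memory - which is exactly the point of the comment above: once predicates depend on other data (joins), a lookup structure like this is no longer enough.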

Source: I worked on Google Cloud Firestore from its launch until 2020 and was the one responsible for the current implementation and data structures of how changes get broadcast to subscribed queries.


Mind:

  - the query language is quite limited (e.g., searching on the absence of an attribute is not possible)
  - shame you can't do projections on the changes server side.
    you get the whole thing and not just the fields you're interested in


The query language is limited, but a lot of the limitations are intentional.

Here is a post from a while back where I explained some of the reasons for these limitations: https://groups.google.com/g/google-cloud-firestore-discuss/c...

Granted not all of them are due to this philosophy and some are just hard problems due to other constraints like the security rules verification system.

RE projections: this is actually due to some limitations in processing security rules and not a limitation in the query matching.

Edit: RE matching on missing attributes. This is due to Firestore having sparse indexes and being schemaless; if you write explicit null values you can query for those fields being null.


If you're building a reactive website then Firestore is actually great. Edit: but yes, IMNSHO there are also some design mistakes (schemalessness, silly limitations on keys, ...) both in the service and the client libs. I only used the Python and JS clients though, so I might be biased.


Yupp, certainly don't disagree. It's not a perfect system - to comment on those two issues: there were some pipe dreams when I was there to support schemas, including partial schemas, but we needed a better migration story (and someone to convince the powers that be it was the right thing to work on). The limitations on keys are mostly due to the system being compatible with Cloud Datastore.


This sounds fascinating. Do you have any recommended source to read more about this? Like a book or a paper?


Sadly I never thought to try publishing a white paper with my co-workers while I was working on this. I'd love the chance to do so. We did do many internal company presentations on the matter, however.

That being said, as far as I know there aren't any published works on a "Reverse Query Engine", or RQM for short, which is the name we settled on internally for this subsystem within Firestore.


I would love to read this paper when you write it.


Disclaimer: I work at Materialize.

This is the exact problem that we are solving here at Materialize! I wrote an example blog post that details how to subscribe to a SQL query in practice: https://materialize.com/a-simple-and-efficient-real-time-app...

Regarding your comment about "focus on streams", it's true that we first focused on extracting event data from other sources (Kafka, Debezium, direct change data capture from external databases such as Postgres, S3, etc). Over time, however, we plan to add additional features that will allow users to also treat Materialize as a general purpose database.

Hope this helps and happy to answer any questions!

edit: I was imprecise in my usage of the term event streams. Materialize supports inserts, updates and deletes at its core (the topK query shown in the blog post above shows this). Materialize is a more general solution than something focused on append-only event streams.


As cirego mentioned, we (Materialize) have the TAIL operator, which was built to allow users to subscribe to changes:

https://materialize.com/docs/sql/tail/


Out of curiosity, how does TAIL behave with updates?

For example, say that I have a table foo with a single row. If that row's content is updated, would I get two updates (a deletion and an insert)?


Yes, exactly. Roughly speaking you'd see something like this:

    (id, text1, text2, timestamp, diff)
    ----
    (42, before, data, Friday April 23 12:27AM, -1)
    (42, after, data, Friday April 23 12:27AM, +1)


You are correct! Updates are expressed as a retraction and an insert that happen within the same timestamp.

An example may not be necessary but it might also help clarify. Assuming you're using the psql client to run "TAIL WITH (PROGRESS)", the logical grouping for a single update will be a set of rows like the following:

  ...
  1608081358001 f -1 ['Lockal', '4590']
  1608081358001 f 1 ['Epidosis', '4595']
  1608081358001 f -1 ['Matlin', '5220']
  1608081358001 f 1 ['Matlin', '5221']
  1608081359001 t \N ['\\N', '\\N']
  ...

All of these occur at the same timestamp, meaning they should be applied atomically to maintain consistency of your dataset. In this case, my query is a top-10 query: Epidosis has now entered the top 10 while Lockal has dropped out, and Matlin remains in the top 10 but their total has gone from 5220 to 5221. The final example record is produced when you run with PROGRESS enabled and serves as an indicator that timestamp 1608081359001 is now closed and no further updates will ever happen at that timestamp.

I find that this stream of rows is very easy to convert to a data structure "{timestamp, inserts[], deletes[]}" and this, in turn, maps naturally onto reactive APIs, such as React or D3. My blog post, linked above, delves into this in more detail. Hope this explanation helps!
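A rough sketch of that conversion (not official Materialize client code; it assumes the rows have already been parsed into `(timestamp, progressed, diff, data)` tuples in timestamp order):

```python
# Group TAIL-style output rows into per-timestamp batches of
# {timestamp, inserts[], deletes[]}, skipping progress markers.
from itertools import groupby

def batch_tail_rows(rows):
    batches = []
    for ts, group in groupby(rows, key=lambda r: r[0]):
        inserts, deletes = [], []
        for _, progressed, diff, data in group:
            if progressed:  # progress marker: the timestamp is closed
                continue
            (inserts if diff > 0 else deletes).append(data)
        if inserts or deletes:
            batches.append({"timestamp": ts, "inserts": inserts, "deletes": deletes})
    return batches

rows = [
    (1608081358001, False, -1, ["Lockal", "4590"]),
    (1608081358001, False, +1, ["Epidosis", "4595"]),
    (1608081359001, True, None, None),
]
batch_tail_rows(rows)
# -> [{"timestamp": 1608081358001,
#      "inserts": [["Epidosis", "4595"]],
#      "deletes": [["Lockal", "4590"]]}]
```

Each batch can then be applied atomically to the UI state, which is what makes it map so naturally onto reactive APIs.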


That's a great example, thanks a lot!


Materialize definitely looks interesting. I'd have a few questions regarding the current product:

Does Materialize provide a commercial non-cloud version where workers can run on multiple nodes in a cluster? On the website I only see references to the BSL-licensed single-node version and to the cloud version that is hosted at Materialize. Would a company be able to run Materialize on their own Kubernetes cluster?

Also is it possible to add custom code for data sources (for example Kafka, but with a different format than what Materialize currently expects) or is Materialize limited to the pre-defined sources?


Sorry if it's a bit of an enterprisy-response, but please reach out! We are supporting some non-cloud commercial customers on a case-by-case basis. The reason for this is that we find the support and maintenance burden to be much higher with a non-cloud delivery model, which isn't always a great experience for either party. We also have a managed product (where the data plane resides in your environment) that may work, depending on your infra and security requirements.

Re: custom code -- our codebase is fully source-available and open to contributions, but the source+sink code is going through some refactoring to make it more beginner-friendly. Depending on your consistency requirements, we also support Debezium and our own CDC format (https://materialize.com/docs/connect/materialize-cdc/) for folks who want to bring in their own data sources. (For quick prototypes, we also support csv/json/plaintext source types, as well as SQL INSERTs!)


Following your project and Airbyte very closely; both seem to have caught on and, if coupled, would provide some pretty compelling use cases, especially for retaining control of your systems.

I've thought for a while that most analytics engines are really just working around materialized-view limitations.


Exactly! Since most ELT tools (including Airbyte) support json + csv output formats, those work perfectly well with Materialize out-of-the-box. I'm playing around with Slack+Stripe Airbyte sources to try and come up with some fun dashboards to show off in Materialize as we speak.


I am considering Materialize for a project, and I'm concerned about the limitation that materialized views should fit into the memory of a single host.

My use case is a materialized OLAP "fact" table, created by joining 5-10 separate tables. While the biggest source tables have under a million rows, the joined fact table would have perhaps 100 million rows, which I fear might be too big.

One workaround I considered is creating a non-materialized view, TAIL-ing it, and persisting the results to another database.

Could you comment on this problem and workaround?


At the moment, we are focused on scaling up with a single instance. While there are a couple of strategies for scaling out, none of our current methods support scaling a single view beyond a single instance. This isn't really a technical limitation but rather a testing and supportability limitation. Timely and Differential (the incremental computation frameworks used by Materialize) support scale-out and have been extensively tested in scale-out scenarios.

Do you have a limit in mind on how large you're willing to go? A brief bit of napkin math shows that, at 1KB per record, your largest view should fit within 100GB of memory. This is easily supported by Materialize and I can certainly understand if this exceeds your appetite!

If you're interested in reducing the size of the in-memory dataset, does your fact table have a temporal dimension to it? By default, Materialize stores data for all time (like a database) and you can write views in such a way that it will only materialize recent data (like a stream processor). Our co-founder Frank wrote a blog post[1] detailing how to do this.

I'm honestly not sure how TAIL-ing a non-materialized view would perform. Sounds like something that would be fun to test!

Happy to chat about this further in our community Slack channel if you have more questions.

[1] https://materialize.com/temporal-filters/


Here's how it works with JOINs in Postgres, using the NOTIFY/LISTEN mechanism:

  -- Run the update and store the affected ids in updated_ids
  WITH updated_ids AS (
   UPDATE public.doc
   SET version = version + 1

  -- Joins are not directly part of Postgres' UPDATE syntax, but FROM achieves the same
   FROM public.org
   WHERE public.org.id = public.doc.org_id AND public.org.type = 'customer'

  -- Return all ids of affected rows
   RETURNING public.doc.id
  )
  -- convert the returned ids into a comma-separated string that we can send via the NOTIFY mechanism
  , id_string_result AS (
   SELECT ARRAY_TO_STRING(ARRAY_AGG(id), ',') AS id_string FROM updated_ids
  )

  -- use LISTEN/NOTIFY to publish a changed_row_ids event with a comma-separated list of changed row ids
  SELECT pg_notify('changed_row_ids', id_string_result.id_string)
  FROM id_string_result


Hmm, has anyone had problems with NOTIFY/LISTEN and PG? IIUC transactions aren't finished until all listeners have acked the notifications, so if you have some misbehaving listeners you're in for strange, undebuggable problems? I know there are things with ZeroMQ or, I guess, Kafka plugins, but I was wondering about real-world experience.


That's correct. According to the documentation, waiting notifications are queued until processed by all listening sessions. If the queue is full, new transactions can start to fail when they try to add to it via NOTIFY. Also, certain in-progress transactions can prevent cleanup of the queue, so a long transaction could lead to it getting full. Per the documentation (https://www.postgresql.org/docs/current/sql-notify.html):

"There is a queue that holds notifications that have been sent but not yet processed by all listening sessions. If this queue becomes full, transactions calling NOTIFY will fail at commit. The queue is quite large (8GB in a standard installation) and should be sufficiently sized for almost every use case. However, no cleanup can take place if a session executes LISTEN and then enters a transaction for a very long time. Once the queue is half full you will see warnings in the log file pointing you to the session that is preventing cleanup. In this case you should make sure that this session ends its current transaction so that cleanup can proceed."

[edit: I previously said that you weren't quite correct... but I originally misread what you said, I've updated my comment to say you are correct.]


Hey thanks for taking the time to 1) answer (trying to teach) and 2) coming back to correct your answer and cite detailed explanations.

Wish I could upvote more than once sometimes.

I was thrilled when I learned - on this here site - about LISTEN/NOTIFY, and then went on reading the docs and the code (PG's code is surprisingly easy to read when you know what you're looking for - in fact it gave me enough confidence to build my own extensions...) and felt a bit underwhelmed. Hopefully someone upgrades the whole LISTEN/NOTIFY machinery for pluggability one day.


Another "gotcha" with PG NOTIFY is that the payload cannot be more than 8000 bytes long. If the UPDATE changes thousands of rows, it's quite easy to end up with a generated string longer than 8000 bytes. I'm not sure whether the error raised by NOTIFY would block the UPDATE from executing, but it would be worth testing.
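If you did hit the limit, one workaround (just a sketch, untested against a real deployment) is to split the id list into several notifications that each stay under the cap:

```python
# Split a long comma-separated id list into chunks that each fit in one
# NOTIFY payload (with some headroom under the ~8000-byte default limit).
def chunk_id_payloads(ids, max_bytes=7900):
    chunks, current = [], ""
    for id_ in map(str, ids):
        candidate = id_ if not current else current + "," + id_
        if current and len(candidate.encode("utf-8")) > max_bytes:
            chunks.append(current)   # current chunk is full; start a new one
            current = id_
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks

# Each chunk would then be sent as its own pg_notify('changed_row_ids', chunk).
payloads = chunk_id_payloads(range(10000))
```

Listeners then just treat each notification as a partial batch of changed ids.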


I don't think that's correct - this would only happen if the notification queue was already full.

Assuming the notification queue is not full then the original transaction does not wait for the notification to be ACKed.

You can avoid most problems with listeners by having a single dedicated listener connection per application process (so all requests handled by that process multiplex onto the same listener). Since this connection only listens for notifications and never enters a transaction, it would be very difficult to fill up the notification queue.

The queries themselves are executed on a separate connection pool, so that connections are never shared between the listener and other operations.


Microsoft SQL Server has a rarely-used feature to do exactly this: [Query Notifications][1].

It sounded great in theory when it came out in 2005, but the drawback was that every open subscription was an open connection to the database server, from every application server. The more web servers you have, and the more queries you want to subscribe to, the more open connections you end up with, which at large scale can cause performance issues even when the connections aren't doing anything.

You can mitigate this by using fewer subscriptions to monitor more data - but when anything changes in the underlying result set, you end up fetching the entire result set. With a lot of web/app servers, this means an avalanche of queries running at exactly the same time, all trying to hit the same data to refresh their cache.

In a perfect world, I'd try to mitigate this by offloading these kinds of queries to read-only replicas. That's a feature limited to the expensive Enterprise Edition ($7K USD per CPU core), and when you scale out to a lot of database servers to leverage this, then it becomes prohibitively expensive.

Another way to mitigate the many-app-servers, many-subscriptions problem is to have a central app server manage the caching for everyone else.

[1]: https://docs.microsoft.com/en-us/sql/connect/ado-net/sql/ena...


SqlDependency...now that's a bit of a blast from the past for me. I remember helping a customer get up and running with that, sadly it's so long ago I don't remember the details, but it was a fun diversion from the daily grind :)


Supabase’s realtime library consumes the logical WAL and provides a Phoenix channel + JavaScript API to subscribe to matching events from it: https://github.com/supabase/realtime

Under the hood, the core implementation was copied (with credit / attribution) from: https://github.com/cainophile/cainophile

I happened to do a similar thing but I adapted cainophile into an Elixir “OffBroadway” producer: https://github.com/integratedb/integrate/blob/main/lib/integ...

These approaches rely on acking the WAL to confirm data has been processed. It’s simpler than running Debezium / Kafka for “zookept” CDC. However, they are “at least once” at best and it’s easy to shoot yourself in the foot so think twice before relying on this kind of thing for a real application.

Materialize is nice — TAIL is a lovely abstraction and their data ingest uses Debezium under the hood. That said, I believe their Postgres binlog source is still alpha / under active community dev.


Nice, I've actually extracted out Supabase's WAL subscribe bits to use alongside Hasura so that I can react to insert, update and delete events right in Elixir.

Looks like you're doing something similar. I'd love to see this extracted out into a library by the supabase team.


> I'd love to see this extracted out into a library by the supabase team.

Supabase cofounder here - I'll raise this with the team. Feel free to email me if you have any specifics around how you're using it so we can make sure it fits your use-case (email in my profile)


Disclaimer: I work at Asana

We have an in house system (LunaDb) which is a little like this. There's a tech talk available about how it works at https://blog.asana.com/2015/10/asana-tech-talk-reactive-quer... - it's from a few years ago, but the core ideas are there. There's also some details on the caching layer we built for it at https://blog.asana.com/2020/09/worldstore-distributed-cachin...

A few properties based on your questions and the observations here:

- We don't attempt to incrementally update query results. Given the number of simultaneous queries the system handles, we've found it much more important to instead be very precise about only re-running exactly the right queries in response to data modification.

- We support joins (although not queries of arbitrary complexity). We avoid race conditions and cross-table locking issues by using the binlog as our source of changes, which imposes a linearization on the changes matching the state in the DB. Correctly determining which queries to update for these requires going back to the database.

- Performance is an interesting problem. It's easy to arrange situations where total load is a function of "rate of data change" * "number of queries over that data", so being overly broad in triggering recalculations gets expensive fast.

We're actively hiring to work on this - if you are interested my contact details are in my hn profile.


Maybe https://hasura.io is something to have a look at? It's a GraphQL server that gives you realtime GraphQL APIs over Postgres.

I've been trying out https://nhost.io for some time now. They use Hasura and I'm impressed by how easy subscriptions are using Apollo to query the GraphQL API.


I believe hasura uses polling under the hood to accomplish subscriptions, right?


I work at Hasura.

Hasura uses a novel way of batching similar parameterized subscriptions together and then polls under the hood. This means that if there are 1000 subscribers of a similar type of query, then underneath Hasura will only make a single query to Postgres (or a few queries, depending on the batch size). This approach, which we call "multiplexing" for short, scales really, really well, and is also the simplest way to get live updates for _any_ query, no matter how many joins, etc.

We talk more about this approach (and comparisons with other approaches), and benchmarks in this post: https://hasura.io/blog/1-million-active-graphql-subscription...
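To make the multiplexing idea concrete, here's a toy simulation (not Hasura's actual SQL or code): many subscribers of the same parameterized query are served by one batched query per poll, with the results fanned back out by parameter value.

```python
# Simulate multiplexing: N subscribers of "SELECT sum(total) FROM orders
# WHERE user_id = ?" are answered by one batched query per polling cycle.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, user_id INTEGER, total INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, 10, 50), (2, 11, 70), (3, 10, 30)])

# Instead of one query per subscriber, run one query over the whole set of
# subscribed parameter values.
subscribed_user_ids = [10, 11]
placeholders = ",".join("?" * len(subscribed_user_ids))
rows = conn.execute(
    f"SELECT user_id, sum(total) FROM orders "
    f"WHERE user_id IN ({placeholders}) GROUP BY user_id",
    subscribed_user_ids,
).fetchall()
results = dict(rows)  # user_id -> latest result, pushed to each subscriber
# results == {10: 80, 11: 70}
```

The real thing batches arbitrary parameterized GraphQL subscriptions and diffs successive poll results, but the core trick is the same: the per-subscriber cost becomes a row in a batch rather than a separate query.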


Very clever approach and a great overview of the solution space.

> We made significant investment in investigating this approach coupled with basic incremental updating and have a few small projects in production that takes an approach similar to [this talk][1]. From https://github.com/hasura/graphql-engine/blob/master/archite...

Do you have/know of any findings/lessons learned from these projects?

[1]: https://www.postgresql.eu/events/pgconfeu2018/sessions/sessi...


It's crazy how well it works. I was skeptical at first, but one of our projects has 15k complex-ish subscriptions and it scales amazingly: a few Hasura hosts and a single 8-core Postgres instance running at about 20% utilization.


This feels like something I'd naturally use on a frontend, but would it be a dumb idea to use this for backend anything? Like triggering actions off data updates.


I wish this was possible to use without going full to Hasura.


I advocated Hasura at a prior company I worked at, and came up with a plan to add it incrementally to our existing software. I think this is quite doable.

Why do you see it as an all or nothing choice?


It's like WordPress vs. Flask: I prefer libraries to full frameworks.


Not everything makes sense as a library. Your database for example.

There's no reason using Hasura requires going "full hasura".


There is also the research DB Noria[0] that's based on this idea. It maintains materialized views for queries and efficiently updates them when the data changes.

[0] https://github.com/mit-pdos/noria


The cool thing with Noria is that it behaves like a cache.

It only stores the rows of the materialized view that are frequently needed, instead of storing the complete materialized view.

And if you query for a row that is missing, it rematerializes that row on demand with an "upquery", without having to run the expensive query that a full materialized-view refresh normally needs.
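A rough, purely illustrative sketch of that behavior (nothing like Noria's actual dataflow implementation): only keys that were actually read get materialized, a miss triggers an upquery for just that key, and writes only refresh keys that are already hot.

```python
# Partial materialization: cache view results per key, recompute a single
# key on demand ("upquery") instead of refreshing the whole view.
class PartialView:
    def __init__(self, base_rows, key, aggregate):
        self.base = base_rows        # base table rows (list of dicts)
        self.key = key               # column the view is keyed on
        self.aggregate = aggregate   # function: rows -> view value
        self.cache = {}              # only keys that were read get materialized

    def upquery(self, k):
        rows = [r for r in self.base if r[self.key] == k]
        self.cache[k] = self.aggregate(rows)
        return self.cache[k]

    def read(self, k):
        return self.cache[k] if k in self.cache else self.upquery(k)

    def on_write(self, row):
        self.base.append(row)
        # Refresh hot keys only (a real system updates them incrementally).
        if row[self.key] in self.cache:
            self.upquery(row[self.key])

view = PartialView([{"story": 1, "vote": 1}], key="story",
                   aggregate=lambda rows: sum(r["vote"] for r in rows))
view.read(1)                            # miss -> upquery
view.on_write({"story": 1, "vote": 1})  # hot key: refreshed in place
view.on_write({"story": 2, "vote": 1})  # cold key: never materialized
```

The payoff is that write amplification is proportional to the hot set, not the full view.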


This approach would be cool if it ran on a database on the frontend too. You could run a complex query against an in-browser DB with a few joins and some params, and then tweak the params, and it would only need to pull down the missing rows. Then your app is offline and super snappy.


Noria looks cool but it requires Zookeeper which is a PITA.


I like this idea a lot. In a sense it's thinking of your application as a spreadsheet, where the database is a data tab and the frontend is the summary tabs. If there's a change to the data tab, the "spreadsheet engine" (or "materialized view engine" in your case) walks the dependency graph and updates all the relevant parts of the summary tabs.

In the Rollbar UI, we implemented a lot of this type of logic by hand to make our real-time aggregated Items list work (i.e. each time an Item has a new occurrence, we update its stats in the database and push the diff to connected clients). It would save an immense amount of code to have had a solution that did this out of the box.

The closest thing I'm aware of to this is BigQuery's materialized views ( https://cloud.google.com/bigquery/docs/materialized-views-in... ), which take care of making otherwise expensive queries cheap and fast, but they are rather limited (i.e. no subqueries or joins), and don't have the "streaming output of changes" you describe.


Very interesting topic!

I feel like this would be the right time to ask for advice regarding doing something similar for user search results with PostgreSQL (v11).

E.g. a user "subscribes" to product searches for "Women - Nike - Size M" and the system sends her a daily notification or email if there are new results within her filter.

How would one solve this kind of subscription logic? So far what I've come up with is to save and re-run the user queries from the API side, and to keep the last result's primary ID in a separate table so I know which results are to be considered new. (That way I can use a primary-key filter to lessen what the products DB has to go through, e.g. "SELECT id FROM products WHERE [user filters] AND id > 1234".)
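Sketched out (with made-up table and column names), the batch job I have in mind is something like:

```python
# Daily batch: re-run each saved search with an "id > last_seen_id" filter
# so only new products are considered, then advance the watermark.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, brand TEXT, size TEXT)")
db.execute("CREATE TABLE subscriptions "
           "(sub_id INTEGER PRIMARY KEY, brand TEXT, size TEXT, last_seen_id INTEGER)")
db.executemany("INSERT INTO products VALUES (?, ?, ?)",
               [(1, "Nike", "M"), (2, "Nike", "L"), (3, "Nike", "M")])
db.execute("INSERT INTO subscriptions VALUES (1, 'Nike', 'M', 1)")

def run_daily_batch(db):
    notifications = {}
    for sub_id, brand, size, last_id in db.execute(
            "SELECT sub_id, brand, size, last_seen_id FROM subscriptions"):
        new_ids = [r[0] for r in db.execute(
            "SELECT id FROM products WHERE brand = ? AND size = ? AND id > ?",
            (brand, size, last_id))]
        if new_ids:
            notifications[sub_id] = new_ids  # e.g. send the daily email here
            db.execute("UPDATE subscriptions SET last_seen_id = ? WHERE sub_id = ?",
                       (max(new_ids), sub_id))
    return notifications

run_daily_batch(db)  # -> {1: [3]}: product 3 is new and matches Nike/M
```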

But it doesn't feel right to bombard the DB with 10k queries on a daily basis to achieve this. Then again, maybe I'm overthinking it and it is viable.

I looked through some of the comments mentioning NOTIFY / LISTEN but I don't think I could viably use this as I can't apply separate filters on this on the DB level.

Note: I'm looking for general ideas / things I should consider. I'm not expecting anyone to do my work.


It's probably the simplest solution if you are running a batch job once per day. Just ensure you are running one query per filter set and have your indexes set up correctly.

Maybe you could speed up your queries by using a materialized view for `FROM products WHERE id > 1234`. You could then maybe derive more materialized views for other attributes depending on the clustering of your watched queries, e.g. gender might split the dataset in half. But this is not guaranteed to improve perf, indexes might actually be faster, and there are space concerns with additional indexes.

If you want to run fewer queries you could combine them like [Hasura does][1].

To avoid running queries with no new results you would need to watch DB writes and map DML statements to query subscriptions... but that's pointless in a batch setting.

[1]: https://github.com/hasura/graphql-engine/blob/master/archite...


Because this can be done in a background batch job during off-peak hours, there's no harm in throwing 10K queries at the database - you should be able to run 100x that on commodity cloud hardware without noticeably impacting performance, provided you take care not to throw queries at the DB at the max rate it can handle.

The simple approach is best here IMHO.


You could probably find some way to combine every subscription into a single query (by storing the query parameters themselves in the database).


It's not as simple as "subscribe to this SQL query", but you can do this relatively easy with PostgreSQL LISTEN / NOTIFY: https://gist.github.com/kissgyorgy/beccba1291de962702ea9c237...

You just notify the clients with the primary keys of what changed, so you only need to run the query if something changed. If you implement it correctly, you only have to run the query once and push the same result to every client.

I have a small project where I used this, and whenever you change anything and send the NOTIFY, every client updates in real time.


Fwiw this approach doesn't work for something like:

SELECT * FROM Products WHERE price > 5;

If a new product comes in with price 20 you won't know to add it to your result set and send it to the clients.


You would send the new id in NOTIFY when something inserts the new product.


You'd have to send the full row otherwise you wouldn't know if it matches the query or not


Even from a view? Materialized or not?


Only if you keep the whole table in memory or re-read every row that is written


In PostgreSQL you can either add yourself as a replica with logical decoding (e.g. PostGraphile supports GraphQL with live queries that way) or do it manually with LISTEN/NOTIFY.

On top of that you can use transaction ids or manual transaction serials to get only new rows. But note that transactions can be long-running, so you need to either accept potentially unbounded repeated data, serialize all write transactions on a single-row serial-assignment table, or record the whole set of committed transactions whose data you have processed rather than a single serial number.


> I guess it's relatively easy to do

This is a very hard problem to do the right way, and it would probably need changes in the RDBMS itself. You would need to monitor all the tables that might affect your query for changes, and work out how those changes affect your query (say you're just reading a value, aggregating with sum, averaging with a count of rows; the list goes on). Add more complexity on top of that if you want to support querying from other views that also aggregate the data in your query.

As pointed out in another comment there's DB Noria [0], but I'm not sure how production-ready it is right now. You can get an idea of the complexity of the task from an interview with one of the project leads [1].

[0] https://github.com/mit-pdos/noria [1] https://corecursive.com/030-rethinking-databases-with-jon-gj...


> the right way

Yep. I meant it was easy to do it the inefficient way where you just refresh the entire query when any table mentioned in the query changes. You would just have to also check if something was a view and recursively parse the SQL that is used in the view. Just use Postgres `LISTEN` and triggers or the WAL for change monitoring.
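For illustration, a naive sketch of that recursive dependency walk (hypothetical view names; real SQL needs a proper parser, not a regex): find every base table a query transitively depends on, so a change to any of them triggers a full re-run.

```python
# Naive "inefficient way": resolve a query's transitive table dependencies
# through view definitions. A regex only handles simple FROM/JOIN clauses.
import re

VIEWS = {  # hypothetical view definitions, name -> defining SQL
    "customer_docs": "SELECT d.id FROM doc d JOIN org o ON o.id = d.org_id "
                     "WHERE o.type = 'customer'",
}

def referenced_relations(sql):
    return set(re.findall(r"(?:FROM|JOIN)\s+([a-zA-Z_][a-zA-Z0-9_]*)", sql, re.I))

def base_tables(sql, views=VIEWS, seen=None):
    seen = seen or set()
    tables = set()
    for rel in referenced_relations(sql):
        if rel in views and rel not in seen:
            seen.add(rel)
            tables |= base_tables(views[rel], views, seen)  # recurse into the view
        elif rel not in views:
            tables.add(rel)
    return tables

base_tables("SELECT count(*) FROM customer_docs")  # -> {"doc", "org"}
```

Any write to `doc` or `org` (seen via triggers or the WAL) would then invalidate and re-run the query.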

> how these changes affect your query

Yeah, this is where it gets tricky. I think it can be simpler, though, if the SQL is kept simple, with fewer exotic sub-queries, CTEs and JOINs.

Thanks for the links.


> Yep. I meant it was easy to do it the inefficient way where you just refresh the entire query when any table mentioned in the query changes. You would just have to also check if something was a view and recursively parse the SQL that is used in the view. Just use Postgres `LISTEN` and triggers or the WAL for change monitoring.

Unfortunately, it turns out that recursively refreshing views still leads to surprising behavior. I think this post summarizes the problem quite nicely: https://scattered-thoughts.net/writing/internal-consistency-.... If you cannot refresh all of the views at a single point in time, then there will be internal inconsistencies in your dataset.

When looking at automatic refreshing, simple triggers and `LISTEN/NOTIFY` don't scale, as was mentioned in the comment regarding Hasura's multiplexing. I think, in the absence of incrementally maintained views, their multiplexing strategy is a good compromise for databases like postgres. However, it should be noted that continuous query / subscription of views is the exact scenario under which incremental computation will provide both lower latency and greater resource efficiency.


> If you cannot refresh all of the views, at a single point in time, then there will be internal inconsistencies in your dataset.

In the simplest case, I'm talking about regular SQL non-materialized views which are essentially inlined.

> incremental computation will provide both lower latency and greater resource efficiency.

Wish we had some better database primitives to assemble rather than building everything on Postgres - it's not ideal for a lot of things.


> In the simplest case, I'm talking about regular SQL non-materialized views which are essentially inlined.

I see that now -- makes sense!

> Wish we had some better database primitives to assemble rather than building everything on Postgres - it's not ideal for a lot of things.

I'm curious to hear more about this! We agree that better primitives are required, and that's why Materialize is written in Rust using TimelyDataflow[1] and DifferentialDataflow[2] (both developed by Materialize co-founder Frank McSherry). The only relationship between Materialize and Postgres is that we are wire-compatible with Postgres; we don't share any code with Postgres, nor do we depend on it.

[1] https://github.com/TimelyDataflow/timely-dataflow [2] https://github.com/TimelyDataflow/differential-dataflow


> I'm curious to hear more about this

I think the [FoundationDB layer concept][1] said it well:

"When you choose a database today, you’re not choosing one piece of technology, you’re choosing three: storage technology, data model, and API/query language..."

I like the idea of [Apache Calcite][2] that provides an API to access the query planner. I think if you had more convenient access to some of the underlying components you could build a lot of cool stuff. There's too much magic where you punch in an SQL command or a config and hope it eventually does what you need.

I haven't looked too much into the internals, but I'm keen to do so soon.

[1]: https://apple.github.io/foundationdb/layer-concept.html [2]: https://calcite.apache.org/


> Yep. I meant it was easy to do it the inefficient way where you just refresh the entire query when any table mentioned in the query changes.

You might find it’s cheaper to do just that. It might cost you in computational resources, but save you millions in engineering time.


If you have spare CPU capacity, you might want to just re-run and diff the query results. You are already paying for the idle CPU anyway.

I think the issue is that most SQL engines are not designed for streams, just a single request/response. Once the query is complicated enough I think it would be difficult to do efficiently and correctly.

I would go with SQLite in WAL mode (concurrent reads), many cores, and many threads/green threads in a compiled language.
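For illustration, the re-run-and-diff approach is only a few lines with SQLite (the table and query are invented for the example): snapshot the result set, re-run after a change, and set-difference the two.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT, completed INTEGER)")
conn.execute("INSERT INTO items (name, completed) VALUES ('a', 1), ('b', 0)")

QUERY = "SELECT id, name FROM items WHERE completed = 1"

def snapshot():
    return set(conn.execute(QUERY).fetchall())

previous = snapshot()
conn.execute("UPDATE items SET completed = 1 WHERE name = 'b'")
current = snapshot()

added, removed = current - previous, previous - current
print(added)    # {(2, 'b')}
print(removed)  # set()
```

With WAL mode each subscriber thread can hold its own read connection, so the re-runs don't block writers.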


I've implemented an RDBMS that supports this [1]. It handles joins, views (which are automatically materialized and incrementally updated), etc. It's memory only, and it doesn't support exotic stuff like recursive CTEs, but it does exactly what you're asking for. We used it in production successfully for frequently-updated real time data at the company where I used to work.

Notably, it uses persistent search trees such that each revision shares structure with the previous one, which makes diffing two closely-related revisions extremely efficient (just skip over any shared structure). Subscribers just receive a stream of diffs, with backpressure handled automatically by skipping over intermediate revisions. See [2] for a more detailed summary.
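The structural-sharing trick can be illustrated with a toy Python diff over nested dicts (not Revori's actual data structures, which are persistent search trees): a subtree shared between two revisions is the same object, so an identity check skips it in O(1).

```python
# Toy diff over persistent (shared-structure) nested dicts.
def diff(old, new, path=()):
    if old is new:            # shared structure: skip the whole subtree
        return
    if isinstance(old, dict) and isinstance(new, dict):
        for key in old.keys() | new.keys():
            yield from diff(old.get(key), new.get(key), path + (key,))
    elif old != new:
        yield path, old, new

v1 = {"users": {1: "alice", 2: "bob"}, "orders": {10: "widget"}}
# v2 shares the untouched "orders" subtree with v1
v2 = {"users": {1: "alice", 2: "bobby"}, "orders": v1["orders"]}

print(list(diff(v1, v2)))  # [(('users', 2), 'bob', 'bobby')]
```

The "orders" subtree is never descended into, which is the whole point: diff cost is proportional to the change, not the database size.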

It also exposes revisions as first-class objects, which allows you to tag, diff, and even three-way merge them. Specifically, you can run arbitrary queries on both revisions and diffs. See [3] for examples.

It's no longer maintained, unfortunately. Someday I may revive it, perhaps adding support for spilling data that won't fit in memory to log-structured merge trees. I'd also rewrite it in a language like Rust, which will help flatten some of the more pointer-heavy data structures and reduce tail latencies. If anyone is interested in seeing that happen or helping out, let me know.

I'm really surprised this still isn't supported in mainstream DBMSes. The MVCC model in PostgreSQL seems particularly well suited to it.

[1]: https://github.com/ReadyTalk/revori

[2]: https://github.com/ReadyTalk/revori/wiki/Design-and-Implemen...

[3]: https://github.com/ReadyTalk/revori/wiki/CLI-Revori-Client


It is a feature, although experimental, of ClickHouse https://clickhouse.tech/docs/en/sql-reference/statements/cre...


The first thing that comes to mind is LISTEN/NOTIFY from Postgres.

You can both listen and notify from code.

https://www.postgresql.org/docs/current/sql-listen.html

The js adapter for pg can handle this: https://medium.com/@simon.white/postgres-publish-subscribe-w...


Oh and you can also create triggers and notify based on that.


I think streaming-first systems like Flink are best positioned to answer your need.

You might need to abstract the query or subscription infrastructure into your data model though:

Subscriptions:

    INSERT INTO user_orders_total_feed(user_id, data)
    SELECT 
      uo.user_id,
      SUM(o.price) AS total
    FROM user_order uo
    JOIN order o ON uo.user_id = o.user_id
    JOIN user_live_subscription uls ON uo.user_id = uls.user_id
    WHERE uls.subscription_topic = 'orders_total'
    GROUP BY uo.user_id
And for queries you would join on a temporal snapshot using 'FOR SYSTEM TIME AS OF' [1]

You could have your data fully stored in Kafka or Pulsar, Kappa-architecture-style; or you could have it in a regular RDBMS but with Change Data Capture on top [2] (basically shipping your WAL to Kafka/Pulsar).

The good thing about Kafka and Flink is that they scale. Your regular db might not sustain the query rate if every query needs a write to a table, but Kafka/Flink will.

[1] https://ci.apache.org/projects/flink/flink-docs-stable/dev/t...

[2] https://debezium.io/


Here's my thoughts. Not so much for you as for myself because it's an interesting question.

I once did a very hacky version of something like this with MySQL, where I was batch updating ranking data for a leaderboard, using internal variables based on a Stack Overflow answer, when I was a junior. Nobody should do that lol.

In fact, I'd avoid mysql for this if at all possible.

It seems like this is fairly easy to do in Postgres, and a pain in the neck in MySQL.

In postgres I'd use a materialized view with a unique index and refresh it CONCURRENTLY where it will compare the view and the underlying data and only change things as needed. It just doesn't do this with a stream, you have to request updates. You can presumably refresh it internally using TRIGGERS.

I mean, you can do your own stream with Postgres as well (using NOTIFY and LISTEN to subscribe to notifications created with AFTER CUD TRIGGERs), or even in MySQL in a hacky way, and refresh on the back end or in the browser - but it's likely more efficient to let the db handle batch updating itself rather than handrolling your own updates. It'll probably also be easier to migrate to incremental views if and when those are added.

I guess the question is, what to do when either of the above isn't fast enough? My thought would be to switch between two identical materialized views with one always updating (not using concurrently). Alternatively use an external cache to do something of the same thing. Then multiple systems can query data, with refreshes being triggered by one of the selects, which switches the view to read from after updating, or which updates the cache atomically.
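A rough sketch of that two-view switcheroo in plain Python (a lambda stands in for the materialized-view refresh; the class name is made up): rebuild off to the side, then atomically swap which result readers see.

```python
import threading

class DoubleBufferedView:
    """Serve reads from one materialized result while the other rebuilds."""
    def __init__(self, compute):
        self._compute = compute          # function that runs the full query
        self._active = compute()
        self._lock = threading.Lock()

    def read(self):
        with self._lock:
            return self._active

    def refresh(self):
        fresh = self._compute()          # rebuild without blocking readers
        with self._lock:
            self._active = fresh         # atomic swap

data = [1, 2]
view = DoubleBufferedView(lambda: sum(data))
data.append(3)
print(view.read())   # 3 - still the stale result
view.refresh()
print(view.read())   # 6 - swapped in after the rebuild
```

In Postgres terms, the swap step would be repointing which of the two identical materialized views the readers query.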


In Go you can use GraphJin, an automatic GraphQL-to-SQL compiler that supports GraphQL subscriptions (aka live queries). GraphJin works either as a standalone service or as a library in your own code. It uses a very efficient mechanism to manage subscriptions, keeping db load low while subscribing to an SQL query using polling.

https://pkg.go.dev/github.com/dosco/graphjin@v0.16.44/core#e...

https://github.com/dosco/graphjin


We used flexviews for MySQL. It would use the MySQL bin log to read through the actual DML queries (UPDATE, DELETE, INSERT), parse the query and then make changes to the materialised views based on the dependency tree defined by the SQL queries.


This problem cannot really be solved without application participation, because to keep the data up to date across joins you at some point need cross-table locking, which would become highly inefficient. This is the whole CQRS thing, by the way.


Can you speak more to how updates get pushed out to your UI clients? Are you using Firestore? Hasura? Polling?

Something we're looking at enabling w/ Estuary Flow [1] is materializing to Firestore for this use case (and Flow already supports PostgreSQL materializations).

Flow would capture changes from your database tables (or you write normalized events directly to Flow collections). You aggregate and join as needed, and then materialize your views into Firestore (or another Postgres table) where you'd subscribe for live updates.

[1]: https://github.com/estuary/flow


A sqlite database observation system in swift: https://github.com/groue/GRDB.swift#valueobservation

I don't believe it includes any efficiency magic to know "which rows" are affected. Even knowing what tables are affected required some... creative coding. A little insight into how it's implemented: https://www.mail-archive.com/sqlite-users@mailinglists.sqlit...


Try Supabase out. It's basically a Postgres plugin, and you can subscribe to tables' live changes using WebSockets.

If you have a query, you could observe the tables' changes and manually check whether they affect the output, updating it incrementally yourself.


We haven't updated it in a while, but check out DBToaster [1]. Give it some SQL queries and a set of tables, and it'll compile for you a Scala or C++ class with insert() and delete() functions for every table listed, as well as functions to get the (incrementally maintained) result of any query. Supports SQL92, with some decent extensibility for UDFs, etc., and has quite a bit of theory backing it.

[1] https://dbtoaster.github.io/


PipelineDB might be of interest. https://github.com/pipelinedb/pipelinedb


PipelineDB was awesome - built some v. neat things with it 4-5 years ago. Wouldn't use it now since the team went to Confluent. https://ksqldb.io/ functionality is getting close.

Flink is a solid option. Materialize shows a ton of promise.


I also thought of it when I read the question, but looks like after they've been acquired by Confluent the project is essentially dead.


A few years ago, I made node.js packages to do this for MySQL (by reading changes from the replication log) and Postgres. (using triggers) Both require specifying cache invalidation functions for each dependent table.

https://github.com/numtel/mysql-live-select

https://github.com/numtel/pg-live-select


> I'm more interesting in queries with joins and doing it efficiently, instead of just tracking updates to tables that are modified, and re-rerunning the entire query.

can you give an example?


Take a standard M-M join where an `Item` can belong to many `Lists`.

    select l.name, i.name from list as l
    inner join list_items as li on li.list_id = l.id
    inner join items as i on i.id = li.item_id
    where i.completed = true

Imagine it's like a collaborative bookmarking app.

Your options are basically:

1. Polling - re-run query periodically

2. Monitor changes to tables using triggers or CDC

-- 2a. Re-run entire query when any table is modified with DML and diff against previous result

-- 2b. Only re-run query if the changes would affect the result (e.g. if we add an `item` with `item.completed = false` then we know that our above query will never need updating.)

-- 2c. Do an efficient diff using some relational algebra magic. See Postgres IVM link I posted.

I posted the same example [here][1] with some more details.

[1]: https://math.stackexchange.com/questions/4112326/how-can-i-u...
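A tiny sketch of what the 2b check could look like (the function and row shape are hypothetical, not real trigger code): inspect the changed row against the query's filter predicate before deciding whether a re-run is needed.

```python
def change_affects_query(table, row):
    """Can this inserted/updated row possibly change the query result?"""
    if table == "items":
        # The query filters on completed = true, so a non-completed
        # item can never appear in (or leave) the output.
        return row.get("completed") is True
    # Changes to the joined tables always force a re-check in this
    # naive version; a smarter one would test join keys too.
    return table in ("list", "list_items")

assert change_affects_query("items", {"id": 7, "completed": False}) is False
assert change_affects_query("items", {"id": 8, "completed": True}) is True
assert change_affects_query("list_items", {"list_id": 1, "item_id": 8}) is True
```

Option 2c generalizes this: instead of a boolean "re-run or not," relational algebra derives the actual delta rows to apply.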


What exactly are you trying to solve/fix?

If you want to improve performance in an application, structure it in a way that lets you cache the result set in-memory at the app layer. Assuming the only way to update data is through the same app, refresh/rehydrate the related cache when records are updated/deleted/inserted. You'll only hit the database the first time, and then again when the cache is cleared in your app layer logic.

This is more or less how Active Record ORM caching in Rails works.

You gotta key off of something, so it is very application specific, but let's take your example a step further.

The items and list are for a specific user. Whenever an item gets added to a list or a list gets added to an item, nuke the cache key for 'user/$ID/list_items' or whatever you want to call it. Whenever you read, you hit the same cache key in-memory, so your reads always avoid a database roundtrip. You can do the same logic for deletes/updates, etc., structured around the hierarchy of your data or the dependency tree of your records.
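A toy version of that key-based scheme in Python (the key shape `user/$ID/list_items` follows the comment above; the `db` dict stands in for the real query):

```python
cache = {}
db = {"user/1/list_items": ["bookmark-a"]}   # stand-in for the real query

def get_list_items(user_id):
    key = f"user/{user_id}/list_items"
    if key not in cache:
        cache[key] = list(db[key])           # hit the "database" once
    return cache[key]

def add_list_item(user_id, item):
    db[f"user/{user_id}/list_items"].append(item)
    cache.pop(f"user/{user_id}/list_items", None)   # nuke the cache key

print(get_list_items(1))          # ['bookmark-a'] - populates the cache
add_list_item(1, "bookmark-b")    # write invalidates the key
print(get_list_items(1))          # ['bookmark-a', 'bookmark-b'] - recomputed
```

The trade-off versus query subscriptions: invalidation logic lives in the app and must be kept in sync with every write path by hand.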


A graph database would be a good fit for many-to-many data. In that ecosystem, OrientDB has the concept of Live Queries.


I think most of the time I would use SQL subscriptions to notify web clients that they are stale, but not auto-update the client's state unless the form is read-only; otherwise current client state could be destroyed.

React server components can do intelligent merges, in a client action or on tick; I guess it would just mean hooking into that system to prevent automatic merging in some cases, in favour of manual merges on the client side.


Informix has its "push data" feature - except it appears to be limited to a single table, not complex queries - I suspect you're right, it's hard to do safely and efficiently.

https://www.ibm.com/docs/en/informix-servers/14.10/14.10?top...


I'm not 100% sure but might you be able to use some sort of Event Stream Processing[1] product on top of your database (if you need the database) and use that layer in the architecture instead of SQL and the DB?

[1]: https://en.wikipedia.org/wiki/Event_stream_processing


Debezium is used for Change Data Capture, and created database changes event streams: https://debezium.io/documentation/reference/1.5/connectors/i...

It's not exactly what you're searching for, but might be worth a look.


It's certainly not production-ready, but I quickly made this [1] a while back to help with "watching" certain queries for debugging. It uses a polling approach.

[1] https://github.com/grouparoo/db_watch


if you're looking at pg materialized views, look at listen/notify too.

so, the view builder LISTENs, write queries NOTIFY, and then you re-do the m-view?

or just use NOTIFY with a good payload after each write you think is important? (I use this sometimes)

I mean, a trigger to push could work, yea?

or, some clever rig with pg-logical replication (I'd have to think on this one more, may be crap)


It's less turnkey, but Postgres does support Pub/Sub. If you're willing to follow your updates with a pub message, you can then manually refresh your data elsewhere in response. I use this feature to horizontally scale websocket updates across multiple servers.


ksqlDB (http://ksqldb.io/) does exactly this out of the box. You write SQL queries whose results you can subscribe to with a client connection. Today these subscriptions are somewhat expensive, but we're actively working on making them lightweight and scalable. I'm really looking forward to seeing the end result, I think it opens up whole new use cases.


We do this in Ryzom, but we had to build it from the ground up, and implemented a meteorjs-like protocol.


Disclaimer: I work at EsperTech

Esper provides streaming SQL that gets compiled to JVM or CLR byte code


In SQL Server you could create an indexed view for a simple query, then add triggers.


Disclaimer: I work at EsperTech

Esper is open source and provides streaming sql


check out Kafka Streams, they have streaming SQL as well https://www.confluent.io/online-talks/intro-to-ksql-streamin...


I came here to say that one of the few I can think of is KSQL. The downside is you pretty much have to turn your entire table into an event/message system.

However, once you add an event/message system into place, you could just have a topic you listen on and re-run the query when an event pops - something like `select col1, col2 from table where lastchangetime >= mylastquerytime` - instead of polling with that same query. Downsides to this are the extra index needed and the polling overhead, to name the biggest issues.

KSQL, however, is one of those things that looks really cool when you first look at it, but if you dig a bit deeper, it makes less and less sense to use. For example, in this case, if you use KSQL instead of normal stream/topic pumps, you add the overhead of SQL and yet another server cluster on top of the Avro/JSON handling that is already built in. Also, if you have already decomposed your data into messages in topics, you can just as easily wait on the topic for new data to show up, as that is something Kafka does very easily out of the box. It also does not work cross-DB (say broker -> SQL Server). And you now have a 4th type of code path in your system (Kafka topics, Kafka Streams, SQL queries, and KSQL streams).


KSqlDb


It's not clear to me if this would solve your problem, because it kind of depends on how you are planning on reacting, but GitHub and GitLab have the concept of a Code Owner, and it can work for specific files. So if you know that the queries you want to monitor exist in tidy DB impl classes, or maybe a specific repository, you could be the code owner of those files.

This approach is pro-active because you can block merges that aren't as well thought out, and you can be the final arbiter of when something gets merged.

I feel like it's kind of an outside-the-box way of doing what you want. It doesn't give you any visual tools or anything that makes the relationships clearer to see, but it does allow you to get closer to the code that affects what you care about.

Also, since I do DevOps: if you are competent, or your DevOps is competent (aka not just glorified Ops), you can create performance tests that either trigger on merges or run like a cron job. They can even run daily or hourly depending on code-change frequency. Then you can see trends - for example, how certain inserts, deletes, etc. change over time; with a semi-competent metrics-driven architecture you could even see those changes graphically.


It sounds like you're talking about a totally different question. OP is interested in subscribing to changes in the results of a query at runtime, not changes to the text of the query itself.



