Taking the 'D' Out of CRUD with Soft Updates (scottsmerchek.com)
68 points by smerchek 633 days ago | 58 comments



There was a very interesting presentation this year at PGConf New York on something similar [1], where the history table has the same name as the normal table but lives in a separate history schema. There is also a timetravel schema whose tables again have the same name, but are views over the history schema with an extra WHERE filter to only show rows that were active at a given time. So by changing your schema_search_path and setting the "current_time" setting, you can easily query the whole database as if it were some other time! I haven't tried this out yet, but it looks to be very compatible with an ORM like ActiveRecord. This is something I've been looking for for a long time.

[1] http://www.hagander.net/talks/tardis_orm.pdf (pdf)
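
Roughly, the trick looks something like this in Postgres (a sketch from memory of the idea; the schema and setting names such as timetravel and myapp.as_of are my own, not necessarily what the talk uses):

    CREATE SCHEMA history;
    CREATE SCHEMA timetravel;

    -- Same table name as public.users, but with validity bounds.
    CREATE TABLE history.users (
        id         int         NOT NULL,
        name       text        NOT NULL,
        valid_from timestamptz NOT NULL,
        valid_to   timestamptz NOT NULL DEFAULT 'infinity'
    );

    -- A view with the same name again, filtered to a "current time"
    -- taken from a session setting.
    CREATE VIEW timetravel.users AS
    SELECT id, name
      FROM history.users
     WHERE valid_from <= current_setting('myapp.as_of')::timestamptz
       AND valid_to   >  current_setting('myapp.as_of')::timestamptz;

    -- Query the whole database "as of" some other time:
    SET search_path TO timetravel;
    SET myapp.as_of = '2015-01-01 00:00:00+00';
    SELECT * FROM users;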


This seems to be the talk: https://www.youtube.com/watch?v=TRgni5q0YM8


That seems like a really clean way of building the abstraction.

Are the only downsides the overhead of writing to the extra schema and the space required? Anyone know how bad the hit would be for the writes?

With this setup it seems like you don't lose much by trying it. No interruption to the current db, you have the extra data if you want it, you can always cull to free up space and you can remove the whole thing at any point.

Or am I missing something?


I agree it seems pretty great.

One big pain point would be if the schema changes over time. What if you add a new NOT NULL column? What if you remove a column? What if you change the type of a column? What if you remove a table? What if you add a new CHECK constraint? Some of these things get mentioned in the video linked by 'tvon, but they are mostly passed over.


That's a good point. I guess in terms of the constraints you wouldn't mirror those to your history schema anyway?

The changing schema is not so easily solved. Again, you could be a little looser with it. Don't remove columns from the history table (just mark as nullable). Adding columns would be a similar thing.

I guess I'm thinking of the use case of wanting to get a bit of extra history for free vs the "needs to be auditable, 100% correct, user queriable" scenario.


Oh also, re extra writes: in the video Magnus points out that this approach adds no extra transactions, because everything happens inside a trigger. Also no extra trips to the database server, etc. So it's not free, but it just means when you're doing a write already, it's a little extra work.
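
For anyone curious what that looks like, a bare-bones version of such a trigger is roughly the following (table and column names are made up, not the exact code from the talk). The history rows ride along in whatever transaction the application already opened, which is where the "no extra transactions" claim comes from.

    CREATE TABLE users (
        id   serial PRIMARY KEY,
        name text NOT NULL
    );

    CREATE TABLE users_history (
        id         int         NOT NULL,
        name       text        NOT NULL,
        valid_from timestamptz NOT NULL,
        valid_to   timestamptz NOT NULL DEFAULT 'infinity'
    );

    CREATE FUNCTION users_track_history() RETURNS trigger AS $$
    BEGIN
        IF TG_OP <> 'INSERT' THEN
            -- Close out the previous version of this row.
            UPDATE users_history SET valid_to = now()
             WHERE id = OLD.id AND valid_to = 'infinity';
        END IF;
        IF TG_OP <> 'DELETE' THEN
            -- Record the new version.
            INSERT INTO users_history (id, name, valid_from)
            VALUES (NEW.id, NEW.name, now());
        END IF;
        RETURN NULL;  -- AFTER trigger; return value is ignored
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER users_history_trg
    AFTER INSERT OR UPDATE OR DELETE ON users
    FOR EACH ROW EXECUTE PROCEDURE users_track_history();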


He is talking about Effective Dating, except in his version he can never use it. If you don't need to reference past or future dates, then his idea is nice, since it keeps the primary tables smaller and the SQL easier to handle.

Effective Dating: https://talentedmonkeys.wordpress.com/2010/05/15/temporal-da... http://www.cs.arizona.edu/~rts/tdbbook.pdf

I did a college project with this where you pick the date and the system will show you how the database looked at that date/time. The past was read-only, but the future was editable. The SQL SELECTs are quite annoying and the data grows very fast.
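
To give a flavour of the annoyance (the tables and the :as_of bind parameter are made up for illustration), every join in an effective-dated schema has to repeat the validity predicate:

    SELECT c.name, o.total
      FROM customers c
      JOIN orders    o ON o.customer_id = c.id
     WHERE c.valid_from <= :as_of AND :as_of < c.valid_to
       AND o.valid_from <= :as_of AND :as_of < o.valid_to;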


I personally believe that the majority of CRUD projects actually need Temporal databases. I've come to realize that--in my 12 year career--every project I've worked on was in some way covered by at least one form of government regulation (whether Sarbanes-Oxley or HIPAA) that required the ability to audit change. The problem is, most organizations (at least the ones I've worked for) don't know that they are covered by these sorts of regulations, because most places are sub-100 employee consultoware shops that are just reimplementing ERP for small B2B clients who also don't have much visibility on their regulatory coverage.

That's not even getting in to user requirements. Users almost always eventually want to know what the past data looked like. They always say at the outset of the project that they won't need it, and they always change their mind a year in. And they never understand why you can't "just get it back. You said we had backups."

It's kind of the Wild West out there.


> They always say at the outset of the project that they won't need it, and they always change their mind a year in.

I feel your pain. They don't want to pay for it until they need it, but when they do need it, they need it bad. They want you to implement it retroactively so all the changes made even before you implemented it are there.

When I was in the position, the solution I came up with was to do a very quick and dirty version that logged all changes for every table to a text file. Then I charged separately to parse the text file into the auditable database. I was able to get away with this because the database was tiny.


Commenting to voice my agreement. Modeling and querying temporal schema in a traditional RDBMS is difficult so most developers don't even try to do it (and, when they do, they do it badly).


Yep, that's exactly right. So many shops live and die off of new college grads who--at best--only know relational algebra and nothing about optimization. I've seen a rare few people coming out of college who could design a schema that didn't completely destroy their data, but it's been rarer still to see anyone of any experience level in that environment who knows anything about making it work in a performant manner.


I came here to plug "Developing Time-Oriented Database Applications in SQL", but you already did.

Once you start developing systems w/ temporally-oriented schemas I find that it's difficult to go back. Queries can be painful (and arguably the functionality should be "baked-in" deeper down in the RDBMS stack), but the functional result is very nice.


Brixon, did you ever have to handle drafts or documents/records that needed approval workflow on update or creation? I'm currently debating whether or not to implement drafts/revisions in the same table for simplicity at the expense of table bloat.

Did you go to the University of Arizona? I know Prof. Snodgrass.


Another approach that is often used for these things is now generally referred to as Event Sourcing. You'll see it mostly associated with Domain Driven Design (DDD) and Command Query Responsibility Segregation (CQRS); however, it is a technique that had been widely used for decades before either of those terms was made popular.

In particular, it has been used in banking applications for quite a long time.


Yes, another way to think about this same topic is Append-Only data structures which is why Apache Kafka is popular for this.

Also, the transaction log files that RDBMSs are built on are append-only and have the same Event Sourcing semantics.


+1 for mentioning RDBMS transaction logs, which are the canonical "prior art" in this area.


While I use event sourcing w/ Akka persistence, I find it is not a panacea for the problems that the blog post may reference. I find that oftentimes people store the resulting state in memory, which may become unwieldy. And if you're storing it on DB/disk (not to be confused w/ storing your events/snapshots), you're basically back to the way traditional CRUD works, except you have a stream of events if you need to build a new data structure from scratch. Also, event sourcing often gives you eventual consistency; while it is obviously preferable to develop your app that way for scalability reasons, not all apps can do that, and some need atomic state.


I generally use a combination of both approaches. For entities with a complex lifecycle, or with lots of updates or concurrency, I tend to prefer the ES approach. But for value objects, enumerations and lookup values, I'll go more with the approach he describes.

In the past I used the RDBMS trigger approach he describes, but have now moved to doing this more in the application layer (typically with ORM signals/events) because you can more easily write unit tests for it and make your code more portable. Neither of those are always requirements, though.

Would be interesting to see an ORM that could express triggers and generate trigger code in a declarative manner. Haven't seen anything like that, but could provide for some performance advantages in certain cases. Probably more complexity than it's worth, though.


For atomic updates we implement this via CAS-like semantics. A caller who obtains a point-in-time aggregate view of an object (equivalent to the 'R' in CRUD) is also given the sequence number of the latest event. They can then conditionally push new events into the object's queue and request a rejection if the current sequence id is not equal to the one they have. Since we use Redis for the queue, the event pushes have a consistent view of the queue at the time of the attempted push.


While scalable, this basically turns it into a bidirectional scheme by adding a side channel for conflict resolution, which can add system complexity. This is why many event sourcing approaches recommend making the event represent the mutation (e.g. count + 1) instead of the result (e.g. count = 15)...when it does have to be atomic, I just give up and use a centralized place like you have with Redis. Such is life with atomic, distributed systems...they can only be so distributed and still reject conflicts in real time.


Most of our events do in fact represent the state change as the mutation.

However I see no complexity issue with how we implement CAS semantics. It's 100% equivalent to any other implementation, with all the implications -- reduced concurrency being the primary issue.

We avoid this at all costs for most objects and, like you suggest, resolve conflicts by de-duping events or combining state changes to the final aggregate.


The basic idea looks simple when you have a few entities/events and the number of rows in the table is reasonable. It seems to me that the overhead and complexity are somewhat manageable when that's the case. But what do you do, and how do you manage the complexity, when you have 100s of entities, tables with rows in the 10s of millions, and lots of complicated SELECTs and JOINs to do?

I'm not saying that it's impossible, but how do people manage the complexity at that scale, and also the performance overhead?


With reasonable abstractions. Someone else mentioned DB transaction logs, but you can also look at Kafka and GetEventStore for additional reference points.

That said, you don't necessarily want/need to apply ES to every single entity in your model. Typically, you want to apply it to those entities that have complex lifecycles or that are highly dynamic and might involve a lot of complexity.

ES actually often makes it much easier to reason about complex lifecycles because you can isolate the transition logic into discrete units of code. So, it's actually a tool for managing complexity.

In terms of performance overhead, ES usually results in significant write-side performance improvements because you can either remove or dramatically simplify your transaction coordination overhead (increase concurrency, reduced latency).

To address read-side performance, people generally use a cached or materialized view that is lazily updated or implemented with the events as the "write-behind" log for the cached view. This read-side detail is why ES is most often associated with CQRS or lambda-style architectures.

As an example, in banking scenarios, you would determine "current balance" by adding up all of the past transactions on an account. Rather than doing this computation every single time you want the current balance, you typically settle the account in a nightly job and store the "nightly balance" and then compute the "current balance" by only considering new transactions for the day.
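
In SQL terms the read then looks something like this (the schema is invented for illustration): take the latest nightly snapshot and only sum the transactions posted since it.

    SELECT s.balance + COALESCE(SUM(t.amount), 0) AS current_balance
      FROM account_snapshots s
      LEFT JOIN transactions t
             ON t.account_id = s.account_id
            AND t.posted_at  > s.settled_at
     WHERE s.account_id = 42
       AND s.settled_at = (SELECT MAX(settled_at)
                             FROM account_snapshots
                            WHERE account_id = 42)
     GROUP BY s.balance;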


I understand the basic architecture (or so I think), and it's again the question of what happens when you have lots of complex entities with complex workflow/significance. For example, in banking the transactions are one core thing while the rest of the customer information is not so much; the same goes for the shipping example (or at least it looks that way from the outside). In a health scenario, on the other hand, I can see most of the required information being the type that requires an ES model.

What I gather from a Kafka-style architecture is that the storage/persistence works through the abstraction, but the processing itself you have to do separately for each type of entity, and if you have many of them and your query requirements are complex, I'm not sure how to abstract that. I might be missing something.

Again, the ideas of caching, lazy updating, and horizontal scaling resonate with me, but I don't see how to write good abstractions that allow applying them. For most complex entities it seems I have to do this on a case-by-case basis, and I can't figure out ways to contain/manage that complexity. And while ES helps with write scalability, transactional integrity seems to become a much more complicated question and harder to reason about, doesn't it?


I've been playing around with a Graph database OGM (G is for graph) https://github.com/emehrkay/Gizmo and its event sourcing is a very simple implementation of what Fowler described in a post a while back.

Graphs are simple which made this exercise even easier:

    entity(vertex) -> edge($RELATIONSHP) -> entity(vertex)
vertices are simple, unstructured maps, so in my library you simply add a source entity when saving and all of the changes will be tied to that source.

Pseudo:

    user = get_user('emehrkay')
    reply = get_reply($THIS_CONTENT)
    user_reply = connect(user, reply, 'posted')
    save = save(user_reply, source=user)
This would create a graph like this

    user('emehrkay') -> posted -> reply($THIS_CONTENT)
    reply($THIS_CONTENT) -> event -> event("reply's changes")
    event("reply's changes") -> source -> user('emehrkay') 
"emehrkay made these changes to the reply"

Very fun/cool stuff. I haven't had the need to query against the history yet, so I don't know how simple it would be, or whether it would be possible to set a time and query the whole graph against that time (like in the example posted in another comment).


I can't upvote this enough.

The game backend I've been leading uses event sourcing for almost every "object" in our system. It's proven flexible enough to model both normal CRUD style access patterns and other activity feed-like features that may or may not have an aggregate state.

Some huge advantages we've identified:

* Modeling our objects as state changes has resulted in less code and improved readability. The effects of a state change are much more obvious than a series of CRUD-style updates. This is anecdotal of course.

* Audit trail: we have every mutation for every object in our system. We can go back in time and identify bugs and fix them by just overlaying new change events. We can also feed this data into any number of analytics systems and create materialized views without having to know in advance what kind of data we wish to collect.

* Optimal performance characteristics for our chosen DBs (Redis+DynamoDB). We use Redis for storing incoming events, and DynamoDB to store our periodic snapshots that compress the aggregate state for an object (and keep the event history). This allows us to keep our Redis space usage bounded, as compressions flush the queue for each object. We optimize for DynamoDB pricing/performance by only needing eventually consistent reads for obtaining a snapshot. We keep our write capacity low because the only writes are when we write a new snapshot -- this happens much less frequently and is allowed to fail since a failure only means we continue using the previous snapshot.

* Consistent point in time snapshots. To obtain a backup of the objects, we simply need to snapshot our Redis databases. If we restore, all objects are consistent because the per-object snapshots we store in DynamoDB are immutable and will always exist with any given state of the Redis data. Another consequence of this architecture is that we can take an old snapshot and "go back in time" to the real state of the data.

* Highly concurrent without need for optimistic or pessimistic locking. Redis is obviously super performant and all we use it for is to push event items to the back of lists. For concurrency and conflicts, we can resolve the state of an object by de-duping events that may be invalid or duplicate from the point-in-time view of the aggregate object when the event was pushed. Obviously this won't work for every type of object state, but in our experience it has been extremely resilient.

A few disadvantages:

* Views on the objects must be eventually consistent and must be materialized externally to the storage of the objects.

* More CPU required to create the aggregate state when getting the object. Hasn't been an issue for us and can be mitigated by the frequency of compaction into snapshots.

* Space usage for snapshots. Eventually long lived objects will have a significant cumulative historical size. It's trivial though to traverse the snapshot history and trim the space usage by simply discarding snapshots after a certain point in time. The object will then be constructed from the last snapshot's aggregate state and events that happen after the purge.


Is your project a product or OSS?

I've been tinkering with a simple game back-end for building up turn-based, multi-player games (an engine for ripping off board games) mostly as a demo/proving ground for ideas and not as a serious project since this is a pretty active space with established projects.


Unfortunately it's proprietary for our studio's games. I've been hoping I could release some parts of the infrastructure as OSS, but I might have to settle for a presentation at some point.


Are you part of the team building Underworld Ascendant? Because that sounds very much their speed.


Oh man I wish I was. Those games were practically life changing as a child. At one point my parents moved our computer into the bedroom to stop us from playing all hours of the night. They then woke up on a Saturday morning to a friend and I sitting on the edge of their bed playing. That did not go over well -- but they did move it out of their room after that!


I've used this approach, but although there are pros and cons I come down against the idea of using triggers to record the history records.

Triggers make things happen sort of "by magic" and it's easy to be confused by their behavior and how they impact transactions. You also need to remember to disable them if you ever want to do some kind of bulk update/correction that you don't want recorded in the history.

My approach is to use stored procedures for the updates, and make the history tracking explicit in that code. Overall this is easier to manage, with fewer gotchas.


How do you deal with future developers acting on the table? Triggers ensure that later on I don't develop some new code and forget to update the history (and does all your code deal with inconsistent/damaged history).

Perhaps if you're using some sort of abstraction layer, and you can make sure that only developers who've read the docs and are aware of the semantic links between tables (which aren't enforced by the database) touch it, then this can be mitigated?


I was about to say the same. I've made a lot of version 2 applications, and version 1 usually has this magic that always causes unintended side effects.

Even if it doesn't bite you, it will bite someone else.


We do the same thing except we don't keep the version number in the main table since we have no reason to ever display that unless we're looking at the history anyway. We have a history table with two JSON columns representing the before and after state, so we don't need to keep columns in sync. (Technically the before state might be redundant, but it makes comparisons simpler). I really like this idea of using triggers, though. We handle it at the ORM event level so the DB isn't entirely self-sufficient.

There have been many times where having the version information has helped us to debug and fix problems. We've had customers call us, freaking out because they accidentally deleted something, or confused because somebody else in the company changed something. We can tell them exactly when, what and by whom the data was changed and easily restore it.
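
For reference, the table shape is roughly this (column names here are my own shorthand); we populate it from ORM events rather than a trigger:

    CREATE TABLE entity_history (
        entity_id    int         NOT NULL,
        changed_at   timestamptz NOT NULL DEFAULT now(),
        changed_by   text,
        before_state jsonb,   -- state prior to the change (arguably redundant)
        after_state  jsonb    -- state after the change
    );

    -- "When, what and by whom" for a given record:
    SELECT changed_at, changed_by, before_state, after_state
      FROM entity_history
     WHERE entity_id = 42
     ORDER BY changed_at;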


It'd be nice if we could see a Postgres patch that works exactly like Oracle's Flashback [1]. You can simply do: SELECT * FROM my_table AS OF TIMESTAMP TO_TIMESTAMP('2015-07-01 08:05:04', 'YYYY-MM-DD HH24:MI:SS') and it works as expected.

This article is just reinventing Kimball's Slowly Changing Dimensions Type 4 [2].

[1] http://docs.oracle.com/cd/B28359_01/appdev.111/b28424/adfns_...

[2] https://en.wikipedia.org/wiki/Slowly_changing_dimension#Type...


Postgres used to have timetravel built in, but there's still a basic timetravel implementation in the contrib: http://www.postgresql.org/docs/9.4/static/contrib-spi.html


We implemented a "point-in-time" database architecture to accomplish similar behavior based on this document:

https://www.simple-talk.com/sql/database-administration/data...

The article describes two columns, dateEffective and dateEnd. If dateEnd is null, the row is still effective. When the row has been updated or deleted, the dateEnd field is set.

This was implemented in a regulated industry where we had to show the state of the system at given points in time. It worked well for this purpose.
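
A condensed sketch of the pattern (the table here is invented, but the two columns are as the linked article describes):

    CREATE TABLE policies (
        policy_id     int         NOT NULL,
        premium       numeric     NOT NULL,
        dateEffective timestamptz NOT NULL DEFAULT now(),
        dateEnd       timestamptz            -- NULL = still effective
    );

    -- An "update" closes the current row and inserts its replacement;
    -- a "delete" just closes it.
    UPDATE policies SET dateEnd = now()
     WHERE policy_id = 7 AND dateEnd IS NULL;
    INSERT INTO policies (policy_id, premium) VALUES (7, 120.00);

    -- State of the system at a given point in time:
    SELECT * FROM policies
     WHERE dateEffective <= '2015-07-01'
       AND (dateEnd IS NULL OR dateEnd > '2015-07-01');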


During 2004-5, I designed and implemented a K-V object store on the top of BerkeleyDB. It served as the persistence layer for a number of applications my then company developed for regulated industries.

The approach was very similar. The current versions of objects were stored in a set of tables, while old versions were stored in separate tables. In order to reduce bloat, I employed a few fairly simple compression techniques. They were reasonably good. On desktop-class machines with a Pentium processor, 256 MB of RAM and spinning hard disks, searching a million objects completed in single digit seconds.

Later, I added bit-mapped indices with dynamically self-adjusting bucket sizes that were determined based on locally-weighted densities along their respective dimensions. They reduced search time on a million objects to tens of milliseconds, with full history searched.

The obvious downside was that there was no SQL for ad hoc queries! So, a command line shell was provided with Ruby as the language, talking to a C API behind the scenes.

All in all, the system worked and scaled very well.


I don't understand what the 'version' column is for. You can get the sequence by just looking at the 'version_on' attribute. Deletes are already restricted on that table and it wouldn't solve concurrency issues since it's generated by the trigger.


Concurrency issues can be solved by restricting the UPDATE to the version passed in by the application.

If I think I'm updating version 2 of an entity, then only update that version, otherwise throw an error.

I've updated the post to address this scenario.


I don't see the update in the post. If you pass the version in an UPDATE statement it will get overridden by the trigger.

NEW.version := OLD.version + 1;


The version is part of the WHERE clause in the UPDATE statement. Therefore, it is only possible to update the version of the entity that you were updating. If the entity is updated before your statement, it would not succeed in updating the row.

Is there a different concurrency issue I'm not seeing caused by updating the version in the trigger?
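
In other words (table and values invented for illustration):

    -- The application read version 2 earlier; only update that version.
    UPDATE entities
       SET name = 'new name'
     WHERE id = 42
       AND version = 2;

    -- If another writer got there first, version is already 3, this matches
    -- zero rows, and the application treats "0 rows updated" as a conflict.
    -- The trigger's NEW.version := OLD.version + 1 only fires when the row
    -- actually matched.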


Yes, you're right. I didn't notice the WHERE in the UPDATE statement. That makes a lot more sense now, thanks!

You still have the issue of someone setting the version explicitly in an update statement without a WHERE clause which checks for the current version. But I guess as long as you enforce not doing that it's fine.


Unless you wanted to have a count of versions, without having to execute a second query. Or have some future means of rollback - 'active' versioning system?


I rarely do a D in prod. I've built a generic trigger function for Postgres, and a proc for SQL Server, that will do the auditing in K-V format on every table.

And I call a proc to set up the audit tables and triggers for each table.

Works well. I also capture information like the host and session references in an audit batch record that points to the audit detail.

I also employ the insert only table for some very special datasets where looking up the history in a quick manner is important.
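
For the curious, one way to build a table-agnostic audit trigger like that in Postgres looks roughly like this (names are illustrative, jsonb stands in for the K-V format, and the audit batch record with host/session info is omitted for brevity):

    CREATE TABLE audit_detail (
        audit_id    bigserial   PRIMARY KEY,
        table_name  text        NOT NULL,
        operation   text        NOT NULL,   -- INSERT / UPDATE / DELETE
        changed_at  timestamptz NOT NULL DEFAULT now(),
        changed_by  text        NOT NULL DEFAULT session_user,
        client_addr inet                 DEFAULT inet_client_addr(),
        old_row     jsonb,
        new_row     jsonb
    );

    CREATE FUNCTION audit_any_table() RETURNS trigger AS $$
    BEGIN
        IF TG_OP = 'DELETE' THEN
            INSERT INTO audit_detail (table_name, operation, old_row)
            VALUES (TG_TABLE_NAME, TG_OP, to_jsonb(OLD));
        ELSIF TG_OP = 'UPDATE' THEN
            INSERT INTO audit_detail (table_name, operation, old_row, new_row)
            VALUES (TG_TABLE_NAME, TG_OP, to_jsonb(OLD), to_jsonb(NEW));
        ELSE
            INSERT INTO audit_detail (table_name, operation, new_row)
            VALUES (TG_TABLE_NAME, TG_OP, to_jsonb(NEW));
        END IF;
        RETURN NULL;
    END;
    $$ LANGUAGE plpgsql;

    -- Attached per table by the setup proc, e.g.:
    CREATE TRIGGER users_audit
    AFTER INSERT OR UPDATE OR DELETE ON users
    FOR EACH ROW EXECUTE PROCEDURE audit_any_table();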


Completely offtopic, but the site loads 1.1 MiB worth of a grinning man at a resolution of 2446x2446, which is then rendered as a 91x91-pixel icon.


Totally noticed this in google analytics, earlier. Whoops. It's now fixed!


I do something similar because we have drafts (first drafts and drafts of existing records) in our system. My question is whether there is any downside to maintaining drafts and historic revisions in the same table, as opposed to maintaining them in a separate table as the OP does. Thoughts? Bloat and table optimization come to mind as potential problems, but it does make it easier to query for records.


Could you get rid of the is_removed column again (by copying the deleted row to the history table) so the main table again has only the currently active rows, and simplify queries? (Moot if both tables are merged and views are used as suggested by others.)

Is it common to name timestamps _on, like version_on? That doesn't seem very clear to me. What's the rationale?


Oh, please, my eyes hurt so much. Please, please, please change that "WHERE boolean_column IS FALSE" to "WHERE NOT boolean_column". Thank you ;P


the problem seems only half solved - the author does not show how they select (only) the most recent record.

as soon as the app needs to generate a list of f.e. active users, this can get nasty pretty quick. sure, one could use an aggregate like max(version), but once there are joins or lots of records involved, this could become a performance hit...

EDIT: the records are kept in a separate table, so my arguments are moot.


Why? They're modifying the main table, which isn't growing with each update. The changes are simply tracked in the history table.


oh right. small but important detail... thanks for pointing it out to me.


Valid question though, I've updated the post to be more explicit and added a couple more query examples.


I guess you could work around that by having null in the version column for the active/latest row. Then your queries become 'where version is null' if you want to skip superseded rows. You could then use partial indexes (where version is null) to handle unique constraints etc.
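
For example (made-up table):

    CREATE TABLE accounts (
        account_no text    NOT NULL,
        version    int,              -- NULL = current row, 1..n = superseded
        balance    numeric NOT NULL
    );

    -- Unique constraint that only applies to the current rows.
    CREATE UNIQUE INDEX accounts_current_uniq
        ON accounts (account_no)
     WHERE version IS NULL;

    -- Skip superseded rows:
    SELECT * FROM accounts WHERE version IS NULL;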

EDIT I hadn't finished reading the article, so I didn't see that they were using triggers and the history table. Using the approach I outlined you could keep everything in the same table (if you were that way inclined).


we tried keeping everything in the same table in some web app of ours. we used a version id and an extra versioning status field that kept track of the states "latest published", "latest draft", "archived" which kinda worked fine...

the disadvantage is bloat, as in our case the history data was rarely used at all and just there as a safeguard.


[deleted]


The author does mention Datomic at the bottom of the post, along with CQRS (by which he most likely means storing events) as other options. He then mentions that he's stuck using an RDBMS, so Datomic is not an option.

The one thing I would add is that using an RDBMS does not preclude you from storing events along with your relational data. So you have a representation of current state, but also all of the events that took place to get you to that stage. And that will give you the same type of history.


It's a very cool feature of Datomic that the storage engine is pluggable, and easily changed depending on the characteristics of your data. Using Postgres is trivial:

http://docs.datomic.com/storage.html#sql-database



