
The Ideal Domain-Driven Design Aggregate Store? - zerd
https://vaughnvernon.co/?p=942
======
daigoba66
In one project I was able to achieve essentially this "ideal DDD aggregate
store" using MSSQL.

- The aggregates are XML-serializable entities persisted into tables using
the XML datatype (fairly comparable to PostgreSQL's JSON datatype).

- MSSQL has numerous methods for querying, updating, and reading from
XML-typed columns. They're certainly clunky, but they do work.

- To simplify querying and indexing, we created computed columns that use a
user-defined function to extract values from the XML column. By marking these
columns as persisted, the UDF is executed, and the column updated, only when
the row is written. And because they are persisted, the computed columns can
be included in normal indexes to improve performance (see the sketch below).
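
A minimal sketch of the pattern (all names invented; SQL Server does not
allow xml methods directly in a computed column, so the wrapping UDF,
schema-bound so the column can be persisted and indexed, is what makes it
work):

    -- Wrap the xml method in a schema-bound, deterministic UDF.
    CREATE FUNCTION dbo.CustomerEmail (@doc xml)
    RETURNS nvarchar(256)
    WITH SCHEMABINDING
    AS
    BEGIN
        RETURN @doc.value('(/Customer/Email)[1]', 'nvarchar(256)')
    END
    GO

    CREATE TABLE dbo.Customers (
        Id    int IDENTITY PRIMARY KEY,
        Doc   xml NOT NULL,
        -- recomputed only when the row is written
        Email AS dbo.CustomerEmail(Doc) PERSISTED
    )
    GO

    -- persisted computed columns can take part in normal indexes
    CREATE INDEX IX_Customers_Email ON dbo.Customers (Email)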

A lot of folks may scoff at using XML, and especially MSSQL's XML datatype.
But I will say that it works. And after you've built a simple foundation,
developing your aggregates and working with them almost becomes a breeze.

And as far as XML versus JSON... it almost doesn't matter in this case.

~~~
jchrisa
This sounds like the approach used by Couch-style incremental map reduce. For
each JSON document, run a JavaScript function to determine the document's
contribution to the map reduce index. (Basically a user-defined column with
optional custom aggregation.)

So you can easily normalize multiple incoming JSON types to a single index
(e.g. mix JSON data from different APIs into a single "column", etc.)

And that's not even getting into the reduce side. From [1]: "if documents
represent your company’s financial transactions, a view can answer the
question of what the spending was in the last week, month, or year."

[1]
[http://guide.couchdb.org/draft/views.html](http://guide.couchdb.org/draft/views.html)

~~~
daigoba66
It's very much the same pattern.

Also check out RavenDB ([http://ravendb.net/](http://ravendb.net/)). It's
another JSON document database with JavaScript-powered map/reduce indexes.
One caveat is that RavenDB updates its indexes asynchronously; it's a form of
eventual consistency. That means that after a write, an index may not
immediately reflect the updated data. Depending on the application, this may
or may not be acceptable.

~~~
jchrisa
There are as many sync/async options for Couch map reduce as you could
possibly need.

The key is that, for some of the patterns to work, you need to know that your
index queries come from the same database snapshot as your event feed, which
is easier to do when you aren't in clustered mode.

------
skrebbel
Does anyone have real field experience with using Postgres as a JSON data
store like this? Most notably, I'm curious about migrations and evolutionary
database design.

With old-school relational SQL, you'd write migration scripts that alter the
database schema _and also_ port the data.

It is my understanding that many people who use MongoDB and the like just
tend to keep old data as it is, and make sure that the code supports older
versions (handling missing keys, etc.). So you'd get code like

    
    
        record = db.get(...)
        if (!record.subtype) record.subtype = "defaultSubType";
    

Or maybe something smarter with a version number in each document.

I prefer to not write that code, but I also really like the idea of (ab)using
Postgres JSON stores, with identically structured JSON for each row.

Did anyone ever write JSON data migrations with something like Postgres? Or
Mongo? If so, how? PL/pgSQL? Load everything into backend memory, manipulate
it, and store it back? What if you want to split aggregates? (i.e. you have
1M "Person" JSON blobs and want to split them into 1M "Person" blobs and ≤1M
"Address" blobs)

~~~
netghost
Just write ordinary migrations with updates and such. I think you should be
able to manipulate the JSON without too much bother. Here are the functions
available for JSON:
[http://www.postgresql.org/docs/9.4/static/functions-json.html](http://www.postgresql.org/docs/9.4/static/functions-json.html)
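
For example, skrebbel's missing-key backfill above can be a one-off UPDATE
instead of a permanent default in application code. A sketch with invented
table and key names, assuming a jsonb column (the || concatenation operator
needs PostgreSQL 9.5+):

    -- one-off backfill: stamp the default into every old document
    UPDATE records
    SET data = data || '{"subtype": "defaultSubType"}'::jsonb
    WHERE NOT data ? 'subtype';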

Here's more about the JSON type in Postgres:
[http://www.postgresql.org/docs/9.4/static/datatype-json.html](http://www.postgresql.org/docs/9.4/static/datatype-json.html)
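
Splitting aggregates is similar: an INSERT ... SELECT into the new table,
then an UPDATE to drop the embedded copy. Again a sketch with invented names;
the - delete-key operator also needs 9.5+, so on 9.4 you'd rebuild the
document instead (e.g. in PL/pgSQL):

    -- hypothetical layout: people(id, data jsonb) with an embedded address
    CREATE TABLE addresses (
        id        serial PRIMARY KEY,
        person_id integer NOT NULL REFERENCES people (id),
        data      jsonb NOT NULL
    );

    -- copy each embedded address out into its own document
    INSERT INTO addresses (person_id, data)
    SELECT id, data -> 'address'
    FROM people
    WHERE data ? 'address';

    -- then remove the embedded copy from the parent
    UPDATE people
    SET data = data - 'address'
    WHERE data ? 'address';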

There's also Postgres' hstore, which I believe now shares the same backing as
JSON(B), but may have a different API.

------
califield
The author doesn't mention the go-to option in this scenario: write a data
mapper[1] to dehydrate your object.

While I'm a huge proponent of PostgreSQL and the hstore[2]/JSON column types
(the hstore column type is not mentioned, but it existed well before the JSON
columns), it's still more common to use PostgreSQL in a primarily relational
fashion, and the author could have touched on that more before giving us the
scoop on the shiny new stuff.

[1]
[http://martinfowler.com/eaaCatalog/dataMapper.html](http://martinfowler.com/eaaCatalog/dataMapper.html)
[2]
[http://postgresguide.com/sexy/hstore.html](http://postgresguide.com/sexy/hstore.html)

~~~
bdavisx
I think the general consensus is that a "data mapper" isn't much better than
an ORM (which is, obviously, a form of data mapper). While you can write a
data mapper that is less intrusive than a typical ORM, it's still not as nice
as using a schemaless type of store such as JSON.

~~~
mrottenkolber
Not an OOP wizard, but in my experience ORMs tightly integrate with both your
objects and your DB. So a "DataMapper" would indeed be my choice for a sane
"persistence layer", assuming:

* The objects don't know about the mapper

* The DB doesn't know about the mapper

The "ideal DDD aggregate store" with a "schemaless type of store such as JSON"
on the other hand binds the implicit schema to your objects. Seems more
interlocking and less modular to me.

E.g. I'd rather settle on "this Mapper is the bridge, whatever it does"
instead of "the bridge is implicitly defined by however I entangle objects and
store".

It boils down to the old inheritance-versus-composition debate.

~~~
bdavisx
I'm not disagreeing, but I guess I was comparing a relational schema to a JSON
"schema".

I do agree a mapper is less intrusive than either. I think I have a bias
right now because I've been working on an event-sourced model that uses JSON
as the serialization mechanism :).

------
ExpiredLink
I'm amazed that people still refer to DDD ('OOP on steroids').

~~~
socceroos
Do you mean that the approach of DDD feels obsolete or antiquated? I'm
genuinely curious...

~~~
dm3
It's neither obsolete nor antiquated. People in this thread seem to conflate
DDD and OOP, which is understandable given that 90% of DDD resources target
the typical enterprise Java/C# setting.

The truth is, DDD consists of two parts: tactical patterns, such as Entity,
Value Object, or Aggregate Root, and strategic design patterns, such as
Ubiquitous Language, Bounded Context, or Context Maps. Tactical patterns are
much easier to understand and apply, but most of the benefits claimed by DDD
come from the application of strategic design.

Sadly, most people starting with DDD focus on the tactical patterns and
technology, then get demotivated when the benefits do not appear.

~~~
Glide
If you manage to just get something like Ubiquitous Language through, you've
already gotten a LOT of value from DDD.

It really helped me, when I was starting out, to reason about code,
requirements, and communication mismatches.

~~~
ExpiredLink
AFAIK, "Ubiquitous Language" nowadays is considered impractical by DDD
aficionados.

------
dustingetz
Datomic meets most or all of these criteria by design, per my understanding;
most importantly, it avoids the object-relational impedance mismatch while
preserving ACID.

~~~
kitsune_
Most definitely, EDN is obviously a nice format to work with if you already
use Clojure.

Also, I think what all the DDD and Hibernate advocates failed to articulate
when they hyped their paradigms 10 years ago was that their model of
programming simply doesn't work that well. As soon as you introduce an ORM,
your class design will have some inane, arbitrary limitations that actually
fight the "principles of DDD" on every level. You end up with convoluted
domain models and services, and a huge monstrous cathedral of abstraction.

