Events themselves are no longer interesting or semantically meaningful, because they're a single atomic change to a database field. A change in the way state is represented means a different set of events is produced. Subscribing to meaningful occurrences in this model is difficult, and probably will eventually result in the creation of a "meta-event" for each action that contains the semantic intent of the outcome.
Events are IMO most useful for analytics and processing when they correspond to meaningful business outcomes: steps in a workflow, consequences of user actions, and the like - despite this having the problem of making extraordinary and rare business outcomes more difficult to accommodate.
You say "fields", I say "facts".
Which of a set of nominal events or a graph of facts is higher-level will ultimately depend on the domain, but using Datomic over several years now has led me to think that it's much more often the latter than conventional Event Sourcing has trained us to believe.
In fact, it may well be that Event Sourcing has traditionally been relegated to those (few) use cases where the "set of nominal Event Types" approach is better, being the only situations where conventional Event Sourcing is practical.
> Events themselves are no longer interesting or semantically meaningful, because they're a single atomic change to a database field. A change in the way state is represented means a different set of events is produced. Subscribing to meaningful occurrences in this model is difficult,
Datoms are not 'single' or 'atomic' - they're packed coherently together in a Transaction, and are immediately relatable to entire database values. Subscribing is just pattern matching, and is not hard, as the following section of the article tried to show: https://vvvvalvalval.github.io/posts/2018-11-12-datomic-even....
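To make "subscribing is just pattern matching" concrete, here is a minimal sketch in plain Python (not Datomic's actual Clojure API; the attribute names like ":order/status" are hypothetical) of treating a transaction as a coherent batch of datoms and matching patterns against it:

```python
# A datom here is (entity, attribute, value, added?); a transaction is a list of
# datoms committed together. "Subscribing" to a meaningful occurrence means
# matching a pattern against each transaction's datoms.

WILDCARD = None  # matches anything in a pattern position

def datom_matches(datom, pattern):
    """True if every non-wildcard pattern position equals the datom's value."""
    return all(p is WILDCARD or p == d for d, p in zip(datom, pattern))

def matching_datoms(tx_data, pattern):
    """Return every datom in the transaction that matches the pattern."""
    return [d for d in tx_data if datom_matches(d, pattern)]

# One transaction: several related datoms, not isolated single-field changes.
tx_data = [
    (42, ":order/status", "shipped", True),
    (42, ":order/shipped-at", "2019-01-15", True),
    (42, ":order/status", "paid", False),  # retraction of the previous value
]

# "Subscribe" to orders becoming shipped: match assertions of that status.
shipped = matching_datoms(tx_data, (WILDCARD, ":order/status", "shipped", True))
print(shipped)  # → [(42, ':order/status', 'shipped', True)]
```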
> [...] and probably will eventually result in the creation of a "meta-event" for each action that contains the semantic intent of the outcome.
Which would only bring you back to the 'set of nominal Events' case, so it's not a regression compared to conventional Event Sourcing anyway. That's what the article meant by 'annotating Reified Transactions', and that's something you can even do after the fact (i.e. in a later transaction, when the requirement for it becomes apparent), which means that you don't have to get these aspects right upfront, nor commit to them.
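The after-the-fact annotation idea can be sketched like this in plain Python (illustrative only; the attribute names are hypothetical, not Datomic's schema): because the transaction is itself an entity, a later transaction can assert facts about it.

```python
# The database as an ever-growing list of (entity, attribute, value) facts.
db = []

# Transaction 1001: the original business change, with no semantic annotation.
db.extend([
    (42,   ":order/status", "shipped"),
    (1001, ":tx/instant",   "2019-01-15T10:00:00Z"),
])

# Later, a requirement for a semantic "event type" appears. Transaction 1002
# annotates transaction 1001 retroactively - no upfront commitment was needed.
db.extend([
    (1001, ":event/type", ":order-shipped"),
    (1001, ":event/user", "alice"),
])

# Query: which transactions were order shipments?
shipments = [e for (e, a, v) in db if a == ":event/type" and v == ":order-shipped"]
print(shipments)  # → [1001]
```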
For a more in-depth discussion of Datomic's Reified Transactions, I suggest this talk by Tim Ewald: https://docs.datomic.com/on-prem/videos.html#reified-transac...
This isn't true; changes to datoms are not considered individually, but are grouped together into transactions. Additionally, you can add arbitrary keys to annotate the transaction currently being committed.
You still get all the benefits of event sourcing, but then you can query the state much more easily.
At least that's what it looks like to me from a cursory look at Datomic and daily struggles with normal event sourcing (Kafka).
Life is too short to run proprietary software.
Agreed, but Datomic uses Datalog as its query language:
> In practice, a Datomic Database Value is not implemented as a basic list; it's a sophisticated data structure comprising multiple indexes, which allows for expressive and fast queries using Datalog, a query language for relational data.
Hence the article should still be relevant to free Datalog implementations, such as the Racket Datalog package.
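For a flavour of what a Datalog engine does, here is a toy illustration in Python (my own sketch, not the Racket package's API): solving a conjunction of (entity, attribute, value) patterns against a set of facts, where strings starting with "?" are variables.

```python
facts = {
    ("alice", ":person/likes", "clojure"),
    ("bob",   ":person/likes", "sql"),
    ("alice", ":person/city",  "paris"),
}

def query(patterns, facts):
    """Solve a conjunction of (e, a, v) patterns; '?x' strings are variables."""
    def unify(pattern, fact, env):
        env = dict(env)
        for p, f in zip(pattern, fact):
            if isinstance(p, str) and p.startswith("?"):
                if env.setdefault(p, f) != f:
                    return None  # variable already bound to something else
            elif p != f:
                return None      # constant mismatch
        return env
    envs = [{}]
    for pattern in patterns:  # join: thread bindings through each clause
        envs = [e2 for env in envs for fact in facts
                if (e2 := unify(pattern, fact, env)) is not None]
    return envs

# Who likes clojure AND lives in paris?
print(query([("?who", ":person/likes", "clojure"),
             ("?who", ":person/city",  "paris")], facts))
# → [{'?who': 'alice'}]
```

Real Datalog implementations add indexes and rules on top, but the relational join over patterns is the core idea.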
Also, having used datalog, I couldn’t disagree more. SQL is fine.
It's no wonder large companies have become fans of open source with the advent of cloud computing. People are throwing software at their magical money making machines entirely for free.
Edit: If you think this is just an abstract risk, remember that Google shuts down popular and incredibly useful products so often it's become a meme around here.
For most companies, and especially startups, that is far more important than the very unlikely risk that a vendor completely disappears overnight, and the even more unlikely risk that their software also stops working completely before you can migrate to something else.
Look, right now I am working for a company that is in the midst of attempting to transition out of a very old source code and release management system and they're having a hell of a time doing it. That system happens to be proprietary and the support plus licensing fees are astronomical while the actual tech support is abysmal.
Yes, the risk may seem like an easy tradeoff when you're starting out and you need to ship and you don't have any market share to worry about. It's a whole different story when you're dealing with a very clunky, yet very profitable legacy system that you're not allowed to fix because it's proprietary and yet your business depends on it.
> the vendor discontinuing the product
This is not a zero risk proposition with open source software either; your costs go up significantly if you have to start maintaining a legacy codebase.
> taking the product in a radically different direction
See above; if you just want to run the old version, proprietary software lets you do this as well.
> the company being acquired
This is definitely the biggest risk with Datomic; if Cognitect decides to EOL Datomic, there is a very high chance that they open source it (see the various free software they develop already), but if they are acquired by Oracle that chance becomes zero.
> or simply changing the licensing model (see Adobe Creative Cloud) to dramatically increase the costs of using the software.
Datomic licenses are perpetual I believe, so not a risk with Datomic.
On-prem is perpetual with a year of maintenance (which can be extended), while Datomic Cloud is integrated with AWS and charged and licensed like other AWS services: month-by-month.
How big is the company you're working for? Could they have gotten that big in the first place without using these tools? Companies change as they scale and solutions that worked when they were young will almost always need to change as they grow, so I don't see that as a particularly bad situation. It's a cycle of constant change management and risk mitigation.
Usually the bigger company has the resources to make changes while a startup trying to plan for 100x future size usually ends up limiting its own growth.
I don't like this much (especially the part when an exit-oriented startup will say they want to "change the world" and/or they "care about the users", but it's how things are.
There are many successful stories of proprietary dbs, and also a number of open-source ones in terms of profitability, e.g. Elastic, DataStax, Confluent, Citus, etc...
Personally I wish it were open-source; it looks quite capable, but there is too much risk involved for me to be comfortable using it, not to mention it's quite pricey. $1/day is for dev setups; prod cloud setups start around $4-5k/year last time I read about it. That might be fine for a single deploy backing your service, but not when it's a cost you have to add to every client.
Another thing is that it is very specific to some uses and has some (subjectively) limiting constraints that will often require pairing it with other solutions to be actually usable for some things (e.g. strings are limited to 4096 characters, no bytes type). All in all it makes sense given what you should use it for (and not use it for), but it's not your usual db product, and sometimes I have the feeling that it's advertised as a potential drop-in replacement for <insert favorite relational db> when it's quite often not by itself (arguably, apples vs oranges).
There are also a number of interesting projects that were inspired by it in one way or another, but nothing directly comparable:
* datahike (and the upcoming datopia.io)
That said, Datalog is a pleasure to use and Datomic looks fantastic; it's just not for everybody.
This is just for the AWS-hosted cloud version; you can run the dev version locally for free.
However, it seems like privacy requirements like "forget you ever knew about this user" would throw a wrench in the gears.
Bonus: When your read model differs from the model you use to determine and enforce business-logic constraints, that strategy is known as CQRS (Command Query Responsibility Segregation) - it lets you trade off consistency and speed against each other for the read and write sides of your model independently.
For example, you can have fast, possibly inconsistent reads by reading from a cache that powers your user interfaces, while user commands are processed separately: you read the event stream and build up a consistent model to decide whether to accept the incoming command. This way you get consistent-yet-slower writes, minimising the opportunity for the system state to become inconsistent with your programmed business logic, while also minimising the processing (and thus time) needed to display a user interface, at the cost of consistency. That's not to say your user interface will be wrong - but you should be aware that under certain conditions (e.g. your read-model cache is unable to reflect changes fast enough) the read model, and thus the user interface, will lag behind the state chronicled in the event log.
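The split described above can be sketched in a few lines of Python (illustrative names only, not any framework's API): commands are validated against a consistent model rebuilt from the event log, while the UI reads from a cache that may lag behind.

```python
events = []      # the append-only event log (source of truth, write side)
read_cache = {}  # fast, possibly stale read model for the UI (read side)

def consistent_balance(account):
    """Write side: fold the full event log into a consistent view (slower)."""
    return sum(e["amount"] for e in events if e["account"] == account)

def handle_withdraw(account, amount):
    """Command handler: enforce business rules against the consistent model."""
    if consistent_balance(account) < amount:
        raise ValueError("insufficient funds")
    events.append({"account": account, "amount": -amount})
    # In a real system the read cache is updated asynchronously; here, eagerly.
    read_cache[account] = consistent_balance(account)

events.append({"account": "a1", "amount": 100})
read_cache["a1"] = 100
handle_withdraw("a1", 30)
print(read_cache["a1"])  # → 70
```

The key design point is that `handle_withdraw` never trusts the cache: correctness comes from the event log, speed comes from the cache.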
ScyllaDB is a much stronger alternative. It is supported, though not officially, in Datomic Cloud. It will make Redis/Memcached almost redundant for most applications in the Datomic cache layer, as it has very good read and write latencies.