Building a CRUD App with Datomic Cloud Ions (jacobobryant.com)
100 points by tosh on Nov 10, 2019 | 26 comments

I'm working on a web app using the mentioned Datascript library. It's an in-memory DB with a Datalog query language, written as an open-source implementation of Datomic's principles. It's usable from Clojure(Script) and JS.

It takes some time to adapt your way of thinking to its data model, and I'm just starting to reap the benefits. Boy does this thing pack some serious power! Prodding the data for complex relations ("find two friends with same birthdays that also have cars of same color") becomes trivial, which opens up a whole set of possible features I hadn't dared to consider before. Combine it with React rendering and you get a very clean and powerful model for building complex web applications. I'm enjoying it a lot.
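For a flavor of what such a query looks like in Datascript's Datalog, here's a sketch; the attribute names (:person/birthday, :person/car, :car/color) are my own assumptions, not an actual schema from the article:

```clojure
;; Sketch only -- attribute names are hypothetical.
;; "Find two people with the same birthday whose cars share a color."
'[:find ?p1 ?p2
  :where
  [?p1 :person/birthday ?bday]
  [?p2 :person/birthday ?bday]
  [?p1 :person/car ?car1]
  [?p2 :person/car ?car2]
  [?car1 :car/color ?color]
  [?car2 :car/color ?color]
  [(not= ?p1 ?p2)]]
```

Each :where clause unifies variables against the fact store, so the joins are implicit.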

If anyone is interested in Datalog query language, here's a good resource: http://www.learndatalogtoday.org/

Performance is brutal though. For a moderately sized DB, client-side Datascript can take 100 ms per interesting query unless you drop down to raw index access. Might as well just do a server round trip.

I had a beautiful dream of just streaming Datomic out to Datascript, but eventually gave up and went back to client-side maps using re-frame.

If Cognitect released a performant JavaScript peer, that would be a game changer for web dev. Just one DB and effortless event streaming to and fro.

Raw index access from the client introduces data security issues, amongst other things... Perhaps someday trusted computations can be done at the edge. Otherwise we're talking about browser DRM.

Why would index access introduce security issues?

pretty sure he meant datalog indexes, not datomic


To me, Datalog looks like the ideas of Redux taken to their highest fulfillment. You have a single source of truth (a list of facts), and you have queries to ask questions about that truth, using the powerful and succinct language that Datalog is.

I hope Datalog gains more interest in the frontend world, because it's about as declarative as you can get – something that should go hand in hand with declarative frontend libraries like React. I hope someone moves the frontend world further with an implementation in JavaScript. Datascript is a good start, but its size (80 kB gzipped) and its JavaScript API can be greatly improved upon.

> "find two friends with same birthdays that also have cars of same color"

Wasn't that always a simple thing to do, e.g. in SQL?

And how about monitoring changes in the database based on a query? Is that possible and efficient?

It's possible in SQL; in Datascript it's effortless. There's a big difference in mental overhead and thus in how often you'd want to reach into the DB. There are other benefits, too.

You don't do any string munging; your query is itself composed of data structures. You can build it easily with code, so your queries can become as elaborate as you can generate them.
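Since a query is a plain Clojure vector, composing one with code is ordinary data manipulation. A minimal sketch (the attribute names are made up):

```clojure
;; Append an extra :where clause only when a filter is supplied.
(defn people-query [color]
  (cond-> '[:find ?name
            :where [?p :person/name ?name]]
    color (conj ['?p :person/car-color color])))

(people-query "red")
;; => [:find ?name :where [?p :person/name ?name] [?p :person/car-color "red"]]
```

No string concatenation, no injection worries: the query grows by conj'ing data onto data.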

Unlike SQL tables, which you have to design up front, the data model is a collection of facts, with each fact consisting of {entity_id, attribute, value, transaction_info}. By adding facts to the database you build up knowledge about entities and the relations between them in an organic way. You can still use the table model, or you can use a graph model.
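As a sketch of what adding facts looks like in Datascript (schema and attribute names are assumed for illustration):

```clojure
(require '[datascript.core :as d])

;; :person/car is declared as a reference to another entity.
(def conn (d/create-conn {:person/car {:db/valueType :db.type/ref}}))

;; Negative ids are tempids; the car entity and the person entity
;; are created and linked in the same transaction.
(d/transact! conn
  [{:db/id -1 :car/color "red"}
   {:db/id -2 :person/name "Ana" :person/birthday "04-12" :person/car -1}])
```

Nothing forces you to declare :person/name or :person/birthday up front; new attributes can appear whenever you have new facts.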

The whole DB is just a value in memory. You can transit it to the server, store historical copies in memory (for undo/redo), do diffs, whatever you need.

As for monitoring changes, I'm using the Posh lib (also mentioned in the article). It monitors the DB and re-renders a React component if there are transactions relevant to the component's queries. It seems very efficient so far, but I still have to test it with a large number of components and transactions.

> As for monitoring changes, I'm using the Posh lib (also mentioned in the article). It monitors the DB and re-renders a React component if there are transactions relevant to the component's queries. It seems very efficient so far, but I still have to test it with a large number of components and transactions.

Interesting. Do you know if, upon a change, it reruns the queries from scratch, or does it update the results based on the actual changes?

It will re-run the queries whenever new transactions contain new data for the queried entities. It does so by pattern-matching on the transaction data.
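A rough sketch of the Posh wiring (the component and attribute names are illustrative, based on Posh's Reagent API):

```clojure
(require '[posh.reagent :as p])

;; Deref'ing p/q inside a Reagent component registers the query;
;; Posh re-runs it (and re-renders the component) only when a
;; transaction's datoms pattern-match the query's :where clauses.
(defn person-name [conn eid]
  (let [n @(p/q '[:find ?n .
                  :in $ ?e
                  :where [?e :person/name ?n]]
                conn eid)]
    [:div n]))
```

So a transaction touching :person/age for the same entity wouldn't re-render this component, but one touching :person/name would.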

For truly reactive Datalog subscriptions, let's see what happens with the clj-3df project. Unlike Datascript, it requires you to use server infrastructure right from the start.

You can do it in SQL, but it's not simple.

Something like:

    WITH stuff AS (
      SELECT bday, carcolor,
             ROW_NUMBER() OVER (
               PARTITION BY bday, carcolor
               ORDER BY bday, carcolor
             ) rownum
      FROM friends
    )
    SELECT * FROM stuff WHERE rownum > 1;

You could write this as a simple join too, right? At least assuming the friends table has a primary key.

    SELECT f1.id, f2.id
    FROM friends f1, friends f2
    WHERE f1.bday = f2.bday
      AND f1.carcolor = f2.carcolor
      AND f1.id > f2.id;

Both of you are missing the point. Once you normalize, car color would be an attribute on a cars table, and to map those you'd need many-to-many lookup tables. It can be done, obviously, but it's more effort to build, maintain, and update. Depending on your requirements, graph-DB-like features can provide an easier abstraction.

This feels like a straw man. Don't normalize if it causes pain.

Similarly, don't expect that a single data model will suffice for all needs in an application. Expect to have to maintain anything that is useful.

Obviously, if you put all the data into one table, it's not a complex query. (But then you've also just reinvented document databases, with their own set of problems.)

Even having them in several tables is not rocket science; the OP was just trying to give an easy example of the types of queries that are easier with Datomic than with SQL.

My point is that data has to be maintained in whatever forms you are keeping it in. Often with different requirements on why it is there.

Yes, this is a large part of why we use databases. They will do most of this heavy lifting. However, I don't think it changes the game. If you want some access to be easy or quick, you have to maintain the data in that format.

No, you're completely wrong. Graph databases, document databases, K/V stores and so on have completely different models, guarantees and performance characteristics than relational databases.

That is restating my point. Different data stores have different characteristics. Identify which ones you care about and make the tradeoffs required. They all require effort. And it is not uncommon to want your data available in different ways at different times.

I see two problems with Datomic: the pricing seems a bit high for home use, and the professional version requires you to submit your data to the cloud, which isn't always possible, as clients may not allow sharing data with third parties.

The ideas seem interesting though. Is there an open-source equivalent of Datomic yet?

There's also Datahike: https://github.com/replikativ/datahike

Seeing as Amazon is now embrace-extend-extinguishing profitable open-source service providers that are the primary contributors to their projects, I think it's a good thing Datomic isn't open source, and that it's worth paying $30/mo for.

This looks promising https://juxt.pro/crux/index.html

Datalog, high performance, flexible deployment options, open source. It's especially interesting for us because you can actually remove data for GDPR/compliance reasons; Datomic has a lot of trouble with that, in our experience.

It does look cool. Is anyone out there using it in production?

I'm guessing the creators solved a real business need and decided to go open source. Really nice folks btw. Props to juxt.

There's an on-premise version of Datomic as well.
