Hi, Biff author here. There's background info + comparison to Firebase in the link, but to briefly summarize:
I've been doing Clojure web dev for a while but hadn't found a setup that I
really liked yet. My latest approach was using Firebase + ClojureScript (for my
startup[1]) which was really nice but had some significant limitations for my
use case (mostly related to the ephemeral Node backend). At one point I
realized that I could provide Firebase's query subscription feature using
Crux[2] instead without too much effort. So I started working on Biff, and I've
been using it in production for 2.5 months now. It's been a joy to use.
Also, I do think that web dev in Clojure can be a little hard to get into, due
to the (appropriate) community preference for libraries over frameworks. I
think there is a need for more frameworks that curate a set of libraries for
you. I'd love it if Biff eventually became a sort of "Rails for Clojure" that
would let anyone get up to speed quickly but would allow you to switch out the
components easily as you go on.
(About the name: I named it after Biff Tannen from Back to the Future.)
I could not find this info in the docs or the FAQ:
Is Biff suitable for a JavaScript front end? Is there an API for a plain JS frontend to use Biff? (I like Clojure backends with boring JS on the front.)
That's my only question about it. Thank you for sharing it. I found the project very enjoyable to browse and really appreciate that you shared several of your key design decisions.
Thanks! I haven't used a plain JS frontend with it myself, but I think that would work well (the majority of Biff is backend code anyway). I haven't exposed a JS API, but doing so should be pretty easy. All of Biff's frontend code is in the biff.client namespace, and the only function you need is biff.client/init-sub [1], which takes:
- an atom, the contents of which describe what data you're subscribing to
- another atom, which is populated with the results of your subscriptions
- a function for handling server-sent websocket events (if your app uses any)
and it returns a function for sending websocket events to the server (which is used for sending transactions + any custom events you define).
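Assuming the three arguments are positional as described above (the actual signature may differ; this is a sketch, not Biff's documented API, and the subscription map shape is borrowed from the example elsewhere in this thread), wiring this up from ClojureScript might look roughly like:

```clojure
(ns example.app
  (:require [biff.client :as biff-client]))

(defonce subscriptions
  ;; Describes what data we want; Biff keeps the results in sync.
  (atom #{{:table :users :id #uuid "some-user-uuid"}}))

(defonce sub-results
  ;; Populated by Biff with the results of the subscriptions above.
  (atom {}))

(defn handle-event [event]
  ;; Called for any custom server-sent websocket events.
  (println "server event:" event))

;; init-sub returns a function for sending websocket events
;; (e.g. transactions) back to the server.
(defonce send-event!
  (biff-client/init-sub subscriptions sub-results handle-event))
```

A plain-JS frontend would presumably call the same function through a thin exported wrapper.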
I think the biggest weak points of Firebase are (a) a weak query engine (as soon as you reach non-trivial complexity you end up having to denormalize data) and (b) a weak rule engine (relying on a bunch of boolean logic won't scale as your app grows; you need more abstraction power).
Looks like your library addresses both -- Crux has Datalog queryability, and the rule DSL you wrote seems pretty powerful.
Thanks! I'm also looking forward to finishing the Materialize[1] integration I've been working on (currently blocked while I wait for them to release some features). Although Crux has datalog, you can't subscribe to datalog queries. But with Materialize, you can basically subscribe to arbitrary SQL queries. I have a branch that lets you define the SQL queries on the backend and then subscribe to the results via Biff's existing subscription system.
1. Would love to learn a bit more about how the rule engine works.
From what I understand, Crux in essence only gives you key->blob semantics. How do you know that one `doc` is a `user` and another doc is a `game`, etc.?
2. re: Datalog subscriptions -- do you know if this is something inherently very difficult, or is it just that Crux hasn't implemented it yet?
1) Crux is schemaless at its core, but you can build your own ~class hierarchy model using regular attributes and Datalog rules to differentiate between types of entities.
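A minimal sketch of that approach, using an ordinary attribute to mark entity types plus a Datalog rule to abstract over them (the attribute names here are illustrative, not a Crux convention):

```clojure
;; Documents carry a plain attribute marking their type, e.g.:
;; {:crux.db/id #uuid "..." :entity/type :user :username "foo"}
;; {:crux.db/id #uuid "..." :entity/type :game :title "chess"}

;; A Datalog rule can then stand in for a small class hierarchy,
;; e.g. treating both :user and :admin docs as "accounts":
(crux.api/q (crux.api/db node)
  '{:find  [e]
    :where [(account? e)]
    :rules [[(account? e) [e :entity/type :user]]
            [(account? e) [e :entity/type :admin]]]})
```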
2) Subscribing to arbitrary Datalog without any form of polling requires a fundamentally different form of query algorithm (i.e. incremental view maintenance, as seen in Materialize), however there's a whole spectrum of possibilities available if you are willing to accept polling at some level. Hasura has quite an inspiring take on subscriptions that we may yet try to recreate on top of Crux: https://hasura.io/blog/1-million-active-graphql-subscription...
1. You have to register specs for each kind of document. For example, from the frontend you can subscribe to a user document of a given ID with `{:table :users :id #uuid "some-uuid"}`. The backend will: (1) look up the document with the given ID, (2) verify that you've registered a spec for the `:users` "table" and verify that the document meets that spec, (3) run the authorization rule to make sure the client has access to that user document.
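A hedged sketch of what a spec for the `:users` "table" might look like with clojure.spec (the attribute names are assumptions, and Biff's actual spec-registration mechanism may differ):

```clojure
(require '[clojure.spec.alpha :as s])

;; Illustrative document spec for the :users "table".
(s/def :user/id uuid?)
(s/def :user/username string?)
(s/def ::user (s/keys :req [:user/id :user/username]))

;; The backend can verify a fetched document against the spec
;; before running authorization rules and returning it:
(s/valid? ::user {:user/id (java.util.UUID/randomUUID)
                  :user/username "foo"})
;; => true
```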
2. It is inherently difficult. The closest existing thing for Datalog is, I believe, clj-3df[1]. Hence I was very excited when Materialize launched publicly a few months ago. A version for Datalog would be cool too, but SQL is good enough, I think.
Crux was a pretty straightforward choice. I didn't really consider any non-Datalog DBs. I've used Datomic quite a bit, but preferred Crux since it's open source, has filesystem persistence, and doesn't require a separate process for the transactor (nothing wrong with Datomic's choices; the tradeoffs just make it unsuitable for Biff). I went with Crux over Datahike since it's more mature and has Postgres persistence (and Kafka persistence, not that I need that yet). I haven't actually used Crux's bitemporality features yet, but maybe they'll come in handy.
That makes sense to me. I've been trying to decide between Crux and Datahike for a side project. I was leaning towards Datahike because I didn't like the idea of having to do a full document update like you do in Crux, but I need to dig into it a bit more.
Are you concerned about that for performance reasons or for ease-of-use reasons? To address the latter, I made a separate transaction format for Biff that allows merging (it's similar to Firebase's transaction format). So e.g.
[[:crux.tx/put (merge (crux/entity db #uuid "some-user-uuid")
                      {:username "foo"})]]
There is a race condition though: if another tx updated that document after the crux/entity call but before `:username "foo"` was written, that update would get clobbered. I'm planning to add `:crux.tx/match` operations automatically to prevent that from happening. There was talk on #crux in the Clojurians Slack today of using the newly-added transaction function feature to do merges. I haven't looked into that myself.
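A sketch of the match-based guard: read the current document, then submit the put together with a `:crux.tx/match` op so the whole transaction aborts if the document changed in the meantime (retry-on-abort logic omitted for brevity):

```clojure
;; Read-then-merge, guarded against concurrent writers.
(let [id  #uuid "some-user-uuid"
      old (crux/entity (crux/db node) id)]
  (crux/submit-tx node
    [;; Abort the transaction unless the document still equals
     ;; exactly what we read above:
     [:crux.tx/match id old]
     [:crux.tx/put (merge old {:username "foo"})]]))
```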
Just for ease-of-use. In Datomic/Datahike you can update a single datom/fact. I'll have to look into making some similar sort of helpers for what you've done.
Yeah, I think Prismatic's end goal was pretty much the exact same as mine. (I never used Prismatic while it was running, but a while ago someone else mentioned that Findka seemed similar).
[1] https://findka.com
[2] https://opencrux.com