Show HN: PumpkinDB, an event sourcing database engine (pumpkindb.org)
190 points by yrashk 238 days ago | hide | past | web | 48 comments | favorite

It's a lower-level "database engine" that allows you to build different types of higher level databases based on a very simple foundation:

1) BTree-based K/V engine (which gives you the ability to iterate over lexicographically sorted keys)
2) Strong immutability guarantees (data cannot be overwritten)
3) ACID transactions
4) A server-side executable imperative language that gives you control over querying costs

In a sense, it's as much of a database constructor as different MUMPS systems (GT.M, for example: https://en.wikipedia.org/wiki/GT.M)

PumpkinDB also aims to provide a good set of standard primitives that help build more sophisticated databases, ranging from hashing to JSON support, with more to come.

I am a big fan of this approach: Low-level, high-performance, immutable-by-default database engine as a building block, with bindings to multiple languages for easy application development.

TrailDB (http://traildb.io), which has many elements of event sourcing in it, follows this philosophy and it has proven to be pretty successful for its intended use cases.

I was delighted to notice that PumpkinDB has an imperative query language inspired by Forth. We recently open-sourced a similarly imperative query language inspired by AWK, http://github.com/traildb/reel :)
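To make the "imperative, Forth-inspired" idea concrete, here is a toy stack machine in that spirit. This is purely an illustrative sketch, not PumpkinScript or Reel; the word names (`DUP`, `ADD`, etc.) are just the classic Forth vocabulary.

```python
# A minimal Forth-style stack machine: tokens are either literals
# (pushed onto the stack) or words (which operate on the stack).
def run(program, stack=None):
    stack = stack if stack is not None else []
    words = {
        "DUP":  lambda s: s.append(s[-1]),          # duplicate top of stack
        "DROP": lambda s: s.pop(),                  # discard top of stack
        "ADD":  lambda s: s.append(s.pop() + s.pop()),
        "MUL":  lambda s: s.append(s.pop() * s.pop()),
    }
    for token in program.split():
        if token in words:
            words[token](stack)
        else:
            stack.append(int(token))                # literals are pushed as-is
    return stack

# "2 3 ADD DUP MUL" computes (2 + 3) squared
print(run("2 3 ADD DUP MUL"))  # [25]
```

The appeal for a query language is that execution cost is visible: the program is a linear sequence of cheap stack operations, with no hidden query planner.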

I will definitely follow PumpkinDB with great interest!

Thanks. I read the entire "Documentation" page and still didn't feel confident that I understood what this really was. "Event sourcing" to me implies that it's generating events.

Event Sourcing is a technical term: it's the idea that instead of mutating your database, you keep a log and insert a new entry saying that you changed this value to that, and so on. This helps you do cool stuff like temporal queries (i.e. ask what the entire database looked like a month ago) or look at historical values and changes of things. This matters a lot in some fields. Of course, then there's the matter of how to do this efficiently. You can build event sourcing on top of a regular RDBMS, but if there is database-level support (as in PumpkinDB), then maybe some things are more efficient. Read more:
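The temporal-query idea above can be sketched in a few lines. The log format and function names here are hypothetical, not PumpkinDB's API; the point is only that replaying an append-only log up to a timestamp answers "what did the data look like then?":

```python
# Hypothetical append-only event log: each entry records one change at a
# timestamp; earlier entries are never mutated.
events = [
    (1, "price", 50),    # (timestamp, key, new_value)
    (2, "price", 55),
    (5, "volume", 100),
    (9, "price", 60),
]

def state_as_of(log, ts):
    """Replay the log up to `ts` to answer a temporal query."""
    state = {}
    for t, key, value in log:
        if t > ts:
            break        # assumes the log is ordered by timestamp
        state[key] = value
    return state

print(state_as_of(events, 5))  # {'price': 55, 'volume': 100}
print(state_as_of(events, 9))  # {'price': 60, 'volume': 100}
```

A real engine would index the log rather than replay it from the start, but the semantics are the same.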


Virtually every db works like that under the hood. They just expose it differently. Kafka, for example, is nothing but a log.

They do but it's usually not exposed in a useful way. Postgres once had this feature... https://www.postgresql.org/docs/6.3/static/c0503.htm

We actually expose something to support event sourcing: https://www.postgresql.org/docs/current/static/logicaldecodi... - although that's very different from the old time travel feature.

Edit: missing word

This sounds like my (probably wrong/incomplete) understanding of Datomic?

I believe you could use Datomic for event sourcing, though it wasn't explicitly designed for the job and may not be the best choice depending on the requirements of your system as a whole. Bobby Calderwood at Capital One has a talk on a system which includes both event sourcing (using Kafka, IIRC) and Datomic:


Nice, thank you.

We are definitely looking to improve the documentation. This is 0.1, after all.

This project was started as a backend for a lazy event sourcing approach (https://blog.eventsourcing.com/lazy-event-sourcing-ed7e59007... , https://m.youtube.com/watch?v=aqv8d1pjmU8) and beyond.

The idea behind it is that it provides primitives for building systems that are designed around immutable events, journals, indexing, etc. Hence the current positioning. We thought it would be useful to target fairly narrowly early on.

Either way, we will definitely need to expand on that in our materials.

Amazing-looking project! I've been daydreaming about using LMDB's memory mapping to provide a flexible low-level db primitive for quite a while, figuring a combo of Erlang-like actors and flexible data scripting would be killer. Needless to say, I love the design.

Any thoughts on whether this could be used to implement a Q/kdb+ like computation system? Seems like PumpkinScript could be extended with a library of computational array primitives. (https://news.ycombinator.com/item?id=13481824)

That being said, it'd be great to be able to read how the "actor" system is implemented. The documentation alludes to actors and pub/sub channels. Not sure I can help much at this time, but I'll keep an eye on it!

Well, it's a fancy word. That's basically what I took away from it. Also, I agree: I read the whole doc and it still took way too long to understand what it does.

Clearly, they are smart, but bad copywriters :)

Please write better docs.

Doing our best, one step at a time! It's clear to me that we haven't spent enough time on documentation yet. The whole project is only 3 weeks old, and our initial focus was to get a working version out with some level of documentation (a goal that we somewhat attained, I believe).

Fair enough. I'm gonna be done with this shitty program and have a piece of paper with some fancy seal on it in a couple of months. I'd be happy to contribute.

Bookmarked until then.

Also, if this topic is interesting to anybody who's good at writing, we are very open to contributions of all kinds!

Is the idea that apps talk to PumpkinDB in order to achieve this layering, or do you see it as a library?

Not often that you see MUMPS referenced on HN, by the way. It's one of those oddly productive niche languages that are (as far as I know) alive and well, but rarely encountered except if you work in that niche, e.g. finance or healthcare.

Some of the layering in terms of basic building blocks and higher level languages will become a standard part of PumpkinDB. For the rest, yes, it's expected that end applications or frameworks will compile higher level constructs to PumpkinScript.
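The "compile higher-level constructs to PumpkinScript" step might look something like the sketch below. The expression format and output vocabulary are made up for illustration; the point is that an application can lower a nested query tree into a flat postfix program for a stack-based server language:

```python
# Compile a hypothetical {op: [args]} expression tree into postfix stack
# code (arguments first, then the operator word), Forth-style.
def compile_expr(expr):
    if isinstance(expr, int):
        return [str(expr)]          # literals compile to themselves
    (op, args), = expr.items()      # exactly one operator per node
    code = []
    for arg in args:
        code += compile_expr(arg)   # emit operands left to right
    code.append(op.upper())         # then the operator word
    return code

program = compile_expr({"mul": [{"add": [2, 3]}, 4]})
print(" ".join(program))  # 2 3 ADD 4 MUL
```

The server then only ever sees straight-line stack programs, which keeps querying costs explicit.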

I've been following MUMPS and using it for some ideas for some time, and that's how some of its ideas became inspirations for PumpkinDB. As quirky as M is, it was indeed oddly productive, and I wanted to piggyback on that.

I hope that PumpkinDB is an alliterative reference to MUMPS :)


It seems to serve approximately the same purpose as Berkeley DB.

Well, that would have been true if we had no PumpkinScript. Then it would indeed have been just a tiny wrapper around LMDB, which we use as a storage backend.

So basically something similar to FoundationDB?

To a certain degree, yes

From what I see in the published documentation, it's way too early to make it a "Show HN". If I didn't know what event sourcing is, I couldn't make heads or tails of your daemon, and even then, I don't understand how a user is supposed to interact with it in a production-like deployment.

It's never too early to show HN.

Especially for open source projects: how else do you suggest onboarding people to get this thing to progress?

The voting mechanism helps us determine whether it is interesting anyway.

What to Submit

Show HN is for something you've made that other people can play with. HN users can try it out, give you feedback, and ask questions in the thread.

I like how every commit message is formatted as a problem and a solution: https://github.com/PumpkinDB/PumpkinDB/commits/master

Indeed, it's a neat "hack" to force yourself to write better commit messages. I think this style of commit messages originated from the zeromq community: https://github.com/zeromq/libzmq/commits/master https://github.com/zeromq/zproto/commits/master
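For reference, the problem/solution style popularized by the ZeroMQ community looks roughly like this (an illustrative example, not an actual PumpkinDB commit):

```
Problem: commit messages rarely explain why a change was made

Solution: state the problem the commit solves, then the fix, so
every entry in the log reads as a self-contained rationale
```

Because each message names a concrete problem, the log doubles as a change rationale you can skim.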

Yes, I picked this style up from Pieter Hintjens.

This is going to make for some awesome release notes and new feature release marketing. Very well done @yrashk.

Very interesting! I think one thing that this would benefit from is a lot of usage examples, especially around pumpkinscript. I was reading recently about MUMPS and Caché and it's interesting to see a modern implementation of similar ideas.

One question - what is the storage layout like? Do you have plans to support efficient range queries at all?

We definitely need better documentation! That's for sure. We only wrote a basic version to get the essentials out.

As for the layout -- everything is built around a btree K/V store, and the original idea behind PumpkinDB is to provide primitives that are useful in building databases, indices in particular. The expectation is that, over time, we will grow our library to have more sophisticated primitives, including ready-made indices of different kinds.

Does this help?
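The "indices over sorted keys" idea is worth unpacking: with lexicographically ordered keys, a secondary index is just a set of keys sharing a prefix, and a range query is a seek plus a sequential scan. The sketch below stands in a sorted Python list for the BTree; the key-encoding scheme is illustrative, not PumpkinDB's actual layout:

```python
from bisect import bisect_left

# Primary records and secondary-index entries live in one sorted keyspace,
# distinguished by prefix; \x00 separates the indexed value from the payload.
store = sorted([
    b"user:0001\x00alice",
    b"user:0002\x00bob",
    b"idx:name:alice\x00user:0001",
    b"idx:name:bob\x00user:0002",
])

def scan_prefix(keys, prefix):
    """Range-scan all keys starting with `prefix`: seek, then walk forward."""
    start = bisect_left(keys, prefix)   # binary search = BTree seek
    out = []
    for k in keys[start:]:
        if not k.startswith(prefix):
            break                       # left the range; stop scanning
        out.append(k)
    return out

print(scan_prefix(store, b"idx:name:"))
```

Efficient range queries then come for free from the ordering; the engine only needs seek and next-key.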

Written in Rust :) Inspiring, I am learning Rust now to try implementing HDFS-like storage.

What's an actual use-case for this? I am reading the documentation but still don't see why I should use it and what the actual advantages compared to current solutions are.

It was built as a kind of database constructor for event sourced / journalled systems. Its design inspiration largely stems from MUMPS, which provided a great ("oddly productive") combination of a database and a programming language.

Being a constructor, it's also a great tool for building applications with better control over querying mechanics (since everything is actually described in PumpkinScript).

It sounds sort of like "overlayfs" for data. Which might allow a non-destructive, no-copy way to...

- Do what-if analysis. Change the price of oil at some point in history and see how your financials would have played out from that point forward.

- Fork your database and have two live copies acting on different data or rules for a live comparison...without all the plumbing overhead. Perhaps having one set work with a fiscal year that is calendar year, and another with a different fiscal boundary.
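The fork idea above can be sketched with a layered lookup: a fork is a fresh writable layer over a shared immutable base, so nothing is copied and the base is never touched. This is a toy illustration of the concept (here using Python's `ChainMap`), not how any particular engine implements it:

```python
from collections import ChainMap

# Shared immutable base state; each fork adds its own overlay on top.
base = {"oil_price": 50, "fiscal_year_start": "Jan"}

fork_a = ChainMap({}, base)   # what-if branch: change the price of oil
fork_b = ChainMap({}, base)   # branch with a different fiscal boundary

fork_a["oil_price"] = 120           # writes land in the overlay only
fork_b["fiscal_year_start"] = "Apr"

print(fork_a["oil_price"], base["oil_price"])    # 120 50 -- base untouched
print(fork_b["fiscal_year_start"])               # Apr
print(fork_a["fiscal_year_start"])               # Jan -- falls through to base
```

Reads fall through to the base when the overlay has no entry, which is exactly the overlayfs behavior the analogy suggests.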

I don't get this whole "never overwrite data" thing, including Datomic, for example.

Isn't the disk space needed for these schemes enormous?

I have reservations, too: it's important to be able to remove data even though disk is cheap.

* Removing very old data is a reasonable hedge for user privacy.

* Sometimes confidential data makes its way into the data set and needs to be removed.

* Old event data is often not useful but can impact performance or cost just the same. For example, one needs to allocate an EBS volume on AWS with a certain level of performance; but the cost of that is `IOPs * GBs`, not `IOPs * useful GBs`.

* Replicating and backing up the dataset takes longer and longer as the application grows.

I agree, this is an important aspect.

Our plan in PumpkinDB is to add key value association retirement, subject to defined retirement policies.

I have the same question. Presumably, you could store fairly efficiently using a git-like diff-based storage scheme or something. But I would be interested in hearing analysis of this.

Exactly, events only represent changes in state. Furthermore, you aren't storing everything in the events; the business logic on services can add and build up models to much more than what the events hold (for example, an event could hold an id that the service uses to populate a bunch of properties into the actual model snapshot, something as simple as an order number would map to all kinds of order details).
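The "events only represent changes" point can be sketched as delta events folded into a snapshot on demand. The event shape and field names below are hypothetical, purely to show the mechanics:

```python
from functools import reduce

# Each event after the first carries only the fields that changed
# (a git-like delta), not the whole record.
events = [
    {"order_id": 17, "status": "placed", "qty": 2},
    {"status": "paid"},
    {"qty": 3},
    {"status": "shipped"},
]

def fold(snapshot, delta):
    """Apply one delta, producing a new snapshot without mutating the old."""
    return {**snapshot, **delta}

snapshot = reduce(fold, events, {})
print(snapshot)  # {'order_id': 17, 'status': 'shipped', 'qty': 3}
```

The read side can then enrich this further (e.g. resolving `order_id` to full order details), so the events themselves stay small.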

Holy cow it's in Rust.

I'm doing a thesis on classification trees, working in R and hoping to do the backend of the R package in Rust (it's currently looking like it'll be C++). I'll look through the source code of this to see its tree implementation. It probably uses the Rust standard library's BTree implementation?

The documentation says they use LMDB for the backend. Looking over the documentation, it looks like you could readily use PumpkinDB directly to implement the database/data-caching scheme and interface with it from R. Unless your thesis is on the implementation of B-trees, definitely try bootstrapping on something like this first. BTW, LMDB provides memory mapping, which can be very fast for computations.

Where would I use this in place of, say, Kafka and Samza?

Looks very interesting. I'd love to use it once it supports Akka persistence. Is this on the roadmap?

What an interesting commit message format.
