Irmin: Git-like distributed, branchable storage (openmirage.org)
147 points by amirmc 1282 days ago | 25 comments



I've enjoyed using Irmin so far -- the APIs are well-designed and it's really handy to be able to use 'git log' for debugging. In fact a 'git log --pretty=online' has started to replace a lot of the ad-hoc debug logging which permeates my code.

Given how cheap storage is now, I think there are a lot of applications for which it makes sense to record all previous states by default (e.g. for debugging) and to make mutating history (by squashing/compacting it away) the special case (a bit like running 'logrotate' every week).
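As a rough illustration with plain git (not Irmin itself — the repository path, file name, and messages below are made up), "record everything by default, compact as the special case" looks like this:

```shell
#!/bin/sh
set -e
# Record every state change as its own commit (the "store everything" default).
mkdir -p /tmp/state-demo && cd /tmp/state-demo
git init -q .
git config user.email demo@example.com
git config user.name demo
for i in 1 2 3; do
  echo "state $i" > state.txt
  git add state.txt
  git commit -q -m "state $i"
done
git log --pretty=oneline       # full history, handy for debugging

# Periodic compaction (the "logrotate" special case): squash all history
# into a single commit, keeping only the latest state.
git reset -q --soft "$(git rev-list --max-parents=0 HEAD)"
git commit -q --amend -m "compacted state"
git log --pretty=oneline       # now a single commit
```

The `reset --soft` to the root commit followed by `--amend` rewrites history down to one commit while the working state is untouched.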


For anyone following along at home: 'git log --pretty=online' has a typo. It should be 'git log --pretty=oneline'.


Just out of curiosity – where are you using Irmin? I'm curious to look at some real world code beyond the official examples and the merge-queues project.


I'm refactoring "Xenstore" (part of the control plane for a Xen host) to use it. I want to make the server stateless so I can restart it if it crashes. This mainly involves remembering lots of bits of state: connection information, watch registrations, pending events, etc. I also want to make it easier to debug while I'm at it.

I've got a draft of a blog post here: https://github.com/djs55/mirage-www/blob/a1ab2fec78fa7d96c64...

edited to add:

This work is a natural evolution of the "OXenstored" work presented at ICFP 2009 http://web.cecs.pdx.edu/~apt/icfp09_accepted_papers/83.html


I know it won't help, but I was thinking that one could keep hot backups by doing a fork(), a suspend(), and then diffing the heap in realtime to migrate state between processes. Trying to keep state on the heap instead of the stack is a huge start. But this is stuff you already know.


Using process checkpointing is indeed one good way to implement fault tolerance. The Irmin style is more explicit -- the heap itself is tree structured, and the application uses it to checkpoint itself to disk/memory as a matter of course.

This ensures that only the minimal state required is stored (as opposed to the entire process heap), and also that state can be reconstructed intelligently to preserve sharing and special resources. For instance, file descriptors (if running in Unix mode) could be reified to a filename/offset and reopened, and memory mapped areas (such as the shared ring structures that Xen uses) could be re-granted from the hypervisor.


It would be interesting to have an operating system that used a queue of transactions for IO, especially against an immutable FS, somewhat like Datomic.


We've got an experimental IMAP server using Irmin as a maildir-replacement backend, to give us e-mail provenance and undo for when mass moves go horribly wrong. It's not quite ready for open-sourcing yet, but should be over the summer. I'm switching my personal e-mail over to it soon (gulp).


How does Irmin compare to camlistore?


They are different design philosophies: Irmin is closer to "SQLite-with-Git-instead-of-SQL", since it's just an OCaml library that lets you build graph-based datastructures that are persisted into memory or Git. Higher-level datastructures are built in the usual OCaml style on top of it.

You could build a Camlistore-like service on top of Irmin, but this design/eval hasn't happened yet to my knowledge. It's on the TODO list for Irmin applications -- while I really like Camlistore, I do also want finer control over routing of blobs in a content-addressed network to manage the physical placement of my personal data.

Another interesting aspect of Irmin is that the backend blob stores are also just another functor. In addition to the existing memory/HTTP-REST/Git backends, we've sketched out the design for a convergent-encrypted backend store as well. Other backends such as existing k/v stores like LevelDB or Arakoon shouldn't be difficult, and patches are welcome.


I frequently get thrown into projects with aging codebases, technical debt, and few people around who wrote the original code.

In this scenario using this kind of verbose state logging starts to sound like a huge win, especially if tools exist to visualize how state mutation is different on various legs and use that information to infer dependency guesses between execution branches.


This blog post just introduces the overall architecture and pointers to the source code. There's a getting started guide for those who want to play with it here: https://github.com/mirage/irmin/wiki/Getting-Started

More docs emerging in the coming weeks (particularly as we upstream the Xen toolchain integration, which has been a very helpful deployment to iron out the bugs in the betas). Do feel free to file questions on <https://github.com/mirage/irmin/issues> in the meanwhile.


A minor request:

I assume you're involved in the project. Do you have a way to fix the blog to include (at least parts of) the title in the .. title?

Right now the title is 'Blog', and people like me (I know, it's a bad habit) who open tabs to read them later will have trouble relocating that thing in a far too large list of tabs.


Woops, that was dropped in a refactoring of the website code. Fixed in https://github.com/mirage/mirage-www/pull/192 and should propagate live soon. Thanks for reporting!


This article reads very much like how git-annex works to me, except that while git-annex is built on top of git (and consequently works at a much higher level), Irmin sounds like it does pretty much the same thing but at a much lower level.

In other words, I think I could implement what's being described as a very thin shim on top of git-annex. You'd just need a special git merge driver (the same as Irmin, which requires 3-way merge providers), but with the extra caveat that all three components of the merge have to be present in the local annex before a merge can take place.
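To make the shim idea concrete with plain git (a sketch only — the driver name "setmerge", the file names, and the merge script are all made up for illustration), a custom 3-way merge driver is wired up like this:

```shell
#!/bin/sh
set -e
mkdir -p /tmp/driver-demo && cd /tmp/driver-demo
git init -q .
git config user.email demo@example.com
git config user.name demo

# A toy merge provider: $1=ancestor, $2=current, $3=other.
# The result must be left in $2; here we just take the union of lines.
cat > union-merge.sh <<'EOF'
#!/bin/sh
sort -u "$2" "$3" -o "$2"
EOF
chmod +x union-merge.sh

# Register the driver and attach it to a path via .gitattributes.
git config merge.setmerge.driver './union-merge.sh %O %A %B'
echo 'items merge=setmerge' > .gitattributes

printf 'a\n' > items
git add .gitattributes union-merge.sh items
git commit -q -m "base"

git checkout -q -b other
printf 'a\nb\n' > items
git commit -q -am "add b"
git checkout -q -
printf 'a\nc\n' > items
git commit -q -am "add c"

git merge -q -m "merge" other   # driver runs: items becomes the union
cat items
```

The caveat mentioned above maps onto the `%O` (ancestor) argument: git can only hand the driver all three versions if all three are locally present.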

This is just based on the article, and assumes I've understood correctly.

Is this accurate?


That's a reasonable way to think about it, except that Git is just one backend option in Irmin, whereas git-annex is obviously specialized for git (and does it very well, too). The Irmin in-memory store, for example, never actually serializes into Git (and so is faster for IPC).

There are some interesting tricks going on from the native Irmin representation to the Git conversion (which is slightly less descriptive than Irmin and so virtual nodes are constructed to represent the extra data in Git). Will write that up in more detail in a future post I think, but for now:

https://github.com/mirage/irmin/blob/master/lib/backend/irmi... https://github.com/mirage/irmin/blob/master/lib/backend/irmi...

(This is the Git serializer; you can see in the interface how to spawn both an on-disk and an in-memory Git store.)

and the simpler in-memory backend:

https://github.com/mirage/irmin/blob/master/lib/backend/irmi... https://github.com/mirage/irmin/blob/master/lib/backend/irmi...

where the implementation is mostly a noop, since no mapping between representations needs to take place.

(Edited to note:) The reason for wanting an in-memory backend in the first place is that this is also very useful for IPC coordination. You could build a session layer where all the messages that go back-and-forth between two processes are recorded into an in-memory layer, and then when the whole process is done, the entire graph of communication can be dumped out to a Git tree as the log (for later analytics or debugging). If disk space is an issue, the Git tree can later be rebased to eliminate the intermediate communication commits. This is very, very useful for debugging.
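The "dump the session, then collapse the intermediate commits" step can be sketched with plain git (not the in-memory backend itself — the directory, file, and messages are invented for illustration):

```shell
#!/bin/sh
set -e
# Record an IPC session as one commit per message...
mkdir -p /tmp/session-demo && cd /tmp/session-demo
git init -q .
git config user.email demo@example.com
git config user.name demo
for msg in hello request reply goodbye; do
  echo "$msg" >> transcript
  git add transcript
  git commit -q -m "msg: $msg"
done
git log --pretty=oneline   # four commits, one per message

# ...then collapse the exchange: build one parentless commit holding the
# final tree and point the branch at it, discarding the per-message commits.
squashed=$(git commit-tree -m "session log" "HEAD^{tree}")
git reset -q --hard "$squashed"
git log --pretty=oneline   # a single commit; transcript content unchanged
```

`git commit-tree` creates the squashed commit directly from the final tree, so the full transcript survives even though the intermediate history is dropped.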


irmin creators: awesome! Merkle DAGs ftw. Check out

- https://github.com/jbenet/ipfs - http://static.benet.ai/t/ipfs.pdf

Highly relevant :) (talk coming soon)


Wow, that looks really neat too! ZFS is another one (Merkle tree on block-level checksums, and kind of "distributed" if you squint at zfs send | netcat just right). Not sure if they published about it beyond the code, but some of the ex-Sun blog posts about it are great.

I just read through your Data Package Manager post, and I have to say 1) bravo and 2) I need that - yesterday.

I hope Dropbox doesn't acqui-quash you before these things get out of hand :)


I don't quite understand the use-case, because merging seems problematic if purely programmatic... A merge that seems clean might violate some invariants of the object. And if irmin detects that it can't automatically merge, how is the merge done programmatically?

Perhaps the idea is more like a database, and some administration is manual. So, merges are like mini-migrations. Or, merges could be thrown back to a human user of the app to combine them (perhaps with some domain specific interface) - again, manual.


The idea is that you can tailor the merge functions for the application problem domain, rather than grabbing an off-the-shelf distributed database with subtly different consistency semantics than the one you really need (e.g. Riak, Dynamo and Cassandra are all slightly different in how they reconcile, for very good reasons).

With Irmin, if your application needs a distributed queue to coordinate workflow tasks, then you grab an MQueue datastructure that explains how it deals with multiple readers and writers, and you use that. If you instead need a distributed set with no strong ordering guarantees, then you can implement this as a series of pull/push/merge operations instead.
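The distributed-set case happens to have an off-the-shelf analogue in git itself: the built-in "union" merge driver, which lets concurrent additions reconcile without conflict. A sketch (the file names and items are made up; this is git, not Irmin):

```shell
#!/bin/sh
set -e
# An unordered "set" of lines, merged with git's built-in union driver.
mkdir -p /tmp/set-demo && cd /tmp/set-demo
git init -q .
git config user.email demo@example.com
git config user.name demo
echo 'set.txt merge=union' > .gitattributes
printf 'apple\n' > set.txt
git add .gitattributes set.txt
git commit -q -m "base"

git checkout -q -b replica
printf 'apple\nbanana\n' > set.txt
git commit -q -am "replica adds banana"
git checkout -q -
printf 'apple\ncherry\n' > set.txt
git commit -q -am "local adds cherry"

git merge -q -m "sync" replica
cat set.txt   # both additions survive; line order is unspecified
```

This is exactly the "no strong ordering guarantees" trade: the merge never fails, but you give up any total order on the elements.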

One interesting exercise we're doing at the moment is to build the equivalent of Okasaki's purely functional datastructures in Irmin. Since the module signatures of these datastructures are quite similar to their non-distributed counterparts, it should be possible to swap distributed/local datastructures depending on the deployment scenario of the application (with the local ones being much more efficient due to the lack of remote synchronization).

If a datastructure really can't merge something, then it can raise a conflict that can ripple up to the user interface. The design aims to minimize this by letting the application specify non-failing merge semantics where possible, though.
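The "ripple up" case can be seen in miniature with `git merge-file`, which reports an unreconcilable 3-way merge through its exit status and leaves conflict markers for a human (the file contents here are arbitrary examples):

```shell
#!/bin/sh
set -e
mkdir -p /tmp/conflict-demo && cd /tmp/conflict-demo
# Both sides changed the same line differently: no automatic resolution.
printf 'count = 1\n' > base
printf 'count = 2\n' > ours
printf 'count = 3\n' > theirs

if git merge-file -p ours base theirs > merged; then
  echo "clean merge"
else
  # Non-zero exit status = number of conflicts; markers are in the output.
  echo "conflict: ripple up to the user"
  grep -c '<<<<<<<' merged
fi
```

An Irmin-style datastructure would aim to make the `else` branch rare by choosing merge semantics (counters, sets, queues) where a deterministic resolution exists.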


This is very similar to how Bayou reconciles conflicts:

http://research.microsoft.com/apps/pubs/default.aspx?id=7377...

BTW, would be much appreciated if you could point to related work on the subject (papers, other projects, blogs, etc).


That's correct, except that an Irmin client could choose not to reconcile if it would conflict, and just continue on with two active branches (presumably hoping for a future event that would help reconciliation). Bayou's a big inspiration for this system -- there's a filesystem under development that exposes some POSIX semantics using Irmin as a base. It should be possible to build rather interesting datastructures that go beyond conventional filesystems as well, though.

BTW, would be much appreciated if you could point to related work on the subject (papers, other projects, blogs, etc).

That'll certainly happen when we complete the research papers on the subject. It's a little out of scope for a blog post series that primarily focuses on trying to explain the stuff we're building.


Can Irmin be used as a block-device filesystem? Or would it only support object-based storage?


The next post in this series is also up: "Using Irmin to add fault-tolerance to the Xenstore database", by djs55.

http://openmirage.org/blog/introducing-irmin-in-xenstore


This is something that I never knew I needed.



