Eventually there was some movement and the most active group of fork maintainers were given commit access, but by that time any enthusiasm over Doozer was long dead and gone.
Open source, consistent, distributed data store built by the Heroku guys, released with a lot of hype, and then radio silence.
1. The protocol is difficult to implement. In theory you could just use Jute to codegen this part, but that assumes Jute supports the language you need. Doozer improves on this with a simple text-based protocol, and etcd goes a little further with an HTTP API.
2. The primitives Zookeeper exposes are very low-level. Implementing higher-level abstractions such as locking or leader election on top of znodes is easy to get wrong. Just this year Curator has fixed critical bugs in both of those algorithms: https://github.com/Netflix/curator/blob/master/CHANGES.txt
My feeling is that etcd and doozer fare a little better on #2 just because their primitives are slightly easier to understand, but fundamentally the problem still exists. I'm looking forward to seeing more innovation in this area.
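To make #2 concrete, here's a toy sketch of the naive lock recipe people tend to write first. It's not any real client API, just an in-memory stand-in for a compare-and-swap primitive; everything a real recipe needs (sessions/TTLs, watches, retries, fencing) is exactly what's missing here, and that's where the Curator-class bugs tend to live.

    // Naive lock recipe over a compare-and-swap style store (toy in-memory stand-in).
    package main

    import (
        "errors"
        "fmt"
        "sync"
    )

    // casStore stands in for the coordination service's primitive:
    // "set key to val only if it currently holds old" ("" meaning absent).
    type casStore struct {
        mu   sync.Mutex
        data map[string]string
    }

    func (s *casStore) compareAndSwap(key, old, val string) error {
        s.mu.Lock()
        defer s.mu.Unlock()
        if s.data[key] != old {
            return errors.New("compare failed")
        }
        if val == "" {
            delete(s.data, key)
        } else {
            s.data[key] = val
        }
        return nil
    }

    // acquire claims the lock by writing our id into an empty key.
    func acquire(s *casStore, key, owner string) error {
        return s.compareAndSwap(key, "", owner)
    }

    // release compares against our own id first; the classic bug is an
    // unconditional delete, which can remove a lock that a *different*
    // client now holds (e.g. after our session expired and someone else
    // acquired it).
    func release(s *casStore, key, owner string) error {
        return s.compareAndSwap(key, owner, "")
    }

    func main() {
        s := &casStore{data: map[string]string{}}
        fmt.Println("node-a acquire:", acquire(s, "/locks/job", "node-a")) // <nil>
        fmt.Println("node-b acquire:", acquire(s, "/locks/job", "node-b")) // compare failed
        fmt.Println("node-a release:", release(s, "/locks/job", "node-a")) // <nil>
        fmt.Println("node-b acquire:", acquire(s, "/locks/job", "node-b")) // <nil>
    }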
beast service --lock name=mailserver.foundationdb.com --run mailserver.sh &
beast service name=webserver.foundationdb.com --run webserver.sh &
You can use etcd from the command line with etcdctl or grab environment variables for your process using etcdenv.
I would love to see someone back a DNS server with etcd too.
Beastmaster is just a client that sits on top of FoundationDB's transactions and fault tolerance, and adds coordination-specific things like a global clock, fair locks with timeouts, and a data model for service discovery. And then provides useful command line tools and a simple DNS and REST server. We should have it on github soon.
My most recent blog post is on getting the thing to bootstrap: http://www.bringhurst.org/2013/09/09/consensus-quorum-bootst...
for example, zookeeper/zab allows reading directly from followers with a guarantee to get at least a past value that won't be rolled back. this was one reason zookeeper didn't use paxos:
in my understanding, raft doesn't allow reading directly from followers, because the follower logs may get repaired/rolled-back when a new leader is elected. (though i'm sure an implementation can tweak the protocol to provide this support.)
that said, raft has a lot of interesting applications, and, in my opinion, is definitely more understandable than the many versions of paxos. (implementing zab yourself, at this point, would be a futile exercise.)
i found the videos from the raft user study to be very well done (and easier to understand than even their paper):
...however, i think they did paxos a disservice by not just focusing on multi-paxos (which is probably the most common implementation). but, it's certainly fair to say that info about paxos is spread out far and wide...with perhaps too many knobs to turn and implementation-related details to fill in yourself.
as a side note: i've just started implementing raft in a set of libraries (multiple languages) that will be open source - along with other protocols.
Raft only updates the state of the system once log entries are committed to a quorum, so the local state will never be rolled back. Log entries can be thrown out, but since they haven't been applied to the local state it doesn't matter.
You can read from the leader if you need to ensure linearizability but that will kill your read scaling. Another approach is to read locally and check that the local raft node isn't in a "candidate" state (which would mean that it hasn't received a heartbeat from the master within the last 150ms). That approach works for a lot of cases.
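Roughly something like this (hypothetical types, not go-raft's actual API): check the node's state and the age of the last leader contact before serving a read from local state.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    type raftState int

    const (
        follower raftState = iota
        candidate
        leader
    )

    // raftNode is a stand-in exposing just the two facts the check needs.
    type raftNode struct {
        state             raftState
        lastLeaderContact time.Time
    }

    // okToReadLocally allows a local (possibly slightly stale) read when we're
    // the leader, or a follower that has heard from the leader recently. A
    // candidate hasn't seen a heartbeat within the election timeout, so its
    // view may be arbitrarily stale and the read should go to the leader.
    func okToReadLocally(n *raftNode, maxStaleness time.Duration) error {
        if n.state == candidate {
            return errors.New("no leader contact within election timeout")
        }
        if n.state == follower && time.Since(n.lastLeaderContact) > maxStaleness {
            return errors.New("last leader contact too old")
        }
        return nil
    }

    func main() {
        n := &raftNode{state: follower, lastLeaderContact: time.Now().Add(-50 * time.Millisecond)}
        fmt.Println(okToReadLocally(n, 150*time.Millisecond)) // <nil>: serve the read locally

        n.state = candidate
        fmt.Println(okToReadLocally(n, 150*time.Millisecond)) // error: redirect to the leader
    }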
As far as implementing multi-paxos goes, the authors behind Google Chubby have talked about how there is a large divide between theoretical multi-paxos and actually implementing it. Also, there aren't any standalone multi-paxos Go libraries available. I wrote go-raft because there wasn't an alternative distributed consensus library in Go at the time.
Let me know if you need an extra pair of eyes on your Raft implementation or if you have any questions (firstname.lastname@example.org).
In raft, all client connections to followers are redirected to the current master.
> i think they did paxos a disservice by not just focusing on multi-paxos
Raft is equivalent to (multi-)Paxos
But I don't think there is anything stopping you from having followers serve reads of committed log entries, provided you're willing to live with being out-of-date.
log entries on a follower may get rolled back - and thrown out - since they were not accepted by a majority of servers.
The master server must be able to handle the entire read load?
Maybe I'm too technically conservative, but it seems like the advantages of Zookeeper's maturity and support base outweigh the ickiness of Java and the whizbang factor of Go. Complaining about the politics of the ASF doesn't really factor in; they're still much more likely to be around in 10 months.
With regards to Apache, we have "mixed feelings". We're aligned with a lot of what they stand for and are definitely very thankful for what they enabled in the OSS community, but are not enthused with the way they handle the projects that join them.
The dependencies and JVM boot time are also annoying if you don't already use Java, though.
I'm making ~80 writes/second with only ~125 keys, and all 3 of my etcd nodes sit at 200M+ resident memory.
At the end of the day there are only ~125 keys (most of which aren't the 80, and expire really quickly), and I'm only writing integers (maybe 8 bytes in length converted to strings).
However, I don't know if this is caused by my use case or if it's just a normal day for etcd. I haven't been able to confirm with anyone else what their memory patterns look like. While running my etcd cluster over the last couple of days my log file has grown 3x to 92M (expected), but my memory usage hasn't grown that much. It now sits at 295M, so I'm unconvinced it actually has anything to do with the way I'm using it.
I opened an issue here, https://github.com/coreos/etcd/issues/162, and it seems the maintainer confirmed it.
Moreover, in our specific case, we did not want to introduce dependencies on hosts that are managed by our customers, so as not to run into conflicts with their own stack.
Zookeeper has legitimate issues, and in my experience most of them stem from the documentation being so verbose that it takes a lot of fine tuning to get right. But if you put it on its own hardware, or at the very least put the write-ahead log on its own partition, you should be good to go.
Certainly, Go will end up using less memory, but in this day and age, when even small servers have 1.5GB of RAM, it's really a non-issue.
I have a product with an agent that runs around 80MB of RAM usage on giant servers, servers with 500+GB of RAM, and my customers still complain sometimes.
Every company in the world that isn't specifically a software-oriented tech company uses some form of DIY model. Actually, strike that, even they use a DIY model. These tools are the proof!
Look at the origins for every modern open source management framework or tool, and it was just a DIY tool that some startup-turned-huge-company developed out of their own needs, then cleaned up a lot and released to the world. Your needs may not match those of the company who developed the tool, so it may not work for you. But I guarantee you that no tool will work for every situation.
Pick the tool that best fits your needs and then fork it and maintain it internally. You'll be doing it anyway. (Unless you don't hire software developers, in which case you'll want to pay for a real product with a support contract.) Once you've done that, stop writing navel-gazing blog posts about your infrastructure that won't apply to 99% of us.
Each application starts a thread that LISTENs for NOTIFYs from PostgreSQL.
I have a settings table (name text, value text). Configuration data is stored there.
insert into settings (name, value) values ('setting_name', 'setting_value');
It works great.
No additional moving parts. All my configuration is stored in the database rather than checked into files that need to be protected. No security concerns about API keys stored in a separate service. My settings are backed up along with the rest of my data.
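For anyone curious what that listener loop looks like in code, here's a rough Go sketch using github.com/lib/pq's LISTEN support. The channel name (settings_changed), the connection string, and the reload-everything-on-notify strategy are assumptions for illustration; any driver with LISTEN/NOTIFY support works the same way.

    package main

    import (
        "database/sql"
        "log"
        "time"

        "github.com/lib/pq"
    )

    // conninfo is an assumption; point it at your own database.
    const conninfo = "dbname=myapp sslmode=disable"

    func loadSettings(db *sql.DB) (map[string]string, error) {
        rows, err := db.Query(`SELECT name, value FROM settings`)
        if err != nil {
            return nil, err
        }
        defer rows.Close()
        out := map[string]string{}
        for rows.Next() {
            var name, value string
            if err := rows.Scan(&name, &value); err != nil {
                return nil, err
            }
            out[name] = value
        }
        return out, rows.Err()
    }

    func main() {
        db, err := sql.Open("postgres", conninfo)
        if err != nil {
            log.Fatal(err)
        }

        // The listener reconnects on its own; the callback just logs connection problems.
        listener := pq.NewListener(conninfo, time.Second, time.Minute,
            func(ev pq.ListenerEventType, err error) {
                if err != nil {
                    log.Print(err)
                }
            })
        if err := listener.Listen("settings_changed"); err != nil {
            log.Fatal(err)
        }

        settings, err := loadSettings(db)
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("loaded %d settings", len(settings))

        // Whoever changes a setting runs: NOTIFY settings_changed;
        // (or a trigger on the settings table does it). Each notification,
        // including the nil one pq sends after a reconnect, triggers a reload.
        for range listener.Notify {
            if settings, err = loadSettings(db); err != nil {
                log.Print(err)
                continue
            }
            log.Printf("reloaded %d settings", len(settings))
        }
    }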
Two examples of how a coordination service has been useful:
* cluster wide throttles to help protect overwhelmable backends
* redundancy in maintenance cronjobs that really only want to be run once per cluster per time period
(edited for formatting)
Atomic updates and locking are another matter, but for a lot of setups they're simply not needed.
Gluster in particular isn't a panacea for resiliency; you've really got to know where it departs from POSIX to avoid creating problems for yourself.
"GlusterFS is fully POSIX compliant."
I would never consider NFS an option in any production system.
On a more general note, anything you build that relies on the particular semantics of any sort of traditional filesystem is sure to be wrong, either now or in the future when it needs to be run on a different filesystem. This is an area of software engineering that's a serious pain in the ass. Avoid it whenever you can.
1. Coordination between services on reconfiguration.
2. A consistent definition of what constitutes a change, enforced through the API on the central server.
By making it API-based you can hook into these updates and trigger reconfigurations and coordinated responses (like rolling bounces) through that system, as opposed to each system polling the file and hoping that the order comes out in the wash.
You could, with enough work, make it so that a client was aware of how to handle individual diffs from the file and coordinated through the file, but at that point you're distributing common parsing logic across multiple systems (and potentially multiple implementations) rather than keeping it in a central system, which sounds awfully un-DRY.
The two rationales I've heard so far are A. a cluster of one of these config servers will provide better uptime than just an NFS share and B. you can use one of these things as a distributed lock server.
Note: Does not depend on puppet, so it'll work with chef. A hiera-databag adapter would make sense.
I've been using Doozer at home on a few side projects and, mainly because it's written in Go, have enjoyed using it a lot more. The point on security is spot on. After reading this post I am definitely going to take a look at etcd.
It may very well be they don't deserve that, but if so they really need to improve their PR.
Easy to use and set up, reliable, good docs, supports ACLs.
The only downside is that it doesn't broadcast changes.
* DNS with slaves
* LDAP with slaves
* Rsync of a directory of config files
* Pair of web servers or NFS servers or Samba servers using either rsync or a redundant network filesystem (e.g. GlusterFS) or block device (e.g. DRBD) to back it.
Solving that problem is easy. It's once you need/want the consistency guarantees that things get dicey.
It's obviously no Zookeeper but it is proven and mature.
They also mention that the DNS approach to service discovery starts to reach its limits and that Spotify is considering Zookeeper (quote):
We have not yet (as of January 2013) started implementing a replacement. We are looking into using Zookeeper as an authoritative source for a static and dynamic service registry, likely with a DNS facade.
For that matter, writing an authoritative-only DNS backend is easy (been there, done that - took about one week from starting to read the RFCs until having a production-ready backend; it takes little time because most/all of the hard work is in the recursive resolvers, and the DNS protocol is actually very well described in the RFCs).
And claiming DNS provides a static view of the world is a bit funny - DNS provides TTL values for everything. If you want a dynamic view, you specify low TTLs, make sure your clients honour them, and couple that with fast replication of the zone data. There are plenty of options for that, from duct tape (my DNS server could update however many records you could write to disk on your hardware per second - via a small script that used Qmail as a queueing message server...) to well-polished products.
Couple that with NOTIFY and IXFR, and the protocol provides every mechanism necessary for keeping zone data replicated and up to date. Many modern DNS servers also let you simply rely on database replication instead (e.g. you can have the DNS server serve data out of Postgres and use Postgres replication to keep the zones up to date across multiple servers), or leave it to you to do updates.
The appeal with DNS here is the long track record and existence of servers that have been battered to death in far more hostile environments than most internal service discovery systems ever will need to deal with.
The downside to DNS is that to get things like guaranteed consistency, you'd need a backend that can guarantee it, and clients that don't cache (which means you need to be careful about what resolvers you rely on). And then it might be just as easy to just deploy one of the options in this article (but there's nothing inherent with DNS that prevents that either).
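For a sense of what the client side can look like with nothing but stock DNS, here's a small Go sketch that resolves SRV records (the _web._tcp.example.com name is made up for the example). The "dynamic view" is just whatever the zone currently serves, bounded by the record TTL and your resolver's caching.

    package main

    import (
        "fmt"
        "log"
        "net"
    )

    func main() {
        // Resolves _web._tcp.example.com; results come back sorted by priority
        // and randomized by weight within each priority.
        _, addrs, err := net.LookupSRV("web", "tcp", "example.com")
        if err != nil {
            log.Fatal(err)
        }
        for _, srv := range addrs {
            fmt.Printf("%s:%d (priority %d, weight %d)\n", srv.Target, srv.Port, srv.Priority, srv.Weight)
        }
    }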
If someone builds this I will be their best friend
If you've ever wondered why DNS requires the "IN" (for "Internet") in all its record declarations, it's to make this distinction. The other two options are "HS" (for Hesiod) and "CH", for http://en.wikipedia.org/wiki/Chaosnet.