

Distributed Hash Tables, Part I - antoviaque
http://www.linuxjournal.com/article/6797

======
jedbrown
It would have been nice if the submission title noted that this is from 2003.

------
etrain
The "leave" portion of the protocol strikes me as unrealistic. Typically, when
a node leaves a network it does so without warning and doesn't have time to
copy its data back. Maybe some level of redundancy will be discussed in the
follow-up.

~~~
lukesandberg
I noticed that too. But I can understand why they left it out. As soon as you
add redundancy you need to start worrying about write consistency.
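One standard way to reason about that trade-off (not covered in the article) is quorum replication: with N replicas, require W acknowledgements per write and R replicas consulted per read, and pick R + W > N so every read quorum overlaps the latest write quorum. A minimal sketch of just that overlap condition, with made-up parameter values for illustration:

```python
def quorums_overlap(n: int, r: int, w: int) -> bool:
    # With n replicas, any write touches w of them and any read
    # consults r of them. If r + w > n, the two sets must intersect,
    # so a read always sees at least one copy of the latest write.
    return r + w > n

# Typical "strong" setting for 3 replicas: write to 2, read from 2.
assert quorums_overlap(3, 2, 2)
# Write to 1, read from 1: fast, but reads can miss the latest write.
assert not quorums_overlap(3, 1, 1)
```

This is only the consistency condition; a real system also has to handle replicas that come back with divergent versions, which is exactly the complexity the article sidesteps.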

~~~
peterwwillis
I would think one would dedicate 2*N nodes for a dataset needing N, and just
copy the data on any given node to its closest neighbor node. If the lookup on
the primary node fails, try its closest neighbor; and if you really want to
get snazzy, have that failover neighbor start copying data to its own closest
neighbor until you get a replacement node up. A failover daemon on each node
would detect when a neighbor goes down, block writes to its copy of the data
until it's sure the neighbor is really down, and refuse new copies until it
both detects that the neighbor is back up and has resolved time-based
differences between the datasets. Probably not foolproof but it's a start :)
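The lookup-with-failover part of that scheme could be sketched roughly like this, assuming a consistent-hash ring where each node's "closest neighbor" is its successor on the ring (the `Ring` class, node names, and the in-memory `down` set are all invented here for illustration):

```python
import hashlib
from bisect import bisect_right

def ring_hash(key: str) -> int:
    # Hash keys and node names onto the same ring of positions.
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes):
        # Ring positions sorted clockwise. Each node's data is assumed
        # to be replicated to its immediate successor on the ring.
        self.ring = sorted((ring_hash(n), n) for n in nodes)
        self.down = set()  # stand-in for the failover daemon's view

    def primary_and_neighbor(self, key):
        # Primary = first node clockwise from the key's position;
        # closest neighbor = the node right after it (with wraparound).
        i = bisect_right(self.ring, (ring_hash(key),)) % len(self.ring)
        return self.ring[i][1], self.ring[(i + 1) % len(self.ring)][1]

    def lookup(self, key):
        # Try the primary first; fail over to its neighbor's replica.
        primary, neighbor = self.primary_and_neighbor(key)
        if primary not in self.down:
            return primary
        if neighbor not in self.down:
            return neighbor
        raise RuntimeError("both primary and neighbor replica are down")

ring = Ring(["node-a", "node-b", "node-c", "node-d"])
primary, neighbor = ring.primary_and_neighbor("some-key")
ring.down.add(primary)
assert ring.lookup("some-key") == neighbor  # served from the replica
```

The write-blocking and re-sync parts are the hard bit, of course; this only shows why reads can survive a single node loss when every key lives on two adjacent ring positions.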

------
guimarin
I think an interesting follow-up, since this is now 8 years old, would be an
article on setting up DHTs in the cloud, like on Amazon's EC2. As demand for
the database/application goes up, rolling in additional nodes in real time
would be pretty snazzy. It would 'seem' to me that most use cases need a
redundancy level of 3 per location, spread across three locations. The
capital costs of physical equipment, racking, zoning within a facility for
redundant power, etc. make a cloud solution much more interesting for 99% of
use cases.

------
ColdAsIce
Not much new on the distributed side of things since 2003, except perhaps for
bitcoin.

Does anyone know how it went with that ant-based file sharing network, MUTE
or something?

