
Ceph storage system - dduvnjak
http://www.techbar.me/2013/05/ceph-storage-system/
======
nasalgoat
Not sure if much has changed, but the last time I tried to install Ceph it
wouldn't work under CentOS. That, and it was far too complicated to set up.

GlusterFS, on the other hand, was incredibly easy, although I am not a fan of
FUSE due to the high CPU usage.

~~~
fintler
I still see Ceph as competition for supercomputer filesystems, not really as
competition for GlusterFS. For example, its design directly attacks the
problem of centralized metadata (especially relevant after the DARPA project
to bring distributed metadata to Lustre failed).
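The core idea behind avoiding a centralized metadata lookup is that any client
can compute an object's placement by itself. Ceph's actual algorithm (CRUSH)
is more elaborate, with device weights and a failure-domain hierarchy; this is
just a toy sketch of the principle using rendezvous (highest-random-weight)
hashing, with made-up OSD names:

```python
import hashlib

def place(object_name, osds, replicas=2):
    """Deterministically choose `replicas` OSDs for an object.

    Every client ranks the OSDs by a hash of (object, osd) and
    picks the top entries, so all clients independently agree on
    placement without asking a metadata server on the data path.
    """
    ranked = sorted(
        osds,
        key=lambda osd: hashlib.sha1(f"{object_name}:{osd}".encode()).hexdigest(),
        reverse=True,
    )
    return ranked[:replicas]

# Hypothetical cluster members:
osds = ["osd.0", "osd.1", "osd.2", "osd.3"]

# Two clients computing placement separately get the same answer:
assert place("myfile.part1", osds) == place("myfile.part1", osds)
```

The payoff is that adding or removing an OSD only reshuffles the objects that
hashed near that device, rather than invalidating a central placement table.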

I was working on an unrelated project with one of the designers of Ceph
(UCSC's Scott Brandt), and in conversation he seemed to concur that Ceph was
really built as a replacement for PanFS or Lustre (though it may still be
useful for other things, of course).

Using it to replace GlusterFS still seems odd to me. It feels like they're
both solutions to different problems.

~~~
noahdesu
Recent Gluster vs. Ceph debate at LCA:

<http://www.youtube.com/watch?v=JfRqpdgoiRQ>

~~~
fintler
Nice talk, but I think I was basically trying to say that Ceph _really_ shines
when you compare it to Lustre and PanFS.

It's not a clear winner when you compare it to GlusterFS because the original
design wasn't intended to replace GlusterFS (although it may do a good job of
this anyway).

~~~
stonith
Ceph is still a very long way behind Lustre for streaming bandwidth, so saying
it shines would be a little much. Lustre's weakness is scaling to large file
counts, but in real deployments this can be mitigated by using an MDS server
with a lot of grunt. Ceph can't compete with Lustre for HPC deployments until
it supports RDMA, and even then it's still going to take a long time to reach
Lustre's performance (which is close to line rate at this point).

------
yaksha
FLOSS Weekly recently interviewed the Ceph project. Link to the show:
<http://twit.tv/show/floss-weekly/250>

------
mad44
Here is a summary of how Ceph works.
<http://muratbuffalo.blogspot.com/2011/03/ceph-scalable-high-performance.html>

~~~
noahdesu
Inktank is the professional services company backing Ceph. Here is their
youtube channel with a whole lot of up-to-date videos:

<http://www.youtube.com/user/inktankstorage>

~~~
jmspring
Glad to see Sage commercializing this, and happy to see research from UC Santa
Cruz's Storage Systems Research Center (where I went for undergrad/grad)
making it into commercial use.

More SSRC work and some of the original Ceph papers can be found at:

<http://www.ssrc.ucsc.edu/index.html>

------
bwb
The guys behind Ceph are very smart; I'd keep an eye on this one, as it's
going to be awesome!

