Graphene: A D3.js, Backbone.js based Graphite Dashboard Toolkit (github.com)
97 points by jondot on Mar 7, 2012 | 16 comments

I've been tracking Graphite's evolution since some articles by the Etsy engineers (Track Every Release http://codeascraft.etsy.com/2010/12/08/track-every-release/ and Measure Everything http://codeascraft.etsy.com/2011/02/15/measure-anything-meas...)

However, every time I look at the site it seems more and more like abandonware. There's only a quickstart and no proper documentation.

Graphite is not abandonware:


The documentation could use a lot of work, but do note that it now lives here:


The common things you'd want to do in Graphite are easy; more complex things require knowledge that isn't in the docs but is easily found if you're willing to invest the time in searching and reading code. Of course everyone would like great documentation; however, some open source projects may not have the time for everything -- and that's OK, because it's all free and full of love. Graphite is a great engineering achievement and an awesome tool to have. Give it a try, even though it lacks documentation at this stage.

I hadn't noticed that Graphite is written in Python; that's a plus (since we have several projects in Python).

However, why should I prefer Graphite over, e.g., Munin? It's open source as well, but it also has proper documentation.

I'm not sure they're comparable. I did the same evaluation, and one of the many criteria I had was a system that can handle many, many data points from many hosts (at the peak we had 50 production machines) and provide robust, flexible queries over them, with almost zero maintenance/configuration. Metrics are created dynamically, the Graphite database is optimized for this problem (more so than RRD, although it seems that recent versions of RRD have closed the gap), and it can take a beating in terms of scale. A single Graphite instance does this effortlessly.

Mmm, not bad, not bad at all.

It doesn't support alarms, right?

Graphite is not a monitoring tool. It stores time series data and graphs it.

You can write a monitoring system that queries Graphite and send notifications from there. Graphite can give you the raw data that backs the graphs:
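A minimal sketch of that idea: Graphite's render endpoint can return the raw series as JSON (a list of objects with a `target` and a `datapoints` array of `[value, timestamp]` pairs, where values may be null). The host and metric names below are hypothetical; a real monitoring system would add retries, scheduling, and actual notification delivery.

```python
import json

def latest_value(series):
    """Return the newest non-null value from one render-API series.

    Each series looks like:
    {"target": "...", "datapoints": [[value, timestamp], ...]}
    with None where no data was recorded for an interval.
    """
    for value, _ts in reversed(series["datapoints"]):
        if value is not None:
            return value
    return None

def over_threshold(payload, threshold):
    """Yield (target, value) for every series whose latest value exceeds threshold."""
    for series in json.loads(payload):
        value = latest_value(series)
        if value is not None and value > threshold:
            yield series["target"], value

# Fetching the payload might look like (hypothetical host and metric):
# url = ("http://graphite.example.com/render"
#        "?target=servers.web1.load&from=-5min&format=json")
# payload = urllib.request.urlopen(url).read()
```

From there, anything that trips `over_threshold` can be handed to whatever notification channel you already use.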


We are using Graphite at Wavii. That Etsy trick is great. They just issued a new release, so it's definitely not abandoned. It is cleanly coded Python; we have had no trouble patching small bugs and figuring things out by reading the code.

Uhm, awesome. Love it.

I've seen Graphite pop up a lot lately.

Why should I care about Graphite?

Graphite is a very simple time-series graphing solution that does one thing very well. Setting it up is easy and feeding it data is even easier. It's basically a little server which builds RRD-like databases on the fly and allows you to generate images of graphs using URLs. For example, you can feed it all kinds of data from your web server, in real time, and have it produce a graph as a PNG.

I guess for me the best part is the simplicity of the whole thing. Getting data in is just a simple TCP or UDP socket call (which you can do with almost anything, from nc to curl). Getting graphs out is a URL (albeit a bit complex to create by hand :). Tying it all together is a simple, but functional, web interface.
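To illustrate just how simple getting data in is: carbon's plaintext protocol is one line per metric, `<path> <value> <timestamp>\n`, sent to port 2003 by default. A minimal Python sketch (the host name is an assumption; anything that can open a socket works just as well):

```python
import socket
import time

def graphite_line(path, value, timestamp=None):
    """Format one metric in Graphite's plaintext protocol:
    "<path> <value> <timestamp>\n", timestamp in epoch seconds."""
    ts = int(time.time() if timestamp is None else timestamp)
    return "%s %s %d\n" % (path, value, ts)

def send_metric(path, value, host="graphite.example.com", port=2003):
    """Open a TCP connection to carbon's plaintext port and send one line."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(graphite_line(path, value).encode("ascii"))
```

The same line works from a shell with `echo ... | nc`, which is exactly why so many people wire it into cron jobs and one-off scripts.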

Graphene solves the problem of graph generation and building dashboards in Graphite. By default Graphite builds PNG graphs, which are expensive to build on the server and aren't dynamic. Graphene converts these static PNG graphs to SVG and lets the browser do all the heavy lifting with regards to rendering.

Graphite is awesome; I've been stuffing more and more of our operational metrics into it since 2010.

It is, however, not RRD:


Having said that, Graphite does not use RRD itself, but its storage is a kind of round-robin database. :-)

Graphite was the biggest change I had to adjust to when I moved to my current job, and it will be the biggest thing I push for at all future jobs.

We have stupid bash scripts that run on cron every minute. The scripts collect all sorts of system data: things like load average, iowait, data in, data out.

They also collect all sorts of info about our custom network daemon. They take all of this info and, using netcat, echo, and sometimes statsd, shove it all into Graphite.
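The collection side of such a script can be tiny. As a sketch (the metric prefix is made up, and the shell pipeline in the comment assumes carbon's default plaintext port), here is the load-average piece done in Python instead of bash: read `/proc/loadavg`, whose first three fields are the 1-, 5-, and 15-minute load averages, and emit plaintext-protocol lines.

```python
import time

def loadavg_lines(proc_text, prefix, timestamp):
    """Turn the contents of /proc/loadavg into Graphite plaintext lines.

    /proc/loadavg starts with the 1-, 5-, and 15-minute load averages,
    e.g. "0.42 0.35 0.30 1/123 4567".
    """
    one, five, fifteen = proc_text.split()[:3]
    ts = int(timestamp)
    return [
        "%s.load.1min %s %d" % (prefix, one, ts),
        "%s.load.5min %s %d" % (prefix, five, ts),
        "%s.load.15min %s %d" % (prefix, fifteen, ts),
    ]

# A cron job might print these and pipe them to netcat:
# with open("/proc/loadavg") as f:
#     print("\n".join(loadavg_lines(f.read(), "servers.web1", time.time())))
# ... | nc graphite.example.com 2003
```

Swap in iowait, network counters, or your daemon's own stats and the shape of the script stays the same.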

Having all of this info gives us several advantages. When we notice an issue, we are able to track it back in time. We notice an increase in CPU usage at the same time as a sharp increase in data in? Our daemon performed a bunch of "foo" actions at the same time; well, obviously something about the "foo" action is killing the box.

In addition to historical data, we can also do A/B testing from an operations perspective. Take a couple of boxes, split them into two groups, and tune sysctl "y" to value x on all of the group A boxes. Turn up the same load and see how sysctl "y" affects the performance of the box.

Because Graphite has such a simple API for importing data, we can easily expand our scripts, push them to the boxes, and have new sources of data. Little fuss.

Come on, no online demo? Not even screenshots? How are people supposed to tell whether there is anything good there?

Thanks - I hadn't noticed that.
