How to Set Up Metric Collection Using Graphite and Statsd on Ubuntu 12.04 LTS (kinvey.com)
30 points by kinvey 1955 days ago | 22 comments

I know developers are used to setting stuff up in local environments, but that guide's about 20 steps too many. It would be nice to have an apt package built that encapsulates the operations in that script.

And is it really necessary to install Git just to download some software? The devs should make a tarball out of the 0.3.0 tag and follow standard packaging conventions. /opt/ is for 3rd-party/proprietary stuff or "really big crap" like KDE/Gnome (if you're on certain distros).
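For reference, a release tarball can be cut straight from a tag with `git archive`, no packaging tooling required; a sketch, assuming the tag is literally named `0.3.0`:

```shell
# Build a versioned tarball directly from the 0.3.0 tag;
# --prefix puts everything under a conventional top-level directory
git archive --format=tar.gz --prefix=statsd-0.3.0/ \
    -o statsd-0.3.0.tar.gz 0.3.0
```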

If the generation gap between modern developers and Linux package-managing folk is too big, I'll gladly write up some guides that explain how (and why) to package software instead of doing slapdash local installs.

I couldn't agree more. Graphite is awesome, but the state of its packaging is atrocious. That's probably the biggest barrier to its widespread adoption, actually. I spent 2 days making my own Graphite packages and automating the install via Puppet, which was worthwhile, but shouldn't be necessary.

Is it possible to share your Puppet scripts?

As a packager, I often wonder whether developers should package their software themselves. I'm not sure they should produce more than a decent tarball, with clear INSTALL instructions and licensing information. Setting up a clean package-building environment (mock for RPM or pbuilder for DEB) is still a hassle.

For what it's worth, I try to maintain RPM packages for graphite for RHEL (and derivatives) over at http://pakk.96b.it/

Yes, that would be great. I've used Debian for over a decade, but I still don't know how I would turn my Python packages or other programs into debs.
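One low-effort route, assuming the fpm tool discussed elsewhere in this thread is installed (`gem install fpm`), is to let it wrap a Python package directly; a sketch, not a substitute for proper Debian packaging:

```shell
# fpm reads setup.py metadata (name, version, dependencies)
# and emits a .deb without any debian/ directory boilerplate
fpm -s python -t deb ./setup.py
```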

I don't get it. Is this a replacement for Alien? (http://en.wikipedia.org/wiki/Alien_%28software%29)

If it's supposed to make packages for random unpacked software, I'm kind of lost. It could be that I just really suck at reading Ruby (I don't know the language), but I couldn't find any code that looks at autoconf/automake files or standard open-source developer conventions to generate a package from scratch using the values intended by the developer. I've written two very crappy tools that do this to automatically generate packages (mainly for Solaris, Red Hat and Slackware).

To answer the question above: you need to learn how to write basic Makefiles; after that it's all package-manager-specific stuff, and you can use something like Alien (or fpm) to convert to whatever format you wish.

Here's a video from a BayLISA talk Jordan gave on fpm, which might help clarify things: http://vimeo.com/23940598

Also, see "Use Case - Package something that uses 'make install'" on the fpm wiki at https://github.com/jordansissel/fpm/wiki/PackageMakeInstall
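The gist of that use case, sketched here with a hypothetical package name and version:

```shell
# Stage the 'make install' output into a scratch directory
# instead of the live filesystem
make install DESTDIR=/tmp/installdir

# Then wrap the staged tree as a .deb:
#   -s dir   source is a plain directory
#   -t deb   target format
#   -C       change into the staging dir before packaging
fpm -s dir -t deb -n graphite -v 0.9.10 -C /tmp/installdir .
```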

Aw, that's disappointing. I automated those steps in my old package generator(s). After you build a couple thousand linux packages from scratch, you notice most of 'em follow some simple conventions which you can look for and package in an automated way.

http://psydev.syw4e.info/new/autopkg.pl/autopkg.pl-1.5/usr/b... (oh god, this is bad code)


+1 for fpm. It makes building packages (rpm, deb, etc.) really easy, so you can deploy your own projects with all the benefits of a package manager. Jordan did a great job with this.

I'm the author, and thanks for the feedback.

You're right about the guide being a little too long. I was trying to establish a starting point for people looking to install and set up these tools, but it would be better if there were a way to just "apt-get install" this stuff.

I'll see if I can put this together in a deb and release it somehow.

Excuse the blatant advertising (I contracted with these guys), but Librato Metrics is intended to work like "Graphite as SaaS", so you don't have to go through this crazy process. They also subscribe to the idea that your metrics should be separate from your production infrastructure; otherwise, when your servers melt, your graphing server melts too :). Obviously if you want to stay pure open source this won't apply, but otherwise it's a pretty neat service. https://metrics.librato.com/ and https://github.com/librato

This service looks great and seems to be just like Graphite feature-wise, but without the clunkiness. The problem is that the pricing makes absolutely no sense and is hard to estimate.

They "charge $0.000002 per measurement. As simple as that." -> https://metrics.librato.com/pricing

The point is: what constitutes a "measurement"? A data point for each metric I submit? Or, if I send data points for multiple metrics in a batch, is that considered a single "measurement"? I have no idea, and the support page doesn't cover it. If I can't explain it clearly to my finance department, they won't subscribe.

Appreciate the feedback (Librato co-founder here); we should definitely make that clearer on the pricing page and elsewhere. We're in the process of adding some content around that now.

In short, a measurement is a single datapoint, i.e. <key, value, timestamp>. So if you have a sense of how many metrics you want to track and at what frequency, it's relatively straightforward to calculate the list price. There's an estimator on the pricing page to help you do that once you understand what constitutes a "measurement".
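As a worked example (the metric count and reporting interval here are made up, not Librato's numbers): 50 metrics reported every 10 seconds comes to about 13M datapoints a month:

```shell
# measurements/month = metrics * (seconds in 30 days / reporting interval)
measurements=$(( 50 * 30 * 24 * 3600 / 10 ))
echo "$measurements"    # 12960000

# list price at $0.000002 per measurement
awk -v n="$measurements" 'BEGIN { printf "%.2f\n", n * 0.000002 }'   # 25.92
```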

Is that price competitive? I only looked at stat-hat recently, and for whatever reason their pricing seemed more reasonable. Perhaps it was just how they pitch it, as units by the million.

That said, librato seems like it would be nice if your stat volume was low.

I think it's definitely competitive if you look across all the entrants in this space (or similar/adjacent) ones. That doesn't necessarily make us the lowest priced however ;-). Roughly speaking a million measurements costs $2.00 with our current list pricing. We do have progressive volume discounts (described in the FAQ) that kick in at higher measurement counts.

Graphite is a nice piece of software, but with an absurdly confusing architecture and setup. You have to set up 3 different applications (graphite-web, carbon and whisper) to have a working server, and you're on your own; there's no comprehensive documentation. Also, I have no idea why it ships with 4 different user interfaces, of which 3 are semi-broken, or why it doesn't adopt RRD as its default storage (since it can read RRD files if you symlink them). It could be refactored to be only a REST interface for graphing and be just as useful.

You also may want to have something like this in storage-aggregation:

  pattern = ^stats_counts\..*
  xFilesFactor = 0.25
  aggregationMethod = sum
Otherwise stats_counts get averaged at compaction/rotation, instead of summed. Summing seemed more reasonable, for our use cases at least.
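Relatedly, the retention policy for those same series lives in storage-schemas.conf; a sketch with made-up retention values (carbon matches the first pattern that hits, so order matters):

```
[stats_counts]
pattern = ^stats_counts\..*
retentions = 10s:6h,1m:7d,10m:1y
```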

I wanted to use Graphite+statsd, but the docs were really sparse.

I've stuck with Munin for the time being.

It is flaky as hell for me. But Munin2Graphite gets you the best of both worlds.

It's very cool that all my metrics are in one place. Shame it is such an ugly, uncomfortable place.

