
Prometheus: An open-source service monitoring system and time series database - jjwiseman
http://prometheus.io/
======
SEJeff
As a graphite maintainer, see my other post about problems with graphite:

[https://news.ycombinator.com/item?id=8908423](https://news.ycombinator.com/item?id=8908423)

I'm super excited about prometheus, and can't wait to get some time to see if
I can make it work on my Raspberry Pi. That being said, I'm also likely going
to eventually work on a graphite-web / graphite-api pluggable backend to use
prometheus as the backend storage platform.

The more OSS metrics solutions, the better!

~~~
mdeeks
A huge problem I'm having with graphite right now (which is making me look at
influxdb, etc) is its inability to render graphs with lots and lots of lines.
For example, CPU usage across a cluster of hundreds of machines almost
always times out now. I'm essentially graphing this:
system.frontend-*.cpu-0.cpu-used, where "frontend-*" expands to 200 or so
machines. I'm not
entirely sure where the bottleneck is here. Would love to know if you have
ideas. Is this a limitation of graphite-web itself?

I have a large graphite install of 20+ carbon nodes running on SSDs and
three additional graphite-web instances in front generating graphs. Ingesting
something like 1 million metrics/min.

Also, I didn't realize there were still graphite maintainers (seriously, not
trolling). There hasn't been a release of graphite in well over a year. I
assumed it was dead by now. Any idea when we'll get a fresh release?

~~~
SEJeff
We are in the final stages of the last 0.9.x release, 0.9.13. From then on,
we're going to be making some more noticeable changes that break some
backwards compat to make the project a lot more pleasant.

Note that 0.9.13 is almost ready to be cut:

[https://github.com/graphite-project/graphite-web/commit/7862...](https://github.com/graphite-project/graphite-web/commit/78629d4c811cf7994639dfb852b01d40af8293c3)

[https://github.com/graphite-project/carbon/commit/e69e1eb59a...](https://github.com/graphite-project/carbon/commit/e69e1eb59aaade325d37e68d920d786f280d294a)

[https://github.com/graphite-project/whisper/commit/19ab78ad6...](https://github.com/graphite-project/whisper/commit/19ab78ad6f4aa2c34d4bd5b84bf876ad5212bc64)

Anything in the master branch is what will be in 0.10.0 when we're ready to
cut that. I think we'll spend some more cycles in 0.10.x focusing on non-
carbon / non-whisper / non-ceres backends that should allow much better
scalability. Some of these include cassandra, riak, etc.

As for the timeouts, it's a matter of general sysadmin spelunking to figure
out what is wrong. It could be IO on your carbon caches, or CPU on your
render servers (where it uses cairo). I'm a HUGE fan of grafana for doing
100% of the dashboards and only using graphite-web to spit out json, or
alternatively using graphite-api.

In the meantime, take a look at the maxDataPoints argument to see if that
will keep your graphs from timing out.
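
For example (illustrative URL; substitute your own host and target):

    http://graphite.example.com/render?target=system.frontend-*.cpu-0.cpu-used&from=-1h&maxDataPoints=800&format=json

That caps each series at roughly 800 consolidated points instead of returning
every raw datapoint.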

~~~
mdeeks
My brief experience with browser based rendering was not good. Our dashboard
pages often have 40-50+ graphs for a single cluster. I found it brought all
browsers to a crawl and turned our laptops into blazing infernos when viewing
longer timelines. Granted, I didn't try out Grafana, so it could have been
badly optimized javascript in the ones I did try.

CPU on the render servers is low. IO on the carbon caches is acceptable (10k
IOPS on SSDs that support up to 30k or so). If the CPU Usage Type graph would
render, it would show very little IO wait (~5%). Graphs, if you're interested:
[http://i.imgur.com/dCrDynY.png](http://i.imgur.com/dCrDynY.png)

Anyway thanks for the response. I'll keep digging. Looking forward to that
0.9.13 release!

~~~
SEJeff
maxDataPoints was a feature added by the guy who wrote giraffe[1], which is
for realtime dashboards from graphite. It was too slow until he added in the
maxDataPoints feature, and now it is actually really awesome when set up
properly.

Also look at graphite-api[2], written by a very active graphite committer. It
is api only (only json), but absolutely awesome stuff. Hook it up to grafana
for a real winner.

[1]
[http://giraffe.kenhub.com/#dashboard=Demo&timeFrame=1d](http://giraffe.kenhub.com/#dashboard=Demo&timeFrame=1d)

[2] [https://github.com/brutasse/graphite-api](https://github.com/brutasse/graphite-api)

------
ddorian43
Announced here:

[https://developers.soundcloud.com/blog/prometheus-monitoring...](https://developers.soundcloud.com/blog/prometheus-monitoring-at-soundcloud)

~~~
bbrazil
The SoundCloud announcement is a great overview, there's also a series of blog
posts I did on
[http://www.boxever.com/tag/monitoring](http://www.boxever.com/tag/monitoring)
going into more depth with end-to-end examples.

------
ggambetta
"Those who cannot remember the Borgmon are doomed to repeat it" ;)

Just kidding, this is looking really good, I hope to get some hands-on
experience with it soon.

~~~
thesnider
After working at a job that had a horrible patchwork of monitoring techniques
(including at least two in-house systems), I was desperately pining for
borgmon, actually. Never thought those words would come out of my mouth.

This does seem to have addressed at least a couple of the issues with that
system, in that its config language is sane, and its scrape format is well-
defined and typed.

------
clarkevans
We've been looking for something like this, unfortunately the "pull" model
won't work for us. We really need a push model so that our statistics server
doesn't need access to every single producer. I see the pushgateway, but it
seems deliberately not meant to be centralized storage.

I wonder what InfluxDB means by "distributed", that is, if I could use it to
implement a push (where distributed agents push to a centralized metric
server) model.

~~~
Rapzid
Same here; need push for integration. It appears that Prometheus favours pull
to a fault. To me it makes sense to have a push/message infrastructure that
you can then write scrapers for to your heart's content. InfluxDB has push, but
I read that it uses 12X the storage due to storing metadata with each metric;
Yikes!

~~~
jrv
(Prometheus author here)

Yeah, I did that benchmark with 11x overhead for storing typical Prometheus
metrics in InfluxDB in March of 2014. Not sure if anything has changed
conceptually since then, but if anyone can point out any flaws in my
reasoning, that'd be interesting:

[https://docs.google.com/document/d/1OgnI7YBCT_Ub9Em39dEfx9Bu...](https://docs.google.com/document/d/1OgnI7YBCT_Ub9Em39dEfx9BuiqRNS3oA62i8fJbwwQ8/edit)

~~~
rdsubhas
Hi, this definitely looks very cool, but how about cases where we have a bunch
of instances running behind a load balancer, and each serves its own metrics?

We can't pull them, because hitting the load balancer would randomly choose
only one instance.

Instances are scaled up based on load, so we can't specify the target
instances in Prometheus because the set keeps changing.

We'd like to try this out, but any ideas what to do for the above?

~~~
bbrazil
What you want to do is separately scrape each instance.

We're working on service discovery support[1] so that you can dynamically
change what hosts/ports Prometheus scrapes. Currently you can use DNS for
service discovery, or change the config file and restart prometheus.

[1][http://prometheus.io/docs/introduction/roadmap/](http://prometheus.io/docs/introduction/roadmap/)
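
To give a flavor of the DNS approach (a sketch only; the record name is made
up and the exact config syntax varies between Prometheus versions):

    scrape_configs:
      - job_name: 'frontend'
        dns_sd_configs:
          # SRV record listing the currently live instances
          - names: ['_frontend._tcp.example.com']

Prometheus re-resolves the record periodically, so instances that come and go
behind the load balancer are picked up without a config change or restart.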

------
mbell
From the storage system docs:

> which organizes sample data in chunks of constant size (1024 bytes payload).
> These chunks are then stored on disk in one file per time series.

That is concerning, is this going to have the same problem with disk IO that
graphite does? I.e., every metric update requires a disk IO due to this
one-file-per-metric structure.
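
Back-of-envelope, assuming ~16 bytes per raw sample (an assumption, not
necessarily their encoding): a 1024-byte chunk holds on the order of 64
samples, so the write unit is a chunk rather than a single sample. But one
file per series still means one inode per series, with writes scattered
across as many files as you have active metrics.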

~~~
falcolas
This was my first thought as well. Having one inode per metric measured can
easily overwhelm some file systems, and the IO overhead on those disks gets
silly (especially if it's not an SSD capable of constant-time writes against
sparse data).

Combine that with the Cacti-style pull model, and I think a wait-and-see
attitude is best for now.

~~~
beorn7
We were tempted to implement our own data management in a couple of bigger
files, but extensive testing on various modern file systems (XFS, ext4)
showed, perhaps not surprisingly, that those file systems are way better at
managing the data than our home-grown solutions were.

~~~
falcolas
So, the open source timeseries DBs (RRD Tool, Influx, and KairosDB), and
others like sqlite or even InnoDB didn't make the cut? That surprises me.

> file systems are way better in managing the data

Except they're not managing data, they're just separating tables, to extend
the DB metaphor. And you still run the risk of running out of inodes on a
"modern" file system like ext4.

After having briefly dug into the code, I'm particularly worried about the
fact that instead of minimizing iops by only writing the relevant changes to
the same file, Prometheus is constantly copying data from one file to the
next, both to perform your checkpoints and to invalidate old data. That's a
lot of iops for such basic (and frequently repeated) tasks.

Still in wait-and-see mode.

~~~
jrv
So we're admittedly not hardcore storage experts, but we did a lot of
experiments and iterations until we arrived at the current storage, and it
seems to be performing quite well for our needs, and much better than previous
iterations. We're happy to learn better ways of data storage/access for time
series data though.

RRD Tool: expects samples to come in at regular intervals and expects old
samples to be overwritten by new ones at predictable periods. It's great
because you can just derive the file position of a sample based on its
timestamp, but in Prometheus, samples can have arbitrary timestamps and gaps
between them, and time series can also grow arbitrarily large (depending on
the currently configured retention period), and our data format needs to
support that.
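
To illustrate why those fixed intervals buy RRD so much (a toy sketch, not
RRD's actual on-disk format):

    package main

    import "fmt"

    // RRD-style addressing: with a fixed sample interval and a fixed number
    // of slots, a sample's file offset is a pure function of its timestamp,
    // and old data is overwritten in place, ring-buffer style.
    const (
        interval = 60   // seconds between samples
        slots    = 1440 // one day of minutely samples
        slotSize = 8    // bytes per stored value
    )

    func sampleOffset(ts int64) int64 {
        slot := (ts / interval) % slots // wraps, overwriting the oldest slot
        return slot * slotSize
    }

    func main() {
        fmt.Println(sampleOffset(1421000000))
    }

None of that works once samples arrive at arbitrary times with gaps and
series must grow instead of wrapping.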

InnoDB: not sure how this works internally, but given it's usually used in
MySQL, does it work well for time series data? I.e. millions of time series
that each get frequent appends?

KairosDB: depends on Cassandra, AFAICS. One main design goal was to not depend
on complex distributed storage for immediate fault detection, etc.

InfluxDB: looks great, but has an incompatible data model. See
[http://prometheus.io/docs/introduction/comparison/#prometheu...](http://prometheus.io/docs/introduction/comparison/#prometheus-vs.-influxdb)

I guess a central question that touches on your iops one is: you always end up
having a two-dimensional layout on disk: timeseries X time. I don't really see
a way to both store _and_ retrieve data in such a way that you can arbitrarily
select a time range and time series without incurring a lot of iops either on
read or write.

~~~
falcolas
> does it work well for time series data

It's a key/value store at its heart, with all the ACID magic and memory
buffering built in.

Almost any KV store would perform relatively well at time series data simply
by using updates to overwrite old data instead of constantly deleting it
(assuming the KV store is efficient in its updates).

Issuing updates instead of deletes is possible because you know the storage
duration and interval, and can thus easily identify an index at which to store
the data.

~~~
jrv
An earlier iteration of our storage was actually based on LevelDB (key-value
store), with this kind of key->value layout:

[time series fingerprint : time range] -> [chunk of ts/value samples]

At least this scheme performed way worse than what we currently have. You
could say that file systems also come with pretty good memory buffering and
can act as key-value stores (with the file name being the key and the contents
the value), except that they also allow efficient appends to values.
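
For the curious, the key encoding was along these lines (a simplified sketch
from memory; the real field widths differed):

    package main

    import (
        "bytes"
        "encoding/binary"
        "fmt"
    )

    // Sketch of a [fingerprint : time range] key. Big-endian encoding means
    // the store's sorted iteration groups all chunks of one series together,
    // ordered by time, so reading a time window is a sequential range scan.
    func chunkKey(fingerprint uint64, firstTime, lastTime int64) []byte {
        var buf bytes.Buffer
        binary.Write(&buf, binary.BigEndian, fingerprint)
        binary.Write(&buf, binary.BigEndian, firstTime)
        binary.Write(&buf, binary.BigEndian, lastTime)
        return buf.Bytes()
    }

    func main() {
        fmt.Printf("%x\n", chunkKey(0xdeadbeef, 1421000000, 1421003600))
    }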

> Issuing updates instead of deletes is possible because you know the storage
> duration and interval, and can thus easily identify an index at which to
> store the data.

Do you mean you would actually append to/update an existing value in the
KV store (which most KV stores don't allow without reading and rewriting the
whole value)?

~~~
ithkuil
Did your previous leveldb approach perform way worse on reads, writes or both?

~~~
bbrazil
Both. When I switched our production Prometheis over, the consoles rendered
more than twice as fast for a simple test case.

------
perlgeek
This looks very interesting.

From
[http://prometheus.io/docs/introduction/getting_started/](http://prometheus.io/docs/introduction/getting_started/)

> Prometheus collects metrics from monitored targets by scraping metrics HTTP
> endpoints on these targets.

I wonder if we'll see some plugins that allow data collection via SNMP or
nagios monitoring scripts or the like. That would make it much easier to
switch large existing monitoring systems over to prometheus.

~~~
bbrazil
(One of the authors here)

Just last night I wrote the
[https://github.com/prometheus/collectd_exporter](https://github.com/prometheus/collectd_exporter),
which you could do SNMP with. I do plan on writing a purpose-built SNMP
exporter in the next few months to monitor my home network, if someone else
doesn't get there first.
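
The exposition format is simple enough that wrapping existing scripts is
also easy; a scrape endpoint just returns plain text along these lines
(schematic sample):

    # HELP node_load1 1m load average.
    # TYPE node_load1 gauge
    node_load1 0.21
    http_requests_total{handler="root",code="200"} 1027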

~~~
perlgeek
Awesome. I wondered if I should mention collectd as a possible source of data,
and now you've already made it available.

------
bhuga
It's great to see new entrants into the monitoring and graphing space. These
are problems that every company has, and yet there's no solution as widely
accepted for monitoring, notifications or graphing as nginx is for a web
server.

Not that I'd do a better job, but every time I further configure our
monitoring system, I get that feeling that we're missing something as an
industry. It's a space with lots of tools that feel too big or too small; only
graphite feels like it's doing one job fairly well.

Alerting is the worst of it. Nagios and all the other alerting solutions I've
played with feel just a bit off. They either do too much or carve out a
role at boundaries that aren't quite right. This results in other systems
wanting to do alerting, making it tough to compare tools.

As an example, Prometheus has an alert manager under development:
[https://github.com/prometheus/alertmanager](https://github.com/prometheus/alertmanager).
Why isn't doing a great job at graphing enough of a goal? Is it a problem with
the alerting tools, or is it a problem with boundaries between alerting,
graphing, and notifications?

~~~
KyleBrandt
Shameless plug. But Bosun's focus is largely on Alerting
([http://bosun.org](http://bosun.org)). We have an expression language built
in that allows for complex rules. It leverages OpenTSDB's multi-dimensional
facets to create alert instantiations, but you can also change the scope by
transposing your results.

It now supports Logstash and Graphite as backends as well. The Graphite
support is thanks to work at Vimeo.

Another nice thing about Bosun is you can test your alerts against time series
history to see when they would have triggered so you can largely tune them
before you commit them to production.
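
To give a flavor of the expression language (a rough sketch from memory, not
exact syntax):

    alert high.cpu {
        $q = avg(q("sum:rate:os.cpu{host=*}", "5m", ""))
        warn = $q > 80
        crit = $q > 95
    }

Because host is a tag in the query, this instantiates one alert per host;
transposing lets you regroup those results at a different scope.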

~~~
jrv
Great. Yeah, the ability to unit-test alerts is something we still need to
add. But at least you can already manually graph your alert expressions and
see how they would have evaluated over time.

Interesting work on Bosun by the way! Seems like there is quite some overlap
with Prometheus, but I have yet to study it in depth. Is my impression correct
that OpenTSDB is a requirement, or is there any local storage component? I
guess you could run OpenTSDB colocated on a single node...

~~~
KyleBrandt
OpenTSDB is our main backend. But we can also query graphite and logstash.
However the graphing page doesn't work with Graphite.

~~~
jrv
Ah, ok. By the way, in case you make it to GopherCon this year, it would be
interesting to exchange ideas! One of the things I'm happy about is that
finally systems with multi-dimensional time series (based on OpenTSDB or not)
are becoming more common...

~~~
KyleBrandt
I was there last year but not sure if I'm going this year yet. Matt is going
though ([http://mattjibson.com/](http://mattjibson.com/)). I'd love to chat at
some point though!

I need to take a closer look at stealing ideas from your tool :-) We are both
leveraging Go templates (Bosun uses it for Alert notifications, but I've
thought about using it to create dashboards as well).

------
e12e
So... how does this compare to [http://riemann.io/](http://riemann.io/) ? I
just re-discovered riemann... and was thinking of pairing it with logstash to
have a go. It would seem prometheus does something... similar?

~~~
bbrazil
From my look at Riemann, it seems aimed more at event monitoring than
time-series monitoring. You can (and many do) use Riemann as a time-series
monitoring system, but my understanding is that Prometheus is a bit better
for multi-dimensional labels.

I could see Riemann being used as an alert manager on top of Prometheus,
handling all the logic around de-duping of alerts and notification.
Prometheus's own alert manager is considered experimental.

------
simple10
Looks really promising for smaller clusters. However, the pull/scraping model
for stats could be problematic at larger scale.

I've been experimenting with metrics collection using heka (node) -> amqp ->
heka (aggregator) -> influxdb -> grafana. It works extremely well and scales
nicely but requires writing lua code for anomaly detection and alerts – good
or bad depending on your preference.

I highly recommend considering Heka[1] for shipping logs to both ElasticSearch
and InfluxDB if you need more scale and flexibility than Prometheus currently
provides.

[1] [https://github.com/mozilla-services/heka](https://github.com/mozilla-services/heka)

~~~
bbrazil
> However, the pull/scraping model for stats could be problematic at larger
> scale.

From experience with similar systems at massive scale, I expect no scaling
problems with pulling in and of itself. Indeed, there are some tactical
operational options you get with pull that you don't have with push. See
[http://www.boxever.com/push-vs-pull-for-monitoring](http://www.boxever.com/push-vs-pull-for-monitoring) for my general
thoughts on the issue.

> InfluxDB

InfluxDB seems best suited for event logging rather than systems monitoring.
See also
[http://prometheus.io/docs/introduction/comparison/#prometheu...](http://prometheus.io/docs/introduction/comparison/#prometheus-vs.-influxdb)

~~~
simple10
Good point on push-vs-pull. I'm biased towards push because of microservices
that behave like batch jobs. In effect, I'm using AMQP in a similar way as the
Prometheus pushgateway.

Agreed that InfluxDB is suited for event logging out of the box, but the March
2014 comparison of Influx is outdated IMO.

I'm using Heka to send numeric time series data to Influx and full logs to
ElasticSearch. It's possible to send full logs to non-clustered Influx in 0.8,
but it's useful to split out concerns to different backends.

I also like that Influx 0.9 dropped LevelDB in favor of BoltDB. There will be
more opportunity for performance enhancements.

~~~
jrv
Yeah, I would be really interested in hearing any arguments that would
invalidate my research (because hey, if InfluxDB were actually a good fit
for long-term storage of Prometheus metrics, that'd be awesome, because it's
Go and easy to operate).

However, if the data model didn't change fundamentally (the fundamental
InfluxDB record being a row containing full key/value metadata vs. Prometheus
only appending a single timestamp/value sample pair for an existing time
series whose metadata is only stored and indexed once), I wouldn't expect the
outcome to be qualitatively different except that the exact storage blowup
factor will vary.
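
To make the difference concrete (illustrative sizes, not the benchmark's
exact numbers):

    InfluxDB-style record (metadata repeated per sample):
        time=1421000000 value=0.93 host="web1" job="api" ...   ~100 bytes
    Prometheus-style append (series metadata indexed once, up front):
        (timestamp, value) added to an existing series          ~16 bytes

Multiply that per-sample gap across millions of samples and you end up with
a blowup in the rough order of what I measured; the exact factor just
depends on how long your label sets are.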

Interesting to hear that InfluxDB is using BoltDB now. I benchmarked BoltDB
against LevelDB and other local key-value stores around a year ago, and for a
use case of inserting millions of small keys, it took 10 minutes as opposed to
LevelDB taking a couple of seconds (probably due to write-ahead-log etc.). So
BoltDB was a definite "no" for storing the Prometheus indexes. Also it seems
that the single file in which BoltDB stores its database never shrinks again
when removing data from it (even if you delete all the keys). That would also
be bad for the Prometheus time series indexing case.

~~~
pauldix
InfluxDB CEO here. It's true that Bolt's performance is horrible if you're
writing individual small data points. It gets orders of magnitude better if
you batch up writes. The new architecture of InfluxDB allows us to safely
batch writes without the threat of data loss if the server goes down before a
flush (we have something like a write ahead log).

Basically, when the new version comes out, all new comparisons will need to be
done because it's changing drastically.

------
0xdeadbeefbabe
While monitoring is obviously useful, I'm not understanding the obvious
importance of a time series database. Can you collect enough measurements for
the time series database to be useful? I worry that I would have lots of
metrics to back up my wrong conclusions. I also worry that so much irrelevant
data would drown out the relevant stuff, and cause the humans to ignore the
system in time. I work with computers and servers, and not airplanes or
trains.

~~~
bbrazil
> Can you collect enough measurements for the time series database to be
> useful?

Yes, instrument everything. See
[http://prometheus.io/docs/practices/instrumentation/#how-to-...](http://prometheus.io/docs/practices/instrumentation/#how-to-instrument)
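
As a minimal sketch of what "instrument everything" looks like in a Go
service (using the client_golang library; handler package names have
shifted between versions):

    package main

    import (
        "net/http"

        "github.com/prometheus/client_golang/prometheus"
        "github.com/prometheus/client_golang/prometheus/promhttp"
    )

    // One counter, partitioned by handler and status code.
    var httpRequests = prometheus.NewCounterVec(
        prometheus.CounterOpts{
            Name: "http_requests_total",
            Help: "Total HTTP requests served.",
        },
        []string{"handler", "code"},
    )

    func main() {
        prometheus.MustRegister(httpRequests)
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            httpRequests.WithLabelValues("root", "200").Inc()
            w.Write([]byte("ok"))
        })
        http.Handle("/metrics", promhttp.Handler()) // the endpoint Prometheus scrapes
        http.ListenAndServe(":8080", nil)
    }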

> I worry that I would have lots of metrics to back up my wrong conclusions.

This is not so much a problem with time series as a question of epistemology.
Well chosen consoles will help your initial analysis, and after that it's down
to correct application of the scientific method.

> I also worry that so much irrelevant data would drown out the relevant stuff

I've seen many attempts by smart people to try and do automatic correlation of
time series to aid debugging. It's never gotten out of the toy stage, as there
is too much noise. You need to understand your metrics in order to use them.

------
zeus13i
After reading this thread and comparing Influx and Prometheus, I've concluded
that both look promising. I was going to go with Prometheus (as it's easier to
get started with), but I was really put off by the 'promdash' dashboard - it
uses iframes and depends on mysql. So I'm going with InfluxDB + Grafana and
I'll keep an eye out for developments.

~~~
jrv
The only time PromDash uses iframes is when you specifically add a
"frame" (iframe) widget to your dashboard to embed arbitrary web content.
Native Prometheus graphs and pie charts don't use iframes at all.

Some kind of SQL backend is a dependency for now, however.

~~~
zeus13i
Ah! Good to know, thanks. Somehow I missed that.

------
kanwisher
Would be interesting to see how this compares to InfluxDB.

~~~
sagichmal
[http://prometheus.io/docs/introduction/comparison/#prometheu...](http://prometheus.io/docs/introduction/comparison/#prometheus-vs.-influxdb)

------
mhax
I'm a little wary of monolithic solutions to monitoring/graphing/time series
data storage - it gives me flashbacks of nagios/zabbix ;)

I currently use a combination of sensu/graphite/grafana which allows a lot of
flexibility (albeit with some initial wrangling with the setup)

~~~
gambiter
I'm not sure what's wrong with nagios or zabbix... I use them both in
different capacities, and they are good at what they do.

Of course a piecemeal solution is more flexible, but as you said,
configuration can be a beast, so many people prefer monolithic systems.

------
tinco
In your architecture I see a single monolithic database server called
'Prometheus'. Does it shard? I can't find it in the documentation. You mention
it's compatible with TSDB, why did you choose to implement your own backend,
or is this a fork of TSDB?

The tech does look awesome though!

~~~
bbrazil
> Does it shard?

Currently you can shard vertically by hand, and in the future we may support
some horizontal sharding for when the targets of a given job are too many
to be handled by a single server. You should only hit this when you get into
thousands of targets.

Our roadmap[1] includes hierarchical federation to support this use case.

> You mention it's compatible with TSDB, why did you choose to implement your
> own backend, or is this a fork of TSDB?

Prometheus isn't based on OpenTSDB, though it has the same data model. We've a
comparison[2] in the docs. The core difference is that OpenTSDB is only a
database; it doesn't offer a query language, graphing, client libraries, or
integration with other systems.

We plan to offer OpenTSDB as a long-term storage backend for Prometheus.

[1]
[http://prometheus.io/docs/introduction/roadmap/](http://prometheus.io/docs/introduction/roadmap/)
[2]
[http://prometheus.io/docs/introduction/comparison/#prometheu...](http://prometheus.io/docs/introduction/comparison/#prometheus-vs.-opentsdb)

------
secure
I used to use InfluxDB plus a custom program that scraped HTTP endpoints and
inserted the results into InfluxDB.

After playing around with Prometheus for a day or so, I’m convinced I need to
switch to Prometheus :). The query language is so much better than what
InfluxDB and others provide.

~~~
jrv
(Prometheus author here)

Thanks, that's awesome to hear! Feel free to also join us on #prometheus on
freenode or our mailing list:
[https://groups.google.com/forum/#!forum/prometheus-developer...](https://groups.google.com/forum/#!forum/prometheus-developers)

------
paulasmuth
Shameless plug: This looks quite similar to FnordMetric, which also supports
labels/multi-dimensional time series, is StatsD wire-compatible, and supports
SQL as a query language (so you won't have to learn yet another DSL).

------
xfalcox
Guys, I've seen the libs for collecting service info, but how do I get
OS-level info, like load average, disk utilization, RAM, etc.?

I suppose that there's a simple service that we need to deploy on each server?

Any tips on this use case?

~~~
bbrazil
We support that use case, here's a guide:
[http://www.boxever.com/monitoring-your-machines-with-prometh...](http://www.boxever.com/monitoring-your-machines-with-prometheus)

~~~
wyldfire
That's great!

> For machine monitoring Prometheus offers the Node exporter

Is it possible for the frontend to utilize data from the cron-invoked sar/sadc
that already covers much of this data?

[http://sebastien.godard.pagesperso-orange.fr/](http://sebastien.godard.pagesperso-orange.fr/)

~~~
bbrazil
The default instrumentation that comes with node_exporter covers far more than
what sysstat provides. Retrieving data from /proc at every scrape also gives
you a more accurate timestamp, which helps reduce graph artifacts.

As an aside, if you have machine-level cronjobs you want to expose metrics
from, you can use the textfile[1] module of the node_exporter, which reads in
data from *.prom files in the same format as accepted by the Pushgateway[2].

[1][https://github.com/prometheus/node_exporter/blob/master/coll...](https://github.com/prometheus/node_exporter/blob/master/collector/textfile.go)
[2][https://github.com/prometheus/pushgateway](https://github.com/prometheus/pushgateway)
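
For example, a nightly backup job could drop a file like this into the
configured textfile directory (the metric name here is illustrative; write
to a temp file and rename so a scrape never sees a partial file):

    # backup.prom
    backup_last_success_timestamp_seconds 1421000000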

------
reinhardt1053
Promdash, the dashboard builder for Prometheus, is written in Ruby on Rails
[https://github.com/prometheus/promdash](https://github.com/prometheus/promdash)

------
corford
This looks great! Is an official Python client library on the roadmap?

~~~
bbrazil
Yes, I've got something working at the moment. It needs cleanup, docs,
unit tests, etc.

If you want to help out, it's up at
[https://github.com/brian-brazil/client_python](https://github.com/brian-brazil/client_python)

------
XorNot
Huh, how fortuitous. I've been looking for this exact type of thing and HN
gives me a great starting place to evaluate.

------
rgj
Is it me or is it impossible to navigate the documentation on an iPad?

~~~
jrv
I don't own any Apple devices, so I can't test, but it works well on my
Android phone (being Bootstrap and using responsive design). The top menu
collapses into a button with three horizontal bars which shows the menu upon
click. The documentation-specific navigation is always displayed expanded when
in the docs section of the site, but the contents are displayed underneath it.
What does it look like for you?

