We currently run InfluxDB 0.10.0-nightly-614a37c (I have yet to upgrade it to the stable release) on a single DigitalOcean instance with 8 GB of RAM and 30-something GB of storage. The previous stable release (0.9.x) didn't fare very well, even after we significantly reduced the amount of data we were sending (a lot of it we didn't really need).
Switching to 0.10.0-nightly-614a37c, combined with switching to the TSM storage engine, resulted in a very stable InfluxDB instance. So far my only gripe has been that some queries can get pretty slow (e.g. counting a value in a large measurement can take ages), but work is being done on improving the query engine (https://github.com/influxdb/influxdb/pull/5196).
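As a concrete illustration of the slow case, here is a minimal sketch using the influxdb Python client; the database, measurement and field names are made up for illustration and are not our actual schema:

    # Hypothetical example of the kind of query that gets slow: an aggregate
    # over a measurement holding tens of millions of points.
    from influxdb import InfluxDBClient

    client = InfluxDBClient(host='localhost', port=8086, database='gitlab')

    result = client.query(
        'SELECT count("duration") FROM "rails_transactions" '
        'WHERE time > now() - 30d'
    )
    print(list(result.get_points()))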
To give you an idea of the data (a rough sketch of this kind of setup follows the list):
* Our default retention policy is currently 30 days
* 24 measurements, 11,975 series. Our largest measurement (which tracks the number of Rails/Rack requests) has a total of 28,539,279 points
* Roughly 2.3 GB of the 8 GB of RAM is being used
* Roughly 4 GB of data is stored on disk
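As mentioned above, here is a rough sketch of what this kind of setup looks like through the influxdb Python client. The policy, database, measurement and tag names are made up for illustration; the point of the write example is that every distinct combination of tag values on a measurement counts as a separate series:

    from influxdb import InfluxDBClient

    client = InfluxDBClient(host='localhost', port=8086, database='gitlab')

    # A 30 day default retention policy; points older than that are dropped
    # automatically.
    client.query(
        'CREATE RETENTION POLICY "default_30d" ON "gitlab" '
        'DURATION 30d REPLICATION 1 DEFAULT'
    )

    # A single point in a hypothetical measurement. Each unique tag
    # combination (host, method, ...) becomes its own series.
    client.write_points([{
        'measurement': 'rails_transactions',
        'tags': {'host': 'worker1.example.com', 'method': 'GET'},
        'fields': {'duration': 24.3},
    }])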
This whole setup is used to monitor GitLab.com as well as to aid in making things faster (see https://gitlab.com/gitlab-com/operations/issues/42 for more information on the ongoing work).

Unfortunately, I need 2+ instances with Active/Active or failover support before I can seriously consider anything for production, which is why I haven't touched InfluxDB beyond some light testing.