
Ask HN: What do you use for Log Management? - shubhamjain
In the past few years, there has been a surge in the number of log management solutions: Loggly, LogDNA, Scalyr, Sumo Logic. Which one is used by you / your company?
======
dz0ny
Papertrail, super friendly and insightful support.

So let me elaborate. Mostly what you will get from support is: "We are fixing
the problem", but in our case they were specific: "We have problems with the
Heroku logspout connection; 'heroku log' should still work." And another time
we went a bit over our limit, so they upped our plan for free for a short
period so we could figure out what the problem was. Alerts are also what we
use the most (no limits, no delays), which I cannot say for the other providers.

Good work Papertrail, if you are reading this.

~~~
joepvd
Using Papertrail as well, but I really miss my terminal and awk, grep and less
here. Or even regex search. I know I can download the archives, and I do that,
but that puts the output of all in one file. Saving different topics in
different files just makes sense, IMHO...

Still getting used to the ways of the cloud, I suppose...

~~~
akramhussein
Have you checked out the Papertrail CLI[0]? Not sure if that helps.

[https://github.com/papertrail/papertrail-cli](https://github.com/papertrail/papertrail-cli)

~~~
joepvd
Ha! No, I did not. I will, thanks for posting!

------
markpapadakis
We built our own. All events are published to Tank
([https://github.com/phaistos-networks/TANK](https://github.com/phaistos-networks/TANK)),
and we have a bunch of consumers that consume from various Tank topics. They
process the data and either publish to other Tank topics (to be consumed by
other services) or update state on various services.

\- For data exploration, we use memSQL. We keep the last day’s worth of data
there (we DELETE rows to keep the memory footprint down), and because most of
the time it’s about understanding something that happened recently, it’s
almost always sufficient. Each row contains the event’s representation as
JSON, and we also have a few more columns for faster lookups. memSQL’s JSON
support is great (we used MySQL for that, but it was too slow), so we can take
advantage of joins, aggregations, windowing, etc.

\- For data visualisation, we use ELK (but it’s pretty slow), a tool our ops
folks built (“otinanai”: [https://github.com/phaistos-networks/otinanai](https://github.com/phaistos-networks/otinanai)),
and we have a few smaller systems that generate graphs and reports.

\- For alerts and tickets, our ops folks built another tool that monitors all
those events, filters them, and executes domain-specific logic that deals with
outliers, notification routing, and more.

This solves most of our needs, but we plan to improve this setup further by
monitoring even more resources and introducing more tools (Tank consumers) to
get more out of our data.

~~~
atmosx
Great tools, congrats. The name 'otinanai', though, is rather dodgy (for those
who know what it means), although I can see it stems from _[...] designed to
graph anything_.

~~~
Normal_gaussian
For those of us who don't: I found a Yahoo Answers page [0] which appears to
suggest it is a dismissive term similar to 'whatever' or 'hot air'.

[0]
[https://answers.yahoo.com/question/index?qid=20100103042837A...](https://answers.yahoo.com/question/index?qid=20100103042837AAcczlv)

------
tkfx
We covered this topic quite extensively on the Takipi blog. Grepping through
huge volumes of unstructured text is quite frustrating.

Sumo Logic, Graylog, Loggly, PaperTrail, Logentries, Stackify:
[http://blog.takipi.com/how-to-choose-the-right-log-managemen...](http://blog.takipi.com/how-to-choose-the-right-log-management-tool/)

ELK vs Splunk: [http://blog.takipi.com/splunk-vs-elk-the-log-management-tool...](http://blog.takipi.com/splunk-vs-elk-the-log-management-tools-decision-making-guide/)

Hosted ELK tools: [http://blog.takipi.com/hosted-elasticsearch-the-future-of-yo...](http://blog.takipi.com/hosted-elasticsearch-the-future-of-your-elk-stack/)

We're actually building (and using) a log alternative called OverOps
([https://www.overops.com](https://www.overops.com)). It's a native JVM agent
that adds links to each log warning / error / exception that lead to the
actual variable state and code that caused them, across the entire call stack.
Disclaimer: I work there; happy to answer any questions.

~~~
real_joschi
FWIW, Graylog is _not_ SaaS but can/should/must be installed on-premise:
[https://www.graylog.org/](https://www.graylog.org/)

~~~
tkfx
Whoops, thanks, edited the last message

------
jorrizza
Graylog is working quite well for us so far.
[https://www.graylog.org/](https://www.graylog.org/)

~~~
whocanfly
Can someone help me understand whether Graylog is better/easier/simpler than
ELK? From a quick glance through the docs, it looks very similar.

~~~
ajsalminen
One thing worth mentioning that it has over ELK is authentication in the open
source version. You can also configure more of it through the web interface
than with ELK.

------
FooBarWidget
We use plain old syslog, configured to log to a remote log host. The
connection is secured with TLS and old log files are compressed with LZMA.

Our analysis frontend is plain old SSH, bash, grep and less.
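
A minimal sketch of the client side of such a setup, assuming rsyslog with its gtls netstream driver (the hostname, port, and certificate path below are placeholders, not from the original comment):

```
# /etc/rsyslog.d/50-remote.conf -- forward everything to the central
# log host over TLS (paths and hostname are hypothetical)
global(DefaultNetstreamDriver="gtls"
       DefaultNetstreamDriverCAFile="/etc/ssl/logs/ca.pem")

*.* action(type="omfwd"
           target="loghost.example.com" port="6514" protocol="tcp"
           StreamDriver="gtls" StreamDriverMode="1"
           StreamDriverAuthMode="x509/name"
           StreamDriverPermittedPeers="loghost.example.com")
```

On the log host, rotation with `compresscmd` pointed at xz would give the LZMA compression of old files mentioned above.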

~~~
tstack
Give lnav ([http://lnav.org](http://lnav.org)) a try, it's a powerful tool for
analyzing log files using a terminal.

------
pbowyer
I'm using [http://logentries.com](http://logentries.com)

The one I really wanted to use/like was
[http://scalyr.com](http://scalyr.com). However even after their redesign, I
still can't use their query language. With LogEntries, it's pretty natural.

------
TeeWEE
I was used to Google Cloud logs (they come for free with App Engine). Now I'm
working with an AWS-based system with an ELK stack. Its UI is horrible,
finding the right log entries is hell, and it often breaks and somebody has to
update it. I hope we can move to some cloud log provider soon.

------
renaud92
Logmatic.io was not mentioned, but we are better known in Europe so far.
Disclaimer: I work there. We invest a lot in analytics, parsing/enrichment,
and a fast & friendly UX. We try to be a premium solution at the same
reasonable price as others, and our users tend to say great things about us
(e.g. [http://www.capterra.com/log-management-software/](http://www.capterra.com/log-management-software/)).
Happy to answer if you have any questions. :)

~~~
x6i4uybz
We are using Logmatic.io on my team (switched from Logentries). Our stack is
based on Mesos/Docker with plenty of microservices. Sending logs and building
analytics are very easy, and the clickable dashboards are just amazing.

------
crummy
Logentries. Not sure if I'd say I'm satisfied, but I haven't found anything
better.

Pros:

* Decent Java logging integration (some services treat things line-by-line, which is a deal-breaker for things like multi-line Java exceptions)

* Reasonably priced

* Alerts are kinda nice

* Hosted

Cons:

* Sometimes UI maxes my Chrome CPU

* Live mode not stable at all

* UI is clunky, to say the least. It's not always clear what the context of a search is, and the autocomplete is obnoxious. I heard they have a new UI coming out sometime; who knows when

------
wodow
LogDNA: powerful, easy to get started with, and still improving. Using it in
parallel with Papertrail, and instead of Logentries (which we had horrific
problems with earlier in the year).

~~~
paullth
We had an awful time with Logentries: "live" mode never working, a bizarre
search facility, overcharging, terrible UX. We've been with LogDNA for about
2 months and are quite happy with it.

------
k33n
Rsyslog+ELK all day. Every aspect can be scaled, and cost can be easily
controlled by managing our own deployments.

~~~
jhgg
We use EK but not L, instead writing own daemon that rsyslog sends loglines to
and bulk inserts them into ES. We use kibana & grafana for visualization. We
index approx 20k log-lines per sec (at peak) w/o a sweat (whereas logstash
would choke up fairly often). A little over half a billion log lines a day -
retained for a week - costs us around $800/mo on GCE (for storage & compute).
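
The bulk-insert step such a daemon performs can be sketched with Elasticsearch's `_bulk` API, which takes newline-delimited JSON. The index name and log fields below are made up for illustration:

```python
import json

def build_bulk_body(index, docs):
    """Build the newline-delimited JSON payload expected by
    Elasticsearch's /_bulk endpoint: one action line, then the
    document itself, for each doc."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"  # _bulk requires a trailing newline

# A small batch of syslog-style lines (hypothetical fields)
docs = [
    {"host": "web-1", "severity": "info", "msg": "request served"},
    {"host": "web-2", "severity": "error", "msg": "upstream timeout"},
]
body = build_bulk_body("logs-2016.11.01", docs)
# POST body to http://<es-host>:9200/_bulk with the NDJSON content type
```

Batching many log lines into one `_bulk` request like this, rather than indexing documents one at a time, is the main reason a thin custom daemon can outpace a heavyweight pipeline.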

~~~
lmedovarsky
We use ENK instead, with N as nxlog; its open-source release is great for many
backends. Unlike Logstash, it's fast and written in C. It's scriptable, with no
downtime for reconfiguration, and extendable with Ansible & co. through
include files.

------
janvdberg
We use Splunk, which is pretty great but costly. We are now also in the
process of checking out Elasticsearch.

------
xbryanx
Moving several systems over to the ELK stack (Elasticsearch, Logstash, and
Kibana).

------
zer0gravity
Whatever solution you use to store your logs, I would suggest generating them
as events. This will help you reconcile two important aspects that have been
separated for too long with no real reason: logging and analytics. It may
require a little more effort, but I believe it's worth it.

I've expanded on this idea here [1]

[1] - [https://github.com/acionescu/event-bus#why](https://github.com/acionescu/event-bus#why)
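
A minimal sketch of what logging events rather than free-form strings might look like, using only the Python standard library (the event name and fields are made up for illustration):

```python
import json
import logging
import sys
import time

logging.basicConfig(stream=sys.stdout, level=logging.INFO, format="%(message)s")

def log_event(name, **fields):
    """Emit one event as a single JSON log line, so the same stream
    can serve both debugging (grep a field) and analytics (aggregate it)."""
    record = {"event": name, "ts": time.time()}
    record.update(fields)
    line = json.dumps(record)
    logging.getLogger("events").info(line)
    return line

# Instead of logging.info("user 42 checked out, cart total 19.99"):
log_event("checkout_completed", user_id=42, cart_total=19.99)
```

Because every line is structured, the downstream store (ELK, memSQL, whatever) can filter and aggregate on fields without brittle regex parsing.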

~~~
shubhamjain
Interesting! This is something I have often thought about. My own experience
of log aggregation is limited to the ELK stack and Loggly (for a brief time),
where the setup worked fine but the workflow didn't; we just stopped browsing
logs after a while. Although a giant centralised system for logs sounds
incredibly convenient, making sense of the logs becomes a huge problem, and
then it's just easier to ignore the log system.

I am sure the solutions discussed here have features to overcome this (filters
/ alerts), but IMHO we'd be better off collecting fewer things: limited app
events that have fixed formatting and are easier to make use of in debugging
and monitoring.

~~~
zer0gravity
If you come to think of it, logs are app events. You actually want to collect
as many events as you can, within performance constraints, and analyze the
hell out of them.

You would also get all sorts of benefits, because then you can correlate the
app events with user events, which makes it easier to track bugs and
unintended behaviour.

------
jordanthoms
Previously used Logentries and Papertrail, but they became expensive as our
log volumes grew, and flexibility was lacking.

Now we use self-hosted ELK (Elasticsearch, Logstash & Kibana), and I'm not
itching to go back to any of the hosted services. It's not as good as
something like Papertrail for tailing log streams live (although that isn't
very useful at larger scale), and the UI of Kibana does take a bit of getting
used to.

------
joeyspn
Happy Papertrail customer here...

We use
[https://github.com/gliderlabs/logspout](https://github.com/gliderlabs/logspout)
to forward all our Docker logs to Papertrail... it's like watching your
Node.js services running in your terminal. Seamless experience.

------
jakozaur
Sumo Logic: [https://www.sumologic.com/](https://www.sumologic.com/)

Disclaimer: I work there :-), happy to answer any of your questions.

~~~
wlk
I wanted to try out sumologic at some point, but it sucks that you cannot sign
up with Gmail email account

~~~
jakozaur
Right now we do allow @gmail emails. A few years ago when we started, our
initial focus was larger clients with a lot of personal touch. We later added
a self-serve model: public prices, credit cards, easier setup of popular data
sources... As a startup, it's really tricky to address all segments at once.

------
hbz
Self hosted ELK stack, not HA at the moment. Will move the ES nodes to AWS's
managed service once I'm ready to make it more resilient.

------
kevinshinobi
Been using LogDNA for 4 months now with no complaints. Compared to Logentries,
which we used previously, I found the search speed faster.

------
scanr
Internally hosted ASP.NET application:

[https://getseq.net](https://getseq.net)

------
scrollaway
We're hosted on AWS and used to use Papertrail. We found it super useful, but
it got really expensive. Since the recent CloudWatch UI improvements, we're
down to using only CloudWatch Logs. The UI still sucks quite a lot, but not
enough to justify tripling our logging costs.

------
cbismuth
Java / Logback / Filebeat 5.0.0 / Elasticsearch 2.3 / Kibana 4

~~~
cbismuth
Waiting for the Elastic stack to stabilize on version 5.0.

------
mohanlal1803
Hi all, the stack we use in our organisation is: 1) Fluentd - for log line
transporting, 2) Elasticsearch - for indexing, 3) Kibana - for viewing
(remote log viewer).

------
xyz-x
Logary – [https://github.com/logary/logary](https://github.com/logary/logary)
with F#, InfluxDB and ELK.

------
thesorrow
We use ELK + etsy/411
([https://github.com/etsy/411](https://github.com/etsy/411)) for alerting.

------
d33
Couldn't help but say "toilet":

[http://bash.org/?76909](http://bash.org/?76909)

------
somedanishguy
We're currently looking into using Humio.

------
throwaway2016a
We use AWS Cloudwatch Logs for aggregation. For reporting we are still trying
to find a solution.

------
exceptione
I see a lot of solutions. What do you recommend for setups with 1 to 3 small
servers?

------
alienjr
Elasticsearch+Fluentd+Kibana for logs and KairosDB+Fluentd+Grafana for
metrics.

------
toddkazakov
ELK with AWS-hosted Elasticsearch. Works like a charm with Kubernetes.

------
zp-j
No one mentioned Flume + Kafka? It seems to be a mature solution.

------
eloycoto
I used to work with CloudWatch + awslogs and it works like a charm.

------
aprdm
Used to use logentries.

Nowadays, on-premise: Logstash + Elasticsearch + Kibana.

------
sairamkunala
missed splunk?

~~~
cmeerbeek
Missed it as well. I am a freelance Splunk consultant and use it on a daily
basis with great results for all my clients. Price can be an issue, but user
friendliness and the number of features almost always win.

------
vacri
Papertrail is beautiful for watching loglines roll in.

ELK (logstash, self-hosted) is... consuming. The software is free, but it
takes a lot of compute resources, and isn't trivial to come to grips with
(setup or daily use). If you can spare the staff-hours, ELK can be pretty
powerful, though.

