
Show HN: LogDNA – Set up Kubernetes logging with 2 kubectl commands - leeab
https://github.com/logdna/logdna-agent#kubernetes-logging
======
leeab
Hi everyone, I'm a co-founder / CTO of LogDNA. We were in Y Combinator's W15
batch and launched our cloud logging platform last year
(https://news.ycombinator.com/item?id=11074537)

Based on user feedback, we're happy to announce our super easy Kubernetes
integration. No more wrestling with Fluentd configs, fiddling with
Elasticsearch knobs, or following 30-step guides where you cut and paste other
people's configs.

    kubectl create secret generic logdna-agent-key --from-literal=logdna-agent-key=<YOUR LOGDNA API KEY>
    kubectl create -f https://raw.githubusercontent.com/logdna/logdna-agent/master/logdna-agent-ds.yaml

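Once both commands have run, a quick sanity check looks like this (assuming
the DaemonSet in that manifest is named logdna-agent):

    # One agent pod should be scheduled per node:
    kubectl get daemonset logdna-agent

    # An agent pod's own output confirms it's tailing and shipping logs:
    kubectl logs <one of the logdna-agent pods>
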
We're looking for feedback on how we can improve this integration. We
currently extract Kubernetes metadata: pod name, container name, container ID,
and namespace.

Feel free to try it out. Happy to answer any questions!

~~~
weitzj
Also interesting:

I used LogDNA's Docker integration before moving to Kubernetes.

The integration was set up with Docker Compose, once per Compose environment.

Logs containing `err` were marked red as errors in LogDNA, and I could trigger
an alarm on them.

The same containers with the same logs now seem to be marked as `info` under
Kubernetes. I'm not sure why that is, or how I can get the same behavior as
before. Is there a way to tell Kubernetes about stderr/stdout? Or how would I
get LogDNA to treat a log as an error instead of info?
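
For reference, here's how I'd reproduce it with a throwaway pod that writes a
line to each stream (pod name is arbitrary):

    kubectl run stream-test --image=busybox --restart=Never -- \
      sh -c 'echo "hello on stdout"; echo "hello on stderr" 1>&2'

    # Both lines come back, but the stream isn't labeled at this level:
    kubectl logs stream-test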

~~~
leeab
Hmmm this may have been a bug. It should show stderr as err. Let me look into
this.
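
In the meantime, the stream info is definitely recorded on the node itself:
with Docker's default json-file driver, every log line carries its origin
stream, and stderr lines are what should surface as err (output shape below is
illustrative, IDs elided):

    # On a node, with Docker's default json-file logging driver:
    tail -n 1 /var/lib/docker/containers/<container-id>/<container-id>-json.log
    # => {"log":"hello on stderr\n","stream":"stderr","time":"..."}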

------
manojlds
Seems odd that this is getting upvotes. Most stuff is 2 (actually 1) commands
away in Kubernetes ONCE you have the necessary manifests.
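
With the API key Secret folded into the same manifest (separated by `---`),
it's literally one, e.g. (file name hypothetical):

    kubectl create -f logdna-all-in-one.yaml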

~~~
andrewstuart2
I don't think it's odd that it's getting upvotes, but centralized logging is
already available (as an add-on) in k8s. I have a Helm chart that's a one-shot
for setting up ELK and Fluentd as a system service: it aggregates all Docker
stdout logs and tags each stream with k8s metadata, so you can easily slice
and dice your logs even at scale. It also includes a cronjob running
es-curator so that logs older than a configurable threshold are automatically
deleted.

e.g. output from my dev cluster:

    ~CK/elk/templates git:(master) kc cluster-info
    Kubernetes master is running at https://192.168.16.16:8443
    Elasticsearch is running at https://192.168.16.16:8443/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
    Heapster is running at https://192.168.16.16:8443/api/v1/proxy/namespaces/kube-system/services/heapster
    KubeDNS is running at https://192.168.16.16:8443/api/v1/proxy/namespaces/kube-system/services/kube-dns
    monitoring-grafana is running at https://192.168.16.16:8443/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana

I think people want easy management of this, but I do wonder how successful
this particular integration will be when self-managed alternatives are such
low-hanging fruit. Where I think LogDNA will have a win is that you probably
also have things running outside k8s that LogDNA can integrate. So if you want
all your k8s and non-k8s logs aggregated, and don't want to mess with it, then
you might go with LogDNA.

~~~
ShaneOG
Is this extremely useful chart public?

~~~
andrewstuart2
It is now: https://github.com/andrewstuart/helm-charts
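
Install should be something like this (chart path guessed from the
elk/templates directory in the snippet above):

    git clone https://github.com/andrewstuart/helm-charts
    helm install ./helm-charts/elk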

------
jdotjdot
Congrats! LogDNA is one of my favorite products. We're likely moving to
Kubernetes shortly and were actually worried about logging, so this makes my
life a lot easier.

~~~
leeab
Thanks JJ! Yeah, we hope to earn your Kubernetes business :)

------
jazoom
I've used LogDNA for a while now. It has been a good experience.

~~~
leeab
Awesome! And definitely let us know if we can improve in any way.

