> 1) 'docker logs' relies on the json-file log driver, which means the log file is stored in /var/lib/docker/..... and grows forever. No rollover. No trimming. FOREVER.
Even without that issue, I'd prefer my logs to be centralised. So, as well as my app, should I be running a logging daemon, process monitoring, etc. for each Docker instance?
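(For anyone hitting the unbounded growth above: the json-file driver writes to /var/lib/docker/containers/<container-id>/<container-id>-json.log, and a common host-side stopgap is plain logrotate. This is a sketch, not something Docker ships; the schedule and retention count are made-up values, and copytruncate is needed because Docker keeps the file handle open:)

    # /etc/logrotate.d/docker-json -- host-side rotation sketch
    /var/lib/docker/containers/*/*-json.log {
        daily
        rotate 7          # keep a week of compressed logs (arbitrary)
        compress
        missingok
        copytruncate      # rotate in place; Docker holds the fd open
    }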
What we do at work is have our containers be in charge of talking 'out' to a given address, in a given format, for logs, and configure things so that entire sets of machines end up speaking to the same log server (an ELK stack, in our case). Process monitoring is done per host: there are Docker-aware tools that look at the host, and can peer into the containers, to do this basic tracking.
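(The receiving end of that "talk out" pattern can be as small as a Logstash pipeline listening on a fixed port for JSON lines. A minimal sketch only; the port and hostname are placeholders, and the exact option names vary by Logstash version:)

    # logstash.conf -- minimal receiver sketch
    input {
      tcp {
        port  => 5000            # containers ship JSON lines here
        codec => "json_lines"
      }
    }
    output {
      elasticsearch {
        host => "elasticsearch.internal"   # hypothetical hostname
      }
    }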
People are not kidding, though, when they say that everything gets very complicated. All the things we did by convention and manual configuration in hand-babysat VMs now have to be codified and automated.
Docker is going to be a great platform in 3 years, when the ecosystem matures. Today, it's the wild west, and you should only venture forth if dedicating a big team of programmers and system administrators just to automation doesn't seem like a big deal.
Similar to hibikir's reply, what we do is attach a volume container to all app containers, and logs are written to that. We then run an ELK stack to parse and view the logs. For process monitoring we run cAdvisor on each host to view the resource usage of each container. Since your apps are containerized, it's easy to monitor them for resource usage, hook them into Nagios, etc. We have built a custom GUI to do all of this.
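(Roughly, that setup looks like the following; the container and image names here are placeholders, while the cAdvisor invocation is the standard one from its docs:)

    # shared log volume, attached to each app container (names are made up)
    docker create -v /var/log/app --name logs busybox
    docker run -d --volumes-from logs my-app

    # per-host resource monitoring with cAdvisor
    docker run -d --name=cadvisor -p 8080:8080 \
      -v /:/rootfs:ro \
      -v /var/run:/var/run:rw \
      -v /sys:/sys:ro \
      -v /var/lib/docker/:/var/lib/docker:ro \
      google/cadvisor:latest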