
That's what the DEBUG level is for, as explained at the end of the section you're referring to. So it would be like:

    D: Connecting to DB
    D: Reading config file
    D: Connecting to API server
    D: Sending updated record
    I: Updated user record for $USER
Or, if it fails, the last line is replaced by:

    E: Failed to connect to API server; could not update user record. [IP=..., ErrNo=..]



Personally I'm not a fan of DEBUG level (unless you always keep it on). You can have a transient system issue that appeared for 30 minutes and then went away. If DEBUG logs weren't turned on at the time, they won't help me.


Of course you always keep it on — the point of the different levels is that you can filter them out at read time (e.g. to eyeball logs, or to alert when the rate of >= WARN is too high)


This traditionally has not been the use case for log levels.

syslog() and setlogmask() being the obvious examples.
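
For instance, Python's syslog bindings expose the same mask; a minimal sketch of emit-time filtering (Unix only):

    import syslog

    syslog.openlog("myapp")

    # Drop everything below INFO at emit time: DEBUG messages never
    # leave the process, rather than being filtered out at read time.
    syslog.setlogmask(syslog.LOG_UPTO(syslog.LOG_INFO))

    syslog.syslog(syslog.LOG_DEBUG, "connecting to DB")    # discarded
    syslog.syslog(syslog.LOG_INFO, "updated user record")  # emitted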

Many (if not all) companies I have worked at filtered out everything DEBUG at compile time to improve performance in production as well.

The best option I've seen is putting everything in DEBUG or lower into a memory ring buffer per process that's dumped and collated with other logs at each ERROR/FATAL log.
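
Python's stdlib has a close cousin in logging.handlers.MemoryHandler; it flushes rather than overwrites when full, so it's not a strict ring buffer, but the dump-everything-on-ERROR behaviour is the same:

    import logging
    import logging.handlers

    target = logging.StreamHandler()  # where buffered records land on flush

    # Hold up to 10k records in memory; replay the whole buffer to the
    # target handler only when a record at ERROR or above arrives.
    buffered = logging.handlers.MemoryHandler(
        capacity=10_000,
        flushLevel=logging.ERROR,
        target=target,
    )

    log = logging.getLogger("app")
    log.setLevel(logging.DEBUG)
    log.addHandler(buffered)

    log.debug("connecting to DB")          # held in memory
    log.error("failed to connect to API")  # dumps the buffer, context and all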


That is actually quite nifty. That way you can carry debug logs around and only materialise/send them when an error occurs and you're actually interested in them, without having to log everything at ERROR and see it all the time.

Does anyone know of an implementation for this in any of the Java logging frameworks?


I've actually used this idea in production and it worked great.

In my web app if nothing unexpected happens, only INFO/LOG level stuff is pushed to logs. If, however, I get an exception, I will instead print out all the possible traces. I.e. I always store all the logs in-memory and choose what should be printed out based on the success/failure of the request.

Now, of course, this is just a web API running in AWS Lambda, and I don't have to care overly much about the machine just silently dying mid-request, so this might not work for some of you, but this works great for my use-case and I'm sure it will be enough for a lot of people out there.
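
A minimal sketch of the per-request pattern (handle_request and process are illustrative names, not the actual app):

    import traceback

    def handle_request(event):
        buffer = []  # every log line for this request, at every level

        def log(level, msg):
            buffer.append(f"{level}: {msg}")

        try:
            log("DEBUG", "parsing request body")
            result = process(event)  # hypothetical business logic
            log("INFO", "request succeeded")
            # Happy path: emit only the INFO-and-above lines.
            for line in buffer:
                if not line.startswith("DEBUG"):
                    print(line)
            return result
        except Exception:
            # Failure path: dump the full trail, DEBUG included.
            for line in buffer:
                print(line)
            traceback.print_exc()
            raise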


Fwiw, Sentry does that out of the box: anything below warning (I think) is used as context for “actual events”.


Performance degradation is noticeable with DEBUG level in production.


Performance degradation is noticeable with excessive logging; the level is irrelevant.


Debug was not designed to always be left on. That's why you need to enable it in the first place.


Leaving debug levels on in production will bankrupt most companies.



Nope, this produces unmanageable log volume.


How does making these DEBUG logs into INFO logs make the volume manageable?

Or, to flip that around, if you take a program that produces a manageable amount of INFO logs, and change some of those INFOs to DEBUGs, how does that suddenly become unmanageable?


My point is that DEBUG level logging is (hopefully!) not on by default, and that this is what makes the production log volume manageable.

My experience has been that 1 customer-facing byte tends to generate something like ~10 DEBUG-level telemetry bytes. That level of amplification can't feasibly be sustained at nontrivial request volumes: serve 1 TB of responses a day and you're writing ~10 TB of logs, and your logging infrastructure would dwarf your production infrastructure.


Why not have the program write logs to a temp file somewhere at trace/debug level, and only output the standard level to the console?
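
With Python's stdlib logging, for example, that's a two-handler setup (the path is illustrative):

    import logging

    log = logging.getLogger("app")
    log.setLevel(logging.DEBUG)  # let everything through to the handlers

    # Full detail goes to a file...
    file_handler = logging.FileHandler("/tmp/app-debug.log")
    file_handler.setLevel(logging.DEBUG)

    # ...while the console only sees INFO and above.
    console = logging.StreamHandler()
    console.setLevel(logging.INFO)

    log.addHandler(file_handler)
    log.addHandler(console)

    log.debug("connecting to DB")    # file only
    log.info("updated user record")  # file and console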


Wouldn't that end up slowing down the whole system by writing to a file, though?


Ah, I suppose it depends on how much you're doing and/or how verbose your logs are. If you've got lines speeding past then yeah, maybe I wouldn't write them all.


Writing verbose logs is very expensive.


It depends on what costs what.

If there are plenty of resources and the operation you're logging is itself expensive, then even verbose logging is trivial by comparison.

And if you are beset by failures, it is much cheaper than making your developers guess.


The log file should, by default, have all necessary info to debug the issue - or at the very least enough information to narrow down the problem so far that you can replicate it locally.

I've spent way too many hours adjusting log levels in a client environment and trying to replicate the issue without breaking or losing data while doing it.


Debug level is not for production, it's for debugging.


Have you never needed to debug an issue that was only happening on production?


Even though it can be lost to an overly optimistic configuration, I prefer to use the TRACE level(s) and leave DEBUG for, well, debugging.


We're discussing a NullPointerException situation; how is that not debugging?


I think OP's case was not debugging a crash, just that if a crash happens he would like the log line to appear before the operation rather than after it. The log statements would be there regardless of whether there's a crash, just in case.



