Log4r for Ruby on Rails: Scalable and Robust Logging for Web Applications (mmlac.com)
42 points by mmlac on Aug 19, 2013 | hide | past | favorite | 21 comments

This is really cool. Rails is currently looking for feature recommendations for 4.1. You (or the author) might want to share this with them. I can see it being a pretty popular idea.


It would be a good idea to get some more robust logging into Rails 4.1. Rails has grown up, and so have many of the applications using it. Rails logging starts to break down when you need to track down the specific server and PID associated with an error. If you use a log aggregator without PID and server information, log messages interleave and become hard to associate visually when you have 100 Rails processes distributed across 10 servers.
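A minimal sketch of that idea with the stdlib Logger: stamp every line with hostname and PID so interleaved lines from many processes on many servers stay attributable. The exact format here is illustrative, not a Rails default.

```ruby
require "socket"
require "time"
require "logger"

# Tag each log line with timestamp, hostname, and PID so lines from
# many processes can be told apart once they land in an aggregator.
logger = Logger.new($stdout)
logger.formatter = proc do |severity, time, _progname, msg|
  "#{time.utc.iso8601} #{Socket.gethostname} pid=#{Process.pid} #{severity} -- #{msg}\n"
end

logger.info("request started")
```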

Which is where Scout, Honeybadger, and New Relic come in. But I agree, a more robust version for development environments would be nice.

Scout, New Relic, and Honeybadger are great tools to get started with. Their limitation is that they do what they do, the way they think it should be done. If you need different metrics, another way to split up the data, or to correlate your Mongo logs with app requests via New Relic, you are out of luck.

This is when creating your own logs with exactly the data you need becomes valuable: you can measure what you need, how you need it.

My first post set the foundation for being able to easily get the right metrics and information by creating solid logs. That does not mean this has to completely replace the aforementioned tools; on the contrary, it can supplement them to give you exactly the data you need, when you need it.

Stuff you need to pay for? There is surely a case for including some production logging in Rails by default?

I would argue that the standard Rails logs are perfectly fine for local development and debugging. New Relic even ships a local version that allows you to track down performance issues.

This works great for one or two requests triggered by the developer, but fails when thousands of requests bombard a live application split over multiple servers with multiple Rails instances running on each.

Log4r[1] is nice, but last I checked it isn't as well maintained or feature rich as the Logging[2] gem. [1] https://rubygems.org/gems/log4r [2] https://rubygems.org/gems/logging

Yes, I think the Logging gem gives you a lot out of the box (like email notifications), and feels really easy to work with. It also plays nicely with gems like lograge[1], which condenses your log lines very nicely whilst still keeping the most important info.


Lograge's functionality imho can be replaced by Log4r's formatters and the notification code in the initializer.

Log4r does feel very robust and customizable, but also feels much more verbose and heavy.

You get pretty much all of that, with less verbose configuration, from the Logging gem + lograge, yet they are still relatively easy to tweak. The email notification of the Logging/logging-rails gems in particular is really great, and you can even define message thresholds (no more than one email per minute, only if X number of errors, etc.)
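For reference, a basic Logging-gem setup along these lines might look like the initializer sketch below. The appender and layout calls follow my understanding of the gem's API; verify against its documentation before copying (the email appender and thresholds mentioned above are left out here).

```ruby
# config/initializers/logging.rb -- hedged sketch of a Logging gem setup.
require "logging"

# Pattern layout: date, padded level, logger name, message.
layout = Logging.layouts.pattern(
  pattern: "[%d] %-5l %c: %m\n"
)

# Register a stdout appender using that layout.
Logging.appenders.stdout("stdout", layout: layout)

logger = Logging.logger["MyApp"]
logger.add_appenders("stdout")
logger.level = :info
```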

I must say that I did not know of the Logging gem. It looks really promising and has some nice features, like coloring for e.g. stdout, and RSpec support.

Sadly the docs are even worse than the ones for Log4r. I will look into writing another blog post as an introduction to the Logging gem, as well as its differences and advantages. Thanks for making me aware of this gem!

I wish we could move away from unclear log levels and sprintf argument mashing. Log4r is complete, but its design is inherited from a static world.

Something simple like console.log() makes it easy for developers to add more context to a line, because they don't have to think about how to represent the data or which log level to choose.

I've been working on a similar solution in the ruby world. It's nearly stable API-wise and already running in production: https://github.com/zimbatm/lines-ruby

All the mentioned solutions are great! I personally think it does not matter whether we use key-value output or "sentences"; they have to be parsed anyway. I prefer writing Function: Foo instead of fn=Foo, just because I think it's easier to read when a human actually has to dig into the raw logs. For machines it's really the same.

I will detail how I parse logs with Logstash in a future post of this series. (And some logs, like Apache / nginx / MySQL, cannot easily be changed to a key-value format, so you might not be able to get around string parsing anyway.)

Log4r allows you to use a key-value format as well; just modify the PatternFormatter and your output (in the logger.rb in my example).
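As a rough illustration (the pattern tokens are from memory; double-check them against Log4r's docs), a key=value PatternFormatter could look like:

```ruby
# Hedged sketch: a Log4r setup whose formatter emits key=value style
# lines. %d (date), %l (level), and %m (message) are PatternFormatter
# tokens as I recall them; verify against the Log4r documentation.
require "log4r"

format = Log4r::PatternFormatter.new(
  pattern: "time=%d level=%l msg=\"%m\""
)

logger = Log4r::Logger.new("app")
logger.outputters << Log4r::StdoutOutputter.new("console", formatter: format)

logger.info("user signed in")
```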

I'm a big fan of heroku's scrolls gem[1], it uses a key=value format and is intended to be streamed straight into stdout. We're using it with splunk storm[2] which automatically parses logs in the key=value format and from there you can slice/dice the logs using their query language and graphing. This makes it super easy to spot patterns in the logs where there might be an issue.

[1] https://github.com/asenchi/scrolls [2] https://www.splunkstorm.com/
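The key=value idea itself is tiny; here is a plain-Ruby sketch of it (this is not the scrolls API, just the shape of what it emits):

```ruby
# Minimal key=value line emitter, in the spirit of scrolls.
# Values containing spaces are quoted so parsers can split on whitespace.
def kv_line(data)
  data.map { |k, v|
    s = v.to_s
    s = "\"#{s}\"" if s.include?(" ")
    "#{k}=#{s}"
  }.join(" ")
end

puts kv_line(fn: "charge", at: "start", user_id: 42)
# => fn=charge at=start user_id=42
```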

You would really dig l2met.[1] It is a system that creates metrics from k=v log data. https://github.com/ryandotsmith/l2met

Yes, lines-ruby is similar to scrolls, with richer data types and out-of-the-box support for ActiveRecord and Rack loggers. It also provides the parser to extract the data back.

Rails does need a better logger by default, and your idea is really good, but I am just not prepared to push this much configuration into my apps. We're talking about a YAML config file, 14-odd lines in application.rb, and an additional 67-line logger initializer.

Given the choice between crappy, non-automatable but still searchable default logs with zero configuration, and this operationally superior solution that requires over 100 lines of code/config, I will take the former any time. Also, no way am I introducing a global log check time variable. Is it just me that finds this way too messy? I'd rather push for a Heroku/scrolls-style logger by default in Rails.

Especially because `Time.now - SOME_TIME` is actually a relatively expensive operation. IIRC, it is faster to convert SOME_TIME to an int once and compare against Time.now.to_i. Or, if your timing needs are less precise (as in this case), you can get an even bigger speedup with a tiny C extension: https://github.com/aaronblohowiak/fasttime (shameless plug)
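A quick way to check that claim yourself (exact numbers vary with Ruby version, so no results are promised here):

```ruby
require "benchmark"

# Compare Time arithmetic (allocates a Float per subtraction) with
# plain epoch-integer comparison. Both expressions answer the same
# question: "is more than 60 seconds past the reference time?"
start   = Time.now
start_i = start.to_i

n = 500_000
Benchmark.bm(14) do |x|
  x.report("Time diff:")    { n.times { Time.now - start > 60 } }
  x.report("Integer diff:") { n.times { Time.now.to_i - start_i > 60 } }
end
```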

However, I ultimately disagree that the filesystem is where you want to store the log level setting, but I understand that this is just an example for the blog post :D

This is a great point about the time, thank you for the feedback!

About storing log levels in the fs: I was thinking about e.g. the database, but then how do I solve cases like "this one server acts up, can I get the verbose logs from there but not from my other servers?"

How would you store the log levels?
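For concreteness, the per-server, file-based approach can be sketched roughly like this (class name, file path, and interval are all illustrative, not from the post):

```ruby
require "logger"

# Hedged sketch: keep the log level in a small file on each box and
# re-read it at most once per interval, so one misbehaving server can
# be switched to DEBUG without touching the others or hitting the
# filesystem on every log call.
class FileLogLevel
  def initialize(logger, path, interval = 30)
    @logger     = logger
    @path       = path
    @interval   = interval
    @next_check = 0
  end

  # Cheap to call on every request; only touches disk once per interval.
  def refresh
    now = Time.now.to_i
    return if now < @next_check
    @next_check = now + @interval
    name = File.read(@path).strip.upcase rescue "INFO"
    @logger.level = Logger.const_get(name) if Logger.const_defined?(name)
  end
end
```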

I can totally understand your point. This is a deliberately extensive example with lots of customization. You can get by with a lot less code if you just want a sane default format, or a lot more if you want to really fine-tune your logs. The beauty is that you have the option :)

And then there is the option to push all this code into a gem. One line in the Gemfile and you are set. :)

