The article was fairly devoid of the necessary details. One of the main ones: how many events are we talking about? 10 events/sec? 100/sec? 10,000/sec? And how big are these events? How many event emitters are connecting to the Redis server?
With the details, it would be a much more interesting post.
Take a look at logstash; it's on GitHub. I think it could replace or integrate with RedisLogHandler.
logstash takes logs from various inputs (syslog, files, Redis, HTTP), filters/normalizes them into JSON, and outputs to various backends (ElasticSearch, Redis, MongoDB, Graylog2). There is a web UI with search and graphs. It's designed to scale out and run on multiple machines.
Depends on how much you care about latency, right?
The easy solution is just, y'know, write the log to a file and scp it back to some central place every so often. But then you have to either (a) keep track of how much of a file you've copied, which is a pain; or (b) only grab files that you're no longer actively writing to (as determined by naming scheme or something), but that introduces some latency, depending on how often you rotate.
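Option (a) is less painful than it sounds if you persist a byte offset between runs. A minimal sketch (function and file names invented; log rotation handling omitted):

```python
def ship_new_lines(log_path, offset_path):
    """Read everything appended to log_path since the last run.

    The byte offset of the last shipped position is persisted in
    offset_path, so repeated runs only pick up new lines.
    """
    try:
        with open(offset_path) as f:
            offset = int(f.read().strip() or 0)
    except FileNotFoundError:
        offset = 0
    with open(log_path, "rb") as f:
        f.seek(offset)       # skip what was already shipped
        new_data = f.read()
        offset = f.tell()
    with open(offset_path, "w") as f:
        f.write(str(offset))  # remember how far we got
    return new_data.decode("utf-8", errors="replace").splitlines()
```

From here the returned lines can be batched and sent to the central host; a real version still has to notice rotation (e.g. reset the offset when the file shrinks).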
I don't, really (you can use syslog pretty successfully, I guess, if you don't mind it), but UDP has certain advantages over TCP, namely that your code keeps running even if the server you're sending to goes down or is unable to respond to messages, it's faster, etc.
Ideally, I'd use a function that sent things to a small server over UDP, which would then put them in Redis. This assumes you don't mind losing a few lines, of course.
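The client side of that is only a few lines; a sketch of the idea (the collector address and the Redis-forwarding half are assumptions, not shown):

```python
import json
import socket

def send_log(sock, addr, level, message):
    """Fire-and-forget: serialize the event and send one UDP datagram.

    If the collector at addr is down, the datagram is simply dropped
    and the caller never blocks.
    """
    event = json.dumps({"level": level, "msg": message}).encode("utf-8")
    sock.sendto(event, addr)

# The collector would recvfrom() in a loop and LPUSH each event into Redis.
```

This is exactly the "lose a few lines under stress" trade: the app never waits on the log path, and the small server absorbs the Redis dependency.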
If you continuously send UDP messages to a server that isn't accepting them, you will eventually get an error from the sendto() system call; the uninterested receiving host is generating ICMP messages saying "I don't want these". It's true that there is a flavor of sloppy socket coding where that error doesn't manifest itself. But if you're writing good code, TCP is no less fire-and-forget than UDP.
UDP is marginally faster than TCP, but the tradeoff for that is that under heavy load, UDP imposes more costs on the rest of your traffic. Since we're talking about logging, though: who gives a shit how fast it is? With either transport, if logs are taking more than hundreds of milliseconds to clear, you have a problem you need to fix.
Yikes, losing messages? If anyone designs a system that knowingly makes itself harder to diagnose under stress by destroying evidence, I only hope it's themselves and not their successors who are there to endure the pain.
It provides a common logging interface across platforms.
It's extremely simple (the script that pipes syslog output directly into Redis is probably just a couple lines long).
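Roughly this, in fact; a sketch where the push callable is parameterized so the loop stays testable (key name invented):

```python
def pipe_lines(stream, push, key="logs"):
    """Push each non-empty incoming log line onto a list under `key`.

    `push` is any callable with LPUSH semantics,
    e.g. redis.Redis().lpush from redis-py.
    """
    count = 0
    for line in stream:
        line = line.rstrip("\n")
        if line:
            push(key, line)
            count += 1
    return count
```

Wired up to a real server it's essentially `pipe_lines(sys.stdin, redis.Redis().lpush)`, assuming redis-py is installed and the syslog daemon is configured to pipe into the script.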
I have to believe that the people who really seem to like syslog have never worked in organizations that had to deploy things like Splunk or (worse) LogLogic and ArcSight just to make sense of the giant morass of useless text gunk they generate.
Have you noticed how none of the cool kids postprocess http log files anymore?
I don't really understand your list in view of the article. How is this solution more queryable than syslog? They record events into Redis without any schema attached (just a string), so I fail to see the improvement there. They put it back into a log file anyway.
It is no more or less centralized than syslog configured for centralization (which is trivial to set up).
How is this more common than syslog across platforms (unless you include Windows in "across platforms")?
It is not simpler than syslog either, since writing to syslog is just a matter of using the right python logging backend.
Analysing tons of data from syslog is a pain, but I don't see how any solution avoids enforcing a format/structure on your logs at some point in the stack. How is this fundamentally different from post-processing HTTP logs?
He's LPUSH'ing logs into a list key. Just by doing that one thing, he now has evented logging; he can subscribe to his raw logs with BRPOPLPUSH and without changing anything clientside start indexing them in better ways, or applying different policies to different logs.
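The consumer side of that pattern looks roughly like this (a sketch; key names invented, and the `r` object is assumed to be a redis-py client). BRPOPLPUSH atomically moves each line to a processing list, so a consumer that crashes mid-line never silently drops it:

```python
def drain(r, src="logs", dst="logs:processing", handler=print, timeout=1):
    """Consume log lines from the `src` list until it stays empty.

    BRPOPLPUSH blocks until a line arrives, atomically moving it from
    src to dst; once handled, LREM removes it from the processing list.
    """
    while True:
        line = r.brpoplpush(src, dst, timeout=timeout)
        if line is None:  # timed out: src is empty
            break
        handler(line)
        r.lrem(dst, 1, line)
```

A crashed consumer leaves its in-flight line sitting in `dst`, where a recovery pass can find it, which is the "different policies" part: indexing, alerting, or archiving are all just more consumers on the same key.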
And Redis is easier to understand than syslog. We're pretending that there is zero friction to understanding syslog, as if any competent Unix person should automatically grok it. But syslog is a janky old Rube Goldberg machine and understanding it well pays off solely in the form of understanding a janky old Rube Goldberg machine that nobody will be using in 10 years.
The lady doth protest too much, no? The guy is using Redis as a buffer to what ultimately ends up just being a central log file. You know. rsyslog. None of the fancy things you have in mind. Which brings me to the same point someone's already made: using Redis buys you nothing, even if you had a more involved use case in mind.
Saying that he can do something by subscribing to an event in Redis is sorta silly, isn't it? You could just as easily tail the logs, as many systems actually built for this purpose do. Once again, there are already things in *nix for this.
The reality is, this is a janky, naive solution to an already well-solved problem that is now getting thousands of views because of HN and antirez tweeting about it. Instead of learning the correct way of doing things, I bet a bunch of inexperienced developers are now going to say "BOY THIS IS SWELL" and cut themselves a nice fat chunk of technical debt for the not-that-far-off future.
I don't understand how BRPOPLPUSH helps you do things you could not do with centralized syslog: you can also start indexing things with syslog by analysing the central logs without changing anything client-side. And in both cases, what is needed is essentially the same: if the logs are not structured at the source, you will need to post-process them, and the storage medium does not change any of that.
The "Redis is easier than syslog" argument is a bit of a strawman, because you will have to understand syslog anyway, since that's the only thing quite a few applications speak. In the OP's case, they are already using Redis, so the cost on that side is very low, though.
If there was a fancy new protocol actually involved, I'd agree with you, but this is a case of simply using an existing tool for a problem it is very well suited to. No special new logging protocol is required.
I agree, it is nothing different than syslog if all you want to do is just dump all logs in one file. But one advantage of this approach in our case was that we wanted to show last 1000 critical logs on a web interface and with logs stored in redis, it was pretty easy to do that. And as redis was part of our stack, it was very easy to hook it up for this specific task.
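That "last 1000 critical logs" trick is two Redis commands per write; a sketch of the pattern (assuming a redis-py-style client `r`; key name invented):

```python
def log_critical(r, line, key="logs:critical", cap=1000):
    """Keep only the most recent `cap` critical log lines under `key`.

    LPUSH the newest line, then LTRIM so only the newest `cap` entries
    survive; the web UI can then LRANGE 0 -1 to get them newest-first.
    """
    r.lpush(key, line)
    r.ltrim(key, 0, cap - 1)
```

The list never grows past the cap, so the web page's query is a single O(cap) read regardless of total log volume.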
Syslog implementations like syslog-ng support both TCP and UDP relaying of all log data on a machine to a centralized Syslog server, and can even bypass storing those logs to the source machine's disk at all. Syslog-ng also supports inserting that data directly into MySQL, and there are various other backends (like Splunk, though I know it's commercial) that can accept the TCP and UDP streams and index them in all sorts of fancy ways.
I think the key point here is that all the above mentioned implementations have significant adoption and are in a sense "battle-tested". For example, what if your background worker has failed and log events are piling up in the Redis list you are using as queue? Do you have monitors in place to detect that situation, and at what value do your alarms go off? Projects like this have a way of taking a lot more time than originally thought, often at the expense of your core development time. I personally don't like spending the time writing and maintaining code for a project that isn't aligned with the problem I'm trying to solve, so I avoid it whenever possible.
On the flip side, if you are setting out to build a really robust logging system on top of Redis, and that's something of value to your organization, then more power to you!
Logs "feel" like one of those problem domains that Redis is a good fit for. When Redis is a good fit for something, it tends to really, really be a good fit. Capped list keys feel almost like they were designed to hold logs.
The thing to realize here is that Redis isn't like Riak or Mongodb or even MySQL. It is stupid simple to stand up a Redis instance. The code to push logs to it: also stupid simple. Even without clever indexing, just stuffing text crud into it, Redis is already natively a great log store.
Syslog is a pile of shit, Ted. It's a relic. You clearly happen to love that relic, and I think you should find a way to place it just-so in a nicely lit alcove in your apartment. The rest of us should move on from it. I don't think less of you for admiring it. I have useless old things on display in my house too.
* Freeform text is a terrible way to track system events.
* Periodically rotated flat files are not a great way to store log information.
* Goofy little UDP messages are not a good way to convey system events.
* The syslog PRI field dates back to when we exchanged messages with UUCP.
I could keep going, but since you're just going to reply with "lolwut umad?", I'll leave it at that.
And I'll add that it tends to ship in a horrible default configuration, with events scattered randomly over multiple files, no safeguards against filling up the disk, and no safeguard to ensure the stupid daemon is actually running.
Freeform text is a terrible way to track system events.
Nothing stops you from logging structured text.
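For instance, the stdlib `logging` module plus a few lines of formatter gives you one parseable JSON object per line, and it still flows through whatever handler (syslog included) you already use. A sketch, with the logger name and fields chosen for illustration:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object per line."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "msg": record.getMessage(),
        })

def make_logger(stream):
    # Attach a JSON-formatting handler to a named logger.
    handler = logging.StreamHandler(stream)
    handler.setFormatter(JsonFormatter())
    logger = logging.getLogger("app")
    logger.handlers = [handler]
    logger.setLevel(logging.INFO)
    logger.propagate = False
    return logger
```

Swap `StreamHandler` for `logging.handlers.SysLogHandler` and the same structured lines land in syslog unchanged.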
Periodically rotated flat files are not a great way to store log information.
Modern syslog daemons will write to pretty much anything you want.
Goofy little UDP messages are not a good way to convey system events.
Modern syslog daemons offer TCP transport. Some even try to offer delivery guarantees (disk-backed spool), although personally I wouldn't rely on that for truly critical stuff.
The syslog PRI field dates back to when we exchanged messages with UUCP.
Thanks, I always wondered where those were from...
And, well, you forgot a couple bullets:
* syslog() is available everywhere, out of the box
* It's trivial to move from file-based logging to syslog
* We have mature syslog-daemons that dispatch events pretty reliably
* Unless you're facebook you probably don't need anything more fancy.
So, I'd say syslog gets the job done quite well, as long as you don't mistake it for a message queue.
It's not just that logs tend not to be structured, it's that the metadata is all in-band. And syslog in particular exacerbates the problem by decoupling logging clients from log storage policy: the log generator has no idea if its syslog has a super-smart storage policy and so has to assume everything needs fine-grained human-readable timestamps, a custom facility/severity notation, the proctitle and pid, and so on.
Most syslog daemons will store anywhere. Sure. The last syslog daemon I actually read had stack overflows in it, so it's been a while for me. But if you've got syslog writing to a real store, what value is syslog adding? The standardized network protocol? Redis has a trivial network protocol too.
The issue with syslog facilities isn't that the "UUCP" facility harms anything directly; it's that one of the few bits of out-of-band metadata syslog truly offers forces applications to decide whether they're "kern" or "daemon" or "local2". Whose application actually breaks down cleanly into syslog facilities?
But who's suggesting that? Nobody is saying "let's have the kernel log to Redis". By all means, hot potato the kernel stuff and the wrapper stuff and your authlog and whatever through syslog before it gets dumped into Redis.
But if you control it, why bother with syslog? Syslog is a piece of junk. Don't bother. Any ad hoc scheme you come up with that uses a real backend store will be better than syslog.
I'd ask this backwards: Why bother with a homegrown solution when syslog exists and gets the job done?
A home-grown solution takes at least a couple weeks to stabilize, likely much longer before the last bugs are ironed out. Syslog takes about a day to beat into shape.
Any ad hoc scheme you come up with that uses a real backend store will be better than syslog.
You know better than that.
Shipping messages reliably is a surprisingly tedious problem. First you realize you need a disk-spool. Then you realize that spool should be size-capped anyhow. Eventually your boss says you need a network topology more complex than A->B for some idiotic but inevitable reason.
And then next month you run into some redis limitation and realize some kind of datastore abstraction would have been a better idea to begin with. Hmm, perhaps dump to plain-text files until we figure that one out?
See the pattern here?
At the end you have reinvented syslog. Sure, yours may be nicer or at least different.
But that's a whole lot of work to avoid something that, despite all its warts, already works.
I think my contention would be that if you shake off the early-90s-Unixisms and adopt any modern store (forget Redis even and just "use sqlite instead of fopen"), the machinery required to do 98% of what the best syslog daemons do is trivial. The syslog infrastructure just doesn't add much... except compatibility with a very dubious ecosystem of tools.
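The "sqlite instead of fopen" version really is trivial; a sketch with the stdlib `sqlite3` module (schema and function names invented for illustration):

```python
import sqlite3

def open_log(path=":memory:"):
    """Open (or create) a log database with a minimal schema."""
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS log (
        ts    TEXT DEFAULT CURRENT_TIMESTAMP,
        level TEXT,
        msg   TEXT)""")
    return db

def write_log(db, level, msg):
    with db:  # one transaction per event
        db.execute("INSERT INTO log (level, msg) VALUES (?, ?)", (level, msg))

def recent_errors(db, n=10):
    # The querying that's painful with flat files is one SELECT here.
    return db.execute(
        "SELECT msg FROM log WHERE level = 'ERROR' "
        "ORDER BY rowid DESC LIMIT ?", (n,)).fetchall()
```

That's most of a syslog daemon's storage side in twenty lines, with indexed timestamps and ad hoc queries for free.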
We use syslog where I work, and I've always felt the same way, but never heard any suggestions for better options with as wide adoption, support, and background as syslog.
Out of pure curiosity, what do you see as the tool most likely to displace syslog in the future? Is there any alternative available that fixes most of these problems without rolling your own from pieces and parts?
I think we're stuck with syslog, but for app logging, ad-hoc database storage --- especially if you have either (a) a database optimized for message queueing or (b) a schemaless database --- is going to tend to beat syslog. You're not going to realize it until you need the information you're logging, though; until then, it's going to seem like syslog is everything you could reasonably need.