Author here: Logdy is a self-hosted, single-binary, open-source CLI tool that captures stdout from a terminal and sends it to a web browser UI.
Its goal is to structure the logs and save time searching and browsing them.
This is an early version, I'm looking for feedback.
For local development, I cannot recommend lnav [1] enough. Discovering this tool was a game changer in my day-to-day life. Adding comments, filtering in/out, prettifying, and analysing distributions are now hard to live without.
I don't think a browser tool would fit in my workflow. I need to pipe the output to the tool.
This is a great idea for a project. You should also make it work as a Docker logging driver plugin [1] so that people can easily view their local Docker container logs through Logdy.
This looks neat. A few examples of how it can connect to multiple sources, like the Docker engine and k8s, could make it a solid tool for developers who need to work in complex infra setups. I have seen K9s fill this space, with room for improvement.
Wouldn't it make more sense to have the same observability stack in production and development? For instance, OpenObserve is also a single binary that provides a UI for logs, metrics, and traces, although every log producer would have to be properly configured to route to it.
Another idea: maybe Chrome DevTools could somehow be repurposed to display server logs instead of client logs [2].
I'm not sure if that's possible, but in the simplest form, just open a web browser tab as a panel in VS Code so you don't have to switch between the editor and the browser.
This project might solve a problem I have had for a long time, but falls short. My use case is that I have a directory with log files in multiple subdirectories, and I would like to explore these logs by pointing Logdy at that directory. Is this possible somehow?
That depends on the constraints you have. Currently you can start a tail on multiple files if you don't need to distinguish the filenames in Logdy:

  $ logdy stdin 'tail -f file1.log file2.log'

This use case is described here [1].
Logdy currently works best with tailing; exploring big files is not well supported, since Logdy will stream the whole content if used like $ logdy stdin 'cat bigfile.log'. I'm planning to introduce another mode [2] that will work more like the 'less' command.
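Not the author, but a less-like mode presumably boils down to not reading the whole file. A minimal sketch of that part in Python (the function name and window size are mine, not Logdy's):

```python
import os


def read_tail_window(path, max_bytes=64 * 1024):
    """Read only the last `max_bytes` of a file instead of streaming all of it.

    A sketch of how a less-style mode could page big files lazily rather
    than piping the entire content through, as `cat bigfile.log` does.
    """
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        # Seek close to the end and read just one window.
        f.seek(max(0, size - max_bytes))
        data = f.read()
    if size > max_bytes:
        # Drop the possibly partial first line of the window.
        data = data.split(b"\n", 1)[-1]
    return data.decode(errors="replace").splitlines()
```

A real mode would also let you seek backwards and forwards through the file; this only shows the "don't read everything" half.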
I would welcome more feedback on specific use cases. I started with forwarding development logs in mind, but along the way I quickly realized there might be more situations where forwarding logs to a web UI would be useful, for example: ops, data science, long-running scripts. I hope to get feedback from these angles as well.
Sure, and journalctl even has an `--output json` flag that I suppose would work here. But I still think native journal support would just be more convenient, and would avoid needing these two layers of filtering (first journalctl, then Logdy).
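For context, `journalctl --output json` emits one JSON object per line, with standard journal fields such as MESSAGE, _SYSTEMD_UNIT, and __REALTIME_TIMESTAMP (microseconds since the epoch, as a string). A small sketch of picking those out; the sample line below is made up for illustration:

```python
import json


def parse_journal_line(line):
    """Extract a few standard systemd journal fields from one line of
    `journalctl --output json` output."""
    entry = json.loads(line)
    ts = entry.get("__REALTIME_TIMESTAMP")  # microseconds since epoch, as a string
    return {
        "message": entry.get("MESSAGE"),
        "unit": entry.get("_SYSTEMD_UNIT"),
        "ts_us": int(ts) if ts is not None else None,
    }


# An abridged, made-up line shaped like journalctl's JSON output.
sample = ('{"MESSAGE": "Started nginx", "_SYSTEMD_UNIT": "nginx.service", '
          '"__REALTIME_TIMESTAMP": "1700000000000000"}')

if __name__ == "__main__":
    print(parse_journal_line(sample))
```

Native journal support would essentially fold this parsing step into the tool itself.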
I'm working on it as we speak! My plan is to listen on multiple ports, then attach an "origin" field to each log message carrying the port number it was produced on. This way you should be able to tell the sources apart in the UI.
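A minimal sketch of that scheme in Python, assuming one listener per port and using the connection's local port as the origin tag (this is an illustration of the idea, not Logdy's code):

```python
import asyncio


async def demo():
    sink = []  # collected, origin-tagged log entries

    async def handle(reader, writer):
        # The local ("sockname") port is the port this connection arrived on;
        # use it as the "origin" field for every line it produces.
        origin = writer.get_extra_info("sockname")[1]
        while True:
            line = await reader.readline()
            if not line:
                break
            sink.append({"origin": origin, "message": line.decode().rstrip("\n")})
        writer.close()
        await writer.wait_closed()

    # Two listeners on ephemeral ports stand in for fixed ports like 8081/8082.
    servers = [await asyncio.start_server(handle, "127.0.0.1", 0) for _ in range(2)]
    ports = [s.sockets[0].getsockname()[1] for s in servers]

    # Simulate two log producers, one per port.
    for port, msg in zip(ports, ["from service A", "from service B"]):
        _, writer = await asyncio.open_connection("127.0.0.1", port)
        writer.write((msg + "\n").encode())
        await writer.drain()
        writer.close()
        await writer.wait_closed()

    await asyncio.sleep(0.3)  # let the handlers drain
    for s in servers:
        s.close()
        await s.wait_closed()
    return ports, sink


if __name__ == "__main__":
    ports, sink = asyncio.run(demo())
    print(sink)
```

Each entry in `sink` ends up tagged with the port it came in on, which is exactly the "origin" distinction described above.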
Not sure if you want to go that route, but unix sockets allow you to use SO_PASSCRED to get the PID of the connected process; you could use that as a natural "origin" without requiring many ports.
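For anyone curious what that looks like in practice, here is a small Linux-only Python sketch: with SO_PASSCRED enabled, the kernel attaches an SCM_CREDENTIALS record (pid, uid, gid of the peer) to messages received on the unix socket:

```python
import os
import socket
import struct


def peer_pid_demo():
    # A connected pair of unix sockets; in a real server the receiving end
    # would be an accepted connection on a listening AF_UNIX socket.
    recv_sock, send_sock = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
    # Ask the kernel to attach the sender's credentials to received messages.
    recv_sock.setsockopt(socket.SOL_SOCKET, socket.SO_PASSCRED, 1)
    send_sock.sendall(b"a log line\n")

    ucred_size = struct.calcsize("3i")  # struct ucred: pid, uid, gid
    msg, ancdata, flags, addr = recv_sock.recvmsg(1024, socket.CMSG_SPACE(ucred_size))
    pid = None
    for level, ctype, data in ancdata:
        if level == socket.SOL_SOCKET and ctype == socket.SCM_CREDENTIALS:
            pid, uid, gid = struct.unpack("3i", data[:ucred_size])
    recv_sock.close()
    send_sock.close()
    return pid


if __name__ == "__main__":
    # Both ends live in this process, so the reported PID is our own.
    print(peer_pid_demo(), os.getpid())
```

The returned PID could then be used directly as the "origin" field, one unix socket serving all producers.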
My initial idea for a name was "logd", but a few projects on GitHub already take that name. Adding a "y" doesn't change the pronunciation IMO, so you can say "logdy" the same as "logd".
I think you might want to rethink it. One of the cardinal sins of branding is coining a word that does not have an immediately obvious pronunciation, since that dramatically decreases the likelihood of it being mentioned in conversation.
If you would like to try the tool without downloading it, visit: https://demo.logdy.dev/
Latest release: https://github.com/logdyhq/logdy-core/releases