Most logging libraries are bad because they're not designed to account for the requirements of a real-time system.
What you need:
- an asynchronous design where both the formatting and the writing to disk can happen on another thread.
- a lock-free buffer using pre-allocated memory. In particular, every log record might be a different size.
- the ability to control where/when to drain that buffer, perform formatting, and write data to disk (no good library should impose a threading model; it needs to be able to run cooperatively with other tasks on one thread)
- an extensible and fast binary serialization/deserialization scheme -- easiest is to rely on simple copy construction as a default.
Most libraries focus on silly things like formatting and filtering which are all trivial problems.
> Most libraries focus on silly things like formatting and filtering which are all trivial problems.
I wanted to print structured data on stdout, without any fuss. All the other libraries that try to tackle the non-trivial problems you mention feel like a bazooka aimed at the ant that is my use case.

FWIW, in the small desktop app I'm working on now, this library would fit my use case perfectly. Certainly the features mentioned above are powerful, but I think you're right in your belief that plenty of projects don't necessitate such complexity.
Right. We did exactly that, for the reasons you wrote, starting from [0] and making it pre-allocated and async. Formatting is as late as possible, even after we store the static and dynamic parts separately in a db. Very versatile (can log to console, json, web frontend, pandas,...) and space and resource efficient.
I have often written my own logging implementations for similar reasons. I think it is pretty difficult to turn the set of features into a generic library because it necessarily is opinionated about multiple aspects of the software architecture that uses it.
Not all formatting is amenable to being pushed to another thread without introducing undesirable concurrency concerns.
Which is good for 100% of my use cases, since I don't deal with real time systems, or performance intensive applications, or memory/allocation constrained environments.
I would agree with all of this, _except_ maybe the first bullet. Async can be fine so long as there's a mechanism for flushing all buffers before crashing. Otherwise, there's a high risk that you'll miss logging an important event that helps pinpoint why the crash occurred.
Ideally, modules should first of all be implemented fully according to the standard: CMake does not support GCC 13 (GCC 14 isn't released yet), MSVC spits internal compiler errors when mixing #includes with imports, VS IntelliSense is buggy, and clangd has no modules support whatsoever.
There's barely a point in supporting modules so far unless you really enjoy pain.
And GCC will only start to be usable with CMake when version 14 is released, which has not happened yet.
And, as I mentioned before, IDE support is either buggy (Visual Studio) or non-existent (any other IDE/OS). So you're left writing in a text editor and hoping your compiler works to a somewhat usable degree. Yes, at some point people should start using modules, I agree, but to advise library maintainers to ship modularized code... the tooling just isn't there yet.
I mean, the GitHub issue is Microsoft trying to ship their standard library modularized, they employ some of the most capable folks on the planet and pay them big money to get that done, while metaphorically sitting next to the Microsoft compiler devs, and they barely, barely get it done (with bugs, as they themselves mention). This is too much for most other library maintainers.
> Ideally, anything past C++20 should be made available as module as well.
> Not every library has to work in all compilers of the world.
These two statements directly contradict each other. It's great that you are in a space where you can use modules, but most other people are not, and expecting them to ship modularized versions of their libraries is inconsiderate given the limitations they face.
Choosing not to sympathize with the reality of most C++ devs is up to you; you've made it very clear that you refuse to acknowledge my point. No need to get snarky.
I know you're an expert, and maybe you're not wrong. But every couple of months I try out the state of modules, and I never get even remotely close to a point where I would consider them ready, even when using the most bleeding-edge preview version of VS.
I won't deny that there are issues with them; still, if we don't use them and complain about what is missing, they won't get better.
We already moved beyond a state where nothing really worked, to at very least being able to write CLI stuff using import std and our own libraries, alongside module fragments.
I have a couple of hobby projects on Github using modules.
How is it dated? That's how the Python logging library works, that's how Go's `slog` package works, that's how most of Rust's structured logging crates work.
I guess the definition of "effortless" varies from person to person; I find this "no hard setup required" way to be effortless.
debug/info/warn/error methods are coming. As for the latter, since I use a pack expansion (`...`) of `field<T>`, it may already be possible to omit the type name. I shall put that in a test case and update the README :)
Two good reasons: it is an excellent way to practice using some more advanced C++ features and existing libraries may not be able to do what you want in the manner you want. Logging libraries are also a well-bounded project you can build and refine incrementally to a high degree of sophistication as a single dev.
And of course, it eliminates a third-party code dependency in your other projects.
I've probably been reading too much lately about FOSS developers not being able to find help maintaining packages. But yeah, it's not a helpful comment.