So, Logchef has a concept of a "Source", which represents a ClickHouse table. You give it the DSN (essentially host/port/user/password for now) and connect. In production scenarios, you'd usually only `GRANT SELECT ON db_name.table_name TO user_name;` for that user.
Once you add the source, you can "connect" the source to a team. Only the members of the team will be allowed to query this source. So you can have multiple teams and add users accordingly. A source can be added to multiple teams as well.
What I don't like about Claude Code is that they don't expose command-line flags for this stuff. Flags would be better documented, and people wouldn't have to discover these things the hard way.
Similarly, I do miss an `--add` command-line flag to manually specify the context (files) during a session. Right now I pretty much end up copy-pasting the relative paths from VSCode and supplying them to Claude. Aider has much better semantics for this kind of thing.
I'm curious to understand the implications of the AGPL-3.0 license in the context of this project. It sounds like the author doesn't want big corps to offer a hosted version of this tool (because doing that would require them to release the source code of any modifications they make to the software, which many companies are reluctant to do) and it's also an OSI approved license (unlike the recently famous SSPL or BSL). What's so wrong about AGPL-3.0 then?
The intention is often to prevent companies from building proprietary services on top of open source software and I feel AGPL 3.0 is a sensible choice here.
Looks like a solid project, will be interested in giving it a shot
Aside: I built my own expense tracker [1] as well to categorise expenses using LLMs as I needed a quick way to log the entries. I’ve been meaning to export these to Actual budget for a detailed analysis but haven’t done that yet.
I've never really bought into the idea of integrating external I/O into something as fundamental as logging. More often than not, a pull rather than push approach is better suited for logging. There are dozens of high-performance log collectors/aggregators that can collect, transform, and push logs to any number of stores. What's the advantage of doing this right inside the app? (Besides maybe serverless functions?)
Sometimes you don't have the resources or the need to spin up an external collector. I might be in a situation where I want to collect logs with DataDog, but I'm not big enough, nor do I have the time, to set up a sidecar or configure the agent itself. It's also possible you don't have access to the host environment to install something like an agent (as you mentioned, like serverless functions).
The advantage of the in-app case is that it's fast to onboard with and doesn't require significant (or any) devops involvement compared to the above scenario.
One downside, though, is that depending on how those logs get shipped, it might not scale well if you're writing logs at an extreme frequency (it's possible to hit HTTP limits with the DataDog and New Relic log publish APIs in a high-volume/high-frequency situation). That's when you'd want to move to an external collector that can process logs in a much more asynchronous fashion.
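To make that concrete, here's roughly what I mean by batching instead of making an HTTP call per log line. This is a generic sketch (made-up intake URL and thresholds), not LogLayer's actual transport API:

```ts
// Generic sketch of a buffered in-app log transport: log calls append to a
// buffer, and the buffer is flushed in batches so a high log frequency doesn't
// turn into one HTTP request per line. The intake URL and limits below are
// hypothetical placeholders, not any vendor's real endpoint.
type LogEntry = { level: string; message: string; time: number };

class BufferedHttpTransport {
  private buffer: LogEntry[] = [];

  constructor(
    private intakeUrl: string,      // e.g. your vendor's log ingest endpoint
    private maxBatchSize = 100,
    flushIntervalMs = 5_000,
  ) {
    // Flush on a timer too, so low-traffic apps still ship logs promptly.
    setInterval(() => {
      void this.flush();
    }, flushIntervalMs);
  }

  log(level: string, message: string): void {
    this.buffer.push({ level, message, time: Date.now() });
    if (this.buffer.length >= this.maxBatchSize) {
      void this.flush();
    }
  }

  async flush(): Promise<void> {
    if (this.buffer.length === 0) return;
    const batch = this.buffer.splice(0, this.buffer.length);
    try {
      // One request per batch instead of one per log call.
      await fetch(this.intakeUrl, {
        method: "POST",
        headers: { "content-type": "application/json" },
        body: JSON.stringify(batch),
      });
    } catch {
      // A real transport would retry or fall back to stdout; a sketch just drops.
    }
  }
}
```

An external collector is doing essentially the same buffering and retrying, just outside your process, which is why it copes better with crashes and backpressure.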
I'm not sure if the question is specific to LogLayer or about logger transports in general that do in-app push, but I see the cloud collector features of LogLayer as something you can easily add to your logging stack when you're starting out small and need it, and then transition to an external collector later.
(One might argue that some OTEL tools can also do this, but as stated in another response, I'm not familiar enough to know if they'd do a better job or not in terms of the overall logging DX that LogLayer provides; their primary job is to ship logs from my understanding.)
+1 on this. Kids, don't rely on logging to a remote endpoint from inside the application.
You end up with a growing combination of N apps x M log servers to support and link into the application, even if you only use one of them yourself. Ops should be able to switch to a new logging backend without rebuilding every application.
Bugs in the log shipper can take down your app, and you never want that. Also think about the unnecessary security surface this adds, especially when multiplied by M above (log4j, anyone?).
When the log server is unavailable or saturated you still want those file or console logs available for last resort debugging.
Shutting down the app risks losing the last minute of unshipped logs, unless you require slow graceful shutdowns, which is never a good idea. Think about crashes, where you need logs the most.
Use sidecars and similar strategies instead. You're most likely also running some third-party application elsewhere where you don't have the option of rebuilding with a new logger, so you need this type of knowledge anyway.
> In addition, we have given out many sizeable grants to FOSS projects and organisations in India. While funding projects outside India involves significant paperwork and operational overhead, we have reached out to several small and large projects that we use at work and have managed to pay them. This highly ad hoc approach is something that has increasingly bugged me though.
1. Regarding the validation, this error seems to be related to the provenance check mechanism in the spec. This is to prove ownership of that project/domain. The wellKnown field is designed to handle cases where the webpageUrl doesn't match the manifest URL.
2. Will definitely pass the feedback on to our team and evaluate this further!
Thanks for the reply. It turns out the current JSON file approach can't prove ownership of the project nor the domain, so perhaps there's a gap in my understanding or your team's understanding...? Feel free to contact me about this because I believe in your mission: joel@joelparkerhenderson.com
Some options that I use successfully with other donations services and funding services...
- A unique token per project published in a project file
- A unique token per domain published in a DNS TXT record (see the sketch after this list)
- A verification of the project's existing setup, such as using GitHub API access or OAuth
- A forwarding to the project's existing funding link, such as using a project's GitHub sponsors link
- A heuristic with the person's existing payment links, such as contact info being identical on GitHub and Venmo
- A challenge/response, such as verifying a small random payment
- A dedicated KYC process such as with a background checking service.
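To make the DNS TXT option concrete, here's a rough sketch of the check on the funding service's side. The record name and token format are placeholders I made up, not part of any spec:

```ts
// Sketch of domain-ownership verification via a DNS TXT record: the service
// issues a token, the project owner publishes it under their domain, and the
// service looks it up before trusting the manifest. Record name and token
// format are hypothetical.
import { resolveTxt } from "node:dns/promises";

async function domainOwnsToken(domain: string, expectedToken: string): Promise<boolean> {
  try {
    // Each TXT record comes back as an array of string chunks.
    const records = await resolveTxt(`_funding-verify.${domain}`);
    return records.some((chunks) => chunks.join("") === `funding-token=${expectedToken}`);
  } catch {
    return false; // NXDOMAIN or lookup failure: not verified.
  }
}

// Usage: await domainOwnsToken("example.org", "abc123")
// → true only if the owner actually published the issued token.
```

The project-file and .well-known options are the same idea over HTTPS instead of DNS.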
Typst has been pretty amazing, and at my organization, we’re very happy with it. We needed to generate over 1.5 million PDFs every night and experimented with various solutions—from Puppeteer for HTML to PDF conversions, to pdflatex and lualatex. Typst has been several orders of magnitude faster and has a lighter resource footprint. Also, templating the PDFs in LaTeX wasn’t a pleasant developer experience, but with Typst templates, it has been quite intuitive.
I've never heard of someone generating PDF documents at that pace. I'm working on a product used for mass PDF reporting based on Puppeteer. With nightly jobs, caching, and parallel processing, the performance is OK.
I don't think we reach quite that quantity, but we do generate a hefty number. What for? Invoices!
We're having problems because, until now, PDFs have been generated by the ERP system, and it can't keep pace. I know there's a dev team working on a microservice for PDF generation, but I never thought about doing it with Typst.
PDF/PS can easily be created in a way that keeps the data for text and QR code fields in plain text. Seems like y'all are focusing too much on the higher-level tools instead of what's right in front of you.
Company branding is an important aspect of PDF creation that many tools struggle to handle correctly. PDF documents often need to include logos, company colors, fonts, and other branding elements. Puppeteer is popular because you can control these aspects through CSS. However, Puppeteer can be challenging to work with for larger documents, as each change requires programming effort, or when your software needs to serve multiple clients, each with different requirements.
Yep, it's mostly about branding and control. The documents need a specific layout and logos, and it has to be relatively easy to change them.
We also render shipping labels as PDFs, and we have to be VERY strict with those. But we're still not touching that, as that process is not as slow and problematic as the invoicing one.
Have you tried reportlab as well? It was a good solution when I had to deal with a similar problem many moons ago. Not quite the same volume you have but still.
Having used ReportLab a bunch, I'd agree it's a good solution, though maybe on the more mediocre side of good. Generating LaTeX was a better solution for me, and while I haven't used it, Typst looks a lot better.
Regulatory requirements mandate that. Stock brokers in India are required to generate this document called “Contract Notes” which includes all the trades done by the user on the stock exchanges. It also contains a breakdown of all charges incurred by the user (brokerage, various taxes etc). And this has to be emailed to every user before the next trading session begins.
I don’t know the situation in India but brokers in Austria and Germany do the same. The law does not stipulate the format but PDF is what everyone uses. I assume it’s because it can be signed and archived and will outlast pretty much anything. You need to keep these for 7 years.
Yes, in India, the law mandates that ECNs (electronic contract notes) be digitally signed with a valid certifying authority. While it's true that XML/docx/xls files could also support digital signatures, PDFs are prevalent and also allow clients to verify the signature on their end quite easily.
Look, when it comes to corporate reporting, PDFs are pretty much the gold standard. Sure, they've got some potential security issues, but any decent company's IT department has them well in hand.
Think about it - you want your reports to look sharp, right? PDFs deliver that professional look every time, no matter who opens them or on what device. Plus, they've got all those nifty features like password protection and digital signatures that the big guys love.
CSV files? They're great for crunching numbers, but let's face it - they look about as exciting as a blank wall. Try sending a CSV report to the board of directors and watch their eyes glaze over.
So, yes, for reporting in a company that's got its security act together, PDFs are your best bet. They're like the well-dressed, security-savvy cousin of other file formats - they look good and keep things safe.
Hope that answers your question!