So, Logchef has a concept of a "Source", which represents a ClickHouse table. You give it a DSN (essentially host/port/user/password for now) and connect. In prod scenarios, you'd usually only run `GRANT SELECT ON db_name.table_name TO user_name;` for that user.
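
As a rough sketch of what that connection amounts to (using the official clickhouse-go client; the host, database, and credentials below are made up for illustration):

    package main

    import (
        "context"
        "log"

        "github.com/ClickHouse/clickhouse-go/v2"
    )

    func main() {
        // Connect as a user that only has SELECT on the logs table,
        // per the GRANT above. Host/credentials are illustrative.
        conn, err := clickhouse.Open(&clickhouse.Options{
            Addr: []string{"clickhouse.internal:9000"},
            Auth: clickhouse.Auth{
                Database: "logs",
                Username: "logchef_ro",
                Password: "secret",
            },
        })
        if err != nil {
            log.Fatal(err)
        }

        // Read-only queries work fine; writes/DDL are rejected by ClickHouse.
        rows, err := conn.Query(context.Background(),
            "SELECT timestamp, message FROM logs.app_logs LIMIT 10")
        if err != nil {
            log.Fatal(err)
        }
        defer rows.Close()
    }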

Once you add the source, you can "connect" it to a team. Only members of that team will be allowed to query the source. So you can have multiple teams and add users accordingly. A source can be added to multiple teams as well.

Hope that answers your question!


What I don't like about Claude Code is that there are no command-line flags for this stuff. Flags would be better documented, and people wouldn't have to discover these things the hard way.

Similarly, I miss an `--add` command-line flag to manually specify the context (files) during a session. Right now I pretty much end up copy-pasting relative paths from VS Code and supplying them to Claude. Aider has much better semantics for this sort of thing.


Maybe I’m not getting this, but you can tab to autocomplete file paths.

You can use English or `--add` if you want to tell Claude to reference them.


Never knew about the `tab` shortcut, thanks for letting me know!

BTW, as I was using it today, I was quite surprised to see `@` working now. Turns out in 0.27.5 they added this feature: https://github.com/anthropics/claude-code/blob/main/CHANGELO... :)


I'm curious to understand the implications of the AGPL-3.0 license in the context of this project. It sounds like the author doesn't want big corps to offer a hosted version of this tool, because doing that would require them to release the source code of any modifications they make, which many companies are reluctant to do. It's also an OSI-approved license (unlike the recently famous SSPL or BSL). What's so wrong about AGPL-3.0 then?

The intention is often to prevent companies from building proprietary services on top of open source software, and I feel AGPL-3.0 is a sensible choice here.


Looks like a solid project, will be interested in giving it a shot

Aside: I built my own expense tracker [1] as well, to categorise expenses using LLMs, as I needed a quick way to log entries. I've been meaning to export these to Actual Budget for more detailed analysis but haven't done that yet.

[1]: https://github.com/mr-karan/gullak


thanks! yours looks great too, love the report aspect


I never really warmed to the idea of integrating external I/O into something as fundamental as logging. More often than not, a pull approach is better suited to logging than a push approach. There are dozens of highly performant log collectors/aggregators that can collect, transform, and push logs to any number of stores. What's the advantage of doing this right inside the app? (Besides maybe serverless functions?)


Sometimes you don't have the resources, or the need, to spin up an external collector. I might want to collect logs with DataDog, but not be big enough, nor have the time, to set up a sidecar or configure the agent itself. It's also possible you don't have access to the host environment to install something like an agent (as you mentioned, serverless functions).

The advantage of the in-app case is that it's fast to onboard and requires little to no devops involvement compared to the above scenario.

One downside, though, is that depending on how those logs get shipped, it might not scale well if you're writing logs at an extreme frequency (it's possible to hit HTTP limits with the DataDog and New Relic log-publish APIs in high-volume/high-frequency situations). That's when you'd want to move to an external collector that can process logs in a much more asynchronous fashion.
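
For a sense of that pattern, here's a rough Go sketch of in-process batching: buffer lines and flush them as a single HTTP request on a size or time threshold. (The endpoint and payload are made up; this is not LogLayer's or any vendor's real API.)

    package main

    import (
        "bytes"
        "encoding/json"
        "net/http"
        "time"
    )

    type shipper struct{ buf chan string }

    func newShipper() *shipper {
        s := &shipper{buf: make(chan string, 10000)}
        go s.flushLoop()
        return s
    }

    // Log enqueues a line without ever blocking the request path.
    func (s *shipper) Log(line string) {
        select {
        case s.buf <- line:
        default: // buffer full: drop rather than stall the app
        }
    }

    func (s *shipper) flushLoop() {
        ticker := time.NewTicker(2 * time.Second)
        batch := make([]string, 0, 500)
        for {
            select {
            case line := <-s.buf:
                if batch = append(batch, line); len(batch) < 500 {
                    continue // keep filling until the batch is full
                }
            case <-ticker.C:
                if len(batch) == 0 {
                    continue // nothing to flush yet
                }
            }
            body, _ := json.Marshal(batch)
            // One HTTP call per batch instead of one per log line.
            if resp, err := http.Post("https://logs.example.com/v1/ingest",
                "application/json", bytes.NewReader(body)); err == nil {
                resp.Body.Close()
            }
            batch = batch[:0]
        }
    }

    func main() {
        s := newShipper()
        s.Log(`{"level":"info","msg":"hello"}`)
        time.Sleep(3 * time.Second) // give the ticker a chance to flush
    }

Even with batching, though, shipping still competes with the app for CPU and sockets, which is where an external collector earns its keep.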

I'm not sure if the question is specific to LogLayer or about logger transports in general that do in-app push, but I see the cloud collector features of LogLayer as something you can easily add to your logging stack when you start out small and need it, then transition away from to an external collector later.

(One might argue that some OTEL tools can also do this, but as stated in another response, I'm not familiar enough with them to know whether they'd do a better job in terms of the overall logging DX that LogLayer provides; from my understanding, their primary job is to ship logs.)


+1 on this. Kids, don't rely on shipping logs to a remote server from inside the application.

You get a growing combination of N apps x M log servers to support and link into the application, even if you only use one of them yourself. Ops should be able to switch to a new logging backend without rebuilding every application.

Bugs in the log shipper can take down your app, and you never want that. Also think about the unnecessary attack surface this adds, especially when multiplied by M above (log4j, anyone?).

When the log server is unavailable or saturated, you still want those file or console logs available for last-resort debugging.

Shutting down the app risks losing the last minute of unshipped logs, unless you require slow, graceful shutdowns, which is never a good idea. Think about crashes, where you need logs the most.

Use sidecars and similar strategies instead. You are most likely also running some third-party application elsewhere, where you don't have the option of rebuilding with a new logger, so you need this kind of setup anyway.
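
In practice that means the app only ever writes structured lines to stdout, and whatever collector ops picks (vector, fluent-bit, a sidecar agent) tails and ships them. A minimal Go sketch:

    package main

    import (
        "log/slog"
        "os"
    )

    func main() {
        // JSON to stdout; shipping is the sidecar's problem, not the app's.
        logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))
        logger.Info("payment processed", "order_id", 42, "amount_cents", 1999)
    }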


I've also made a similar tool: https://doggo.mrkaran.dev/. Can be useful when you want to query with a custom nameserver.


From the blog post:

> In addition, we have given out many sizeable grants to FOSS projects and organisations in India. While funding projects outside India involves significant paperwork and operational overhead, we have reached out to several small and large projects that we use at work and have managed to pay them. This highly ad hoc approach is something that has increasingly bugged me though.


Hey, I'm from Zerodha team.

1. Regarding the validation, this error seems to be related to the provenance check mechanism in the spec. This is to prove ownership of that project/domain. The wellKnown field is designed to handle cases where the webpageUrl doesn't match the manifest URL.

2. Will definitely pass the feedback to our team and evaluate this further!


Thanks for the reply. It turns out the current JSON-file approach can't prove ownership of the project or the domain, so perhaps there's a gap in my understanding or your team's understanding...? Feel free to contact me about this, because I believe in your mission: joel@joelparkerhenderson.com

Some options that I use successfully with other donation and funding services...

- A unique token per project published in a project file

- A unique token per domain published in a DNS TXT record

- A verification of the project's existing setup, such as using GitHub API access or OAuth

- A forwarding to the project's existing funding link, such as using a project's GitHub sponsors link

- A heuristic with the person's existing payment links, such as contact info being identical on GitHub and Venmo

- A challenge/response, such as verifying a small random payment

- A dedicated KYC process such as with a background checking service.


Aloha! I think there's something novel you could do here that would catch on like wildfire. Here is me coding up the basics:

https://youtu.be/4BH8DRXwVRw?t=317

Feel free to connect via email if you want to chat more breck7@gmail.com


I can confirm it doesn't work.


Hi, I'm from Zerodha. Thank you for bringing this to our attention. It was a misconfiguration on our end, which we've now patched.


Typst has been pretty amazing, and at my organization we're very happy with it. We needed to generate over 1.5 million PDFs every night and experimented with various solutions, from Puppeteer for HTML-to-PDF conversion to pdflatex and lualatex. Typst has been several orders of magnitude faster and has a lighter resource footprint. Templating PDFs in LaTeX wasn't a pleasant developer experience either, but with Typst templates it has been quite intuitive.
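
The orchestration side stays simple too. Roughly, each worker just shells out to the typst CLI per document (a simplified sketch, not our exact production code; the template path and input key are illustrative, and `--input` exposes values to the template via `sys.inputs`):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    // compileOne renders a single PDF from a shared Typst template.
    func compileOne(userID string) error {
        out := fmt.Sprintf("out/%s.pdf", userID)
        cmd := exec.Command("typst", "compile",
            "--input", "user_id="+userID,
            "template.typ", out)
        return cmd.Run()
    }

    func main() {
        // In production you'd fan this out across a worker pool.
        if err := compileOne("AB1234"); err != nil {
            log.Fatal(err)
        }
    }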

We’ve written more about this large-scale PDF generation stack in our blog here: https://zerodha.tech/blog/1-5-million-pdfs-in-25-minutes


This is a really great write up. Kudos for the obvious effort, both on the technical side and sharing the process with the rest of us.


I've never heard of anyone generating PDF documents at that pace. I'm working on a product for mass PDF reporting based on Puppeteer. With nightly jobs, caching, and parallel processing, the performance is OK.

https://www.cx-reports.com


I don't think we reach that kind of quantity, but we do a hefty number. What for? Invoices!

We're having problems because, until now, PDFs have been generated by the ERP system, and it can't keep pace. I know there's a dev team working on a microservice for PDF generation, but I never thought about doing it with Typst.

I think I'm going to send them @mr-karan's link.


PDF/PS can easily be created in a way that keeps the data for text and QR-code fields in plain text. Seems like y'all are focusing too much on higher-level tools instead of what's right in front of you.


Company branding is an important aspect of PDF creation that many tools struggle to handle correctly. PDF documents often need to include logos, company colors, fonts, and other branding elements. Puppeteer is popular because you can control these aspects through CSS. However, Puppeteer can be challenging to work with for larger documents, as each change requires programming effort, or when your software needs to serve multiple clients, each with different requirements.


Yep, it's mostly about branding and control. An invoice needs a certain concrete layout and logos, and it has to be relatively easy to change them.

We also render shipping labels as PDFs, and we have to be VERY strict with those. But we're still not touching that, as that process is not as slow and problematic as the invoicing one.


How does having the value and name in an invoice as plain text affect branding? Or do you mean the client's branding is added to the invoice?

The end result when you open the file is still a regular PDF; it's just encoded with some areas left unpacked.


It's not about the plaintext, but about the layout, design and logos. At least in our case.


So why bring that up in a thread that had nothing to do with layout? I'm honestly confused.


Have you tried reportlab as well? It was a good solution when I had to deal with a similar problem many moons ago. Not quite the same volume you have but still.


Having used ReportLab a bunch, I'd agree it's a good solution, though maybe on the more mediocre side of good. Generating LaTeX was a better solution for me, and while I haven't used it, Typst looks a lot better.


Just wondering: did your organisation contribute anything back to the project, or support it financially in any way?


Hey. Yes, we did support them financially from our FOSS fund, and we'll be happy to do so again.


What is the use case for generating that many PDFs?


Regulatory requirements mandate that. Stock brokers in India are required to generate this document called “Contract Notes” which includes all the trades done by the user on the stock exchanges. It also contains a breakdown of all charges incurred by the user (brokerage, various taxes etc). And this has to be emailed to every user before the next trading session begins.


Does the law specify PDF? I would have thought plain text or even HTML would be sufficient.


I don’t know the situation in India but brokers in Austria and Germany do the same. The law does not stipulate the format but PDF is what everyone uses. I assume it’s because it can be signed and archived and will outlast pretty much anything. You need to keep these for 7 years.


Yes, in India the law mandates that ECNs (electronic contract notes) be digitally signed with a valid certifying authority. While it's true that XML/docx/xls files could also support digital signatures, PDFs are prevalent and also allow clients to verify the signature on their end quite easily.


PDF is less likely to contain executable malicious code than other formats.


Is it? More so than, say, a .csv file?

I was under the impression that PDFs are not that safe. I thought they can execute things like a subset of PostScript and JavaScript.


Look, when it comes to corporate reporting, PDFs are pretty much the gold standard. Sure, they've got some potential security issues, but any decent company's IT department has them well in hand.

Think about it - you want your reports to look sharp, right? PDFs deliver that professional look every time, no matter who opens them or on what device. Plus, they've got all those nifty features like password protection and digital signatures that the big guys love.

CSV files? They're great for crunching numbers, but let's face it - they look about as exciting as a blank wall. Try sending a CSV report to the board of directors and watch their eyes glaze over.

So, yes, for reporting in a company that's got its security act together, PDFs are your best bet. They're like the well-dressed, security-savvy cousin of other file formats - they look good and keep things safe.


More than plain text? I doubt it.


common people don't talk about plain text. what are you? a hacker?!?


I mean, I guess you don't care, as long as the file is signed, if it's just some regulatory stuff that barely anyone will ever read anyway.


There are, of course, far more efficient methods for generating templated PDFs than using a typesetter.


I'm interested to hear what you would propose.


Not sure what GP had in mind, but one can programmatically generate PDFs directly, without using something like Typst as a "middleman".
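
For instance, with a low-level PDF library you place every element by hand, which is exactly the trade-off against a typesetter. A sketch using the community go-pdf/fpdf package:

    package main

    import (
        "log"

        "github.com/go-pdf/fpdf"
    )

    func main() {
        pdf := fpdf.New("P", "mm", "A4", "")
        pdf.AddPage()
        pdf.SetFont("Helvetica", "B", 14)
        // No layout engine: you position and size everything yourself.
        pdf.Cell(0, 10, "Invoice #42")
        if err := pdf.OutputFileAndClose("invoice.pdf"); err != nil {
            log.Fatal(err)
        }
    }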


Have you tried doing that? It’s no fun at all and far from easy. I don’t quite see a benefit in doing it without some utility.


Besides, the generation of PDF reports is usually decoupled from the templates, so you will have to work on your own "middleman".


iText, for example.


I guess some WebKit solution like wkhtmltopdf.


How is that more efficient than Typst exactly?


I was reading the second sentence and I knew it was Zerodha. It's good to see more open source in your tech stack.


Thanks for your writeup, that was exceptionally well-presented.

