The idea is that git is for source code and configs, not data. Data belongs in databases, secrets and tokens in secret stores/vaults, and blobs like PDFs in blob storage.
If a diff is not useful, don't store it in git. Doing a code review on PII is not useful.
You also shouldn't commit packages to a git repo if they can be downloaded immutably in CI or locally. They add a lot of space, because every commit is a full snapshot of the tree, not a diff.
I disagree. They were using it as sample data when they should have used synthetic sample data. It's totally sane to check in a JSON (or similar) file with a few hundred [edit: non-confidential] samples so that you can run integration tests without needing to set anything else up.
Similarly, I keep small pdf manuals in git. I add them in their own commit, which doesn't have a useful diff. In exchange, they're always there and I don't need to spin up some special one-off system.
Fixture data (for testing) is fine. Sometimes a few lines are the minimum viable to test, sometimes a hundred or so lines of JSON. As long as the data is being optimized for testing and you aren't just storing a GB because it's easier.
I was replying to a comment asking why PII data shouldn't be stored in git vs a database. Just tried to give a general rundown of how to think about the tools.
Totally agreed that the data should have been synthetic/scrubbed and that a small sample set (large enough to validate) would be ok.
Sometimes a PDF or a few pictures are necessary (or just much simpler). I get that. But I have seen some repos with an unreasonable number of PDFs, docs, pictures, etc., and that's when a script that copies them into a gitignored directory from someplace else (Dropbox/S3/etc.) is a better fit.
Do keep in mind that you might want a set of trusted TLS certificates and the time zone database. Missing either shows up as annoying runtime errors: your app won't trust https://api.example.com, or it can't return a time to a user in their preferred time zone. Distroless includes both.
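If you build from `scratch` instead of distroless, you can copy those two pieces in yourself. A minimal multi-stage sketch (the paths are the usual Debian locations and `my-binary` is a placeholder; adjust for your base image):

```dockerfile
# Stage 1: a full image that has CA certificates and tzdata installed.
FROM debian:bookworm-slim AS build
RUN apt-get update && apt-get install -y --no-install-recommends \
    ca-certificates tzdata

# Stage 2: minimal runtime image with just the binary plus the two
# pieces that cause surprise runtime errors when missing.
FROM scratch
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=build /usr/share/zoneinfo /usr/share/zoneinfo
COPY my-binary /my-binary
ENTRYPOINT ["/my-binary"]
```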
I have had great luck using rayon for distributing CPU-intensive work across available cores. If you are doing I/O I can see why async is preferable, but rayon is an excellent library for parallel work.
I have found that with async, or even with goroutines in Go, a large number of small tasks is not faster for CPU-intensive work, even when it seems like it should be in theory.
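That's essentially what rayon gets right: it splits work into one chunky job per core rather than thousands of tiny tasks, so scheduling overhead doesn't dominate. A hand-rolled sketch of the same idea using only `std::thread` (the workload and function name are illustrative, not rayon's API):

```rust
use std::thread;

// Illustrative CPU-bound workload: sum of squares over a slice.
// One chunk per available core, instead of one task per element.
fn parallel_sum_of_squares(data: &[u64]) -> u64 {
    let workers = thread::available_parallelism()
        .map(|n| n.get())
        .unwrap_or(4);
    // Ceiling division so every element lands in some chunk.
    let chunk = ((data.len() + workers - 1) / workers).max(1);
    thread::scope(|s| {
        let handles: Vec<_> = data
            .chunks(chunk)
            .map(|c| s.spawn(move || c.iter().map(|x| x * x).sum::<u64>()))
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    })
}

fn main() {
    let data: Vec<u64> = (1..=1000).collect();
    println!("{}", parallel_sum_of_squares(&data));
}
```

With rayon the body collapses to something like `data.par_iter().map(|x| x * x).sum()`, and it does the chunking and work-stealing for you.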
I use it to generate a 3D universe (at the atomic level) but store the data as enums.
There's a chance it's using Tauri (Rust), https://tauri.studio/en/, if it's truly not using Electron but a similar concept. However, WASM running inside Electron would make more sense if they themselves use the term Electron.
Yeah, I'm aware it contains Chromium. I mentioned that much of the functionality has been moved out and is actually implemented in Rust; this is verifiable simply by running `1password --log trace`.
EDIT:
168MiB resident memory on my system (just checked).
Though it malloc'd (but never touched) 32 GiB; that's worrying.
Link below to the behind-the-scenes article. I have no experience with the Linux app, so I'm not sure about its performance, but a lot of the core seems to be written in Rust compiled to WASM, which is how their browser extension is now being built. Personally, everyone keeps cribbing about Electron, but I hope 1Password has found a way to make good on performance.