I once knew a founder of a pre-GPT-3 AI product that analyzed certain cost-adjacent documents to find "hidden" optimizations. The "AI" was the founder, an expert in that industry, churning through the uploaded documents himself and writing reports by hand detailing potential cost savings. How far we've come!
If there are only a handful of operators in a given industry and they use the same billing format, why not?
Create a known-good OCR-to-calculation pipeline, then generate reports based on it. If it's inaccurate, it's probably a small amount of logic to fix it.
With GPT you could perhaps even get it to write the parsing logic for you, and maybe fall back to it for bills that don't exactly match the formats your existing parsers handle.
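Roughly something like this, as a sketch of the shape of it (the format names, field layout, and LLM-fallback hook are all hypothetical):

    # Sketch of "known-good parser first, LLM fallback second".
    # Format names, fields, and the fallback hook are hypothetical.
    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class ParsedBill:
        account_id: str
        period: str
        line_items: dict[str, float]  # charge name -> amount

    # Registry of hand-written parsers for the handful of known billing formats.
    KNOWN_PARSERS: dict[str, Callable[[str], ParsedBill]] = {}

    def register(format_name: str):
        def wrap(fn: Callable[[str], ParsedBill]):
            KNOWN_PARSERS[format_name] = fn
            return fn
        return wrap

    @register("acme_utility_v2")
    def parse_acme(ocr_text: str) -> ParsedBill:
        # Deterministic, tested logic for one operator's layout goes here.
        ...

    def detect_format(ocr_text: str) -> Optional[str]:
        # Cheap heuristics: header strings, account-number patterns, etc.
        for name in KNOWN_PARSERS:
            if name.split("_")[0] in ocr_text.lower():
                return name
        return None

    def parse_bill(ocr_text: str) -> ParsedBill:
        fmt = detect_format(ocr_text)
        if fmt is not None:
            return KNOWN_PARSERS[fmt](ocr_text)
        # Unrecognized layout: hand the OCR text to an LLM, ask for the same
        # structured fields, and validate/human-review before trusting it.
        raise NotImplementedError("unknown format; route to LLM fallback + review")

The point is that the known-good parsers stay deterministic and testable, and the LLM only ever handles the long tail of odd bills.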
This is all very good and true, but as usual the devil is in the details. For instance, my company sells Docker images that depend on a very old and recently unmaintained binary. Over the years, I've hit issues with that binary that make it very hard to be sure behavior is completely reproducible from system to system (or, as the article suggests, from local to production). Sometimes it's as simple as a newer base image updating a core dependency (e.g. Alpine updating musl), but other times it seems like nothing changes but the host machine, and diagnosing kernel-level differences - say, your local macOS LinuxKit kernel versus your production Amazon Linux or Ubuntu, and don't forget x86 emulation! - makes "test what you develop and deploy what you test" occasionally very daunting.
These are the sorts of issues that Nix <https://nixos.org/> solves quite well. It pins dependencies to specific versions, so the only time they change is when you explicitly change them - and the only packages present in your images are the ones you specifically request, plus their dependencies. It also gives you local dev environments with the ~same dependencies by typing `nix develop`.
Once you get past the bear that is the language, it's a great tool.
I found setting up Nix shells to be more time-consuming than Docker setups. Nixpkgs can require additional digging to find the right dependencies - ones that just work out of the box on other distributions. That said, I'm a huge fan of NixOS; I just haven't seen it replace Docker for reproducible dev environments yet.
I'll grant that the kernel version+config shifting is a pain point, but I'd expect containers to help with the rest of it (userspace)? Yes, obviously changing the base image is a potential breaking change, but with containers you package the ancient binary, the base image, and any dependencies into a single unit, and you can test that the whole unit works (including "did that last musl upgrade break the thing?"). If it passes, you ship the whole image out to your users, safe in the knowledge that the application will only ever be exposed to the library versions you tested it against and nothing newer.
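Concretely, you can smoke-test the shipped artifact rather than the binary on its own - something like this (image name, tag, and the binary's CLI are placeholders, not a real project):

    # Minimal CI smoke test: exercise the ancient binary through the exact
    # image you ship, so a musl/glibc bump in the base image fails the build.
    import subprocess

    IMAGE = "registry.example.com/legacy-tool:candidate"  # hypothetical

    def run_in_image(*args: str) -> subprocess.CompletedProcess:
        # Run the command inside the same userspace (base image + libs) you ship.
        return subprocess.run(
            ["docker", "run", "--rm", IMAGE, *args],
            capture_output=True, text=True, timeout=120,
        )

    def test_binary_still_links_and_runs():
        result = run_in_image("legacy-binary", "--version")
        assert result.returncode == 0, result.stderr

    def test_known_input_produces_known_output():
        # Golden-file style check against a fixture baked into the image.
        result = run_in_image("legacy-binary", "--input", "/fixtures/sample.dat")
        assert result.returncode == 0, result.stderr
        assert "expected marker" in result.stdout  # hypothetical expected output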
Sounds like y'all are doing a poor job building the container. It's one thing to rely on the built-in musl/glibc if it's modern software. However, if you're dragging along technical debt, all those dependencies should be hard-locked to specific known-good versions.