It's halfway to what I'm talking about, thanks. Absolutely yes to the idea of being able to query everything!
But for me, the ability to write (CREATE, UPDATE, DELETE) would be an integral part of it as well -- whether that's inserting a property into a JSON file, adding a cron job as column values rather than a text line, or creating files.
See also https://augeas.net/, which presents structured file contents as navigable paths that you update.
Red Hat folks maintained it (and hung out in IRC) until cloud/container workflows overtook in-place system administration. It pairs nicely with Puppet, and it can also be used manually.
I don’t think this is a fair reason to write the tool off. They support Homebrew, and I’m sure they’ll add other install methods in the future. Piping to bash is no worse than clicking “yes” on every step of an install wizard.
The alternative is not an “install wizard”; it is either “here is a deb file with no post* scripts” or “here is a tar.gz, extract it anywhere and add a symlink”.
Both of those are vastly safer, because they do not require root privileges and are guaranteed to uninstall cleanly.
This. The mere decision to eschew standard package management already says unpleasant things about their approach to integrating with my environment, or to making whatever the installation does reversible with standard tools. "Install wizards" do indeed have exactly the same issue, which is why those are also terrible.
Often these "pipe to sh" scripts support --help and a variety of configuration options anyway. The benefit of a script over a binary installer, at least, is that you can inspect the script before running it!
Have you inspected one of these scripts? What have you found? (I've tried it a few times and haven't felt like I learned anything meaningful from doing so.)
I almost always inspect these kinds of scripts before running them, more out of curiosity than anything, but also so that I know it's not going to do anything so stupid that even I can see it's stupid. Usually you can just pipe to `cat`, which is super low effort.
I've occasionally seen scripts that install some other application where it wasn't clear it was a dependency, with no heads-up to the user that this was going to happen. That kind of behavior makes me more distrustful of the author, so there's a useful signal there.
Most scripts like this seem to amount to "configuration defaults + CLI completions installation". To that end, I find looking at them useful because it gives me a sense of the tooling's expectations and hints at where I might find things if I need to go debugging later.
When they are provided by the same entity as the program you wish to run, I don't see how running these scripts is significantly riskier than running the application code itself with the active user's permissions. Still, if there were something, glancing at the script gives you half a chance of seeing it. If there's something there that doesn't make sense, that's both a "proceed with caution" and a learning opportunity.
It’s not really intended as a defense against being owned per se; it’s more about knowing what’s going on and getting an additional signal about the risk profile (not just from maliciousness) of the thing I’m about to run.
That said, I generally pipe to a file and cat the file, yes, if only because it somehow feels wrong to download it twice.
I understand, but cat-ing the saved file and printing the pipe directly to the terminal have identical issues: terminal ANSI escape sequences are interpreted either way.
The usual annoying thing is the automated package install. I have not looked at this particular package, but in the past, I have seen:
- installing a specific gcc version and making it the system default.
- installing “virtualbox” packages - this was the worst, as the machine had a few KVM VMs running, and KVM and VirtualBox cannot run at the same time.
In general, I now stay away from every install script, and if I cannot, then I run it in a Docker container. Life is short, and I do not want to spend it fixing my primary workstation.
(And I examine postinst scripts in deb files too, but those are usually much more boring)
Yes, plenty of times. Usually I find that they do what you would expect them to do: set up a bunch of parameters like an installation prefix, and then copy files around. They also handle user input options and maybe prompt for some stuff.
Using something like io-ts gives you the benefits of both static and runtime types. JSON Schema typically just provides runtime guarantees and doesn't come with the same development experience. I've been very happy with io-ts so far.
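To make that concrete, here's a minimal sketch of what I mean (the `User` codec is just an illustration, not from any real project): a single io-ts codec gives you a runtime validator via `.decode` and a static type via `t.TypeOf`, so the two can't drift apart.

```typescript
import * as t from "io-ts";

// Runtime codec: a value you can use to validate unknown input.
const User = t.type({
  id: t.number,
  name: t.string,
});

// Static type derived from the same codec, so the compiler-checked
// type and the runtime validator always agree.
type User = t.TypeOf<typeof User>;

// decode returns Either<t.Errors, User> rather than throwing.
const result = User.decode(JSON.parse('{"id": 1, "name": "Ada"}'));
```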
I know there’s a zillion libraries in this space by now (and so far I’ve found io-ts quite good), but I feel like there’s actually room for more, with some features currently missing from all of them. (So I’m working on one!)
The dynamic JS space uses things like JSON Schema because they provide documentation and enable runtime validation, much like libs such as io-ts. But they also provide API/network-boundary documentation and enable automatically generating client libraries (regardless of the client’s stack).
There’s a good opportunity for a library with all of these value propositions.
There are (to my knowledge) no tools available providing all of those in one package. But I’ve built one (on top of io-ts), so I know it can be done. It was proprietary, built for an employer, so I can’t share it.
But! I learned a lot in the process, and I’m building a new one: zero dependencies, and a fundamentally different design/approach. I’ll definitely be sharing it with HN when it’s ready.
Edit to add: another thing is that many of the tools that provide some subset of those goals (especially static types <-> JSON Schema tools) require out-of-band code/file generation (e.g. supply a JSON Schema file as input, get a .d.ts file as output, or vice versa). IMO this is additional DX friction and more error-prone than the runtime/compile-time parity of the common TS libraries. So another goal for my project is that the default approach is to define one interface and have APIs available to access all the other representations in the same flow, with file generation a secondary concern if there’s a need/desire.
You don't really need to be bought into fp-ts to use io-ts. Just create a helper function that morphs the native decode output, `Either<E, A>`, to something like `E | A` or `A | null`.
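Something along these lines, for example (a sketch using `isRight` from fp-ts, which io-ts requires anyway; `decodeOrNull` is just a name I made up):

```typescript
import * as t from "io-ts";
import { isRight } from "fp-ts/Either";

// Collapse io-ts's Either<Errors, A> result into A | null so callers
// never have to touch fp-ts directly.
function decodeOrNull<A>(codec: t.Type<A>, input: unknown): A | null {
  const result = codec.decode(input);
  return isRight(result) ? result.right : null;
}

const User = t.type({ id: t.number, name: t.string });
const user = decodeOrNull(User, JSON.parse('{"id": 1, "name": "Ada"}'));
// user is typed as { id: number; name: string } | null
```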
ML-assisted query optimization is super interesting to me (I've had to fight the uphill battle against anomalies in Postgres stats and estimations[1]), but I'd also love to see more optimization and auto-tuning across the "stack":
- Use column-oriented storage for columns that are frequently scanned in analytical queries
- Automated index management, based on user-provided boundaries for write throughput, disk footprint, and query latency
- Calibrated optimism for heap updates. AFAIK, current DBs either optimistically assume transactions are more likely to commit than roll back, so they update the heap in place and write the old value elsewhere in case of rollback, or they pessimistically write all updates as new, versioned tuples and let MVCC (and garbage collection) handle the rest. It would be interesting to see the performance improvement that could come from modeling commit/rollback outcomes and optimizing accordingly.
- Using variable-sized pages to reduce overhead for older pages, as a sort of light cold storage
Anyone know of any DBs that automatically tune these aspects?
I wish Terraform still allowed destroy-time provisioners to access variables; it seems that went away due to some refactoring, and it isn't coming back.
Yes, I know. I meant for some resources (the code for creating them was added, but not for destroying them). I had to write a custom provider to fix it temporarily.
Needs some sort of demo or screenshot before I would consider signing up. Even better, let people lurk (read without creating an account) before joining.